In reply to the discussion: Michigan may soon allow self-driving cars on the road
jessewalkeratpdx
Let me say I am not against autonomous vehicles; based on the available empirical evidence, they will probably lower accident rates significantly and improve our lives in major ways. No; I am after something else.
My first point is that, at least in computing, we have well-developed models for reasoning about both the opportunities and the limitations of a technology, and we have to accept both to maximize the benefits and minimize the problems that can stem from its adoption. That theory tells us there will always be unanticipated behavior in any sufficiently complex system -- it is inherent in the complexity itself, not just the result of sloth, inattention, inexperience, or negligence. Airplanes still crash; trains still derail; nuclear power plants still melt down, even with the most careful procedures in place.
My second point is that this implies there will always be some risk of harm to the human beings using a new technology, so our society needs a discussion to reach a consensus about the cost-versus-benefit trade-offs. We have obviously reached something of a consensus for human-driven vehicles: we accept 30,000+ deaths each year from automobile accidents, along with far higher injury counts and enormous property damage, to reap the benefits this form of mobility gives us.
As an example, once vehicles become fully autonomous, it seems unlikely that passengers can be held responsible for the accidents that will inevitably occur. Does that mean owners of these vehicles won't need collision insurance -- that any liability falls on the manufacturer? Most of us would probably like that, but what about a situation no one could reasonably have anticipated? Maybe the technology won't be economically viable unless liability is spread more broadly across the ecosystem. We can anticipate that manufacturers will prefer a model built around shedding liability for collisions onto that wider ecosystem, and will squeal like greased pigs unless they get their way.
As a second example, we can expect the firmware running in these vehicles to be vulnerable to malware (firmware in existing vehicles already suffers from this), because the engineering approach adopted across the automotive industry is closer to commercial software development than to avionics. If we want to lessen the malware threat, we probably need the industry to change its basic engineering approach, and that will raise costs. Would we accept higher costs to reduce the risk of ransomware interrupting our cross-country trip to demand 100 Bitcoins or face death? Probably. But how much additional cost are we willing to accept to substantially reduce malware attacks against our vehicles?
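To make "changing the engineering approach" slightly more concrete, here is a minimal sketch (in Python, with hypothetical file names and a made-up expected digest) of the kind of integrity gate a more disciplined update process might insist on before any firmware image is installed. This is only an illustration of the idea, not how any manufacturer actually does it; real automotive or avionics-grade update systems rely on hardware-backed public-key signatures rather than a bare digest comparison.

```python
# Hypothetical sketch: refuse to flash a firmware image unless its digest
# matches a manufacturer-published value. Production systems would verify a
# public-key signature in hardware; this simplified check only illustrates
# the kind of gate a more rigorous engineering process would add.
import hashlib
import hmac
import sys


def firmware_digest(path: str) -> str:
    """Return the SHA-256 hex digest of the firmware image at `path`."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def ok_to_flash(path: str, expected_digest: str) -> bool:
    """True only if the image's digest matches the expected value
    (compared in constant time to avoid timing leaks)."""
    return hmac.compare_digest(firmware_digest(path), expected_digest)


if __name__ == "__main__":
    # Usage (hypothetical): python check_update.py ecu_image.bin <expected-sha256-hex>
    image, expected = sys.argv[1], sys.argv[2]
    if ok_to_flash(image, expected):
        print("digest matches; proceeding with update")
    else:
        print("digest mismatch; refusing to install")
        sys.exit(1)
```

Even a check this simple adds process overhead (publishing digests, managing keys in the real signed version, testing the failure path), which is exactly the kind of cost trade-off the question above is asking about.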
In our culture we tend to think about innovation from either a Utopian or a Luddite perspective. Manufacturers usually embrace the Utopian position, to maximize their markets, while the Luddites tend to react against social injustices arising from technology whose advocates failed to give even cursory consideration to the broader implications. I hope we can avoid both extremes.