Journalists are treating self-driving vehicles as a given, and pundits are already chin-scratching about the social implications. Whoa, Nelly! Put down that Kool-Aid™.
Here's a hypothetical. Big Mack the Robot Truck is barreling down the Nevada freeway. Bubba the driver is curled up asleep behind the seats.
Minutes earlier, a station wagon carrying a family of seven lost control on a soft shoulder, flipped over, and crashed in an arroyo several hundred yards from the highway. Everyone in the car was killed except a baby, who was thrown out the window, landed in a thicket of Johnson grass, and is now crawling back toward the road.
If Bubba had been driving, he might have spotted the smoke from the crash and gone on alert for possible trouble on the freeway ahead. Big Mack's sensors note the plume of smoke, but its response algorithms "disregard it" because it is well off the highway. Bubba might have recognized that little white smudge up ahead as a crawling human; Big Mack "disregards it" as blowing trash or a desert rodent that poses no harm to the truck. Thus, braking procedures are not implemented, and...
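The failure mode here is easy to caricature in code. Below is a deliberately toy sketch of the kind of size-and-location triage the scenario imagines; every function name and threshold is invented for illustration, and none of it is drawn from any real vehicle's software:

```python
# Hypothetical triage logic for a detected roadside object.
# All names and thresholds are made up for illustration only.

def classify_detection(size_m: float, distance_from_road_m: float) -> str:
    """Crude rule-based triage: small or far-off objects are ignored."""
    if distance_from_road_m > 50.0:
        # "Well off the highway" -- like the smoke plume -- so disregard.
        return "disregard"
    if size_m < 0.8:
        # Smaller than a typical road hazard: filed under debris or rodent.
        return "disregard"
    # Large object near the lane: initiate braking.
    return "brake"

# A crawling infant: roughly 0.6 m long, now at the road's edge.
print(classify_detection(size_m=0.6, distance_from_road_m=1.0))
```

The point of the sketch is that any hard threshold, however sensible on average, has a boundary case it gets wrong; here the small "smudge" falls below the size cutoff and the truck never brakes.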
The grandparents of the deceased infant sue the driver, the manufacturer, and the trucking company. A jury, happy to punish a "mere robot" and a driver dumb enough to trust one, awards $20 million in damages.
The parties hoping to profit from driverless vehicles will have to factor in the business cost of one or two "freak accidents" like the one above. Is the ethical and PR risk worth it? Or they'll have to bribe legislators into passing laws that limit liability for robot-vehicle accidents; risky again, PR-wise, if discovered.
Is the above scenario plausible? Are robotic detection-and-judgment algorithms "smart" enough to handle every crazy situation at or near the speed limit? So many articles of the "driverless cars are here" persuasion seem to assume so.