Guest “don’t you just?” by David Middleton
Before we get to self-driving vehicular manslaughter, don’t you just love futurists?
Sep 13, 2018
Driverless Cars Will Dramatically Change Where And How We Live
Jim Morrison
Driverless cars aren’t coming. They’re already here. Much of the technology has been around for decades and many features are available on new cars today. Experts agree fully autonomous vehicles (AVs) will soon be ubiquitous, and they will significantly disrupt many industries and change where and how we live. The only questions are: When? And how?
Nearly all of the necessary technology had been developed and was ready to go in the 1990s, according to Jason Schreiber, senior principal at Stantec Urban Places.
“We did get a lot of backbone planning done for connected vehicles,” Schreiber said. “Those protocols exist and there are cities that are ready for them. The technology just wasn’t scalable to the point that it was affordable, until now.”
[…]
Consumers will benefit
A 2017 report from RethinkX claims AVs will save the average family $5,600 every year. How? Families won’t pay for cars, insurance, sales tax, excise tax, fuel or repairs. They’ll just pay per trip. In addition, despite the public perception that autonomous vehicles will be dangerous, they are widely regarded as much, much safer than cars driven by humans.
There were 40,100 highway deaths in the U.S. last year, and the three biggest causes were alcohol, speeding and distracted driving, according to the National Safety Council.
[…]
William F. Lyons Jr., president and CEO of Fort Hill Companies, a Boston-based architecture and infrastructure design firm, said AVs don’t drink or use drugs, speed or get distracted.
“AVs have traveled 130 million vehicle miles during testing with 2 deaths,” Lyons said. “And they’re constantly improving the technology. There is no question they will be safer than human drivers.”
[…]
Forbes
The dude’s name really was Jim Morrison.
“AVs don’t drink or use drugs, speed or get distracted.”
Uber self-driving car involved in fatal crash couldn’t detect jaywalkers
The system had several serious software flaws, the NTSB said.
Steve Dent, @stevetdent
11.06.19 in Transportation
Uber’s self-driving car that struck and killed a pedestrian in March 2018 had serious software flaws, including the inability to recognize jaywalkers, according to the NTSB. The US safety agency said that Uber’s software failed to recognize the 49-year-old victim, Elaine Herzberg, as a pedestrian crossing the street. It didn’t calculate that it could potentially collide with her until 1.2 seconds before impact, at which point it was too late to brake.
More surprisingly, the NTSB said Uber’s system design “did not include a consideration for jaywalking pedestrians.” On top of that, the car initiated a one-second braking delay so that the vehicle could calculate an alternative path or let the safety driver take control. (Uber has since eliminated that function in a software update.)
[…]
engadget
Sounds like the AV got distracted. AVs don’t deal with the unexpected very well… And they’re easy prey for aggressive drivers…
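Some back-of-the-envelope kinematics shows why 1.2 seconds was too late, and what the one-second delay cost. The speed and braking figures below are assumptions for illustration (the excerpt doesn’t give them), not the NTSB’s numbers:

```python
# Rough kinematics of the crash, using ASSUMED figures: ~40 mph and
# hard braking at 7 m/s^2. These are illustrative, not the NTSB's values.
v = 17.9                       # speed in m/s (~40 mph), assumed
a = 7.0                        # full braking deceleration in m/s^2, assumed
delay = 1.0                    # the built-in "do nothing" second

dist_available = v * 1.2       # metres to impact when the system reacted
dist_to_stop = v**2 / (2 * a)  # distance needed to brake to a full stop

print(round(dist_available, 1))  # ~21.5 m available
print(round(dist_to_stop, 1))    # ~22.9 m needed: even instant braking fails
print(round(v * delay, 1))       # ~17.9 m eaten by the one-second delay alone
```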
Intel Says Aggressive A-Hole Self-Driving Cars Could Help Improve Traffic Safety
by Shane McGlaun — Thursday, May 02, 2019
All drivers have been there: someone whips in front of you from a merge lane into a gap barely large enough for their car, and you want to scream. Intel and its subsidiary Mobileye think that one way to solve some of the problems that self-driving cars have today is by making them much more aggressive, essentially turning them into a-holes that will shoot into that small gap in traffic with precision. One of the challenges for autonomous cars right now is that the AI inside makes them act like your (stereotypical) grandmother.
[…]
Intel wants to cure that nervous behavior using something it calls the Responsibility-Sensitive Safety (RSS) program. RSS is meant to help the autonomous vehicle act like an assertive human driver. According to Intel, the more assertive autonomous cars will make for safer and more freely-flowing traffic.
The challenge with the AI in self-driving cars today is that they only make decisions when the calculations the vehicles constantly run show crash probability is extremely low. That cautiousness equates to missed opportunities to make turns when a gap presents itself and leads to frustrated passengers. In the RSS system, the AI is deterministic, not probabilistic. Being deterministic gives the autonomous vehicle a playbook of sorts that provides rules defining what’s safe and unsafe in a driving situation.
This rulebook will allow the AI inside the vehicle to make more aggressive maneuvers right up to the line that separates safe and unsafe.
Hot Hardware
[…]
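For what it’s worth, the “playbook” Intel is describing isn’t mysterious: Mobileye’s published RSS papers reduce to closed-form safe-distance rules. Here is a rough sketch of the longitudinal check, using the published formula with illustrative parameter values (not Intel’s actual tuning):

```python
# Sketch of an RSS-style safe longitudinal distance check. The formula is
# from Mobileye's RSS paper; the parameter values here are illustrative.

def rss_min_gap(v_rear, v_front, rho=0.5, a_accel=3.0, b_min=4.0, b_max=8.0):
    """Minimum safe gap (m) for a car following a lead car.

    v_rear, v_front: speeds in m/s; rho: response time (s);
    a_accel: worst-case acceleration during the response time;
    b_min: braking the rear car commits to; b_max: hardest braking
    the front car might apply (all in m/s^2).
    """
    v_resp = v_rear + rho * a_accel            # rear car's speed after rho
    gap = (v_rear * rho + 0.5 * a_accel * rho**2
           + v_resp**2 / (2 * b_min)           # rear car's stopping distance
           - v_front**2 / (2 * b_max))         # front car's stopping distance
    return max(gap, 0.0)

# The deterministic "rule": take the gap if and only if it exceeds the bound.
gap_m = 70.0                                   # observed gap in metres
if gap_m > rss_min_gap(v_rear=25.0, v_front=25.0):
    print("merge")                             # assertive, but within the rule
else:
    print("wait")
```

With both cars at 25 m/s the bound works out to about 62 m, so a 70 m gap gets taken; a probabilistic planner might sit and wait.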
AI A-hole AVs… A sort of Skynet Terminator AV?
“Yeah, that’s the ticket!”
[A]utonomous vehicles … are widely regarded as much, much safer than cars driven by humans.
“AVs have traveled 130 million vehicle miles during testing with 2 deaths,” Lyons said.
Forbes
That’s 1.54 per 100 million vehicle miles traveled.
In 2018, the fatality rate per 100 million vehicle miles traveled – a figure that factors out increases or decreases in total driving – was 1.14. That was down from 1.16 in 2017 but tied for the fourth highest of the previous 10 years.
USA Today
1.54 is 35% more than 1.14.
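The arithmetic, for anyone who wants to check it:

```python
# Back-of-the-envelope check of the rates quoted above
av_deaths, av_miles = 2, 130e6                # Lyons' AV testing figures
av_rate = av_deaths / av_miles * 100e6        # per 100 million vehicle miles
human_rate = 1.14                             # NHTSA's 2018 figure

print(round(av_rate, 2))                        # 1.54
print(round((av_rate / human_rate - 1) * 100))  # 35 (percent higher)
```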
About 1/3 of US traffic fatalities are due to drunk driving. Rather than putting Skynet Terminator AVs on the road, maybe the better pathway is to put a breathalyzer in every vehicle. AI would save more lives by recognizing drunk drivers before they can start the engine than by failing to recognize jaywalkers because they aren’t supposed to be there.
Maybe the futurists should have paid better attention to Star Trek...
Note on comments
Before anyone comments that the article didn’t say “Self-Driving Uber Killed Pedestrian for Jaywalking,” please Google the word “hyperbole” first.
Deterministic software? Versus probabilistic software? A playbook of sorts?
I worked several decades in software development and data management, but that is gobbledegook to me.
But I may have missed something. So, the question is: was the Boeing 737 MAX controlled by deterministic code, by probabilistic code, or by a playbook? I guess a great many people would want to know.
I thought the problem was that Boeing didn’t inform its customers clearly, and thus the pilots were not trained correctly for the new system. Management failure, not software failure.
It was a software problem compounded by management failure.
Thank you Dave.
To all WUWT: Please read the report that is linked in the post. It will take you about 15 minutes. The NTSB writes excellent, easy to understand reports (I read many of them). Then make your comments.
The NTSB report is very discouraging. I have been pretty “skeptical” with regards to when fully autonomous vehicles will be ready for general use. The report leaves me more pessimistic. The vehicle involved had the technical capability to see and correctly predict the path of the pedestrian nearly 5 seconds prior to the collision.
The system had several clear software flaws. When an object is detected by one of the sensor systems (lidar, radar, camera), it is classified… and a path prediction is made if previous history is available. The huge problem was that if the object classification was changed (due to new object information), then any previous motion history was disregarded. Based on the different classifications made by the different sensors (vehicle, pedestrian, bicycle, other), the system spent the 5 seconds before the collision changing the classification, and hence always disregarding the available path history. Once the system did classify the object as a pedestrian, and determined that it was in the path of the vehicle, the worst software “feature” kicked in… it did nothing for 1 second. Then it slowed down… slowly. Crash. Dead.
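To make the flaw concrete, here is a minimal sketch of the behavior the NTSB describes; the structure and names are hypothetical, not Uber’s actual code:

```python
# Illustrative sketch: reclassifying a tracked object discards its motion
# history, so no path (and no collision) can ever be predicted for it.

class TrackedObject:
    def __init__(self, label):
        self.label = label
        self.history = []            # past positions, used for path prediction

    def update(self, label, position):
        if label != self.label:      # a sensor reclassified the object...
            self.history = []        # ...so all prior motion history is lost
            self.label = label
        self.history.append(position)

    def predicted_path(self):
        # Fewer than two points means no velocity estimate and no prediction.
        return self.history[-2:] if len(self.history) >= 2 else None

obj = TrackedObject("vehicle")
for label, pos in [("vehicle", (0, 0)), ("other", (1, 0)),
                   ("bicycle", (2, 0)), ("pedestrian", (3, 0))]:
    obj.update(label, pos)
    print(label, obj.predicted_path())  # every relabel resets it: always None
```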
The system was clearly designed for testing only. The in-vehicle operator was supposed to take command when bad things were happening or about to happen. That was the purpose of the 1 second delay (also to minimize false alarms). Clearly, the vehicle operator did not help in this case. They could also see the pedestrian, but took no action, as the vehicle gave them no clues that it was going to run into the pedestrian.
The huge problem I have with all this is that these were not complex problems. The system had the technical capability to “see”, predict and take the actions required to avoid the crash. The issue is that the developers were depending upon the intervention of the vehicle operator to avoid this kind of problem. Now we get to the central issue: the success of autonomous vehicles is completely dependent upon the relationship between the vehicle and humans. This test shows that the developers completely failed to understand this complex relationship. Autonomous vehicles will do just fine with other autonomous vehicles and a relatively static environment (known routes, good weather, etc). Add in the chaos of human behavior, the chaos of weather, and the chaos of everything else (equipment failure, animals, sun reflection, hidden potholes, etc, etc, etc) and you have an extremely difficult problem to solve. Note how long it took for the semi-autonomous systems in aircraft to be leveraged to their full advantage.
Aside from slow-speed vehicles (25 mph) along known routes, in good weather, it will be a very long time before any of us are in an autonomous vehicle in real-world conditions. And once we are there, take a look at degrading operator “skill”… again, review how commercial aircraft have handled vehicle autonomy (hint… operator assistance, NOT complete autonomy… coupled with many hours in the simulator).
How many vehicle operators of “autonomous” vehicles will spend time in vehicle simulators? Answer: none.
Semi autonomous vehicles are here (I have one). I love it. It will not be driving anytime soon.
Ethan Brand
Actually, the computer on an Airbus/B737 MAX fully controls some or all of the actuators (the fact that you can turn off the motorized trim altogether doesn’t change that).
You can’t override it. And it has caused some incidents and accidents.
Modern airplanes are not the B747 Classic, where the autopilot was simply on or off. What people call the autopilot being “off” on an Airbus is closer to what people used to think of as the autopilot being on: more is controlled automatically with the “autopilot turned off” than was with it “on” in previous generations.
The word autopilot should be avoided on these computer controlled planes.
I’ve driven a car with a breathalyzer before. Putting them in every car is a bad idea.
It’s not just that you have to blow to start the car, the machine randomly goes off while you’re driving to ensure the driver is still sober… Otherwise, a drunk could just have a sober friend blow to start the car and take off, or they could blow before they start drinking and just leave the car running until they’re done.
At any rate, if the “breathalyzer in every car” works the same way, I guarantee you’ll have a dramatic increase in distracted driving crashes because that machine going off while you’re driving down the road is VERY distracting.
The machine gives you some time to get off the road, but depending on the situation, it may not be easy to do so. Either way, most people are not going to pull over multiple times on a trip to blow into their breathalyzer and are going to do it on the move. Distracted driving is easily as dangerous as drunk driving. I don’t think proponents of this technology have put nearly enough thought into the unintended consequences that would follow.
Yes, I don’t see any reason to have it for the significant majority of people who have never driven while intoxicated.
David, Wow! I always had a hunch Jim Morrison wasn’t really dead…
Remember the phone booths, and the stench and trash often found when entering? That is what a driverless car will be like. Why the urge to get people out of their privately owned cars on the pretense of saving money? How about the convenience of not having to wait for the car to arrive: getting into your own car and going where you want, with no one having to know but you.
According to NHTSA, 1/3 of highway deaths are caused by drunk drivers. Possibly lipstick, coffee, cell phones, sightseeing, daydreaming and sleep should be banned from the driver’s seat. All a result of human behavior.
Just try separating women from their lipstick. Talk about trouble.
All Ubers and public transportation can be shut down by strike or government. Who wants that?
Headline of the Morrison article: “Driverless Cars Will Dramatically Change Where And How We Live”
My modestly changed version: “Driverless Cars Will Dramatically Change Where And How We Live and Die.”
So, who’s liable for the death? Uber? The software vendor? The pedestrian? The vehicle owner (if not Uber)? The vehicle manufacturer? Etc. Who pays? (Can insurance be had to protect a vehicle owner from liability claims?)
The legalities (criminal & civil/personal, plus jurisdictional issues) of self-driving vehicles will be insurmountable.
You obviously never studied law. The entity who can pay out the most money is always liable.
What if it were not a jaywalker, but a child that ran into the street? Is it okay to just schmuck the kid? One has to wonder if the goal is actually safety.
As for Star Trek, the entire series featured computers that routinely killed or tried to kill humans. A lot of scifi does. Maybe it’s not so fictional…
(Speaking of computers, this may post twice because the first email is not real and the comment went into moderation. The computer just follows the rules, I know.)
Children can/should be taught not to run into the road. It is no different with a human driver: a child that runs in front of a moving car is in mortal danger.
Suddenly, I’m in moderation?????
One of my comments was shown as “Awaiting moderation,” but doesn’t seem to have been posted. And other of my comments I thought were up, but now I don’t see them. I’m not sure what’s going on…
So your headline is hyperbole. Just because the word is in the dictionary… so what? Hyperbole can also be inflammatory and false, which your headline is. A bad habit that undermines the entire article.
There was an interesting story in Car & Driver magazine this month about autonomous vehicles, and some of the problems with them. Yes, they can be (and have been) hacked. They can also be (and have been) confused by painting lines on the road with a spray can or even placing stickers on the road. They don’t deal well with weather, including rain and dust, and their fallback when things go awry is to pull off to the side of the road and stop. It’s pretty obvious to me that the technology just isn’t ready for prime time.
Human drivers can be (and often are) confused by normal lines painted on the road; in construction zones or during rain it is very easy for humans to misinterpret lines.
There are no autonomous vehicles on the road. All such vehicles have a human driver to keep them from crashing. Apparently this happens quite often. We are a long way from fully autonomous vehicles.
https://www.computerworld.com/article/3021908/googles-driverless-cars-still-need-a-human-driver.html
https://www.bloomberg.com/news/articles/2019-02-13/apple-s-autonomous-cars-need-much-more-human-help-than-rivals
AI cannot duplicate human decision making and it never will. AI is good at crunching numbers and recognizing preprogrammed situations. Accidents happen because of exceptions to rules. Humans can intervene where AI will either do something stupid or just crash.
What you have described is of course software, not AI. The fact is AI has become a buzzword used to mean “software controlled”. Real AI is not even on the horizon.
Agreed. Strong AI is a chimera.
The term should be SI: Simulated Intelligence.
Absolutely J in C. I was going to say the same thing.
I would recommend this article: https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/blogger-behind-ai-weirdness-thinks-todays-ai-is-dumb-and-dangerous.amp.html
It is one of the few which actually talks about the fundamental limitations of AI.
The point is simple: AI can work in highly constrained situations, but it works badly in unconstrained ones because AI (actually, machine learning) doesn’t understand context very well at all.
The problem with the Uber programming wasn’t as simple as “wasn’t programmed to recognize jaywalkers”.
Rather, the problem is that the self-driving algorithms make all manner of shortcuts in order to constrain the training needs and inputs, in order to derive a functional algorithm before the next millennium arrives.
The unfortunate lady was a corner case: not a human per se, not a recognized object, no algo to handle movement of this unknown object type, and so the self-driving system just ignored the input.
How these systems will work under active attack – pranksters or criminals monkeying with different types of signs, shapes, colors and movements – doesn’t bear thinking about.
Pranksters changing signs would cause problems for human drivers as well.
Yes and no. Outright removal or replacement can be a problem for both, but selective additions can seriously throw off machine vision systems.
See this: https://www.schneier.com/blog/archives/2017/08/confusing_self-.html
where researchers showed how adding a few strips would make a stop sign look like a 45 mph speed limit sign – something that would never fool a human.
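The underlying weakness is easy to demonstrate on a toy model. This sketch is a bare linear classifier, nothing like a production vision stack, but it shows why a tiny, targeted perturbation can flip a label while staying nearly invisible:

```python
import numpy as np

# Toy linear "sign classifier": score > 0 -> STOP, score < 0 -> 45 mph.
d = 10_000                              # pretend it's a 100x100-pixel image
rng = np.random.default_rng(0)
w = rng.normal(size=d)                  # stand-in for learned weights

x = 5.0 * w / np.linalg.norm(w)         # an image solidly scored as STOP
print(w @ x)                            # ~5.0: STOP

# FGSM-style step: push every pixel a tiny amount against the gradient.
eps = 0.001
x_adv = x - eps * np.sign(w)

print(np.abs(x_adv - x).max())          # 0.001 per pixel: imperceptible
print(w @ x_adv)                        # ~-3.0: now scored as 45 mph
```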
Stop signs hidden by foliage would be a yuge issue as well.
There will be a limited use for AVs some day. It will be seen that they can be assigned by the courts to repeat offenders of reckless driving and repeat drunk drivers. See a need, fill a need. The repeat offenders may be motivated to use them since they can blame the software vendor if something happens. It also makes you wonder how the insurance scam teams will adapt to AVs.
It is a step toward “transportation as a service”. Today, you can get an Uber ride in a city usually in under two minutes. You would not have to own a car any more.
That’s been true of Taxi’s for decades prior to the invention of Uber. And yet people still own cars in great numbers. And that’s the cities. Out in the countryside things are very different, Taxi’s/ubers/public transit just aren’t as convenient nor as quickly available as they are in the city. While “transportation as a service” might serve a decent sized niche in the big cities, out in the rural heartland, that niche is much smaller.
Please. It’s “taxis” not “taxi’s”, unless you’re referring to something belonging to a taxi. Just sayin’.
Grammer Nazis attacking over typos can kiss my @ur momisugly$$
(and BTW, it’s saying. sayin’ makes you look uneducated. see two can play your game).
It does not scale. You can get a rideshare in under 2 minutes because ride-sharing services account for a small percentage of trips. When all you have are sharing services, wait times will increase. Imagine people lining up in Chicago’s financial district at 4:00 pm. Do you think your wait will be two minutes?
According to NHTSA, 1.54 deaths per 100 million miles corresponds to the US rate around the year 2000. Not a bad start.
Was driving around yesterday here in Barrie, Ontario, and with all the snow coming down it was impossible to see the lane markings. Slippery, so acceleration and stopping were compromised, and of course lots of traffic. Does the AI just quit and tell the driver to take over? What about normal driving and the speed the car travels? Does it hold to speed limits, or wait for input from the driver to go faster or slower, say to get by a merging transport truck, or to slow down, be nice and let the guy in? What about nagging back seat drivers? 😉
What about nagging back seat drivers?
AI will have that covered. Just bring your Alexa enabled device into a self-driving car for an AI back seat driving experience. 😉
When I was a college student working summers, I was driving a panel van down a narrow city street lined with parked cars and houses with small front yards. A hundred feet or so ahead I saw two children playing with a ball. The ball suddenly rolled onto the street between two parked cars fifty or so feet ahead of me. Knowing that where there is a rolling ball there will shortly be someone chasing it, I braked the van and came to a stop two or three feet from where a child suddenly darted out. When automatic driving machines can do that, I will believe.
“All drivers have been there before where someone whips in front of you from a merge lane into a gap barely large enough for their car, and you want to scream. ”
Ummm… Merge lanes require both drivers to ‘work together’ to get the merging vehicle into the lane. This theoretical angry driver who wants to scream should have slowed down slightly to allow a larger gap so the merging car could fit in better. Maybe the author didn’t read his driver’s training manual and is confused between a yield and a merge.
I’ve been a “car-nut” since around 1947 and at age 11 there were three cars I saw that did it.
MG TC, Lincoln “Continental” and the 1938 Graham Hollywood.
These are still appealing today.
Then I had a series of sports cars as daily drivers. Raced and rallied them, and eventually (beginning in 2002) had sports cars as “collector” cars.
I’m convinced that the promotion of electric cars and now self-driving cars has been by those who hate cars.
The anti-car movement includes urban socialists headed by, well,…the only reason why there are no longer “village idiots” is that they have all become “city planners”.
Sheesh!
Statistically invalid to compare 2 in 130M to 40,100 in 45,714M. Also, the former were in “test” mode, the latter in real-world driving.
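You can put a number on just how invalid: treating the 2 deaths as a Poisson count, the exact 95% interval on the AV rate is enormous. A quick check (standard Garwood interval, assuming scipy is available):

```python
from scipy.stats import chi2

# Exact (Garwood) 95% Poisson interval for a rate estimated from 2 deaths
deaths, miles = 2, 130e6
lo = chi2.ppf(0.025, 2 * deaths) / 2            # ~0.24 events
hi = chi2.ppf(0.975, 2 * (deaths + 1)) / 2      # ~7.22 events

scale = 100e6 / miles                           # convert to per-100M-miles
print(lo * scale, hi * scale)                   # ~0.19 to ~5.6 deaths/100M mi
```

An interval spanning roughly 0.19 to 5.6 easily contains the human rate of 1.14, so 130 million test miles prove nothing either way.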
Regardless of how complex the software might be, driverless cars will always be more dangerous than human-driven cars, and driverless cars should not be allowed on public roads.
There are many possible hazards that the software on a driverless car may not recognize, that human drivers learn to avoid through experience.
For example, what about stop signs that can be obscured by vegetation, which a driverless car might ignore, but a human driver who has been there before knows about it and stops?
What about traffic lights in an intersection with bright sunlight behind them? Could a driverless car detect the color (wavelength) of the relatively weak light from the traffic light in the much brighter sunlight? A human driver would have the instinct to use the sun visor to screen out the sunlight to make out the traffic light, and move his/her head to see the traffic light more clearly.
What about road construction projects, where temporary orange signs and cones are used to direct traffic around them? Would a driverless car know how to follow directions around the project, or would it follow its GPS and plow right into construction workers or into a ditch across the road?
What about driving in ice or snow? A human driver can feel the car slipping, and make corrections as necessary (particularly with years of experience), but how does a driverless car know that it is skidding, and what corrections to make?
This recent accident with the jaywalker illustrates another point–in New York City, many people will cross streets even after the pedestrian crossing light is red, betting on “safety in numbers”, that a driver will not deliberately run over a large group of people just because the light just turned green. Would a driverless car conclude that if the light is green, it should plow ahead and expect the pedestrians to get out of the way?
There are far too many dangerous situations that a driverless car can’t handle, which human drivers routinely deal with and avoid accidents. Driverless cars should not be allowed on public roads, period.
Would a driverless car know how to follow directions around the project, or would it follow its GPS and plow right into construction workers or into a ditch across the road?
Driverless cars would need to be smarter than GPSes. I’ll never forget when a relative of mine was driving on the upper level of a multi-level section of road (in a large metropolitan area) and the GPS was telling them to turn, only there was literally no place to turn on the road they were on – just concrete barriers along the sides of the road. The road below the one they were on, however, was where the turn was.
Watch a 2005 DARPA Grand Challenge video:
Death Race 2019!
How many points for a jaywalker?
This is a test of an idea to solve the problem of insufficiently funded public pensions:
Step 1: Effective immediately, allow retirees to cross on a red light.
Step 2: In January 2022 it becomes mandatory.
Ah yes. I remember experts predicting back in the ’60s that “soon” computers would be developed with the same abilities as the human brain.
“soon” in this context means sometime after the road systems have been re-engineered to support AVs. And because 100% of roads will never be upgraded, AVs will be constantly entering and exiting controlled environments where it is safe for them to operate. So “semi-autonomous” or “part-time autonomous” would be a better description.
AVs will have to share the road with non-automated entities such as pedestrians and bicyclists, who in my experience are predictably unpredictable.
It’s not too far-fetched to claim that AVs do a better job than a good percentage of human drivers, but I think a better solution is to impose a higher standard of driver training.
“soon” for ubiquitous AVs is well after Tesla has gone bankrupt, but sometime before we get practical fusion energy.
How many of those miles were on a dedicated test track (i.e. ideal conditions)? I bet if you look at the miles driven on test tracks run by automobile makers, you find an even lower fatality/mile ratio.
In our post-modern world, personal responsibility no longer exists. In fact, to the post-moderns, it never existed.
Stop blaming me. It makes me sad.