Self-Driving Uber Killed Pedestrian for Jaywalking

Guest “don’t you just?” by David Middleton

Before we get to self-driving vehicular manslaughter, don’t you just love futurists?

Sep 13, 2018
Driverless Cars Will Dramatically Change Where And How We Live

Jim Morrison

Driverless cars aren’t coming. They’re already here. Much of the technology has been around for decades and many features are available on new cars today. Experts agree fully autonomous vehicles (AVs) will soon be ubiquitous and they will significantly disrupt many industries and change where and how we live. The only questions are: When? And how?

Nearly all of the necessary technology had been developed and was ready to go in the 1990s, according to Jason Schreiber, senior principal at Stantec Urban Places.

“We did get a lot of backbone planning done for connected vehicles,” Schreiber said. “Those protocols exist and there are cities that are ready for them. The technology just wasn’t scalable to the point that it was affordable, until now.”

[…]

Consumers will benefit
A 2017 report from RethinkX claims AVs will save the average family $5,600 every year. How? Families won’t pay for cars, insurance, sales tax, excise tax, fuel or repairs. They’ll just pay per trip.

In addition to that, in spite of the public perception that autonomous vehicles will be dangerous, they are widely regarded as much, much safer than cars driven by humans.

There were 40,100 highway deaths in the U.S. last year, and the three biggest causes were alcohol, speeding and distracted driving, according to the National Safety Council.

[…]

William F. Lyons Jr., president and CEO of Fort Hill Companies, a Boston-based architecture and infrastructure design firm, said AVs don’t drink or use drugs, speed or get distracted.

“AVs have traveled 130 million vehicle miles during testing with 2 deaths,” Lyons said. “And they’re constantly improving the technology. There is no question they will be safer than human drivers.”

[…]

Forbes

The dude’s name really was Jim Morrison.

“AVs don’t drink or use drugs, speed or get distracted.”

Uber self-driving car involved in fatal crash couldn’t detect jaywalkers
The system had several serious software flaws, the NTSB said.

Steve Dent, @stevetdent
11.06.19 in Transportation

Uber’s self-driving car that struck and killed a pedestrian in March 2018 had serious software flaws, including the inability to recognize jaywalkers, according to the NTSB. The US safety agency said that Uber’s software failed to recognize the 49-year-old victim, Elaine Herzberg, as a pedestrian crossing the street. It didn’t calculate that it could potentially collide with her until 1.2 seconds before impact, at which point it was too late to brake.

More surprisingly, the NTSB said Uber’s system design “did not include a consideration for jaywalking pedestrians.” On top of that, the car initiated a one second braking delay so that the vehicle could calculate an alternative path or let the safety driver take control. (Uber has since eliminated that function in a software update.)

[…]

engadget

Sounds like the AV got distracted. AVs don’t deal with the unexpected very well… And they’re easy prey for aggressive drivers…

Intel Says Aggressive A-Hole Self-Driving Cars Could Help Improve Traffic Safety

by Shane McGlaun — Thursday, May 02, 2019

All drivers have been there: someone whips in front of you from a merge lane into a gap barely large enough for their car, and you want to scream. Intel and its subsidiary Mobileye think that one way to solve some of the problems that self-driving cars have today is to make them much more aggressive, essentially turning them into a-holes that will shoot into a small gap in traffic with precision. One of the challenges for autonomous cars right now is that the AI inside makes them act like your (stereotypical) Grandmother.

[…]

Intel wants to cure that nervous behavior using something it calls the Responsibility-Sensitive Safety (RSS) program. RSS is meant to help the autonomous vehicle act like an assertive human driver. According to Intel, the more assertive autonomous cars will make for safer and more freely-flowing traffic.

The challenge with the AI in self-driving cars today is that they only make decisions when the calculations the vehicles constantly run show crash probability is extremely low. That cautiousness equates to missed opportunities to make turns when a gap presents itself and leads to frustrated passengers. In the RSS system, the AI is deterministic, not probabilistic. Being deterministic gives the autonomous vehicle a playbook of sorts, with rules defining what’s safe and unsafe in a driving situation.

This rulebook will allow the AI inside the vehicle to make more aggressive maneuvers right up to the line that separates safe and unsafe.
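Mobileye has published RSS as a formal model, and its core is a closed-form minimum safe following distance. The Python sketch below illustrates the idea; the reaction time and acceleration bounds used here are illustrative assumptions, not Mobileye’s calibrated numbers:

```python
def rss_min_gap(v_rear, v_front, rho=0.5,
                a_accel=3.0, a_brake_min=4.0, a_brake_max=8.0):
    """RSS-style minimum safe following gap, in meters.

    Worst case assumed by the rule: during reaction time rho the rear
    car accelerates at a_accel, then brakes at its *gentlest* rate
    a_brake_min, while the front car brakes at its *hardest* rate
    a_brake_max. If the gap exceeds this, no rear-end crash is possible.
    """
    v_after = v_rear + rho * a_accel              # rear speed after reacting
    d_rear = (v_rear * rho
              + 0.5 * a_accel * rho ** 2          # distance while reacting
              + v_after ** 2 / (2 * a_brake_min))  # then braking gently
    d_front = v_front ** 2 / (2 * a_brake_max)     # front car braking hard
    return max(0.0, d_rear - d_front)

# Both cars at 25 m/s (~56 mph):
print(round(rss_min_gap(25.0, 25.0), 1))  # prints 61.6
```

Any gap larger than the computed distance is, by the model’s worst-case reasoning, provably safe to take, which is exactly what lets an RSS car merge “right up to the line” without being reckless.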

[…]

Hot Hardware

AI A-hole AVs… A sort of Skynet Terminator AV?

“Yeah, that’s the ticket!”

[A]utonomous vehicles … are widely regarded as much, much safer than cars driven by humans.

By whom?

“AVs have traveled 130 million vehicle miles during testing with 2 deaths,” Lyons said.

Forbes

That’s 1.54 per 100 million vehicle miles traveled.

In 2018, the fatality rate per 100 million vehicle miles traveled – a figure that factors out increases or decreases in total driving – was 1.14. That was down from 1.16 in 2017 but tied for the fourth highest of the previous 10 years.

USA Today

1.54 is 35% more than 1.14.
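The arithmetic behind those figures, for anyone who wants to check it:

```python
av_deaths, av_miles = 2, 130e6            # Lyons's AV testing figures
human_rate = 1.14                         # 2018 US deaths per 100M VMT (USA Today)

av_rate = av_deaths / av_miles * 100e6    # deaths per 100 million miles
print(round(av_rate, 2))                  # 1.54
print(round((av_rate - human_rate) / human_rate * 100))  # 35 (percent higher)
```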

About 1/3 of US traffic fatalities are due to drunk driving. Rather than putting Skynet Terminator AVs on the road, maybe the better pathway is to put a breathalyzer in every vehicle. AI would save more lives by recognizing drunk drivers before they can start the engine than by failing to recognize jaywalkers because they aren’t supposed to be there.

Maybe the futurists should have paid better attention to Star Trek...

Note on comments

Before anyone comments that the article didn’t say “Self-Driving Uber Killed Pedestrian for Jaywalking,” please Google the word “hyperbole” first.

267 Comments
John Robertson
November 13, 2019 9:51 am

The dinosaur in the room has not been mentioned here.
45 seconds was the key. For 45 seconds the software was confused, knew it was confused, yet had no instruction to NOTIFY the safety driver of that confusion.
45 seconds may not sound like much, but try shutting your eyes for even 10 seconds at any speed.
AI as seen on TV does not yet exist. Intelligent machines? Meh.
Who is promoting this nonsense and why?
And that burning question: who is liable?
I see the courts are happy to allow a firearms manufacturer (Remington) to be held liable for the use of their product by others, so the self-driving car makers are doomed before they even start mass production.

lb
November 13, 2019 10:33 am

I wonder how driverless cars would react to chaff.
Imagine throwing a handful of confetti (the aluminum-coated version) out the window or over your shoulder if you’re riding a bicycle.
I bet the AI will stop the car.

November 13, 2019 10:52 am

In the Star Trek universe, a research center named “The Daystrom Institute” is frequently mentioned. However, the show never tells us whether it’s a center for advanced computer design, or a place where AIs with fractured egos go for Rest&Repair.

Linda Goodman
November 13, 2019 11:41 am

I don’t have time to read all the comments today, so I’m sorry if this is repetitive, but nothing is done for We the People, including self-driving cars; safety is always the excuse.

What self-driving cars will do is control not only how fast we travel but where we’re ‘allowed’ to go, or even if we can go at all. It’s the perfect complement to an ecofascist technocracy that makes formerly free lands off-limits to all but the privileged ‘elite’. They’ve even drawn the maps.

venril
November 13, 2019 12:22 pm

The excerpt from Methuselah’s Children hints at the deeper concerns. Someone will be able to tell you where you can and cannot travel. Which neighborhood you may set as a destination and which you cannot. And which will never even appear on your map. I guarantee you’ll never be able to travel to de Blasio’s, or Cher’s, or Kennedy’s neighborhood without the proper permissions set. Or the restricted parks where the nomenklatura will keep their dachas. Forget about suburbia, no reason to go there, peasant.

Destination set: Peoples Agricultural Collective #28492, ETA 6 hours. You have been re-assigned, Peasa… *cough*, Citizen. Work will make you free!

And if you think that little Pol Pottita, AOC, would hesitate to stoop to such measures, think again. Someone will have to harvest all that grain by hand, once ICE are illegal. Guaranteed government jobs …

littlepeaks
November 13, 2019 12:46 pm

I got a great idea. Why don’t they find the country with the worst drivers and use that country to test their autonomous self-driving vehicles? If they did OK over there, they’d probably be very safe anywhere else. (Yes, we have awful drivers in this country, but they’re not the worst.)

michael hart
November 13, 2019 1:38 pm

There’s only one truck of peace. Can AI do that?

tty
November 13, 2019 3:13 pm

Well, now I have read the NTSB report on the Uber accident, and the most mysterious thing is that anyone would want to put this piece of junk on the road, and that any authority could be mad enough to allow it.
Consider this:

1. The software did not recognize the possible existence of pedestrians on the road. They were simply classified as “unknown.”
2. “Unknowns” were considered stationary by default.
3. When an object was reclassified, track history was dumped and the “new” object was considered stationary by default.
4. The original Volvo emergency anti-collision system was automatically deactivated when the Uber software was active
5. The Uber software however did not contain emergency anti-collision functionality. If the system found a collision to be imminent it would only apply mild braking and turning and instead sound an alarm to the supervisory driver to fix the problem.
6. To ensure that this would not work, it first waited an extra second to make sure it wasn’t a false alarm.

What happened was that the system detected the jaywalker in fairly good time (about 5 seconds, 100 yards), but repeatedly reclassified her as unknown/vehicle/bicycle, each time defaulting the object back to stationary as it moved into the track of the vehicle. Just 1.2 seconds (25 yards) before the crash it recognized that the object was now in the path of the car, and consequently “froze” for one second, in case it was a false alarm. It wasn’t, so 0.2 seconds (5 yards) before the crash it sounded the alarm to the supervisory driver and started mildly braking and turning. The driver apparently discovered what was happening before the alarm since he took over 0.12 seconds (3 yards) before collision (nobody reacts that fast) and started braking 0.7 seconds after hitting the pedestrian (so after 0.8 seconds, a fairly typical (though perhaps a bit over average) reaction time in an emergency).
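Plugging tty’s distances into basic kinematics shows how damning the one-second freeze was. The speed falls out of his numbers (100 yards covered in about 5 seconds); the braking deceleration is my own round-number assumption for dry asphalt, not an NTSB figure:

```python
# Rough timeline reconstructed from tty's numbers; speed and
# deceleration are back-of-envelope assumptions, not NTSB figures.
YD = 0.9144                      # meters per yard
v = 100 * YD / 5.0               # ~18.3 m/s: 100 yd covered in 5 s
a = 7.0                          # m/s^2, firm braking on dry asphalt

stop_dist = v ** 2 / (2 * a)     # full-braking stopping distance
print(round(stop_dist / YD, 1))  # ~26 yd, vs 100 yd available at detection

# Distance thrown away by the one-second "is it a false alarm?" freeze:
print(round(v * 1.0 / YD, 1))    # ~20 yd of the 25 yd remaining
```

In other words, the car had roughly four times its stopping distance in hand at first detection, and the false-alarm delay alone consumed most of the distance that remained at the 1.2-second mark.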

Reply to  tty
November 15, 2019 6:53 am

Well, now I have read the NTSB report on the Uber accident, and the most mysterious thing is that anyone would want to put this piece of junk on the road, and that any authority could be mad enough to allow it.

I don’t think the authorities that authorized Uber’s operations in Arizona had any idea how the Uber system functioned. And I think the Uber management had a “move fast and break things” mentality. Not a good mentality for autonomous vehicles.

WXcycles
November 13, 2019 4:30 pm

Ever run into a swarm of insects? Such as sugarcane beetles or grasshoppers? Let alone bitumen splatters, stone chips and dust or mud accretion? Static sensors can not cope with being obscured even on a fairly brief 2 to 3 hour trip down the highway. I rode motorbikes a lot when I was young, usually for no more than about 4 hours at a stint, due to bloodshot wind-blown eyes and battered kidneys from the rubbish rear shocks. But I was always amazed how coated in bugs and bug-guts my motorbike, leathers and helmet were after just a 4 hour ride down the highway.

Plus massive potholes torn up by trucks on the abysmal Bruce Highway. Plus the wild pigs with a line of trotters behind them that run out in front of you, the train crossing, the single-lane bridges with give way signs, the kangaroos, cattle and large birds of prey eating resulting road-kill. Not to mention the bad weather, poor lighting, night time, and night time in the rain, not to mention floodways covered in water of unknown depth. Rock falls, gravel and mud on the road, wet slimy leaves in corners in the rainforest canopy. The slippery diesel spills in corners on steep ranges that are dropped randomly by trucks that refueled at the last town.

Several times on the Atherton Tablelands I found myself rapidly catching up to a car, and was getting ready to pass at a high relative speed, only to have the car turn right in front of me, as some brain-dead farmer returning from town turned off the highway into his property’s hidden driveway without bothering to use an indicator, or even the brakes (and thus the brake lights). They just take their foot off the accelerator, and when they reach their turn-off they turn without any warning. If an automated car had to deal with that at highway speeds, I’d be dead now. As it was, I only just recognized the hazard and survived it by my own emergency braking and swerving actions. You can not program for complacent, oblivious, lazy local farmers who don’t and won’t follow road rules on what they think is their own road; they don’t even use their mirrors for their own safety!

The notion of trusting a randomly degrading and obscured sensor array to drive a car safely with me or mine in it at highway speeds, is to accept high-speed fatal accidents as an unfortunate but necessary feature of fabulous technological progress.

Sorry, I can’t believe such sensors could ever deal with highway realities better than I, or keep me safer than I can. Only an inexperienced fool would ‘trust’ an automated car to drive them on the highway every day.

John Endicott
Reply to  WXcycles
November 15, 2019 5:18 am

Ever run into a swarm of insects? Such as sugarcane beetles or grasshoppers? Let alone bitumen splatters, stone chips and dust or mud accretion?

Yes. Many years ago on a road trip to Florida, the bug splatters on the car’s windshield (as well as the entire front grill) were so thick that we had to pull over a couple of times just to wash and scrape them off enough so we could see to drive (windshield wash/wipers weren’t cutting it; they would just smear the bug splats even worse).

I rode motorbikes a lot when I was young, usually for no more than about 4 hours at a stint, due to bloodshot wind-blown eyes and battered kidneys from the rubbish rear shocks. But I was always amazed how coated in bugs and bug-guts my motorbike, leathers and helmet were after just a 4 hour ride down the highway.

Why don’t bikers smile? So they don’t have to pick the bugs out of their teeth.

November 13, 2019 4:32 pm

It does sound attractive to be able to push a button at home when it’s time to leave for work.
Step into the POD that arrives and nap until it ejects me (or a facsimile of me) at work.
I won’t know or care what happened on the way to work.
I surrendered my control.
Who’s responsible for that?

Jeff Alberts
Reply to  Gunga Din
November 13, 2019 9:22 pm

How is it different from taking a cab, or getting on the subway, or bus?

Reply to  Jeff Alberts
November 14, 2019 3:04 pm

Is it a “what” or a “who” at the wheel?

Sure, people make mistakes. But people can also respond to the unexpected as no AI can.

And just how large does a computer need to be to run a “competent” AI? The size of a CRAY?
The power consumption?
WiFi connection? How many second-by-second decisions for hundreds, thousands, mill … , of controlled vehicles will it take to overwhelm the system? (I didn’t use the word “crash” on purpose even though many would be the results.)

Reply to  Gunga Din
November 14, 2019 2:57 pm

I surrendered my control. Who’s responsible for that?

If you get on an airplane and something bad happens, who is responsible? Well, potentially many people (e.g. pilots, airplane manufacturer or airplane manufacturer parts provider, air traffic control, etc.) In any case, you certainly aren’t (unless you bring on something that causes something bad).

jorgekafkazar
November 13, 2019 4:34 pm

How long will it be before terrorists release a fleet of modified self-driving cars into a city?

old construction worker
November 13, 2019 7:45 pm

I have something for you to think about. Say you live in a city that has a half-million people with self-driving cars all trying to go to work between 6:30 am and 9:30 am. Question: Which self-driving car gets to move from an inside lane to an outside lane to exit, or make lane changes to make a right- or left-hand turn? Will self-driving cars decide to speed up or slow down to make lane changes? Will a self-driving car speed up or slow down to let another self-driving car merge? I’ll make a prediction: when everyone has a self-driving car, it will add another hour to get to work.

Jeff Alberts
Reply to  old construction worker
November 13, 2019 9:24 pm

I think the idea is that they’ll, at some point, communicate with each other, so no individual vehicles will have to make cascading decisions.

jorgekafkazar
Reply to  Jeff Alberts
November 15, 2019 7:39 am

So the time spent communicating will go up by the square of the number of cars on the road? Or will it be logarithmic?
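The answer depends entirely on the communication topology. All-pairs coordination really does grow with the square of the fleet, while schemes where each car only coordinates with a handful of nearby vehicles grow linearly. A quick comparison (the neighbor count of 8 is an arbitrary illustration):

```python
def pairwise_links(n):
    # Every car talking to every other car: n*(n-1)/2 links.
    return n * (n - 1) // 2

def neighborhood_links(n, k=8):
    # Each car only talks to its k nearest neighbors: grows linearly.
    return n * k

for n in (100, 1_000, 10_000):
    print(n, pairwise_links(n), neighborhood_links(n))
# 100       4950      800
# 1000    499500     8000
# 10000 49995000    80000
```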

RoHa
November 13, 2019 11:02 pm

“About 1/3 of US traffic fatalities are due to drunk driving.”

As Torsten Ehrenmark pointed out long ago, this means that in 2/3 of the cases, the drivers were sober. This proves that sober driving is twice as dangerous as drunk driving.

RoHa
November 13, 2019 11:05 pm

You can learn more about Ehrenmark here:

https://sv.wikipedia.org/wiki/Torsten_Ehrenmark

November 14, 2019 1:47 am

As very few if any of us use a motor vehicle 100% of the day and night, this must mean that there will be far fewer vehicles on the road.

As we all know, both governments and businesses regard the car, as the German expression so clearly says, as a “Milch Cow”.

So a massive decrease in revenue to all the mentioned persons will occur.

So what will they do?

MJE VK5ELL

Reply to  Michael
November 14, 2019 10:56 am

So a massive decrease in revenue to all the mentioned persons will occur.

So what will they do ?

One great thing from an accounting standpoint is that autonomous vehicles allow a record of where all vehicles have traveled on every single journey. This allows (at least from a technical standpoint) the costs of road building and maintenance to be charged to the users of the road…and only those users.
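As a toy illustration of that billing idea (the vehicle IDs and dollar amounts below are entirely hypothetical), allocating a road segment’s maintenance cost in proportion to logged miles is just a weighted split:

```python
# Hypothetical trip logs: vehicle id -> miles driven on one road segment.
trip_miles = {"AV-001": 120.0, "AV-002": 30.0, "AV-003": 50.0}
segment_cost = 10_000.0   # assumed annual maintenance cost of that segment

total = sum(trip_miles.values())
bills = {vid: segment_cost * mi / total for vid, mi in trip_miles.items()}
print(bills["AV-001"])    # 6000.0: pays in proportion to use
```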

Joe G
November 14, 2019 3:38 am

If you watch the footage a normal driver may not have seen her. If you are crossing the street the onus is on you to make sure it is safe- even in a crosswalk.

Reply to  Joe G
November 14, 2019 11:04 am

If you watch the footage a normal driver may not have seen her. If you are crossing the street the onus is on you to make sure it is safe- even in a crosswalk.

I don’t agree. I think this accident would have easily been avoided by a human driver with even slightly less-than-average competence. It was a straight, flat road on a clear evening. Elaine Herzberg had already crossed essentially two lanes of roadway. She was pushing a bicycle with grocery bags dangling, so she wasn’t moving fast. Even a relatively poor driver could have at least slowed down enough not to kill Ms. Herzberg. And I think even an average driver would not have struck her at all in that situation.

tty
Reply to  Joe G
November 15, 2019 2:29 am

The “check driver” did notice her. He took over just before the crash, though too late to do anything about it. A normally alert driver might well have avoided the crash, particularly if the Volvo anti-collision system hadn’t been disabled by Uber.

Reply to  tty
November 15, 2019 9:10 pm

The “check driver” did notice her. He took over just before the crash,

The “check driver” was female, and didn’t brake until after the collision. From the IEEE Spectrum article referenced above, referring to the Uber software:

It suppressed any planned braking for a full second, while simultaneously alerting and handing control back to its human safety driver. But it was too late. The driver began braking after the car had already hit Herzberg.

BC
November 14, 2019 11:37 am

I wish people would stop talking about AI, because until a computer can have an original thought there is no such thing. Computers calculate results or determine their actions simply by processing hard-coded instructions, and how can you possibly code all of your nuanced thought processes into a set of instructions?

For example, how can you tell a computer that, by observing the ‘behavior’ of the car in the next lane and by noting the fact that you are at a location where a lot of the traffic needs to change lanes to make an upcoming turn, it needs to increase its vigilance and prepare for the unexpected? And how can you hard-code an instruction saying: if you see someone on the side of the road with their back to you and they have just finished a conversation with someone, you have to prepare for the fact that they may be too focused on their conversation to check for traffic before stepping onto the road? These are not written instructions. They are deductions we make from having knowledge of our environment and of our fellow humans. Until we understand fully how our brains work, it is simply not possible to have machines imitate us.

But even then there is the problem of working out how to give the computer its instructions. For example, if you tried to have a computer determine whether AGW is a problem, how can you tell a computer that the guarded look in the eyes of a certain infamous proponent of AGW tells you that you need to be suspicious of the person and consider what their motive might be? And, for that matter, how can the computer understand motive? It can’t consider that the person might have a financial motive for pushing the AGW scare, or that another person doing exactly the same thing might just be a little bit simple rather than having a selfish motive. Try to write instructions for these things and see how far you get.
Finally, I have to say that the Star Trek episode The Ultimate Computer is my favourite. The person who played Daystrom, William Marshall, played the part with absolute perfection.
https://en.wikipedia.org/wiki/William_Marshall_(actor).

lb
Reply to  BC
November 14, 2019 1:09 pm

“because until a computer can have an original thought there is no such thing”

Dang, you’re right. Should have thought of that myself! 😉

Reply to  BC
November 14, 2019 3:09 pm

I wish people would stop talking about AI, because until a computer can have an original thought there is no such thing.

A common definition of intelligence is:

the ability to acquire and apply knowledge and skills

Driving a car requires lots of intelligence, without requiring “an original thought.”

jorgekafkazar
Reply to  Mark Bahner
November 15, 2019 7:41 am

Common definitions are not always appropriate for a particular situation.

Reply to  jorgekafkazar
November 15, 2019 1:05 pm

Common definitions are not always appropriate for a particular situation.

But in this case it certainly is. If you want an artificial intelligence (AI) driving your car, you probably don’t want an AI that has the original thought of, “Hey, rather than driving around the Everglades on this road, why don’t I just cut through the Everglades to reduce the distance and save time?”

Instead, you want an AI that can simply apply its knowledge and skills to staying in the proper lane of the road it’s on.