Live trials of self-driving cars, such as the Nissan Leaf's 230-mile trip to Sunderland, split opinion within the industry. We weigh both sides of the argument
John Evans
19 October 2019

Later this year, a Nissan Leaf will travel from Cranfield University to Sunderland, a distance of 230 miles. It will navigate roundabouts, A-roads and motorways, all through live traffic. Nothing unusual about that except that the Leaf will be driving itself.

The journey, called the Grand Drive, is billed as the most complex autonomously controlled journey yet attempted in the UK. It will be the culmination of a 30-month development project that boasts heavyweight partners including Nissan and Hitachi.

The project is called HumanDrive because one of its goals is to develop a vehicle control system that emulates a natural human driving style using machine learning and artificial intelligence. To assist the engineers, a detailed visualisation of the environment in which the autonomous test cars are being developed has been created. “The visualisation is rendered by a powerful games engine and derived from a detailed scan of the environment,” says Edward Mayo, programme manager at Catapult, the organisation managing HumanDrive. “We use tools that extract data from the real world; for example, the exact position of centre lines, road edges and potholes, as well as the precise angles of road signs.”

As the autonomous car, with a safety driver on board, is driven through the real test environment, it generates a stack of performance data. This is used to recreate its trajectory and behaviour in the digital visualisation.

A car driven by a human then repeats the journey. The resulting data allows development engineers to visualise and compare the performance of the two cars.
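In outline, that comparison reduces to aligning two time-stamped position traces, one autonomous and one human, over the same scanned road model and measuring where they diverge. The following is a minimal sketch of that step, not HumanDrive's actual pipeline; the (N, 2) x/y array layout and the two summary metrics are illustrative assumptions.

import numpy as np

# Illustrative sketch only: the (N, 2) x/y layout is an assumed format,
# not HumanDrive's real telemetry schema.
def trajectory_deviation(auto_xy, human_xy):
    """Compare two same-rate trajectories; positions in metres."""
    n = min(len(auto_xy), len(human_xy))
    gaps = np.linalg.norm(auto_xy[:n] - human_xy[:n], axis=1)
    return {"mean_offset_m": float(gaps.mean()),  # average divergence
            "max_offset_m": float(gaps.max())}    # worst single divergence

auto = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.3]])
human = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.1]])
print(trajectory_deviation(auto, human))  # ≈ {'mean_offset_m': 0.1, 'max_offset_m': 0.2}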

“We’ve found that one of the key challenges with an autonomous car is encountering cyclists and safely overtaking them,” says Mayo.

He calls it a challenge, but for one pedestrian who, in March 2018, was pushing her bicycle across a road, an encounter with an autonomous test car, unrelated to HumanDrive, proved to be fatal. Elaine Herzberg was wheeling the cycle, laden with shopping bags, across a four-lane highway in Tempe, Arizona, when she was struck by an Uber test vehicle.

An investigation showed that the car had misidentified Herzberg and her bicycle, leading it to make false assumptions. Video footage from inside it showed that the safety driver had only seen Herzberg when it was too late.

Simulation is best

Herzberg was the first pedestrian to be killed by an autonomous car, but Michael DeKort, a US-based former systems engineer with long experience of defence and flight simulation, and a member of the Society of Automotive Engineers’ task force responsible for autonomous vehicles, believes she won’t be the last.

“The processes most auto makers are using to create autonomous vehicles will never save most of the lives they want to and instead take thousands of lives needlessly in a fatally flawed effort trying,” he says.

The processes DeKort refers to are the practice of using public roads to develop autonomous cars, the method by which the vehicle cedes control to the on-board, human safety driver – called the handover – and the type of games-derived simulation technology he claims they use for development.

According to DeKort, using public roads for testing will never expose an autonomous car’s control systems to enough scenarios to make the vehicle safe – a state which, he says, it can only achieve by experiencing multiple accidents. Also, he says, tests have shown that vehicle handover is unsafe since the human driver has insufficient time to acquire the necessary situational awareness. “Driving around on public roads with a safety driver and stumbling on stuff is not the way to go,” says DeKort. “We’ve not had the death of a child or a family yet but already people have been killed in or by autonomous cars.”

He’s speaking to me on the telephone from his office at Dactle, the company he recently founded in the US. According to its website, its purpose is to enable the safe, ethical and efficient development and verification of autonomous vehicles through the use of aerospace and military simulation technology.

Such technology, it claims, will avoid the financial, safety and time impacts of public safety driving.

Given his business, it would be easy to dismiss DeKort’s criticisms of rival approaches except that in 2008 he was the recipient of public service awards for his efforts to expose what he claims were serious flaws in the equipment his then employer, Lockheed Martin, was installing in US coast guard vessels.

He shoots from the hip, at one point accusing Cranfield University, and others who test autonomous cars on public roads, of being “reckless”.

“They’re waiting for a crash,” he says. “The industry must stop what it’s doing and use more simulation.”

But not, he says, games-derived simulation that has major real-time and model fidelity issues concerning variables such as vehicle specifics, tyre and road surface condition, sensor capability and environment.

“Using this type of simulation will lead to false confidence and real-world tragedies, while delaying for decades the introduction of safe, level four and five autonomy,” says DeKort. “Only aerospace-derived simulation has the test and verification capability to deliver full autonomy, safely.”

Next, DeKort turns his fire on the limitations of autonomous sensors. He says: “Sensors face multiple challenges including identifying complex objects such as, for example, Herzberg’s bicycle laden with shopping bags or interpreting different fabric weaves and patterns.

“Imagine a UK tourist visiting Arizona wearing an item of clothing with an unrecognisable weave. The sensor would be confused. It needs to be exposed to thousands of different textures and weaves at different times of day and in different weather conditions to be sufficiently well ‘educated’.

“Don’t waste time on shadow driving to develop sensors. Instead, spend it on proper simulation, based on data from the real world that has been identified, categorised and degraded, and whose results are verifiable.”
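The “education” DeKort describes is, in machine learning terms, a training-data coverage problem. One common industry response, though not the aerospace-grade simulation he advocates, is synthetic augmentation: generating lighting and weather variants from captured frames. A toy sketch, with ranges chosen purely for illustration:

import numpy as np

rng = np.random.default_rng(42)

def lighting_variants(image, n_variants=8):
    """Generate brightness/contrast variants of an (H, W, 3) uint8 frame.
    A toy stand-in for the coverage problem described above."""
    out = []
    for _ in range(n_variants):
        brightness = rng.uniform(0.4, 1.3)   # dusk through bright daylight
        contrast = rng.uniform(0.7, 1.2)     # haze and rain flatten contrast
        img = (image.astype(np.float32) - 128.0) * contrast + 128.0 * brightness
        out.append(np.clip(img, 0, 255).astype(np.uint8))
    return out

frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # dummy camera frame
print(len(lighting_variants(frame)))  # 8 variants from a single captured image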

It won’t be cheap. DeKort says hundreds of people will be required to run the necessary number of simulations using simulators like his that cost up to £2 million each.

“There is no choice,” he says. “Without doing this, full autonomy will be a dream.”

Autonomy can be safe

In response, Dr Stefano Longo, a senior lecturer at Cranfield and a member of Multi-User Environment for Autonomous Vehicle Innovation (MUEAVI), where much of HumanDrive’s research work is being conducted, says: “I’m not sure Michael is [doing anything] different from the way I’m working.”

Longo, who is also employed by Embotech, a leading developer of decision-making software for autonomous technologies, says: “At Cranfield, we’re learning that real-world scenarios are infinite and to test them we’re doing more simulation. We need to be able to predict 99.9% of hazards or people won’t trust the technology and I reckon that, at the moment, we’re at 80%. The final 10% will be the hardest.

“There was a lot of hype around the industry at the beginning, but now it’s becoming clearer exactly what the challenges are. For example, people thought vehicle handover was sorted but it isn’t. It’s not a good solution and has been proven not to be safe. A few seconds’ notice is not enough; ideally, you need a few miles.

“But an autonomous car can be safe. For now, one solution is to limit the scope of its application to environments where pedestrians and vehicles are kept apart. Full, level five autonomy is 50 years away.”
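Longo’s “few miles” is easy to put into seconds: at motorway speed, handover distance converts directly into warning time. A quick illustrative conversion (the distances and speed here are chosen for the example, not quoted by Longo):

# Warning time implied by a handover distance at a given speed (illustrative).
def warning_seconds(distance_miles, speed_mph):
    return distance_miles / speed_mph * 3600.0

print(warning_seconds(0.1, 70))  # "a few seconds": ~5s of notice at 70mph
print(warning_seconds(2.0, 70))  # "a few miles": ~103s to rebuild awareness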

READ MORE

Is the public ready to share the roads with self-driving cars?

Under the skin: how paint is improving EV batteries and autonomous cars

Are semi-autonomous systems making cars safer?

Join the debate

Comments (12)

19 October 2019

It's exactly the same as a learner driver. Probably safer, in fact.

19 October 2019

It is best to allow driverless cars out on the open road. Once a vehicle passes the electronic outer markers of a city on its way in, the driverless vehicle and its chauffeur get a wake-up call that it is time to switch over to manual mode.

19 October 2019

There's absolutely nothing wrong with trials, although I'm not sure how they'll replicate the real world. In a trial, someone will be constantly monitoring the car and its surroundings; in the real world the driver will nod off after a long day at work, parents will be interacting with their kids in the back seat, etc. And of course, there's the insurance nightmare of who's to blame when an accident occurs.

By the way, my car had to be returned to the garage three times to have its systems re-aligned after hitting an animal several months ago. For what should have been a simple repair job, the electronics meant a lengthy time at the repair shop (which then had to send it to a specialist dealership, which in turn I had to visit twice more because the systems weren't working correctly). And the cost of this simple accident was £4500 when, without radar, it would have been around £600. And after all of that, the car now sits left of centre when lane assist is on and, instead of keeping a relatively straight line, weaves all over the lane when going around a corner. What's a fully autonomous vehicle going to cost to repair, and how good will that repair be?

I say fully autonomous vehicles are a pipe dream. The technology may work but the practicalities don't. Anyway, what benefit does it bring?

19 October 2019

scotty5, it does sound like you have had a bad experience with a repair. Not a new experience, but made worse for you by new technology and a lack of expertise to sort it. Technology has a big tendency to get cheaper and more reliable as it develops, and expertise comes with familiarity. The current situation is far from what it will be in a scenario where autonomous cars are commonplace.


The potential benefits are enormous. Curtailing the many deaths and injuries on the roads, most of which are due to human error. More freedom for those who are unable to drive due to medical issues. Better traffic flow due to alert, connected control. Taxis become much cheaper because most of the cost is the driver, so they will be accessible to more people.


Of course there will be downsides. Driving jobs lost, but changing employment types has been an ongoing process for centuries, including a change which brought the Luddite term. It will be expensive to develop, and will require changes. One that worries many on this site is losing the joy of driving, but how often is it a chore on our congested roads?

20 October 2019

We have to stop thinking that deaths on the road will be curtailed because “most are due to human error”. This statement is absolutely correct but neglects to consider that countless thousands of deaths are also prevented by these same fallible humans. My fear is that some lives may indeed be saved but at least as many deaths will be caused that would have been avoided with a human in control.

19 October 2019

When self-driving cars are really ready for the road, they'll be able to go on a journey without you. Want some new kit for the garden or planks from B&Q? Just send the car; why bother going yourself? But, just like the human robot that's been promised for decades, when it's really ready we'll know it is - because Manchester United Robots will be playing Chelsea Robots. Like all new developments, the whole self-driving car thing has been grossly over-hyped and will take far longer to become a reality than foolish politicians, jumping on the latest bandwagon, believe.


19 October 2019
Autocar wrote:

"An investigation showed that the car had misidentified Herzberg and her bicycle, leading it to make false assumptions. Video footage from inside it showed that the safety driver had only seen Herzberg when it was too late." 

The "car" didn't misidentify Herzberg. In the 4.7 seconds between seeing "something" and initiating emergency braking, the "car" couldn't decide what it was looking at: something "unknown", a "vehicle", then a bicycle. Emergency braking was initiated 1.3 seconds before impact which, at 43mph, was much too late. The car was at 39mph at impact. At 6 seconds before impact, when the "car" was first detecting something unknown, that was the time for it to start initiating some kind of avoidance, be it manoeuvring and/or braking. It was not misidentification. It was failure to adequately respond to a lack of any identification.
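A rough back-of-envelope check bears this out, assuming a full emergency deceleration of about 7 m/s^2 on dry tarmac (an assumed figure, not one from the investigation):

MPH_TO_MS = 0.44704

v0 = 43 * MPH_TO_MS        # ~19.2 m/s when braking began
v_impact = 39 * MPH_TO_MS  # ~17.4 m/s at impact

achieved = (v0 - v_impact) / 1.3  # ~1.4 m/s^2 actually shed in the final 1.3s

full = 7.0                        # assumed dry-road emergency braking, m/s^2
stop_time = v0 / full             # ~2.7s needed to stop from 43mph
stop_dist = v0 ** 2 / (2 * full)  # ~26m needed

# Even perfect braking begun 1.3s (about 25m) out cannot quite avoid impact;
# begun at the 6-second first-detection mark (~115m out), the car stops
# with roughly 90m to spare.
print(round(achieved, 1), round(stop_time, 1), round(stop_dist))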

As to the safety driver, why did they only see Herzberg when it was too late? Low light conditions and dark clothing worn by the victim, as the local authorities asserted initially? No, the safety driver was watching "The Voice" on her phone, and had been for almost the entirety of the self-driving phase of the test. Further, as she was holding her phone, her hands were not on the steering wheel, as they should have been.

It would be too easy to bang on about simulation and testing but it comes down to human factors, nearly always, in these things.

The Ford Fusion cars that Uber had used before the Volvo XC90 that hit and killed Herzberg had seven LIDAR detectors on them. The Volvos had only one, mounted on the roof. While Uber had increased the number of RADAR detectors from seven to ten, the decision to reduce the LIDAR coverage affected the amount of vertical scanning that the system was capable of processing, which led to the issue with object decision-making.

Velodyne, who supplied the LIDAR detector, state that a single roof-mounted detector will always have a blind spot towards the ground, and that side detectors will always be necessary to be more certain of pedestrian detection. The RADAR detection should be able to pick up the slack, but not necessarily in all conditions.

The "car" wasn't capable of meeting a minimal risk condition in the event of a failure, in contravention of a state ruling handed down a little over two weeks before the death occurred. It couldn't alert the driver, and could only emergency brake, not feather, or actively apply, the brakes in response to upcoming obstacles (5.7 seconds away, in this case), as that system had been disabled.

Uber's visual detection was of poor quality too. The forward-looking optical camera only picked up Herzberg 0.1 seconds before emergency braking was initiated, 1.4 seconds before impact. This was part of the reason the local authorities tried to pass the blame to the victim. Footage from the phone cameras and off-the-shelf dashcams of drivers using that road proved, however, that conditions were such that Herzberg was visible even before the "car" initially detected "something".

Uber decided to reduce the number of personnel in the test cars from two to one. While the second person was there principally to review data, and Uber asserted their system made that role redundant (*), it is possible that a second human in the car might have brought more situational awareness, made it less likely that the driver was watching reality TV, and someone might not have died. Of course, it is possible that while one was watching The Voice the other could have been watching The Apprentice. Humans can be just that dumb.

(*) While Uber asserted that the data reviewing was no longer necessary while on the move, the driver claimed that the reason she wasn't looking ahead, in the lead-up to and at the time of the impact, was that she was reviewing data messages on the car's display console. The in-car video does bear this out, as she was seen to look at that screen (and not her phone) 166 times, for around 6 minutes, during the 21 minutes before the collision. Uber denied this was necessary, but that doesn't change that it happened.

And then, of course, there's the age-old relationship between the corporation and the local authorities. History is full of stories of inappropriate relationships and corruption between these sorts of parties. I'm not saying there was here, however. Separately, in the three years between being set up and the death of Herzberg, the Arizona State Self-Driving Oversight Committee met just twice. Separately, Uber officially started tests in December 2016, though leaks from the Governor's Office revealed that Uber had been testing with the state's, but not the public's, knowledge since August that year. Separately, Uber made their office available for the Governor's staff to work in (though there's no evidence that they did). Separately, the Governor pushed through laws to allow the likes of Uber cabs to operate, even though they had been banned by the previous governor just a year earlier. Coincidentally, the Governor was an enthusiastic tweeter of how good Uber are... almost like an ad service. Separately, the owners of Phoenix airport - which happened to be the City of Phoenix - had a lot of pressure applied to them to allow Uber cabs to operate from there.

Humans.


19 October 2019

Looks like you are in the industry or do your homework well. I would like to chat with you about the items in my articles below (please look them up on Medium; I can't post links here), as well as show you proof of the capabilities of DoD sim tech. As I talk with quite a few folks in the industry, I can assure you it will be confidential. Google me and the IEEE Barus Ethics Award. Michael DeKort

Proposal for Successfully Creating an Autonomous Ground or Air Vehicle  
 
Simulation can create a Complete Digital Twin of the Real World if DoD/Aerospace Technology is used
 
Autonomous Vehicles Need to Have Accidents to Develop this Technology


19 October 2019

There's going to be teething troubles, has to be, but hopefully in the end we'll have better, easier travel and commutes. I'd be highly surprised, though, if people were to just have forty winks in a car on a busy Friday night...!

19 October 2019

Using simulation for 99.9% of the development and test is not expensive. It is far cheaper than using the real world, because the real world would be over 500B miles and $300B per company. (Which cannot be done. And that doesn't mention the thousands of injuries/deaths caused by learning accident scenarios.) The hundreds of millions of dollars I mentioned is for the whole scenario/simulation set to get to L4. A cost that would be spread across many users. Worst case, if someone paid for it all themselves, it equals what Uber and Waymo spend in a couple of months. And again, a rounding error compared to using the real world. And there is no choice in the end. You can do this or never get to L4, never save the relevant lives and harm thousands of people for no reason.

I am all for shadow driving. We want that data and intention testing. We just want less of it, meaning not eternally driving. Safety driving is what needs to be virtually eliminated. Where it is necessary, it should be after it is proven simulation cannot do what is needed. And when it is needed, it should be run like a movie set, not wild-west action in the public domain.

As for proper simulation: you must have every object in the world the system cares about be precise, visually and in physics, because perception systems struggle so much. Every building, tree, sign, the vehicle, tires, road and sensors all have to be exact. Not close - but exact. If this is not done there will be unknown gaps between the real world and the sim, which will lead to planning issues and tragedies in complex and accident scenarios. And then we cannot make the argument to switch 99.9% of the public shadow and safety driving to sim.

The sim tech in this industry is nowhere near capable of this, or of making a legitimate digital twin. Not a single vendor makes a system where they even try to get half the models right, let alone get them right. I reached out to DoD because these folks were unwilling to fix those gaps when I approached them, mostly because they did not want to re-architect their systems or admit they had this many issues. So you wind up with IT/gaming people who concentrate on great-looking visuals with no geospecifics in most cases, or OEM manufacturers trying to use their systems with only the car model being acceptable. I would be glad to go over this with anyone in more detail, explain the exact architecture differences and show you examples of it being done right.

