Autonomous car trials: Are they smart or reckless?
Live trials of self-driving cars, such as the Nissan Leaf's 230-mile trip to Sunderland, split opinion within the industry. We weigh both sides of the argument

Later this year, a Nissan Leaf will travel from Cranfield University to Sunderland, a distance of 230 miles. It will navigate roundabouts, A-roads and motorways, all through live traffic. Nothing unusual about that except that the Leaf will be driving itself.

The journey, called the Grand Drive, is billed as the most complex autonomously controlled journey yet attempted in the UK. It will be the culmination of a 30-month development project that boasts heavyweight partners including Nissan and Hitachi.

The project is called HumanDrive because one of its goals is to develop a vehicle control system that emulates a natural human driving style using machine learning and artificial intelligence. To assist the engineers, a detailed visualisation of the environment in which the autonomous test cars are being developed has been created. “The visualisation is rendered by a powerful games engine and derived from a detailed scan of the environment,” says Edward Mayo, programme manager at Catapult, the organisation managing HumanDrive. “We use tools that extract data from the real world; for example, the exact position of centre lines, road edges and potholes, as well as the precise angles of road signs.”

As the autonomous car, with a safety driver on board, is driven through the real test environment, it generates a stack of performance data. This is used to recreate its trajectory and behaviour in the digital visualisation.

A car driven by a human then repeats the journey. The resulting data allows development engineers to visualise and compare the performance of the two cars.
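
To make that comparison concrete, here is a minimal sketch of how two logged runs of the same route might be lined up and compared. It is purely illustrative: the file names and the column layout (t, x, y, speed, recorded in a shared map frame) are assumptions for this example, not HumanDrive’s actual data format.

```python
# Illustrative sketch only (not the HumanDrive toolchain): compare an autonomous
# run against a human-driven run of the same route. Assumes hypothetical CSV logs
# with columns t, x, y, speed recorded in a shared map frame.
import numpy as np
import pandas as pd

def resample_by_distance(df, step=1.0):
    """Re-index a trajectory by distance travelled, so two runs of the same
    route can be compared point for point regardless of timing."""
    d = np.r_[0.0, np.cumsum(np.hypot(np.diff(df.x), np.diff(df.y)))]
    grid = np.arange(0.0, d[-1], step)
    return pd.DataFrame({
        "s": grid,
        "x": np.interp(grid, d, df.x),
        "y": np.interp(grid, d, df.y),
        "speed": np.interp(grid, d, df.speed),
    })

av = resample_by_distance(pd.read_csv("autonomous_run.csv"))    # hypothetical log
human = resample_by_distance(pd.read_csv("human_run.csv"))      # hypothetical log

# Trim to the common length, then compare position and speed along the route.
n = min(len(av), len(human))
lateral_gap = np.hypot(av.x[:n] - human.x[:n], av.y[:n] - human.y[:n])
speed_gap = av.speed[:n] - human.speed[:n]

print(f"mean path deviation: {lateral_gap.mean():.2f} m")
print(f"worst path deviation: {lateral_gap.max():.2f} m")
print(f"mean speed difference: {speed_gap.mean():.2f} m/s")
```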

“We’ve found that one of the key challenges with an autonomous car is encountering cyclists and safely overtaking them,” says Mayo.

He calls it a challenge, but for one pedestrian, an encounter with an autonomous test car unrelated to HumanDrive proved fatal. In March 2018, Elaine Herzberg was wheeling her bicycle, laden with shopping bags, across a four-lane highway in Tempe, Arizona, when she was struck by an Uber test vehicle.

An investigation showed that the car had misidentified Herzberg and her bicycle, leading it to make false assumptions. Video footage from inside the car showed that the safety driver saw Herzberg only when it was too late.

Simulation is best

Herzberg was the first pedestrian to be killed by an autonomous car, but Michael DeKort, a US-based former systems engineer with long experience of defence and flight simulation, and a member of the Society of Automotive Engineers’ task force responsible for autonomous vehicles, believes she won’t be the last.

“The processes most auto makers are using to create autonomous vehicles will never save most of the lives they want to and instead take thousands of lives needlessly in a fatally flawed effort trying,” he says.

The processes DeKort refers to are the practice of using public roads to develop autonomous cars, the method by which the vehicle cedes control to the on-board human safety driver – called the handover – and the type of games-derived simulation technology he claims auto makers use for development.

According to DeKort, using public roads for testing will never expose an autonomous car’s control systems to enough scenarios to make the vehicle safe – a state which, he says, it can only achieve by experiencing multiple accidents. Also, he says, tests have shown that vehicle handover is unsafe since the human driver has insufficient time to acquire the necessary situational awareness. “Driving around on public roads with a safety driver and stumbling on stuff is not the way to go,” says DeKort. “We’ve not had the death of a child or a family yet but already people have been killed in or by autonomous cars.”

He’s speaking to me on the telephone from his office at Dactle, the company he recently founded in the US. According to its website, its purpose is to enable the safe, ethical and efficient development and verification of autonomous vehicles through the use of aerospace and military simulation technology.

Such technology, it claims, will avoid the financial, safety and time impacts of public safety driving.

Given his business, it would be easy to dismiss DeKort’s criticisms of rival approaches, except that in 2008 he received public service awards for his efforts to expose what he claims were serious flaws in the equipment his then employer, Lockheed Martin, was installing in US Coast Guard vessels.

He shoots from the hip, at one point accusing Cranfield University, and others who test autonomous cars on public roads, of being “reckless”.

“They’re waiting for a crash,” he says. “The industry must stop what it’s doing and use more simulation.”

But not, he says, games-derived simulation, which has major real-time and model-fidelity issues concerning variables such as vehicle specifics, tyre and road surface condition, sensor capability and the environment.

“Using this type of simulation will lead to false confidence and real-world tragedies, while delaying for decades the introduction of safe, level four and five autonomy,” says DeKort. “Only aerospace-derived simulation has the test and verification capability to deliver full autonomy, safely.”

Next, DeKort turns his fire on the limitations of autonomous sensors. He says: “Sensors face multiple challenges including identifying complex objects such as, for example, Herzberg’s bicycle laden with shopping bags or interpreting different fabric weaves and patterns.

“Imagine a UK tourist visiting Arizona wearing an item of clothing with an unrecognisable weave. The sensor would be confused. It needs to be exposed to thousands of different textures and weaves at different times of day and in different weather conditions to be sufficiently well ‘educated’.

“Don’t waste time on shadow driving to develop sensors. Instead, spend it on proper simulation, based on data from the real world that has been identified, categorised and degraded, and whose results are verifiable.”
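
As an aside, here is a minimal sketch of what ‘identified, categorised and degraded’ real-world data could look like in code: labelled camera frames are replayed under a grid of lighting and weather degradations, each variant keeping its original ground truth. The function, condition names and parameter values are hypothetical illustrations, not taken from DeKort’s simulators or any named tool.

```python
# Illustrative sketch only: systematically degrading labelled real-world sensor
# frames so a perception stack can be exercised across lighting and weather
# conditions it has not physically encountered. All names and values here are
# hypothetical, chosen for the example.
import numpy as np

rng = np.random.default_rng(0)

def degrade(image, brightness=1.0, noise_std=0.0, fog=0.0):
    """Apply simple, controllable degradations to an RGB image in [0, 1]:
    dimming for dusk/night, Gaussian noise for sensor grain, and a blend
    towards white to mimic fog or glare."""
    out = image * brightness
    out = out + rng.normal(0.0, noise_std, image.shape)
    out = (1.0 - fog) * out + fog * 1.0
    return np.clip(out, 0.0, 1.0)

# A small test matrix: every labelled frame is replayed under a grid of
# conditions, and each variant keeps its original ground-truth label so the
# detector's output can be checked automatically.
conditions = [
    {"name": "midday",    "brightness": 1.0, "noise_std": 0.01, "fog": 0.0},
    {"name": "dusk",      "brightness": 0.5, "noise_std": 0.03, "fog": 0.0},
    {"name": "night",     "brightness": 0.2, "noise_std": 0.05, "fog": 0.0},
    {"name": "light_fog", "brightness": 0.9, "noise_std": 0.02, "fog": 0.3},
    {"name": "heavy_fog", "brightness": 0.8, "noise_std": 0.02, "fog": 0.6},
]

frame = rng.random((480, 640, 3))   # stand-in for a labelled camera frame
variants = {c["name"]: degrade(frame, c["brightness"], c["noise_std"], c["fog"])
            for c in conditions}
print({name: round(float(v.mean()), 3) for name, v in variants.items()})
```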

It won’t be cheap. DeKort says hundreds of people will be required to run the necessary number of simulations using simulators like his that cost up to £2 million each.

“There is no choice,” he says. “Without doing this, full autonomy will be a dream.”

Autonomy can be safe

In response, Dr Stefano Longo, a senior lecturer at Cranfield and a member of the Multi-User Environment for Autonomous Vehicle Innovation (MUEAVI), where much of HumanDrive’s research work is being conducted, says: “I’m not sure Michael is [doing anything] different from the way I’m working.”

Longo, who is also employed by Embotech, a leading developer of decision-making software for autonomous technologies, says: “At Cranfield, we’re learning that real-world scenarios are infinite and to test them we’re doing more simulation. We need to be able to predict 99.9% of hazards or people won’t trust the technology and I reckon that, at the moment, we’re at 80%. The final 10% will be the hardest.

“There was a lot of hype around the industry at the beginning, but now it’s becoming clearer exactly what the challenges are. For example, people thought vehicle handover was sorted but it isn’t. It’s not a good solution and has been proven not to be safe. A few seconds’ notice is not enough; ideally, you need a few miles.

“But an autonomous car can be safe. For now, one solution is to limit the scope of its application to environments where pedestrians and vehicles are kept apart. Full, level five autonomy is 50 years away.”

Comments
Cobnapint 20 October 2019

A dangerous waste of time

And money. And technical know-how.
Get these people trying to crack new battery tech.
Old But not yet Dead 20 October 2019

Interesting

May I thank those who have a good understanding of the technical issues facing this developing science. Although some of it goes above my head, it puts into perspective the challenges faced. As a layman, it would seem that although much can be achieved by simulations, no legislative body will allow level 5 autonomy without real-world testing to a very extensive level. And nor should they. There are dangers; they will be mitigated, but nothing will ever be 100% safe.

 

imispgh 19 October 2019

Michael DeKort - Clarifications

Using simulation for 99.9% of the development and test is not expensive. It is far cheaper than using the real world. This is because the real world would be over 500B miles and $300B per company. (Which cannot be done. And that doesn't mention the thousands of injuries/deaths caused by learning accident scenarios.) The hundreds of millions of $ I mentioned is for the whole scenario/simulation set to get to L4. A cost that would be spread by many users. Worst case, if someone paid for it all themselves it equals what Uber and Waymo spend in a couple of months. And again, a rounding error compared to using the real world. And there is no choice in the end. You can do this or never get to L4, never save the relevant lives and harm thousands of people for no reason.

I am all for shadow driving. We want that data and intention testing. We just want less of it, meaning not eternally driving. Safety driving is what needs to be virtually eliminated. When it is necessary, it should be after it is proven simulation cannot do what is needed. And when it is needed, it should be run like a movie set, not public-domain Wild West action.

As for proper simulation: you must have every object in the world the system cares about be precise visually and in physics. This is because perception systems struggle so much. Every building, tree, sign, the vehicle, tires, road and sensors all have to be exact. Not close - but exact. If this is not done, there will be unknown gaps between the real world and the sim, which will lead to planning issues and tragedies in complex and accident scenarios. And we cannot make the argument to switch 99.9% of the public shadow and safety driving to sim.

The sim tech in this industry is nowhere near capable of this, or of making a legitimate digital twin. Not a single vendor makes a system where they even try to get half the models right, let alone get them right. I reached out to DoD to do this because these folks were unwilling to fix those gaps when I reached out to them, mostly because they did not want to re-architect their systems or admit they had this many issues. So you wind up with IT/gaming people who concentrate on great-looking visuals with no geospecifics in most cases, or OEM manufacturers trying to use their systems with only the car model being acceptable. I would be glad to go over this with anyone in more detail, explain the exact architecture differences and show you examples of it being done right.
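
A quick back-of-the-envelope check of the figures quoted in the comment above; all of the numbers are the commenter’s own claims, with ‘hundreds of millions’ taken as roughly $300 million purely for illustration:

```python
# Back-of-the-envelope check of the figures in the comment above (all numbers
# are the commenter's claims, not independently verified; the simulation figure
# is an assumed mid-point of "hundreds of millions").
real_world_miles = 500e9     # miles of public-road driving claimed necessary
real_world_cost = 300e9      # claimed cost per company, USD
sim_programme_cost = 300e6   # assumed cost of the full scenario/simulation set, USD

print(f"implied real-world cost per mile: ${real_world_cost / real_world_miles:.2f}")
print(f"simulation as a share of real-world cost: {sim_programme_cost / real_world_cost:.1%}")
```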