Here’s a scenario. You’re sitting in an autonomous car on the way to work. You’re in the driver’s seat, but the car is taking care of everything – steering, braking, accelerating and changing lanes – leaving you free to relax. Suddenly, a child steps into the street. There’s no time for you to regain control, so the car has to make a decision. Initially, it wants to swerve to one side to avoid the child, but that would mean mounting the pavement where other pedestrians are walking.

So there’s a dilemma. Does the car swerve around the child and risk injuring several pedestrians, or does it sacrifice the few to save the many and hit the child? It may sound like an unlikely scenario, but it’s exactly the sort of problem we’ll need to solve before fully autonomous cars become mainstream. With most experts expecting that to happen around 2025, the clock is well and truly ticking.
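To see why this is a software problem and not just a philosophy seminar, here’s a deliberately simplified sketch of what a “minimise expected harm” rule might look like in code. This is purely illustrative – no manufacturer has published anything like it, and every name here (Manoeuvre, expected_casualties, choose_manoeuvre) is hypothetical, as are the numbers.

```python
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    """One option available to the car (hypothetical illustration)."""
    name: str
    expected_casualties: float  # estimated number of people harmed

def choose_manoeuvre(options: list[Manoeuvre]) -> Manoeuvre:
    """Pick whichever option minimises expected harm.

    This encodes a purely utilitarian rule - which is precisely the
    moral judgement someone has to make explicit before shipping.
    """
    return min(options, key=lambda m: m.expected_casualties)

# The scenario from the opening paragraph, with made-up estimates:
options = [
    Manoeuvre("brake and hit the child", expected_casualties=1.0),
    Manoeuvre("swerve onto the pavement", expected_casualties=2.5),
]
print(choose_manoeuvre(options).name)  # -> brake and hit the child
```

The point of the sketch is that the hard part isn’t the code – it’s three lines long – but the values buried inside it: who decided that minimising casualties is the right rule, and who signed off on those probability estimates?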

Let’s say you’ve managed to solve the problem – you’ve essentially programmed a sense of morality into an inanimate object. You now need to make sure the car takes that action 100% of the time, and that means getting your software into every single autonomous car on the planet. The consequences of not doing so are dire: we could end up with one group of manufacturers whose cars would avoid the child and another whose cars wouldn’t, and that’s before you consider the differing laws of countries and territories. I have no doubt legislation would pass very quickly to ensure everyone uses the same software – netting the company that creates it billions of pounds.