SELF-DRIVING cars are on their way, whether we like it or not.

At this point, I wouldn’t trust such technology enough to be regularly driven around by computers and sensors, no matter how sophisticated the designers claim they are. Is this a rational decision? I like to think so, yet 90 per cent of all car accidents involve driver error, and the statistics on self-driving cars show a much lower error rate, so my decision is perhaps more intuitive than logical or rational. That said, we have very few statistics to go on for these cars, so it’s neither fair nor valid to compare the two.

Even though there are very few self-driving cars on the road, there have been fatalities. The first, in 2016, happened in the USA when a truck exiting a highway suddenly turned into the path of a self-drive car. Neither the car’s radar nor the driver saw the truck; the driver was reportedly watching a Harry Potter movie. A more recent, tragic fatality involved a cyclist crossing in front of a car in self-drive mode at night. Self-driving test cars and production cars have also been involved in many minor collisions – no technology is infallible.

Is “self-driving” really the best way to describe these cars? The technology assists the driver rather than taking complete control, so calling them “self-driving” might be a big mistake. It could give some people the impression that they no longer need to pay attention, take responsibility or use common sense. I’d prefer such systems to be labelled “driver assistance”.

One of the oldest technologies that assists our driving is automatic gear changing. In the UK we have an affinity for manual gears, but in many countries automatic cars are the norm. When I drive overseas, I quite like an automatic: there’s enough to think about as it is, driving on the opposite side of the road, let alone shifting a gear stick with my right hand instead of my left. When I next change my car, I may well go down the automatic route.

Technologically, cars today are staggering compared with my first experience of driving 40 years ago: electronic handbrakes, anti-skid braking systems, satellite navigation, climate control, parking sensors, automatic lights and windscreen wipers. Each system uses sensors, and many require no driver input at all. We have cars that can park themselves, and some that automatically brake to prevent a collision if you get too close to the car in front. There are systems to alert you if you drift out of your lane on a motorway, and systems to prevent you driving with excess alcohol. Yet driver error is still the biggest cause of accidents. Logically, if we reduce driver error, which automated car systems should do, we should have fewer accidents.
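To make the collision-prevention point concrete, here is a minimal sketch, in Python, of the sort of time-to-collision rule such a braking system might apply. Everything in it is an assumption made for illustration – the function names, the 1.5-second threshold, the idea that a single distance reading suffices – and real systems, which fuse radar, camera and other sensor data, are far more sophisticated.

```python
# Toy sketch of automatic emergency braking logic. All names and
# thresholds here are hypothetical, chosen purely for illustration.

def time_to_collision(gap_m: float, closing_speed_ms: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_ms <= 0:  # not closing on the car ahead
        return float("inf")
    return gap_m / closing_speed_ms

def should_brake(gap_m: float, own_speed_ms: float, lead_speed_ms: float,
                 threshold_s: float = 1.5) -> bool:
    """Brake when projected time-to-collision drops below a threshold."""
    ttc = time_to_collision(gap_m, own_speed_ms - lead_speed_ms)
    return ttc < threshold_s

# 20 m behind a car doing 20 m/s while we do 30 m/s: time-to-collision
# is 2.0 s, above the 1.5 s threshold, so no braking yet.
print(should_brake(gap_m=20.0, own_speed_ms=30.0, lead_speed_ms=20.0))  # False
```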

Measuring how effective self-driving systems are versus manual driving is not that simple. To make such a judgement, we’d have to try to work out how many accidents the automated system prevented in the first place. How would we know whether, without the automated system, an accident would have happened anyway?
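Quite apart from that counterfactual problem, the small number of automated miles driven so far makes any rate comparison statistically shaky. A rough back-of-the-envelope sketch makes the point; every figure below is invented purely for illustration, and the interval calculation is a deliberately crude Poisson approximation.

```python
import math

# Hypothetical figures, invented purely for illustration -- not real statistics.
human_accidents, human_miles = 6_000_000, 3_000_000_000_000  # enormous sample
auto_accidents, auto_miles = 3, 2_000_000                    # tiny sample

def rate_per_million(accidents: float, miles: float) -> float:
    """Accidents per million miles driven."""
    return accidents / miles * 1_000_000

def rough_interval(accidents: int, miles: float) -> tuple[float, float]:
    """Crude Poisson 95% interval on the rate: count +/- 1.96 * sqrt(count)."""
    spread = 1.96 * math.sqrt(accidents)
    return (rate_per_million(max(0.0, accidents - spread), miles),
            rate_per_million(accidents + spread, miles))

print(rate_per_million(human_accidents, human_miles),
      rough_interval(human_accidents, human_miles))  # ~2.0, very tight interval
print(rate_per_million(auto_accidents, auto_miles),
      rough_interval(auto_accidents, auto_miles))    # 1.5, interval ~0 to ~3.2
```

With only a handful of automated accidents on record, the interval straddles the human rate entirely: the same data are consistent with the automated cars being both far safer and far more dangerous than human drivers.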

The best drivers in the world make mistakes, so it’s no surprise that mere mortals on our busy roads will get things wrong. Watching Formula 1 racing recently, and seeing how the drivers make split-second decisions, made me wonder: what are the limits of driver responses? Formula 1 drivers are highly trained athletes, mentally and physically at the top of their game. They make hundreds, perhaps thousands, of decisions each race. The steering wheel is less “wheel”, more control centre. Yet they still make mistakes and collide, spin or crash out of races.

People of a certain age will recall KITT, the Knight Rider car with a dashboard that lit up like a Christmas tree. The car responded to voice commands, and we have that now too, perhaps not in all our cars (yet), but at home: voice-activated speakers that will tell you what’s happening, play your favourite music, even turn your heating and lights on and off. Such technology is finding its way into everyday cars. Will we at last have real-life KITTs? Sadly not.

Computers make decisions based on sensors and algorithms; they respond to logic, and humans are frequently illogical. We often make illogical decisions, yet sometimes they turn out to be correct. When that happens, we tend to call it intuition, and computers don’t do intuition. So, are computers intelligent? It depends on how you define intelligence – computers can certainly mimic intelligence, but whether true artificial intelligence has yet been achieved is disputed. Whether we’ll ever get computers with artificial intuition, I have no idea, but I suspect that’s what will be needed for a self-driving car that doesn’t just assist the driver but takes over completely.