28th Jan, 2022
Self-driving cars are a (still) upcoming transport innovation that’s been a long time in the making. And after the results of a new joint report by the Law Commissions of England and Wales, and of Scotland, it’s fair to say quite a lot of people would like them to remain in the making for a little while longer. If you’re thinking: I might actually scrap my car for a self-driving model, it’s worth bearing in mind that the law surrounding them right now can still be a bit of a headache.
Alright, let’s start at the beginning. A few years ago we explained the biggest problem on the horizon with self-driving cars, namely that when something goes wrong, the legal implications could be incredibly complex. In a nutshell: when a self-driving car is involved in an accident, who’s to blame? The occupant of the car? The manufacturer? Or the software developer who programmed it? (Remember, the latter two aren’t always the same entity.) Or maybe several of the above?
Well, that very question was put to the Law Commissions: statutory bodies that keep the laws of England, Wales and Scotland under review, recommending reforms where necessary. They’ve come to several conclusions in their report, the most prominent (and arguably most controversial) of which is that the users-in-charge of self-driving cars shouldn’t be held responsible in the event of an accident. So that’s the headline – let’s unpack it a bit.
One of the first things that the report is keen to make clear is that the person in the driving seat of a driverless car wouldn’t be classed as a ‘driver’ as such. (The term becomes legally ambiguous in this context, for obvious reasons.) Instead, the person normally thought of as a driver would become a ‘user in charge’. It sounds like a small change, but it involves some quite seismic legal implications. For starters, it means that if something goes wrong with the car, it’s the manufacturer of the vehicle who’s held responsible, rather than the ‘driver’ of the driverless car.
But as you may already know, self-driving cars are classified into one of six levels (0 to 5), depending on the degree of autonomy their technology allows them to achieve. That’s probably going to lead to some understandable confusion amongst drivers as to what constitutes a self-driving car, especially given the marketing strategies that carmakers are likely to employ.
To try to minimise that sort of confusion, the Law Commissions recommend putting the onus on carmakers to be “extremely clear” about the difference between self-driving features and driver-assistance features. In other words, carmakers will have a responsibility to make it clear whether a car is autonomous or not.
The choice is binary – they’re not to grade their products on a sliding scale. If any sort of human intervention is required, then the law considers the car not to be autonomous, in which case all the current driving laws still apply. (For example, if the user-in-charge is required to intervene for safety in inclement weather such as heavy rain, or snow and ice.)
The car is only considered autonomous when it can operate safely even if the individual isn’t actively monitoring the driving environment, the vehicle itself, or the way it drives.
This stipulation is intended to tackle the issue of human passivity. Basically, scientific research shows that people find it far more difficult to monitor a task passively when their intervention may be required only sporadically, if at all. We’re much better at monitoring something that requires us to be fully engaged, such as when we’re in full control of the vehicle ourselves.
The UK’s Law Commissions haven’t rushed to these conclusions – they’ve been pondering them since 2018, when the government asked them to begin work on a series of reports on the regulatory framework for self-driving cars, and the best way to manage their rollout on public roads.
The latest report’s key recommendations include:
• A user-in-charge cannot be prosecuted for offences arising directly from the task of driving, such as speeding, dangerous driving, or running a red light
• However, users-in-charge remain legally culpable for other tasks, such as making sure the vehicle is taxed and insured, and that everyone inside is wearing their seatbelts
• Some vehicles may attain legal permission to drive themselves with nobody in the driving seat, as long as there is a licensed operator present to oversee the journey
• Following any collision, data to understand fault and liability must be made accessible to authorities
• Potential sanctions for carmakers who deliberately withhold information about their systems, or otherwise fail to make clear how they work
We’ve already touched on one of the key priorities for the Law Commissions right now – making sure that car manufacturers use specific terminology when marketing their systems, so that motorists aren’t under any illusions as to what their car is capable of, and what their own legal responsibilities are.
The vehicles permitted to drive themselves without a user-in-charge would probably be public transport vehicles, with numerous occupants. However, obviously it’s not the passengers who would oversee their progress – that job would be given to a licensed operator, who would need to be on board the vehicle for it to legally operate without a driver.
It might all sound a little niche now, but it’s all going to become substantially more relevant over the course of the next 12 months, when we’re likely to see the first iterations of self-driving features on UK vehicles. We’ve already hit quite a big milestone recently; last year, the Department for Transport gave the go-ahead for Automated Lane Keeping Systems. That’s the first type of hands-free driving system to be legalised in the UK. The person in the driving seat doesn’t need to monitor the road or keep their hands on the wheel, but they need to be ready to take back control within the space of 10 seconds when they’re prompted to by the system.
As you might expect, the recommendations in the report haven’t been universally welcomed. The issue of ethics and robotics has long been a mainstay of scientific discourse, but it’s set to take on a very real dimension with the widespread launch of self-driving cars. One of the biggest ethical issues can be summed up by a thought experiment you might be familiar with – the trolley problem.
We won’t delve into a thesis, but here it is in summary: there’s a runaway train (or trolley) on the tracks, and it’s headed straight for five innocent, unsuspecting people. However, you and you alone have the power to change its course, by pulling a lever to switch the tracks. The problem is that doing so would lead to the death of another person. So the question is: would you kill one person to save five?
We think we might know your answer. But alternatively, what if there’s only one person on each track? How do you weigh the value of one human life against another? It’s a conundrum you’ll be familiar with if you studied Philosophy and Ethics. Or failing that, watched I, Robot starring Will Smith back in 2004.
You’ll have your own responses to these questions, but the issue at hand here is that these are exactly the questions that, at some point, are likely to be asked of the AI in a self-driving car. Human drivers react instinctively. Artificial intelligences always make a decision. (In case you’re wondering, in this context taking no action constitutes making a decision.)
Now, the trolley problem is a thought experiment. There are no right answers, and no real consequences. The same is not true of self-driving cars in the real world. And if your relative is killed in a car accident, it’s one thing to find out that they died in an impossible situation, where drivers are suddenly forced to rely on pure instinct, with no time to make conscious choices.
It’s altogether a different kettle of fish to find that the loss of your loved one was the result of a calculated mathematical decision; the computer in control weighed up the life of your friend or family member against the life of someone else, and it made the decision to save the stranger, not the person you love.
These are not necessarily purely hypothetical situations. Tesla, the manufacturer virtually synonymous with self-driving technology, has already found its creations at the centre of several fatal accidents in the US. It’s overwhelmingly likely that the stipulation above about sanctions for reticent carmakers is at least partially down to Tesla, which was notably reluctant to share information about its systems in the wake of these incidents.
Back here in the UK, the Department for Transport didn’t get its original wish, which was to get self-driving cars fully on the roads by 2021. Now the technology is accelerating, but there’s still a sizeable level of scepticism amongst the general public. And ultimately, it doesn’t matter how well they’re legislated, or what the technology is capable of – if the public isn’t convinced enough to buy them, then self-driving cars are going nowhere.
Now, it’s worth repeating that here at Scrap Car Network, we’re not experts in advanced car tech or legislation, so most of the above constitutes personal musings rather than specialist commentary. We’ll tell you what we are experts in, though; and that’s getting you the best price when you scrap your car with us. It only takes 10 seconds to find out how much your car is worth – just enter your car reg and postcode into the fields on our site, and we’ll get you your very own instant scrap car quote!