Driverless Car Safety: Protocols, Safety Rules, and Generations
If a person kills you, you are murdered, and that person is a murderer. That's how we've always seen it, right? But if a machine kills you, what happens then?
Are you killed in an accident?
Who is going to be responsible?
Will the person operating the machine stand trial?
What if there was no one operating that machine? In such a case, who is the killer? Will the machine be tried in court?
(Image source: Patently Apple)
These questions will become more and more prevalent with driverless cars coming to our streets. However, the biggest question still is: Who is the killer?
History of Driverless Cars
The concept of driverless cars isn't new; they have been in testing for decades. Japanese researchers were among the first to pursue the idea, building a camera-guided prototype in the 1970s that used analog signal processing to control the steering and the acceleration pedal.
However, due to the lack of computational power, these tests couldn't succeed. Next came the Americans. During the Cold War, the Defense Advanced Research Projects Agency (DARPA) wanted autonomous vehicles that could carry out espionage operations in unexplored territories. These efforts also fell short because object-recognition systems were not advanced enough.
Later, the US National Highway Traffic Safety Administration (NHTSA) started testing automated cars to reduce accidents, especially on highways. It succeeded in automating cars on straight highways through sensors both on the cars and on the roads.
In the 2000s, Google and other tech companies emerged with supercomputing power. They had the resources and the budget to improve the technology. By this time, neural technology had also become widespread, and companies were ambitious to explore machine learning and artificial intelligence-based techniques.
This was the start of driverless cars. Now, around 20 years later, we have the fourth generation of driverless cars available. Most of them don't even have a steering wheel, and they are approved by the US National Highway Traffic Safety Administration (NHTSA).
Protocols of Driverless Cars
So far, there are no universal protocols for driverless cars. Experts, however, think that Asimov's Three Laws of Robotics will serve as the universal laws for driverless car safety.
However, specific questions still need answers. Let's look at each of them.
1. Responsible Robot
First things first: a driverless car (a robot car) needs to be responsible. How do we know it is responsible? This is derived from Asimov's first law of robotics, which says that a robot may not injure a human being through its actions, nor, through inaction, allow a human being to come to harm. If a robot demonstrably values human life, it can be considered responsible.
2. Life vs. Object
Then there is the question of life vs. objects. The best way to explain this is with the small-child example. Consider this: the driverless car is moving at speed when a little boy suddenly steps in front of it. Will the car jump the curb, or will it hit the child?
Engineers say the car will make the decision based on its 'confidence level.' If it is highly confident that the object is a living thing, it will jump the curb; if it isn't, it will hit the object. Trials are currently in progress to raise the confidence levels of driverless cars.
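The 'confidence level' logic engineers describe can be pictured as a simple threshold check. The sketch below is purely illustrative: the classifier output, the threshold value, and the maneuver names are all assumptions, not a real vehicle API.

```python
# Hypothetical sketch of a confidence-threshold decision.
# The label, threshold, and maneuvers are invented for illustration.

LIVING_THING_CONFIDENCE_THRESHOLD = 0.95  # assumed cutoff

def choose_maneuver(object_label: str, confidence: float) -> str:
    """Swerve only when the system is highly confident the obstacle is alive."""
    if object_label == "living_thing" and confidence >= LIVING_THING_CONFIDENCE_THRESHOLD:
        return "jump_curb"   # prioritize the pedestrian's life
    return "stay_in_lane"    # treat the obstacle as an inanimate object

print(choose_maneuver("living_thing", 0.98))  # high confidence: swerves
print(choose_maneuver("living_thing", 0.60))  # low confidence: stays in lane
```

The point of the trials mentioned above is, in effect, to push real classifier confidence high enough that this kind of rule errs on the side of human life.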
3. Low-income areas vs. High-income areas
Another problem for driverless cars is road conditions. Not all roads are fully paved; not all roads have signals; not all roads have railings. This raises the question of low-income versus high-income areas. Most high-income areas are fully developed, while most low-income neighborhoods are not, so the chances of accidents and wrong decisions there are higher for driverless cars.
4. Economic Damage vs. Human Damage
Driverless cars face another issue: what should they do in a scenario where both human damage and economic damage are possible? While Asimov's rule clearly says that human life is more valuable, one should remember that these driverless cars will be private property.
Therefore, the owners of these cars could change the protocols as they wish. Otherwise, why would they buy a driverless car in the first place?
Consider this: the driverless car can either hit the side railing or an oncoming, human-driven car. What should it do? Hitting the railing causes economic damage to itself; hitting the other car endangers a human being.
The RAND global think tank has proposed a framework that allows AI decisions to be tested in simulators. Each test can be run in a split second under customized conditions, and the decisions can be refined to make driverless cars more secure in the future.
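A testing framework of this kind can be imagined as a loop that replays many randomized scenarios against a decision policy and tallies how often the policy protects human life. The following toy sketch is an assumption-laden illustration of that idea, not RAND's actual framework; the scenario fields and the policy are invented.

```python
import random

# Toy simulator sketch: replay randomized scenarios against a decision
# policy and measure how often humans are protected. All names invented.

def policy(scenario: dict) -> str:
    # Prefer self-damage (hit the railing) whenever a human is at risk.
    if scenario["human_at_risk"]:
        return "hit_railing"
    return "continue"

def run_trials(n: int, seed: int = 42) -> float:
    """Return the fraction of human-at-risk scenarios where the policy
    chose to protect the human."""
    rng = random.Random(seed)
    protected = 0
    total_risky = 0
    for _ in range(n):
        scenario = {"human_at_risk": rng.random() < 0.5}
        if scenario["human_at_risk"]:
            total_risky += 1
            if policy(scenario) == "hit_railing":
                protected += 1
    return protected / total_risky if total_risky else 1.0

print(run_trials(10_000))
```

Because the toy policy always sacrifices the railing, it scores perfectly; in a real framework, the interesting work is in scoring imperfect policies across far richer scenario distributions.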
Once the system is refined, it can be rolled out to multiple driverless cars by installing a single chip. The same practice applies to SUVs: Saxtons 4×4, one of the biggest 4×4 dealers in the UK, is working on importing smart SUVs into the market.
However, not all driving conditions are the same, and the testing scenarios can change at times. In such cases, the necessary protocols will be defined for each driverless car by the National Highway Traffic Safety Administration (NHTSA). These will become the baseline for human safety.
Generations of Driverless Cars
Driverless cars span five generations, each with a different range of capabilities. A 1st-generation car has little to no driverless features; these are the manual and automatic cars people drive on the roads today. 2nd-, 3rd-, and 4th-generation driverless cars have a growing degree of autonomy, with more driverless features at each step.
Each generation will have different protocols and rules. Generation 5 driverless cars are 'fully automated and driven without human intervention.' Most companies aim to develop these cars within the next decade.
Level 2 and Level 3 cars, roughly the 2nd and 3rd generations, are already on the roads, and most of them are built by Tesla, GM, Volvo, and Mercedes.
Which Generation Is Safer?
Fifth-generation cars, also called Level 5 cars, are the safest on the roads. Google and Amazon are testing these fifth-generation cars to make sure they remain safe in the long term. Most have defined protocols embedded in their systems, with a particular focus on human safety.
However, even fifth-generation cars have problems. As Elon Musk said in a tweet, "intersections with lots of traffic lights and shopping mall parking lots are among the technology's biggest challenges."
He also mentions that even with the autopilot feature enabled, driverless cars will still require an attentive driver to take control in case the car fails to handle the vehicle. This is quite similar to what airline pilots do on long journeys: they set the routes, add stops and destinations, and the planes fly accordingly.
Verdict: Are Driverless Cars Safe?
I would echo what Elon Musk says: driverless cars are as safe as the attentive drivers sitting behind the wheel. Right now, our driverless cars are not yet smart enough to make every decision on their own. They can drive on highways, but in areas with dense pedestrian traffic, they are more likely to fail.
Now the second question: if a driverless car hits a person, even by accident, who is responsible? The answer is simple. If the safety protocols are ingrained in the system, the manufacturer is responsible. However, if the owner altered the system, or was behind the wheel when the incident occurred, then the responsibility is theirs.
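The liability rule just described is essentially a short decision procedure, and it can be sketched as such. The function below is a toy restatement of the article's rule; the party labels and the unhandled edge case are illustrative assumptions, not legal advice.

```python
# Toy sketch of the liability rule described above; labels are illustrative.

def responsible_party(protocols_intact: bool, owner_altered: bool,
                      owner_was_driving: bool) -> str:
    """Assign liability per the article's simple rule."""
    if owner_altered or owner_was_driving:
        return "owner"          # owner tampered with or operated the car
    if protocols_intact:
        return "manufacturer"   # stock safety protocols were in control
    return "undetermined"       # edge case the article does not address

print(responsible_party(True, False, False))  # stock car, no driver: manufacturer
print(responsible_party(True, True, False))   # altered protocols: owner
```

Real liability will of course be decided by courts and regulators, but the sketch shows how cleanly the article's rule partitions the cases, and how quickly an unaddressed edge case appears.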
About Michelle Joe: Michelle Joe is a blogger by choice. She loves to discover the world around her, and she likes to share her discoveries and experiences and express herself through her blogs. You can find her on Twitter, LinkedIn, and Facebook.