(Pictured: You see kids, when two self-driving vehicles fall in love they make robot-babies…. and those robot-babies grow up to become our FUTURE ROBOT OVERLORDS. But hey, Tesla cars are cool!)
When someone breaks a law, they generally get in trouble – that’s just the way the world works. Systems of law have governed the conduct of nations and their citizens for as long as civilizations have existed. With the technology renaissance of the 21st century, a new form of life has been participating in society, and just like humans, some of them have begun to break laws. We are talking about – of course – robots.
In 2015, a self-driving car created by Google was operating in the area around the company’s headquarters. The driverless vehicle had been on a road near the company campus when it ended up getting pulled over – for going 10 miles an hour below the speed limit! In effect, it had been violating traffic laws by holding up traffic without a good reason.
News outlets report that the officer, after discovering that there was no one driving the car, contacted the operators responsible for programming the vehicle. The same reports note that several cars in the Google self-driving fleet have been in accidents, although none of the accidents have been found to be the self-driving car’s fault. The incident came at a time when many people were already questioning the safety of self-driving vehicles.
For many years now, electric car manufacturer Tesla has touted its self-driving initiative and claimed that it is much safer than human driving. Tesla’s critics are still skeptical of the company’s self-driving vehicles’ capacity to be safe at all, pointing to a National Transportation Safety Board (NTSB) investigation of a fatal 2018 accident that involved a self-driving Tesla. In that incident, the car’s driver lost his life; instead of steering the car, the ‘driver’ had handed control over to the vehicle’s self-driving computer when the accident took place. The NTSB report found that the Tesla had acted in a way intended to warn the driver, yet the vehicle still ended up crashing. Many skeptics of self-driving vehicles point to that crash in particular as a reason that driverless cars and trucks should not be allowed on the roadways.
Instances like the fatal Tesla crash and the slow self-driving Google car provide examples of a significant issue that goes beyond just general safety – one that will become more relevant as technology continues to evolve:
So, when a robot or AI commits a crime that puts humans or other lives in danger, who is responsible in a legal sense?
Technology has developed, and will continue to develop, at a rate that the law can’t keep up with. If robots are used to make work easier in a factory, for example, who is responsible when a ‘thinking’ machine harms a human worker in a workplace accident – the programmer, the actual ‘code’ of the machine, the company, or someone (or something) else entirely? As companies, manufacturers, and even governments begin to push for automated processes like self-driving vehicles that rely on robots, AI, and other ‘driverless’ technologies, we will all be faced with these questions.
Questions:
1: Who should be held civilly and/or criminally accountable when a robot is involved in a crime? Explain your choice(s).
2: What laws should be put in place to ensure that the right entities (people / companies / robots) are held accountable for Artificial Intelligence-related problems, and how will they help prevent or address those problems?
3: What are some things that a robot could do to break a law?
4: Should a person who built a robot that commits a crime be charged at the same level as someone who contributed to or aided in the crime? Explain why you think they should or should not.
Be sure to explain the thinking behind your answers, and for more details, you can read the articles this piece was sourced from here:
Contributed By: Joseph Motta