Morals and the machine
Originally, I was supposed to put this interview at the end of my design fiction movie, but I decided not to, because I think speculative design needs to leave something open for people to think about.
Even so, the interview is really interesting and closely related to my movie.
In fact, it touches on the very point my design fiction is trying to make.
The main point of my film's story is a question: what if a drone makes a decision by itself?
I think we have to discuss that problem now, because robots are becoming autonomous: drones, pilotless jet planes, driverless cars.
These are spreading around the world; in particular, Google's driverless cars have clocked up more than 250,000 miles in America. That means we could face the problem in the not too distant future. Robots will live alongside us. As they become smarter and more widespread, autonomous machines are bound to end up making life-or-death decisions in unpredictable situations, thus assuming moral agency. That is why we need to start planning for this now.
If that happens, automated robots will be presented with ethical dilemmas. Should a driverless car swerve to avoid pedestrians if that means hitting other vehicles or endangering its occupants? Should a robot involved in disaster recovery tell people the truth about what is happening if that risks causing a panic? Such questions have led to the emergence of the field of "machine ethics", which aims to give machines the ability to make such choices appropriately—in other words, to tell right from wrong.
Therefore, society needs to develop ways of dealing with the ethics of robotics—and get going fast. In America, states have been scrambling to pass laws covering driverless cars, which have been operating in a legal grey area as the technology runs ahead of legislation. It is clear that rules of the road are required in this difficult area, and not just for robots with wheels.