As robots become more and more involved in our daily lives, a question arises: what code of ethics should these robots follow? This is especially relevant for truly autonomous robots, those tasked with making their own decisions. Robot ethics, or roboethics, is a field of research that aims to understand the ethical implications of robotics and answer that question for us all. Researchers from diverse fields such as robotics, computer science, psychology, law, and philosophy are joining forces to seek an answer. Initially, the main focus was on military robots, since they can use lethal force, but the field has since expanded to all kinds of robots, especially those that interact with humans.
On ethics and roboethics
As we all know, ethics is the branch of philosophy that studies human behavior and moral conduct: the concepts of good and evil, right and wrong, justice and injustice, and so on. Roboethics then comes into the picture as the ethical reflection on the moral dilemmas and policies generated by the development of robotic applications.
Roboethics, short for robot ethics, deals with the set of rules that robot design engineers must implement and follow while working on a robot’s artificial intelligence. With this code of conduct, roboticists must guarantee that autonomous systems exhibit ethically acceptable behavior in situations where robots or autonomous vehicles interact with humans.
Laws of Roboethics
According to the International Federation of Robotics’ 2017 World Robot Statistics, the average global robot density in manufacturing industries in 2016 was 74 robot units per 10,000 human workers, significantly higher than the 66 units per 10,000 reported in 2015. This makes it even more critical that the ethics of how robots are used be clarified. One way to tackle the issue is with the three laws of robotics that writer Isaac Asimov formulated in 1942. Although written for a work of fiction set in a time when robots could think for themselves, some people believe these laws can be applied to real-world robotics. They are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.
- A robot must protect its own existence, except where doing so would conflict with the first or second law.
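Because the three laws form a strict priority ordering (the first overrides the second, which overrides the third), they can be sketched as a simple rule check. The snippet below is only an illustrative toy, not a real ethical framework: the `Action` type and its boolean flags are invented for this example, and real-world ethics obviously cannot be reduced to three booleans.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical candidate action a robot might take (illustrative only)."""
    name: str
    harms_human: bool = False       # would injure a human, directly or by inaction
    ordered_by_human: bool = False  # was commanded by a human
    endangers_self: bool = False    # risks the robot's own existence

def permitted(action: Action) -> bool:
    """Check an action against the three laws, in priority order."""
    # First law: never harm a human -- highest priority, overrides everything.
    if action.harms_human:
        return False
    # Second law: obey human orders (any harmful order was already rejected above).
    if action.ordered_by_human:
        return True
    # Third law: self-preservation applies only when no higher law is at stake.
    return not action.endangers_self

# A harmful order is refused even though a human gave it:
print(permitted(Action("push bystander", harms_human=True, ordered_by_human=True)))  # False
```

Note how the order of the `if` statements encodes the precedence of the laws: each check is reached only if every higher-priority law has already been satisfied.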
The need to follow these laws becomes more pressing as robots grow more autonomous and AI, in many ways, exceeds human capabilities, be they physical or mental. Many futurists and technology experts, such as Elon Musk, Stephen Hawking, and Steve Wozniak, have expressed concern that if we don’t control robots, they could directly lead to humanity’s downfall. Optimists, on the other hand, hope that carefully designed robots could help the world recover from many human-made problems.
On topics as sensitive as life-and-death decisions, for example the use of robots as weapons, the ethical issues of war and autonomous robots have been widely debated, including the principled objection that robots should not be allowed to make life-and-death decisions about human beings in lethal warfare. In the end, roboethics leaves us with more questions than answers. Only time will tell how the field evolves with the advent of more and more robots.