Artificial Intelligence – article analysis

AIs are always depicted as “going bad” in movies and pop culture (back when AI was still science fiction). Now we actually have to deal with the reality of AI. Can we teach them right and wrong, good and bad? Many ethical dilemmas come with AI technology.

  • U.S. military building autonomous vehicles
  • Newest ethical dilemma: Will humans allow their weapons to pull the trigger on their own without human oversight?
  • 2018 – U.S. military’s Long Range Anti-Ship Missile (LRASM), which can strike enemy ships autonomously. Makes its own decisions on flight path and target.

Is it okay for robots to kill humans?

Movies: 2001: A Space Odyssey. I, Robot. The Terminator.

Robots becoming more human-like. Key aspect of being human is morality. Can robots become moral agents?

Teaching Robots Right From Wrong 

In science fiction, the moment at which a robot gains sentience is typically the moment at which we believe that we have ethical obligations toward our creations. An iPhone or a laptop may be inscrutably complex compared with a hammer or a spade, but each object belongs to the same category: tools. And yet, as robots begin to gain the semblance of emotions, as they begin to behave like human beings, and learn and adopt our cultural and social values, perhaps the old stories need revisiting. At the very least, we have a moral obligation to figure out what to teach our machines about the best way in which to live in the world. Once we’ve done that, we may well feel compelled to reconsider how we treat them.

If AIs are able to learn right from wrong, they’d have to do it through us, by mimicking humans. Humans have an innate sense of morality, some level of ability to make those decisions. A machine doesn’t have that, so we have to help the machine.

One proposed method for “robot morality”: teach them like children, as “blank slates” (a toy sketch of this follows below). BUT are we blank slates?
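A minimal sketch of what this “blank slate” approach might look like in practice, assuming morality could be reduced to mimicking human-labeled judgments (the scenarios, features, and labels below are entirely hypothetical):

```python
# A toy "blank slate" moral learner: it has no built-in values and simply
# mimics whatever judgments its human teachers provide as labeled examples.
# Hypothetical scenario features: (harm_caused, consent_given, benefit_to_others)

from math import dist

# Human-labeled training examples: feature vector -> "right" or "wrong"
human_judgments = [
    ((0.9, 0.0, 0.1), "wrong"),   # high harm, no consent, little benefit
    ((0.1, 1.0, 0.8), "right"),   # low harm, consent, high benefit
    ((0.5, 0.0, 0.9), "wrong"),   # harm without consent, even if beneficial
    ((0.0, 1.0, 0.2), "right"),   # harmless and consensual
]

def judge(scenario):
    """Label a new scenario by copying the nearest human-labeled example."""
    _, label = min(human_judgments, key=lambda ex: dist(ex[0], scenario))
    return label

# The machine's "morality" is only as good as the examples it was given.
print(judge((0.8, 0.0, 0.4)))   # -> "wrong"
print(judge((0.05, 1.0, 0.6)))  # -> "right"
```

The point of the note above stands even in this toy version: the learner has no innate sense of right and wrong; it only reflects back whatever judgments we put in.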

Ethics can’t simply be taught as facts; it’s tacit knowledge (like driving a car): more than facts and experiences, it’s putting them into practice.

The article assumes that ethics is all about behavior. Knowing right from wrong is different from doing right or wrong.

What are the consequences of humans having to teach/input ethics into a machine?

Morality is based on values (value of objects, property, life, etc.). How are those values determined?

  • Utilitarianism – decisions based on the greatest good for the greatest number of people. Problem – You can never figure out how a particular event or person’s life is going to unfold. If you have to choose between saving a group of people and saving one, how do you know that the one person won’t be the one to come up with a cure for cancer, or start an orphanage, etc.? You can’t calculate what will happen down the road (see the toy calculation below).
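
A naive version of the utilitarian calculation described above might look like the following sketch (the outcomes, probabilities, and utility numbers are invented purely for illustration); it also shows why the objection bites: the “greatest good” answer depends entirely on guesses about futures no one can actually calculate.

```python
# A toy expected-utility calculator for the "save the group vs. save the one"
# dilemma. All numbers are made up; the point is that the answer hinges on
# guesses about how lives will unfold.

def expected_utility(outcomes):
    """Sum of (probability * utility) over the guessed futures of a choice."""
    return sum(p * u for p, u in outcomes)

# Choice A: save the group of five (guessed futures and utilities)
save_the_group = [(1.0, 5.0)]                   # five ordinary lives continue

# Choice B: save the one person (who *might* go on to cure cancer)
save_the_one = [(0.999, 1.0), (0.001, 10_000)]  # tiny chance of enormous good

print(expected_utility(save_the_group))  # 5.0
print(expected_utility(save_the_one))    # ~11.0 -- the "right" answer flips
```

Tweak the guessed probability of the cure and the calculation flips again, which is exactly the problem noted above.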

When Your Boss Wears Metal Pants

AIs can be good! The problem is when a society that doesn’t have a firm foundation of what a human being is becomes willing to give up our humanity to a machine that merely mimics it, granting machines human value. That doesn’t actually give them value; it removes our value. (The same thing happens when animals are given as much value as humans.) Question to ask: How is this affecting our humanity?

Myth of Narcissus – he looks into a pool at his reflection, falls in love with himself, and it ultimately leads to his death. This is where we are headed with technology: we’ve made it reflect ourselves in such a way that it is now taking the place of human interaction. We’re “falling in love” with an imitation of humanity, consumed with ourselves through our technology.