Artificial Intelligence – article analysis

AIs are always depicted as “going bad” in movies and pop culture (back when AI was still science fiction). Now we actually have to deal with the reality of AI. Can we teach machines right and wrong, good and bad? Many ethical dilemmas come with AI technology.

  • U.S. military building autonomous vehicles
  • Newest ethical dilemma: Will humans allow their weapons to pull the trigger on their own without human oversight?
  • 2018 – U.S. military’s Long Range Anti-Ship Missile (LRASM), which can strike enemy targets autonomously. Makes its own decisions about flight path and target.

Is it okay for robots to kill humans?

Movies: 2001: A Space Odyssey. I, Robot. The Terminator.

Robots becoming more human-like. Key aspect of being human is morality. Can robots become moral agents?

Teaching Robots Right From Wrong 

In science fiction, the moment at which a robot gains sentience is typically the moment at which we believe that we have ethical obligations toward our creations. An iPhone or a laptop may be inscrutably complex compared with a hammer or a spade, but each object belongs to the same category: tools. And yet, as robots begin to gain the semblance of emotions, as they begin to behave like human beings, and learn and adopt our cultural and social values, perhaps the old stories need revisiting. At the very least, we have a moral obligation to figure out what to teach our machines about the best way in which to live in the world. Once we’ve done that, we may well feel compelled to reconsider how we treat them.

If AIs are able to learn right from wrong, they’d have to do it through us, by mimicking humans. Humans have an innate sense of morality, some level of ability to make those decisions. A machine doesn’t have that, so we have to help the machine.

One method for “robot morality”: teach them like children, as “blank slates.” BUT are we blank slates?

Ethics can’t simply be taught as facts; it is tacit knowledge (like driving a car). It is more than facts and experiences; it is putting them into practice.

The article assumes that ethics is all about behavior. Knowing right from wrong is different from doing right or wrong.

What are the consequences of humans having to teach/input ethics into a machine?

Morality is based on values (value of objects, property, life, etc.). How are those values determined?

  • Utilitarianism – decisions based on the greatest good for the greatest number of people. Problem: you can never figure out how a particular event or person’s life is going to unfold. If you have to choose between saving a group of people or saving one, how do you know the one person won’t be the one who comes up with a cure for cancer, or starts an orphanage, etc.? You can’t calculate what will happen down the road.

When Your Boss Wears Metal Pants

AIs can be good! The problem arises when a society without a firm foundation of what a human being is becomes willing to give up our humanity to a machine that mimics our humanity, granting it human value. That doesn’t actually give the machine value; it removes our value. (The same goes for granting animals as much value as humans.) Question to ask: How is this affecting our humanity?

Myth of Narcissus – he looks into a pool at his reflection, falls in love with himself, and it ultimately leads to his death. This is where we are headed with technology: we’ve made it reflect us in such a way that it is now taking the place of human interaction. We’re “falling in love” with an imitation of humanity, consumed with ourselves through our technology.

Close Reading and Analyzing Text

Close reading means reading for meaning and understanding. Follow these steps to perform your own close reading.

  1. To begin, read your passage slowly. You just want to get the general idea.
  2. Next, read the passage again; but this time you will be marking and writing on the page as you go. Look for the following things:
    • Vocabulary – Circle any vocabulary you are unfamiliar with. Can you guess what the word means by using the context? Look up the definitions if necessary.
    • Language choice – Underline any language that attracts your attention for any reason. Why do you find it interesting? What emotion does it evoke in you? Jot down your reasons.
    • Repetitions or patterns – Look for any words, phrases, or ideas that are repeated. Do you see any patterns? Mark them. If you have any ideas on why the author chose that pattern or repetition, make a note of it.
    • Questions you have – Note them down, and remember there is no such thing as a stupid question. Try to list more open questions than closed questions.
  3. Now, go back and read the passage a third time. This time, you are looking for answers to your questions. If you find something that answers your question, write the answer down next to the question you had. Also, if you have any general comment to make, now is the time to write it down as well.
  4. The final step of close reading is reflecting on what you read and learned.
    • Do you still have any questions that are unanswered?
    • What are your overall thoughts and opinions? Do you agree/disagree with the author?
    • What is one idea from the author that stood out to you the most? Why?
    • Can you make any connections to your own life? Does the passage remind you about your own experiences? Other books or films? What are the similarities?
  5. Be prepared to discuss your reflections, insights, and questions with the class!

Logical Fallacies

Short story: “Love is a Fallacy” by Max Shulman (written in the 1950s)

Short story “The Lottery” by Shirley Jackson

Key Vocabulary terms

  • Argument: A conclusion together with the premises that support it.
  • Premise: A reason offered as support for another claim.
  • Conclusion: A claim that is supported by a premise.
  • Valid: An argument whose conclusion follows necessarily from its premises.
  • Unsound: An argument that has at least one false premise.
  • Fallacy: An argument that relies upon faulty reasoning.
  • Booby-trap: An argument that, while not a fallacy itself, might lead an inattentive reader to commit a fallacy.


Handout 1

Sample Fallacies and Teacher’s Answer Guide



Playlist of logical fallacies

West Wing scene – “After it, therefore because of it” (post hoc, ergo propter hoc): don’t simply assume that because one thing follows another, the first thing caused the second. A classic scene from the second episode of The West Wing, where Jed Bartlet suggests that the reason he lost Texas was that Texans would never elect a president who spoke Latin.

Dodge Charger Ad – Slippery slope. It’s not logical to assume that self-parking cars will lead to robots harvesting our bodies.

South Park “You Hate Children” – False choice

Friends – Joey’s Fridge – post hoc

South Park – the Chewbacca Defense – red herring (language warning)

“Seven” campaign ad for Barack Obama – red herring. It starts by claiming McCain doesn’t understand the fundamentals of the economy, but then it diverts attention from the economy to how many houses he owns.

2:53-4:07 SNL’s “You Lie” – the bandwagon fallacy

0:39-0:46 Tom Cruise on Psychiatry – false authority. He claims to have done research, but he is not an authority on the subject.

Sony Commercial – appeal to false authority

Big Bang Theory’s Superman Argument

Big Bang Theory – post hoc ergo propter hoc

DirecTV commercials – slippery slope

Mean Girls – Hasty generalization?

Big Bang Theory – reductio ad absurdum




As a group activity, have students “sell” a product using as many fallacies as they can. Encourage students to go overboard here to make the fallacies as outrageous, and therefore as transparent, as possible. While (or after) each group presents, the other class members should try to identify the fallacies. An option is to keep score and award a prize to the “team” naming the most fallacies or naming them the fastest.