Node #2 Reflection and Summary

  • Impressions of AI: A Progression

(1) As we started talking about AI, my first thought was of an old website my siblings and I used to play on called Cleverbot. I believe this website would be considered a form of AI because, as Shane explains in the reading, it cannot itself understand human language, but it can learn from input. Cleverbot's responses are informed by what people type in to chat with it, and so over time its replies become more believably conversational rather than stiff and robotic. This example of AI technology had me leaning toward the "not taking AI super seriously" stance.

(2) As we progressed through the unit, however, the conversation about AI naturally became more complex. Our discussion of Black Software and the Alert II system made me more aware of the real-world dangers that can arise from AI technology (as opposed to the dangers portrayed in movies). A widely held perception of AI is that it is more accurate or scientific than humans are, but at the end of the day, humans are the ones programming this technology. AI is therefore susceptible to the biases and follies of humanity, so we shouldn't blindly trust that AI programming will be faultless. If anything, this discussion is a further argument against those who blindly praise AI technology without considering the many ways in which it still needs to be developed and improved.

(3) The "AI and Creativity" section of this node helped confirm this sentiment, but it also further complicated the overall discussion of AI. The part of the discussion involving collaboration between humans and AI was positive in that such collaboration allows new and interesting forms of art and creative expression to emerge. It is interesting to analyze how AI technology learns from input and, furthermore, how it interprets that input to produce output. Analyzing this process is more than just interesting, though; it is critical to an honest understanding of how the input/output process needs to be improved. The example of the messed-up cats that the AI still recognized as cats demonstrates this point. The AI-generated cats are funny only because the stakes of their interpretation are low. One of the points from this discussion that has stuck with me the most was: "So if this is what an AI thinks cats look like, what does the AI in a self-driving car think I look like when I walk down the sidewalk?" The ability of the AI in a self-driving car to accurately interpret input is incredibly high stakes. That is why it is dangerous to overstate the capabilities of AI today; doing so could have devastating consequences.

  • The Trolley Problem

These are my results from the Trolley Problem game. I felt absolutely horrible about all of my choices. In some cases, I thought it would be "better" to lose the fewest lives. But when that decision involved pushing someone onto the track who wasn't in harm's way to begin with, I couldn't do it. Additionally, when the single person was someone I cared about, I could not sacrifice them to save more people. If the choice were that I myself could die to save a greater number of people, I would take that in a heartbeat, but I could not choose to let my brother die to save strangers. The graphics made the game bearable in the sense that the depictions of humans were not detailed; I can't imagine how horrific the game would be if it were, say, a VR game. In the context of this unit, the game made me wonder what an AI system would choose each time. AI doesn't deal with the complication of having emotions, so would it be easier for it to, say, always choose to kill the fewest people no matter the circumstance? I suppose if the AI were like the one in the movie Colossus, it would just try to kill as many people as possible with the trolley.
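That purely utilitarian rule, always minimizing casualties regardless of circumstance, can be sketched as a trivial decision function. This is a hypothetical illustration of why such a rule feels so cold, not how any real system is implemented; the function name and inputs are my own invention.

```python
# Hypothetical sketch of a purely utilitarian rule: an automated system
# that always minimizes casualties, with no notion of emotional ties or
# of who was "already in harm's way".

def utilitarian_choice(casualties_if_stay: int, casualties_if_switch: int) -> str:
    """Return 'stay' or 'switch', whichever kills fewer people."""
    return "switch" if casualties_if_switch < casualties_if_stay else "stay"

# Five people on the main track, one on the side track: this rule
# always switches, even if that one person is your brother.
print(utilitarian_choice(5, 1))  # -> switch
print(utilitarian_choice(1, 5))  # -> stay
```

The rule never hesitates the way a person does, which is exactly what made the game so uncomfortable to imagine automating.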

The coding exercise involving inputting text was definitely challenging but very helpful in demonstrating collaboration between humans and technology. This exercise emphasized an earlier point about the responsibility humans have when submitting input into a system: the output created is going to reflect the input (as we saw with the Alert II system). This exercise was no different. If you were to include significantly more text from one author than from another, the resulting output would reflect the voice of the book the system had more information about.
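The point about unbalanced input can be sketched with a toy example. This is a minimal sketch of one possible mechanism, assuming a generator that samples words in proportion to how often they appear in the combined corpus; the class exercise itself may have worked differently, and the author vocabularies here are made up.

```python
import random
from collections import Counter

# Toy "text generator": sample words in proportion to how often they
# appear in the combined input. If author A contributes far more text
# than author B, the output is dominated by A's vocabulary.

author_a = "whale sea ship captain whale sea " * 50  # much more input
author_b = "moor heath ghost passion "               # very little input

counts = Counter((author_a + author_b).split())
words = list(counts)
weights = [counts[w] for w in words]

random.seed(0)
sample = random.choices(words, weights=weights, k=20)

# Nearly every sampled word comes from author A, simply because
# A supplied the overwhelming majority of the input.
print(" ".join(sample))
```

The system has no opinion about whose voice "should" win; the imbalance in the output is entirely a product of the imbalance in what the humans fed in.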

  • Ultimate conclusion: AI cannot and should not be seen as impervious to error! Improvements are necessary, and ignoring this fact has already produced negative consequences and will continue to do so.
