Monday, September 26, 2016

The Robots Are Coming


What first struck me while reading Monstrous Technologies was the line about our current societal state: “...a time in which humans are becoming increasingly intimate with technology, penetrated by and absorbed into the technological realm in an unprecedented manner” (Biles 148). I believe this anthology was published in 2013, and technology has only grown more pervasive since then. We use technology for a plethora of reasons: communication, connection, finances, planning, entertainment, and even dating, to name a few.

With all of our many uses for technology, it's no wonder we play with the idea of artificial intelligence and humanizing technology. But the idea of robotic humans, or specifically the Cylons mentioned in the text, is terrifying. We have no way of knowing how evolved these robots will become. If science fiction movies (and Will Smith) have taught us anything, it's that robots will likely try to overthrow humans. The text even mentions the possibility of an “‘apocalypse’: a death of the imperfect human coincident with a technological resurrection” (Biles 149). I don't know about you, but I'd rather not tempt fate. There's even a notion that the human mind can essentially be uploaded to a computer.



Although I understand the importance of inquiry, of exploration to further our minds and the human race, and of the need to preserve our knowledge, I have to wonder: at what cost? Do we really want to create technology that could eliminate us? When should we draw the line and heed the old adage that “curiosity killed the cat”? The human mind and its constant quest for answers is too large a fire to put out, which means it's almost inevitable that we will soon be playing with humanized robot technology on a mass scale (if we aren't already). The question remains: will we know when to stop and limit what we are producing? Or will we continue in the name of science? And if we do continue, how long before we start the process of eliminating the human race to advance technology? Is our curiosity worth destroying our humanity?


Sources:
http://www.imdb.com/title/tt0343818/mediaviewer/rm1679789824

Levina, Marina, and Diem-My T. Bui, eds. "Chapter 9: Monstrous Technologies." Monster Culture in the 21st Century: A Reader. N.p.: n.p., n.d. N. pag. Print.




6 comments:

  1. I find it interesting that when we think of AIs, we immediately think of the worst possible scenario. Like I have said before, we have two main goals: surviving and reproducing (although you can argue that reproducing is a form of survival). So of course the thought of machines reaching our levels of intelligence and cognition is super scary. We go into survival mode. We think of the worst scenario and try to find solutions and ways to prevent the destruction of the human race. We do this because, like you mentioned, society tells us to. Society tells us that we are the ones in control of the world and that anyone coming onto our turf should be destroyed immediately. The more I think about it, the more I realize that perhaps this will not happen. Society loves conflict. We would rather imagine an exciting fight between humans and AIs than a world where we simply get along with them. If (somehow, some way, hundreds of years from now, because we are nowhere near close to creating something so incredible) we do create such a thing, then maybe we can learn to co-exist. Perhaps we can even reach co-inherence and exist as innate components of each other, so that neither can survive without the other and we must preserve each other's existence. I would take a shot at a world with AIs; maybe nothing exciting happens and the human race just goes on and on.

    Replies
    1. I was going to write about our survival instincts too! For as long as we have known the world, we have been at the top of the food chain. We know nothing else. We have no way to conceive what it would be like not to be the "most evolved species." That thought is terrifying. To us, it would have to mean that the human race is wiped out completely. But what would happen if we created technology that is smarter and more capable than us, yet we were able to live together? Perhaps we're not the toughest guys on the block anymore, but does that mean we cannot co-exist? Are we so stubborn that either we remain at the top or we all have to die? Is there no middle ground? Karen says it perfectly; society loves conflict. Maybe if we make technology smarter than us, they'll figure out how to achieve peace, something humans have utterly failed to do.

  2. Hey Stephanie,

    I would really like to address the notion you mentioned in your blog post of uploading a human mind onto a computer. I, too, understand the importance of preserving our knowledge. But like you said, there will be a cost. To me, this cost would be losing the very knowledge we so desperately want to conserve. We talked in class about how the mind and body are one entity, although we tend to separate them. But think about it: what allows us to discover all of this IS our bodies, and not just the brain, which physiologically allows us to think. Part of being human is moving around, interacting with other humans, and experiencing life. To "upload" your mind and exist solely as a mind is not human, and, I think, it would change these computer "people"; they would not be themselves anymore.

  3. If I had to answer the question and say where to draw the line, it would most definitely be where we begin to give authority and/or self-consciousness to machines. When we abuse technology as a crutch for humanity, we become Dr. Frankenstein. Although the possibility of creating something even better than ourselves is thrilling, there are just some questions that should be left unanswered. Given the danger that we as humans pose to ourselves, why would we ever want to create a potential predator?

  4. Stephanie, I had the same reaction as you to the text. All I could think about was the end: when exactly will technology have the ability to take over? Just like in I, Robot. I like the movie reference you made, because that is exactly the type of scenario I had in mind. As technology becomes more advanced, I do think human dependence on it is going to grow even stronger than it is today. With that said, realistically, humans are the creators of technology, and technology cannot advance without humans.

  5. Your last section, with the questions lingering around human-like AIs, raised some good points. On one hand, humanity should not let fear dictate its actions too heavily; otherwise, we'll be at the mercy of whoever can quell or spread those fears. On the other hand, scientific endeavors without any limits have led humanity into some dark places, like experimentation on living beings and devastating weapons of war. A middle ground between heavy restrictions and no limits at all would be ideal, but without clear boundaries it would be too vague a concept to work.
