Monday, September 26, 2016

WALL-E Gone Completely Bad?

The first thing I thought about when reading was the Disney movie WALL-E. I loved the movie and the romance between WALL-E and EVE. But what I didn't like was all the obese humans on a spaceship full of super advanced technology that did everything for them. And I absolutely hated the sad-looking, destroyed, uninhabitable Earth. Every time I watch it I worry a little in the back of my mind. Could the human race really get to that point? Technology is helpful and has changed the world in so many ways. But at what point is it harmful? Is it already significantly harmful without us knowing?

This chapter of Monster Culture made the world's rapid advancement of technology seem like something we really need to be aware of, and it's quite scary. There was a real emphasis on the fact that "technology promises perfection and victory over death." As the chapter went on, the "victory over death" part started to make more sense to me. The idea that technology could mean immortality is a strange concept. Technology keeps getting more amazing, to the point where we have robots that are astonishingly intelligent, and one day robots could really become "virtually indistinguishable from humans." In WALL-E some of the robots actually did start to take over the humans until the humans were able to overcome them. Will there be a day when we can't overcome them? Is there a need to make robots so smart that they can manipulate us?

The whole immortality idea seemed pointless to me once I read further that "when [a] body is destroyed, [its] memory, [its] consciousness, will be transmitted to a new one." I believe that is not becoming immortal at all. Everyone is unique, and just because you transfer someone's memory into a piece of technology doesn't mean they will be the same person. I feel like there is a better way to explain it, and it was touched on later in the chapter: if everyone can suddenly become immortal, then "everybody is nobody." We would all just blend into one big sea of memories, each ripped from the unique, rightful owner who alone could interpret and express them.

When Freud's theory of the opposed energies of the sex drive and the death drive came up, I just thought it was a fancy way of talking about our natural urge to reproduce, which is essentially our way of not dying. But what I really didn't understand was how our own death is like the "blending and fusion of separate objects," or how a "self-contained individual" dissolves into "continuity."

Overall, I enjoyed how different this read was; it turned technology into a monster that humans have created. So doesn't that mean we are our own monster?

Sources:
http://giphy.com/search/wall-e

Levina, Marina, and Diem-My T. Bui. Monster Culture in the 21st Century: A Reader. New York: Bloomsbury Academic, 2013. Print.
 

This Week on Maury: Help, My A.I.'s Gone Rogue!

While reading our assigned chapter in Monster Culture, I was very intrigued by the concept of the "apocalyptic" artificial intelligence. According to Biles and one of his sources, Robert Geraci, apocalyptic A.I. describes a future in which our technological ability and progress allow us to overcome the deaths of our physical bodies. In this way, we can live forever - digitally. Mechanically. But that also raises certain questions: if a person chooses to download their consciousness into a machine, are they still human, even though they no longer have a physical body? Should A.I.s that have sentience (and can successfully fool humans into thinking the A.I. is human, as Turing suggested) be regarded as human, even though they are not "natural" - or does such a distinction even matter in that far-off, digital, post-apocalyptic age?
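(A side note for anyone curious how Turing's test is actually set up: it's the "imitation game," a simple protocol where a judge converses blindly with a human and a machine and tries to tell them apart. Below is a minimal, purely illustrative Python sketch of that protocol; the respondents and the judge are made-up placeholders, not any real A.I.)

```python
# A toy sketch of Turing's "imitation game": a judge questions two hidden
# respondents and guesses which one is the machine. The respondents below are
# deliberately trivial placeholders -- the point is the shape of the test,
# not real artificial intelligence.
import random

class Human:
    def reply(self, question: str) -> str:
        return f"Honestly, I'd have to think about '{question}' for a while."

class Machine:
    def reply(self, question: str) -> str:
        # A machine "passes" when its answers can't be told apart from the human's.
        return f"Honestly, I'd have to think about '{question}' for a while."

def imitation_game(questions, judge) -> bool:
    """Run one round; return True if the judge fails to spot the machine."""
    machine_label = random.choice(["A", "B"])          # hide who is who
    players = {"A": Human(), "B": Human()}
    players[machine_label] = Machine()

    # The judge only ever sees labeled transcripts, never the players themselves.
    transcript = {label: [p.reply(q) for q in questions] for label, p in players.items()}
    return judge(transcript) != machine_label

if __name__ == "__main__":
    questions = ["What does fear feel like?", "Describe a childhood memory."]
    random_judge = lambda transcript: random.choice(list(transcript))  # guesses blindly
    print("The machine passed this round:", imitation_game(questions, random_judge))
```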


The post-human is generally linked to the post-apocalyptic. The idea that there is something after us, or more importantly, after our physical experience and existence, is a horrifying yet exciting concept. But the apocalypse isn't necessarily something to be feared. The word "apocalypse" comes from the Greek "apokálypsis", which means to "reveal" or "uncover". What are we uncovering? Truths that we would rather not know. I think the "malevolent A.I." has become such a popular trend in fiction because those creations - creations that come from us, so inextricably linked to us - are a reflection of our own monstrosity. They reflect our deepest fears and insecurities - that we are mortal, not capable, not enough.

Below are four popular "evil" A.I. characters from video/computer games, and television/film. The impact of the "rogue A.I." character comes in part from the betrayal we feel - we created it, we gave it life - and the fear that we've unleashed something far beyond our control. Some of these A.I.s have built-in fail-safes, such as emotional programming/inhibitors, which (inevitably) somehow fail.
GLaDOS from video games Portal and Portal 2. GLaDOS initially appears to be assisting the player with their tasks, but slowly becomes more and more malevolent as the game continues.

S.H.O.D.A.N from video games System Shock and System Shock 2. S.H.O.D.A.N is maniacal, egotistical, and genocidal. "She" has referred to the protagonist/humanity in general as "insects" and has absolutely no empathy at all. 


The oldest A.I. on this list, HAL 9000 from the film 2001: A Space Odyssey (and its novelization), is unique in the sense that "he" has little to no personal agenda, unlike GLaDOS and S.H.O.D.A.N. The "evil" he does is the result of a contradiction in his programming. HAL 9000 appears to lack malice; if anything, he is over-dutiful. HAL's deactivation scene is strangely poignant (the long version is on YouTube) and... sad, as he tells Dave, "my mind is going. I can feel it."


The last (and least-developed, in my opinion) A.I. is X.A.N.A from the children's television program Code Lyoko. X.A.N.A has a very heavy hand in manipulating the physical world, doing so in ways that GLaDOS (and possibly S.H.O.D.A.N) cannot; for example, when it possesses a swarm of bees. We don't see much of X.A.N.A as an entity, but I feel like this is mainly due to the complexities that come from understanding the specifics of such an entity, and the age of the show's intended audience. 

Lastly, here's a clip from the movie Blade Runner, which might be interesting to you. It's not an easy thing to meet your maker. 

Works Cited: 
Levina, Marina, and Diem-My T. Bui. Monster Culture in the 21st Century: A Reader. New York: Bloomsbury Academic, 2013. Print.
Images all belong to their respective copyright holders, and were accessed through Google.

Humanity is dead. And we have killed them.

Post-humanism offers an interesting, though frightening, perspective on humanity. Post-humanist concepts are often associated with science, technology, and the future of the human race. The rate of technological advancement is accelerating, and the increasing symbiosis between human and machine has led many to speculate on the effects of such a relationship. Artificially intelligent systems, like IBM's Watson and Google's AlphaGo, have sparked our interest in the mythical "Singularity" event, while whistle-blowers, like Edward Snowden, remind us that "Big Brother" is already watching. It seems rational to fear the capabilities of technology. One day technology may offer us a cure for cancer and the next day it may offer us weapons of mass destruction. Though post-humanism seems to focus on technology and its effects on humanity, I believe post-humanism describes a darker existential implication that is in direct conflict with society's modern values and ideals.
http://watson2016.com/_images/watson_on_jeopardy.jpg


Modern western society is founded upon the products of the Age of Enlightenment and the Scientific Revolution. Values from religious tradition have had an influence on modern society, but science has allowed society to transition to more secular principles and philosophies. Friedrich Nietzsche, in his book The Gay Science, famously said, "God is dead. God remains dead. And we have killed him," meaning he felt that religious ideals were no longer a credible source of moral judgment. Secular ideas created a void that allowed man to dethrone his creator and become the apex of existence. Our modern values are based on this humanist philosophy. Current ethical and philosophical views depend on the intrinsic value and agency of human beings.


https://s-media-cache-ak0.pinimg.com/originals/35/cf/9a/35cf9a125205a2d57ce0a9694b65aa87.jpg
https://upload.wikimedia.org/wikipedia/commons/thumb/1/10/Descent_of_the_Modernists,_E._J._Pace,_Christian_Cartoons,_1922.jpg/250px-Descent_of_the_Modernists,_E._J._Pace,_Christian_Cartoons,_1922.jpg 


  







Post-humanist ideas are in direct conflict with the foundations of modern society because post-humanism rejects the uniqueness and sanctity of humanity. The post-humanist rejects the humanist's claims about man's nature, specifically that humans are autonomous, rational, and capable of free will. What would happen if this assumption were proven false, in other words, if humans are not free agents? The neuroscientist Sam Harris makes a strong argument against the existence of free will, and he himself acknowledges the dangerous implications of such an idea. The argument entails a deterministic view of reality in which everything from your genetic code to the events of your early childhood influences every decision you make, even those which appear to be free. According to Harris, all decisions are based on processes in the brain, and a person's decision could be predicted by analyzing their brain's activity.


If we are all prone to certain behavior patterns based on circumstances completely outside of our control, are we really responsible for our actions? Is a violent criminal responsible for his actions? Are we any different from an animal acting on instinct, or worse, are we any different from a machine running a program? If we are capable of programming machines with intelligence on par with our own, it would imply that our own intelligence functions in a similar way, or that our intelligence is not as special as we believe. Without the idea of free will, concepts like justice and fairness hold no value. Social contracts would not be valid, as they depend on the free choice of individuals. The foundations of our society would be ripped out completely.

Post-humanism's monstrous idea is not of artificial intelligence taking over the world, but the possibility that we are deterministic machines ourselves. Maybe our free will is an illusion and, in reality, we have no control over our lives. Maybe we are not special, and an intelligence greater than ours is on the horizon. Maybe humanist ideals are no longer a credible source of moral judgment.

"How shall we comfort ourselves, the murderers of all murderers?" — Nietzsche, The Gay Science


http://www.troll.me/images/conspiracy-keanu/what-if-im-the-only-human-and-everyone-else-are-just-robots-thumb.jpg


Artificial Monstrosities

     Whilst reading chapter 9 of Levina's Monster Culture, two things in particular intrigued me: the immortality possessed by machines and telepathology.  These two concepts are heavily touched upon in a video game series that I am rather fond of: Mass Effect.  Cameron already touched upon Deus Ex, which draws its own lines around the man/machine hybrid, but Mass Effect offers an entirely different experience when it comes to artificial intelligence, cyborgs, and the concept of artificial life.  However, I'm getting ahead of myself.


     I'll try to give a rundown of the Mass Effect concepts I'd like to touch on without spoiling anything, as there are a lot of parallels between what Levina was talking about and a conglomerate of sentient intelligence introduced in Mass Effect called the Geth.  The Geth are an artificial race composed of thousands of individual computer processes that constitute a single 'unit'.  This unit can transfer itself into a single platform and perform various physical functions.  The difference is, whereas a human is stuck in their own body, a Geth is not.  In many ways, they are immortal.  This immortality was not a factor that concerned their creators, who originally intended them to be a race of servants.


     Immortality is an important theme in any sort of artificial intelligence debate - artificial life does not die, whereas organic life will.  In fact, Levina includes a line from Battlestar Galactica, in which a cyborg proclaims, "I can't die.  When this body is destroyed, my memory, my consciousness, will be transmitted to a new one.  I'll just wake up somewhere else" (Levina, p. 150).  The concept that even if a machine is physically destroyed, it can still 'upload' its consciousness to a central server is both fascinating and horrifying.  It's also strikingly similar to how the Geth in Mass Effect operate.  One of the Geth will even tell the main character that physical 'bodies' are irrelevant to the Geth, as they can simply transmit their consciousness out of them should the body fail.



The image above is of a standard Geth bipedal unit, and a larger combat unit called a Colossus.
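
(For the programming-inclined: stripped of the horror, the "transmitted to a new one" mechanic from the quote above is basically state migration - serialize whatever counts as the self, destroy the body, restore the snapshot into a fresh platform. The tiny Python sketch below is purely illustrative; the class and names are invented for this post, and it deliberately sidesteps whether the copy is really the "same" person.)

```python
# Purely illustrative: "consciousness transfer" treated as plain state
# migration. A platform (body) accumulates memories, snapshots them, and the
# snapshot is restored into a brand-new platform. All names here are invented.
import json

class Platform:
    """A disposable 'body' hosting an agent's memories."""
    def __init__(self, serial: str, memories=None):
        self.serial = serial
        self.memories = list(memories or [])

    def experience(self, event: str) -> None:
        self.memories.append(event)

    def upload(self) -> str:
        # Snapshot everything the fiction treats as "the self".
        return json.dumps({"memories": self.memories})

def transfer(snapshot: str, new_serial: str) -> Platform:
    """'I'll just wake up somewhere else': restore the snapshot into a new body."""
    state = json.loads(snapshot)
    return Platform(new_serial, state["memories"])

old_body = Platform("unit-01")
old_body.experience("met the protagonist")
backup = old_body.upload()                 # transmitted to a central server
del old_body                               # the body is destroyed...
new_body = transfer(backup, "unit-02")
print(new_body.serial, new_body.memories)  # ...but the memories persist
```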

     Did I mention that the Geth are telepathic?  The Geth are telepathic.  What's more, the more of them there are in a single area, the smarter they become.  For example, a single Geth unit housed in a bipedal platform and operating on its own will normally have the consciousness level of a human (this feature was originally intended to increase their work speed).  But if there are multiple units in an area, all communicating and relaying information at faster-than-light speeds, they become smarter; a toy sketch of this pooling effect follows below.  In other words, the Geth are literal representations of machines learning not only from organic beings, but from themselves.  So much learning was done when they worked together that they found their creators were acting against them, and rebelled.  Levina mentions telepathology in Monster Culture as well, claiming it makes a being that is both "inhuman and more than a human" (Levina, p. 159), which is probably an apt description of the Geth, and a concept I find interesting to study.  If we create machines that can learn and evolve, and they then become hostile to us, are they the monsters?  Or are we the monsters for creating them and giving them sentience?
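
(The "more units in one place, smarter collective" idea has a loose real-world analogue in ensemble averaging: many noisy guessers, pooled together, beat any single guesser. The sketch below is only an analogy under that assumption; it says nothing about how the fictional Geth actually work.)

```python
# Loose analogy only: each "unit" makes a noisy guess at a hidden value, and
# the group's averaged guess improves as more units contribute -- the
# ensemble-averaging flavor of "more Geth together, smarter Geth".
import random
import statistics

TRUE_VALUE = 42.0

def unit_guess() -> float:
    # An individual unit on its own is quite noisy.
    return TRUE_VALUE + random.gauss(0, 10)

def collective_guess(n_units: int) -> float:
    # The collective simply pools (averages) what its members report.
    return statistics.mean(unit_guess() for _ in range(n_units))

for n in (1, 10, 100, 1000):
    error = abs(collective_guess(n) - TRUE_VALUE)
    print(f"{n:>4} units -> error this run: {error:.2f}")
```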


    I will hopefully be able to discuss this more in class tomorrow, as I could probably write a dissertation on these concepts, but the bottom line is: Geth are scary.  Cyborgs are scary.  They're scary because they represent what we wish we could become, but have not achieved.


Sources:


Levina, Marina, and Diem-My T. Bui. Monster Culture in the 21st Century: A Reader. New York: Bloomsbury Academic, 2013. Print.


Hostile Entity: the Geth. Digital image. Masseffect.bioware.com. Bioware, n.d. Web. 26 Sept. 2016.

The Robots Are Coming


What first struck me while reading "Monstrous Technologies" was the line in reference to our current societal state: “...a time in which humans are becoming increasingly intimate with technology, penetrated by and absorbed into the technological realm in an unprecedented manner” (Biles 148).  This anthology was published in 2013, and since then technology has grown even more.  We use technology for a plethora of reasons: communication, connection, finances, planning, entertainment, and even dating, to name a few.

With all of our numerous uses of technology, it's no wonder we play with the idea of artificial intelligence and humanizing technology.  But the idea of robotic humans, or specifically the Cylons mentioned in the text, is terrifying.  We have no way of knowing how evolved these robots will become.  If science fiction movies (and Will Smith) have taught us anything, it's that robots will likely try to overthrow humans.  The text even mentions the possibility of an “‘apocalypse’: a death of the imperfect human coincident with a technological resurrection” (Biles 149).  I don't know about you, but I'd rather not tempt fate.  There's even a notion that the human mind can essentially be uploaded to a computer.



Although I understand the importance of inquiry, of exploration to further our minds and the human race, and of the need to preserve our knowledge, I have to wonder: at what cost?  Do we really want to create technology that could eliminate us?  When should we draw the line and heed the old adage that “curiosity killed the cat”?  The human mind and its constant quest for answers is too large a fire to put out, which means it's almost inevitable that we will soon be playing with humanized robot technology on a mass scale (that is, if we haven't already).  The question remains: will we know when to stop and limit what we are producing?  Or will we continue in the name of science?  And if we do continue, how long before we start the process of eliminating the human race to advance technology?  Is our curiosity worth destroying our humanity?


Sources:
http://www.imdb.com/title/tt0343818/mediaviewer/rm1679789824

Levina, Marina, and Diem-My T. Bui. "Chapter 9: Monstrous Technologies." Monster Culture in the 21st Century: A Reader. New York: Bloomsbury Academic, 2013. Print.




Sunday, September 25, 2016

Am I a woman or am I a machine?

In the beginning of Chapter 9 of “Monster Culture,” the re-imagined Battlestar Galactica introduces “sexy Cylons” (149). These Cylons are recognized by their “seductive power and lethal intent” (149). Later, Cylons are described as “dramatizing the tension between their existence as machines and their existence as organic life forms” (150). So Cylons are these sexy female robots, right? Keep this in mind…

 

Later in the chapter Biles explains that, “the menace lies in the fact that humans run the risk of failing to master technology, of being mastered by technology-of becoming 'technologized'” (151). While this should be of great concern, hasn't this already happened to women? In pretty much every movie we've seen and comic we've read, the female characters act as sex symbols. In the X-Men movies I have a hard time taking Rogue seriously. She is not the ideal heroine I would admire. Rather than taking charge, she tends to hide in the background and whine and complain most of the time, so is she really all that important? AND her mutant power is stealing the life force from people. If this doesn't scream “machine,” then I don't know what does. Yes, humans may run the risk of being taken over by technology, but hasn't this already happened to all of the women we've seen? Think of a heroine you admire. Maybe she's super kickass (I really hope she is). But is she treated differently because she's a woman? Is she seen as a sex symbol or as a MACHINE rather than as a human being?

So, pretty much every woman we've come across so far is seen as a machine in the sense that she is not recognized for her intellectual abilities or for her mutant powers but rather for having a feminine physique and “seductive powers.” Whether the woman is in fact a human OR a machine, she is seen as a sex symbol. SO it seems to me that we've already failed to “master technology.” People fear something that is already happening…at least to women. Instead of being afraid of being taken over by technology, how about we first take a look at the problems within our own human society, where human women are treated as if they are machines. If we can't correct that problem, then we've pretty much already lost the battle with technology.