Tuesday, April 21, 2015

Is Artificial Intelligence a Threat?

It is something of an age-old question, if we consider artificial intelligence to be just another step in the evolution of technology.  The key difference for many people is that artificial intelligence might make us question our very humanity, not just cause changes in lifestyle.  As we see in movies and literature, there are fears of an artificial person replacing real people in the affections of others, since, being non-human, it could lack human flaws while preserving human qualities.  We imagine an artificial intelligence to be superior, almost god-like, and fear it will end up controlling us.  The characters of HAL and VIKI are examples of this.  Neither example, it seems, really wants to control humans for pure power, but rather to "protect" us from our own weaknesses, like a helicopter parent.  There is an element of power in giving that protection, of course, but when we imagine brutal dictators or machine overlords, they are not exactly parental.

The Source referred to the Industrial Revolution and the film Modern Times as examples of past advancement and fear of change, saying that we have adjusted to technology in the past and we will adapt to any future change as well.  Although there are problems and unforeseen consequences with new inventions and discoveries, ultimately they offer us solutions or they disappear as quickly as they came.  For her, the fear of new technologies in general and artificial intelligence in particular comes down to the fear of losing control over our own lives.

The True Philosopher wrote more of the dangers of other humans using AI against us than the dangers posed by AI itself, and continued along these lines in the meeting.  He reminded us that AI exists today in many of our appliances and gadgets, not in some sci-fi future.  Some experts, Hubert Dreyfus among them, are sure that artificial intelligence will never overtake natural intelligence.  The True Philosopher also questioned the limited definition we normally give intelligence when it relates to human-created entities, pointing out that we now recognize several varieties of intelligence in humans.  Why should we limit artificial intelligence to the cognitive?

The Deep Thinker examined a number of viewpoints on the possible threat to humanity, saying it depends on the mind using the technology.  There is the threat of losing livelihoods to a machine's superior output or efficiency.  There is also the threat to our self-image; we see a reflection of ourselves in a robot or android that might remind us of our own mechanics.  Are we nothing but machines ourselves?  Even bots can learn to converse like humans; he told us some have even been used for therapeutic purposes.

While the fear of being machines ourselves might be justified, I also wonder if we fear machines being human.  In movies such as A.I. and television shows like Futurama, we see the conflict between fleshly humans and robotic humans.  Futurama even presented it as analogous to racism and segregation in twentieth-century America in the episode "I Dated a Robot".  Also interesting to note is that the bots that pass the Turing test are often the rude ones.  We expect other humans to be impolite on the internet, so the bot that calls us assholes arouses less suspicion.  The Deep Thinker conceded that fear of recognizing the humanity in an object could very well exist alongside the fear of our own loss of humanity as organic machines.

He also found the use of the word artificial interesting, stating that it is only used when we feel some pressure to distinguish ourselves from something we use, like an "artificial" limb.  Machines are extensions of ourselves, like any other tool.  He brought up Siri as an example of existing AI: a program running an algorithm that allows it to modify its behavior based on experience and new information.  This is the basis of human intelligence, as well as a trademark of AI.  We worry about the danger of malfunctions in our machines, but we have our own malfunctions; only the simplest of programs are normally free of faults.  Finally, he reasoned that the AI we imagine in movies and literature is an attempt to imprint human character on machines.  He added that our own intelligence is not "natural", that is, not something found outside the insulated society we have built around ourselves.
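The idea of a program that modifies its behavior based on experience and new information can be illustrated with a toy sketch.  To be clear, this is not how Siri or any real assistant works; the agent, its actions, and its rewards below are all invented for illustration, a minimal example of behavior adjusted by feedback:

```python
import random

class LearningAgent:
    """Toy agent that adjusts its behavior from feedback,
    in the spirit of 'modify behavior based on experience'."""

    def __init__(self, actions):
        self.scores = {a: 0.0 for a in actions}  # running value estimate per action
        self.counts = {a: 0 for a in actions}

    def choose(self, explore=0.1):
        # Mostly pick the best-known action, occasionally try another one.
        if random.random() < explore:
            return random.choice(list(self.scores))
        return max(self.scores, key=self.scores.get)

    def learn(self, action, reward):
        # Incremental average: each new experience nudges the estimate.
        self.counts[action] += 1
        self.scores[action] += (reward - self.scores[action]) / self.counts[action]

agent = LearningAgent(["greet", "ignore"])
for _ in range(200):
    a = agent.choose()
    agent.learn(a, 1.0 if a == "greet" else 0.0)  # greeting is rewarded

print(agent.choose(explore=0.0))  # after experience: greet
```

The program starts with no preference at all; the preference for greeting is not written in anywhere, it emerges from the feedback, which is the whole point of the "experience" argument.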

Our Leader also firmly stated that the threat to us is posed not by AI, but by the humans who create and use it.  We want to create something god-like and better than ourselves, but our own limitations of imagination, prediction and reasoning prevent us from creating anything perfect.  The fears represented by Hollywood are "mythology" and are best cast aside.  As for the Deep Thinker's question about the word artificial, he insisted it was not necessary to ponder it too much; besides, we could make androids or robots out of biomatter.  The distinction between artificial and natural in this case is merely one of being consciously created by a human or not.

There is also the question of emotion and morality in machines.  As the True Philosopher mentioned, cognitive intelligence is but one issue; what about emotional understanding and compassionate decision making?  In the movie I, Robot, a robot saves an adult with a greater chance of survival from a car accident, while humans would choose to save the child if they could, possibly an evolutionary tactic that gives more importance to future generations than to present or past ones.  How can we program a system to override its logical conclusion in extreme situations?  Without an emotional history, how can we encode morality into a machine?  The weakest point of AI, in the Leader's opinion, is how it reacts in bad situations, when unexpected circumstances arise that it has not been programmed to handle.

The Writer also supported the idea that human users are the real threat, saying that the portrayals in Hollywood and literature are simply there to profit from our fear, not to show any accurate picture of what is being developed.  She also reminded us that machines are tools, there to help us solve problems and do work, but ultimately developed by humans, so we are responsible for them and for how they function.

The Educator pointed out another aspect of intelligence that had not been much explored, that being humor.  Jokes written by computers are not funny, at least to her.  As mentioned previously, they lack the emotional history and the empathy to generate stories and comments that strike people as funny.  Could AI one day produce enjoyable comedy?  It does not seem impossible, but it likely depends on achieving the successful programming of emotional intelligence as well as memory capacity.  The Educator also let us in on her pining for a chip that would give one fluent command of a foreign language, in our case, English.  Many others in the group nodded wistfully as she described it.

The Seeker of Happiness explained how he uses recordings of games to improve his playing, but that is rote memory, not intelligence.  In his opinion, it is using memory and limited resources efficiently that defines intelligence.  AI should not have enormous stores of memory; it should have a simple program to keep the most useful information for the future, as we do, more or less.  He also saw no risk in AI itself, but in the people in power who control it.  He often speaks about the haves and have-nots, and this time he ended with a furious rant about the exploitation people suffer at the hands of the wealthy and powerful, not only in poor regions of the world but also in developed and "rich" countries in Europe.  AI might be able to squash dissent more efficiently for the ruling class, but it is still the rulers who send the bots to do their bidding.
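The Seeker's notion of keeping only the most useful information, rather than enormous stores of memory, is essentially what programmers call a cache-eviction policy.  Here is a minimal sketch, with invented chess-themed entries and a least-recently-used rule standing in for "usefulness":

```python
from collections import OrderedDict

class UsefulMemory:
    """Tiny bounded memory: keeps the items used most recently
    and forgets the stalest entry when capacity is exceeded."""

    def __init__(self, capacity=3):
        self.capacity = capacity
        self.items = OrderedDict()

    def remember(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)  # a refreshed memory counts as useful
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # forget the least recently used

    def recall(self, key):
        if key in self.items:
            self.items.move_to_end(key)  # recalling keeps a memory alive
            return self.items[key]
        return None

mem = UsefulMemory(capacity=2)
mem.remember("opening", "e4")
mem.remember("defense", "Sicilian")
mem.recall("opening")            # touching "opening" keeps it fresh
mem.remember("endgame", "rook")  # over capacity: "defense" is forgotten
print(list(mem.items))  # ['opening', 'endgame']
```

The store never grows past its capacity; what survives is whatever keeps proving useful, roughly the "simple program to keep the most useful information" he described.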
