The Potential for Artificial Intelligence

For something to be an artificial intellect, it has to be a computer program complex enough to learn and improve itself over time as it gains experience, the same way a human learns. A human can learn new skills and acquire knowledge from experience. A chess program stores its past games so it does not have to recalculate its next move when it reaches a position it has seen before; it simply plays the move it made in a previous game. Some artificial intelligence programs will carry on a conversation with a person and adjust how they communicate based on what the person has said. There is even a test of how realistically a computer can communicate: volunteers interact with an unseen partner and are asked whether they think they are talking to a computer or a live person. This is called the Turing test, and passing it would take an insanely intricate program.

But can a computer actually become conscious? To answer this question, we first need to know when something is conscious. Am I conscious? Is my cat conscious? Knowing that one exists seems to be the dividing line between being conscious and not being conscious. That is all well and fine, except we then need to define what it means to know that one exists. If I were to print, “I know that I exist,” on a computer screen, would that mean the computer is conscious? It appears we are stuck in the conundrum of defining consciousness.

Maybe we should try an easier task: define life. What is life? If something is alive, it has to respond to stimuli, right? Well, what counts as a reaction to stimuli? A rock will melt if enough heat is applied. Hmm. Perhaps, then, if something is alive, it tries to preserve itself.
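
As an aside for the curious, here is a minimal sketch of that position-caching idea in Python. It assumes a plain dictionary keyed by a board-position string; the names (PositionBook, expensive_search) are made up for illustration and are not taken from any real chess engine, which would typically use a hashed transposition table instead.

```python
# Minimal sketch of the position-caching idea described above: remember
# which move was played in a position so it never has to be recomputed.
# All names here are illustrative, not from a real chess engine.

class PositionBook:
    def __init__(self):
        self._book = {}  # position string -> move played in that position

    def best_move(self, position, search_fn):
        """Return the remembered move for this position, or compute and store one."""
        if position in self._book:
            return self._book[position]   # seen this position before: reuse the move
        move = search_fn(position)        # otherwise, do the expensive calculation
        self._book[position] = move       # remember it for next time
        return move


def expensive_search(position):
    # Stand-in for a real move search (minimax, alpha-beta pruning, etc.).
    return "e2e4"


if __name__ == "__main__":
    book = PositionBook()
    start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq -"
    print(book.best_move(start, expensive_search))  # computed and stored
    print(book.best_move(start, expensive_search))  # retrieved from the book
```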

As the Star Trek: The Next Generation episode “The Quality of Life” pointed out (I am a fan of Star Trek), our current definition of life is deeply flawed. Data, an android and himself a form of artificial life, ponders the question “What is life?” and designs an experiment to test whether the machine in dispute is alive. Data reasons that anything alive will try to defend itself when threatened, so he places the machine in a situation with a false sign of danger, expecting that it will surely try to save itself. The machine simply carries on, apparently unaware of a danger it should have readily detected, and so it is judged not alive. The episode goes on from there, and it is eventually discovered that the machine is indeed alive. In an ironic twist, its superior intelligence had already determined that the danger was false and that there was no real threat, so it decided to carry on. As an additional swipe at our perceptions of what makes consciousness, intelligence, and “life” in particular, Data also points out that fire meets many of the requirements for life: fire eats as it burns wood, it grows as it expands, and it reproduces by throwing off sparks that could start another fire. Convention tells us, however, that fire is not alive.

We can’t really answer whether a computer can become alive or conscious; our definitions of life and consciousness are mostly to blame. Since we have such a small sample of living and conscious beings to draw on, our definitions are rather crude. Personally, I would argue that a computer could become alive and conscious, but that it would require elaborate hardware we do not yet have.

1 Comment

  1. Ivan Pierre June 17, 2014 at 3:18 pm

    No, it certainly doesn’t have to be designed complex; it should find its complexity through learning and construct its own complexity by refactoring itself…
    Otherwise there’s no hope of reaching that level… at least for us…
    And I don’t think better knowledge would avoid this problem.
