By Michael Joo
Computers have been essential to human life since, well, only the late 20th century. Although technology has surrounded us since the first hand axe, the computer, from its theoretical model devised by Charles Babbage, has forever changed the potential of machine development. But how did a simple computing machine turn into something we now call "intelligent"? And can artificial intelligence go beyond being smart and become a sentient, conscious being with qualia like ourselves? These questions all started with Alan Turing.
Alan Turing is not the father of computers; that title goes to Charles Babbage (Halacy, 1970).1 But it is safe to say that Turing is the father of artificial intelligence, and that, not just computers, is our focus here. Turing began to ponder machine intelligence after serving as a codebreaker in the Second World War and devising his universal Turing machine. It was in this process that he arrived at his famous test. In "Computing Machinery and Intelligence," Turing proposed what is now called the Turing Test to identify whether a computer, or any machine for that matter, is truly intelligent (Turing, 1950).2 The principle of the test was simple: if a human held a conversation with the machine and could not tell whether it was a computer or a human, then it passed the Turing Test and was, at last, intelligent!
So where are we? Well, by the Turing Test's standard, we might already have a truly intelligent computer. Last semester, Ashok K. Goel, a Professor of Computer Science and Cognitive Science in the School of Interactive Computing at Georgia Tech, added a new teaching assistant to his class named Jill Watson. Sound fake enough? Jill Watson was in fact a virtual TA built on IBM's Watson platform (Maderer, 2016).3 Watson was trained on about 40,000 questions and answers from that particular class. Jill Watson was constructed using machine learning: it was given thousands upon thousands of data points and left to recognize the patterns in them for itself. Although at first those thousands of questions were not enough to make Watson a helpful TA, the human TAs soon fixed the problem by having Watson practice on real questions and learn from its mistakes. So, in a way, Jill Watson was trained in two steps. First, it was given a massive amount of data from which to recognize patterns and understand its task. Second, Watson actually tried out those tasks and, by learning from its contextual errors and recognizing patterns in those errors, became better at its job. Returning to where we stand with this technology, it might be too much of a stretch to say that Jill Watson truly passed the Turing Test. After all, the students who asked Jill Watson questions about the course were not interrogating her with the purpose of finding out whether she was a computer. Nonetheless, the case of Jill Watson offers a positive prospect for what we can achieve with artificial intelligence.
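The two-step process described above can be sketched as a toy model. To be clear, this is not the actual Watson pipeline; the class name, the similarity measure, and the feedback rule are all illustrative assumptions, chosen only to make the "memorize, then learn from errors" idea concrete:

```python
from collections import Counter
import math

def vectorize(text):
    # Step-1 representation: a bag-of-words count of the question text
    return Counter(text.lower().split())

def cosine(a, b):
    # Similarity between two bag-of-words vectors
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class QATeachingAssistant:
    """Toy Q&A assistant (hypothetical, not IBM's system):
    step 1 memorizes question/answer pairs; step 2 adjusts
    per-pair confidence weights from feedback on its mistakes."""

    def __init__(self):
        self.pairs = []    # (question vector, answer)
        self.weights = []  # confidence weight per stored pair

    def train(self, qa_pairs):
        # Step 1: ingest historical Q&A data and index it
        for q, a in qa_pairs:
            self.pairs.append((vectorize(q), a))
            self.weights.append(1.0)

    def answer(self, question):
        # Answer with the stored pair whose weighted similarity is highest
        qv = vectorize(question)
        scores = [self.weights[i] * cosine(qv, pv)
                  for i, (pv, _) in enumerate(self.pairs)]
        best = max(range(len(scores)), key=lambda i: scores[i])
        return self.pairs[best][1], best

    def feedback(self, pair_index, correct):
        # Step 2: practice and learn from errors, downweighting pairs
        # that produced wrong answers and reinforcing ones that worked
        self.weights[pair_index] *= 1.1 if correct else 0.5
```

In this sketch, the first training phase only stores patterns; competence comes from the second phase, where repeated feedback reshapes the weights, loosely mirroring how the human TAs improved Jill Watson by letting it practice.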
But does the above constitute a conscious machine? Will an artificially intelligent computer have the same consciousness that we humans have? At first it feels scary that a hunk of metal could think and be self-aware like you and me. But is it really? (Are you even self-aware?) At a fundamental level, that hunk of metal is not so different from the hunk of tissue inside our heads. Both use a roughly binary system: the computer its 0s and 1s, our brains the firing or silence of neurons. Both, as is widely accepted today, think and solve problems through material interaction, not through some mysterious incorporeal essence. However, the philosophical literature is still very much divided on what machine consciousness requires. Hilary Putnam, a functionalist among many other things, held that if a system realizes the same pattern of information processing that our brain does, that system will have mental states and, in consequence, consciousness in both the access and the phenomenal sense. On the contrary, critics of functionalism such as Ned Block argue that qualia, along with other forms of consciousness, depend on physical constitution: a system would need the same kind of material as the brain for consciousness to truly exist.
Christof Koch, a more recent voice in this debate, offers two important insights. First, according to his "'integrated information theory' which asserts that consciousness is a product of structures, like the brain, that can both store a large amount of information and have a critical density of interconnections between their parts," a computer with the same cause-effect structure as the brain would have consciousness, and it would "feel like something to be [a] computer" (Regalado, 2014).4 Second, Koch argues that a software simulation of the brain would not be conscious; only a physical machine that reproduces the brain's causal structure would. He thinks that consciousness is a fundamental, intrinsic property, like mass, that appears when matter is arranged in a certain way. One can agree or disagree with this view, but it raises questions not only about how we could construct machine qualia, but also about what standard would let us truly say that a machine has consciousness.
The strangest thing about consciousness is that, strictly speaking, we can never know for sure that anyone else is conscious; only I can know that I am. Defining consciousness is itself a nearly unsolvable problem in philosophy, and no matter how far technology advances, we may never be able to answer the question "Is that computer conscious like I am?" But that should not, and most likely cannot, stop scientists from trying to construct artificial consciousness. The step from the artificial intelligence of pattern recognition and simple problem solving to an artificial consciousness of emotion and intuition may happen sooner than we think.
1 Halacy, D. S. (1970). Charles Babbage, Father of the Computer. Springfield: Crowell-Collier Press.
2 Turing, A. M. (1950, October). Computing Machinery and Intelligence. Mind, 59(236), 433–460.
3 Maderer, J. (2016, May 9). Artificial Intelligence Course Creates AI Teaching Assistant. Georgia Tech News Center. Retrieved May 15, 2016, from http://www.news.gatech.edu/2016/05/09/artificial-intelligence-course-creates-ai-teaching-assistant
4 Regalado, A. (2014, October 2). What It Will Take for Computers to Be Conscious. MIT Technology Review. Retrieved May 16, 2016, from https://www.technologyreview.com/s/531146/what-it-will-take-for-computers-to-be-conscious/