Ascribing cognitive capacities to non-living objects sets a dangerous precedent. Computer engineers and roboticists working to program and build artificially intelligent machines are consistently over-ambitious in this regard; they overreach. Humanity ought not be the standard by which we measure machine competency and potential. Calling smart programs, such as those capable of responding to natural language or recognizing faces across photographs, “human” is problematic on multiple levels. First, it deprives both technology and humanity of their respective individuality. If we aspire to create genuine AI, we must eventually craft distinct ethical and technical rules governing its use. American society has already encountered this conundrum now that a self-driving car has been blamed for killing an Arizona woman (Wakabayashi). Second, it falsely equates how “strong AI” programs have historically operated with what actually happens inside living brains. Third, by labeling intelligent devices and software as human, we impart onto them values and abilities they plainly lack. This paper revisits what John Searle argued about AI in 1980 to discuss the inherent problems with calling mechanical inventions “human.”
Before revisiting those overly ambitious early efforts to construct human-like intelligence through code, I will summarize “Minds, Brains, and Programs,” the article in which Searle outlined his iconic Chinese room argument. Writing in Behavioral and Brain Sciences in 1980, Searle challenged recent simulations of human cognition, declaring boldly, “the computer understanding is not just (like my understanding of German) partial or incomplete; it is zero” (Searle 419). When machines merely manipulate symbols, when they operate on preset rules rather than on present, in-the-moment judgments, they fail to demonstrate intentionality and therefore should not be considered artificially intelligent. Searle held this thesis passionately and set out to rebut criticism from computer scientists at Berkeley, MIT, Yale, and elsewhere. He sorted their rebuttals into six categories: the systems reply, the robot reply, the brain simulator reply, the combination reply, the other minds reply, and the many mansions reply. Searle groups the outlook these replies defend under the label “strong AI,” the assumption that mental and computational processes are equivalent. Above all, Searle holds that only “brains and machines [having] the same causal powers as brains” can think, a perhaps narrow-minded perspective that clearly agitates his detractors.
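To make the rule-following Searle describes concrete, consider a minimal sketch of the Chinese room as a lookup table. The phrases and replies below are illustrative assumptions of mine, not Searle's own examples; the point is only that the program returns whatever symbols its rulebook dictates without consulting any meaning:

```python
# Minimal sketch of Searle's Chinese room: the operator matches incoming
# symbols against a preset rulebook and copies out the prescribed reply.
# The rulebook entries here are illustrative stand-ins, not Searle's examples.
RULEBOOK = {
    "你好吗?": "我很好。",            # "How are you?" -> "I am fine."
    "今天天气怎么样?": "天气很好。",   # "How is the weather?" -> "The weather is fine."
}

def chinese_room(symbols: str) -> str:
    """Return whatever the rulebook prescribes; meaning never enters in."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "Please repeat that."

print(chinese_room("你好吗?"))  # 我很好。 -- a correct reply, zero understanding
```

On Searle's view, scaling this table up to any size changes nothing: the system still manipulates uninterpreted symbols.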
In the decades since Searle, we have encountered various attempts to mimic human consciousness. Siri is little more than a toy, parroting back links to web pages whenever she struggles to provide rich, valuable information, and regurgitating canned jokes upon receiving other uncommon commands. Her ineptitude and improvisational gaps validate Searle's critique along with more contemporary lamentations about AI. Her word-parsing tactic represents little evolution beyond ELIZA, the natural language program that Joseph Weizenbaum developed in 1966 to simulate a psychotherapist. Just as Siri responds differently depending on particular input, ELIZA could manipulate sentence construction after receiving specific keywords (Weizenbaum 36). Hardware disparities aside, namely an audio receiver and expanded memory, Siri operates essentially as ELIZA did half a century ago.
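The keyword mechanism Weizenbaum describes can be sketched in a few lines. The rules and pronoun reflections below are my own simplified stand-ins for his decomposition and reassembly templates, not the original implementation:

```python
import re

# ELIZA-style exchange, heavily simplified: match a keyword pattern, reflect
# the user's pronouns, and slot the captured fragment into a canned template.
REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones ("my" -> "your")."""
    return " ".join(REFLECT.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    """Apply the first matching keyword rule; fall back to a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    return "Please go on."

print(respond("I am unhappy with my job"))
# -> Why do you say you are unhappy with your job?
```

No semantics are consulted at any step; the transformation is purely syntactic, which is precisely Searle's complaint.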
The debate over diction, over how people describe machines, extends beyond conflating a few concepts; whole ideas get muddled too. Responses to Searle battled mostly over semantics. While many took issue with how Searle defined “intentionality,” John C. Marshall posited that Searle's confused overlapping of “theories” with “programs” was an even worse mistake. The dualist hypothesis, which Plato introduced into the Western canon and Descartes reformulated, separates notions of mind from body. Searle substantiates his critique of Schank, Abelson, and other hotshot “strong AI” scientists in part by referring to their dualist assumptions: he interprets mind and body as inseparable components of the human form. Separating minds from bodies to theoretically justify a particular scientific application, whether in artificial intelligence or another field, corrupts what dualism means.
Scientists and philosophers alike have long argued over what distinguishes Homo sapiens from non-human primates and from any potential artificial intelligence. The correct answer, I argue, is not one feature alone but many combined. Our ability to grasp analogies and draw deeper meaning from stories; our capacity to empathize with characters in distress; our ability to heal ourselves scientifically; our ability to communicate through written word, picture, and sound; our ability to assess logical argument; and our ability to construct advanced technology, all together, differentiate humanity. Infusing that exact recipe of characteristics into any non-living, unnatural form transcends possibility.
Therefore, drawing comparisons between human beings and AI, using innocent metaphors to benchmark technological progress, hamstrings that very progress. Furthermore, reducing dramatic developments in computer science to cheeky, seemingly mundane questions such as “how human is Ava?” radically oversimplifies complex programming and unnecessarily, incorrectly implies that to create intelligent machines inventors must replicate human cognition. While debatably successful on occasion at prodding undergraduates to vocalize half-formed (sometimes wise) opinions, such questions establish absurd equivalencies unworthy of scholarly consideration. Artificial intelligence cannot be human because “being” human results from innate complexity, from an unreplicable balance among all the traits enumerated above. Computers may resemble us in spirit, and may someday make “intentional” judgments, but conferring on them the higher distinction “human” perpetuates semantic absurdity.