Building brains: An ethical dilemma in AI

Tucked inside a connected townhouse three blocks west of Dupont Circle, a tech pioneer lives with his wife, retired after 40 years of teaching computer science at George Washington University. Midway up a short flight of stairs, a diamond-patterned rug hangs at the entrance to their living room. Fresh coffee brews in the kitchen, and what appear to be apple slices are scattered on a serving tray atop a round dining table. Across the room, a half-finished jigsaw puzzle sits beside a window that overlooks their small community courtyard.

Despite this quaint home life, Professor Emeritus of Engineering Peter Bock, a grandfather of three, remains as ambitious as ever, determined to construct an artificially intelligent being whose cognitive abilities match those of humans in “every conceivable way.” Bock explained what that entails last Thursday afternoon, in a lively conversation full of hypothetical dilemmas and nifty anecdotes.

Among the latter, Bock recalled riding in a self-driving car with a Google employee. Since mid-2011, the Mountain View company has been testing its laser-equipped vehicles, mostly on private stretches of road around San Francisco, though legislation permits them in three other states and the District of Columbia. Everything went without a hitch until a police officer directing traffic presented the vehicle with a scenario it had never faced: a command to stop. Instead, the car maneuvered around the officer as if he were a pedestrian stranded in the road. The Google representative quickly intervened, bringing the car to an abrupt halt. After recounting the episode, Bock offered an appropriately blunt verdict: “That cannot happen.”

The anecdote illustrates the swing-and-miss nature of innovation. Technology is fundamentally entangled with risk; a single error in code can bring an entire program down. And when revolutionary products become widely available, consumers face a similar trade-off: surrender their identities and other personal details in order to fully enjoy the services. A well-documented paranoia surrounds artificial intelligence research, one that Bock openly acknowledges will likely never cease.

“We are dealing with fire here,” says Bock. “We are building brains. (And) while everybody else worries about robots hurting us, I concern myself with us hurting the newborn technology, employing them as slaves and violating their rights.”

There is certainly a following behind the belief that machines will eventually outsmart human beings and inflict harm. Popular culture is perpetually fascinated by the existential question. Stanley Kubrick famously explored the topic nearly fifty years ago in “2001: A Space Odyssey,” in which a shipboard computer called ‘HAL 9000’ turns against the astronauts it was built to serve on a voyage to Jupiter. This past April, “Ex Machina” continued the science-fiction tradition: an optimistic programmer helps an eccentric executive administer a Turing test to a humanoid robot named ‘Ava.’

Artificial intelligence – or simply ‘AI’ – has lately been a subject of tremendous scrutiny. Tech overlords Google, Microsoft and IBM each open-sourced machine learning software last month. In a similar move, WIRED reported last week that Facebook will release the hardware design of the massive computer servers it built to run machine learning algorithms. These vital pieces of code are what raise an ordinary smartphone app a few notches intellectually: using face recognition to sort photos into person-centric albums, reading location history to display travel arrangements at routine times, and queuing up music similar to up-voted albums and artists. Certain email clients offer a predictive-reply feature, and Google Translate can instantly translate foreign text when you hold your camera in front of it.
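
The statistical idea behind a feature like predictive reply can be sketched in a few lines. The Python below is purely illustrative – the tiny corpus and the suggest function are invented for this article, and production systems use far richer models – but it shows the core move: learn from past messages which word tends to follow which, then propose the most frequent successor.

```python
from collections import Counter, defaultdict

# Toy sketch of the idea behind "predictive reply": count which word
# tends to follow which in past messages, then suggest the most
# frequent successor. The corpus and names are invented for illustration.
corpus = [
    "thanks so much",
    "sounds good to me",
    "see you soon",
    "thanks so kind of you",
]

follows = defaultdict(Counter)
for message in corpus:
    words = message.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1  # tally each observed word pair

def suggest(word):
    """Return the word most often seen after `word`, or None."""
    successors = follows.get(word)
    return successors.most_common(1)[0][0] if successors else None

print(suggest("thanks"))  # -> 'so' (seen twice after 'thanks')
```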

With these statistical methods in the public domain, experts say progress in AI will come more often and at a far less burdensome expense. David Alan Grier, an Associate Professor in the Center for International Science and Technology Policy within the Elliott School of International Affairs, is considered a leading writer on crowdsourcing.

“Traditionally, developing software involves putting together a small team with a hierarchical structure,” says Grier. “Open-source is a general purpose way of developing software. It is cheaper, much more market-based and makes for easier debugging.”

Machine learning, Grier says, represents the fourth evolution of AI, a field that began as a rudimentary, symbol-based system. While scholars disagree on precisely when the field emerged, many trace its rise to Alan Turing, the British mathematician historically credited with cracking the Nazis’ Enigma code. After the war, Turing turned his attention to machine intelligence, and in 1950 published the paper that would informally establish a standard for defining machine competence. In a Turing test, an interrogator converses with a hidden man and a hidden computer through a series of questions. If this impartial figure cannot reliably tell the computer’s replies from the man’s, scientists can consider the machine artificially intelligent.
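
For the curious, the shape of that protocol can be caricatured in code. The Python sketch below is purely illustrative – the canned respondents and the naive judge are invented stand-ins, not anything Turing specified – but it captures the structure of the game: a judge questions a hidden respondent, then guesses whether it is the machine.

```python
import random

QUESTIONS = [
    "What is your favorite memory?",
    "Describe the smell of rain.",
    "What makes you laugh?",
]

def canned_machine(question):
    # Stand-in machine: always gives the same evasive reply.
    return "That is an interesting question."

def canned_human(question):
    # Stand-in human: picks among a few varied replies.
    return random.choice([
        "Hmm, let me think about that.",
        "That's hard to put into words.",
        "Honestly, it depends on the day.",
    ])

def naive_judge(transcript):
    # Invented heuristic: identical replies to every question seem mechanical.
    replies = {reply for _, reply in transcript}
    return "machine" if len(replies) == 1 else "human"

def imitation_game(judge, human, machine, rounds=10):
    """Each round the judge questions one hidden respondent and guesses
    its identity. Returns the fraction of rounds the judge got wrong."""
    fooled = 0
    for _ in range(rounds):
        respondent, label = random.choice([(human, "human"), (machine, "machine")])
        transcript = [(q, respondent(q)) for q in QUESTIONS]
        if judge(transcript) != label:
            fooled += 1
    return fooled / rounds

print(imitation_game(naive_judge, canned_human, canned_machine))
```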

In another development, billionaire entrepreneur Elon Musk, alongside a number of Silicon Valley investors and other industry moguls, announced last Friday the founding of OpenAI, a non-profit research company dedicated to advancing the field in a manner that, in their own words, “is most likely to benefit humanity as a whole.” For the Tesla and SpaceX chief, the move marks a sudden shift from fearful restraint to coordinated action.

Last summer, Swedish philosopher Nick Bostrom published “Superintelligence: Paths, Dangers, Strategies” to widespread acclaim, advancing the so-called singularity theory popularized by futurist Ray Kurzweil while contending that AI poses a threat to planetary stability equal to or greater than nuclear weapons. Musk embraced the argument wholeheartedly. Months after tweeting in August that “Superintelligence” was “worth reading,” Musk told an MIT symposium, “with artificial intelligence we are summoning the demon.”

Then this July, along with Stephen Hawking, Apple co-founder Steve Wozniak and thousands of other technologists, Musk urged governments to ban AI weapons systems in an open letter warning that “autonomous weapons will become the Kalashnikovs of tomorrow.” Meanwhile, a less vocal segment of the scientific community rejects that fear.

“Any time you have a conscious organism, it is capable of doing harm in this society,” says Evan Drumwright, an Assistant Professor of Computer Science. “It is too early to be afraid. So be optimistic.”

Inside the dazzling Science and Engineering Hall at George Washington University, through a set of shiny glass doors on the fourth floor, a group of ambitious recent graduates mark up whiteboard walls with their machine learning algorithms. One doctoral student, Mahesh Mohan, applies these concepts to infer information from social network communities.

“Different aspects of data become important for solving different kinds of problems,” Mohan says, rotating back and forth on a swivel chair. “Name any aspect of your digital life and machine learning is probably involved.”

Still, the question remains: how will an AI ultimately be built? Bock discovered an answer in 1976 as a young professor at George Washington University, and ever since, he has tried to persuade others that he is right.

“Machine learning is the only way to do it,” Bock says passionately. “This entire industry still relies on the rule-based approach, which does no justice to the depth of life.”

In other words, the multitude of conditions a programmer would have to write out makes the method completely unfeasible. Yet ever since arriving on Apple smartphones four years ago, Siri – a rule-dependent voice assistant – has been a fan favorite.

“Siri is so dumb!” Bock says without hesitation. “So you can ask what the highest mountain in the world is. Oh wonderful!”
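
The contrast Bock draws can be caricatured in a few lines of Python. Everything below is invented for illustration – the rule table, the crude “learned” matcher, the example queries – but it captures his complaint: a hand-written rule only fires on the exact phrasing its author anticipated, while even a simple learned model can generalize from examples.

```python
from collections import Counter

# Rule-based approach: answers only when a memorized phrase appears verbatim.
RULES = {
    "highest mountain": "Mount Everest",
    "capital of france": "Paris",
}

def rule_based(query):
    q = query.lower()
    for phrase, answer in RULES.items():
        if phrase in q:
            return answer
    return "Sorry, I don't understand."

# Learned approach (crudely): compare the query to labeled examples by
# shared words and return the answer of the closest match.
EXAMPLES = [
    ("what is the highest mountain in the world", "Mount Everest"),
    ("which peak is the tallest on earth", "Mount Everest"),
    ("what is the capital of france", "Paris"),
]

def shared_words(a, b):
    return sum((Counter(a.split()) & Counter(b.split())).values())

def learned(query):
    best = max(EXAMPLES, key=lambda ex: shared_words(query.lower(), ex[0]))
    return best[1]

print(rule_based("What is the highest mountain?"))  # -> Mount Everest
print(rule_based("Which peak is the tallest?"))     # -> Sorry, I don't understand.
print(learned("Which peak is the tallest?"))        # -> Mount Everest
```

The learned matcher here is deliberately crude; the point is only that it answers a phrasing it never memorized, something no entry in the rule table can do.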

Instead, what Bock envisions is a creature born in 2024, when enough random-access memory will be available. Day by day, ‘Mada’ – as he calls her – will interact with people and observe the world. She will be complex, an emotional and spiritual being, and Bock says she will never be mass-produced or forced into servitude.

“People ask me why I insist on only building one or two artificially intelligent beings. I tell them: Earth is already overpopulated! Why do you want more people roaming around?”