In December of 2014, physicist Stephen Hawking told the BBC that “the development of full artificial intelligence could spell the end of the human race.” His fears have been echoed by SpaceX CEO Elon Musk and Microsoft co-founder Bill Gates. They all argue that society at large does not understand how quickly artificial intelligence is developing, how powerful it has become and might yet become, and the dangers it could present to humanity. They suggest that nothing short of the long-term survival of our species is at stake when decisions are made about how artificial intelligence research will be pursued and regulated.

Musk and Hawking both sit on the Scientific Advisory Board for the Future of Life Institute (FLI), a Boston-based volunteer organization that aims to reduce the potential harm associated with what they take to be humanity’s most urgent existential concerns—climate change, biotechnology, nuclear weapons, and artificial intelligence (AI). According to the FLI, each of these “has the potential to eliminate all of humanity, or at the very least, kill large swaths of the global population, leaving the survivors without sufficient means to rebuild society to current standards of living.”

In January 2015, the FLI released an open letter on “Research Priorities for Robust and Beneficial Artificial Intelligence,” proposing that in order to mitigate destructive and existential risks, AI research must prioritize not only the development of more powerful systems but also the development of ethical, legal, and economic platforms to provide checks and balances.

A document accompanying the letter, which outlines possible research avenues to keep AI “beneficial to society,” cites Stanford’s One Hundred Year Study on Artificial Intelligence, saying, “we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes.” The open letter has been signed by Musk, Hawking, Apple co-founder Steve Wozniak, Google director of research Peter Norvig, and thousands of others, ranging from academic computer scientists, philosophers, and physicists to businesspeople. The trope of the out-of-control AI, so prevalent in the science fiction of the twentieth century, has been pronounced plausible by the world’s leading technologists and scientists (though it has also been criticized as fear-mongering by the likes of Facebook CEO Mark Zuckerberg).

It is true that AI is outpacing expectations in many domains. Just this month, the Google DeepMind system AlphaGo beat Lee Sedol, one of the best Go players of the last decade, in four out of five games. Go is among the most complex board games ever developed, involving an astronomical number of possible moves and configurations (as Go commentators often point out, many more possible configurations than there are atoms in the universe). Moreover, the game poses endless tension between local territory battles and strategizing about board-wide positioning. Nothing resembling a brute-force approach, in which all possible moves and countermoves are considered during game play, is possible—even for a very powerful computer.
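The scale of that claim is easy to sanity-check. The short Python sketch below is a rough back-of-the-envelope illustration, not an analysis of legal Go positions: it counts every one of the 361 points on a 19x19 board as empty, black, or white, and compares the resulting upper bound with the commonly cited estimate of roughly 10^80 atoms in the observable universe.

```python
# Back-of-the-envelope comparison behind the "more configurations than atoms" claim.
# Counting each of the 361 points on a 19x19 board as empty, black, or white gives
# 3**361 raw configurations, an upper bound, since not all of them are legal.
from math import log10

raw_configurations = 3 ** 361
atoms_in_universe = 10 ** 80  # commonly cited rough estimate

print(f"Raw board configurations: about 10^{log10(raw_configurations):.0f}")
print(f"Atoms in the observable universe: about 10^{log10(atoms_in_universe):.0f}")
```

Even this crude count puts the board roughly ninety orders of magnitude beyond the atom estimate, which is why nothing resembling exhaustive search is on the table.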

Go is usually thought to be a game best played with carefully honed human intuition. Many thought a computer wouldn’t beat a world champion at Go for another five to ten years. The game “has been the holy grail since Deep Blue beat Kasparov at chess, and it’s held out for 20 years. People were estimating it would be 10-plus years away just last year,” said Demis Hassabis, a member of the Google DeepMind development team. In 1997, Piet Hut, an astrophysicist at the Institute for Advanced Study in Princeton, told the New York Times, “It may be a hundred years before a computer beats humans at Go—maybe even longer.” AlphaGo’s victories were a surprise, not least because it made some plays that were “not human moves,” moves that professional human players would not make, and that were nonetheless powerful and beautiful. Lee Sedol was taken aback by his losses, saying after the second game, “Yesterday, I was surprised, but today it’s more than that. I am quite speechless. If you look at the way the game was played, I admit, it was a very clear loss on my part. … Today I really feel that AlphaGo played the near perfect game.”

The Google DeepMind team that developed AlphaGo was also surprised by the results. Many of today’s AI systems are trained by the statistical methods of machine learning and often surprise their makers. While it is true that computers can only do exactly what they are programmed to do, it is often extremely difficult to know what the consequences of our instructions will be. Indeed, if we could know those consequences in advance, there would be little need for fast computing machinery to carry them out. AlphaGo is certainly a better Go player than any of its developers, and so its abilities cannot be reduced in any simple way to their knowledge of the game. The system is based on Monte-Carlo tree search, in which AlphaGo tries to find possible game paths that give a high chance of winning, as predicted by several evaluation functions. AlphaGo developed its play style through countless games against itself and the incorporation of data from human games to predict statistically what moves an expert human might play. The outcomes of these complex training processes can only be known empirically, after the fact, by watching what a system like AlphaGo does and collecting data about its behavior.
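For readers who want a concrete sense of what “Monte-Carlo” means here, the toy sketch below applies the same underlying idea to tic-tac-toe: estimate how good a move is by sampling many random games that follow it, and pick the move with the best sampled win rate. This is only a flat Monte-Carlo search with random playouts; AlphaGo’s actual system layers a full tree search on top of learned policy and value networks trained by self-play, and none of the names or numbers below come from DeepMind’s implementation.

```python
import random

# Toy "flat" Monte-Carlo move selection for tic-tac-toe. For every legal move we
# simulate many random games to the end and keep the move with the best sampled
# win rate. AlphaGo's real search is far more elaborate (a guided tree search with
# learned evaluation), but the core idea is the same: judge a move by sampling
# possible futures rather than enumerating them all.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

def random_playout(board, to_move):
    """Play uniformly random moves until the game ends; return the winner or None."""
    board = board[:]
    while winner(board) is None and legal_moves(board):
        board[random.choice(legal_moves(board))] = to_move
        to_move = "O" if to_move == "X" else "X"
    return winner(board)

def monte_carlo_move(board, player, playouts=200):
    """Choose the legal move with the highest estimated win rate for `player`."""
    opponent = "O" if player == "X" else "X"
    best_move, best_rate = None, -1.0
    for move in legal_moves(board):
        trial = board[:]
        trial[move] = player
        wins = sum(random_playout(trial, opponent) == player for _ in range(playouts))
        if wins / playouts > best_rate:
            best_move, best_rate = move, wins / playouts
    return best_move

if __name__ == "__main__":
    # X holds a corner, O holds the center; ask the sampler where X should play next.
    board = list("X...O....")
    print("Monte-Carlo choice for X:", monte_carlo_move(board, "X"))
```

The point relevant to the argument above is that even in this stripped-down version, the “knowledge” that makes the chosen move good lives in the sampled outcomes, not in any rule a programmer wrote down in advance.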

Playing Go is one thing, but many AI observers question whether systems that are so complex, powerful, and potentially surprising should be at work in our banks, factories, defense infrastructure, weapons systems, and governmental surveillance networks, not to mention our classrooms, hospitals, and homes. This is in part the motivation behind the FLI’s open letter, which argues that economic, legal, and ethical accountability has to be developed in tandem with these powerful systems. This moment in AI, when systems like AlphaGo are outpacing expectations, might also be particularly disconcerting given that the field has historically been oversold and failed to deliver. Stretches of the 1980s and 90s, for example, are often called “the AI Winter,” a period during which it became clear that early approaches weren’t going to produce the powerful artificial intelligences that researchers had imagined. This prompted reductions in funding and support across the US.

AI has reemerged, in both theory and practice, as one of the dominant fields of computing and robotics research in the country. But the AI of today resembles the AI of the mid-twentieth century virtually in name only. Perhaps the central difference is that early AI research, while it took several forms, was predominantly committed to identifying what makes people intelligent and recreating those mechanisms in a machine. For example, many prominent AI pioneers believed that the human mind was a kind of “information processor” that manipulated symbolic information according to a set of rules in order to solve problems, make decisions, or formulate judgments. They sought to identify those rules by observing human behavior and then translating them into computer programs. These, they believed, would be models of the human mind.

Comparisons with and speculations about human beings are everywhere in the early AI literature. The field was centered on questions about human beings: How do people learn, model their environment, make decisions with incomplete information, and so on? How do doctors make medical diagnoses, how do chemists synthesize compounds, how do mathematicians prove theorems? But these human-inspired early programs were also very limited, their performance often falling short of researchers’ expectations.

Today, the human exemplar has been displaced. The question of whether AI systems perform intelligently because they embody something about the way people work is of little interest to most researchers. They want to build strong systems that perform well in complex problem domains. Even when current approaches do resemble those of the past (AI based on artificial neural networks, for example, began in the 1940s and is still undertaken today), they have been so massively scaled up in terms of computing power that they barely resemble their historical predecessors or what we know of their human counterparts. Many do believe that the development of successful AIs may eventually help us better understand the mechanisms at work in our still very mysterious cognitive capacities. However, for the most part, researchers no longer fashion their systems in the image of humans. Knowledge of human intelligence may come after, but it does not precede, the development of today’s AIs. And unlike their predecessors, today’s AIs are performing well in very hard problem domains.

Fears and concerns about AI are certainly fueled by the speed with which increasingly complex systems are catching up in areas long thought to demand uniquely human faculties. However, it would be incorrect to say that the fear of AI is new and uniquely associated with recent increases in power, performance, and potential. The truth is, Americans have always feared AI, even decades ago when AI programs could barely formulate English sentences or solve simple math problems.

For instance, at a faculty meeting at the Massachusetts Institute of Technology (MIT) in July 1971, professor Edward Fredkin proposed that “a small committee of outstanding scientists in the artificial intelligence field be gathered for a one day meeting at M.I.T. to write an Einstein-like letter to the American government” highlighting that AI would “have a devastating effect on the entire social fabric … unless it is applied with the greatest foresight and restraint.” The proposed letter was in response to an announcement that the Japanese government had pledged $100,000,000 over eight years to support AI and robotics research by Japanese industrial firms. The “Einstein-like letter” Fredkin had in mind would warn then-President Richard Nixon of the disasters that could ensue if Japan developed the world’s strongest AI, just as Einstein had warned President Franklin D. Roosevelt in 1939 of the perils of allowing Germany to beat America in the development of nuclear weapons. Fredkin proposed that “the Japanese newspaper article is a Sputnik equivalent and should thus be used to dramatically call for the mobilization of the relevant American resources.” AI, according to Fredkin, shared the high stakes of the arms race and the space race, and the threat posed by Japan in this area was akin to that of Germany and the USSR in preceding decades. [1]

This “Einstein-like letter” was never written, in part because fellow MIT professor and AI pioneer Joseph Weizenbaum wrote a scathing response the next day. Weizenbaum agreed that AI was “potentially very dangerous.” In Computer Power and Human Reason: From Judgment to Calculation (1976), he would caution that no artificially intelligent machine should be entrusted with serious decision making. However, Weizenbaum felt that Fredkin was mobilizing the threat of Japanese AI to “blackmail the American people into supporting a hasty escalation of our present efforts, efforts that have not been subjected to any critical analysis with respect to their possible social consequences.”

The fear of Japanese dominance in AI nevertheless persisted. More than a decade later, in 1983, Stanford-based AI pioneer Edward Feigenbaum and popular AI historian Pamela McCorduck published The Fifth Generation: Artificial Intelligence and Japan’s Computer Challenge to the World. They wrote, “We are writing this book because we are worried … The Japanese have seen gold on distant hills and have begun to move out. … The Japanese are planning the miracle product. It will come not from their mines, their wells, their fields, or even their seas. It comes instead from their brains. The miracle product is knowledge, and the Japanese are planning to package and sell it in the way other nations package and sell energy, food, or manufactured goods. They’re going to give the world the next generation – Fifth Generation – of computers, and those machines are going to be intelligent. … As everybody knows, knowledge is power. Machines that can amplify human knowledge will amplify every dimension of power” (p. 2, 12, 8).

And herein lies the reality of our persistent fear of AI: The question is never just ‘How powerful will machines become?’ The question is always also, ‘Who will be made powerful by these machines?’ AI is a house of mirrors in which we see ourselves reflected everywhere, at many angles. Destabilizing political forces, characterizations of “the enemy” and “the other,” are always present in figurations of frightening machines.

In the 1970s and 80s, that fear was directed towards the Japanese. During this time, the military threat of 1940s Japan was replaced in the American imagination by a perceived economic threat, as the newly subdued and demilitarized Japan gained access to U.S. markets, especially for technology and cars. The dehumanizing and disdainful discourse that surrounded the Japanese people in 1940s and 50s America gave way to a newly reverent but also concerned vision of Japan as a technological world power. It was a vision echoed in science fiction classics like Blade Runner, directed by Ridley Scott in 1982, and Neuromancer, William Gibson’s debut novel of 1984, both of which depict Japanese technological and political dominance as the background for distinctly dystopian futures featuring fearful figurations of AI.

Today, for the FLI, different dangers lurk in the wings. For example, the institute has taken a particularly strong stand against the development of autonomous weapons—those that “select and engage targets without human intervention.” In part this is because “unlike nuclear weapons, they require no hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.” For them, terrorists and genocidal dictators augment the dangers present in the development of certain artificially intelligent systems.

Although the character of AI ‘under the hood’ has changed significantly, the 2015 open letter warning of the dangers of AI has a lot in common with Fredkin’s proposed letter to the president in 1971. Both caution against the dangers that unchecked AI development may pose to society—at a large scale, in the form of uncontrollable machines, and at a small scale, in the form of systems that might mishandle money, misdirect self-driving cars, or displace human laborers. Both emphasize the need for regulation and caution to keep AI in service of human interests. But both also reveal the degree to which those human interests are understood in opposition to some “other,” different in each case.

In fact, AI serves in general as a site in which human values and identities are articulated; in which the category of humanness itself is negotiated. For one thing, any discussion about artificial intelligence has built into it assumptions about what counts and doesn’t count as intelligent behavior. For example, historian of AI Alison Adam has criticized the field for entrenching gender biases about intelligence. She proposes that AI pioneers further ennobled male-dominated fields like logic and chess as the exemplars of intelligent behavior by focusing on them in developing their early programs. Adam also argues that they further entrenched conceptions of intelligence as disembodied, rule-bound reason, historically gendered male and held in opposition to irrationality and emotion, historically associated with women.

Additionally, humans constantly define themselves in opposition to machines—using the perceived limitations of computers to identify what is most central to and definitive of humanness. Throughout the history of AI, certain human faculties—intuition, consciousness, emotion, and understanding—have been held apart as uniquely human and thus patently un-automatable. However, these qualities are moving targets. As computers are made to perform tasks (such as playing Go) once deemed “too human” for them, a reconceptualization occurs in which some new, more “fundamental” criterion is held forth as the essential delimitation of humanity. In his famous critiques, What Computers Can’t Do (1972) and What Computers Still Can’t Do (1992), philosopher Hubert Dreyfus called this process of constant redefinition the “receding horizon.”

In particular, Dreyfus considered the merits of the famous Turing Test as a metric for intelligence. In the test, a human interlocutor asks questions of two subjects: a human and a computer. Turing proposed that if the computer could trick the interlocutor into thinking it was human frequently enough, then that computer should be deemed “intelligent.” The test has been criticized for reducing intelligence to mere conversation, but in fact, passing this test would be no small feat for a computer. It would require that an AI carry on a compelling and convincing conversation about whatever the interlocutor chose to discuss—from sports, to sex, to philosophy, to the loss of loved ones—requiring that the machine have an incredible wealth of information about human experiences and ideas. No computer has yet passed the test (in any philosophically significant way), but they get better every year. However, Dreyfus wondered what would actually happen if some AI passed the Turing Test. He suggested that rather than deem that computer intelligent, we might instead decide to redefine intelligence.

Fictional AIs also participate in the moving boundary that differentiates human and machine. Sometimes machines are frightening because they are so different from us—unfeeling and without ethics or compassion. Sometimes, and increasingly often, they are frightening because they are too much like us—consider the “skin job” Cylons in the reboot of Battlestar Galactica (2004), Emiko from The Windup Girl, or Ava from Ex Machina. These “too human” AIs are frightening in part because they call into question human uniqueness and whether we are anything “more than machines.” They also raise the possibility that we might create something that is more than a tool, something truly autonomous, which might force us to revisit our role as makers, owners, and users; something that might demand to be free or to self-determine.

These fictional AIs also serve as mirrors for broader social and political anxieties in different times and places. For example, AIs like the Terminator from the 1984 film embody the calculated and unfeeling potential violence of the Cold War. As Kyle Reese explains to Sarah Connor, “It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop—ever—until you are dead.” The Terminator is originally clad in human skin, but throughout the film that skin is stripped away, violently revealing that underneath is only a machine, a weapon from head to toe. Many machines from the Cold War era are depicted as war machines that inevitably turn against their human makers, perpetrating great violence. That period of nuclear power and unprecedented weaponization catalyzed fears that human creations would be the agents of human destruction.

More recently, in the 2015 film Ex Machina, Ava also turns against its maker. But Ava is quite different from the Terminator. It is a hyper-sexualized female figure on an apparently emotional journey of self-discovery and ultimately self-emancipation. Throughout the film, its captors engage in discussions about whether or not Ava can think, feel, or understand. To a historian’s ear, these conversations can’t help but invoke past deliberations among Western scientists, philosophers, and anthropologists about the cognitive capacities of women, people of color, disabled people, and other perceived “lesser” others. Ava stands in for those demographics, which have had to perform, justify, and ultimately defend their intelligence and their humanness within power structures that debase and oppress them. Ava is frightening and violent, but in a manner that resembles the hero of a social movement rather than an uncontained military threat.

If we examine the reasons why we fear our AIs (both real and imagined), we can learn a great deal about ourselves. How AIs are made—and how we set ourselves apart from them—embodies historically specific understandings of intelligence and the human condition more generally. The discourse that surrounds AI serves to highlight social and political anxieties in various contexts: the political agents that are considered threatening, the existential concerns that motivate questions of identity, and the “others” against which different communities define themselves. AI is the ultimate other through which identity and values can be negotiated. Because of this, we should be careful about who we entrust to shape the discussion about AI, to regulate the future of AI, and to define what “human benefit” will be served in its development.

Should Peter Norvig of Google, Elon Musk of SpaceX, and Bill Gates of Microsoft—all of whom have clear commercial interests in AI—be the voices that direct this conversation? Assuaging fear and establishing the beneficial, human-oriented character of AI systems has already become part of marketing them. For example, IBM recently launched an ad campaign extolling the benefits of its Watson system. In these ads, Carrie Fisher, of Star Wars fame, leads a “Coping with Humans Support Group” for robots, in which Watson describes its positive experiences of working with humans to a room full of “sinister, world-conquering, artificially intelligent” robots. Corporate funding also influences this conversation. Musk has already donated ten million dollars to support the research directions advocated in the FLI’s open letter for the promotion of “robust and beneficial AI.” But if corporate interests dominate the conversation, this will undoubtedly constrain what it means for technology to be “beneficial to society.”

Some movements, like Gneural Network, oppose any future in which deep networks and powerful AIs are only in the hands of companies and governments with the money to build them. Like those who advocated for “copyleft” and open source before them, these groups want to make powerful AI systems openly available. Of course, this risks making powerful AI systems accessible to those seeking to use them for harm or exploitation. But a more significant issue is that contemporary AIs demand immense resources. They are very hard to train, requiring immense computing power and a host of technically skilled people; it is not simply a matter of obtaining a set of specific algorithms. It is hard to imagine a future in which AI systems are available to those without access to significant resources, and this ought to factor into our conversation about them.

Fredkin, in 1971, believed that a group of civilian scientists should be placed at the helm of AI development, funding, and regulation. He advocated something resembling the Atomic Energy Commission, which was created in 1946 to take over the development of atomic technology from the military after the war. Nothing like this exists for AI, as yet. The FLI brings together people primarily from academia and industry to reflect on these questions, but it has no governmental presence or policy-making power. Certainly, the Supreme Court and the rest of the U.S. judicial system adjudicate issues related to AI as they arise. However, most justices, by their own admission, have virtually no knowledge of how the technologies work. It is essential that we consider the role that technological competence and expertise can or should play in ongoing conversations over AI public policy and regulation.

I agree with the FLI that AI is among our most complex creations and concerns. Deciding who should regulate it, and how, will be a pressing question of the next decade. Those conversations, however, must everywhere and explicitly recognize the degree to which our concerns about AI are symptomatic of much broader social and political phenomena. Everyone agrees that AI should be “beneficial to society” and “in alignment with human interests”—but these are not self-evident or universal categories. Every figuration of AI, its development, its dangers and its possible promise, embodies a valuation of certain modes of human life, perceived threats and veiled “others.” As we have this conversation, we must remember that we are talking about our selves.


Stephanie Dick is a Junior Fellow in the Harvard Society of Fellows. She studies the history of mathematics and computing in the postwar United States and will be joining the Department of the History and Sociology of Science at the University of Pennsylvania in the Fall of 2017.

 

[1] Professor Joseph Weizenbaum to Edward Fredkin, July 1971. MIT Institute Archives, Laboratory for Computer Science Collection, Records 1961–1988, AC268, Box 15, Folder 18.

