Sunday, October 18, 2015

Brain Pickings Weekly - Book Review: What to Think About Machines That Think





What to Think About Machines That Think: Leading Thinkers on Artificial Intelligence and What It Means to Be Human

“Once we had neurons. Now we’re becoming the neurons.”

When Ada Lovelace and Charles Babbage invented the world’s first computer, their “Analytical Engine” became the evolutionary progenitor of a new class of human extensions — machines that think. A century later, Alan Turing picked up where they left off and, in laying the foundations of artificial intelligence with his Turing Test, famously posed the techno-philosophical question of whether a computer could ever enjoy strawberries and cream or compel you to fall in love with it.
From its very outset, this new branch of human-machine evolution made it clear that any answer to such questions would invariably alter how we address the most fundamental question of what it means to be human.
That’s what Edge founder John Brockman explores in the 2015 edition of his annual question, inviting 192 of today’s most prominent thinkers to tussle with these core questions of artificial intelligence and its undergirding human dilemmas. The answers, collected in What to Think About Machines That Think: Today’s Leading Thinkers on the Age of Machine Intelligence (public library), come from such diverse contributors as physicist and mathematician Freeman Dyson, music pioneer Brian Eno, biological anthropologist Helen Fisher, Positive Psychology founding father Martin Seligman, computer scientist and inventor Danny Hillis, TED curator Chris Anderson, neuroscientist Sam Harris, legendary curator Hans Ulrich Obrist, media theorist Douglas Rushkoff, cognitive scientist and linguist Steven Pinker, and yours truly.




Illustration by Sydney Padua from The Thrilling Adventures of Lovelace and Babbage

A handful of common threads run through the answers, a major one being the idea that artificial intelligence isn’t some futuristic abstraction but a palpably present reality with which we’re already living.
Beloved musician and prolific reader Brian Eno looks at the many elements of his day, from cooking porridge to switching on the radio, that work seamlessly thanks to an invisible mesh of connected human intelligence — a Rube Goldberg machine of micro-expertise that makes it possible for the energy in a distant oil field to power the stove built in a foreign factory out of components made by scattered manufacturers, and ultimately cook his porridge. In a sentiment that calls to mind I, Pencil — that magnificent vintage allegory of how everything is connected — Eno explains why he sees artificial intelligence not as a protagonist in a techno-dystopian future but as an indelible and fruitful part of our past and present:
My untroubled attitude results from my almost absolute faith in the reliability of the vast supercomputer I’m permanently plugged into. It was built with the intelligence of thousands of generations of human minds, and they’re still working at it now. All that human intelligence remains alive, in the form of the supercomputer of tools, theories, technologies, crafts, sciences, disciplines, customs, rituals, rules of thumb, arts, systems of belief, superstitions, work-arounds, and observations that we call Global Civilization. 
Global Civilization is something we humans created, though none of us really know how. It’s out of the individual control of any of us — a seething synergy of embodied intelligence that we’re all plugged into. None of us understands more than a tiny sliver of it, but by and large we aren’t paralyzed or terrorized by that fact — we still live in it and make use of it. We feed it problems — such as “I want some porridge” — and it miraculously offers us solutions that we don’t really understand.
[…]
We’ve been living happily with artificial intelligence for thousands of years.




Art by Laura Carlin for The Iron Giant by Ted Hughes

In one of the volume’s most optimistic essays, TED curator Chris Anderson, who belongs to the increasingly endangered tribe of public idealists, considers how this “hive mind” of semi-artificial intelligence could provide a counterpoint to some of our worst human tendencies and amplify our collective potential for good:
We all know how flawed humans are. How greedy, irrational, and limited in our ability to act collectively for the common good. We’re in danger of wrecking the planet. Does anyone thoughtful really want humanity to be evolution’s final word?
[…]
Intelligence doesn’t reach its full power in small units. Every additional connection and resource can help expand its power. A person can be smart, but a society can be smarter still…
By that logic, intelligent machines of the future wouldn’t destroy humans. Instead, they would tap into the unique contributions that humans make. The future would be one of ever richer intermingling of human and machine capabilities. I’ll take that route. It’s the best of those available.
[…]
Together we’re semiunconsciously creating a hive mind of vastly greater power than this planet has ever seen — and vastly less power than it will soon see.
“Us versus the machines” is the wrong mental model. There’s only one machine that really counts. Like it or not, we’re all — us and our machines — becoming part of it: an immense connected brain. Once we had neurons. Now we’re becoming the neurons.




Art from Neurocomic, a graphic novel about how the brain works

Astrophysicist and philosopher Marcelo Gleiser, who has written beautifully about how to live with mystery in a culture obsessed with knowledge, echoes this idea by pointing out the myriad mundane ways in which “machines that think” already permeate our daily lives:
We define ourselves through our techno-gadgets, create fictitious personas with weird names, doctor pictures to appear better or at least different in Facebook pages, create a different self to interact with others. We exist on an information cloud, digitized, remote, and omnipresent. We have titanium implants in our joints, pacemakers and hearing aids, devices that redefine and extend our minds and bodies. If you’re a handicapped athlete, your carbon-fiber legs can propel you forward with ease. If you’re a scientist, computers can help you extend your brainpower to create well beyond what was possible a few decades back. New problems that once were impossible to contemplate, or even formulate, come around every day. The pace of scientific progress is a direct correlate of our alliance with digital machines. 
We’re reinventing the human race right now.
Another common thread running through a number of the answers is the question of what constitutes “artificial” intelligence in the first place and how we draw the line between machine thought and human thought. Caltech theoretical physicist and cosmologist Sean Carroll performs elegant semantic acrobatics to invert the question:
We are all machines that think, and the distinction between different types of machines is eroding. 
We pay a lot of attention these days, with good reason, to “artificial” machines and intelligences — ones constructed by human ingenuity. But the “natural” ones that have evolved through natural selection, like you and me, are still around. And one of the most exciting frontiers in technology and cognition is the increasingly permeable boundary between the two categories.




Art from Alice in Quantumland by Robert Gilmore, an allegory of quantum physics inspired by Alice in Wonderland

Developmental psychologist Alison Gopnik, who has revolutionized our understanding of how babies think, considers the question from a complementary angle:
Computers have become highly skilled at making inferences from structured hypotheses, especially probabilistic inferences. But the really hard problem is deciding which hypotheses, out of all the many possibilities, are worth testing. Even preschoolers are remarkably good at creating brand-new, out-of-the-box concepts and hypotheses in a creative way. Somehow they combine rationality and irrationality, systematicity and randomness, to do this, in a way we haven’t even begun to understand. Young children’s thoughts and actions often do seem random, even crazy — just join in a three-year-old’s pretend game sometime… But they also have an uncanny capacity to zero in on the right sort of weird hypothesis; in fact, they can be substantially better at this than grown-ups.
Of course, the whole idea of computation is that once we have a complete step-by-step account of any process, we can program it on a computer. And after all, we know there are intelligent physical systems that can do all these things. In fact, most of us have actually created such systems and enjoyed doing it, too (well, at least in the earliest stages). We call them our kids. Computation is still the best — indeed, the only — scientific explanation we have of how a physical object like a brain can act intelligently. But at least for now, we have almost no idea at all how the sort of creativity we see in children is possible. Until we do, the largest and most powerful computers will still be no match for the smallest and weakest humans.
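For readers who want to see Gopnik’s distinction made concrete, here is a minimal sketch in Python. It is mine, not hers or the book’s, and the hypothesis space (a “fair” versus a heads-“biased” coin) and its numbers are invented purely for illustration. Given hypotheses someone has already imagined, Bayes’ rule reweighs them mechanically as evidence arrives; dreaming up the hypotheses in the first place is the part she says machines haven’t begun to master.

# A minimal sketch (not from the book) of probabilistic inference over
# a *fixed* hypothesis space. The hypotheses and numbers are invented
# for illustration; inventing them is the step machines still lack.

def bayes_update(prior, likelihoods, observation):
    """Return the posterior P(hypothesis | observation) via Bayes' rule."""
    unnormalized = {h: prior[h] * likelihoods[h][observation] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Hand-built hypothesis space: is a coin fair, or biased toward heads?
prior = {"fair": 0.5, "biased": 0.5}
likelihoods = {
    "fair":   {"heads": 0.5, "tails": 0.5},
    "biased": {"heads": 0.9, "tails": 0.1},
}

for flip in ["heads", "heads", "heads"]:  # the evidence we observe
    prior = bayes_update(prior, likelihoods, flip)

print(prior)  # after three heads, "biased" climbs to roughly 0.85

Run it and the posterior for “biased” rises to about 0.85, yet no amount of updating will ever add a third hypothesis to that dictionary.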




Art by Ben Newman from Professor Astro Cat’s Frontiers of Space by computer scientist Dominic Walliman

In my own contribution to the volume, I consider the question of “thinking machines” from the standpoint of what thought itself is and how our human solipsism is limiting our ability to envision and recognize other species of thinking:
Thinking isn’t mere computation — it’s also cognition and contemplation, which inevitably lead to imagination. Imagination is how we elevate the real toward the ideal, and this requires a moral framework of what is ideal. Morality is predicated on consciousness and on having a self-conscious inner life rich enough to contemplate the question of what is ideal. The famous aphorism attributed to Einstein — “Imagination is more important than knowledge” — is interesting only because it exposes the real question worth contemplating: not that of artificial intelligence but of artificial imagination. 
Of course, imagination is always “artificial,” in the sense of being concerned with the unreal or trans-real — of transcending reality to envision alternatives to it — and this requires a capacity for accepting uncertainty. But the algorithms driving machine computation thrive on goal-oriented executions in which there’s no room for uncertainty. “If this, then that” is the antithesis of imagination, which lives in the unanswered, and often vitally unanswerable, realm of “What if?” As Hannah Arendt once wrote, losing our capacity for asking such unanswerable questions would be to “lose not only the ability to produce those thought-things that we call works of art but also the capacity to ask all the unanswerable questions upon which every civilization is founded.”
[…]
Will machines ever be moral, imaginative? It’s likely that if and when they reach that point, theirs will be a consciousness that isn’t beholden to human standards. Their ideals will not be our ideals, but they will be ideals nonetheless. Whether or not we recognize those processes as thinking will be determined by the limitations of human thought in understanding different — perhaps wildly, unimaginably different — modalities of thought itself.
Futurist and Wired founding editor Kevin Kelly takes a similar approach:
The most important thing about making machines that can think is that they will think differently. 
Because of a quirk in our evolutionary history, we are cruising as if we were the only sentient species on our planet, leaving us with the incorrect idea that human intelligence is singular. It is not. Our intelligence is a society of intelligences, and this suite occupies only a small corner of the many types of intelligences and consciousnesses possible in the universe. We like to call our human intelligence “general purpose,” because, compared with other kinds of minds we’ve met, it can solve more kinds of problems, but as we continue to build synthetic minds, we’ll come to realize that human thinking isn’t general at all but only one species of thinking. 
The kind of thinking done by today’s emerging AIs is not like human thinking.
[…]
AI could just as well stand for Alien Intelligence. We cannot be certain that we’ll contact extraterrestrial beings from one of the billion Earthlike planets in the sky in the next 200 years, but we can be almost 100 percent certain that we’ll have manufactured an alien intelligence by then. When we face those synthetic aliens, we’ll encounter the same benefits and challenges we expect from contact with ET. They’ll force us to reevaluate our roles, our beliefs, our goals, our identity. What are humans for? I believe our first answer will be that humans are for inventing new kinds of intelligences that biology couldn’t evolve. Our job is to make machines that think differently — to create alien intelligences. Call them artificial aliens.




Art from a vintage children’s-book adaptation of Voltaire’s Micromégas, a seminal work of science fiction and an allegory of what it means to be human

Linguist and anthropologist Mary Catherine Bateson — whose mother happens to be none other than Margaret Mead — directly questions how the emergence of artificial intelligence will interact with our basic humanity:
Will humor and awe, kindness and grace, be increasingly sidelined, or will their value be recognized in new ways? Will we be better or worse off if wishful thinking is eliminated and, perhaps along with it, hope?
This, indeed, is another of the common threads — the question of moral responsibility implicit to the future of artificial intelligence. Philosopher Daniel Dennett, who has pondered the flaws of our intuition, counters our misplaced fears about artificial intelligence with the appropriate focus of our concerns:
After centuries of hard-won understanding of nature that now permits us, for the first time in history, to control many aspects of our destinies, we’re on the verge of abdicating this control to artificial agents that can’t think, prematurely putting civilization on autopilot. The process is insidious, because each step of it makes good local sense, is an offer you can’t refuse. You’d be a fool today to do large arithmetical calculations with pencil and paper when a hand calculator is much faster and almost perfectly reliable (don’t forget about round-off error), and why memorize train timetables when they’re instantly available on your smartphone? Leave the map reading and navigation to your GPS; it isn’t conscious, it can’t think in any meaningful sense, but it’s much better than you are at keeping track of where you are and where you want to go.
But by outsourcing the drudgery of thought to machines, Dennett argues, we are rendering ourselves at once obsolete and helplessly dependent:
What’s wrong with turning over the drudgery of thought to such high-tech marvels? Nothing, so long as (1) we don’t delude ourselves, and (2) we somehow manage to keep our own cognitive skills from atrophying.
He drives the point home with a simple, discomfiting thought experiment:
As we become ever more dependent on these cognitive prostheses, we risk becoming helpless if they ever shut down. The Internet is not an intelligent agent (well, in some ways it is), but we have nevertheless become so dependent on it that were it to crash, panic would set in and we could destroy society in a few days. That’s an event we should bend our efforts to averting now, because it could happen any day.
The real danger, then, is not machines that are more intelligent than we are usurping our role as captains of our destinies. The real danger is basically clueless machines being ceded authority far beyond their competence.
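Dennett’s parenthetical about round-off error is easy to verify for yourself. Here is a tiny demonstration (mine, not his): binary floating point cannot represent a decimal like 0.1 exactly, so even “almost perfectly reliable” machine arithmetic drifts in ways a pencil-and-paper calculation would not.

# A tiny illustration (not Dennett's) of round-off error: 0.1 has no
# exact binary representation, so repeated addition quietly drifts.

total = sum(0.1 for _ in range(10))
print(total)         # 0.9999999999999999, not quite 1.0
print(total == 1.0)  # False

from fractions import Fraction  # exact rational arithmetic
print(sum(Fraction(1, 10) for _ in range(10)) == 1)  # True

The point is not that calculators are untrustworthy, but that trusting them wisely requires knowing where their competence ends, which is precisely Dennett’s worry.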




Art from Alice in Quantumland by Robert Gilmore, an allegory of quantum physics inspired by Alice in Wonderland.

Computer scientist and inventor Danny Hillis similarly urges prudent progress:
Machines that think will think for themselves. It’s in the nature of intelligence to grow, to expand like knowledge itself. 
Like us, the thinking machines we make will be ambitious, hungry for power — both physical and computational — but nuanced with the shadows of evolution. Our thinking machines will be smarter than we are, and the machines they make will be smarter still. But what does that mean? How has it worked so far? We’ve been building ambitious semi-autonomous constructions for a long time — governments and corporations, NGOs. We designed them all to serve us and to serve the common good, but we aren’t perfect designers and they’ve developed goals of their own. Over time, the goals of the organization are never exactly aligned with the intentions of the designers.
He calls the notion of smart machines capable of building even smarter machines “the most important design problem of all time” and adds:
Like our biological children, our thinking machines will live beyond us. They need to surpass us too, and that requires designing into them the values that make us human. It’s a hard design problem, and it’s important that we get it right.
In the collection’s pithiest contribution, Freeman Dyson, he of great wisdom on the future of science, answers with a brilliant reverse Turing Test of sorts:
I do not believe that machines that think exist, or that they are likely to exist in the foreseeable future. If I am wrong, as I often am, any thoughts I might have about the question are irrelevant. If I am right, then the whole question is irrelevant.




Art by Lisbeth Zwerger for a special edition of Alice in Wonderland

The most lyrical essay in the volume comes from Oxford computer scientist Ursula Martin, who offers a Thoreauesque account of a marshland hike and extracts from it a beautiful metaphor for the dimensional meaning of intelligence:
Reading the watery marshland is a conversation with the past, with people I know nothing about, except that they laid the stones that shape my stride, and probably shared my dislike of wet feet.
Beyond the dunes, wide sands stretch across a bay to a village beyond. The receding tide has created strangely regular repeating patterns of water and sand, which echo a line of ancient wooden posts. A few hundred years ago salmon were abundant here, and the posts supported nets to catch them. A stone church tower provides a landmark, and I stride out across the sands toward it to reach the village, disturbing noisy groups of seabirds.
The water, stepping-stones, posts, and church tower are the texts of a slow conversation across the ages. Path makers, salmon fishers, and even solitary walkers mark the land; the weather and tides, rocks and sand and water, creatures and plants respond to those marks; and future generations in turn respond to and change what they find.
[…]
What kind of thinking machine might find its own place in slow conversations over the centuries, mediated by land and water? What qualities would such a machine need to have? Or what if the thinking machine was not replacing any individual entity but was used as a concept to help understand the combination of human, natural, and technological activities that create the sea’s margin, and our response to it?
[…]
The purpose of the solitary walker may be straightforward — to catch fish, to understand birds, or merely to get home safely before the tide comes in. But what if the purpose of the solitary walker is no more than a solitary walk — to find balance, to be at one with nature, to enrich the imagination, or to feed the soul. Now the walk becomes a conversation with the past, not directly through rocks and posts and water but through words, through the poetry of those who have experienced humanity through rocks and posts and water and found the words to pass that experience on. So the purpose of the solitary walker is to reinforce those very qualities that make the solitary walker a human being, in a shared humanity with other human beings. A challenge indeed for a thinking machine.
What to Think About Machines That Think is an immeasurably stimulating read in its entirety, exploring the intersection of science, philosophy, technology, ethics, and psychology to unravel some of the most important questions worth asking. Complement it with Diane Ackerman’s poetic meditation on what the future of robots reveals about the human condition, then revisit previous editions of Brockman’s annual question, in which prominent thinkers address the biggest misconception holding us back (2014), the only thing worth worrying about (2013), the single most elegant theory of how the world works (2012), and the best way to make ourselves smarter (2011).
