Wednesday, July 25, 2012

The mind -- a terrible thing to make

Curt Suplee
Curt Suplee wrote the following article for the Washington Post, which was reprinted in the Syracuse, NY, Post Standard on October 9, 1986:

Neural Network
New Circuit Replicates Way Neurons Interact in the Brain

© 1986 by Curt Suplee

A mind is a terrible thing to make. Or so we've long believed.


The prospect of machines that can actually think has vexed our fancy since the dawn of cybernetics, and haunted our pop mythology from "Forbidden Planet" to "2001: A Space Odyssey."

And it menaced again in the recent fad fervor for "artificial intelligence" programs, at least until apprehensive desk jockeys discovered that despite closet loads of AI software, their IBM PCs were still dumb as a toaster. 

But now a radically new form of computer architecture and a revolutionary conception of synthetic thought are bringing that prospect disconcertingly close to reality:
  • In Baltimore, a bucket of chips is teaching itself to read.
  • In Cambridge and San Diego, blind wires are learning to see in three dimensions.
  • And suddenly in labs across the country, formerly dreary and docile computers are becoming quirky, brilliant and inscrutable — becoming, in short, more like people.
Neural Network
At the heart of the new machines is a system called a neural network: a circuit designed to replicate the way neurons act and interact in the brain.

It differs from traditional system design as a conference call from a walkie-talkie; from traditional system behavior as an infant from an adding machine — it makes mistakes, finds solutions that are "pretty good" rather than perfect, can keep running even when badly hurt and organizes itself according to its own idiosyncratic rules.

Of course, it has its drawbacks. For openers, "It can't add 2 and 2," says Robert Hecht-Nielsen, manager of the electronics firm TRW's Artificial Intelligence Center at Rancho Carmel, Calif. "Don't have a neural net do your bank book." [Note: TRW's lab was at 1 Rancho Carmel, San Diego, CA 92128.]

Robert Hecht-Nielsen
"Our customers like the idea that it might be able to take a few bullets and keep on running." 






And don't count on it for your Christmas-card list. "Networks are more naturally suited to the kinds of problems that human beings are good at," says Johns Hopkins biophysicist Terrence Sejnowski.

T. Sejnowski
"We're not good at memorizing or doing arithmetic." Moreover, "it will make errors. But they're not errors that you'll be uncomfortable with."

This is a computer? Yes, but not like any you've seen. Almost every computing device in use today shares a common structure derived from the work of mathematician John von Neumann (1903-57). All elements of the system are expressed in binary digits (0 or 1, on or off; hence the term "digital") and stored at specific memory addresses like Post Office pigeonholes.

All work is done through a single central processing unit (CPU) or main chip. When the software requests something, the CPU proceeds to locate the relevant units of data, pull them down, process them and then reach out for the next specified bunch.

Each transaction must be handled one after another by this postal-clerk CPU, whence the expression "serial" processing. It's dandy for running a spreadsheet. But if your brain worked that way, it would take you a month to tie your shoes.
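To make the postal-clerk picture concrete, here is a tiny, purely illustrative Python sketch of that serial style: data parked at numbered addresses, handled one fetch-and-process transaction at a time. The addresses and values are made up for the example.

```python
# A toy illustration of serial, von Neumann-style processing: values sit at
# numbered memory addresses, and a single loop fetches and handles them one
# at a time, in order. Addresses and values are invented for the example.

memory = {0: 4, 1: 7, 2: 1, 3: 9}    # data stored at specific addresses

total = 0
for address in sorted(memory):        # one transaction after another
    value = memory[address]           # locate the data and pull it down
    total += value                    # process it, then reach for the next
print(total)                          # 21
```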

Fortunately, it doesn't. "Look closely at the brain," says Christof Koch of MIT's Artificial Intelligence Laboratory, "and the distinction between hardware and software disappears" in what he calls a "computational soup." In the "wetware" of the human nervous system, there is no central processor. Each neuron is connected to as many as 1,000 others.

It collects two kinds of "input" — excitatory ("do something") or inhibitory ("stay cool") — from other neurons, then sums and processes those signals. Scientists struggled unsuccessfully for decades to duplicate this structure on computers. But in 1982 a Caltech biochemist named John J. Hopfield suggested a model, and interest revived with a fury.
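Here is a minimal Python sketch of that neuron model, assuming a simple weighted sum checked against a fixed firing threshold; the weights, signals and threshold are illustrative, not taken from the article.

```python
# A bare-bones pseudo-neuron: it sums excitatory (positive) and inhibitory
# (negative) signals from other cells and "fires" only if the total clears
# its resting threshold. All numbers here are illustrative.

def pseudo_neuron(inputs, weights, threshold=0.5):
    """Sum the weighted inputs and fire (return 1) if the total exceeds the threshold."""
    total = sum(signal * weight for signal, weight in zip(inputs, weights))
    return 1 if total > threshold else 0

# Three incoming lines: two excitatory (positive weights), one inhibitory (negative).
print(pseudo_neuron([1, 1, 1], [0.4, 0.3, -0.6]))  # 0 -- inhibition keeps it quiet
print(pseudo_neuron([1, 1, 0], [0.4, 0.3, -0.6]))  # 1 -- excitation wins out
```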

Hopfield's prototypical neural network uses an amplifier to mimic the neuron's core, and a set of mathematical routines called algorithms to determine how each pseudo-neuron will process its data.

Incoming lines from other "cells" are run through a set of capacitors and resistors that control the neuron's resting threshold. And to simulate the difference between excitatory and inhibitory signals, the amplifier has two output lines, one positive, one negative. Such systems are capable of astounding speed, because, as Hopfield and David Tank (of Bell Laboratories' Department of Molecular Biophysics) write in Biological Cybernetics, "a collective solution is computed on the basis of the simultaneous interactions of hundreds of devices" producing a sort of blitzkrieg committee decision.
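As a rough software stand-in for Hopfield's analog circuit, the toy Python sketch below stores one pattern in symmetric connection weights and lets repeated, collective updates settle a corrupted starting state back onto it. The pattern and network size are invented for illustration; this is not Hopfield's published formulation.

```python
import numpy as np

# A toy discrete Hopfield-style network. One pattern is "stored" in the
# connection weights; updating the units over and over lets the whole
# network settle collectively into that pattern from a corrupted start.

pattern = np.array([1, -1, 1, -1, 1])         # the memory to store (+1/-1 units)
W = np.outer(pattern, pattern).astype(float)  # Hebbian-style symmetric weights
np.fill_diagonal(W, 0)                        # no self-connections

state = np.array([1, -1, -1, -1, 1])          # corrupted version of the pattern
for _ in range(5):                            # repeated collective updates
    for i in range(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(state)  # settles back to [ 1 -1  1 -1  1 ]
```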

Neural networks are besting mainframes at some of the toughest problems in the computational chipstakes. Astonishing new products are expected by the early '90s, and research is expanding in a dozen directions.

"Listen to that," says Johns Hopkins biophysicist Terrence Sejnowski, ear cocked toward the tape player. The sound is an eerie, tweetering gargle like some aborigine falsetto  — ma-mnamnamnaneeneenee-irmunu-bleeeeeeeeee.

"It's discovering the difference between vowels and consonants," Sejnowski says. He's listening to a neural network teaching itself to read aloud. Working with Charles R. Rosenberg of Princeton's Psychology Department, Sejnowski designed a network whose task was to learn to pronounce  correctly a group of sentences containing 1,000 common English words.

They had been read previously by a little boy, and a linguist had transcribed the boy's speech into phonemes (the discrete parts of words), which would serve as the benchmark for the network's accuracy. Sejnowski and Rosenberg fed the letters of each word sequentially into the network for processing by three successive tiers of proto-neuronal "cells," each of which receives data that "fan in" to it from various cells in the layer below, manipulates the data and then sends the result up a level, until the output finally exits into a speech synthesizer.

If the machine had "known how to read" from the outset, each of the cells would already have contained the correct program equations for assigning certain sounds to certain clusters of letters.

Instead, Sejnowski and Rosenberg filled the cells with mathematical garbage generated at random. The system was thus designed to begin in complete ignorance and "learn" just as a child does — by being told he is wrong. That is, the output end of the system would record each squawk the network sent to the speech-synthesizer, compare it with the correct phonemes recorded by the linguist and send an error signal to inform the network how far off it had been from the desired sound.
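That learn-by-being-told-you're-wrong scheme can be sketched in a few lines of Python: the connections start out as random numbers, the cell produces an output, and an error signal (desired minus actual) nudges every connection a little. This bare-bones error-correction rule on made-up data is only a cartoon of the Sejnowski-Rosenberg network, which used three tiers of cells and far more connections.

```python
import numpy as np

# A single pseudo-neuron that learns by correction. It starts with random
# weights ("mathematical garbage"), produces an output for each input, and
# adjusts every connection in proportion to how wrong the output was.
# Inputs and targets are invented stand-ins for letter clusters and phonemes.

rng = np.random.default_rng(0)
weights = rng.normal(size=3)          # random starting connections

inputs  = np.array([[1, 0, 1],        # toy "letter cluster" encodings
                    [0, 1, 1],
                    [1, 1, 0]], dtype=float)
targets = np.array([1.0, 0.0, 1.0])   # the "correct phonemes" for each input

for epoch in range(200):              # hundreds of small corrections per connection
    for x, t in zip(inputs, targets):
        output = 1 / (1 + np.exp(-weights @ x))  # the cell's squashed response
        error = t - output                        # how far off from the desired sound
        weights += 0.5 * error * x                # nudge each connection toward the target

for x, t in zip(inputs, targets):
    print(round(float(1 / (1 + np.exp(-weights @ x))), 2), "target:", t)
```

After a couple of hundred passes the outputs sit close to their targets, the same flavor of gradual convergence Sejnowski describes hearing on the tape.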

Through such correction, each of the system's 200 cells has modified its equations hundreds of times. The scientists know it has taught itself. But they don't know how. Nor can they predict exactly where in the mess it will store its knowledge.

"Cut just one wire on a conventional computer," says Sewjnowski, "and the machine will stop dead. But you can cut large sections out of this network, and it doesn't even feel it. It'll make a few more errors occasionally," like the brain after a concussion. "But no single connection is essential."

That's a net plus for TRW's Hecht-Nielsen, whose work is funded in part by the Pentagon's Defense Advanced Research Projects Agency.

"Our customers like the idea that it might be able to take a few bullets and keep on running."
