Friday, July 27, 2012

Why am I here?


A Simple Dream?
Can I have my Slurpee machine now?
The 18-year-old James Eagan Holmes, who gave a presentation at a Miramar College summer camp as one of the Salk Institute's 2006 summer interns, was introduced as a boy whose dream was to own a Slurpee machine. Does that fact help to explain how he ended up as "The Joker" in the Aurora nightmare?


Video courtesy ABC News.

The New York Daily News reported that jailers have told journalists that Holmes, claiming to have no memory of the shooting, has repeatedly asked them why he is in jail. Strangely, there are other reports that he had mailed a package to a "psychiatrist (who is also a professor at the school)," who had not received it. The "notebook documenting the chilling details of Holmes's planned attack" was discovered to have been sitting in the university mail room since July 12 only after the same psychiatrist, on July 23, "reported receiving a package believed to be from Holmes" that turned out to be a fake.

There is a story there somewhere that we have not yet heard. Who is this psychiatrist? Who sent the fake package, and why did the mail room fail to deliver the real parcel for eleven days?

We also want to know where and for whom James worked within the Anschutz campus, since according to a UPS driver, "Holmes had 90 packages delivered to his workplace on the University of Colorado medical campus."

Who Is James Holmes?

We know a few things about Holmes from the type of gossip that serves as news in the mainstream media. He graduated from the University of California, Riverside, and according to Pastor Jerald Borgie, he belonged to Penasquitos Lutheran Church for about 10 years; his mother, Arlene, attended every Sunday and also volunteered there.

We know his father, Dr. Robert M. Holmes, Jr., received a PhD in Statistics in 1981 from the University of California at Berkeley and began working for San Diego-based HNC Software, Inc. in 2000, two years before the company merged with Fair Isaac Co. (now FICO). Robert's father was Lt. Col. Robert M. Holmes, Sr., a 1948 Turkish language graduate of the Army Language School (later the Defense Language Institute) in Monterey, California.

In 1980 Robert Jr. co-authored with Chester Pabiniak a paper entitled "Forecasting PCS ORT Moves Using Tree Classifications," prepared for the Navy Personnel Research and Development Center in San Diego, California 92152-6800. According to its foreword: "The objective of this task was to develop a defensible method to forecast Permanent Change of Station (PCS) operational, rotational, and training (ORT) move counts for use in budget development." He is said by San Diego Reader editor Matt Potter to have a PhD in statistics from Cal-Berkeley, a master's in biostatistics from UCLA, and a bachelor's in mathematics from Stanford. Over the last ten years, he has developed predictive models for financial services, as well as credit and fraud risk models. He is one of several scientists who patented a predictive model system used to detect telecommunications fraud.

Robert married Arlene Rosemary Eagan, who was born in Los Angeles in 1955, in the late summer of 1985. They had apparently met while students at Berkeley. Immediately after their marriage in Los Angeles, they moved to a small house in the Clairemont Mesa West section of San Diego and also lived in Crestmont, west of the current residence in Rancho Penasquitos.

Between 1995 and 2001, the family lived in Castroville, California, where James Holmes went to elementary school. A public records database shows Holmes' parents owned a home in Oak Hills there before moving back to San Diego in 2002. Both Crestmont and Rancho Penasquitos are a short distance north of the Miramar Marine Corps Air Station, formerly Naval Air Station Miramar, a United States Marine Corps installation that is home to the 3rd Marine Aircraft Wing, the aviation element of the 1st Marine Expeditionary Force.

 
It was at nearby Miramar College in 2006 that the young James Holmes, shown in the video at the top of this page, spoke about his mentor, John E. Jacobson.


"Mediocre Student Intern?"
Los Angeles Times - July 22, 2012

A graduate student who worked with Holmes at the Salk Institute’s Computational Neurobiology Laboratory had a far different view, recalling him as a “mediocre” student who was enormously stubborn.

“I saw a shy, pretty socially inept person,” said John Jacobson, now a PhD candidate at UC San Diego in philosophy and cognitive sciences. “I didn’t see any behavior that would be indicative of violence then or in the future.”

But “he should not have gotten into the summer program,” Jacobson said. “His grades were mediocre. I’ve heard him described as brilliant. This is extremely inaccurate.”

Jacobson said Holmes was accepted as the Salk Institute’s summer intern because at the time the Institute was not marketing its program to the top math and physics high school students. Holmes was accepted because his resume indicated he had done some computer programming, Jacobson told the Los Angeles Times. But his high school transcripts showed Bs and B+s, and no Advanced Placement classes, Jacobson said.

Jacobson said after that summer, administrators changed recruiting policies and now get applications from very high-level math and physics students.

In a video of a summer-end presentation, Holmes names Jacobson as his “mentor.”

“That is not true. That’s almost slanderous,” Jacobson said. “I was never his mentor.”

Holmes worked briefly for him over eight weeks that Jacobson described as very frustrating, characterized by the young man’s unwillingness to follow Jacobson’s suggestions — contrary to the usually engaging experience Jacobson said he’s had working with high school students.

“My experience with him was quite bad,” Jacobson said.

He said he set Holmes to work writing computer code for an experiment Jacobson had done involving a game of rock-paper-scissors, in which the computer always beats the human, no matter who goes first.
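The article does not say how Jacobson's program worked; one way a computer can win every round of rock-paper-scissors, no matter who "goes first" in announcing a move, is simply to read the human's choice before committing to its own. A minimal sketch of that idea in Python (illustrative only, not the code from Jacobson's experiment):

    # A rock-paper-scissors opponent that cannot lose, assuming it is
    # allowed to observe the human's choice before revealing its own.
    # Illustrative only; not the code from Jacobson's actual experiment.

    BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

    def computer_move(human_choice):
        """Return the move that defeats the human's choice."""
        return BEATS[human_choice]

    while True:
        choice = input("rock, paper, or scissors? ").strip().lower()
        if choice not in BEATS:
            break
        print("Computer plays:", computer_move(choice), "- computer wins.")
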
An event co-hosted last April by John Jacobson, now a PhD candidate at UC San Diego in philosophy and cognitive sciences:
At this TEDx event, we will discuss the promise and consequences of technologies which will augment and radically transform our minds, bodies, and cultures. These technologies range from visor cellphones, through more intimate cyborg interfaces, across biotech, and to in-silico life. Many see these transformations as inevitable outcomes of accelerating technological development and global market conditions. This conference aims to go deeper than the shiny veneer of hype, to investigate the scientific states-of-art, ethical and existential ramifications, and socio-economic consequences of human enhancement technologies. We are interested in both local short-term effects and broad, longer term questions. Conference accompanied by a Transhuman Art Exhibition and snacks are provided.
~~~~~~~~~~~~~


(Published online 2004 April 14)


This research was supported by the Howard Hughes Medical Institute (D.M.E. and T.J.S.) and a grant from the Chapman Foundation and NSF IGERT (J.E.J.). John E. Jacobson's address: Howard Hughes Medical Institute at the Salk Institute for Biological Studies, 10010 North Torrey Pines Road, La Jolla, California 92037, USA

Jacobson's co-authors on the above paper included the "renowned neuroscientist" Terry Sejnowski, whose "cutting edge research has unlocked many of the mysteries of the brain." He joined host David B. Granet, M.D., of UCSD to discuss how and why people fall in love and other "mysteries of the brain." He also spoke in 2005 about how to predict the future.


Terrence Sejnowski's goal is to discover the principles linking brain mechanisms and behavior. His laboratory uses both experimental and modeling techniques to study the biophysical properties of synapses and neurons and the population dynamics of large networks of neurons.
Terrence J. Sejnowski, Christof Koch, and Patricia S. Churchland, "Computational Neuroscience," Science, Vol. 241, September 9, 1988.

As of 1988, T. Sejnowski was in the Department of Biophysics at the Johns Hopkins University, Baltimore, MD 21218; C. Koch was with the Computation and Neural Systems Program at the California Institute of Technology, Pasadena, CA 91125. P.S. Churchland was in the Department of Philosophy at the University of California at San Diego, La Jolla, CA 92093.

The ultimate aim of computational neuroscience is to explain how electrical and chemical signals are used in the brain to represent and process information. This goal is not new, but much has changed in the last decade. More is known now about the brain because of advances in neuroscience, more computing power is available for performing realistic simulations of neural systems, and new insights are available from the study of simplifying models of large networks of neurons. Brain models are being used to connect the microscopic level accessible by molecular and cellular techniques with the systems level accessible by the study of behavior.

Understanding the brain is a challenge that is attracting a growing number of scientists from many disciplines. Although there has been an explosion of discoveries over the last several decades concerning the structure of the brain at the cellular and molecular levels, we do not yet understand how the nervous system enables us to see and hear, to learn skills and remember events, to plan actions and make choices.

Simple reflex systems have served as useful preparations for studying the generation and modification of behavior at the cellular level (1).

In mammals, however, the relation between perception and the activity of single neurons is more difficult to study because the sensory capacities assessed with psychophysical techniques are the result of activity in many neurons from many parts of the brain. In humans, the higher brain functions such as reasoning and language are even further removed from the properties of single neurons. Moreover, even relatively simple behaviors, such as stereotyped eye movements, involve complex interactions among large numbers of neurons distributed in many different brain areas (2-4).

Explaining higher functions is difficult, in part, because nervous systems have many levels of organization between the molecular and systems levels, each with its own important functions. Neurons are organized in local circuits, columns, laminae, and topographic maps for purposes that we are just beginning to understand (5-8).
More later.....

Wednesday, July 25, 2012

The mind -- a terrible thing to make

Curt Suplee
Curt Suplee wrote the following article for the Washington Post, which was reprinted in the Syracuse, NY, Post Standard on October 9, 1986:

Neural Network
New Circuit Replicates Way Neurons Interact in the Brain

 © 1986 by Curt Suplee
 




A mind is a terrible thing to make. Or so we've long believed.


The prospect of machines that can actually think has vexed our fancy since the dawn of cybernetics, and haunted our pop mythology from "Forbidden Planet" to "2001: A Space Odyssey."

And it menaced again in the recent fad fervor for "artificial intelligence" programs, at least until apprehensive desk jockeys discovered that despite closet loads of AI software, their IBM PCs were still dumb as a toaster. 

But now a radically new form of computer architecture and a revolutionary conception of synthetic thought are bringing that prospect disconcertingly close to reality:
  • In Baltimore, a bucket of chips is teaching itself to read.
  • In Cambridge and San Diego, blind wires are learning to see in three dimensions.
  • And suddenly in labs across the country, formerly dreary and docile computers are becoming quirky, brilliant and inscrutable — becoming, in short, more like people.
Neural Network
At the heart of the new machines is a system called a neural network: a circuit designed to replicate the way neurons act and interact in the brain.

It differs from traditional system design as a conference call from a walkie-talkie; from traditional system behavior as an infant from an adding machine — it makes mistakes, finds solutions that are "pretty good" rather than perfect, can keep running even when badly hurt and organizes itself according to its own idiosyncratic rules.

Of course, it has its drawbacks. For openers, "It can't add 2 and 2," says Robert Hecht-Nielsen, manager of the electronics firm TRW's Artificial Intelligence Center at Rancho Carmel, Calif. "Don't have a neural net do your bank book." [Note: TRW's lab was at 1 Rancho Carmel, San Diego, CA 92128.]

Robert Hecht-Nielsen
"Our customers like the idea that it might be able to take a few bullets and keep on running." 

And don't count on it for your Christmas-card list. "Networks are more naturally suited to the kinds of problems that human beings are good at," says Johns Hopkins biophysicist Terrence Sejnowski.

T. Sejnowski
"We're not good at memorizing or doing arithmetic." Moreover, "it will make errors. But they're not errors that you'll be uncomfortable with."

This is a computer? Yes, but not like any you've seen. Almost every computing device in use today shares a common structure derived from the work of mathematician John von Neumann (1903-57). All elements of the system are expressed in binary digits (0 or 1, on or off; hence the term "digital") and stored at specific memory addresses like Post Office pigeonholes.

All work is done through a single central processing unit (CPU) or main chip. When the software requests something, the CPU proceeds to locate the relevant units of data, pull them down, process them and then reach out for the next specified bunch.

Each transaction must be handled one after another by this postal-clerk CPU, whence the expression "serial" processing. It's dandy for running a spreadsheet. But if your brain worked that way, it would take you a month to tie your shoes.
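To make the postal-clerk picture concrete, here is a toy von Neumann machine in Python: one memory, one processor, one instruction handled at a time. The tiny instruction set and program are invented for illustration.

    # Toy von Neumann machine: a single CPU loop fetches one instruction
    # at a time from addressed memory; strictly serial processing.
    # The instruction format and program are invented for illustration.

    memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12),
              3: ("HALT", None),
              10: 2, 11: 2, 12: 0}      # code at addresses 0-3, data at 10-12

    acc, pc = 0, 0                      # accumulator and program counter
    while True:
        op, addr = memory[pc]           # fetch the next instruction...
        pc += 1
        if op == "LOAD":                # ...then execute it, one at a time
            acc = memory[addr]
        elif op == "ADD":
            acc += memory[addr]
        elif op == "STORE":
            memory[addr] = acc
        elif op == "HALT":
            break

    print(memory[12])                   # 4: every step waited its turn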

Fortunately, it doesn't. "Look closely at the brain," says Christof Koch of MIT's Artificial Intelligence Laboratory, "and the distinction between hardware and software disappears" in what he calls a "computational soup." In the "wetware" of the human nervous system, there is no central processor. Each neuron is connected to as many as 1,000 others.

It collects two kinds of "input" — excitatory ("do something") or inhibitory ("stay cool") — from other neurons, then sums and processes those signals. Scientists struggled unsuccessfully for decades to duplicate this structure on computers. But in 1982 a Caltech biochemist named John J. Hopfield suggested a model, and interest revived with a fury.
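In code, the summing behavior described above reduces to a few lines. A minimal sketch, with weights, threshold, and inputs invented for illustration: positive weights play the excitatory role, negative weights the inhibitory one, and the unit fires only if the sum clears its threshold.

    # One artificial neuron: sum weighted inputs, fire if over threshold.
    # Weights, threshold, and inputs here are invented for illustration.

    def neuron(inputs, weights, threshold):
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total > threshold else 0        # fire or stay quiet

    inputs  = [1, 1, 0, 1]                          # signals from other "cells"
    weights = [0.7, -0.4, 0.9, 0.3]                 # excitatory (+), inhibitory (-)
    print(neuron(inputs, weights, threshold=0.5))   # prints 1: 0.6 > 0.5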

Hopfield's prototypical neural network uses an amplifier to mimic the neuron's core, and a set of mathematical routines called algorithms to determine how each pseudo-neuron will process its data.

Incoming lines from other "cells" are run through a set of capacitors and resistors that control the neuron's resting threshold. And to simulate the difference between excitatory and inhibitory signals, the amplifier has two output lines, one positive, one negative. Such systems are capable of astounding speed, because, as Hopfield and David Tank (of Bell Laboratories' Department of Molecular Biophysics) write in Biological Cybernetics, "a collective solution is computed on the basis of the simultaneous interactions of hundreds of devices" producing a sort of blitzkrieg committee decision.
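Suplee is describing Hopfield's analog circuit; the discrete version usually taught today captures the same collective behavior and fits in a few lines. A sketch, assuming NumPy and a made-up eight-unit pattern: one memory is stored Hebbian-style in the weight matrix, then recalled from a corrupted copy through the simultaneous interactions the article mentions.

    # Discrete sketch of a Hopfield network (the article describes the
    # analog circuit; this is the standard binary variant). One made-up
    # pattern is stored, corrupted, and recalled.

    import numpy as np

    pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])   # the stored "memory"
    W = np.outer(pattern, pattern).astype(float)       # Hebbian storage
    np.fill_diagonal(W, 0)                             # no self-connections

    state = pattern.copy()
    state[:3] *= -1                                    # corrupt three units
    rng = np.random.default_rng(1)
    for _ in range(10):                                # asynchronous updates
        for i in rng.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1

    print(np.array_equal(state, pattern))              # True: memory recalled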

Neural networks are besting mainframes at some of the toughest problems in the computational chipstakes. Astonishing new products are expected by the early '90s, and research is expanding in a dozen directions.

"Listen to that," says Johns Hopkins biophysicist Terrence Sejnowski, ear cocked toward the tape player. The sound is an eerie, tweetering gargle like some aborigine falsetto  — ma-mnamnamnaneeneenee-irmunu-bleeeeeeeeee.

"It's discovering the difference between vowels and consonants," Sejnowski says. He's listening to a neural network teaching itself to read aloud. Working with Charles R. Rosenberg of Princeton's Psychology Department, Sejnowski designed a network whose task was to learn to pronounce  correctly a group of sentences containing 1,000 common English words.

They had been read previously by a little boy, and a linguist had transcribed the boy's speech into phonemes (the discrete parts of words), which would serve as the benchmark for the network's accuracy. Sejnowski and Rosenberg fed the letters of each word sequentially into the network for processing by three successive tiers of proto-neuronal "cells," each of which receives data that "fan in" to it from various cells in the layer below, manipulate the data and then send the result up a level, finally exiting into a speech-synthesizer.

If the machine had "known how to read" from the outset, each of the cells would already have contained the correct program equations for assigning certain sounds to certain clusters of letters.

Instead, Sejnowski and Rosenberg filled the cells with mathematical garbage generated at random. The system was thus designed to begin in complete ignorance and "learn" just as a child does — by being told he is wrong. That is, the output end of the system would record each squawk the network sent to the speech-synthesizer, compare it with the correct phonemes recorded by the linguist and send an error signal to inform the network how far off it had been from the desired sound.
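The learning loop Suplee describes (squawk, compare with the target, send back an error signal) is what came to be known as backpropagation. A toy version in Python, far smaller than NETtalk: a two-layer network filled with random numbers learns the XOR mapping purely from being told how wrong it is. The task and layer sizes are invented for illustration; NETtalk itself used three tiers and 1,000 words.

    # Toy "learn by being told you're wrong" loop, NETtalk-style in miniature.
    # A tiny two-layer network starts from random weights and learns XOR
    # from its error signal alone. Task and sizes invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)    # desired outputs

    W1 = rng.normal(size=(2, 8))                       # "mathematical garbage"
    W2 = rng.normal(size=(8, 1))                       # to start with

    sigmoid = lambda z: 1 / (1 + np.exp(-z))
    for _ in range(10000):
        H = sigmoid(X @ W1)                # hidden tier
        Y = sigmoid(H @ W2)                # the network's "squawk"
        err = T - Y                        # how far off the desired output
        dY = err * Y * (1 - Y)             # error signal sent back down
        dH = (dY @ W2.T) * H * (1 - H)
        W2 += H.T @ dY                     # each cell modifies its equations
        W1 += X.T @ dH

    print(Y.round(2).ravel())              # should approach [0, 1, 1, 0]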

Through such correction, each of the system's 200 cells has modified its equations hundreds of times. The scientists know it has taught itself. But they don't know how. Nor can they predict exactly where in the mess it will store its knowledge.

"Cut just one wire on a conventional computer," says Sewjnowski, "and the machine will stop dead. But you can cut large sections out of this network, and it doesn't even feel it. It'll make a few more errors occasionally," like the brain after a concussion. "But no single connection is essential."

That's a net plus for TRW's Hecht-Nielsen, whose work is funded in part by the Pentagon's Defense Advanced Research Projects Agency.

"Our customers like the idea that it might be able to take a few bullets and keep on running."