Neuromimetic Hybrid Processor White Paper

- The Neuromimetic Hybrid Processor (NHP): brief non-enabling abstract

- Need for silicon alternatives

- Biomimicry: the way of the future

- Consortiums are not the only way

- Proprietary issues

- Applications

- Proposal

NEUROMIMETIC HYBRID PROCESSOR

Principal Investigator and Inventor: Mathew G. Whitney

The NHP is a computing device born of the biomimicry methodology. Its purpose is to replace silicon, which is nearing the end of its capacity. Compared with the other silicon alternatives, it stands out as the only one with a true neuromimetic architecture.

BIOMORPHED ELEMENTS

  • neurons - polymer-sheathed dendrimers (encapsulated electroactive objects)
  • neural fluids - TCNQ/polyelectrolyte saturated solution
  • synapse - thermionic injection, FET-induced arcing from source to drain, spontaneous clamping, floating deposition
  • neural network - back-propagation, feed-forward, discrete-time recurrent, continuous-time recurrent (a minimal sketch follows this list)
  • determinism - transcopic electrochemical noise analysis, impedance spectroscopy, phase states vs. transition points
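
As a rough, non-authoritative illustration of the neural-network bullet above, the sketch below shows a tiny feed-forward network trained by back-propagation. The layer sizes, data, and learning rate are arbitrary placeholders; nothing here models the NHP hardware itself.

```python
# Minimal feed-forward network trained by back-propagation.
# All sizes, data, and constants are illustrative placeholders.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.random((8, 3))                                    # 8 toy inputs, 3 features each
y = (X.sum(axis=1, keepdims=True) > 1.5).astype(float)   # toy targets

W1 = rng.normal(scale=0.5, size=(3, 5))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(5, 1))   # hidden -> output weights

for step in range(2000):
    h = sigmoid(X @ W1)                   # forward pass, hidden layer
    out = sigmoid(h @ W2)                 # forward pass, output layer
    err = out - y                         # prediction error
    delta_out = err * out * (1 - out)     # back-propagate through the output layer
    delta_h = (delta_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ delta_out)         # gradient-descent updates
    W1 -= 0.5 * (X.T @ delta_h)

print("final mean error:", float(np.mean(np.abs(out - y))))
```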

COMPONENTS and ARCHITECTURE

The NHP consists of four self-metalizing polymer films and two semiconductive polymer TFTs arranged into a cube. Inside this cube are 5-10 cc of a TCNQ/polyelectrolyte/dendrimer suspension. Mounted on the outside of the cube is a parabolic reflector attached to a plotter control mechanism; the parabolic head is a thermionic injector used to trigger nucleation in the suspension, as certain neural networks require.
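
Purely for orientation, the component inventory above can be written down as a small data structure. Every field name below is my own hypothetical label; the counts and materials come straight from the description.

```python
# Descriptive sketch of the NHP cell's component inventory.
# Field names are hypothetical; values restate the text above.
from dataclasses import dataclass

@dataclass
class NHPCell:
    self_metalizing_films: int = 4              # reference/auxiliary electrodes
    polymer_tfts: int = 2                        # working electrodes (digital side)
    suspension: str = "TCNQ/polyelectrolyte/dendrimer"
    suspension_volume_cc: tuple = (5, 10)        # analog medium inside the cube
    injector: str = "parabolic thermionic injector on plotter control"

print(NHPCell())
```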

Since the NHP is a hybrid computer, either the digital TFTs or the analog suspension can act as the driver, depending on the desired neural network. Unlike other computers, the NHP derives its determinism transcopically, which is what makes it a molecular-quantum computer. In practice this amounts to a binary system with a fuzzy system enfolded within the fractional dimensions between 1 and 0.
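
One way to read "a binary system with a fuzzy system enfolded within the fractional dimensions between 1 and 0" is a crisp bit carried alongside a fuzzy degree of membership. The sketch below is my interpretation only, not the NHP's actual encoding.

```python
# Interpretive sketch only: a crisp binary reading paired with the fuzzy
# degree it was read from. Not the NHP's actual encoding scheme.
def fuzzy_bit(analog_level: float) -> tuple[int, float]:
    """Return (crisp bit, fuzzy membership) for an analog level in [0, 1]."""
    level = min(max(analog_level, 0.0), 1.0)
    crisp = 1 if level >= 0.5 else 0      # the binary reading
    return crisp, level                   # the fuzzy degree kept alongside it

for v in (0.1, 0.49, 0.73, 1.0):
    print(v, "->", fuzzy_bit(v))
```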

The self-metalizing films are reference/auxiliary electrodes injected with stochastic resonance. Each opposing pair carries a different noise signal, which turns the suspension into a co-chaotic medium, in effect keeping it metastable and homogeneous. The TFTs are the working electrodes, mounted perpendicular to the self-metalizing films, and are injected with a collapsing or periodic waveform. As that signal cascades through the phase states, it modulates the stochastic resonance via sympathetic resonance, phase lock, handshake, or beat. This is what clamps the suspension into a molecular wire and gives a pathway its weight. Once a pathway, or filament, is formed, it acts as non-volatile memory.
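
As a toy caricature only, with no claim to model the actual electrochemistry, the clamping and weighting described above can be pictured as a weight that grows, and persists, whenever the periodic working-electrode drive and the stochastic reference-electrode noise happen to reinforce each other past a threshold. The threshold and gain below are arbitrary.

```python
# Toy caricature of pathway weighting: the weight grows on "clamping" events
# where drive and noise reinforce each other, and it persists afterwards.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 5000)
periodic = np.sin(2 * np.pi * 1.0 * t)        # working-electrode drive
noise = rng.normal(scale=1.0, size=t.size)    # stochastic reference-electrode noise

weight = 0.0
for p, n in zip(periodic, noise):
    if p + 0.5 * n > 1.2:       # clamping event: drive and noise reinforce
        weight += 0.01          # the pathway gains weight and keeps it
print("final pathway weight:", round(weight, 2))
```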

ADVANTAGES AND PROPERTIES

All the known advantages of a hybrid architecture are here, as are all the advantages of a fuzzy system. What is unique to the NHP is its enormous parallel-processing power, owing to the sheer number of filaments. Mobility and storage density are projected to be roughly 10,000-fold greater than those of a silicon chip.

The true innovation for computing is the potential for AI and the elimination of interface peripherals.

These processors, since they are neuromimetic, are ideal for biomimetic humanoid robots. Each neural network can subsumptively regulate a specific organ/subsystem. Visual, haptic, audio, kinematic, biological, power, pneumatic and hydraulic systems can be matched to the ideal neural network. Custom neural networks can be made by rearranging the components.
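
A minimal sketch of the subsumption idea, with hypothetical layer names, actions, and thresholds: each layer proposes an action for its subsystem, and a higher-priority layer suppresses the ones below it.

```python
# Minimal subsumption-style regulation sketch. Layer names, actions, and
# thresholds are hypothetical placeholders, not NHP specifications.
def power_layer(state):
    return "recharge" if state["battery"] < 0.2 else None

def haptic_layer(state):
    return "withdraw_hand" if state["contact_force"] > 5.0 else None

def kinematic_layer(state):
    return "continue_gait"

LAYERS = [power_layer, haptic_layer, kinematic_layer]   # highest priority first

def subsume(state):
    for layer in LAYERS:
        action = layer(state)
        if action is not None:      # a higher layer suppresses everything below
            return action
    return "idle"

print(subsume({"battery": 0.9, "contact_force": 7.0}))  # -> withdraw_hand
print(subsume({"battery": 0.1, "contact_force": 0.0}))  # -> recharge
```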

TRANSCOPIC DETERMINISM

In quantum computing theory it is believed that we must create quantum error-correction algorithms, or filter out the noise, to get a dependable processor.

This is resistance to chaos. The quantum foam is literally the most chaotic phenomenon there is, and resisting that chaos is the most futile thing a neuromimetic computer could do. We overlook the role stochastic noise plays in our own consciousness, so we don't consider it when designing a computer. We measure, for the most part, according to periodic or orderly methods, and we don't consider the underlying chaos in our macroscopic patterns.

The processors I designed utilize two signals, one stochastic and one periodic. These represent the chaos of the quantum foam and the periodic functions of the macroscopic brain: alpha, beta, delta, the Schumann resonance, and so on. Carrier and modulation functions are reversible depending on the desired neural network. Sympathetic stochastic resonance is the key to assigning weight to a given path; the resultant beat of these two signals meshing is what brings order out of chaos. The more this analog memory is reinforced, the more weight it has and the more order it attains. But this type of order is neither pure chaos nor pure order, as the noise and periodic signals respectively are. The periodic signal maintains what is modulated, while the stochastic noise signal is what makes the memory associative, or holographic, with respect to the rest of the suspension.
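
The role claimed for noise here is at least consistent with textbook stochastic resonance, in which a subthreshold periodic signal becomes detectable only when noise is added, and detection degrades again if the noise grows too large. The sketch below demonstrates that general phenomenon only, not the NHP's electrochemistry; all amplitudes are arbitrary.

```python
# Textbook stochastic resonance sketch: a subthreshold sine is invisible to a
# threshold detector without noise, best detected at a moderate noise level,
# and washed out again when the noise is too strong.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 20, 20000)
signal = 0.4 * np.sin(2 * np.pi * 0.5 * t)   # subthreshold periodic signal
threshold = 1.0

def output_correlation(noise_level):
    noisy = signal + rng.normal(scale=noise_level, size=t.size)
    spikes = (noisy > threshold).astype(float)   # thresholded output
    if spikes.std() == 0:
        return 0.0                               # no crossings at all
    return float(np.corrcoef(spikes, signal)[0, 1])

for sigma in (0.0, 0.2, 0.5, 1.0, 3.0):
    print(f"noise sigma={sigma:.1f}  output/signal correlation={output_correlation(sigma):.3f}")
```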

SILICON ALTERNATIVES

Excerpted from MIT Technology Review, May-June 2000 issue, by Charles C. Mann:

The end of Moore's Law has been predicted so many times that rumors of its demise have become an industry joke. The current alarms, though, may be different. Squeezing more and more devices onto a chip means fabricating features that are smaller and smaller. The industry's newest chips have "pitches" as small as 180 nanometers (billionths of a meter). To accommodate Moore's Law, according to the biennial "road map" prepared last year for the Semiconductor Industry Association, the pitches need to shrink to 150 nanometers by 2001 and to 100 nanometers by 2005. Alas, the road map admitted, to get there the industry will have to beat fundamental problems to which there are "no known solutions." If solutions are not discovered quickly, Paul A. Packan, a respected researcher at Intel, argued last September in the journal Science, Moore's Law will "be in serious danger."

Packan identified three main challenges. The first involved the use of "dopants," impurities that are mixed into silicon to increase its ability to hold areas of localized electric charge. Although transistors can shrink in size, the smaller devices still need to maintain the same charge. To do that, the silicon has to have a higher concentration of dopant atoms. Unfortunately, above a certain limit the dopant atoms begin to clump together, forming clusters that are not electrically active. "You can't increase the concentration of dopant," Packan says, "because all the extras just go into the clusters." Today's chips, in his view, are very close to the maximum.

Second, the "gates" that control the flow of electrons in chips have become so small that they are prey to odd, undesirable quantum effects. Physicists have known since the 1920s that electrons can "tunnel" through extremely small barriers, magically popping up on the other side. Chip gates are now smaller than two nanometers-small enough to let electrons tunnel through them even when they are shut. Because gates are supposed to block electrons, quantum mechanics could render smaller silicon devices useless. As Packan says, "Quantum mechanics isn't like an ordinary manufacturing difficulty-we're running into a roadblock at the most fundamental level."

Semiconductor manufacturers are also running afoul of basic statistics. Chip-makers mix small amounts of dopant into silicon in a manner analogous to the way paint-makers mix a few drops of beige into white paint to create a creamy off-white. When homeowners paint walls, the color seems even. But if they could examine a tiny patch of the wall, they would see slight variations in color caused by statistical fluctuations in the concentration of beige pigment. When microchip components were bigger, the similar fluctuations in the concentration of dopant had little effect. But now transistors are so small they can end up in dopant-rich or dopant-poor areas, affecting their behavior. Here, too, Packan says, engineers have "no known solutions."

Ultimately, Packan believes, engineering and processing solutions can be found to save the day. But Moore's Law will still have to face what may be its most daunting challenge-Moore's Second Law. In 1995, Moore reviewed microchip progress at a conference of the International Society for Optical Engineering. Although he, like Packan, saw "increasingly difficult" technical roadblocks to staying on the path predicted by his law, he was most worried about something else: the increasing cost of manufacturing chips.

When Intel was founded in 1968, Moore recalled, the necessary equipment cost roughly $12,000. Today it is about $12 million-but it still "tends not to process any more wafers per hour than [it] did in 1968." To produce chips, Intel must now spend billions of dollars on building each of its manufacturing facilities, and the expense will keep going up as chips continue to get more complex. "Capital costs are rising far faster than revenue," Moore noted. In his opinion, "the rate of technological progress is going to be controlled [by] financial realities." Some technical innovations, that is, may not be economically feasible, no matter how desirable they are.

Promptly dubbed "Moore's Second Law," this recognition would be painfully familiar to anyone associated with supersonic planes, mag-lev trains, high-speed mass transit, large-scale particle accelerators and the host of other technological marvels that were strangled by high costs. If applied to Moore's Law, the prospect is dismaying. In the last 100 years, engineers and scientists have repeatedly shown how human ingenuity can make an end run around the difficulties posed by the laws of nature. But they have been much less successful in cheating the laws of economics. (The impossible is easy; it's the unfeasible that poses the problem.) If Moore's Law becomes too expensive to sustain, Moore said, no easy remedy is in sight.

Actually, that's not all that he said. Moore also argued that the only industry "remotely comparable" in its rate of growth to the microchip industry is the printing industry. Individual characters once were carved painstakingly out of stone; now they're whooshed out by the billions at next to no cost. Printing, Moore pointed out, utterly transformed society, creating and solving problems in arenas that Gutenberg could never have imagined. Driven by Moore's Law, he suggested, information technology may have an equally enormous impact. If that were the case, the ultimate solution to the limits of Moore's Law may come from the very explosion of computer power predicted by Moore's Law itself-"from the swirl of new knowledge, methods and processes created by computers of this and future generations."

BIOMIMICRY

The NHP doesn't have to "make an end run around the difficulties posed by the laws of nature" like the myriad other silicon alternatives. Molecular, biological, and quantum computing consortiums, many of them sponsored by DARPA, may use biologically inspired algorithms, but they ultimately fail to mimic biology. Their potential so far exceeds their results, and the silicon industry feels safe because of this; even the neural networks that run on silicon fail to mimic biology because, after all, they run in a 2D digital medium.

Biomimicry is a new science which treats nature as the standard for judging the "rightness" of our innovations. Nature is acknowledged to have billions of years of R&D and an inherent superiority. Many older sciences are coming around to this humility and reverence for what nature can do, and nowhere is it more pronounced than in the computing-architecture fields. It is normal for humans to compare biology to the machine of the day: the brain is often compared to a computer, but never the other way around. A new model of the brain, the biomimicry methodology, and the NHP which this model supports may make brain and computer truly synonymous.

The following discusses the new model of the brain:

By Eric J. Lerner (author of The Big Bang Never Happened)

How can the human brain, composed of hundreds of billions of individual neurons, produce a single mind, a single consciousness? How is thought possible? These questions have motivated researchers for generations and remain unanswered. But the past few years have seen a tidal shift in understanding the brain. Instead of considering the brain as a vast computer, processing and storing information in individual neurons, says Dr. Rafael Yuste, professor of biology at Columbia, the new model depicts the brain "harnessing together ever-shifting neuronal networks, which work in a complex pattern of synchrony like musicians in an orchestra." The process of thought is a symphony produced by the brain as a whole.

Brain as computer?

Since at least the mid-1970s, the dominant model of the brain has been hierarchical, modeled on digital computers. In brief, the hypothesis held that single neurons operated by detecting features in the environment. At the base level, neurons in the optic region of the brain, for example, would detect simple features such as lines and edges. Higher-order neurons, linked to hundreds of lower-order neurons, detected more sophisticated features: a neuron might be sensitive to red balls or green squares. At the highest level, cardinal neurons would integrate sensations to recognize concepts, such as "grandmother's face." In the hypothesis, a single neuron would be sensitive to each concept; there would thus be a "grandmother's face neuron." Memories would be laid down by changes in the ways neurons were linked to each other.

The similarity to computers is clear, with neurons replacing transistors. The brain, in this view, resembles a hierarchical corporation, with low-level office workers passing information up the chain to executives at the top and getting their orders in turn passed down the chain, each dealing only with a few superiors and subordinates.

There were problems with the hypothesis from the start. First was the problem of "binding": How could information obtained by different means be bound together into a single perception? How could the separate neurons that recognized Grandmother's face, her voice, and the word Grandmother all somehow work together to create a single perception of Grandmother? Closely linked with this was the problem of how individual neurons could produce a single consciousness. In addition, the amount of information that could be stored in even the hundreds of billions of neurons in the human brain seemed grossly inadequate to account for actual human memories, which still dwarf the memory capacities of any computer. More critically, the hypothesis lacked experimental support. Individual neurons did respond to certain features, like lines or edges; however, they responded not to just one, but to a statistical mix of several such features. Nor could a neuron, with an average of 10,000 contacts, or synapses, from other neurons, respond reliably to the input of another individual neuron. There was no one-to-one correspondence between the response of individual neurons and any kind of information.

Cooperation, not hierarchy

Almost as soon as the single-neuron doctrine was formulated, an alternative viewpoint started to emerge, initially among a small number of researchers, including E. Roy John of NYU and W. R. Freeman of the University of California at San Diego. They argued that information in the brain is stored and processed only when millions of neurons work together with their electric potentials correlated or synchronized in patterns of firings at various frequencies. The large-scale electrical fields produced by the brain and measured in electroencephalograms (EEG), these researchers reasoned, can be produced only by the cooperative actions of many neurons. These cell assemblies, as they are called, are not anatomical entities but temporary functional collections of neurons, scattered across wide areas of the brain or even the entire brain, whose firings are synchronized at a given frequency. Observers cannot gather information by examining the firing pattern of a given neuron, but only by looking at the correlations--synchronous activity--among many neurons. Importantly, each neuron can simultaneously be part of several assemblies, each operating at different frequencies. By analogy, this is like a person playing one drum in time with one group of drummers and another in time with a different, faster group of drummers, or playing cello in tune with one group and an oboe with another. Cell assemblies make the problem of binding more tractable. Assemblies concentrated in auditory areas and in visual areas, for example, can overlap in the frontal cortex, the hippocampus, and other central areas, linking visual and auditory inputs together with memories into a single perception.
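
For readers unfamiliar with the cell-assembly idea, the sketch below shows the simplest version of "looking at correlations among many neurons": binned spike trains are compared pairwise, and only the neurons sharing a common drive show high correlation. The spike trains here are synthetic toys, not real recordings.

```python
# Pairwise synchrony among binned spike trains. Neurons 0-2 share a common
# drive (one toy "assembly"); neurons 3-5 fire independently.
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_bins = 6, 1000
drive = (rng.random(n_bins) < 0.1).astype(float)   # shared "assembly" drive

spikes = np.zeros((n_neurons, n_bins))
for i in range(3):                                  # assembly members follow the drive
    spikes[i] = np.clip(drive + (rng.random(n_bins) < 0.02), 0, 1)
for i in range(3, 6):                               # the rest fire on their own
    spikes[i] = (rng.random(n_bins) < 0.1).astype(float)

corr = np.corrcoef(spikes)          # pairwise correlation matrix
print(np.round(corr, 2))            # high values appear only inside the assembly
```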

The tide shifts

For years, this view remained the nearly heretical view of a small group of researchers, but in the 1990s this has begun to change. "The tide is definitely turning," explains Yuste, who trained under both Torsten Wiesel, one of the founders of the old model, and David Tank, a pioneer in the new one. "The weight of evidence in favor of a neural-network or cell-assembly model is growing, and there has been a lack of progress with the old paradigm." In particular, more researchers are coming to see that it is not the firing of a single neuron that matters, or the rapidity of that firing, but the timing relative to other neurons: which neurons are firing together, at what frequencies, and in what correlated patterns.

This growing evidence became clear at a major international workshop in 1997 at Sde Boker, Israel. "People were surprised at the extent that researchers using different methods were coming to the same conclusions," Yuste recalls. Yuste's own work is part of that body of evidence. He is developing a new technique to actually see neurons in action, using fluorescence. One of the problems in observing hundreds or thousands of neurons pulsing simultaneously is that the conventional probes are electrical and very localized; observing hundreds of neurons would take hundreds of micro-probes. However, Yuste is able to look at many neurons simultaneously by using a dye that fluoresces when the neurons fire spikes of electrical activity. Currently, he is using the brains of young rats, removed from the animal and kept alive for several hours in a nutrient bath. However, he hopes eventually to study the brains of living rats, exposed during operations.

"Our experience showed clearly that networks of neurons pulse in synchrony in a changing pattern," says Yuste, who worked together with Daniel Rabinowitz, associate professor of statistics at Columbia, to analyze the data. "We compared our results to simulations, where the neurons were pulsing randomly,and we're convinced these are real patterns. They are distributed throughout the region we're looking at (about half a millimeter wide), containing tens of thousands of neurons." Neurons react differently to input pulses they receive that are simultaneous with the output pulse they produce than to those that are not, Yuste and his colleagues found. "We are using fluorescence techniques to look at the dendritic spines which act as the inputs to a neuron, each contacting the axon carrying the output from another neuron. There are thousands of such spines for each neuron. When the spines got a pulse from another neuron, and the spine's own neuron pulsed simultaneously or within a very short time afterwards, the spine received a rush of calcium," Yuste explains. He believes that such large calcium fluxes could set off change in the spine that might make it more sensitive to pulses in the future, thus altering its behavior. Such changes in thousands or millions of neurons acting together might form the basis for memory. "Yuste's pioneering use of state-of-the-art laser-based techniques has really revolutionized this field, opening up new ways of visualizing how the brain works," comments Dr. Steven Siegelbaum, professor of pharmacology at the Center for Neurobiology and Behavior at Columbia's College of Physicians & Surgeons. "Not only do these techniques allow us to see how groups of neurons communicate together, but they also allow us to see what individual dendritic spines are doing. Previously, they were just too small to see their functioning in living tissue." A number of laboratories, including Dr. Siegelbaum's, are now applying the same techniques to related studies of the brain.

Electric harmony

While Dr. Yuste has focused his work on synapses and the spike potentials that flow across them, other researchers have shown that far more is going on in the brain. The patterns seen in EEG recordings are produced not only by the spike-like pulses of neurons but by wave-like electric fields, continuously produced by neurons at their surfaces, which spread through the space between the neurons in complex patterns. While this EEG field once was viewed as the background noise behind brain functions, evidence now indicates that it is an actual signal encoding information affecting the actions of individual neurons, helping to draw them into the synchrony that in turn produces the large-scale fields and simultaneous pulse firings.

One of the mysteries of the electric oscillations in the brain is how they are synchronized across large regions in a way that seems too tightly linked for the relatively slow pulses carried through the synapses. Roger Traub, formerly of Columbia's neurology department but now at the University of Birmingham, England, recently demonstrated that fast oscillations link neurons together directly through electric field interactions, traveling from neuron to neuron at high speed and keeping them in beat the way sound waves keep musicians in time with each other. These fast oscillations, beating at 50-200 Hz (roughly the same as the lowest two octaves on a piano), also seem to be intimately involved in the laying down of memories, Traub and his colleagues found. When the oscillations occur in synchrony over a wide area, lasting changes in the electrical response of groups of neurons occur, making it easier for such neurons to oscillate in synchrony in the future. It appears that the more the neurons oscillate in certain patterns, the easier it is for such patterns to recur, or to be recalled from memory.

If this new model is valid, observable differences should be seen in the correlation patterns of the EEG depending on what a subject is thinking, and experiments have demonstrated just that. For example, in work performed by Igor Hollander and colleagues at the Institute of Information Processing of the Austrian Academy of Sciences and at the Institute of Neurophysiology of the University of Vienna, correlation maps of a subject (a musician) were made at six different frequency bands from 1.3 to 32 Hz using 19 electrodes at different points on the subject's head. The subject performed various tasks such as reading a newspaper, reading a score, listening to text and to a Mozart quartet, memorizing the quartet, and doing mental arithmetic. Maps were generated showing where the degree of correlation or synchrony between different parts of the brain increased or decreased from the resting state in each frequency band. Remarkable differences in the patterns emerged. Correlations were much stronger at the highest frequencies while listening to text and at the lowest while listening to music. High-frequency correlations were even stronger when mentally listening to the music (memorizing it), but the pattern was quite different when reading the score.
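
The kind of measure behind such correlation maps can be sketched as band-pass filtering two channels and then correlating the results. The two channels below are synthetic stand-ins for EEG electrodes, and the band edges are merely illustrative.

```python
# Band-limited correlation between two synthetic channels that share a 10 Hz
# (alpha-band) rhythm buried in independent noise.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 256.0                                    # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(4)
shared = np.sin(2 * np.pi * 10 * t)           # shared 10 Hz rhythm
ch1 = shared + rng.normal(scale=1.0, size=t.size)
ch2 = shared + rng.normal(scale=1.0, size=t.size)

def band_corr(x, y, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return float(np.corrcoef(filtfilt(b, a, x), filtfilt(b, a, y))[0, 1])

print("8-13 Hz correlation:", round(band_corr(ch1, ch2, 8, 13), 2))    # high
print("20-32 Hz correlation:", round(band_corr(ch1, ch2, 20, 32), 2))  # near zero
```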

While a decade ago the dominant analogy for the brain was still the digital computer, today's brain models look more like a symphony orchestra or a chorus. Conscious states in this view consist of the pattern of variations in frequency, time, and space of the brain's electrical fields, generated by the correlated electrical activity of shifting assemblies of neurons, as members of a symphony orchestra or chorus work together in shifting patterns to produce a pattern of variations in frequency, time, and space of sound vibrations. Of course, the brain involves millions of "players" at any time, out of a population of hundreds of billions, and the "score" is improvised by the players collectively, like an extremely large jazz band.

The new model is still far from explaining how these shifting patterns of electrical and magnetic fields and the neurons that generate and interact with them produce the phenomenon of consciousness. There is no general agreement on this new model or its details. Yuste, for one, is skeptical that correlations really extend beyond very local areas of the brain. However, by focusing on the coordinated functioning of the brain as a whole, this approach seems to be a large step toward understanding that center of human experience.

CONSORTIUMS VS. INDEPENDENT INVENTOR

Being an independent inventor who has single-handedly conceived the NHP, I am in a position to argue this point. I am also reminded of a scene from The Fly, in which Geena Davis' character says to Jeff Goldblum's character, in reference to the teleportation pods, "You must be a genius." He replies, "I'm not a genius; the guys who made the parts are geniuses. I just put the parts together," or something to that effect.

I am aware of the value of interdisciplinary teams, and I am aware of their weaknesses: the members of any given team are specialists, and associating their respective fields of expertise is not something they are trained to do. A team may employ the skills of an in-house inventor, but he too is specialized. This problem extends to collaborations between members of a consortium. In addition, simple communication problems can arise due to time constraints or distance.

Conversely, while I have none of these problems, I lack deep expertise in any single field.

I took a class at Cornell University, Systems on a Chip: Interdisciplinary Computer Engineering, and witnessed firsthand the schism between the self-educated and the formally educated, and how the wise intentions of interdisciplinary endeavors fail. Despite the multiple professors, all the students remained specialized. I was disappointed by the class because it didn't teach anything about how a team should work together: the professors never worked together, and the individual subjects were never tied together. The grad student who organized the class and let me sit in on it saw the NHP, thought it was "great," and said he'd try to get some professors to look at it. One said that kind of technology is "at least 20 years away." Another looked at it and said it was over his head. This discrepancy in attitude and knowledge is a big concern, and it's what gives me an advantage as an independent. I left the class the day it had to break up into groups; I couldn't be in a group. The grad student, in an emphatic tone, told everyone, "This is about quantifying. Do NOT use your imaginations!"

This statement is pure blasphemy to an inventor.

The NHP was conceived, admittedly, about nine years ago in "Eureka" fashion with two words: "CRYSTAL BRAIN." It wasn't until about a year ago that it all came together, when I did some intensive studying online to find components. In that process I learned of all the other silicon alternatives and saw how little those efforts were yielding. It was my intent to adhere to the biomimicry methodology, and the reason was simple: I should be able to reverse engineer the very thing that allows me to do so, i.e., the brain. Corporate and academic circles may frown upon my methods, but they nonetheless worked. Call it an unfair advantage, but one brain is all that is needed to explain itself. Of course, when studying and associating are all that is demanded of you, being exceptionally productive is possible. I had no red tape to contend with, no people, and no deadlines.

Basically, I feel that I would be an ideal interdisciplinary team leader for the NHP or any endeavor.

AN INTERESTING ANECDOTE

Around the time I took my class at Cornell, a man named Scott J. Rosen took it upon himself to be my business advisor. He arranged a conference call for me with a consultant from Compaq to help determine the feasibility of the NHP. After a few minutes the consultant told me that what I had was "a bowl of goo," that I was "just another kid trying to reinvent the wheel," and that the NHP was "a solution looking for a problem."

I then realized that, as a representative of the silicon industry, he was either threatened or ignorant, or both.

PROPRIETARY ISSUES

The only setback to building a proof-of-concept prototype, besides finding funding, facilities, and a proper team, lies in acquiring the individual components. Polymer thin-film transistors are relatively rare; the manufacturer of the TFT the NHP would use is Opticom, and it would take an entity like ______ to sway them into supplying their TFT. The self-metalizing polymer is available through NASA. The impedance spectroscopy software (the NHP driver) is available through Gamry. The polyelectrolyte, TCNQ, and dendrimers should be available through various university sources.

APPLICATIONS

Now, obviously, the first thing ______ is interested in is encryption. The NHP could certainly handle that, but it was designed to answer decades of questions about AI, soul-catching, and massive parallel processing, and even some paranormal questions (for example, in quantum theory we should be able to turn on the NHP with direct thought).

The NHP can recreate any gate or neural network - it is universal in every sense. The NHP is ideal for all robotics applications, all physics modeling, and mathematics.