Sunday, February 12, 2012
NEURAL NANOCIRCUITS AS A MODEL FOR THE PREDICTION OF EARTHQUAKES
Willard W. Olson
August 25, 2008
ABSTRACT
The extra-genetic auto-organization of neural nanocircuits provides a
dynamic model analogous to geologic areas of high seismic activity. Information encoding mechanisms of
neurons are compared to the dynamic informational history of seismic zones. Both models are predicated upon
dynamism and functional history.
This analysis of neural and geologic structures makes it possible to
predict future seismic events using proven pattern recognition techniques on
desktop computer hardware.
INTRODUCTION
Nearly a century ago, Ramon Cajal published his monumental work
entitled, THE HISTOLOGY OF THE NERVOUS SYSTEM OF MAN AND THE VERTEBRATES. (1)
These two volumes (in French) comprise more than 1000 pages of highly
descriptive anatomical illustrations and microscopic studies of the various regions
and components of the vertebrate brain. Cajal’s work remains the bedrock of neuroscience
today. Since the early 20th
Century, however, technology has altered and generally improved the accuracy
and descriptive analysis of brains and their basic components, neurons. Cajal noted that all brains are
remarkably similar in general architecture and most particularly in their
cellular structures and interrelationships. In any animal, from a housefly to a human, the same basic components of the brain are present.
In addition to the work of Cajal and the studies on human cerebral
cortex by Lorente de No (11,12), brain maps or atlases of various vertebrates
have been developed in recent years which illustrate quite conclusively that the
anatomical descriptions of Cajal were extremely accurate and perceptive. (7)(8)
The brain is one of the most complex and dynamic systems in
existence. This statement is as
valid in application to the brain of an insect as it is for a human. Both brains are enormously complex and
as yet poorly understood. Part of
the author’s intent is to bring some clarity and simplicity to these complex
systems.
All brains have a similar gross architecture. At the lowest level of chordates, the brain is largely
composed of a brain stem (medulla oblongata), a hypothalamic area, and a
pituitary area. The simplest
vertebrate brains have a discernable structure whose theme is seen throughout
the vertebrates including man. The
idea that the human brain is unique and different is simply not true. The Atlas of an Insect Brain (18) serves well to
illustrate that even this non-vertebrate not only has a brain, but a brain with
a remarkable structure analogous to brain stem, midbrain, primitive cortical
and sensory areas, and a pituitary gland at the base of that brain. The insect brain also serves well to
illustrate our inclusion in the evolution of all life. The similarity between an insect brain
and that of higher vertebrates including man goes even deeper than some
superficial relationship between brain regions. The basic building blocks of an insect brain are neurons
just as they are in the rat, the cat, the monkey, the bird, and the human. These neurons are so similar that only
an expert might guess the genus and species of any particular neuron. There are differences between insect
neurons and human neurons, of course, but they are highly subtle and often require
massive magnification to visualize.
The similarities are such that it nearly suffices to state that brains
are brains and neurons are neurons.
It is also worth noting that modern physiology does not fully understand the operations of either. Ramon Cajal and Lorente de No were
neuroanatomists—biologists, physiologists and descriptive anatomists. In recent years especially since the
elucidation of the structure of DNA, the study of the brain has become the
playground of physicists, physical chemists, biochemists, computer engineers,
cyberneticists, psychologists, molecular biologists and most of all
mathematicians. The vision—the
map—the encompassing global integration provided by Cajal has been increasingly
ignored in favor of reductionist approaches, equations and statistical
relationships. This brew has
changed the study of the nervous system and has resulted in some very
problematic conclusions that increasingly haunt scientific progress in the
neurosciences.
Although Cajal provided a highly detailed physical description of
vertebrate nervous systems, his work did not provide information on how such
systems actually operated. The
work of Hodgkin and Huxley (5) described the generation and transmission of a
nerve impulse on the giant axon of a squid. Their
discoveries provided vital answers to the function of the nervous system. Although the Hodgkin/Huxley paradigm
was couched in extremely elaborate mathematics, it simply described the
depolarization of a neural membrane as a consequence of the translocation of
metal ions through that membrane.
They described the generation and transmission of a nerve impulse along
an axon (output fiber of a neuron) through the movement of sodium (Na+) and
potassium (K+) ions across a membrane. Although the exact mechanisms and motive factors driving ion translocation were not precisely known to Hodgkin and Huxley in 1952, it was known that these two ions (Na+ and K+) were paramount in the generation of an electrical pulse on the axon of a nerve cell.
It was also known that sodium was about 7 times more abundant than
potassium in and about neurons. It
was also known that potassium and sodium did not move through the same ion
channels of the neural membrane because of large size differences of their
atoms. The Hodgkin/Huxley formula
suggested that during the resting phase, sodium ions were actively pumped out
of the neuron at the expense of energy while potassium ions were pumped into
the neuron. As a consequence of
this action, a large zeta potential (transmembrane charge) developed across the
neural membrane with the outside of the neuron becoming partially positive
while the interior of the neuron became partially negative. A “resting” neuron, one with a large zeta potential as a consequence of ion translocation, could depolarize upon the input stimulus of a neurotransmitter, which opens sodium channels and allows sodium ions to enter the neuron or axon with a consequent reversal of membrane charge. Since unlike charges attract, the zeta
potential of a resting neuron caused sodium ions to rush into a neuron through
open sodium channels causing very rapid depolarization. This reversal of charge initiates
a wave of depolarization over the cell body of the neuron and along the axon to
other neurons. A nerve impulse is generated
as a consequence of depolarization and repolarization as sodium ions are
immediately pumped back out of the neuron at the expense of energy.
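The depolarization and repolarization cycle described above can be caricatured in a few lines of Python. This is a deliberately simplified sketch, not the actual Hodgkin/Huxley equations; the voltages, rates, and step counts are illustrative assumptions only.

```python
# Toy sketch (NOT the full Hodgkin-Huxley model): a membrane that rests
# at a negative potential, depolarizes when sodium channels open, and
# repolarizes as pumps restore the ion gradients. All constants invented.

E_REST = -70.0     # resting potential, mV (interior negative)
E_NA   = +40.0     # sodium reversal potential, mV
THRESHOLD = -55.0  # depolarization needed to open sodium channels

def step(v, na_open, rate=0.5):
    """Relax the membrane toward the sodium reversal potential while
    sodium channels are open, otherwise back toward rest (pumping)."""
    target = E_NA if na_open else E_REST
    return v + rate * (target - v)

def fire(v=E_REST, stimulus=20.0):
    """Apply a depolarizing stimulus; if threshold is crossed, run an
    open-then-pump cycle and return the trace of membrane voltages."""
    trace = [v]
    v += stimulus                  # neurotransmitter input depolarizes
    trace.append(v)
    if v >= THRESHOLD:             # channels open: Na+ rushes inward
        for _ in range(5):
            v = step(v, na_open=True)
            trace.append(v)
        for _ in range(10):        # pumps expel Na+ at the cost of energy
            v = step(v, na_open=False)
            trace.append(v)
    return trace

trace = fire()   # peaks above 0 mV (charge reversal), then returns near rest
```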
A few years after the great work of Huxley and Hodgkin pointed toward
the importance of ion translocation, Albert Lehninger described in great detail
the linkage between ion translocation, the production of cellular energy as
ATP, and aerobic respiration in his seminal book, THE MITOCHONDRION. (9) The
addition of the work of Lehninger to that of Huxley and Hodgkin left little
doubt about the active and highly dynamic nature of cell respiration at the
mitochondria and its relationship to the dynamic nature of nervous activity. The energy produced at the mitochondria drives
the Na+/K+ATPase pumps translocating sodium and potassium ions in a living,
dynamic neuron. Without
mitochondrial activity, neural activity is not possible. Mitochondria are an integral
aspect of neural depolarization.
THE OBJECTIVE
A brain is a massively parallel, dynamic, distributed and
self-organizing system. The
fundamental building blocks of a brain, neurons, also provide an example of
this type of physical system. The
neuron could be viewed as a nanosensory array that is the living micro-equivalent
of a sensor-monitored, seismically active geographical area, a local seismic
array. (10)(20) These two seemingly different entities
are, in fact, analogous. The
purpose of this paper is to provide the rationale necessary to characterize the
dynamic information from both a neuron and a seismic array in a way that will
allow recognition of this information.
It is important to understand that the earth and most certainly
seismically active regions are dynamic systems. All dynamic systems have an historic behavior—historic
behavior that is predictive of future behavior. Understanding neural dynamics as well as the dynamic
behavior of geologic sensory arrays provides a rationale to predict future
seismic behavior in earthquake prone regions using proven computer image
recognition technology. The
ability to recognize real-time streaming data from geologic sensory arrays
allows the association of the seismic history of prior events to current
seismic behavior and to predict future seismic events minutes or even hours
before these events occur. (15)
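The proposal above, matching real-time streaming data against the recorded history of prior events, can be sketched as a normalized-correlation scan in Python. The signature shape, stream values, and match threshold below are hypothetical, not drawn from any actual seismic record.

```python
# Illustrative sketch: slide a stored pre-event "signature" over a live
# sample stream and flag windows that resemble the historical pattern.

def correlate(a, b):
    """Normalized correlation of two equal-length sample windows."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def scan(stream, signature, threshold=0.9):
    """Return the stream indices where the live window matches the
    stored pre-event signature above the threshold."""
    w = len(signature)
    return [i for i in range(len(stream) - w + 1)
            if correlate(stream[i:i + w], signature) >= threshold]

signature = [0, 1, 3, 1, 0, -2, 0]            # hypothetical precursor shape
stream = [0, 0, 0, 1, 3, 1, 0, -2, 0, 0, 0]   # live feed containing it
hits = scan(stream, signature)                # index 2 matches the signature
```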
IMPLICATIONS OF COMPUTATIONAL NEUROSCIENCE
During World War II, efforts were made to use concepts of neural
systems in the recognition of radar signals. These early efforts toward the development of Artificial
Neural Networks generally took the form of adding “digital tap weights” to
repetitive analog waveforms of known radar signals. It was assumed—assumed—that a repetition of a known signal
should be given a stronger weight as a means toward future identification. This concept more than anything else gave birth to the notion
that learning or memory must result from an increase in the strength or weight
of connections in an artificial neural network system OR in the synapses of a dynamic
living brain. This concept has
become one of the great blunders in the history of science!
Early attempts at radar or image recognition were largely exercises in
mathematics and applied physics with little to no input from neuroscience. Since the limited experience with
digital tap weighted analog systems seemed to aid identification of unknown
radar signals, the idea of increasing weight with learning gained
credibility. One of the earliest
efforts to create an artificial neuron was based upon the idea of learning as a
function of increasing digital weights or the strength of synapses. The artificial neuron of McCullogh and
Pitts (13) was conceived as a digital neuron whose synapses increased in weight
with learning. In addition, these
developers considered the neuron to be a natural analog to a calculator summing
inputs or creating AND/OR gates to generate an output at the axon based upon
the threshold (zeta potential) of the neuron. This creation did not work either in simulation or
real world applications owing to the fact that the physics involved differed
little from that of a fluorescent light bulb.
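The McCulloch and Pitts unit described above is easy to state precisely: inputs are summed and compared against a fixed threshold, and with suitable weights the same unit realizes AND and OR gates. A minimal sketch:

```python
# A minimal McCulloch-Pitts threshold unit: fire (1) iff the weighted
# input sum reaches the threshold, otherwise stay silent (0).

def mp_neuron(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# With unit weights, the threshold alone selects the logic function.
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
```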
Part of the difficulties that have arisen in the application of
neuroscience to practical application in image recognition may relate to the
nature of the experiments performed by Hodgkin and Huxley. They were not trying to show how an
axon works in a living system.
They had excised the giant axon of a squid from its normal biological
milieu and created a controlled laboratory experiment allowing the measurement
of ions and the transmembrane potential of this isolated axon. The nervous tissue used by Hodgkin and
Huxley was a stable rather than a dynamic system. The threshold
(zeta potential) of this excised axon had to remain a constant as a basis for
ionic measurements. This may have
been the only means to determine the flux of metal ions. The concept of McCulloch and Pitts should
suggest that it is misleading to expect dynamic activity from static
conditions. They were ahead of
their time in 1943 and should be given credit for realizing that the neuron and
its activities are paramount in the emergent properties of the nervous system.
After Hodgkin and Huxley were awarded the Nobel Prize for their
contribution to science, their work was widely cited and utilized. It may also have been expanded beyond
the intent of the principal investigators. In the years immediately after the publication of this work,
the “All or Nothing Principle” emerged in many forms in computational
neuroscience. Most biologists and
neuroscientists knew intuitively that this concept was wrong, but few had any
idea of why they felt that way.
The “All or Nothing Principle” implies that a neuron would either fire
or not fire and would always fire with a constant intensity. Such a neuron had to be digital in
concept and based upon a uniform or static threshold condition. Neither situation normally exists in
living systems. This may be why
biologists and neuroanatomists were alarmed by this concept. What do we mean by “digital in
concept”? The distinguishing
feature of a digitized system is that it exists as a two-state system: ON/OFF, charge/no charge, 0 or 1. In the case of a
neuron, a digitized neuron implies that living neurons can exist in only two
states –ON or OFF. Such systems
cannot be dynamic unless the rate of change in ON/OFF states is so fast that it
can mimic actual dynamic activity.
In a semiconductor system with clocking rates in the gigahertz range,
digital states can simulate—simulate—a dynamic system. Even at semiconductor clocking rates,
this does not mean that such systems are actually dynamic—they simply appear to
be that way. A living
biological system cannot exist as a digital system for two fundamental
reasons. First, the rate of biological reactions cannot
attain rates that approximate those of even the earliest semiconductor
systems. The fastest known
biological rates are ~10 kilohertz.
This is the maximum rate of oscillation of the mitochondria of the
flight muscle of the blowfly. (9) It
is the fastest biological clocking rate known in the natural world. Even at this rate, it is much too slow
to simulate dynamic activity from a digital system in a neuron. The dynamic activities of living neurons cannot exceed the
oscillatory rate (respiratory rate) of their innate mitochondria. Second, the “All or Nothing
Principle” appears to have been based upon the static threshold of the excised,
in vitro, squid axon described by Huxley and Hodgkin. The known existence of inhibitory neurons in living
systems continually affecting the state of the threshold of other neurons
suggests that the “All or Nothing Principle” is imaginary and not germane to
the description of living neurons or neuron systems.
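The clocking-rate argument above can be put in plain numbers. Taking the text's ~10 kHz biological maximum against even a modest 1 GHz semiconductor clock (the 1 GHz figure is an illustrative choice, not from the text):

```python
# The rate gap in numbers: the fastest known biological oscillator
# (~10 kHz, blowfly flight-muscle mitochondria) versus a 1 GHz digital
# clock. The digital clock rate is an illustrative modern value.

BIOLOGICAL_MAX_HZ = 10_000          # ~10 kHz, per the text
DIGITAL_CLOCK_HZ  = 1_000_000_000   # 1 GHz semiconductor clock

ratio = DIGITAL_CLOCK_HZ / BIOLOGICAL_MAX_HZ   # 100,000-fold gap
```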
There is absolutely no denial of the power of digital technology. This power is evident in nearly every
aspect of our daily lives from personal computers to large flat screen
televisions to mobile telephones with gigabyte memory capacities. Digital systems and brains seem to work in
very different ways. For more than
50 years, immense efforts have been made to “find” computer analogies in the
brain with virtually no success.
This lack of success most certainly arises from the probability that
computational neuroscience researchers are seeking something that does not
exist in nature. It is useful to
look at a reverse proposition.
Would we expect a contemporary microprocessor to “self-organize” from
its chemical components without the need for highly skilled human circuit
designers? The answer is obviously
no as it relates to computer design.
Self-organization and continual re-organization are ongoing activities
of all vertebrate brains. Clearly, the modus operandi of brains and computers is very different, although each may produce similar
results.
It has been suggested that we might be able to conceive, design and construct
a computer similar to a human brain with equal or superior qualities. Is this realistic? How about starting out with the brain
of a housefly? They do marvelous
feats of acrobatic flying as well as having the capacity to follow molecular
traces in the air that are measured in parts per billion. It is a fact that modern science does
not fully understand the operations of the brain even those of a housefly. Possibly the learning process toward
that understanding might begin by looking at the brain for clues rather than
assuming that the brain must operate as a human invention—the digital computer. It might seem rather flippant, but it
is relevant to suggest that houseflies are not mathematicians as computational
neuroscience might suggest. They
don’t fly as a consequence of solving differential equations or by calculating
angles and trajectories through trigonometry. If these mathematical abilities of houseflies seem
preposterous, it is reasonable to look for other ways of solving the problems
confronted by houseflies in their daily lives. It is not
reasonable to expect living systems to operate mathematically. Mathematics is a human invention—a
symbolic language—rather than a discovery of natural physiology. Since the operations of the neurons of
an insect brain appear to be nearly identical to those of all vertebrates including
man, an understanding of one might be applicable to all brains. We must understand the normal
biological operations of a neuron.
That is one of the basic aims of this paper. (16)
The concept of Artificial Neural Networks seemingly holding such
promise in the late 80’s and early 90’s has not fulfilled expectations. Part of the difficulty may
lie in the fact that ANNs had little relationship to biology or the nervous
system and were conceptually flawed.
They were more closely related to the mathematics of chaos theory and weather forecasting than to any aspect of a functioning brain. In this mathematical view, an artificial brain was a highly
connected set of connecting points or nodes. This massive connectivity seemed to borrow from the
parallelism and connectivity of a brain; yet, most Artificial Neural Network
schemes had no reference to the actual structure of any brain or even
consideration for neurons—only points of connection. These schemes were further envisioned to operate upon the
notion that pathways and connecting points (nodes) would strengthen with
“training” and hence increase their “weight”. These systems were, in short, superficial attempts to mimic
erroneous conceptions of the brain.
It is no wonder that they did not work. Simple tasks that even the simplest brain could perform in a
matter of seconds required hours and hours of computing time on desktop
computers before settling upon some semblance of an average solution. The incredibly slow operation of neural networks was prima
facie evidence that no brain could or would operate in such a manner. In practice, ANNs required “normalization” periods during
this grinding routine to a potential answer to a simple question. What is normalization? Since this concept of memory depends
upon a continual increase in connection strength or weight, the synaptic weights
of active ANNs become unmanageably large after many iterations and have to be
“normalized” by dividing all calculated numbers by a common denominator—usually
100 or even 1000. (17) Contemporary theories of human memory
formation particularly those involving the hippocampus appear to be based upon
the same completely discredited concepts as Artificial Neural Networks. Vertebrate brains,
especially fast learning human brains, cannot operate in such a manner. Human brains cannot be
“normalized” as if they were a computer.
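The runaway-weight problem and the “normalization” rescue described above can be sketched directly. The gain factor and denominator below are illustrative; the point is only that strengthen-with-repetition updates grow without bound:

```python
# Sketch of the runaway-weight problem: under "strengthen with each
# repetition" updates, weights grow without bound and must periodically
# be rescaled. Gain and denominator are illustrative values.

def train(weight, repetitions, gain=1.1):
    """Strengthen the weight by a fixed gain per repetition, as the
    increase-with-use schemes described in the text would."""
    for _ in range(repetitions):
        weight *= gain
    return weight

def normalize(weights, denominator=100.0):
    """The rescue step the text describes: divide everything by a
    common denominator once the numbers become unmanageable."""
    return [w / denominator for w in weights]

w = train(1.0, 100)   # grows past 13,000 after only 100 repetitions
```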
If this sort of scheme was even remotely valid, we could not wear
clothes, glasses, shoes, or deal with any repetitive stimulus. Normal human life as we know it could
not exist as every input stimulus—every sensation—would produce greater and
greater effects upon the human nervous system. In spite of these facts, scientific conferences are
regularly held with the sole subject of Computational Neuroscience. It is for this reason that the author
stated earlier that this concept has become one of the greatest blunders in the
history of science. (16)
Brains, specifically synapses, must operate in precisely the opposite
manner. Response must decline rather than increase with continual
repetition of a stimulus. The
“weight” or the magnitude of depolarization of synapses does not increase with
learning and memory formation—it declines. It is basically for this reason that humans become bored and
tired of repetitious stimuli ranging from food to even sexual partners. Humans cherish and pursue novelty
often taking great risks to do so.
Human behavior and physiology are predicated upon a quest for new
experiences and new horizons.
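The opposite behavior argued for here, response declining with repetition (habituation), is equally easy to sketch. The decay factor is an illustrative assumption:

```python
# Sketch of habituation: the synaptic response DECLINES with each
# repetition of an identical stimulus, rather than growing.

def habituate(repetitions, response=1.0, decay=0.7):
    """Return the response magnitudes over repeated identical stimuli."""
    history = []
    for _ in range(repetitions):
        history.append(response)
        response *= decay   # each repetition weakens the next response
    return history

responses = habituate(5)    # monotonically decreasing sequence
```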
Is it any wonder that computational neuroscience and Artificial Neural
Networks have been such an abysmal failure?
In the early 1980’s, Freeman began a series of experiments on the
representation of information in the olfactory lobe of the rabbit brain.
(3) He surgically implanted 8X8
electrode arrays (sensor arrays) upon the olfactory bulbs of a number of
rabbits. He then exposed these
rabbits to different odors—fresh cedar bedding chips, alfalfa rabbit pellets,
dog or cat urine, etc.—and recorded the field effect patterns of electrical
activity generated by individual rabbits in response to different odors. Each odor produced a unique
electrical pattern from their 8X8 grid.
The finding that a distinct odor evoked a distinct electrical field pattern on the olfactory bulb was expected, as was the fact that each different odor produced its own pattern or signature. The most significant and completely
unexpected aspect of this experiment, however, resulted from the observation
that different rabbits represented the same odor with different electrical
patterns. The patterns were
consistent signatures in any given rabbit, but these odor/electrical pattern
combinations were unique to individual rabbits. It is quite probable that Freeman did not expect this result
as it implied that each rabbit brain was absolutely unique and different in its
representation of sensory information.
Each rabbit brain seemed to be completely different from all other
rabbits.
Freeman is one of the principal advocates of computational neuroscience
and one of the founders of the International Neural Network Society. (4) It seems probable that he did not
expect the outcome of these rabbit olfactory experiments. The rabbit experiments clearly established that each rabbit
brain represents information—in this case, olfactory information—in an
absolutely unique form. It is an
understatement to suggest that this experiment is not consistent with any
conceivable mathematical representation of information in a brain—rabbit or
otherwise. What does this
experiment imply? In its most
basic form, this experiment suggests that every rabbit brain is distinctively
and uniquely self-organized. It
means that even the brains of rabbits are “one of a kind”. This experiment seems to make the
solution of this set of problems insurmountable. There is, however, a completely different way to interpret
this experiment.
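Freeman's finding, as described, can be mimicked by a toy model in which each rabbit's odor-to-pattern mapping is random but fixed: a given odor evokes a consistent signature within one rabbit yet differing signatures across rabbits. The 8x8 grid and odor come from the text; the seeds and binary patterns are illustrative assumptions.

```python
# Toy model of the Freeman result: a fixed random mapping per rabbit
# from odor names to 8x8 binary electrode patterns.

import random

def make_rabbit(seed, grid=8):
    """A hypothetical rabbit: its odor-to-pattern mapping is random
    (seeded) but cached, hence consistent for that rabbit."""
    rng = random.Random(seed)
    cache = {}
    def respond(odor):
        if odor not in cache:
            cache[odor] = tuple(rng.randrange(2) for _ in range(grid * grid))
        return cache[odor]
    return respond

r1, r2 = make_rabbit(1), make_rabbit(2)
same_within = r1("cedar") == r1("cedar")     # consistent within one rabbit
differs_across = r1("cedar") != r2("cedar")  # unique across rabbits
```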
At the time Freeman undertook his rabbit experiments, the sequencing of
various vertebrate genomes had not been attempted or completed. In the early 80’s, the central
dogma of biology was based upon the notion of “one gene/one protein”. Humans were assumed to possess at least
100,000 protein encoding genes corresponding to the roughly 100,000 known human
proteins. This entire gene/protein
dogma has now evaporated as the Human Genome projects have determined that
humans have ~25,000 protein encoding genes. During the same period, the number of known human proteins
has nearly doubled to about 200,000.
As a consequence of the usefulness of the rabbit in research and as a
source of monoclonal antibodies, the rabbit genome has also been
characterized. The rabbit genome
has slightly fewer than 15,000 protein encoding genes.
In the early 80’s, the rabbit genome could have been conceived as being
much larger just as the human genome was once regarded. Clearly both the rabbit and human have
only about ¼ the number of genes that were anticipated. This reality has profound implications
for Freeman’s rabbit dilemma.
There are not enough rabbit genes to encode every synapse in the rabbit
brain. The final termination of
the hundreds of millions of cortical synapses of the rabbit brain has to be
absolutely random. These synapses could
not have been structurally oriented or programmed by genetics to represent
information in a standardized mathematical manner. This is the reason that every rabbit in Freeman’s experiment
was unique and represented the information of a given odor in a distinctive,
individual manner. Freeman could
not possibly have foreseen this relationship. It is a vast understatement to suggest that the massive numbers
of cortical synapses of the human brain must self-organize in an absolutely
unique manner in every human brain.
Even the brains of genetically identical twins must be different in the terminal
patterns of cortical synapses. The
reason is very simple. There are
hundreds of billions of synapses just in the cerebral cortex of the human
brain. There is not enough
genetic information, even if humans actually had 100 thousand genes, to dictate
the precise location and structure of even a small fraction of those terminal
connections. This realization does not imply that the
organization of a brain is random or haphazard. The structure of all brains is determined by genetic
design. This genetic architecture
has a very ancient origin as evidenced by the similarity of the design of all
vertebrate brains and the cells, neurons and neuroglia making up those
brains. There is no question that
virtually all of the brain is structurally determined by genetics with the
exception of the final terminations of cortical synapses. This concept goes to the heart of the
nurture versus nature debate that has raged for at least a century. This debate is so important that it
must be addressed. There should be
no question that basic animal behavior has a genetic basis that may be partially
modified by subsequent learning. Human
brains are remarkably similar to all other vertebrates in basic design. We may note that birds act like
birds, cats act like cats, dogs act like dogs, monkeys act like monkeys, and
humans act like humans. They do not
need to be taught to attain these species distinctive behaviors. All vertebrates are born with genetic
information encoded in the synapses of the sub-cortical, nuclear parts of the
brain that have developed before the animals become subjected to the external
environment. The genetic basis of behavior is a
virtual certainty scientifically although the implications of this reality may
be politically controversial.
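The counting argument above can be made concrete with simple arithmetic. The synapse total below takes the text's "hundreds of billions" as 300 billion purely for illustration:

```python
# Genes versus cortical synapses: even under the old 100,000-gene
# assumption, each gene would have to specify millions of terminal
# connections. The 300 billion synapse figure is illustrative.

GENES_ASSUMED = 100_000            # the old one gene/one protein estimate
GENES_ACTUAL = 25_000              # the Human Genome result, per the text
CORTICAL_SYNAPSES = 300_000_000_000

per_gene_actual = CORTICAL_SYNAPSES / GENES_ACTUAL    # 12 million per gene
per_gene_assumed = CORTICAL_SYNAPSES / GENES_ASSUMED  # still 3 million
```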
It is certain from the exhaustive studies of Cajal and Lorente de No (1)(11)(12)
that the connections between the thalamus and the cerebral cortex as well as
from cortical area to cortical area are precise and predictable. These connections as well as all major
pathways are not random. Human
cerebral cortex is precisely ordered into six layers. Input from the thalamic nuclei arrives mainly at the fourth layer of the cortex, while output is carried chiefly by the fifth-layer pyramidal neurons. The general structure of cerebral cortex must be
genetically determined in the human brain. The fact that all human brains as well as all vertebrate
brains are remarkably alike structurally is logical proof of genetic
design. The final terminations of
the massive numbers of connections (synapses) in the cerebral cortex, however,
must be random. It appears that a precise design of synaptic patterns involving the placement of billions of synapses would far exceed the capacity of any vertebrate genome.
Freeman’s rabbits are a good example of the previous statements. At first glance, these facts appear to make any understanding of either a rabbit or a human brain impossible. On the contrary, they make it possible. With these realizations, it becomes possible to describe the mechanisms of information representation at
the synapses and the functional role of neurons. These mechanisms are universal and are as applicable to a
housefly as they are to a human.
THE NEURON AND ITS SYNAPTIC CONNECTIONS
The highly detailed work of Suga et al. (19)(21) on the auditory cortex in
echo-location of the bat as well as Hubel and Wiesel (6) on the development of
the visual cortex of the cat illustrate quite conclusively that the synaptic
connections in the sensory areas of brains in bats and cats respectively adapt
to become templates or filters for incoming sensory data. The nuclear areas of the vertebrate brain that encode behavior also adapt (learn) to become templates and filters through a mechanism nearly identical to the adaptation of the sensory cortex of the bat. In the case
of the auditory cortex of humans, adaptation of the synapses of this region
provides a template for language recognition. In other words,
the “recognition” of a sound pattern of human language by the template or
filter of the primary auditory cortex in the temporal lobe of the human brain
results in an instant association with the words and meanings of the pattern of
sounds. In the case of the bat,
the template of frequencies from an adapted auditory cortex provides instant
association with flying insects, trees, and other structures. The automatic dynamism moves seamlessly
from one time domain to the next.
Images flow in time in both the human and bat examples.
Suga has shown that the auditory cortex of a bat adapts or specializes
to become responsive to a very narrow frequency of input—to within a few
hundredths of a hertz. In this
sense, the synapses of the auditory cortex of a bat adapt by becoming tuned to
a very specific, narrow frequency band.
It is convenient to think of learning and adaptation as a window that
progressively closes until only a very specific frequency may enter and
activate the synapse. Stated
differently, these adapted synapses on the neurons of the auditory cortex of
the bat decrease in magnitude while increasing in speed of activity—giving a
shorter time of activation—a short window—a narrow window—of activity. This adaptation of the sensory cortex creates a template or
filter that becomes highly discriminating. It also becomes sharply focused in time and space. This is another way to describe the
development of entropy. Physical
systems tend to move toward stable states at the lowest expenditure of
energy. A similar effect occurs in the visual cortex of a cat during initial cortical development after birth, before and shortly after its eyes open to receive external visual stimuli. In short, the synapses of sensory cortex
that develop to be templates or filters progress with experience and learning
from an acceptance of a broad range of input and high magnitude to a
progressively narrower input until these synapses only respond to a very
specific frequency of sound or wavelength of light. The net effect of an adapted primary sensory cortex is that
it provides nearly instant temporal and spatial images of input. The bat can instantly identify an
insect or a cat can distinguish a color or image. Human sensory cortex operates in an identical manner.
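The "closing window" picture of adaptation sketched above, an acceptance band that narrows around the trained frequency while response magnitude falls, can be illustrated as follows. The bat-like center frequency, initial bandwidth, and shrink factor are illustrative assumptions:

```python
# Sketch of the closing-window view of adaptation: each exposure narrows
# the accepted frequency band and lowers the response magnitude.

def adapt(center_hz, bandwidth_hz, magnitude, exposures, shrink=0.5):
    """Narrow the acceptance window and weaken the response per exposure."""
    for _ in range(exposures):
        bandwidth_hz *= shrink
        magnitude *= shrink
    return bandwidth_hz, magnitude

def responds(freq_hz, center_hz, bandwidth_hz):
    """The adapted synapse only activates for input inside its window."""
    return abs(freq_hz - center_hz) <= bandwidth_hz / 2

bw, mag = adapt(center_hz=61_000, bandwidth_hz=1_000, magnitude=1.0,
                exposures=10)
# bw is now below 1 Hz: only a sliver around 61 kHz still activates it
```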
The above synaptic effects described by Suga, Hubel and Wiesel can be
illustrated by the example of the semiconductor floating gate device. Normally this type of transistor is
utilized as a digital device, but it can also be described theoretically as an
analog device in which the distinction is not just the presence of charge but
the precise amount of charge on the gate.
The difference is simply this: a digital floating gate device is a two-state device, with either a charge on the gate [1] or no charge on the gate [0]. An analog floating gate device is also a fuzzy device, as
it can represent any value between zero [0] and one [1]. The amount of charge on the fuzzy
(analog) gate is directly proportional to the time required to charge and
discharge the gate—the larger the charge on the gate—the greater the number of
electrons on the gate, the greater the time required to charge or discharge
that gate. The physics of the semiconductor floating gate device is well established, and the correspondence between this description of a synapse and a floating gate device argues strongly for this concept of the synapse. A synapse is directly
analogous to a semiconductor floating gate device operated as a fuzzy, analog
device. Because of these
relationships, a synapse is a living representation of time—time in relation to
the time values of all other synapses within a given brain—certainly all the
synapses upon the surface of a given neuron. This is the source of the dynamic representation in all
brains. Since all neurons
discharge to other neurons, the concomitance of time values flows smoothly into
another time continuum.
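The digital/analog floating-gate distinction above can be sketched numerically. The constants below (charging rate, thresholds) are illustrative assumptions, not device physics:

```python
# Sketch of the floating-gate analogy: a digital gate reads only 0/1,
# while an "analog" (fuzzy) gate encodes a continuous charge level.
# Constants are illustrative, not measured device parameters.

CHARGE_RATE = 1000.0  # electrons moved per unit time (assumed constant current)

def charge_time(n_electrons: float) -> float:
    """Time to place n_electrons on the gate: proportional to the charge."""
    return n_electrons / CHARGE_RATE

def digital_read(n_electrons: float, threshold: float = 500.0) -> int:
    """Two-state reading: charge present (1) or absent (0)."""
    return 1 if n_electrons >= threshold else 0

def analog_read(n_electrons: float, full_scale: float = 1000.0) -> float:
    """Fuzzy reading: any value between 0 and 1, proportional to charge."""
    return min(n_electrons / full_scale, 1.0)

# A larger charge means a longer charge/discharge time, so the charge
# level is a direct representation of a time value.
for q in (250.0, 600.0, 1000.0):
    print(q, digital_read(q), analog_read(q), charge_time(q))
```

The digital reading discards everything but one bit; the analog reading preserves the full charge value and, through `charge_time`, its equivalent time value.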
Images or associations are linked together by their confluence in time
and flow seamlessly as a stream of time and its associated events.
The famous studies of Pavlov leading to the concept of the Conditional
Reflex also present an example of the organization of information in a brain
on the basis of time—this could be called concomitance—events occurring in the
same time domain. Pavlov rang a
bell at the same time he fed his dogs.
In anticipation of food and eating, the dogs began salivating. After many trials of ringing a bell and
feeding the dogs—at the same time—the act of ringing a bell alone resulted in
salivation just as if they were being fed even though they did not receive
food. Why? The bell and food became incorporated
into the same neural network of synapses and neurons linked together by their
confluence of activation. The
activation of any part of the larger image must result in the activation
(association) of the whole image.
The concept of content addressability has time as a key.
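The idea that time is the key of content addressability can be sketched as a toy store in which events falling in the same time window are bound into one image, so that any member retrieves the whole. The class names and the window width are illustrative assumptions:

```python
# Toy content-addressable memory keyed by time: events occurring in the
# same time window (Pavlov's bell and food) are bound into one image.

WINDOW = 1.0  # seconds; events closer than this are "concomitant" (assumed)

class TimeKeyedMemory:
    def __init__(self):
        self.images = []  # each image is a set of co-occurring events

    def observe(self, events_with_times):
        """Bind all events falling within the same time window into one image."""
        events_with_times = sorted(events_with_times, key=lambda et: et[1])
        image, t0 = set(), None
        for event, t in events_with_times:
            if t0 is not None and t - t0 > WINDOW:
                self.images.append(image)
                image = set()
            image.add(event)
            t0 = t
        if image:
            self.images.append(image)

    def recall(self, cue):
        """Activating any part of an image activates the whole image."""
        return [img for img in self.images if cue in img]

m = TimeKeyedMemory()
m.observe([("bell", 0.0), ("food", 0.2)])  # a conditioning trial
print(m.recall("bell"))  # the bell alone retrieves the bell+food image
```

Presenting "bell" returns the full image containing "food", mirroring how the conditioned dogs salivated to the bell alone.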
Synapses upon neurons of the auditory cortex of a bat adapt to encode a
precise frequency. In the process
of adapting to a given precise value, the actual magnitude of the response of the
synapse must decline to the narrow window of the specific frequency of
activation. Upon activation, this
synapse denotes a time value analogous to the magnitude of local
depolarization. In this sense,
biological synapses are switches—analog gates—transistors. These synapses encode a magnitude
value, a temporal value as well as a position in space relative to all other
synapses in the same time domain both on a given neuron and all active neurons
of that brain. A neuron may have
hundreds or even thousands of synapses upon its surface.
Since the synapse is a biological molecular device, the relationship of
magnitude to time derives directly from the numbers of ions that move across the
neural membrane. Because of this
relationship, the magnitude of synaptic depolarization is directly proportional
to the time of local depolarization.
Therefore, every synapse on the surface of a neuron represents a
magnitude value equivalent to a corresponding time value. All synapses represent a position
in time/space in relation to all other synapses in a given neural image. These relationships may also be
described biochemically.
Living brains consist of a mixed population of neurons. Some of these cells synthesize and
release neurotransmitters that stimulate action or depolarization on other
receiving neurons. The other major
category of neurons is inhibitory.
These neurons synthesize neurotransmitters producing an opposite effect
to the excitatory neurons. The net
effect of this mixed population is the modulation of brain activity. The mixed inhibitory/excitatory
population of neurons allows a vast expansion of memory representation and information
processing because this interaction may create immense variation of the
threshold (zeta potential) of every neuron. Neuron complexity of form and function greatly expands
the storage and functional capacities of the brain because it results in vast
numbers of populations of neurons relating to different conditions and a
virtually infinite combination of active synapses and a nearly infinite
variation of cell threshold levels.
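The modulation described above can be sketched minimally: the net input from a mixed population is the excitatory sum minus the inhibitory sum, compared against a variable threshold. All values below are illustrative, not physiological measurements:

```python
# Sketch of excitatory/inhibitory modulation: a neuron fires only when
# the net input in a time window crosses its (variable) threshold.

def net_input(excitatory, inhibitory):
    """Net depolarization from a mixed population of inputs."""
    return sum(excitatory) - sum(inhibitory)

def fires(excitatory, inhibitory, threshold):
    return net_input(excitatory, inhibitory) >= threshold

exc = [0.4, 0.3, 0.5]  # excitatory synaptic magnitudes (illustrative)
inh = [0.2, 0.3]       # inhibitory synaptic magnitudes (illustrative)

# The same inputs produce different outcomes as the threshold varies,
# which is how the mixed population expands representational capacity.
print(fires(exc, inh, threshold=0.5))  # True  (net 0.7 >= 0.5)
print(fires(exc, inh, threshold=0.9))  # False (net 0.7 <  0.9)
```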
Owing to the great complexity of the histology of the nervous system, it
is useful to confine this discussion to a general and somewhat superficial
account of synaptic differences.
A generalized excitatory synapse stimulates depolarization upon the
surface of a receiving neuron. The biochemistry of this
excitatory synapse begins with the input of a neurotransmitter from another
cell. A neurotransmitter is
essentially a hormone that is released by one cell to affect another. These local neuronal hormones are very small
molecules and are released at the terminal endings of the branches of the
outgoing fibers of neurons (axons). Noradrenalin and acetylcholine are good examples
of excitatory neurotransmitters.
Once an excitatory neurotransmitter is released at the synapse, it
reacts with a membrane bound receptor molecule initiating an intracellular
chemical reaction in the receiving synaptic area. This chemical reaction may cause the receiving
neuron to reverse membrane polarity—to reverse local charge—to depolarize. It is useful to visualize a neurotransmitter as being
analogous to a key while the membrane bound receptor molecule acts as a
specific lock. This lock/key—hormone/receptor—relationship
is nearly universal in physiology as a mode of hormone action.
In nearly all cases of local excitation at the synapse, a
neurotransmitter molecule reacts with a membrane-bound receptor called adenyl cyclase. The synaptic reaction is: neurotransmitter (NT) + adenyl
cyclase (AC) + ATP (adenosine triphosphate) → cyclic
AMP (cyclic adenosine monophosphate or cAMP) + ADP (adenosine
diphosphate).
This reaction initiates the synaptic response. The key active product from this
reaction is cyclic adenosine monophosphate—cyclic AMP—cAMP. Cyclic AMP catalyzes the reactions
opening the sodium channels, allowing sodium ions to flow into the local
synaptic area resulting in depolarization or the reversal of charge. This reaction requires energy in the
form of adenosine triphosphate—ATP—the universal carrier of energy. ATP is produced from the oxidation of
pyruvic acid (1/2 glucose molecule) at the mitochondria of the neuron. Energy is released from the terminal
phosphate bond of ATP resulting in the formation of ADP + P + energy. This energy is released by the action of
various membrane bound enzymes called ATPase and catalyzed by cAMP. Ion channels for the sodium ion (Na+)
remain open as long as cyclic AMP is available to stimulate the reactions. Therefore, the synaptic reaction can be viewed in quantum
terms: the time of channel opening (the length of the NT/AC reaction) determines
the number of Na+ ions—particles—that enter the neuron.
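This quantum reading reduces to one proportionality: ion count equals influx rate times channel-open time, so the magnitude of depolarization and a time value are interconvertible. The influx rate below is an assumed placeholder, not a measured constant:

```python
# Quantum reading of the synaptic reaction: the Na+ channel stays open
# while cAMP persists, so ion count = influx rate x open time.
# The rate constant is an illustrative placeholder.

NA_INFLUX_RATE = 1e6  # Na+ ions per second through open channels (assumed)

def ions_entered(open_time_s: float) -> float:
    """Ions admitted during an NT/AC reaction lasting open_time_s seconds."""
    return NA_INFLUX_RATE * open_time_s

def open_time_for(n_ions: float) -> float:
    """Invert: the magnitude of depolarization encodes a time value."""
    return n_ions / NA_INFLUX_RATE

print(ions_entered(0.002))    # ions admitted during a 2 ms opening
print(open_time_for(2000.0))  # the time value recovered from the magnitude
```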
These reactions are terminated by another chemical compound called
phosphodiesterase—PDE.
Phosphodiesterase breaks the ring structure of cyclic AMP and
neutralizes this compound. As a
consequence of the PDE/cAMP reaction, sodium channels close and membrane bound
ionic pumps begin to pump sodium ions back out of the cell as a means of
restoring the charge balance. All
of these reactions require energy.
The only source of that energy is from cell respiration on the
mitochondria of the neuron. The many
thousands of tiny, bacteria-like mitochondria are the only source of energy in
living cells as well as being the only part of cells requiring oxygen. We literally breathe air to supply
mitochondria with oxygen. Complex
life would not be possible without mitochondria. Brain function would not occur without the mediation and
energy produced by mitochondria.
The bottom line of the reactions of the synapse is that it is a
molecular switch—a molecular transistor.
It is a molecular
transistor with a nearly infinite range of activity as each sodium ion entering
the neuron upon activation represents a variable state of this living
switch. The generation of cyclic
adenosine monophosphate—cAMP turns ON the reaction while
phosphodiesterase—PDE—turns that reaction OFF. Direct evidence
of this relationship can be experienced by drinking coffee or tea. Caffeine and theobromine (tea) are
analogs (molecules that have a very similar structure) of PDE. Caffeine is a competitive inhibitor of
phosphodiesterase—PDE. Caffeine
inhibits the breakdown of cAMP by interfering with PDE thereby increasing the
molecular lifetime of cAMP and consequently increasing the activity of all
synapses. As a direct result, coffee
and tea are stimulants of the brain, as all excitatory synapses are equally
activated by PDE inhibition.
Once again, it is certain that the synapse is an analog switch—a
transistor—that equates local depolarization to a time value. Time is the filing—the
organizing—system of all brains.
Adaptation, memory and learning result from the synthetic relationship
of the molecular lifespan (time of reaction—generation to destruction) of
cyclic AMP and the destruction of cyclic AMP by phosphodiesterase—PDE. Memories are encoded as the dynamic
balance between factors activating and inactivating synapses. Global memories are composed of
millions of synapses that are active in the same time domain.
Because the generation and action of cyclic AMP requires the
expenditure of energy, the laws of entropy suggest that PDE will be synthesized
to minimize that energy utilization.
Therefore, adaptation, memory or learning must result in a decline of
synaptic magnitude.
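The ON/OFF balance of cAMP generation and PDE degradation can be sketched as first-order kinetics: cAMP accumulates while the synapse is driven and decays at a rate set by PDE, so more PDE means a shorter cAMP lifetime and a smaller synaptic magnitude. The rate constants below are illustrative assumptions, not measured values:

```python
# Toy kinetics of the cAMP on-switch vs. the PDE off-switch:
#   d[cAMP]/dt = k_gen (while stimulated) - k_pde * [cAMP]
# Higher PDE activity shortens the cAMP lifetime, lowering the peak
# synaptic magnitude, which is the proposed substrate of adaptation.

def camp_trace(k_gen, k_pde, stim_end=1.0, t_end=3.0, dt=0.001):
    """Euler-integrate the cAMP concentration; return the peak value."""
    camp, peak, t = 0.0, 0.0, 0.0
    while t < t_end:
        gen = k_gen if t < stim_end else 0.0  # generation only while driven
        camp += (gen - k_pde * camp) * dt     # degradation by PDE
        peak = max(peak, camp)
        t += dt
    return peak

low_pde = camp_trace(k_gen=1.0, k_pde=0.5)
high_pde = camp_trace(k_gen=1.0, k_pde=5.0)  # adapted synapse: more PDE
print(low_pde, high_pde)
# More PDE -> smaller peak cAMP -> smaller synaptic magnitude.
```

The same model read in reverse describes caffeine: inhibiting PDE lowers the effective `k_pde`, lengthening the cAMP lifetime at every synapse at once.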
APPLICATIONS TO EARTHQUAKE PREDICTION
Although living brains are extremely complex, it is certain that the
principles of operation are robust and relatively simple. The vertebrate brain is not a
recent evolutionary development.
The basic architecture and function of a brain extends back in time more
than 500 million years. The
fundamental structure and the principles of operation appear to have not
changed through the millions of years of natural selection. This suggests that the operating
principles of brains are rather basic and simple. This is precisely what the study of the synapse reveals. It is an analog switch. The brain is a vast organization of these fundamental analog
switches. The observation that
brains rarely fail to function suggests that their structure and function
must be robust and reliable.
Massive parallelism accounts for some of this reliability, but it does
not account for all of it. The
fundamental physiology of distributed, self-organizing systems accounts for
most of this fundamental reliability.
The brain illustrates that reliability arises from simplicity. As a general rule of engineering, the
more complicated a machine, the more likely that machine is to fail. Clearly, the brain—any brain from an
insect to a human—must operate from a fundamental simplicity.
For the sake of practical application, a neuron could be considered to
be an array of nanosensors (synapses) that can be activated in any conceivable
order, outputting a stream of data to other neurons. In this highly simplified concept, it is useful to consider
the possible combinations of the numbers used in the Powerball
Lottery. The odds of winning the Powerball
lottery are one (1) chance in 146,107,962. Most cortical pyramidal neurons have at least 100 synapses;
some may have more than 1000.
Because of the possible combinations, even one neuron has the capacity
for immense variation in its encoded information. Most neural images seem to involve many thousands or even
millions of different neurons. As
a result, it is possible to state that the capacity of a human brain—even a
single neuron in that brain—is virtually infinite.
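The combinatorics behind these claims can be checked directly. Assuming the Powerball rules of the era (5 white balls from 55 and 1 red ball from 42, an assumption consistent with the quoted odds):

```python
# Reproducing the combinatorial figures referenced in the text.
import math

# 5 white balls from 55 and 1 red ball from 42 (assumed 2008-era rules)
powerball_odds = math.comb(55, 5) * 42
print(powerball_odds)  # 146107962 — matches the quoted 1 in 146,107,962

# A neuron with 100 synapses, any subset of which may be active in a
# given time domain, has 2**100 possible activation patterns:
print(2 ** 100)  # ~1.27e30 distinct combinations
```

Even at the low end of 100 synapses, the activation-pattern count dwarfs the lottery's combination count by twenty orders of magnitude, which is the sense in which a single neuron's capacity is "virtually infinite."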
The U.S. Patent entitled, METHOD AND APPARATUS FOR PATTERN
CLASSIFICATION USING DISTRIBUTED ADAPTIVE FUZZY WINDOWS (15), was conceived as
representing a simulation of a single neuron and its synapses. It provides a fast, reliable method of
encoding and recognizing naturally occurring analog waveform data. All naturally occurring
information from light to sound to handwriting exists as an analog waveform or
can be transformed into such a form.
Handwriting in any language from Chinese to English can be converted
into a highly stable waveform through the simple formula of X/T + Y/T, where X
and Y are movements of the pen in horizontal and vertical coordinates as a
function of time (T). In practice,
this application of the Fuzzy Neuron is represented as a database of any
desired size depending upon memory allotment. (Figure 3) The
handwriting waveform extends from the beginning to the end of the
movement of the pen. In other
applications, the beginning to the end of the waveform segment may consist of a
sampling of a time interval. For
instance, a continuous output signal may be sampled at regular intervals—every
20 seconds or even every 30 minutes in slow changing data streams. After the input of many samples into
the database, the sampled waveforms become stabilized as a predicted set of
points in the database. These operations occur within a few
hundred microseconds with this recognition algorithm on a personal computer
platform. With the input of highly
similar signatures, the fuzzy window surrounding a given waveform will begin to
narrow toward the average of the input until it responds to (recognizes) only a new
input falling within that narrow window.
In practice, the database “neuron” could be designed with any number of
synapses from 6 or 8 up to potentially thousands. It has been observed from experience, however, that this
system tends to work best in the range of 32 to 128 datapoints.
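The two ideas in this section can be sketched together, under assumptions since the patent's internals are not given here: (a) the stated X/T + Y/T conversion of a pen trajectory to a waveform, and (b) a per-point adaptive ("fuzzy") window that narrows toward the running mean of accepted inputs. The class and parameter names are hypothetical:

```python
# Sketch of the Fuzzy Neuron described in the text (an assumption-laden
# reading, not the patented implementation): (a) the X/T + Y/T waveform,
# and (b) per-point windows narrowing toward the mean of accepted inputs.

def to_waveform(samples):
    """samples: list of (x, y, t) pen positions with t > 0.
    Returns the X/T + Y/T value at each sample, as the text describes."""
    return [x / t + y / t for x, y, t in samples]

class FuzzyNeuron:
    def __init__(self, n_points, initial_width=10.0, shrink=0.9):
        self.n = n_points
        self.means = [0.0] * n_points
        self.width = initial_width  # the fuzzy window half-width
        self.shrink = shrink        # window narrows with each match
        self.trained = False

    def present(self, waveform):
        """Train on, or attempt to recognize, an n-point waveform."""
        assert len(waveform) == self.n
        if not self.trained:
            self.means, self.trained = list(waveform), True
            return True
        if all(abs(v - m) <= self.width for v, m in zip(waveform, self.means)):
            # accepted: move means toward the input, narrow the window
            self.means = [(m + v) / 2 for m, v in zip(self.means, waveform)]
            self.width *= self.shrink
            return True
        return False

neuron = FuzzyNeuron(n_points=4)
w = to_waveform([(1, 2, 1.0), (2, 2, 2.0), (3, 1, 3.0), (4, 1, 4.0)])
neuron.present(w)            # first presentation stores the template
print(neuron.present(w))     # True: falls inside the (narrowing) window
print(neuron.present([99, 99, 99, 99]))  # False: outside the window
```

Repeated similar inputs shrink `width`, so the "neuron" becomes progressively more discriminating, mirroring the narrowing synaptic windows described for the bat's auditory cortex.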
The foregoing pages illustrate that although the physiology of the
nervous system is extremely complicated, the actual mechanisms of action are
rather simple. Any given neuron
has the potential to output a data stream derived from any combination of
hundreds or thousands of synapses.
The order of input to that neuron makes absolutely no difference. That is one of the essential criteria of “self-organizing”
systems. The only important point
is that inputs to a neuron occur within the same time domain. Computational neuroscientists might
like to consider all inputs into a neuron occurring at the same instant as
being summed. That conclusion,
however, substitutes summation for confluence in time. Inputs occurring together are simply
part of the same image. Once it is
recognized that neurons need not act as calculators but serve more
appropriately as a means to integrate datapoints in the time domain, it becomes
possible to understand the nervous system at least elementally. For many years, researchers in
artificial neural networks have discussed the issue of “scalability” in the
nervous system without realizing that neurons performing calculations are not
scalable. Such neurons become the
endpoint. The brain must act as an
enormously complex set of circuits rather than a device that has inputs and
outputs similar to a calculator.
The Powerball example suggests the lack of any need for neurons to act
as calculators as a means of generating informational diversity or in the
creation of images.
The neuron provides a clear model toward understanding and predicting
the behavior of complex parallel systems.
Seismic zones, particularly areas with implanted sensors, are such
parallel systems. At its simplest,
a dynamic neuron constitutes an area where a number of sensors are located. An active seismic zone is a dynamic
geological area. When sensors are
placed in a region of seismic activity, that area becomes directly analogous to a
neuron. Just as it
does not matter as to the placement of synapses or their number upon the
surface of a neuron, the number, location and even type of sensor placed in an
active seismic zone does not change the fact that these two entities are
similar in constitution.
Taiwan is divided into 18 grids that are monitored by an array of
sensors. The grids are as follows:
1. Chiayi—41 sensors
2. Changhua—28 sensors
3. Hsinchu—23 sensors
4. Hualien—69 sensors
5. Keelung—13 sensors
6. Kinmen—2 sensors
7. Miaoli—127 sensors
8. Nantou—72 sensors
9. Penghu—3 sensors
10. Pingtung—77 sensors
11. Taichung—109 sensors
12. Tainan—104 sensors
13. Taipei—118 sensors
14. Taitung—58 sensors
15. Taoyuan—20 sensors
16. Kaohsiung—97 sensors
17. Yilan—71 sensors
18. Yunlin—128 sensors
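The grid inventory above can be held as a simple table; the totals below follow directly from the counts given in the text (place-name spellings normalized to the usual romanizations):

```python
# The 18 Taiwan monitoring grids and their sensor counts as listed in
# the text (spellings normalized).
GRIDS = {
    "Chiayi": 41,   "Changhua": 28, "Hsinchu": 23,   "Hualien": 69,
    "Keelung": 13,  "Kinmen": 2,    "Miaoli": 127,   "Nantou": 72,
    "Penghu": 3,    "Pingtung": 77, "Taichung": 109, "Tainan": 104,
    "Taipei": 118,  "Taitung": 58,  "Taoyuan": 20,   "Kaohsiung": 97,
    "Yilan": 71,    "Yunlin": 128,
}

print(len(GRIDS))           # 18 grids
print(sum(GRIDS.values()))  # 1160 sensors in total
```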
In every case, the grid structure of the seismic zones of Taiwan is
ideally suited for the development of an earthquake prediction system based
upon the sensor arrays of each region as well as the unique geology of those
regions. Taiwan is an extremely
complex region geologically. This
fact tends to result in unique local geological conditions as different types
of rocks, sediments, fault lines, thickness of rock distinguish each of these
regions. This local identity
constitutes a unique “fingerprint” for each of these sensor monitored areas.
(2)
CONCLUSIONS
As a consequence of the complexity of the geology of Taiwan, each of
the 18 grids should be considered as a dedicated recognition unit. The large numbers of fault lines
throughout Taiwan, the differences in type of minerals, the thickness of those
rock layers, suggests that every sector of this island is unique. Since the areas that are now monitored
by sensors are not only unique geologically, the actual location and type of
the sensors are distinctive to each seismic grid. In this sense, every sensor grid location in Taiwan must be
considered as having a unique and different signature. Although microscopic, every pyramidal
neuron of the cerebral cortex of a brain is also unique in precisely the same
way as a geological unit in Taiwan.
Owing to the fact that the terminal distribution of every cortical
synapse upon the surface of a neuron must be random and self-organized,
individual neurons and their hundreds or thousands of synapses must be
unique. In short, a seismic array
and a cortical neuron although vastly different in size are quite similar in
basic form, structure and potential behavior.
It is vital to this theoretical conception that the correspondence of
potential behavior between a sensor grid in an active seismic area and a
neuron is fully understood. This
similarity of behavior between a hypothetical neuron and a seismic area could
be visualized from the standpoint of the “independence” of each synapse or
sensor in these complementary arrays.
This idea of “independence” suggests that the input of every synapse on
a neuron as well as every sensor in a seismic array must be independent of all
other sensors or synapses.
Hypothetically, if we view a neuron and a seismic array as having an
identical number of sensors or synapses, then these entities become
functionally identical. Any
combination of sensors or synapses in time and space may be activated to
produce an output from the sensory array or from a hypothetical neuron. In a seismic array just as with a neuron, the information of
the system lies in the activation sequence and the relationship between
individual sensors. It should be
obvious that the output from such an independent array must consist of a
“signature” of these dynamic sensor relationships much like a human handwritten
signature that has been transformed by the X/T + Y/T algorithm. In practice, the output from the sensors of seismic grids
should be converted into a positive (to the right and above the x and y
coordinates) simplified waveform using this simple algorithm.
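One plausible implementation of "converting sensor output into a positive simplified waveform" is to sample every sensor at one instant and shift the values so all are non-negative, preserving the sensor-to-sensor shape that carries the signature. This is an assumption about the intended procedure, not a published method, and the readings are invented:

```python
# Hypothetical conversion of one time-slice of a sensor grid into a
# positive "signature" waveform, per the text's suggestion that the
# information lies in the relationships between sensors.

def grid_signature(readings):
    """readings: raw values from every sensor at one sampling instant.
    Shift so all values are non-negative, keeping sensor-to-sensor shape."""
    lo = min(readings)
    return [r - lo for r in readings]  # all >= 0; relative shape preserved

raw = [-0.3, 0.1, -0.05, 0.4, 0.0]  # illustrative readings, any units
sig = grid_signature(raw)
print(sig)            # non-negative waveform, e.g. roughly [0.0, 0.4, 0.25, 0.7, 0.3]
print(min(sig) >= 0)  # True
```

Because only differences between sensors survive the shift, the signature is insensitive to a uniform offset across the grid, which is consistent with the claim that relationships, not absolute values, carry the information.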
Why are these relationships important? The physical laws governing the behavior of seismic areas
must be universal. This application
of physical laws applies equally to biological systems. Since the geology of the seismic areas
must remain essentially the same over time, all movements must occur within the
constraints of the type and thickness of rocks together with the forces driving
tectonic plates. All of these elements
could be conceived as a given—certainly the relationship of these
elements—geology and tectonic forces. The informational content possible from any given
physical neuron is dependent upon the physical structure of the neuron (i.e., geology
of seismic area), the sequential activation of individual sensors (synapses)
and their interrelationship to each other. As a consequence, the information output from such a
neuron OR a seismic area must be identical if the sequence of sensors
(synapses) and their relationships with other sensors are recreated. If this were not true, biological
systems could not and would not retain informational relationships as
memory. The memory of seismic
events must be encoded within the four-dimensional structure of a geological
area in a similar manner as a consequence of the physical laws. Since the desired historical
information from the past behavior consists of the relationships between
sensors, their location is immaterial.
The processed samples can be viewed as simple two-dimensional
waveforms—signatures. As a
consequence of these relationships, an earthquake of a given magnitude at a
given latitude and longitude MUST result from very similar events in its
generation. This history is
recorded in the relativity between sensors in all geographical areas. There is no difference between this
definition of seismic history and memory in a neural system. Both are memory if a recreation of
events results in similar output.
The brain is often considered to be an association engine in the sense
that memories are linked together in time to form an association of events. As a memory device, any brain
would be worthless unless it associated an anticipated event with past
information. The past output
of the sensors from the various seismic grids of Taiwan provides a means to
predict future seismic behavior as well as to associate that future behavior
with potential location and magnitude of the event. This is precisely the same concept as the association of
information in a biological brain.
Each of the seismic units in Taiwan would require a dedicated computer
with adequate memory (~8 gigabytes) dedicated to the analysis of the ongoing data
stream from the sensors of that grid.
All sensors, regardless of type (ground motion, pressure,
temperature, or any other), can, if read out linearly using the above
algorithm, produce a “waveform” reflecting the activity of sensor relative to
sensor at any given moment in time.
Because of the nature of the recognition system (15), it makes
absolutely no difference what type of sensor or its location within the
monitored grid. The important
information—the signature—of the sensory grid is contained in the dynamic
relationship between sensors of the area.
The output from each of these sensor grids must be considered in the
time domain rather than as output from sensor #1, then #2, etc. It is expected that current
monitoring of these grids is linear. Once the software database is established, each
regional history must be encoded in the database. This historical database may be generated by analysis of the
sensor output in the hours and minutes before a significant earthquake. The sampling rate should be
determined by the unique nature of every sensory grid. This procedure must then become
continuous and automatic after the historical database has been
established. Each of the
historical signals may be associated with the time and location before a
significant seismic event.
Therefore when these signals are subsequently recognized, a probability
of location of activity as well as the potential for an earthquake of an
anticipated magnitude can be estimated.
In practice, the
recognition monitoring of each of these sensor grids would be based upon a
given sampling rate, e.g., every minute, every 5 minutes, or possibly longer
depending upon the dynamic history of the region.
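Putting the pieces together, the proposed monitoring cycle can be sketched as a loop: sample the grid, form a signature, look it up in the historical database, and report any associated precursor event. Every name, number, and the matching rule below are illustrative assumptions, not real seismic data:

```python
# Hypothetical monitoring cycle for one seismic grid, per the text:
# sample at a fixed rate, form a signature, compare it with historical
# precursor signatures, and report the associated location/magnitude.

def match(sig, template, tol):
    """A signature matches if every point lies within the fuzzy window."""
    return all(abs(s - t) <= tol for s, t in zip(sig, template))

# Historical database: precursor signatures tagged with the event that
# followed them (entries are invented for illustration).
HISTORY = [
    {"signature": [0.1, 0.5, 0.2, 0.8], "location": "Hualien", "magnitude": 5.8},
    {"signature": [0.9, 0.1, 0.4, 0.3], "location": "Taitung", "magnitude": 4.9},
]

def monitor_step(current_signature, tol=0.05):
    """One sampling step: return any anticipated event, else None."""
    for record in HISTORY:
        if match(current_signature, record["signature"], tol):
            return (record["location"], record["magnitude"])
    return None

print(monitor_step([0.12, 0.48, 0.22, 0.79]))  # close to the first template
print(monitor_step([0.0, 0.0, 0.0, 0.0]))      # no precursor recognized
```

In a deployed system the `HISTORY` entries would come from the archived sensor records preceding past earthquakes, and `tol` would be the narrowed fuzzy window learned from those records.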
It is acknowledged that many geophysicists do not believe that predictions
can be made in seismic areas. The
current concept considers several matters that may not be possible with current
sensing techniques. The proposed
system relies upon all signals regardless of magnitude and their relationship
to each other—their relativity—rather than abrupt change in individual sensor
data. The earth is dynamic. Seismic areas are the most dynamic of
all geological regions. Taiwan is
much more dynamic in its periodic movements than California or Alaska. The high frequency of seismic events in
eastern Taiwan makes this area ideal for the application of this image
recognition technology.
This approach has several advantages in addition to the probability of
actual success. The
computing power necessary to accomplish the recognition and prediction of
seismic events in the various regions of Taiwan, and presumably other regions of
the seismically active world, is small and very affordable. A modern PC or an iMac will suffice
with more than adequate processing power and memory to accomplish this task. The difficulty lies in the building of
the database of historical data prior to previous earthquakes. This data has been archived by the
Central Weather Bureau of Taiwan.
Many years of historical data from the sensor arrays are available for
development of the necessary databases.
The building of these databases will require the work of skilled
developers as they peruse the historical sensory records of these seismic areas
prior to seismic events. In
practice, this should not be technically difficult. If an hour's warning is desired, then the data emitted from
the sensory array in the hour or hours before a significant earthquake needs to
be examined. If the human eye can
see the patterns in the historical data, a desktop computer operating with this
system will recognize that data and associate it with location of the probable
event and its potential magnitude in real-time.
REFERENCES
1. Cajal, S.R. y, Histologie du Système Nerveux de l'Homme et des Vertébrés, 2 vols. (in French), Paris: Maloine, 1911
2. Chen, C-H., Geological Map of Taiwan, Central Geological Survey, Ministry of Economic Affairs, 2000. http://eng.wra.gov.tw/public/Data/gh012_htm
3. Freeman, W.J., The Physiological Basis of Mental Images, Biological Psychiatry, vol. 18, no. 10, 1983
4. Freeman, W.J., A Physiological Hypothesis of Perception, Perspectives in Biology and Medicine, Summer, pp. 561-593, 1981
5. Hodgkin, A.L. and Huxley, A.F., Currents carried by sodium and potassium ions through the membrane of the giant axon of Loligo, Journal of Physiology 116:449-472, 1952
6. Hubel, D.H. and Wiesel, T.N., Receptive Fields, Binocular Interaction, and Functional Architecture in the Cat's Visual Cortex, Journal of Physiology, vol. 160, pp. 106-154, 1962
7. Karten, H.J. and Hodos, W., A Stereotaxic Atlas of the Brain of the Pigeon (Columba livia), foreword by Walle J.H. Nauta, The Johns Hopkins Press, Baltimore, MD, 1966
8. Kruger, L., Saporta, S. and Swanson, L.W., Photographic Atlas of the Rat Brain: The Cell and Fiber Architecture Illustrated in Three Planes with Stereotaxic Coordinates, Cambridge University Press, 1995. ISBN 0-521-41342-7
9. Lehninger, A.L., The Mitochondrion: Molecular Basis of Structure and Function, W.A. Benjamin, New York, Amsterdam, 1964
10. Liang, W-T., Huang, B-S., Hiu, C-C., and Kao, H., Broadband Array in Taiwan for Seismology (BATS): The current status and future development, FDSN Meeting, 2003
11. Lorente de No, R., Studies on the Structure of the Cerebral Cortex. I. The area entorhinalis, J. Psychol. u. Neurol. 45: 381-438, 1933
12. Lorente de No, R., Studies on the Structure of the Cerebral Cortex. II. Continuation of the Study of the Ammonic System, J. Psychol. u. Neurol. 46: 113-177, 1934
13. McCulloch, W.S. and Pitts, W., A Logical Calculus of the Ideas Immanent in Nervous Activity, Bulletin of Mathematical Biophysics, vol. 5, pp. 115-133, 1943
14. Minsky, M. and Papert, S., Perceptrons, Cambridge, MA: MIT Press, 1969
15. Olson, W.W., U.S. Patent #5390261, Methods and Apparatus for Pattern Classification using Distributed Adaptive Fuzzy Windows, February 1995
16. Olson, W.W. and Huang, Y-W., Toward Systemic Neural Network Modeling, Proceedings IEEE/INNS International Joint Conference on Neural Networks (IJCNN-89), Washington, D.C., 1989
17. Rosenblatt, F., Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, Washington, D.C.: Spartan Books, 1961
18. Strausfeld, N.J., Atlas of an Insect Brain, Springer Verlag, Berlin, New York, 1976
19. Suga, N., Neural Computation for Auditory Processing, in Proceedings of the First Annual INNS Meeting, 1988, p. 276
20. Wu, Y-M. and Kanamori, H., Development of an Earthquake Early Warning System Using Real-Time Strong Motion Signals, Sensors 2008, 8, 1-9
21. Xiao, Z. and Suga, N., Reorganization of the auditory cortex specialized for echo-delay processing in the mustached bat, PNAS, February 10, 2004, 101(6): 1769-1774