
Xavier High School? Holy shit, Kurt Vonnegut was part of the X-Men.
 
[video=vimeo;132833445]https://vimeo.com/132833445[/video]​
 
Jacobi’s Inspirational Poster Of The Week

11709611_740186976090022_2067746599819530289_n.jpg


@Jacobi
 
If anyone appreciated that one, I thought it would have been you.
Is that all our friendship means to you!?

:pout:

Well, maybe that was- wait you think we're friends?

130.gif
 


[video=youtube;5FjWe31S_0g]https://www.youtube.com/watch?feature=player_embedded&v=5FjWe31S_0g[/video]​
 
[MENTION=5667]Jacobi[/MENTION]


tumblr_mwod9rofEX1r4gei2o2_400.gif
 

[MENTION=5807]AJ_[/MENTION] @Kgal

This relates to what all of us have been talking about in different ways.


Time’s Taboos:
Dirty Thoughts on Systems, Syntropy, and Psi




Classical physics, with its totally determinative, forward-in-time, billiard-ball causation, requires sweeping anomalies like psi under the rug, not to mention resigning ourselves to an absence of higher meaning and direction in the universe.

Even the local islands of order allowed within the framework of dynamical systems theory that emerged in the middle of the last century with the work of Ludwig von Bertalanffy, Ilya Prigogine, and many others seem (to some) like disappointing consolation prizes in a universe still largely governed by the second law of thermodynamics.

Thus in an effort to bridge science and spirituality and transcend bleak mechanistic materialism, a lot of anti-materialist writers are now tweaking thermodynamics in ways that make time appear more symmetrical, the universe less disorderly, and consciousness more central.

An interesting recent example is the work of Ulisse Di Corpo and Antonella Vannini, who have resurrected mathematician Luigi Fantappiè's mid-20th-century concept of "syntropy," a postulated countervailing principle to entropy, drawing systems toward complexity, coherence, and order.

Syntropy is retrocausal: Future nodes of convergence and harmony, or “attractors,” exert a pull on the past, according to Di Corpo and Vannini.

On the molecular level, special properties of hydrogen bonds (the “hydrogen bridge” discovered by Wolfgang Pauli) make water a uniquely syntropic medium, capable of organizing itself and serving as the basis for the emergence of complex, anti-entropic biological systems out of the entropic, prebiological matrix.

In humans and other sentient organisms, emotion acts as a signal current from future attractors; love is a signal of being on a harmonious, life-conducive path, whereas anxiety signals deviation from it. (Thought, by the same token, reflects signals from the past, based on learning and experience.)

In this view, as we move toward the future state of order, information is increased—or at least, it does not lose ground in the information/entropy see-saw.

The authors draw on a wide range of research, from quantum mechanics to systems theory to findings in parapsychology, to support their argument.

They point for instance to anomalies that seem to indicate consciousness’s power to reverse or inhibit entropy.
Robert Jahn and Brenda Dunne’s famous experiments with random event generators at the Princeton Engineering Anomalies Research (PEAR) lab showed that directed attention reduced randomness in the machines; this sort of effect has been found in many different types of experiments in many different laboratories.

The authors also invoke Rupert Sheldrake’s arguments about “formative causation”—the notion that there is an extra-genetic template guiding the development of complex organisms and preserving a “memory” of past experiences of a species.

Presaging and hovering over syntropy theory and other ‘complementaristic’ attempts at a more meaning-congenial synthesis are Carl Jung’s theories of synchronicity and archetypes, which were, in their day, also intended to supplement the cold meaningless thermodynamic universe with a sense of meaning, purpose, and direction.

Jung described the collective unconscious as “a pre-existent psyche that organizes matter”; synchronicity, his proposed ‘acausal’ connecting principle, was perhaps what we would now call a retrocausal principle, in which events are, as in syntropy, drawn toward some future coherence.

Archetypes, the nodes of this coherence, are much like Plato’s “ideal forms” and Di Corpo and Vannini’s attractors—patterning structures latent within the collective unconscious and giving direction to our lives and fates.





Such ideas appeal to our human love of balance and symmetry: The expansive, dissipative past, born in a primordial explosion, seems to demand a receptive orderly destiny to cozily balance things, lest it all end in chaos.

But is that really the case?
Is it really impossible to account for complexity and order, explain psychic anomalies, and make a home and a role for consciousness without departing from a traditional systems framework?

Balance Within Imbalance

In thinking about systems and complexity, I have always taken inspiration from Eric Jantsch’s breathtaking 1980 summa, The Self-Organizing Universe, which showed how entropy-exporting (or “dissipative”) systems arise and flourish and complexify—and give rise to meaningful order—within the traditional principles of thermodynamics.

Dissipative systems generate complex emergent forms, including not only the complex forms of galaxies, animal and plant life, and the brain, but also the regularities of social existence and universal symbolic structures related to our life as humans, including the most profound cultural symbols.

I think systems theory, and its more recent offshoots like chaos theory, fractal geometry, and so on, can actually go a long way toward explaining the “balance within imbalance” of the cosmos without invoking new complementary principles.



For example, orthodox systems theory demands no syntropic attractor to account for the geometrically perfect, chambered shell of a nautilus, and doesn’t even suggest there is a blueprint for it within the mollusc’s DNA.

A very specific schedule of protein synthesis and chemical reactions triggered by DNA transcription gives rise to cellular structures, which give rise to further structures, in a kind of recursive cascade of emergent forms of increasing complexity; the adult form of the animal, including the intricate structure and pattern of its shell, is a more or less predictable outcome of its growth and energy exchange with its environment.

What’s more, even mechanisms for ‘Lamarckian’ morphological change over the generations in response to life events and environmental pressures are now being revealed through epigenomics: Changes to the cellular environment alter gene expression, with no need to invoke something like Sheldrake’s morphogenetic fields.

The symphony of molecular and cellular processes is mindbogglingly intricate, but there is no reason or need to think that the final completed form of the animal is (retro)causative in some abstract Platonic fashion, or that there is any prototype for the nautilus either ahead in the future or out there in some nonlocal informational ether.





Attractors as chaos theory understands them (versus in syntropy theory) are regularities that emerge as a result of multiple interacting variables that produce feedback loops in the "phase space" of a complex system; these exert a kind of gravitational attraction toward a certain form, like a whirlpool, but there are no blueprints for them.

It’s just that the myriad mutually interfering/balancing forces create a regularity of outcomes that looks in hindsight somehow intentional, orderly, and even intelligently designed.

It is in this sense that I invoked the concept of attractors in thinking about how misrecognized psi leads to apparent synchronous occurrences in our lives; the difference from Jung's concept of synchronicity or from Di Corpo and Vannini's concept of syntropy is that the attractor, the meaningful pattern, is a result, not a cause.

One of Jung’s important insights about religion was that humans personify abstract functions and forces in order to conceptually manipulate them using the familiar metaphors of social interaction (“negotiating” with gods and spirits, for example).

Yet his archetypes are themselves sort of artificial personifications—or you might say, mechanizations—of universal human psychic and experiential regularities.

The fact that humans everywhere experience certain common themes like heroism and motherhood and the ironic self-undercutting of the “trickster” arguably just reflects the regularities of the mindbogglingly complex systems (within systems within systems…) of human life and mind (including psi, in the case of the trickster), not the machine-like patterning and organizing activity of a preexistent organizing psyche.

I am suggesting that there is no real blueprint for our unfolding “out there” in the collective unconscious, or in the future as understood by syntropy theory.

We are radically free agents, and it seems important not to lose sight of this in our attempts to rescue order and meaning (and meaningful anomalies) in the universe.

Syntropy and related ideas may be reifying the regular outcomes of systems whose complexity just happens to exceed our human capacity to grasp.

Penetrating Time

Where I strongly agree with both syntropy theory and Jung’s theory of synchronicity, however, is that I think consciousness plays a crucial, decisive role in making our future and shaping reality.

Many interpretations of quantum theory insist upon this.
And I also strongly agree that the crucial X-factor missing from the standard thermodynamic picture is specifically the future’s ability to affect the past via consciousness (i.e., psi).

Systems and attractors, even if they are not determinative, are indeed atemporal, and a future systems theory will need to accommodate consciousness’s ability to penetrate the veil of time.

I have been suggesting on this blog that some portion of the uncanny regularity we detect in our lives arises from how we unconsciously cast meaning forward and reel it in, responding to our own future potentialities.

This may include even “we” as observers of other, natural and biological systems (like the slimy systems that produce molluscs and seashells)—and thus we may even, without knowing it, be co-creating the systems of the natural world.

In other words, our fate as well as the fate of systems under our purview and observation emerge from a pattern of interaction with the field of future potentialities that we unconsciously detect and respond to, ever misrecognizing (or 'misunderestimating') the creative role of consciousness in the whole picture. (I even wonder if some of the effects of globally changing biological systems described by Sheldrake, such as lab animals on one continent learning a new task more readily after conspecifics are trained on that task on another continent, may reflect the role of human consciousness, i.e. the knowledge and intentionality of the experimenters, in shaping those systems.)





Yet, while psi appears to be an interaction with the future (and past), I would argue it is not symmetrical with entropy, and not some kind of natural, born complement, like yin to yang.

This may seem like a quibble, but it actually makes a world of difference, because it restores an instability or imbalance—indeed, incoherence—that needs to be there.

Noncoherence, the nonidentity of things with themselves—what the Buddhists call “no self” and what Slavoj Žižek calls “parallax”—is essential to the openness and nondeterminism of the universe and the real existence of free will.

The past is half of time, but its complement is purely virtual or imaginary, and the boundary we call the “present” may not be a knife edge but a blurry mess, with latent potentials extending well into the past and future, insofar as they remain unobserved and unmeasured, and thus opening up a whole realm of PhilDickian effects.

If psi is related to the perception of quantum potentials and probabilities, as I have suggested in previous posts, it may arise from precisely this asymmetry in the order of things (i.e., parallax).

Both the mind and our culture balk at such an asymmetry, though, as well as at the notion that the past and future can interpenetrate. Despite ostensibly offering a way for the future to affect the past, I think concepts like syntropy and synchronicity also paint a picture of a pristine, family-friendly cosmos in which truly messy asymmetries in the order of things, and taboo possibilities like time travel, are papered over.

The Future Looks Shitty From Here

Part of the problem syntropists have with existing dynamical systems theory is that systems “shit”: They take in energy, convert it to order/information, but ultimately excrete (“dissipate”) chaos.

They are not content to find meaning in local islands of order (dissipative systems) so long as there is seemingly a larger gain in disorder across the universe as a whole.

Even the most beautiful and perfect systems wallow in a bigger pigsty universe where the shit (chaos) just piles higher and deeper.
In a purely classical universe, that shit would indeed ultimately swallow and engulf the whole.


Quantum physics (at least as I understand it) is not as interested in this classical “future” and “past,” as demarcated neatly on some dimensional timeline, as it is in the distinction between the Actual and the Potential, or what I like to think of as the Is and the Not Yet.

At any given point in time, from the standpoint of an observer, the past may be a good-enough proxy for the Is, or what has been “collapsed” through observation, but in fact even the landscape of the Is is permeated by the Not Yet in the form of vast reserves of unobserved and thus uncollapsed potential that remain out of view, just under the skin of the visible universe—like Schrodinger’s cats trapped in the walls.

This may help account for various retrocausal effects seen in laboratories, such as the rather mindbending experiments of Helmut Schmidt, where subjects affected a random number generator in the past via their intention; according to Henry Stapp, it may also account for Benjamin Libet’s paradoxical findings that seemed to disprove the existence of conscious will (I’ll return to this in a later post on Stapp’s fascinating work).

That latent Not Yet in quantum physics is often described as a spread-out “smear” of potentials existing in a state of quantum superposition.

Thus we should not miss how disgusting the world of unactualized potentiality is: Isn’t that “smear” a little bit like the pigsty future promised by classical thermodynamics?

When you think about it, unactualized potentials are the ultimate mess; it is only when consciousness engages in observation that that mess is "cleaned up," so to speak, to become something real and solid and shiny and pristine and definite.

Mentally intervening or meddling in the not-yet-actual via psi, though, is basically wallowing in that disgusting gray smear of quantum possibility.




The syntropy model with its future attractors drawing us toward them with love sort of bowdlerizes the disgusting-sounding possibilities latent in quantum physics (and psi), by replacing the indeterminate gray smear with an “already existing future” that looks rather like a scene from a Jehovah’s Witnesses pamphlet: Shiny beautiful glowing perfect people beckoning to you to come join their church picnic.

But I think the syntropists (and Platonists, and Jungians) needn’t worry: Consciousness itself is bound to transform the from-here-disordered-looking future into some kind of order and information, even if there’s no way to predict what that order will look like.

And we do home in on that future precognitively.
But that precognitive engagement with our future potential actually trespasses on taboos even more basic than the scatological: Psi, insofar as it is ‘intercourse’ with the hidden Not Yet under the skin of reality, is basically the temporal equivalent of incest: fundamentally prohibited, and declared “impossible” because we just don’t want to have to form a mental picture of something that is deeply, deeply awkward.

Time and the Oedipus Complex

Even if we are ‘intellectually’ on board with the notion of precognition, some part of us balks at the perversity of a universe that includes information (let alone objects and people) traveling backward in time.

The perversity of time travel is, I think, the real meaning of the myth of Oedipus.

Sophocles was Greece’s Phil Dick.


The ancients reckoned the forward, unidirectional, linear flow of time (chronos) through generations—the inevitable, always-forward-moving structure of kinship as well as the forward march of political regimes passed down via offspring.

In a sense, kinship and politics were kind of equivalent to the second law of thermodynamics for us, something inexorable and irreversible, moving in a single direction, never flowing backwards, and always basically getting worse (thus always dramatized as tragedy).

Thus a story about a prince taking the place of his own father and marrying his own mother is about the closest the ancient world had to a story about time travel and its paradoxes.




That Oedipus was effectively a time traveler has long been part of the esoteric understanding of that myth.
Consider the role of the Sphinx, whose riddle Oedipus correctly answers before he becomes King of Thebes.

Sphinxes are symbolic guardians of Time.
When HG Wells’ Time Traveler (in The Time Machine) visits the distant future, for example, he finds that a great Sphinx structure has been erected more or less on the site of his laboratory.

After his machine is confiscated, he must penetrate that structure to find it and return to Victorian England.

The way one defeats time is by reordering it, signaled by the “creature” in the Sphinx’s riddle: What goes on four feet in the morning, two in the afternoon, and three in the evening?

Viewers of the play knew the answer was “man,” who first crawls, then walks upright, then moves with the aid of a cane; but the story implied that most “men” don’t get the answer, and this ignorance is their Achilles heel, enabling the Sphinx (/Time) to defeat them.

Oedipus was himself the completion of the riddle, in some sense; he walks with a limp, and his name means “swollen foot” (his father Laius and other male ancestors all also had names connoting “lame” or “limping”).

Thus Oedipus was the sole (get it, sole?) human who walked upon only one (good) foot—thus completing the quaternity of the Sphinx’s riddle, but further destroying its numerical sequence.

Blindness and Enjoyment

Di Corpo and Vannini argue that emotions, principally love, are the cord drawing us toward future order.
If I am right, it may be something more like enjoyment that transcends time and acts as the carrier of information from our future. (Love per se is a special condition in which we experience enjoyment in common with other people—a unique problem at the heart of social existence and a crucial way in which psi guides and directs us toward others to reproduce and form complex social systems.)

I have argued, based on a metaphysically broadminded reading of Lacan and Žižek, that enjoyment must in some sense be “nonlocal”; it is through repetition that a symptom acts as a mechanism amplifying the future’s effect on the past and vice versa.

Symptoms are atemporal/acausal formations within the sea of enjoyment.
Hence the connection between prophecy, neurosis, art, and ritual/repetition.

Enjoyment, which “impossibly” connects the future to the past, is thus what turns psi into a psychoanalytic problem: The point is not merely that Oedipus “traveled into the past” by marrying his mother and killing his father; it is that he committed these crimes and enjoyed them, and only belatedly discovered what it was that he had been enjoying.

His guilt was not over his actions but over his misrecognized enjoyment.
Our ignorance as to our enjoyment (i.e., our blindness to it) allows both the past and future to affect our lives in uncanny and seemingly “impossible” ways like synchronicity.

Oedipus’s self-blinding upon discovering his crime is always seen as a kind of dramatic literalization of his own blindness at not having heeded various prophecies, like those of the blind Tiresias—another character who (psychically, in this case) “travels through time.”

It suggests to me a secret connection or even identity between these two figures: They are two sides of the same coin.
The past and future cannot affect the present except insofar as we are blind to our true enjoyment; they derive their power (and ability to travel through time and space, their nonlocal “cloaking” from the eyes of Heisenberg) from being unseen and unknowable—at least by the persons they most closely affect.

Hence prophets are, at least figuratively, blind; and we are largely blind to psi’s actions (and enjoyment) in our lives.




As I mentioned in my post on Vallee and remote viewing, fundamental philosophical conundrums effectively “hobble” or at least severely restrict psi, including the Platonic inability to know what we don’t know.

This as well as other factors, such as the “perverse” fact that penetrating the veil of time involves secret/disavowed enjoyment, may tend to work against individually “knowing the future” in a literal or actionable way.

Ordinarily, the future seeps into our consciousness to the degree that we misinterpret it, and our most vivid foreknowledge is only verified after the fact, in dreams, artworks, and other nonliteral “transmissions” that usually don’t seem very useful in consciously altering our course or changing our destiny.

By the same token, when psi does provide premonitions or warnings we heed, the status of those “predictions” as information may evaporate because we have no way to verify them (except in vivid cases like airplane crashes or ocean liner sinkings).

How many of our dreams tell of future events that might have occurred but didn’t, because we live in an open, nondeterministic universe? Psi may always be visible only in a very narrow band at the edge of our conscious awareness.

The methods devised at SRI to amplify the psi signal and make it efficacious in the real world involved protocols to work around these types of problems.

Remote viewers must be “blind” (in the figurative, experimental sense) to the target, first of all.
Thus psi really only works in groups of at least two, preferably three people, who possess different degrees of knowledge about the target or question to which an answer is sought.

Also, the rigid formalistic protocol itself works to distract and occupy the conscious mind so that psi information can be received more easily via unconscious channels.

And, most importantly, confirmation is necessary, which may be understood as providing the insecure psi mind with rewards (like sardines to condition the behavior of a dolphin) or may be understood as the actual target of remote viewing, if we accept the possibility that it may in fact just be precognizing our own future states of enjoyment/reward. (In his book Limitless Mind, Russell Targ considers feedback a usually essential part of Remote Viewing; although, in considering this question of its necessity, he does cite a few experiments that seem to show successful remote viewing in the absence of feedback.)

From Syntropy to Parallax

So to sum up: Order not only arises provisionally, contingently, within the “doomed” chaotic system described by classical thermodynamics, but also hovers over and lurks within it as consciousness in its constant interaction with unrealized potentiality.

Syntropy, like certain other concepts in post-materialist thought, might best be understood as an umbrella term covering various entropy-defeating phenomena in their as-yet mostly unmapped interaction, rather than as a reified principle or "force" all its own.

I'm not sure Di Corpo and Vannini mean to suggest that syntropy is actually causative, any more than Jung meant to imply that about synchronicity, yet these concepts are susceptible to that interpretation, and certainly Jung's concept (which suffered from vagueness) has been 'perverted' in that way over the years.

Einstein can serve as a warning about the haste to add new principles when we don’t immediately like what we see about reality.
He felt that the picture of the universe in disequilibrium that his own theories led to required a new yet-undiscovered principle, so he postulated a “cosmological constant” to make the equations add up in a more intellectually and aesthetically congenial way.

There was no evidence for such a thing, and later he regarded this as the biggest blunder in his career (although new theories of dark energy do sort of harken back to it).

The physical laws we know about may not be the only ones, and as Sheldrake importantly argues, they may not actually be set in stone; but they still may be able to do the job.

Quantum physics seems like it provides what we need, particularly given that it not only allows but actually requires precisely what was missing in the classical universe: a role for consciousness, and the possibility of causal interactions that defy our commonsense understandings of spacetime.

Part of what keeps us from embracing the discomfiting parallax and asymmetry of things is our sense of meaning as a kind of equals sign. “Meaning” in this sense collapses when we replace the Minkowski glass-block universe (where the future already exists) with a state of radical indeterminacy.

In lived fact, there is no meaning, just a succession of states within a larger turbulent, looping flow.
Those states appear “meaning-like” in hindsight, when we imagine time as flattened and static, but that fiction of meaning is a screen masking the acausal obscenities I described.

Fortunately the unconscious, which has no sense of time, cannot be offended by the outrageous paradoxes and perversions that enable quantum physics—and psi—to work.

 
"....emotions, principally love, are the cord drawing us toward future order..."

Yes! This is happening now on this planet and the rest of the galaxy.

Chaos, Dark Matter, Dark Energy, all those concepts describe the constant of the Universe....pure potential.

Good article!
 
11796311_10154107776858986_7913699154862700705_n.jpg
 
Some People With Severe Mental Disorders Mysteriously
Become Clear-Headed Just Before Death




People with schizophrenia, Alzheimer’s disease, and other conditions that cause severely impaired mental functioning, have sometimes inexplicably recovered their memories and clarity of mind shortly before death.

Their minds have seemed to return in an amazingly complete and coherent form, even as their brains have deteriorated further than ever.

Patients who aren’t even able to remember their own names for years may suddenly recognize their family members and have normal conversations with them about the past, present, and future.

No one knows how this happens.

For example, Scott Haig, M.D., wrote in an article for Time Magazine about a young patient of his named David whose tumor-riddled brain didn’t stop him from becoming lucid moments before his death.

David had stopped speaking and moving in the weeks before his death.
When his head was scanned, “There was barely any brain left,” Dr. Haig explained.

But on the night David died, he spent about five minutes fully conscious, saying goodbye to his family.

“It wasn’t David’s brain that woke him up to say goodbye,” Haig said. “His brain had already been destroyed. Tumor metastases don’t simply occupy space and press on things, leaving a whole brain. The metastases actually replace tissue. … The brain is just not there.

“What woke my patient … was simply his mind, forcing its way through a broken brain, a father’s final act to comfort his family.”
For Haig, it is clear that the mind exists apart from the brain.

Others look at possible physiological reasons for this phenomenon known as terminal lucidity.

The varying physiological states of people who experience terminal lucidity suggest that a single mechanism isn’t responsible, according to researchers at the University of Virginia and the University of Iceland, who published the paper “Terminal Lucidity: A Review and a Case Collection,” in the Archives of Gerontology and Geriatrics in 2012.

“At present, we think that it is not possible to formulate definitive mechanisms for terminal lucidity,” wrote the researchers Dr. Michael Nahm, Dr. Bruce Greyson, Dr. Emily Williams Kelly–all of the University of Virginia–and Dr. Erlendur Haraldsson of the University of Iceland.

“Indeed, terminal lucidity in differing mental disorders might result from different processes, depending on the etiology of the diseases. For example, “cachexia” [weakness and wasting of the body] in chronically ill patients might conceivably cause shrinking of brain tissue, relieving the pressure exerted by space-occupying intracranial lesions and permitting fleeting return of some brain function.”

They also noted that, “Some patients for whom life support has been withdrawn may manifest an unexplained transient surge of electroencephalographic activity [electrical activity in the brain] as blood pressure is lost immediately prior to death. Although these patients have not been reported to show any clinical evidence of cognition, these findings suggest that the neuroscience of terminal states may be more complex than traditionally thought.”

Even if some parts of the brain are reactivated through a release of pressure or an electrical surge, it’s hard to imagine how a brain so badly damaged (or barely existent, as in David’s case) could allow a person to coherently recall memories and communicate.

In some cases, it’s as though the whole mind has returned unbroken.
Epoch Times asked some researchers at the University of Virginia’s Division of Perceptual Studies, including study co-author Dr. Greyson, how a damaged brain could produce the impression of such a complete mind in cases of terminal lucidity.

It is a good question, they said, but one they couldn’t answer.

Terminal lucidity was well-known in 19th century medicine, noted Nahm and his co-authors.

But it is almost absent in medical literature of the 20th century.
They reviewed 83 cases mentioned in the literature of the last 250 years.

The study was conducted in the hopes of further understanding the mind-brain relationship.
The researchers also said that understanding terminal lucidity could be useful in helping develop treatments.

For example, Austrian physician Julius Wagner-Jauregg (1857–1940) observed that symptoms of mental derangement sometimes decreased during high fever.
He developed fever therapy for paralytic dementia (a neuropsychiatric disorder affecting the brain), earning him a Nobel Prize for Medicine.

Dr. Alexander Batthyany, a professor in the cognitive science department at the University of Vienna, has been studying terminal lucidity in recent years.
The findings of a recent study of his were presented at the International Association for Near-Death Studies (IANDS) 2014 Conference.

He surveyed 800 caregivers, of whom only 32 responded.
These 32 caregivers had cumulatively cared for 227 Alzheimer’s or dementia patients.

About 10 percent of these patients had a sudden and brief return to lucidity.
Batthyany warned, however, that these caregivers were self-selected.

The low response rate may mean that the phenomenon is rare, and that he received replies primarily from those who had witnessed terminal lucidity in their dying patients. Nonetheless, witnessing terminal lucidity had a great impact on some of the caregivers.


One caregiver surveyed said, “Before this happened, I had become fairly cynical about the human vegetables I cared for. Now, I understand that I am caring for nurslings of immortality. Had you seen what I saw, you would understand that dementia can affect the soul, but it will not destroy it.”

Following are a few cases collected by Batthyany and by the University of Virginia researchers.

Cases of Terminal Lucidity

“An elderly woman with dementia, almost mute, no longer recognized people. … Unexpectedly one day, she called her daughter and thanked her for everything … [she] had a phone conversation with the grandchildren, exchanged kindness and warmth, and said farewell, and shortly afterward, she died,” according to Batthyany’s presentation at the IANDS conference.

Nahm and his colleagues wrote of a case from 1840 published in a medical text: “A woman of 30 years diagnosed with ‘wandering melancholy’ (melancholia errabunda) was admitted to an asylum, and shortly thereafter, she became manic. For four years, she lived exclusively in a confused and incoherent state of mind. When she fell sick with a fever, she vehemently refused to take any medicine. … Her health rapidly deteriorated. But the weaker her body became, the more her mental condition improved. Two days before her death, she became fully lucid. She talked with an intellect and clarity that seemed to exceed her former education. She inquired about the lives of her relatives, and in tears regretted her previous intractability toward taking medicine. She died soon thereafter.”

Another case recounted by Nahm was recorded by A. Marshall in his 1815 book “The Morbid Anatomy of the Brain in Mania and Hydrophobia”: “Marshall (1815) reported a case of a mad and furiously violent patient who suffered from memory loss to the degree that he did not even remember his own first name.

When he fell seriously ill after more than 10 years in the asylum, he grew calmer.
On the day before he died, he became rational and asked to see a clergyman.

He seemed to listen attentively to the minister and expressed his hope that God would have mercy on his soul.
Although Marshall (1815) did not describe the mental state of the patient in more detail, his report suggests that the man had access to memories of his life again.”

 
Some People With Severe Mental Disorders Mysteriously
Become Clear-Headed Just Before Death




People with schizophrenia, Alzheimer’s disease, and other conditions that cause severely impaired mental functioning have sometimes inexplicably recovered their memories and clarity of mind shortly before death.

Their minds have seemed to return in an amazingly complete and coherent form, even as their brains have deteriorated further than ever.

Patients who aren’t even able to remember their own names for years may suddenly recognize their family members and have normal conversations with them about the past, present, and future...


I worked for two years in a facility for dementia patients, and I took care of dozens of residents in their last days. During this time I witnessed this phenomenon exactly. People who had previously been reduced to a near-total infantile state, unable to speak or recognize their loved ones, suddenly had short periods of total lucidity before passing. They knew the names of their loved ones and were able to communicate with them. They also often spoke of seeing people who had already passed.
 

If you want to read the actual study review, which is much more in-depth but very interesting, here is the link - http://deanradin.com/evidence/Nahm2011.pdf
The thought of people being lucid when they should have a non-functioning brain (if there is any brain left at all, in some of the cases the article outlined) fits fully with the notion of the mind and brain being separate entities.
 

Absolutely, I will read it. I often considered that imaging/EEGs of very late-stage Alzheimer's patients might show someone close to brain dead. Certainly for long periods, sometimes months, even years, they would be near catatonic, exhibiting nothing more than stereotyped and saccadic eye movements. Then for a short time, they would come to and be themselves again, their eyes showing emotion and recognition, able to form and seemingly understand simple expressions of love and even sometimes talk more extensively with loved ones. It shocks me to this day.
 

I just add it to my list of convincing phenomena, where the explanation of a brain/mind separation is the easiest conclusion.
If they disprove it then I’ll go with whoever has the most convincing evidence… and it seems there are mountains of evidence for deathbed lucidity.
 
I will present these in their associated parts as each is quite long and extensive.
Enjoy! @sprinkles
I thought you would particularly like this series.
My Uncle works on this type of programming.

The Brain vs Deep Learning Part I:
Computational Complexity — Or Why the Singularity Is Nowhere Near

By Tim Dettmers

In this blog post I will delve into the brain and explain its basic information processing machinery and compare it to deep learning.
I do this by moving step by step along the brain’s electrochemical and biological information processing pipeline and relating it directly to the architecture of convolutional nets.

Thereby we will see that a neuron and a convolutional net are very similar information processing machines.
While performing this comparison, I will also discuss the computational complexity of these processes and thus derive an estimate for the brain’s overall computational power.

I will use these estimates, along with knowledge from high performance computing, to show that it is unlikely that there will be a technological singularity in this century.

This blog post is complex as it arcs over multiple topics in order to unify them into a coherent framework of thought.

I have tried to make this article as readable as possible, but I might have not succeeded in all places.
Thus, if you find yourself in an unclear passage it might become clearer a few paragraphs down the road where I pick up the thought again and integrate it with another discipline.

First I will give a brief overview of the predictions for a technological singularity and related topics.
Then I will start the integration of ideas between the brain and deep learning.

I finish with discussing high performance computing and how this all relates to predictions about a technological singularity.

The part which compares the brain’s information processing steps to deep learning is self-contained, and readers who are not interested in predictions for a technological singularity may skip to this part.

Part I: Evaluating current predictions of a technological singularity

There were a lot of headlines recently about predictions that artificial intelligence will reach super-human intelligence as early as 2030, and that this might herald the beginning of human extinction, or at least dramatically alter everyday life.

How was this prediction made?

Factors which help to predict a singularity

Ray Kurzweil has made many very accurate predictions, and his method for computing devices is quite simple: look at the exponential growth of computing power, efficiency, and size, and then extrapolate.

This way, you could easily predict the emergence of small computers that fit into your hand, and with a bit of creativity one could imagine that one day there would be tablets and smartphones.

The trends were there; you just needed to imagine what could be done with computers you can hold in your hand.
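This kind of extrapolation is plain compound growth. Here is a toy sketch of it; the baseline figure and the two-year doubling period are illustrative assumptions, not Kurzweil's exact numbers:

```python
def extrapolate(baseline_ops, start_year, target_year, doubling_years=2.0):
    """Project computing power forward assuming steady exponential growth."""
    doublings = (target_year - start_year) / doubling_years
    return baseline_ops * 2 ** doublings

# Illustrative: 1e12 ops/sec in 2000, doubling every 2 years, projected to 2020.
projected = extrapolate(1e12, 2000, 2020)  # 2**10 = 1024-fold growth
```

The whole argument about singularity dates rests on how far forward such a curve can be trusted, and on what baseline it is anchored to.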

Similarly, Ray Kurzweil predicted the emergence of strong AI which is as intelligent or more intelligent than humans.

For this prediction he also used data for the exponential growth of computing power and compared this to an estimate for the computational power of the brain.

He also acknowledges that the software will be as important as the hardware, and that the software development of strong AI will take longer because such software can only be developed once fast computer systems are available.

This can be felt in the area of deep learning, where solid ideas of the 1990s were unfeasible due to the slow computers.
Once graphic processing units (GPUs) were used, these computing limitations were quickly removed and rapid progress could be made.

However, Kurzweil also stresses that once the hardware level is reached, first “simple” strong AI systems will be developed quickly.
He sets the date for brain-like computational power at 2020, and the emergence of strong AI (first human-like intelligence or better) at 2030.

Why these numbers?
With persisting growth in computing power in 2019 we will reach the computing power which is equivalent to the human brain — or will we?

This estimate is based on two things:
(1) The estimate for the complexity of the brain,
(2) the estimate for the growth in computing power.

As we will see, both these estimates are not up-to-date with current technology and knowledge about neuroscience and high performance computing.

Our knowledge of neuroscience doubles about every year.

Using this doubling period, in the year of 2005 we would only have possessed about 0.098% of the neuroscience knowledge that we have today.
This number is a bit off, because the doubling time was about two years in 2005 while it is less than a year now, but overall it is well below 1%.
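The arithmetic behind that 0.098% figure is just repeated halving: ten one-year doubling periods between 2005 and 2015, the year of this post:

```python
# If neuroscience knowledge doubles every year, the knowledge of 2005
# as a fraction of the knowledge of 2015 is (1/2)^10:
doublings = 2015 - 2005      # ten doubling periods
fraction = 0.5 ** doublings  # 1/1024
print(f"{fraction:.3%}")     # 0.098%
```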

The thing is that Ray Kurzweil based his predictions on the neuroscience of 2005 and never updated them.
An estimate of the brain’s computational power based on about 1% of today’s neuroscience knowledge does not seem right.

Here is a small list of important discoveries made in the last two years, each of which increases the estimated computational power of the brain by many orders of magnitude:


  • It was shown that brain connections, rather than being passive cables, can themselves process information and alter the behavior of neurons in meaningful ways; for example, brain connections help you to see the objects of everyday life. This fact alone increases the brain’s computational complexity by several orders of magnitude
  • Neurons which do not fire still learn: There is much more going on than electrical spikes in neurons and brain connections: Proteins, which are the little biological machines which make everything in your body work, combined with local electric potential do a lot of information processing on their own — no activation of the neuron required
  • Neurons change their genome dynamically to produce the right proteins to handle everyday information processing tasks. Brain: “Oh you are reading a blog. Wait a second, I just upregulate this reading-gene to help you understand the content of the blog better.” (This is an exaggeration — but it is not too far off)

Before we look at the complexity of the brain, let us first look at brain simulations.
Brain simulations are often used to predict human-like intelligence.

If we can simulate a human brain, then it will not be long until we are able to develop human-like intelligence, right?
So the next paragraph looks at this reasoning.

Can brain simulations really provide reliable evidence for predicting the emergence of artificial intelligence?

The problems with brain simulations

Brain simulations simulate the electrical signals which are emitted by neurons and the size of the connections between neurons.
A brain simulation starts with random signals and the whole system stabilizes according to rules which are thought to govern information processing steps in the brain.

After running these rules for some time, stable signals may form which can be compared to the signals of the brain.
If the signals of the simulation are similar to recordings of the brain, this increases our confidence that our chosen rules are somewhat similar to the rules that the brain uses.

Thus we can validate large scale information processing rules in the brain.
However, the big problem with brain simulations is that this is pretty much all we can do.

We do not gain any understanding of what these signals mean or what function they might serve.
We cannot test any meaningful hypotheses with this brain model other than the vague “our rules produce similar activity”.

The lack of precise hypotheses which make accurate predictions (“If the activity is like this, then the circuit detected an apple instead of an orange”) is one of the loudest criticisms of the European brain simulation project.

Many neuroscientists regard the brain project as rather useless, and even dangerous, because it sucks away money from useful neuroscience projects which actually shed light on neural information processing.

Another problem is that these brain simulations rely on models which are outdated and incomplete, and which dismiss many biological components of neural information processing.
This is mainly because the electrical information processing in the brain is much better understood.

Another, more convenient, reason is that current models are already able to reproduce the needed output patterns (which is the main goal, after all), so there is no need to update these models to be more brain-like.

So to summarize, the problems with brain simulations are:


  • Not possible to test specific scientific hypotheses (compare this to the large hadron collider project with its perfectly defined hypotheses)
  • Does not simulate real brain processing (no firing connections, no biological interactions)
  • Does not give any insight into the functionality of brain processing (the meaning of the simulated activity is not assessed)
​
The last point is the most important argument against the usefulness of brain simulations for strong-AI estimation.
If we could develop a brain simulation of the visual system which would do well on, say, the MNIST and ImageNet data sets, this would be useful for estimating progress in brain-like AI.

But without this, or any similar observable function, brain simulations remain rather useless with respect to AI.

With this said, brain simulations are still valuable for testing hypothesized general rules of information processing in the brain (we have nothing better for this), but they are quite useless for making sense of what the information processing in the brain means, and thus constitute unreliable evidence for predicting progress in AI.

Anything that relies on brain simulation as evidence for predictions of future strong-AI should be looked at with great skepticism.

Estimating the brain’s computational complexity

As mentioned in the introduction, the estimates of the brain’s complexity are a decade old, and many new discoveries have made this old estimate obsolete.
I have never come across an up-to-date estimate, so here I derive my own.

While doing this, I will focus mostly on the electrochemical information processing and neglect the biological interactions within the neuron, because they are too complex (and this blog post is already very long).

Therefore the estimate that is derived here can be thought of as a lower bound of complexity — it should always be assumed that the brain is more complex than this.

During the construction of this model of complexity, I will also relate every step in the model with its deep learning equivalents.

This will give you a better understanding of how closely deep learning is related to the brain, and how fast deep learning really is compared to the human brain.

Defining reference numbers for the model

We know some facts and estimates which help us to start with our model building:


  • The brain uses learning algorithms which are very different from deep learning, but the architecture of neurons is similar to convolutional nets
  • The adult brain has 86 billion neurons, about 10 trillion synapses, and about 300 billion dendrites (tree-like structures with synapses on them)
  • The brain of a child has far more than 100 billion neurons, and has synapses and dendrites in excess of 15 trillion and 150 billion, respectively
  • The brain of a fetus has more than a trillion neurons; neurons which are misplaced die quickly (this is also the reason why adults have fewer neurons than children)

Location of the cerebellum which contains roughly 3/4 of all neurons and connections.

Location of the cerebrum; also referred to as “the cortex”.
More precisely, the cortex is the outer layer of the brain, which contains most neurons of the cerebrum.


  • The cerebellum, the super computer of the brain, contains roughly ¾ of all neurons (this ratio is consistent in most mammal species)
  • The cerebrum, the main driver of “intelligence”, contains roughly ¼ of all neurons
  • An average neuron in the cerebellum has about 25,000 synapses
  • An average neuron in the cerebrum has about 5000-15000 synapses

The number of neurons is well known; the number of synapses and dendrites is only known within a certain boundary and I chose conservative estimates here.

The average synapses per neuron differ wildly between neurons, and the number here is a rough average.

It is known that most synapses in the cerebellum are made between dendrites of Purkinje neurons and two different types of neurons that make connections that “climb up” or “cross parallel” with the Purkinje’s synapses.

It is known that Purkinje cells have about 100000 synapses each.
Because these cells have by far the largest weight in the cerebellum, one can estimate the complexity of the brain best if one looks at these neurons and at the interactions that they make.


There are many hundreds of different types of neurons; here are some of the more common ones.
Thanks to Robert Stufflebeam for this image (source).

It is important to differentiate between the complexity of a brain region and its functional importance.
While almost all computation is carried out by the cerebellum, almost all important functions are carried out by the cerebrum (or cortex).

The cortex uses the cerebellum to generate predictions, corrections and conclusions, but the cortex accumulates these insights and acts upon them.

For the cerebrum it is known that neurons almost never have more than 50000 synapses, and unlike the cerebellum, most neurons have a number of synapses within the range of 5000-15000.

How do we use these numbers?

A common approach for estimating the computational complexity of the brain is to assume all information processing in the brain can be represented by the combination of impulses when a neuron fires (action potentials) and the size (mostly number of receptors) of the synapses that each neuron has.

Thus one can multiply the estimates for the number of neurons and their synapses and add everything together.
Then one multiplies this by the rate of fire of the average neuron, which is about 200 action potentials per second.

This model is what Ray Kurzweil uses to create his estimate.
While this model was okay a few decades ago, it is not suitable for modeling the brain from a modern viewpoint, as it leaves out much of the important neurological information processing, which is so much more than mere firing neurons.
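For concreteness, here is what that simple neurons-times-synapses-times-rate model works out to, using the rounded figures quoted above; this is an order-of-magnitude sketch only, not a precise result:

```python
# Kurzweil-style estimate: total synapses x average firing rate,
# with the rounded figures quoted in this post.
total_synapses = 10e12    # ~10 trillion synapses in the adult brain
firing_rate_hz = 200      # assumed average rate, action potentials per second

# One synaptic event per synapse per action potential:
ops_per_second = total_synapses * firing_rate_hz
print(f"~{ops_per_second:.0e} synaptic events per second")  # ~2e+15
```

The point of the rest of this post is that each of those "events" hides far more computation than a single multiply-accumulate.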

A model which approximates the behavior of neurons more accurately is the extended linear-nonlinear-Poisson cascade model (LNP).
The extended LNP model is currently viewed as an accurate model of how neurons process information.

However, the extended LNP model still leaves out some fine details, which are deemed unimportant for modeling large-scale brain function.
Indeed, adding these fine details to the model would add almost no additional computational complexity, but it would make the model harder to understand; thus, including these details in simulations would go against the scientific method, which seeks the simplest model for a given theory.

However, this extended model is actually very similar to deep learning and thus I will include these details here.

There are other good models that are also suitable for this.

The primary reason why I chose the LNP model is that it is very close to deep learning.
This makes this model perfect to compare the architecture of a neuron to the architecture of a convolutional net.

I will do this in the next section and at the same time I will derive an estimate for the complexity of the brain.
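Since the LNP cascade anchors the whole comparison, here is a minimal toy sketch of it in NumPy. The filter values, gain, and time step are made-up illustrative numbers, not parameters from any fitted neuron model:

```python
import numpy as np

rng = np.random.default_rng(0)

def lnp_neuron(stimulus, temporal_filter, dt=0.001, gain=100.0):
    """Toy linear-nonlinear-Poisson (LNP) cascade.

    Linear stage: filter the stimulus (analogous to a convolutional layer).
    Nonlinear stage: rectify (analogous to a rectified linear unit).
    Poisson stage: stochastic spiking (loosely analogous to dropout).
    """
    drive = np.convolve(stimulus, temporal_filter, mode="same")
    rate = gain * np.maximum(drive, 0.0)  # firing rate in spikes/sec
    return rng.poisson(rate * dt)         # spike counts per time bin

stimulus = rng.standard_normal(1000)
spikes = lnp_neuron(stimulus, temporal_filter=np.array([0.2, 0.5, 0.2]))
```

Even this toy version shows the structural parallel: filter, nonlinearity, stochastic output, exactly the building blocks of a convolutional net.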

 
Part II: The brain vs. deep learning – a comparative analysis

Now I will explain step by step how the brain processes information.
I will mention the steps of information processing which are well understood and which are supported by reliable evidence.

On top of these steps, there are many intermediary steps at the biological level (proteins and genes) which are still poorly understood but known to be very important for information processing.

I will not go into depth into these biological processes but provide a short outline, which might help the knowledge hungry readers to delve into these depths themselves.
We now begin this journey from the neurotransmitters released from a firing neuron and walk along all its processes until we reach the point where the next neuron releases its neurotransmitters, so that we return to where we started.

The next section introduces a couple of new terms which are necessary to follow the rest of the blog post, so read it carefully if you are not familiar with basic neurobiology.



Neurons use the axon, a tube-like structure, to transmit their electric signals over long stretches in the brain.
When a neuron fires, it sends an action potential, an electrical signal, down its axon, which branches into a tree of small endings called axon terminals.

At the end of each of these axon terminals sit proteins which convert this electrical message back into a chemical one: small balls, called synaptic vesicles, each filled with a couple of neurotransmitters, are released into an area outside of the neuron called the synaptic cleft.

This area separates the axon terminal from the beginning of the next neuron (a synapse) and allows the neurotransmitters to move freely to pursue different tasks.

The synapses are most commonly located on a structure which looks very much like the roots of a tree or plant: the dendritic tree, composed of dendrites which branch into larger arms (these represent the connections between neurons in a neural network) and finally reach the core of the cell, called the soma.

These dendrites hold almost all synapses which connect one neuron to the next and thus form the principal connections.
A synapse may hold hundreds of receptors to which neurotransmitters can bind.

You can imagine this compound of axon terminal and synapses at a dendrite as the (dense) input layer (of an image if you will) into a convolutional net.
Each neuron may have fewer than 5 dendrites or as many as a few hundred thousand.

Later we will see that the function of the dendritic tree is similar to the combination of a convolutional layer followed by max-pooling in a convolutional network.

Going back to the biological process, the synaptic vesicles merge with the surface of the axon terminal and turn themselves inside-out spilling their neurotransmitters into the synaptic cleft.

There the neurotransmitters drift in a vibrating motion due to the temperature in the environment, until they (1) find a fitting lock (receptor protein) which fits their key (the neurotransmitter), (2) the neurotransmitters encounter a protein which disintegrates them, or (3) the neurotransmitters encounter a protein which pulls them back into the axon (reuptake) where they are reused.

Antidepressants mostly work by (3) preventing or (4) enhancing the reuptake of the neurotransmitter serotonin; preventing reuptake (3) yields changes in information processing after some days or weeks, while enhancing reuptake (4) leads to changes within seconds or minutes.

So neurotransmitter reuptake mechanisms are integral for minute to minute information processing.
Reuptake is ignored in the LNP model.

However, the combination of the amount of neurotransmitters released, the number of synapses for a given neurotransmitter, and how many neurotransmitters actually make it into a fitting protein on the synapse can be thought of as the weight parameter in a densely (fully) connected layer of a neural network, or in other words, the total input to a neuron is the sum of all axon-terminal-neurotransmitter-synapse interactions.

Mathematically, we can model this as the dot product between two matrices (A dot B; [amount of neurotransmitters of all inputs] dot [amount of fitting proteins on all synapses]).
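To make that "A dot B" description concrete, here is a tiny sketch with made-up numbers; the counts are purely illustrative:

```python
import numpy as np

# Toy model: three input axon terminals onto one neuron's synapses.
released_neurotransmitters = np.array([120.0, 80.0, 200.0])  # per terminal
receptor_availability      = np.array([0.5, 1.0, 0.25])      # per synapse

# Total input to the neuron: the sum over all terminal-synapse
# interactions, i.e. a dot product, just like the weighted sum
# in a fully connected layer of a neural network.
total_input = released_neurotransmitters @ receptor_availability
print(total_input)  # 120*0.5 + 80*1.0 + 200*0.25 = 190.0
```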

After a neurotransmitter has locked onto a fitting protein on a synapse, it can do a lot of different things: Most commonly, neurotransmitters will just (1) open up channels, to let charged particles flow (through diffusion) into the dendrites, but it can also cause a rarer effect with huge consequences: The neurotransmitter (2) binds to a G-protein which then produces a protein signaling cascade which, (2a) activates (upregulates) a gene which is then used to produce a new protein which is integrated into either the surface of the neuron, its dendrites, and/or its synapses; which (2b) alerts existing proteins to do a certain function at a specific site (create or remove more synapses, unblock some entrances, attach new proteins to the surface of the synapse).

This is ignored in the LNP model.

Once the channels are open, negatively or positively charged particles enter into the dendritic spine.
A dendritic spine is a small mushroom-like structure on to which the synapse is attached.

These dendritic spines can store electric potential and have their own dynamics of information processing.
This is ignored in the LNP model.


Dendritic spines have their own internal information-processing dynamics, which are largely determined by their shape and size. Image source: 1,2

The particles that enter the dendritic spine are either negatively or positively charged; some neurotransmitters only open channels for negative particles, others only for positive ones.

There are also channels which let positively charged particles leave the neuron, thus increasing the negativity of the electric potential (a neuron “fires” if it becomes too positive).
The size and shape of the mushroom-like dendritic spine correspond to its behavior.

This is ignored in the LNP model.

Once particles have entered the spine, there are many things they can affect.
Most commonly, they will (1) just travel along the dendrites to the cell body of the neuron and then, if the cell gets too positively charged (depolarization), induce an action potential (the neuron “fires”).

But other actions are also common: The charged particles accumulate in the dendritic spine directly and (2) open up voltage-gated channels which may polarize the cell further (this is an example of the dendritic spine information processing mentioned above).

Another very important process is (3) the dendritic spike.

Dendritic spikes

Dendritic spikes are a phenomenon which has been known to exist for some years, but only in 2013 were techniques advanced enough to collect the data showing that these spikes are important for information processing.

To measure dendritic spikes, you have to attach some very tiny clamps onto dendrites with the help of a computer which moves the clamp with great precision.
To have some sort of idea where your clamp is, you need a special microscope to observe the clamp as you progress onto a dendrite.

Even then, you mostly attach the clamp in a rather blind manner, because at such a tiny scale every movement made is a rather giant leap.
Only a few teams in the world have the equipment and skill to attach such clamps onto dendrites.

However, the direct data gathered by those few teams was enough to establish dendritic spikes as important information processing events.
Due to the introduction of dendritic spikes into computational models of neurons, the complexity of a single neuron has become very similar to a convolutional net with two convolutional layers.

As we see later the LNP model also uses non-linearities very similar to a rectified linear function, and also makes use of a spike generator which is very similar to dropout — so a neuron is very much like an entire convolutional net.

But more about that later and back to dendritic spikes and what exactly they are.

Dendritic spikes occur when a critical level of depolarization is reached in a dendrite.

The depolarization discharges as an electric potential along the walls of the dendrite and may trigger voltage-gated channels along its way through the dendritic tree and eventually, if strong enough, the electric potential reaches the core of the neuron where it may trigger a true action potential.

If the dendritic spike fails to trigger an action potential, the opened voltage-gated channels in neighboring dendrites may do exactly that a split second later.
Due to channels opened by the dendritic spike, more charged particles enter the neuron, which may then either trigger (common) or stifle (rare) a full action potential at the neuron’s cell body (soma).


A shows a computer model of a neuron that does not model dendritic spikes; B models simple dynamics of dendritic spikes; C models more complex dynamics of dendritic spikes, taking into account the one-dimensional diffusion of particles (which is similar to a convolution operation). Take note that these images are only snapshots at particular moments in time. A big thanks to Bernd Kuhn. Image copyright © 2014 Anwar, Roome, Nedelescu, Chen, Kuhn and De Schutter as published in Frontiers in Cellular Neuroscience (Anwar et al. 2014)


This process is very similar to max-pooling, where a single large activation “overwrites” other neighboring values.
However, after a dendritic spike, neighboring values are not overwritten as they are during max-pooling in deep learning; instead, the opening of voltage-gated channels greatly amplifies the signals in all neighboring branches within the dendritic tree.

Thus a dendritic spike may heighten the electrochemical levels in neighboring dendrites to a level which is more similar to the maximum input – this effect is close to max-pooling.

Indeed it was shown that dendritic spikes in the visual system serve the same purpose as max pooling in convolutional nets for object recognition: In deep learning, max-pooling is used to achieve (limited) rotation, translation, and scale invariance (meaning that our algorithm can detect an object in an image where the object is rotated, moved, or shrunk/enlarged by a few pixels).

One can think of this process as setting all surrounding pixels to the same large activation and making each activation share the weight to the next layer (in software the values are discarded for computational efficiency – this is mathematically equivalent).
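To make the analogy concrete, here is a toy numpy sketch of the two behaviors. The `dendritic_amplify` function and its gain factor are my own illustrative inventions, not a published model:

```python
import numpy as np

def max_pool(acts):
    # Deep-learning max-pooling: keep only the largest activation.
    return np.max(acts)

def dendritic_amplify(acts, gain=0.8):
    # Sketch of the dendritic-spike effect described above: neighboring
    # branches are not discarded but pulled up toward the maximum by
    # opened voltage-gated channels.  The gain factor is a made-up
    # illustration parameter, not a measured quantity.
    peak = np.max(acts)
    return acts + gain * (peak - acts)

branch_inputs = np.array([0.2, 1.5, 0.4])
pooled = max_pool(branch_inputs)            # 1.5
amplified = dendritic_amplify(branch_inputs)
```

The maximum is untouched in both cases; the difference is that the neighboring branches end up close to it instead of being discarded.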

Similarly, it was shown that dendritic spikes in the visual system are sensitive to the orientation of an object.
So dendritic spikes do not only have computational similarity, but also similarities in function.

The analogy does not end here.
During neural back-propagation – that is, when the action potential travels from the cell body back into the dendritic tree – the signal cannot backpropagate into the dendritic branch where the dendritic spike originated, because that branch is “deactivated” due to the recent electrical activity.

Thus a distinct learning signal is delivered to the deactivated branches.
At first this may seem like the exact opposite of the backpropagation used for max-pooling, where nothing but the max-pooling activation is backpropagated.

However, the absence of a backpropagation signal in a dendrite is a rare event and represents a learning signal in its own right.
Thus, dendrites which produce dendritic spikes have special learning signals, just like activated units in max-pooling.

To better understand what dendritic spikes are and what they look like, I very much want to encourage you to watch this video (for which I do not have the copyright).
The video shows how two dendritic spikes lead to an action potential.

This combination of dendritic spikes and action potentials and the structure of the dendritic tree has been found to be critical for learning and memory in the hippocampus, the main brain region responsible for forming new memories and writing them to our “hard drive” at night.

Dendritic spikes are one of the main drivers of computational complexity which have been left out from past models of the complexity of the brain.
Also, these new findings show that neural back-propagation does not have to be neuron-to-neuron in order to learn complex functions; a single neuron already implements a convolutional net and thus has enough computational complexity to model complex phenomena.

As such, there is little need for learning rules that span multiple neurons – a single neuron can produce the same outputs we create with our convolutional nets today.

But these findings about dendritic spikes are not the only advance made in our understanding of the information processing steps during this stage of the neural information processing pathway.

Genetic manipulation and targeted protein synthesis are sources that increase computational complexity by orders of magnitude, and only recently have we made advances that reveal the true extent of biological information processing.

Protein signaling cascades

As I said in the introduction of this part, I will not cover the parts of biological information processing extensively, but I want to give you enough information so that you can start learning more from here.

One thing one has to understand is that a cell looks much different from how it is displayed in textbooks.
Cells crawl with proteins: There are about 10 billion proteins in any given human cell and these proteins are not idle: They combine with other proteins, work on a task, or jitter around to find new tasks to work on.

All the functions described above are the work of proteins.
For example the key-and-lock mechanism and the channels that play the gatekeeper for the charged particles that leave and enter the neuron are all proteins.

The proteins I mean in this paragraph are not these common proteins, but proteins with special biological functions.

As an example, the abundant neurotransmitter glutamate may bind to an NMDA receptor, which then opens its channel to many different kinds of charged particles; once opened, the channel only closes when the neuron fires.

The strength of synapses is highly dependent on this process, where the synapse is adjusted according to the location of the NMDA receptor and the timing of the signals which are backpropagated to the synapses.

We know this process is critical to learning in the brain, but it is only a small piece in a large puzzle.

The charged particles which enter the neuron may additionally induce protein signaling cascades on their own.

For example, the cascade below shows how an activated NMDA receptor (green) lets calcium ions (Ca2+) inside, which triggers a cascade that eventually leads to AMPA receptors (violet) being trafficked and installed on the synapse.


Image source: 1

It was shown again and again that these special proteins have a great influence on the information processing in neurons, but it is difficult to pick out a specific type of protein from this seemingly chaotic soup of 10 billion proteins and study its precise function.

Findings are often complex with a chain of reactions involving many different proteins until a desired end-product or end-function is reached.
Often the start and end functions are known but not the exact path which led from one to the other.

Sophisticated technology helped greatly to study proteins in detail, and as technology gets better and better we will further our understanding of biological information processing in neurons.

Genetic manipulation

The complexity of biological information processing does not end with protein signaling cascades: the 10 billion proteins are not a random soup of workers that do their tasks; rather, these workers are produced in specific quantities to serve specific functions that are relevant at the moment.

All this is controlled by a tight feedback loop involving helper proteins, DNA, and messenger RNA (mRNA).
If we use programming metaphors to describe this whole process, then the DNA represents the whole github website with all its public packages, and messenger RNA is a big library which features many other smaller libraries with different functions (something like the C++ boost library).

It all begins with a programming problem you want to solve (a biological problem is detected).
You use google and stackoverflow to find recommendations for libraries which you can use to solve the problem and soon you find a post that suggests that you use library X to solve problem Y (problem Y is detected on a local level in a cell with known solution of protein X; the protein that detected this defect then cascades into a chain of protein signals which leads to the upregulation of the gene G which can produce protein X; here upregulation is a “Hey! Produce more of this, please!” signal to the nucleus of the cell where the DNA lies).

You download the library and compile it (the gene G is copied (transcribed) as a short string of mRNA from the very long string of DNA).
You then configure the install (the mRNA leaves the nucleus) with the respective configuration (the mRNA is translated into a protein; the protein may be adjusted by other proteins after this), and install the library in a global “/lib” directory (the protein folds itself into its correct form, after which it is fully functional).

After you have installed the library, you import the needed part of the library to your program (the folded protein travels (randomly) to the site where it is needed) and you use certain functions of this library to solve your problem (the protein does some kind of work to solve the problem).

In addition to this, neurons may also dynamically alter their genome; that is, they can dynamically change their github repository to add or remove libraries.
To understand this process further, you may want to watch the following video, which shows how HIV produces its proteins and how the virus can change the host DNA to suit its needs.

The process described in this video animation is very similar to what is going on in neurons.
To make it more similar to the process in neurons, imagine that HIV is a neurotransmitter and that everything contained in the HIV virion is already in the neuron in the first place.

What you have then is an accurate representation of how neurons make use of their genes and proteins:

[video=youtube;RO8MP3wMvqg]https://www.youtube.com/watch?feature=player_embedded&v=RO8MP3wMvqg[/video]

You may ask, isn’t it so that every cell in your body has (almost) the same DNA in order to be able to replicate itself?
Generally, this is true for most cells, but not true for most neurons.

Neurons will typically have a genome that is different from the original genome that you were born with.
Neurons may have additional or fewer chromosomes and have sequences of information removed or added from certain chromosomes.

It was shown that this behavior is important for information processing, and if it goes awry, it may contribute to brain disorders like depression or Alzheimer’s disease.
Recently it was also shown that neurons change their genome on a daily basis to meet information processing demands.

So when you sit at your desk for five days, and then on the weekend decide to go on a hike, it makes good sense that the brain adapts its neurons for this new task, because entirely different information processing is needed after this change of environment.

Equally, in an evolutionary sense, it would be beneficial to have different “modes” for hunting/gathering and for social activity within the village – and it seems that this mechanism might serve something like this purpose. In general, the biological information processing apparatus is extremely efficient in responding to slower information processing demands that range from minutes to hours.

With respect to deep learning, an equivalent function would be to alter the function of a trained convolutional net in significant but rule-based ways; for example, to apply a transformation to all parameters when changing from one task to another (recognition of street numbers -> transform parameters -> recognition of pedestrians).

Nothing of this biological information processing is modeled by the LNP model.
Looking back at all this, it seems rather strange that so many researchers think that they can replicate the brain’s behavior by concentrating on electrochemical properties and inter-neuron interactions alone.

Imagine that every unit in a convolutional network has its own github, from which it learns to dynamically download, compile and use the best libraries to solve a certain task.
From all this you can see that a single neuron is probably more complex than an entire convolutional net, but we continue from here in our focus on electrochemical processes and see where it leads us.

Back to the LNP model

After all this above, there is only one more relevant step in information processing for our model.
Once a critical level of depolarization is reached, a neuron will most often fire, but not always.

There are mechanisms that prevent a neuron from firing.
For example, shortly after a neuron has fired, its electric potential is too positive to produce a fully-fledged action potential, and thus it cannot fire again.

This blockage may be present even when a sufficient electric potential is reached, because this blockade is a biological function and not a physical switch.

In the LNP model, this blockage of an action potential is modeled as an inhomogeneous Poisson process which has a Poisson distribution.

A Poisson process with a Poisson distribution as a model means that the neuron has a very high probability of firing the first or second time it reaches its threshold potential, but it may also happen (with exponentially decreasing probability) that the neuron does not fire for many more cycles.


A Poisson(0.5) distribution with a randomly drawn sample. Here 0, 1, 2, 3 represent the waiting time until the neuron fires; thus 0 means it fires without delay, while 2 means it will not fire for two cycles even if it physically could.

There are exceptions to this rule, where neurons disable this mechanism and fire continuously at the rates which are governed by the physics alone – but these are special events which I will ignore at this point.

Generally, this whole process is very similar to the dropout used in deep learning, which uses a Bernoulli distribution instead of a Poisson distribution; thus this process can be viewed as a kind of regularization method that the brain uses instead of dropout.
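As a quick sketch of what such a spike generator looks like in code (the rate of 0.5 matches the figure above; the sample size and seed are just illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Waiting times (in firing opportunities) drawn from a Poisson(0.5)
# distribution: 0 means the neuron fires immediately, 2 means it skips
# two cycles even though the threshold potential was reached.
waits = rng.poisson(lam=0.5, size=10_000)

# Probability of firing without delay; for Poisson(0.5) this is
# exp(-0.5), roughly 0.61.
p_immediate = np.mean(waits == 0)
```

Most draws are 0, so the neuron usually fires at the first opportunity, and longer waits become rapidly rarer – the behavior described above.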

In the next step, if the neuron fires, it releases an action potential.
The action potential varies very little in its amplitude, meaning the electric potential generated by the neuron almost always has the same magnitude, and thus is a reliable signal.

As this signal travels down the axon it gets weaker and weaker.
When it flows into the branches of the axon terminal, its final strength will be dependent on the shape and length of these branches; so each axon terminal will receive a different amount of electrical potential.

This spatial information, together with the temporal information due to the spiking pattern of action potentials, is then translated into electrochemical information (it was shown that they are translated into spikes of neurotransmitters themselves that last about 2ms).

To adjust the output signal, the axon terminal can move, grow or shrink (spatial), or it may alter its protein makeup which is responsible for releasing the synaptic vesicles (temporal).

Now we are back at the beginning: Neurotransmitters are released from the axon terminal (which can be modeled as a dense matrix multiplication) and the steps repeat themselves.

Learning and memory in the brain

Now that we have gone through the whole process from start to finish, let us put it all into context to see how the brain uses all of this in concert.
Most neurons repeat the process of receive-inputs-and-fire about 50 to 1000 times per second; the firing frequency is highly dependent on the type of neuron and on whether the neuron is actively processing a task.

Even if a neuron does not process a task it will fire continuously in a random fashion.
Once some meaningful information is processed, this random firing activity makes way for a highly synchronized activity between neighboring neurons in a brain region.

This synchronized activity is poorly understood, but is thought to be integral to understanding information processing in the brain and how it learns.

Currently, it is not precisely known how the brain learns.

We do know that it adjusts synapses with some sort of reinforcement learning algorithm in order to learn new memories, but the precise details are unclear, and the weak and contradictory evidence indicates that we are missing some important pieces of the puzzle.

We got the big picture right, but we cannot figure out the brain’s learning algorithm without the fine detail which we are still lacking.

Concerning memories, we know that some memories are directly stored in the hippocampus, the main learning region of the brain (if you lose the hippocampus in both brain hemispheres, you cannot form new memories).

However, most long-term memories are created and integrated with other memories during the REM sleep phase, when so-called sleep spindles unwind the information of your hippocampus to all other brain areas.

Long-term memories are generally all local: Your visual memories are stored in the visual system; your memories for your tongue (taste, texture) are stored in the brain region responsible for your tongue, etcetera.

It is also known that the hippocampus acts as a memory buffer.
Once it is full, you need to sleep to empty its contents to the rest of your brain (through sleep spindles during REM sleep); this might be why babies sleep so much and so irregularly – once their learning buffer is full, they sleep to quickly clear their buffer in order to learn more after they wake.

You can still learn when this memory buffer is full, but retention is much worse, and new memories might wrangle with other memories in the buffer for space and displace them – so really get your needed amount of sleep.

Sleeping less and irregularly is unproductive, especially for students who need to learn.


The hippocampus in each hemisphere is shown in red. Image source: 1

Because memories are integrated with other memories during your “write buffer to hard-drive” stage, sleep is also very important for creativity.
The next time you recall a certain memory after you slept, it might be altered with some new information that your brain thought to be fitting to attach to that memory.

I think we have all had this: We wake up with some crazy new idea, only to see that it was quite nonsensical in the first place – so our brain is not perfect either and makes mistakes.
But other times it just works: One time I tortured myself with a math problem for 7 hours non-stop, only to go to bed disappointed with only about a quarter of the whole problem solved.

After I woke, I immediately had two new ideas for how to solve the problem: The first did not work, but the second made things very easy and I could sketch a solution to the math problem within 15 minutes – an ode to sleep!

Now why do I talk about memories when this blog post is about computation?
The thing is that memory creation – or in other words, a method to store computed results for a long time – is critical for any intelligence.

In brain simulations, one is satisfied if the synapses and activations occur with the same distribution as they do in the real brain, but one does not care whether these synapses or activations correspond to anything meaningful – like memories or “distributed representations” needed for functions such as object recognition.

This is a great flaw.
Brain simulations have no memories.

In brain simulation, the diffusion of electrochemical particles is modeled by differential equations.
These differential equations are complex, but they can be approximated with simple techniques like Euler’s method.

The result has poor accuracy (meaning high error) but the algorithm is very computationally efficient and the accuracy is sufficient to reproduce the activities of real neurons along with their size and distribution of synapses.

The great disadvantage is that we generally cannot learn parameters from a method like this – we cannot create meaningful memories.

However, as I have shown in my blog post about convolution, we can also model diffusion by applying convolution – a very computationally complex operation.

The advantage of convolution is that we can use methods like maximum-likelihood estimation with backpropagation to learn parameters which lead to meaningful representations akin to memories (just like we do in convolutional nets).
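As a minimal illustration of why diffusion and convolution are interchangeable here, the following sketch (with made-up constants) performs one explicit Euler step of 1-D diffusion and reproduces it exactly with a 3-tap convolution:

```python
import numpy as np

# One Euler step of 1-D diffusion, and the same step written as a
# convolution with a fixed 3-tap kernel (the constants are illustrative).
D, dt, dx = 0.1, 0.01, 0.1
alpha = D * dt / dx**2                     # = 0.1

u = np.array([0.0, 0.0, 1.0, 0.0, 0.0])   # initial concentration spike

# Finite differences: u_new[i] = u[i] + alpha * (u[i-1] - 2*u[i] + u[i+1])
u_euler = u.copy()
u_euler[1:-1] = u[1:-1] + alpha * (u[:-2] - 2 * u[1:-1] + u[2:])

# The same update as a convolution over the interior points:
kernel = np.array([alpha, 1 - 2 * alpha, alpha])
u_conv = np.convolve(u, kernel, mode='same')
```

The kernel weights here are fixed by the diffusion constants, which is exactly what makes the convolution view attractive: backpropagation can tune such kernels into meaningful representations, whereas the Euler update has nothing to learn.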

This is exactly akin to the LNP model with its convolution operation.

So besides its great similarity to deep learning models, the LNP model is also justified in that it is actually possible to learn parameters which yield meaningful memories (where by memories I mean distributed representations like those we find in deep learning algorithms).

This then also justifies the next point, where I estimate the brain’s complexity by using convolution instead of Euler’s method on differential equations.
Another point to take away for our model is that we currently have no complexity assigned to the creation of memories (we only modeled the forward pass, not the backward pass with backpropagation).

As such, we underestimate the complexity of the brain, but because we do not know how the brain learns, we cannot make any accurate estimates for the computational complexity of learning.

With that said and kept in the back of our mind, let us move on to bringing the whole model together for a lower bound of computational complexity.

Bringing it all together for a mathematical estimation of complexity



The next part is a bit tricky: We need to estimate the numbers for N, M, n and m and these differ widely among neurons.

We know that 50 billion of the 86 billion neurons in the brain are cerebellar granule neurons, so these neurons and their connections will be quite important in our estimation.

Cerebellar granule neurons are very tiny neurons with about 4 dendrites.
Their main input is from the cortex.

They integrate these signals and then send them along a T-shaped axon which feeds into the dendrites of Purkinje neurons.

Purkinje neurons are by far the most complex neurons, but there are only about 100 million of them.

They may have more than 100,000 synapses each and about 1000 dendrites.
Multiple Purkinje neurons bundle their outputs in about a dozen deep nuclei (a bunch of densely packed neurons) which then send signals back to the cortex.

This process is very crucial for non-verbal intelligence, abstract thinking, and abstract creativity (creativity: name as many words beginning with the letter A as you can; abstract creativity: What if gravity bends space-time? – general relativity; What if these birds belonged to the same species when they came to this island? – evolution).

It was thought a few decades ago that the cerebellum only computes outputs for movement; for example, while Einstein’s cerebrum was handled and studied carefully, his cerebellum was basically just cut off and put away, because it was regarded as a “primitive” brain part.

But since then it was shown that the cerebellum forms 1:1 connections with most brain regions of the cortex.
Indeed, changes in the front part of the cerebellum during the ages 23 to 25 may change your non-verbal IQ by up to 30 points, and changes of 10-15 IQ points are common.

This is very useful in most instances, as we lose neurons which perform functions that we do not need in everyday life (calculus, or the foreign language which you learned but never used).

So it is crucial to get the estimation of the cerebellum right not only because it contains most neurons, but also because it is important for intelligence and information processing in general.

Estimation of cerebellar filter dimensions

Now if we look at a single dendrite, it branches off into a few branches and thus has a tree like structure.
Along its total length it is usually packed with synapses.

Dendritic spikes can originate in any branch of a dendrite (spatial dimension).
If we take 3 branches per dendrite, and 4 dendrites in total, we have a convolutional filter of size 3 and 4 for cerebellar granule neurons.

Since linear convolution over two dimensions with a separable kernel is the same as convolution over one dimension followed by convolution over the other dimension, we can also model this as a single 3×4 convolution operation.

Also note that this is mathematically identical to a model that describes the diffusion of particles originating from different sources (feature map) which diffuse according to a rule in their neighborhood (kernel) – this is exactly what happens at a physical level.

More on this view in my blog post about convolution.
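The row-then-column decomposition is easy to verify numerically for a separable kernel. Here is a small numpy check, using the 3 and 4 from the granule-neuron filter as kernel sizes (the input size and all values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((6, 8))         # arbitrary 2-D input

u = np.array([1.0, 2.0, 1.0])           # 1-D kernel, length 3
v = np.array([0.5, 1.0, 0.5, 0.25])     # 1-D kernel, length 4
k = np.outer(u, v)                      # separable 3x4 kernel

def conv2d_full(img, ker):
    # Full 2-D linear convolution by explicit scatter-add.
    n, m = img.shape
    p, q = ker.shape
    out = np.zeros((n + p - 1, m + q - 1))
    for i in range(n):
        for j in range(m):
            out[i:i + p, j:j + q] += img[i, j] * ker
    return out

# Convolve each row with v, then each column with u:
rows = np.apply_along_axis(lambda r: np.convolve(r, v), 1, x)
both = np.apply_along_axis(lambda c: np.convolve(c, u), 0, rows)
```

This only works because the 3×4 kernel is an outer product of the two 1-D kernels; a general 3×4 kernel cannot be split this way.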

Here I have chosen to represent the spatial domain with a single dimension.
It was shown that the shape of the dendritic tree is also important for the resulting information processing, and thus we would need two dimensions for the spatial domain.

However, data is lacking to represent this mathematically in a meaningful way and thus I proceed with the simplification to one spatial dimension.

The temporal dimension is also important here: Charged particles may linger for a while until they are pumped out of the neuron. It is difficult to estimate a meaningful time frame, because the brain uses continuous time while our deep learning algorithms only know discrete time steps.

No single estimate makes sense from a biological perspective, but from a psychological perspective we know that the brain can take up unconscious information that is presented in an image in about 20 milliseconds (this involves only some fast, special parts of the brain).

For conscious recognition of an object we need more time – at least 65 milliseconds, and on average about 80-200 milliseconds for reliable conscious recognition.
This involves all the usual parts that are active for object recognition.

From these estimates, one can think about this process as “building up the information of the seen image over time within a neuron”.
However, a neuron can only process information if it can differentiate meaningful information from random information (remember, neurons fire randomly if they do not actively process information).

Once a certain level of “meaningful information” is present, the neuron actively reacts to that information.
So in a certain sense, information processing can be thought of as an epidemic of useful information that spreads across the brain: Information can only spread to a neuron if the neighboring neuron is already infected with this information.

Thinking in this way, such an epidemic of information infects all neurons in the brain within 80-200 milliseconds.

As such we can say that, while the object lacks details in the first 20 milliseconds, there is full detail at about 80-200 milliseconds.

If we translate this into discrete images at the rate of 30 frames per second (normal video playback) –or in other words time steps – then 20 milliseconds would be 0.6 time steps, and 80-200 milliseconds 2.4-6 time steps.

This means that all the visual information that a neuron needs for its processing will be present in the neuron within 2.4 to 6 frames.

To make calculations easier, I here now choose a fixed time dimension of 5 time steps for neural processes.
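The conversion from milliseconds to time steps is just a division by the frame duration; a quick check of the numbers above:

```python
frame_rate = 30                    # frames per second (normal video playback)
ms_per_step = 1000 / frame_rate    # one time step is about 33.3 ms

def to_steps(milliseconds):
    # Convert a duration into discrete 30-fps time steps.
    return milliseconds / ms_per_step

unconscious = to_steps(20)     # about 0.6 steps
conscious_lo = to_steps(80)    # about 2.4 steps
conscious_hi = to_steps(200)   # 6 steps
```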

This means for the dendrites we have spatio-temporal convolutional filters of size 3x4x5 for cerebellar granule neurons.
For Purkinje neurons a similar estimate would be filters of a size of about 10x1000x5.

The non-linearity then reduces these inputs to a single number for each dendrite.
This number represents an instantaneous firing rate, that is, the number represents how often the neuron fires in the respective interval of time, for example at 5 Hz, 100 Hz, 0 Hz etcetera.

If the potential is too negative, no spike will result (0 Hz); if the potential is positive enough, then the magnitude of the spike rate is often proportional to the magnitude of the electric potential – but not always.

It was shown that dendritic summation of this firing rate can be linear (the sum), sub-linear (less than the sum), supra-linear (more than the sum) or bistable (less than the sum, or more than the sum, depending on the respective input); these behaviors of summation often differ from neuron to neuron.

It is known that Purkinje neurons use linear summation, and thus their summation to form a spike rate is very similar to the rectified linear function max(0,x) which is commonly used in deep learning.

Non-linear sums can be thought of as different activation functions.
It is important to add that the activation function is determined by the type of the neuron.
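For the linear case, a sketch of the summation-plus-rectification step (the branch firing rates are made-up numbers):

```python
import numpy as np

def relu(x):
    # Rectified linear function max(0, x).
    return np.maximum(0.0, x)

# Linear dendritic summation followed by rectification: the summed
# potential maps to an instantaneous firing rate, and a too-negative
# sum yields 0 Hz.
inhibited = relu(np.sum(np.array([5.0, -20.0, 12.0])))   # sum is -3 -> 0 Hz
excited = relu(np.sum(np.array([5.0, 20.0, 12.0])))      # sum is 37 -> 37 Hz
```

Sub-linear, supra-linear, or bistable summation would replace the plain sum with a different combining function, i.e. a different activation function per neuron type.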

The filters in the soma (or cell body) can be thought of as an additional temporal convolutional filter with a size of 1 in the spatial domain.
So this is a filter that reduces the input to a single dimension with a time dimension of 5, that is, a 1x1x5 convolutional filter (this will be the same for all neurons).

Again, the non-linearity then reduces this to an instantaneous firing rate, which then is dropped out by a Poisson process, which is then fed into a weight-matrix.
At this point I want to again emphasize that it is not correct to view the output of a neuron as binary; the information conveyed by a firing neuron is more like an if-then-else branch: “if(fire == True and dropout == False){ release_neurotransmitters(); }else{ sleep(0.02); }”

The neurotransmitters are the true output of a neuron, but this is often confused.
The source of this confusion is that it is very difficult to study neurotransmitter release and its dynamics at a synapse, while it is ridiculously easy to study action potentials.

Most models of neurons thus model the output as action potentials because we have a lot of reliable data here; we do not have such data for neurotransmitter interactions at a real-time level.

This is why action potentials are often confused as the true outputs of neurons when they are not.

When a neuron fires, this impulse can be thought of as being converted to a discrete number at the axon terminal (number of vesicles which are released) and is multiplied by another discrete number which represents the amount of receptors on the synapse (this whole process corresponds to a dense or fully connected weight in convolutional nets).

In the next step of information processing, charged particles flow into the neuron and build up a real-valued electric potential.
This also has some similarities to batch-normalization, because values are normalized into the range [0, threshold] (neuron: relative to the initial potential of the neuron; convolutional net: relative to the mean of the activations in batch-normalization).

When we look at this whole process, we can model it as a matrix multiplication between two real-valued matrices (doing a scaled normalization before or after this is mathematically equivalent, because matrix multiplication is a linear operation).

Therefore we can think of axon-terminal-synapse interactions between neurons as a matrix multiplication between two real-valued matrices.
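A minimal numpy sketch of this view (the counts are random placeholders, not measured values):

```python
import numpy as np

rng = np.random.default_rng(2)

# Rows: firing neurons; columns: receiving neurons.  Vesicle counts at
# the axon terminals times receptor counts at the synapses act like a
# dense (fully connected) weight layer.
vesicles = rng.integers(1, 10, size=(4, 3)).astype(float)
receptors = rng.integers(1, 10, size=(3, 5)).astype(float)
potentials = vesicles @ receptors

# Scaled normalization before or after the product is equivalent,
# because matrix multiplication is linear:
s = 0.1
before = (s * vesicles) @ receptors
after = s * (vesicles @ receptors)
```

The last two lines illustrate the normalization remark: scaling before or after the multiplication gives identical results, so it does not matter where in the pipeline we place it.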

Estimation of cerebellar input/output dimensions

Cerebellar granule neurons typically receive inputs from about four axons (most often connections from the cortex).
Each axon forms about 3-4 synapses with the dendritic claw of the granule neuron (a dendrite ending shaped as if you were holding a tennis ball in your hand), so there are a total of about 15 synaptic inputs to the granule neuron.

The granule neuron itself ends in a T shaped axon which crosses directly through the dendrites of Purkinje neurons with which it forms about 100 synapses.

Purkinje neurons receive inputs from about 100,000 connections made with granule neurons, and they themselves make about 1000 connections in the deep nuclei.

There are estimates which are much higher, and as far as I know no accurate count of the number of synapses exists.
The number of 100,000 synapses might be a slight overestimate (but 75,000 would be too conservative), but I use it anyway to make the math simpler.

All these dimensions are multiplied by the time dimension as discussed above, so that the input for granule neurons, for example, has a dimensionality of 15×5.
So with this we can finally calculate the complexity of a cerebellar granule neuron together with the Purkinje neurons.



So my estimate would be 1.075×10^21 FLOPS for the brain; the fastest computer on earth as of July 2013 has 0.58×10^15 FLOPS for practical applications (more about this below).
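The formulas referenced above were embedded as images; purely as a sanity check, here is my own back-of-the-envelope reconstruction from the numbers in this section. The counting conventions (a multiply-add as 2 FLOPs, convolution cost as input size times filter size) and the 100 Hz update rate are my assumptions, so this only lands in the same ballpark as the 10^21 figure, not exactly on it:

```python
# Rough reconstruction -- all counting conventions here are assumptions.
granule_count = 50e9            # cerebellar granule neurons
purkinje_count = 100e6          # Purkinje neurons

granule_input = 15 * 5          # synaptic inputs x time steps
granule_filter = 3 * 4 * 5      # spatio-temporal dendritic filter
purkinje_input = 100_000 * 5
purkinje_filter = 10 * 1000 * 5

# Convolution cost ~ input size x filter size, 2 FLOPs per multiply-add:
granule_flop = 2 * granule_input * granule_filter      # per update
purkinje_flop = 2 * purkinje_input * purkinje_filter   # per update

rate = 100                      # assumed updates per second
total = rate * (granule_count * granule_flop +
                purkinje_count * purkinje_flop)        # ~5e20 FLOPS
```

Notably, the Purkinje term dominates everything else by about four orders of magnitude, which is why getting the cerebellar numbers right matters so much for the estimate.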

 
[MENTION=6917]sprinkles[/MENTION]

I will put the final parts up later or tomorrow!
 
[MENTION=5045]Skarekrow[/MENTION]

The brain is amazing in its adaptability.

Want to hear something funny? I'm so used to solving Rubik's cubes quickly that I can't solve them slowly anymore. If I try to slow down too much - say to show someone the moves - my brain starts to freak out because I'm throttling it when it wants to go fast and it does this time warpy thing where it is trying to anticipate things that haven't happened yet and I start to feel like the world is in slow motion and I start feeling dizzy and lightheaded.
 