@Skarekrow

The brain is amazing in its adaptability.

Want to hear something funny? I'm so used to solving Rubik's cubes quickly that I can't solve them slowly anymore. If I try to slow down too much - say to show someone the moves - my brain starts to freak out because I'm throttling it when it wants to go fast and it does this time warpy thing where it is trying to anticipate things that haven't happened yet and I start to feel like the world is in slow motion and I start feeling dizzy and lightheaded.

I can see that…that’s kind of how it was in surgery for me…I hated training incompetent people…for me, there was one speed, one focus, one way to do things.
 
The third part to the Deep Learning article.
Enjoy!


Part III: Limitations and criticism

While I discussed how the brain is similar to deep learning, I did not discuss how the brain is different.
One great disparity is that the dropout in the brain works with respect to all inputs, while dropout in a convolutional network works with respect to each single unit.

What the brain is doing makes little sense in deep learning right now; however, if you think about combining millions of convolutional nets with each other, it makes good sense to do as the brain does.

Dropout in the brain would certainly work well to decouple the activity of neurons from one another: no neuron can depend on information from any single other neuron (that neuron might be dropped out), so each neuron is forced to take into account all the neurons it is connected to, which eliminates biased computation (in other words, regularization).
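To make the contrast concrete, here is a minimal NumPy sketch of one reading of this distinction (my own illustration, not from the article): standard per-unit dropout masks every activation independently, while an "input-wise" dropout silences a whole presynaptic neuron for all of its downstream targets at once.

```python
import numpy as np

rng = np.random.default_rng(0)
p_drop = 0.5                       # probability of dropping

activations = rng.random((4, 6))   # 4 presynaptic neurons, 6 downstream targets

# Standard (per-unit) dropout: each activation is masked independently,
# so a neuron may be visible to some targets and dropped for others.
unit_mask = rng.random(activations.shape) > p_drop
per_unit = activations * unit_mask / (1 - p_drop)

# "Input-wise" dropout as described above: a whole presynaptic neuron is
# dropped with respect to ALL of its targets, so no target can rely on it.
neuron_mask = rng.random((activations.shape[0], 1)) > p_drop
per_input = activations * neuron_mask / (1 - p_drop)

print(per_unit)
print(per_input)
```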

Another limitation of the model is that it is a lower bound.
This estimate does not take into account:


  • Neural backpropagation, i.e. signals that travel from the soma to the dendrites; the action potential is also reflected within the axon and travels backwards (these two effects may almost double the complexity)
  • Axon terminal information processing
  • Multi-neurotransmitter vesicles (can be thought of as multiple output channels or filters, just as an image has multiple colors)
  • Geometrical shape of the dendritic tree
  • Dendritic spine information processing
  • Non-axodendritic synapses (axon-axon and axon-soma connections)
  • Electrical synapses
  • Neurotransmitter induced protein activation and signaling
  • Neurotransmitter induced gene regulation
  • Voltage induced (dendritic spikes and backpropagating signals) gene regulation
  • Voltage induced protein activation and signaling
  • Glial cells (besides having an extremely unusual brain, about one in a billion, Einstein also had abnormally high levels of glial cells)

All these things have been shown to be important for information processing in the brain.
I did not include them in my estimate because this would have made everything:


  • Too complex: What I have discussed so far is extremely simple if you compare that to the vastness and complexity of biological information processing
  • Too special: Non-axodendritic synapses can have unique information processing algorithms completely different from everything listed here, e.g. direct electrical communication between a neighboring bundle of neurons
  • And/or evidence is lacking to create a reliable mathematical model: Neural backpropagation, geometry of the dendritic trees, and dendritic spines

Remember that these estimates are for the whole brain.
Local brain regions might have higher computational processing speed than this average when they are actively processing stimuli.

Also remember that the cerebellum accounts for almost all of this computational processing.
Other brain regions integrate the knowledge of the cerebellum, but the cerebellum acts as a transformation and abstraction module for almost all information in the brain (except vision and hearing).

But wait, we can do all this with much less computational power!
We already have super-human performance in computer vision!


I would not say that we have super-human performance in computer vision.
What we have is a system that beats humans at naming things in images that are taken out of the context of the real world (what happens before we see something in the real world shapes our perception dramatically).

We almost always can recognize things in our environment, but we most often just do not know (or care about) the name of what we see.

The human visual system did not evolve to label things.

Try to make a list of 1000 common physical objects in the real world – not an easy task.

For us humans, not recognizing an object would mean that we see it but cannot make sense of it.

If you forgot the name of an old classmate, it does not mean you did not recognize her; it just means you forgot her name.
Now imagine you get off a train stop and you know a good friend is waiting for you somewhere at the stop.

You see somebody 300 meters away, waving their hands and looking in your direction – is it your friend?
You do not know; you cannot recognize whether it is her.

That’s the difference between mere labels and object recognition.

Now if you cannot recognize something in a 30×30 pixel image, but the computer can, this also does not necessarily mean that the computer has super-human object recognition performance.

First and foremost this means that your visual system does not work well with pixelated information.
Our eyes are just not used to that.

Now take a look outside a window and try to label all the things you see.
It will be very easy for most things, but for other things you do not know the correct labels!

For example, I do not know the name for a few plants that I see when I look out of my window.
However, we are fully aware of what we see and can name many details of the object.

For example, just by assessing their appearance, I know a lot about how much water and sunshine the unknown plants need, how fast they grow, in which way they grow, and whether they are old or young specimens; I know how they would feel if I touched them – or, more generally, I know how these plants grow biologically, how they produce energy, and so on.

I can do all this without knowing their names.
Current deep learning systems cannot do this and will not do this for quite some time.

Human-level performance in computer vision is far away indeed!
We just reached the very first step (object recognition) and now the task is to make computer vision smart, rather than making it just good at labeling things.

Evolutionarily speaking, the main functions of our visual system have little to do with naming the things that we see:
Hunting and avoiding being hunted, orienting ourselves in nature during foraging, making sure we pick the right berries and extract roots efficiently – these are all important functions, but probably the most important function of our vision is the social function within a group or relationship.

If you Skype with someone, the communication is quite different when they have their camera enabled compared to when they do not.
It is also very different to communicate with someone whose image is on a static 2D surface compared to communicating in person.

Vision is critical for communication.

Our deep learning systems cannot do any of this efficiently.

Making sense of a world without labels

One striking case which also demonstrates the power of vision for true understanding of the environment without any labels is the case of Genie.
Genie was strapped into place and left alone in a room at the age of 20 months.

She was found with severe malnutrition 12 years later.
She had almost no social interaction during this time and thus did not acquire any form of verbal language.

Once she got in contact with other human beings she was taught English as a language (and later also sign language), but she never really mastered it.
Instead she quickly mastered non-verbal language and was truly exceptional at that.

To strangers she almost exclusively communicated with non-verbal language.
There are instances where strangers would stop in their tracks, leave everything behind, walk up to her, and hand her a toy or another item – and that item was always something she was known to like and desire.

In one instance a woman got out of her car at a stoplight at an intersection, emptied her purse and handed it to Genie.
The woman and Genie did not exchange a word; they understood each other completely non-verbally.

So what Genie did was to pick up cues with her visual system, translate the emotional and cognitive state of the woman into non-verbal cues and actions, and then use these to change the woman's mental state.

In turn, the woman then desired to give the purse to Genie (a purse which Genie probably could not even see).

Clearly, Genie was exceptional at non-verbal communication – but what would happen if you pitted her against a deep learning object recognition system?

The deep learning system would be much better than Genie on any data set you would pick.
Do you think it would be fair to say that the convolutional net is better at object recognition than Genie is?

I do not think so.

This shows how primitive and naïve our approach to computer vision is.

Object recognition is a part of human vision, but it is not what makes it exceptional.

Can we do with less computational power?

“We do not need as much computational power as the brain has, because our algorithms are (will be) better than that of the brain.”
I hope you can see after the descriptions in this blog post that this statement is rather arrogant.

We do not know how the brain really learns.
We do not understand information processing in the brain in detail.

And yet we dare to say we can do better?

Even if we did know how the brain works in all its details, it would still be rather naïve to think we could create general intelligence with much less.
The brain developed during many hundreds of millions of years through evolution.

Evolutionarily, it is the most malleable organ there is:
The human cortex shrunk by about 10% during the last 20000 years, and the human brain adapted rapidly to the many ways we use verbal language – a very recent development in evolutionary terms.

It was also shown that the number of neurons in each animal’s brain is almost exactly the amount which it can sustain through feeding (we probably killed off the majority of all mammoths by about 20000 years ago).

We humans have such large brains because we invented fire and cooking, with which we could predigest food, making it possible to sustain more neurons.
Without cooking, our calorie intake would not be high enough to sustain our brains and we would helplessly starve (at least a few thousand years ago; now you could survive on a raw vegan diet easily – just walk into a supermarket and buy a lot of calorie-dense foods).

Given this, it is very likely that brains are exhaustively optimized to create the best information processing possible for the typical calorie intake of the respective species – the function which is most expensive in an animal will be most ruthlessly optimized to enhance survival and procreation.

This is also very much in line with all the complexity of the brain; every little function is optimized thoroughly, and only as technology advances can we understand, step by step, what this complexity is for.

There are many hundreds of different types of neurons in the brain, each with their designated function.
Indeed, neuroscientists often can differentiate different brain regions and their function by looking at the changing architecture and neuron types in a brain region.

Although we do not understand the details of how the circuits perform information processing, we can see that each of these unique circuits is designed carefully to perform a certain kind of function.

These circuits are often replicated in evolutionarily distinct species which share a common ancestor that branched off into these different species hundreds of millions of years ago, showing that such structures are evolutionarily optimal for the tasks they are processing.

The equivalent in deep learning would be to have 10000 different architectures of convolutional nets (each with its own set of activation functions and more) which we combine meticulously to improve the overall function of our algorithm. Do you really think we can build something that produces equally complex information processing but follows a simple, general architecture?

It is rather naïve to think that we can out-wit this fantastically complex organ when we are not even able to understand its learning algorithms.
On top of this, the statement that we will develop better algorithms than the brain uses is unfalsifiable.

We can only prove it once we achieve it; we cannot disprove it.
Thus it is a rather nonsensical statement that has little practical value.

Theories are usually useful even when there is not enough evidence to show that they are correct.

The standard model of physics is an extremely useful theory, used daily by physicists and engineers around the world to develop the high-tech products we enjoy; and yet this theory is not complete; it was amended just a few days ago when a new particle was proven to exist in an LHC experiment.

Imagine there were another model, but you could only use it once the existence of all its particles had been proven.
This model would then be rather useless.

If it made no predictions at all about behavior in the world, we would be unable to develop and manufacture electronics with it.
Similarly, the statement that we can develop more efficient algorithms than the brain does not help; it rather makes it more difficult to make further progress.

The brain should really be our main point of orientation.

Another argument, typical of Yann LeCun (he made a similar argument during a panel), would be:
Arguably, airplanes are much better at flying than birds are; yet if you describe the flight of birds it is extremely complex and every detail counts, while the flight of airplanes is described simply by the fluid flow around an airfoil.

Why is it wrong to expect this simplicity from deep learning when compared to the brain?

I think this argument has some truth in it, but essentially, it asks the wrong question.

I think it is clear that we need not replicate everything in detail in order to achieve artificial intelligence, but the real question is:
Where do we draw the line?

If you learn that neurons can be modeled in ways that closely resemble convolutional nets, would you go so far as to say that this model is too complex and we need to make it simpler?

 
And finally the last part!!
Hope you enjoyed them all!!


Part IV: Predicting the growth of practical computational power


There is one dominant measure of performance in high-performance computing (HPC) and this measure is floating point operations per second (FLOPS) on the High Performance LINPACK (HPL) benchmark — which measures how many computations a system can do in a second when doing distributed dense matrix operations on hundreds or thousands of computers.

The TOP500 list of supercomputers is a historical list based on this benchmark, and it is the main reference point for the performance of a new supercomputer system.

But there is a big "but" with the LINPACK benchmark.
It does not reflect the performance in real, practical applications which run on modern supercomputers on a daily basis, and thus, the fastest computers on the TOP 500 list are not necessarily the fastest computers for practical applications.

Everybody in the high performance computing community knows this, but it is so entrenched in the business routine of this area that when you design a new supercomputer system, you basically have to show that it will get a good spot on the TOP 500 in order to get funding for that supercomputer.

Sometimes such systems are practically unusable, like the Tianhe-2 supercomputer which still holds the top spot on the LINPACK benchmark after more than three years.
The potential of this supercomputer goes largely unused because it is too expensive to run (electricity) and the custom hardware (custom network, Intel Xeon Phi) requires new software, which would need years of development to reach the levels of sophistication of standard HPC software.

The Tianhe-2 runs only at roughly one third of its capacity, or in other words, it practically stands idle for nearly 2 out of 3 minutes.
The predecessor of the Tianhe-2, the Tianhe-1, the fastest computer in the world in 2010 (according to LINPACK), has not been used since 2013 for bureaucratic reasons.

While other supercomputers of similar design outside of China fare better, they typically do not perform so well in practical applications either.
This is because the accelerators used, like graphics processing units (GPUs) or Intel Xeon Phis, can deliver high FLOPS in such a setup but are severely limited by network bandwidth bottlenecks.

To correct the growing uselessness of the LINPACK benchmark, a new measure of performance was developed:
The high performance conjugate gradient benchmark (HPCG).

This benchmark runs the conjugate gradient method, which requires more communication than LINPACK and as such comes much closer to the performance numbers of real applications.
I will use this benchmark to create my estimates for a singularity.
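For readers unfamiliar with the method behind HPCG, here is a minimal, dense, single-machine conjugate gradient sketch in Python (the benchmark itself runs a preconditioned, sparse, distributed variant, so this only illustrates the kind of iteration involved); the repeated dot products are what become global reductions, and hence communication, on a cluster.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r         # on a cluster this dot product is a global reduction
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small self-test on a random symmetric positive definite matrix.
rng = np.random.default_rng(1)
M = rng.random((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.random(50)
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))   # True
```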


The TOP500 for the last decade and some data for the HPCG (data collection only began recently). The dashed lines indicate a forecast. The main drivers of computational growth are also shown: Multicore CPU, GPU, and in 2016-2017 3D memory, and some new unknown technology in 2020. Will this growth be sustainable?


However, this benchmark still dramatically overestimates the computing power that can be reached for artificial intelligence applications when we assume that these applications are based on deep learning.

Deep learning is currently the most promising technique for reaching artificial intelligence.
It is certain that deep learning – as it is now – will not be enough, but one can say for sure that something similar to deep learning will be involved in reaching strong AI.

Deep learning, unlike other applications, has an unusually high demand for network bandwidth.
It is so high that, for some supercomputer designs in the TOP 500, a deep learning application would run slower than on your desktop computer.

Why is this so?
Because parallel deep learning involves massive parameter synchronization, which requires extensive network bandwidth:
if your network bandwidth is too low, then at some point deep learning gets slower and slower the more computers you add to your system.

As such, very large systems which are usually quite fast may be extremely slow for deep learning.
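A rough back-of-envelope sketch of why this happens (all numbers below are assumptions for illustration, not measurements): with a naive synchronous scheme in which every node ships its full gradient to every other node each step, communication quickly dominates as nodes are added or as bandwidth drops.

```python
# Illustrative estimate; the parameter count, step time, and bandwidth are assumed values.
params = 100e6                 # 100M-parameter model (assumed)
bytes_per_param = 4            # float32 gradients
grad_bytes = params * bytes_per_param

compute_time = 0.25            # seconds of pure compute per step per node (assumed)

def step_time(bandwidth_gbit_s, n_nodes):
    """Naive synchronization: each node sends its full gradient to every other node."""
    link_bytes_per_s = bandwidth_gbit_s * 1e9 / 8
    comm_time = grad_bytes * (n_nodes - 1) / link_bytes_per_s
    return compute_time + comm_time

for bw in (1, 10, 100):                      # Gbit/s per node
    for n in (2, 8, 32):
        print(f"{bw:3d} Gbit/s, {n:2d} nodes: {step_time(bw, n):7.2f} s/step")
```

With a 1 Gbit/s link and 32 nodes, this toy calculation spends roughly 99 seconds per step on communication against 0.25 seconds of compute, which is the effect described above; real systems use smarter all-reduce schemes, but the bandwidth pressure remains.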

The problem with all this is that the development of new network interconnects which enable high bandwidth is difficult and advances are made much more slowly than the advances of computing modules, like CPUs, GPUs and other accelerators.

Just recently, Mellanox reached a milestone where they could manufacture switches and InfiniBand cards which operate at 100 Gbit per second.
This development is still rather experimental, and it is difficult to manufacture fiber-optic cables which can operate at this speed.

As such, no supercomputer implements this new development as of yet.
But with this milestone reached, there will not be another one for quite a while.

The doubling time for network interconnect bandwidth is about 3 years.

Similarly, there is a memory problem.

While the theoretical processing power of CPUs and GPUs keeps increasing, the memory bandwidth of RAM is almost static.
This is a great problem, because we are now at a point where it costs more time to move the data to the compute circuits than to actually do the computation.
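A small illustrative calculation of this imbalance (the peak-compute and bandwidth figures are assumed round numbers, not any specific device): for a simple element-wise operation, streaming the data from RAM takes far longer than the arithmetic itself.

```python
# Round, assumed numbers for illustration only.
flops_peak = 5e12        # 5 TFLOPS of peak compute (assumed accelerator)
mem_bandwidth = 200e9    # 200 GB/s of memory bandwidth (assumed)

n = 1e9                  # add two vectors of one billion float32 elements
bytes_moved = 3 * n * 4  # read a, read b, write the result
flops_needed = n         # one addition per element

time_compute = flops_needed / flops_peak
time_memory = bytes_moved / mem_bandwidth

print(f"compute: {time_compute*1e3:.2f} ms, memory: {time_memory*1e3:.2f} ms")
# Under these assumptions the memory traffic takes ~300x longer than the arithmetic.
```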

With new developments such as 3D memory, one can be sure that further increases in memory bandwidth will be achieved, but we have nothing after that to increase performance further.

We need new ideas and new technology.
Memory will not scale itself by getting smaller and smaller.

However, currently the biggest hurdle of them all is power consumption.
The Tianhe-2 uses 24 megawatts of power, which totals to $65k-$100k in electricity cost per day, or about $23 million per year.

The power consumed by the Tianhe-2 would be sufficient to power about 6000 homes in Germany or 2000 homes in the US (due to air conditioning usage).


An overview of how performance constraints changed from old to new supercomputers.
Adapted from Horst Simon's presentation.

Physical limitations


Furthermore, there are physical problems around the corner.
Soon, our circuits will be so small that electrons will start to show quantum effects.

One such quantum effect is quantum tunneling.
In quantum tunneling an electron sits in two neighboring circuits at once, and decides randomly to which of these two locations it will go next.

If this happened at a larger scale, it would be like charging your phone right next to your TV and the electrons deciding they want to go to your cell phone cable rather than to your TV; they jump over to the phone cable, cutting off the power to your TV.

Quantum tunneling will become relevant in 2016-2017 and has to be taken into account from then on.
New materials and "insulated" circuits are required to make everything work from that point onwards.

With new materials we need new production techniques, which will be very costly because all computer chips have relied on the same old but reliable production process.
We need research and development to make our known processes work with these new materials, and this will cost not only money but also time.

This will also fuel a continuing trend where the cost of producing computer chips increases exponentially (and growth may slow due to these costs).
Currently, the tally is at $9bn for such a semiconductor fabrication plant (fab), with costs having increased at a relatively stable rate of about 7-10% per year over the past decades.
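For a sense of what that growth rate implies, a quick bit of compound-interest arithmetic on the figures quoted above (illustrative only):

```python
import math

# Compound growth of fab cost at the quoted 7-10% per year.
cost_today = 9e9  # $9bn
for rate in (0.07, 0.10):
    doubling = math.log(2) / math.log(1 + rate)
    in_ten_years = cost_today * (1 + rate) ** 10
    print(f"{rate:.0%}/yr: cost doubles every ~{doubling:.0f} years, "
          f"~${in_ten_years/1e9:.0f}bn in ten years")
```

At 7-10% per year, the cost of a fab doubles roughly every 7 to 10 years, i.e. about $18-23bn within a decade of today's $9bn figure.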

After this, we are at the plain physical limits.
A transistor will be composed of not much more than a handful of atoms.

We cannot go smaller than this, and this level of manufacturing will require extensive efforts in order to get such devices working properly.
This will start to happen around 2025 and the growth may slow from here due to physical limitations.

Recent trends in the growth of computational power

So to summarize the previous section:
(1) LINPACK performance does not reflect practical performance because it does not test memory and network bandwidth constraints;
(2) memory and network bandwidth are now more important than computational power, however
(3) advances in memory and network bandwidth will be sporadic and cannot compete with the growth in computational power;
(4) electrical costs are a severe limitation (try to justify a dedicated power plant for a supercomputer if citizens face sporadic power outages), and also
(5) computational power will be limited by physical boundaries in the next couple of years.

It may not come as a surprise, then, that the growth in computational power has been slowing down in recent years; this is mainly due to power efficiency, which will only improve gradually, but the other factors also take their toll, like network interconnects which cannot keep up with accelerators like GPUs.

If one takes the current estimate of practical FLOPS of the fastest supercomputer, the Tianhe-2 with 0.58 petaflops on HPCG, then it would take 21 doubling periods until the lower bound of the brain’s computational power is reached.

If one uses Moore's Law, we would reach that by 2037; if we take the growth of the last 60 years, which is about 1.8 years per doubling period, we will reach it in the year 2053. If we take a more conservative estimate of 3 years per doubling period, due to the problems listed above, we will reach it in 2078.

While for normal supercomputing applications memory bandwidth is the bottleneck for practical applications as of now, this may soon change to networking bandwidth, which doubles about every 3 years. So the 2078 estimate might be quite accurate.
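The arithmetic behind these dates can be reproduced in a few lines; the baseline year is my own assumption, so the exact years land within a year or two of the figures quoted above.

```python
import math

brain_ops = 1e21        # lower-bound estimate for the brain (operations per second)
start_flops = 0.58e15   # Tianhe-2 practical performance on HPCG
start_year = 2015       # assumed baseline year

doublings = math.ceil(math.log2(brain_ops / start_flops))
print(doublings)        # -> 21 doubling periods

for label, years_per_doubling in [("1-year doubling", 1.0),
                                  ("1.8-year doubling (60-year trend)", 1.8),
                                  ("3-year doubling (bandwidth-limited)", 3.0)]:
    print(f"{label}: ~{start_year + doublings * years_per_doubling:.0f}")
```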


Growth in computing performance with respect to the HPCG benchmark. Both computing performance and factory costs are assumed to keep growing steadily at an exponential rate, with doubling periods of 18 and 36 months, respectively.


Now remember (1) that the HPCG benchmark shows much higher performance than typical deep learning applications, which rely much more on network and memory bandwidth, and (2) that my estimate for the computational complexity of the brain is a lower bound.

One can see that an estimate beyond 2100 might not be too far off.
Sustaining such a long and merciless increase in computing performance will require that we develop and implement many new ideas while operating at the border of physical limitations as soon as 2020.

Will this be possible?

Where there’s a will, there’s a way – the real question is:

Are we prepared to pay the costs?

Conclusion

Here I have discussed the information processing steps of the brain and their complexity and compared them to those of deep learning algorithms.
I focused on a discussion of basic electrochemical information processing and neglected biological information processing.

I used an extended linear-nonlinear-Poisson cascade model as groundwork and related it to convolutional architectures.
With this model, I could show that a single neuron has an information processing architecture which is very similar to current convolutional nets, featuring convolutional stages with rectified non-linearities whose activities are then regularized by a dropout-like method.

I also established a connection between max-pooling and voltage-gated channels which are opened by dendritic spikes.
Similarities to batch-normalization exist.

This straightforward similarity gives strong reason to believe that deep learning is really on the right path.
It also indicates that ideas borrowed from neurobiological processes are useful for deep learning (the problem was that progress in deep learning architectures often preceded knowledge in neurobiological processes).
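As a loose, self-contained illustration of this analogy (a sketch of my own, not the model developed in the earlier parts of the article), the stages can be lined up with familiar deep learning operations roughly like this:

```python
import numpy as np

rng = np.random.default_rng(2)

def dendritic_stage(signal, kernel):
    """Dendritic integration, loosely analogous to a 1D convolution."""
    return np.convolve(signal, kernel, mode="same")

def neuron_like_unit(signal, kernel, p_drop=0.2, pool=4):
    x = dendritic_stage(signal, kernel)
    x = np.maximum(x, 0)                     # rectification (ReLU-like thresholding)
    mask = rng.random(x.shape) > p_drop      # stochastic failure, dropout-like
    x = x * mask / (1 - p_drop)
    # dendritic-spike-driven selection, loosely analogous to max-pooling
    trimmed = x[: len(x) // pool * pool]
    return trimmed.reshape(-1, pool).max(axis=1)

signal = rng.standard_normal(64)   # toy input
kernel = rng.standard_normal(5)    # toy "synaptic weight" filter
print(neuron_like_unit(signal, kernel))
```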

My model suggests that the brain operates at a minimum of 10^21 operations per second.
With current rates of growth in computational power we could achieve supercomputers with brain-like capabilities by the year 2037, but estimates after the year 2080 seem more realistic when all evidence is taken into account.

This estimate only holds true if we succeed in overcoming limitations like physical barriers (for example quantum tunneling), capital costs for semiconductor fabrication plants, and growing electricity costs.

At the same time we constantly need to innovate to solve memory bandwidth and network bandwidth problems which are or will be the bottlenecks in supercomputing.
With these considerations taken into account, it is practically rather unlikely that we will achieve human-like processing capabilities anytime soon.

Closing remarks

My philosophy for this blog post was to present all the information on a single web page rather than scatter it around.
I think this design helps to create a more sturdy fabric of knowledge which, with its interwoven strands from different fields, helps to create a more thorough picture of the main ideas involved.

However, it has been quite difficult to organize all this information into a coherent picture and some points might be more confusing than enlightening.
Please leave a comment below to let me know if the structure and content need improvement, so that I can adjust my next blog post accordingly.

I would also love general feedback for this blog post.
Also make sure to share this blog post with your fellow deep learning colleagues.

People with raw computer science backgrounds often harbor misconceptions about the brain, its parts and how it works.
I think this blog post could be a suitable remedy for that.

Important references and sources

Neuroscience
Brunel, N., Hakim, V., & Richardson, M. J. (2014). Single neuron dynamics and computation. Current opinion in neurobiology, 25, 149-155.
Chadderton, P., Margrie, T. W., & Häusser, M. (2004). Integration of quanta in cerebellar granule cells during sensory processing. Nature, 428(6985), 856-860.
De Gennaro, L., & Ferrara, M. (2003). Sleep spindles: an overview. Sleep medicine reviews, 7(5), 423-440.
Ji, D., & Wilson, M. A. (2007). Coordinated memory replay in the visual cortex and hippocampus during sleep. Nature neuroscience, 10(1), 100-107.
Liaw, J. S., & Berger, T. W. (1999). Dynamic synapse: Harnessing the computing power of synaptic dynamics. Neurocomputing, 26, 199-206.
Ramsden, S., Richardson, F. M., Josse, G., Thomas, M. S., Ellis, C., Shakeshaft, C., … & Price, C. J. (2011). Verbal and non-verbal intelligence changes in the teenage brain. Nature, 479(7371), 113-116.
Smith, S. L., Smith, I. T., Branco, T., & Häusser, M. (2013). Dendritic spikes enhance stimulus selectivity in cortical neurons in vivo. Nature, 503(7474), 115-120.
Stoodley, C. J., & Schmahmann, J. D. (2009). Functional topography in the human cerebellum: a meta-analysis of neuroimaging studies. Neuroimage, 44(2), 489-501.
High performance computing
Dongarra, J., & Heroux, M. A. (2013). Toward a new metric for ranking high performance computing systems. Sandia Report, SAND2013-4744, 312.
PDF: HPCG Specification
Interview: Why there will be no exascale computing before 2020
Slides: Why there will be no exascale computing before 2020
Interview: Challenges of exascale computing
 
“Those who do not think outside the box are easily contained.”

~ Nicolas Manetta
 
[MENTION=5045]Skarekrow[/MENTION] & [MENTION=6917]sprinkles[/MENTION],

I apologize because you've probably already posted something about this, but I was hoping you could maybe point me in the right direction or offer some insight on something I've been thinking about...

I believe there was a conversation in the movie Clerks where one of the characters asks how a "lightsaber" knows when to stop - why doesn't it just keep projecting out (the answer was the "Force", and then the reply came "That's your answer for everything" :) )

Anyway, this is related. Correct me if I'm wrong, but quantum physicists are saying (or have been saying) that really, there is no matter, there is only energy and waves, and our brains are able to interpret these waves as matter. So the question is, how does an object "stop", or have distinct sides if it's just energy (an explanation for a 10 year old is what I'm looking for I guess :) ). I've heard that people who have taken mushrooms have witnessed objects morphing or fading away. Is there any point for one to want to see through this dimension? Once you see through it, wouldn't the regular world be boring? Just hoping to get your thoughts - thanks!
 
Well for one thing, objects don't actually stop. What happens instead is everything in a local area starts moving at the same velocity - it's all still moving just as before but you just can't see it.

Objects appear to have different sides because the energy intensity is directional, and some energy is blocked or absorbed. For example when you look at the edge of a box it looks 3D because of the way light reflects off of it, and you can't see the other side because that energy is directed away from you. If energy were perfectly evenly distributed you would not be able to identify different sides of things.

Things feel solid because of energy being absorbed and redirected, and because of inertia. When you hit something large and heavy, you feel the sudden stop because the objects mass takes all your kinetic energy and disperses it, and basically because your energy isn't strong enough the object doesn't get pushed much. But if you hit a light object the reverse happens, but relativistically it's the same scenario.
 
@Skarekrow & @sprinkles,

I apologize because you've probably already posted something about this, but I was hoping you could maybe point me in the right direction or offer some insight on something I've been thinking about...

I believe there was a conversation in the movie Clerks where one of the characters asks how a "lightsaber" knows when to stop - why doesn't it just keep projecting out (the answer was the "Force", and then the reply came "That's your answer for everything" :) )

Anyway, this is related. Correct me if I'm wrong, but quantum physicists are saying (or have been saying) that really, there is no matter, there is only energy and waves, and our brains are able to interpret these waves as matter. So the question is, how does an object "stop", or have distinct sides if it's just energy (an explanation for a 10 year old is what I'm looking for I guess :) ). I've heard that people who have taken mushrooms have witnessed objects morphing or fading away. Is there any point for one to want to see through this dimension? Once you see through it, wouldn't the regular world be boring? Just hoping to get your thoughts - thanks!

I don’t know about Star Wars physics or lightsabers, but I understand what you mean with the generalization.
It isn’t something that is easy for our minds to comprehend…our brains are wired to think practically first, and realizing that everything is energy and waves of this energy is a very abstract concept.
It’s like the Big Bang as explained by Terry Pratchett … “In the beginning there was nothing, which exploded."
Lol
It's also hard to understand concepts of science that science itself only has partial working theories of.
So while there are all sorts of forces that are said to keep matter together, it’s mostly electromagnetic in nature.
It is complex though and I think the folks at CERN do a fine job of explaining it….much better than I could.
Link - http://www.exploratorium.edu/origins/cern/ideas/standard2.html

Remember,
“What I am going to tell you about is what we teach our physics students in the third or fourth year of graduate school... It is my task to convince you not to turn away because you don't understand it. You see my physics students don't understand it... That is because I don't understand it. Nobody does.”

Richard Feynman, QED: The Strange Theory of Light and Matter

Also, just because quantum physics has some strange and bizarre properties doesn’t mean that it’s some force(s) that is/are unexplainable….it just hasn’t been explained yet…perhaps never will.
As far as us being able to see things morphing into each other or fading away…they very well could be. Even though I am an advocate of mushrooms, there are still people who will have a “bad trip” and experience or see things that are unpleasant…this also doesn’t mean that it isn’t happening. IMO, from extensive reading and studying on the subject, mushrooms open the doors of perception, but you are still filtering this through your brain…just perhaps less filtered, so more of raw reality is what you experience…but your anxieties, level of maturity, and many other things also factor into what type of experience a person has.
As far as coming back to this reality and possibly experiencing it as “dull” or “boring” I would say is more of a problem for people who actually have the near death experience and have truly crossed that threshold between worlds untethered from their body in any way.
Most of the people who have been in the scientific experiments that have been done using psilocybin have rated their experience (granted these were high doses intravenously) as the most important experience in their life or as important as the birth of a child.
If one were to start, I would say prepare yourself mentally and spiritually, have a babysitter, some relaxing music and start slow.
Personally, I think the experience should be done in reverence and not just because it’s fun to trip.
I think it would only inspire my faith and drive in the idea of consciousness surviving and our brain/mind having a dual nature.
 
[MENTION=5045]Skarekrow[/MENTION]

Also it's hard to imagine anything in more dimensions when one isn't willing to give up the rules of 3D. 4D and higher do not obey those same rules - the first thing that you see break when using a 3D mind is that space seems to fold in on itself and rather than just accept this idea, the 3D mind rejects it. We can't get into higher dimensions while doing that. We have to accept the oddities without judging them.
 
@Skarekrow

Also it's hard to imagine anything in more dimensions when one isn't willing to give up the rules of 3D. 4D and higher do not obey those same rules - the first thing that you see break when using a 3D mind is that space seems to fold in on itself and rather than just accept this idea, the 3D mind rejects it. We can't get into higher dimensions while doing that. We have to accept the oddities without judging them.

Totally.
That hit the nail on the head.
 
So... I didnt have the patience to read through this whole thread today, (highly interested, how things work, my kinda thread) I hate being so late to this, and my OCD says I need to read every single thread, though I doubt I will actually do that. But the first page the song "Dust in the wind" came to mind, then when talking about rubix cubes and surgery and teaching people, I severely wonder if its a trait of INFJs.

When I need to know how to do something, I pick it up quickly, I typically simplify the process (work smarter not harder) and then when I have to train someone, if they are not as quick as me I get annoyed. For years I expressed this by being condescending, and when I learned that was socially not acceptable, I had to evolve to be nicer about it.
 
So... I didnt have the patience to read through this whole thread today, (highly interested, how things work, my kinda thread) I hate being so late to this, and my OCD says I need to read every single thread, though I doubt I will actually do that. But the first page the song "Dust in the wind" came to mind, then when talking about rubix cubes and surgery and teaching people, I severely wonder if its a trait of INFJs.

When I need to know how to do something, I pick it up quickly, I typically simplify the process (work smarter not harder) and then when I have to train someone, if they are not as quick as me I get annoyed. For years I expressed this by being condescending, and when I learned that was socially not acceptable, I had to evolve to be nicer about it.

Firstly, thank you for expressing your interest in my thread…I actually take great pride in the contents that I have accumulated over the course of a couple years now I think?
I can’t say that everything posted here is 100% correct, a good portion of it is speculation and theory…but it’s such things that my mind really enjoys pondering…I hope you enjoy what you read and feel free to post any related stories about anything that might pertain to the variety of subjects covered.

Yeah…training someone in surgery is a little unforgiving I guess is a good word.
It really is something you are cut out for or not (no pun intended).
I usually didn’t have to say too much once the surgery began, the Surgeons were more than happy to critique every. single. movement. you make.
Which is how it should be in such a field.
I was always a very strong patient advocate…this person is under anesthesia and is at the mercy of the people in the room to make sure that everything is done exactly how it should be the first time…there are already too many unknown factors going into a surgery, and when you throw an inexperienced person into the mix, there were plenty of times where I had to step in…I mean, I would help them throughout the surgery with hints and clues as to what we are doing and what comes next (as well as the next 6 steps), but if I saw that harm could be done I stopped it before it began most of the time.
It usually took some time of seeing that someone was competent, or that they were making the proper effort to correct anything that was identified, before I would let my guard down a little and back off (which, with autonomy being the goal, was good).
 
My passion as a youngster was to take me into pediatrics, however after 2 heart wrenching days in the NICU with my daughter made me decide, I couldn't do it. I'm not cut out to have someone's life in my hands. Thank you for being that kind of person, and thank you for ensuring those you train are responsible. My line of work is much more forgiving. I could never decide on anything once my original plan fell through, so I am in a rut of entry level customer service jobs, which I usually get promoted to some sort of lead or supervisor, where my nerves get super frazzled, so I then move on to another entry level spot. Every place I worked ended up setting higher standards for all employees because if I could do it, well anyone should be able to. I'm sorry for those who have to deal with what I left behind.

I really feel like I hijacked this thread though lol. I feel bad now, I'll stop haha.
 
My passion as a youngster was to take me into pediatrics, however after 2 heart wrenching days in the NICU with my daughter made me decide, I couldn't do it. I'm not cut out to have someone's life in my hands. Thank you for being that kind of person, and thank you for ensuring those you train are responsible. My line of work is much more forgiving. I could never decide on anything once my original plan fell through, so I am in a rut of entry level customer service jobs, which I usually get promoted to some sort of lead or supervisor, where my nerves get super frazzled, so I then move on to another entry level spot. Every place I worked ended up setting higher standards for all employees because if I could do it, well anyone should be able to. I'm sorry for those who have to deal with what I left behind.

I really feel like I hijacked this thread though lol. I feel bad now, I'll stop haha.

Our careers define us when we first start out in the adult world, but I think as you get older it becomes less and less important (at least it has to me), and you see that someone’s profession is rarely a good measure of a person…I’ve known some really shitty Doctors…I mean like, bad people.
We all just do the best we can with the cards we are dealt.
I didn’t exactly plan to have arthritis in my back and gimp around the house when it really hurts, but such is life…oh how joyous it is…hahaha.
 
Our careers define us when we first start out in the adult world, but I think as you get older it becomes less and less important (at least it has to me), and you see that someone’s profession is rarely a good measure of a person…I’ve known some really shitty Doctors…I mean like, bad people.
We all just do the best we can with the cards we are dealt.
I didn’t exactly plan to have arthritis in my back and gimp around the house when it really hurts, but such is life…oh how joyous it is…hahaha.

I've known some shitty doctors too. 7 years of complaints before my tumor was even considered a possibility. Nope, all my issues were all normal in their minds. And my dad suffered a spinal cord injury last year and in the C4 and C5 vertebrae he is a quadriplegic. Well when I would show up at the hospital and his food was on his tray and cold, I would ask him about it, and he would tell me, well they didnt have time to feed me. I got kicked out of the hospital a few times. I hate confrontation, but dont mess with those that I love. And you work in a hospital where everyone has either a severe brain injury, or a spinal cord injury and most of them cant feed themselves. Find a new job ass wipes.
 
Also, I've had arthritis in my knees since i was 16 thanks to gymnastics. It's an awful terrible thing. I couldn't imagine it in my back. That must be terrible.
 
I've known some shitty doctors too. 7 years of complaints before my tumor was even considered a possibility. Nope, all my issues were all normal in their minds. And my dad suffered a spinal cord injury last year and in the C4 and C5 vertebrae he is a quadriplegic. Well when I would show up at the hospital and his food was on his tray and cold, I would ask him about it, and he would tell me, well they didnt have time to feed me. I got kicked out of the hospital a few times. I hate confrontation, but dont mess with those that I love. And you work in a hospital where everyone has either a severe brain injury, or a spinal cord injury and most of them cant feed themselves. Find a new job ass wipes.
Sorry to hear about the difficulties you have had to face.
I’ve gotten into my fair share of arguments with the Doctors when I saw something wasn’t right.
That’s one thing about us INFJs: if we judge that you are doing something harmful or unjust to another person, we step up for the most part and don’t back down, even if this complicates our lives.

Also, I've had arthritis in my knees since i was 16 thanks to gymnastics. It's an awful terrible thing. I couldn't imagine it in my back. That must be terrible.
I’m okay…I get by with help from my loved ones.
I have good days and bad days physically, mentally, and spiritually…sometimes I crack after it has built up for some time (“it” being in one state of pain or another constantly).
Plus, the side effects from the drugs suck sometimes too.
But…you know…everyone’s life is subjective…it’s how we continue to live our lives when faced with such difficult things that really define you.
I would like to not let my pain be part of the definition of who I am, but that would be living in denial.
It is what it is.
 
But…you know…everyone’s life is subjective…it’s how we continue to live our lives when faced with such difficult things that really define you.
.

I fully agree with you on this one. I have some very not proud moments of judging others for not reacting the way I did in certain situations. But my actions/reactions made me the person I am today, and them the person they are, whether for better or worse. (I still dont like a few of them and their decisions though lol. I havent grown out of my immaturity of that)
 