Feeling the future:
A meta-analysis of 90 experiments on the anomalous anticipation of random future events

[version 1; referees: awaiting peer review]


Daryl Bem, Patrizio Tressoldi, Thomas Rabeyron, Michael Duggan
Author affiliations

Grant information: The author(s) declared that no grants were involved in supporting this work.


Abstract

In 2011, one of the authors (DJB) published a report of nine experiments in the Journal of Personality and Social Psychology purporting to demonstrate that an individual’s cognitive and affective responses can be influenced by randomly selected stimulus events that do not occur until after his or her responses have already been made and recorded, a generalized variant of the phenomenon traditionally denoted by the term precognition.

To encourage replications, all materials needed to conduct them were made available on request.
We here report a meta-analysis of 90 experiments from 33 laboratories in 14 countries which yielded an overall effect greater than 6 sigma, z = 6.40, p = 1.2 × 10⁻¹⁰, with an effect size (Hedges’ g) of 0.09.

A Bayesian analysis yielded a Bayes Factor of 1.4 × 10⁹, greatly exceeding the criterion value of 100 for “decisive evidence” in support of the experimental hypothesis.

When DJB’s original experiments are excluded, the combined effect size for replications by independent investigators is 0.06, z = 4.16, p = 1.1 × 10⁻⁵, and the BF value is 3,853, again exceeding the criterion for “decisive evidence.”

The number of potentially unretrieved experiments required to reduce the overall effect size of the complete database to a trivial value of 0.01 is 544, and seven of eight additional statistical tests support the conclusion that the database is not significantly compromised by either selection bias or by “p-hacking”–the selective suppression of findings or analyses that failed to yield statistical significance.

P-curve analysis, a recently introduced statistical technique, estimates the true effect size of our database to be 0.20, virtually identical to the effect size of DJB’s original experiments (0.22) and the closely related “presentiment” experiments (0.21).

We discuss the controversial status of precognition and other anomalous effects collectively known as psi.

Corresponding author: Daryl Bem

How to cite: Bem D, Tressoldi P, Rabeyron T and Duggan M. Feeling the future: A meta-analysis of 90 experiments on the anomalous anticipation of random future events [version 1; referees: awaiting peer review]. F1000Research 2015, 4:1188 (doi: 10.12688/f1000research.7177.1)
Copyright: © 2015 Bem D et al. This is an open access article distributed under the terms of the Creative Commons Attribution Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Data associated with the article are available under the terms of the Creative Commons Zero "No rights reserved" data waiver (CC0 1.0 Public domain dedication).
Competing interests: No competing interests were disclosed.
First published: 30 Oct 2015, 4:1188 (doi: 10.12688/f1000research.7177.1)
Latest published: 30 Oct 2015, 4:1188 (doi: 10.12688/f1000research.7177.1)


In 2011, the Journal of Personality and Social Psychology published an article by one of us (DJB) entitled “Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect” (Bem, 2011).

The article reported nine experiments that purported to demonstrate that an individual’s cognitive and affective responses can be influenced by randomly selected stimulus events that do not occur until after his or her responses have already been made and recorded, a generalized variant of the phenomenon traditionally denoted by the term precognition.

The controversial nature of these findings prompted the journal’s editors to publish an accompanying editorial justifying their decision to publish the report and expressing their hope and expectation that attempts at replication by other investigators would follow (Judd & Gawronski, 2011).

From the beginning of his research program in 2000, Bem encouraged replications by offering free, comprehensive packages that included detailed instruction manuals for conducting the experiments, computer software for running the experimental sessions, and database programs for collecting and analyzing the data.

As of September 2013, two years after the publication of his article, we were able to retrieve 69 attempted replications of his experiments and 11 other experiments that tested for the anomalous anticipation of future events in alternative ways.

When Bem’s experiments are included, the complete database comprises 90 experiments from 33 different laboratories located in 14 different countries.

Precognition is one of several phenomena in which individuals appear to have access to “nonlocal” information, that is, to information that would not normally be available to them through any currently known physical or biological process.

These phenomena, collectively referred to as psi, include telepathy, access to another person’s thoughts without the mediation of any known channel of sensory communication; clairvoyance (including a variant called remote viewing), the apparent perception of objects or events that do not provide a stimulus to the known senses; and precognition, the anticipation of future events that could not otherwise be anticipated through any known inferential process.

Laboratory-based tests of precognition have been published for nearly a century.
Most of the earlier experiments used forced-choice designs in which participants were explicitly challenged to guess on each trial which one of several potential targets would be randomly selected and displayed in the near future.

Typical targets included ESP card symbols, an array of colored light bulbs, the faces of a die, or visual elements in a computer display. When a participant correctly predicted the actual target-to-be, the trial was scored as a hit, and performance was typically expressed as the percentage of hits over a given number of trials.
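The scoring described above reduces to a simple question: does the observed hit rate exceed the chance rate implied by the number of alternatives? A minimal sketch, using a normal approximation to the binomial and entirely hypothetical numbers (not drawn from any study discussed here):

```python
import math

def hit_rate_z(hits: int, trials: int, p_chance: float) -> float:
    """Normal-approximation z score for a hit count against chance."""
    expected = trials * p_chance
    sd = math.sqrt(trials * p_chance * (1.0 - p_chance))
    return (hits - expected) / sd

# Hypothetical example: 270 hits in 1,000 ESP-card trials,
# where 5 symbols give a chance rate of 1/5.
z = hit_rate_z(270, 1000, 0.20)   # ~5.53, well above chance
```

For small trial counts an exact binomial test would be preferred; the normal approximation suffices for the long runs typical of forced-choice designs.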

A meta-analysis of all forced-choice precognition experiments appearing in English language journals between 1935 and 1977 was published by Honorton & Ferrari (1989).

Their analysis included 309 experiments conducted by 62 different investigators involving more than 50,000 participants.
Honorton and Ferrari reported a small but significant hit rate, Rosenthal effect size z/√n = .02, Stouffer Z = 6.02, p = 1.1 × 10⁻⁹.

They concluded that this overall result was unlikely to be artifactually inflated by the selective reporting of positive results (the so-called file-drawer effect), calculating that there would have to be 46 unreported studies averaging null results for every reported study in the meta-analysis to reduce the overall significance of the database to chance.
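The two quantities in this paragraph rest on standard calculations: Stouffer's method for combining per-study z scores, and Rosenthal's fail-safe N for the file-drawer question. The sketch below uses hypothetical z scores; Honorton and Ferrari's own 46:1 figure may have been derived with a variant of the formula, so this illustrates the logic rather than reproduces their computation.

```python
import math

def stouffer_z(zs):
    """Stouffer's method: combined Z = sum(z_i) / sqrt(k)."""
    return sum(zs) / math.sqrt(len(zs))

def failsafe_n(zs, z_crit=1.645):
    """Rosenthal's fail-safe N: the number of unretrieved null-result
    studies (z = 0) needed to drag the combined Z down to z_crit
    (one-tailed p = .05). Solves sum(z) / sqrt(k + X) = z_crit for X."""
    total = sum(zs)
    return (total ** 2) / (z_crit ** 2) - len(zs)

zs = [1.2, 0.8, 2.1, 1.5, 0.3]   # hypothetical per-study z scores
Z = stouffer_z(zs)               # combined evidence across studies
X = failsafe_n(zs)               # null studies needed to erase it
```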

Just as research in cognitive and social psychology has increasingly pursued the study of affective and cognitive processes that are not accessible to conscious awareness or control (e.g., Ferguson & Zayas, 2009), research in psi has followed the same path, moving from explicit forced-choice guessing tasks to experiments using subliminal stimuli and implicit or physiological responses.

This trend is exemplified by several “presentiment” experiments, pioneered by Radin (1997) and Bierman (Bierman & Radin, 1997) in which physiological indices of participants’ emotional arousal are continuously monitored as they view a series of pictures on a computer screen.

Most of the pictures are emotionally neutral, but on randomly selected trials, a highly arousing erotic or negative image is displayed.
As expected, participants show strong physiological arousal when these images appear, but the important “presentiment” finding is that the arousal is observed to occur a few seconds before the picture actually appears on the screen–even before the computer has randomly selected the picture to be displayed.

The presentiment effect has now been demonstrated using a variety of physiological indices, including electrodermal activity, heart rate, blood volume, pupil dilation, electroencephalographic activity, and fMRI measures of brain activity.

A meta-analysis of 26 reports of presentiment experiments published between 1978 and 2010 yielded an average effect size of 0.21, 95% CI = [0.13, 0.29], combined z = 5.30, p = 5.7 × 10⁻⁸.

The number of unretrieved experiments averaging a null effect that would be required to reduce the effect size to a trivial level was conservatively calculated to be 87 (Mossbridge et al., 2012; see also, Mossbridge et al., 2014).

A critique of this meta-analysis has been published by Schwarzkopf (2014) and the authors have responded to that critique (Mossbridge et al., 2015).

Bem’s experiments can be viewed as direct descendants of the presentiment experiments.
Like them, each of his experiments modified a well-established psychological effect by reversing the usual time-sequence of events so that the participant’s responses were obtained before the putatively causal stimulus events occurred.

The hypothesis in each case was that the time-reversed version of the experiment would produce the same result as the standard non-time-reversed experiment.

Four well-established psychological effects were modified in this way.
(See Bem (2011) for more complete descriptions of the experimental protocols.)

Precognitive approach and avoidance

Two experiments tested time-reversed versions of one of psychology’s oldest and best known phenomena, the Law of Effect (Thorndike, 1898): An organism is more likely to repeat responses that have been positively reinforced in the past than responses that have not been reinforced.

Bem’s time-reversed version of this effect tested whether participants were more likely to make responses that would be reinforced in the near future.

On each trial of the first experiment (“Precognitive Detection of Erotic Stimuli”), the participant selected one of two curtains displayed side-by-side on a computer screen.

After the participant had made a choice, the computer randomly designated one of the curtains to be the reinforced alternative.
If the participant had selected that curtain, it opened to reveal an erotic photograph and the trial was scored as a hit; if the participant had selected the other curtain, a blank gray wall appeared and the trial was scored as a miss.

In a second experiment (“Precognitive Avoidance of Negative Stimuli”) a trial was scored as a hit if the participant selected the alternative that avoided the display of a gruesome or unpleasant photograph.

Retroactive priming

In recent years, priming experiments have become a staple of cognitive social psychology (Klauer & Musch, 2003).
In a typical affective priming experiment, participants are asked to judge as quickly as they can whether a photograph is pleasant or unpleasant and their response time is measured.

Just before the picture appears, a positive or negative word (e.g., beautiful, ugly) is flashed briefly on the screen; this word is called the prime.

Individuals typically respond more quickly when the valences of the prime and the photograph are congruent (both are positive or both are negative) than when they are incongruent.

In the time-reversed version of the procedure, the randomly selected prime appeared after rather than before participants judged the affective valence of the photograph.

Retroactive habituation

When individuals are initially exposed to an emotionally arousing stimulus, they typically have a strong physiological response to it.
Upon repeated exposures the arousal diminishes.

This habituation process is one possible mechanism behind the so-called “mere exposure” effect in which repeated exposures to a stimulus produce increased liking for it (Bornstein, 1989; Zajonc, 1968).

It has been suggested that if a stimulus is initially frightening or unpleasant, repeated exposures will render it less negatively arousing and, hence, it will be better liked after the exposures–the usual mere exposure result–but if the stimulus is initially very positive, the repeated exposures will render it boring or less positively arousing and, hence, it will be less well liked after the exposures (Dijksterhuis & Smith, 2002).

In two time-reversed habituation experiments, pairs of negative photographs matched for equal likeability or pairs of erotic photographs similarly matched were displayed side by side on the screen and the participant was instructed on each trial to indicate which one he or she liked better.

After the preference was recorded, the computer randomly selected one of the two photographs to be the habituation target and flashed it subliminally on the screen several times.

The hypothesis was that participants would prefer the habituation target on trials with negative photographs but would prefer the nontarget on trials with erotic photographs.

The three time-reversed effects described above can be viewed as conceptual replications of the presentiment experiments in that all these experiments assessed affective responses to emotionally arousing stimuli before those stimuli were randomly selected and displayed.

Whereas presentiment experiments assess physiological responses, Bem’s experiments assessed behavioral responses.
Even the photographs used in the two kinds of experiments were drawn primarily from the same source, the International Affective Picture System (IAPS; Lang & Greenwald, 1993), a set of more than 800 digitized photographs that have been rated for valence and arousal.

Retroactive facilitation of recall

A commonplace phenomenon of memory is that practicing or rehearsing a set of verbal items facilitates their subsequent recall.
Two of Bem’s time-reversed experiments tested whether rehearsing a set of words makes them easier to recall even if the rehearsal takes place after the recall test is administered.

Participants were shown 48 common nouns one at a time on the computer screen.
They were then given a (surprise) recall test in which they were asked to type out all the words they could recall, in any order.

After the participant completed the recall test, the computer randomly selected half the words to serve as practice words and had participants rehearse them in a series of practice exercises.

The hypothesis was that this practice would “reach back in time” to facilitate the recall of these words and, thus, participants would recall more of the to-be-practiced words than the control non-practiced words.

This protocol is methodologically and conceptually quite different from the three time-reversed protocols described above.
In those, participants were required to make quick judgments on each trial with no time to reflect on their decisions.

The sequence of events within each trial occurred on a time scale of milliseconds and the putatively causal stimulus appeared immediately after each of the participant’s responses.

In terms of Kahneman’s (2011) dual-mode theory of cognition–as described in his book, Thinking, Fast and Slow–these experiments required cognitive processing characteristic of System 1, “Fast Thinking” (also see Evans, 2008, and Evans & Stanovich, 2013).

In contrast, the retroactive facilitation-of-recall protocol confronted participants with a single extended cognitive task that occurred on a time scale of minutes: Presenting the initial list of words took 2-1/2 minutes; the recall test took up to 5 minutes; and the post-test practice exercises took approximately 7 minutes.

This allowed participants time to implement deliberate conscious strategies involving working memory, active rehearsal, and verbal categorization, all cognitive processes characteristic of System 2, “Slow Thinking.”

Across all his experiments, Bem reported a mean effect size (d) of 0.22, with a Stouffer Z of 6.66, p = 2.68 × 10⁻¹¹ (Bem et al., 2011).
Bem’s experiments have been extensively debated and critiqued.

The first published critique appeared in the same issue of the journal as Bem’s original article (Wagenmakers et al., 2011).
These authors argued that a Bayesian analysis of Bem’s results did not support his psi-positive conclusions and recommended that all research psychologists abandon frequentist analyses in favor of Bayesian ones.

Bem et al. (2011)
replied to Wagenmakers et al., criticizing the particular Bayesian analysis they had used and demonstrating that a more reasonable Bayesian analysis yields the same conclusions as Bem’s original frequentist analysis.

In a similar critique, Rouder & Morey (2011) also advocated a Bayesian approach, criticizing the analyses of both Bem and Wagenmakers et al.

Rather than continuing to debate this issue in the context of Bem’s original experiments, we here analyze the current database with both a frequentist analysis and the specific Bayesian analysis recommended by Rouder and Morey for meta-analyses.

Recently, Judd et al. (2012) have argued that psychologists should start treating stimuli statistically as a random factor the same way we currently treat participants.

As they acknowledge, this would constitute a major change in practice for psychologists.
To illustrate, they re-analyzed several published datasets from psychological journals, including one of Bem’s retroactive priming results, showing that when stimuli are treated as a random factor the results are statistically weaker than reported in the original articles.

They conclude that “As our simulations make clear, in many commonly used designs in social cognitive research, a likely consequence of only treating participants as a random effect is a large inflation of Type I statistical errors, well above the nominal .05 rate (p. 12).”

Francis (2012) and Schimmack (2012) take a different tack.
Instead of arguing that Bem’s results are weaker than he reports, they argue that, on the contrary, his results are actually too good to be true.

That is, given the statistical power of Bem’s effects, it is unlikely that eight of his nine experiments would have achieved statistical significance, implying that there is a hidden file-drawer of experiments or failed statistical analyses that Bem failed to report.

In his own discussion of potential file-drawer issues, Bem (2011) reported that they arose most acutely in his two earliest experiments (on retroactive habituation) because they required extensive pre-experiment pilot testing to select and match pairs of photographs and to adjust the number and timing of the repeated subliminal stimulus exposures.

Once these were determined, however, the protocol was “frozen” and the formal experiments begun.
Results from the first experiment were used to rematch several of the photographs used for its subsequent replication.

In turn, these two initial experiments provided data relevant for setting the experimental procedures and parameters used in all the subsequent experiments.

As Bem explicitly stated in his article, he omitted one exploratory experiment conducted after he had completed the original habituation experiment and its successful replication.

It used supraliminal rather than subliminal exposures.
He noted that this fundamentally alters the participant’s phenomenology of the experiment, transforming the task into an explicit ESP challenge and thereby undermining the very rationale for using an implicit response measure of psi in the first place.

Even that experiment was not left languishing in a file drawer, however, because he had reported and critiqued it at a meeting of the Parapsychological Association (Bem, 2003).

With regard to unreported data analyses, Bem analyzed and reported each experiment with two to four different analyses, demonstrating in each case that the results and conclusions were robust across different kinds of analyses, different indices of psi performance, and different definitions of outliers.

Following standard practice, however, he did not treat stimuli as a random factor in his analyses.
In his own critique, Francis (2012) remarks that “perhaps the most striking characteristic of [Bem’s] study is that [it meets] the current standards of experimental psychology.

The implication is that it is the standards and practices of the field that are not operating properly (p. 155).” Similarly, LeBel & Peters (2011) remark that “...[i]t is precisely because Bem’s report is of objectively high quality that it is diagnostic of potential problems with MRP [Modal Research Practice].... Bem has put empirical psychologists in a difficult position: forced to consider either revising beliefs about the fundamental nature of time and causality or revising beliefs about the soundness of MRP (p. 371).”

LeBel and Peters conclude by recommending that we should put a stronger emphasis on replication.
We agree.

Rather than continuing to debate Bem’s original experiments, we seek in our meta-analysis to answer the one question that most decisively trumps such disputes: Can independent investigators replicate the original experiments?

Method


The methodology and reporting of results comply with the Meta-Analysis Reporting Standards (APA, 2008).
Additional materials needed to replicate our results independently can be found at http://figshare.com/articles/Meta-analysis_Implicit_Behavioral_Anticipation/903716.

Retrieval and coding of experiments

As noted above, the archival summary publication of Bem’s experiments appeared in 2011, but he had begun his first experiments as early as 2000, and began reporting results soon thereafter at departmental colloquia and annual meetings of the Parapsychological Association (Bem, 2003; Bem, 2005; Bem, 2008).

Simultaneously he made materials available to anyone expressing an interest in trying to replicate the experiments.
As a result, attempted replications of the experiments began to appear as early as 2001 (as reported in Moulton & Kosslyn, 2011).

No presentiment experiments are included in our database because, as noted above, a meta-analysis of those has already been published (Mossbridge et al., 2012).

We have, however, included 19 attempted replications of Bem’s Retroactive-Facilitation-of-Recall experiment that had been previously meta-analyzed by Galak et al. (2012) because 8 additional replication studies of that protocol have been reported since then. (This was the only protocol included in Galak et al.’s meta-analysis.)

Although the individual-difference variable of “stimulus seeking” emerged as a significant correlate of psi performance in several of Bem’s original experiments, we have not analyzed that variable in the present meta-analysis because too few of the replications reported on it–especially those that modified Bem’s original protocol.

Co-authors PT, TR, and MD conducted a search for all potentially relevant replications that became available between the year 2000 and September of 2013.

These included unpublished reports as well as peer-reviewed, published articles in mainstream psychological journals; specialized journals; proceedings from conferences; and relevant studies found in Google Scholar, PubMed and PsycInfo.

The same set of keywords–Bem, feeling the future, precognition–was used for all searches, and no MeSH terms or Boolean operators were used.

Using email and academia.edu, they also contacted known psi researchers and mainstream researchers who had expressed an interest in replicating Bem’s experiments.

Of the ninety-three experiments retrieved, two were eliminated because they were severely underpowered: the first had only one participant; the second had nine (Snodgrass, 2011).

A third experiment, reporting positive results, rested on several post-hoc analyses, and so we deemed it too exploratory to include in the meta-analysis (Garton, 2010).

The final database thus comprises 90 experiments.

Co-authors PT and TR independently coded and categorized each study with respect to the following variables:
a) type of effect(s) tested;
b) number of participants enrolled in the study;
c) descriptive or inferential statistics used to calculate measures of effect size;
d) whether the replication had been conducted before or after the January, 2011 (Online First) publication of Bem’s original experiments;
e) whether or not the experiment had been peer-reviewed; and
f) type of replication.

For this last variable, each experiment was categorized into one of three categories: an exact replication of one of Bem’s experiments (31 experiments), a modified replication (38 experiments), or an independently designed experiment that assessed the ability to anticipate randomly-selected future events in some alternative way (11 experiments).

To qualify as an exact replication, the experiment had to use Bem’s software without any procedural modifications other than translating on-screen instructions and stimulus words into a language other than English if needed.

The eleven experiments that had not been designed to replicate any of Bem’s experiments included five retroactive-priming experiments and six retroactive-practice experiments.

Percentages of agreement for each of the coding variables ranged from a minimum of 90% for the statistical data to 100% for the classification into one of the three categories of experiments.

Discrepancies in coding were resolved by discussion between PT and TR.

Frequentist analysis

All the main inferential statistics, weighted effect-size point estimations with corresponding 95% Confidence Intervals, and combined z values were calculated using the Comprehensive Meta-Analysis software v.2 by Borenstein et al. (2005).

Effect sizes (Hedges’ g) and their standard errors were computed from t test values and sample sizes. (Hedges’ g is similar to the more familiar d [Cohen, 1988], but pools studies using n - 1 for each sample instead of n. This provides a better estimate for smaller sample sizes.)

When t test values were not available, we used the effect sizes reported by the authors or estimated them from the descriptive statistics. When more than one dependent variable was measured, a single effect size was calculated by averaging the effect sizes obtained by the different t values.
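The t-to-g conversion described above can be sketched for a one-sample (within-subject) design: d = t/√n, followed by Hedges' small-sample correction J = 1 − 3/(4·df − 1). The meta-analysis itself used the Comprehensive Meta-Analysis software, so treat this as an illustration of the standard formula rather than the exact computation performed:

```python
import math

def hedges_g_from_t(t: float, n: int) -> float:
    """Hedges' g for a one-sample design: Cohen's d = t / sqrt(n),
    shrunk by the small-sample correction J = 1 - 3/(4*df - 1)."""
    d = t / math.sqrt(n)
    df = n - 1
    J = 1.0 - 3.0 / (4.0 * df - 1.0)
    return J * d

# Hypothetical study: t = 2.0 with n = 100 participants.
g = hedges_g_from_t(t=2.0, n=100)   # d = 0.20, lightly shrunk toward 0
```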

Heterogeneity within each set of experiments using a particular protocol (e.g., the set of retroactive priming experiments) was assessed using I2 (Huedo-Medina et al., 2006).

It estimates the percent of variance across studies due to differences among the true effect sizes.
If all the studies are methodologically identical and the subject samples are very similar, then I2 will be small (< 25%) and a fixed-effect model analysis is justified; otherwise a random-effects model is used (Borenstein et al., 2009).

A fixed-effect model assumes that all the studies using a particular protocol have the same true effect size and that the observed variance of effect sizes across the studies is due entirely to random error within the studies.

The random-effects model allows for the possibility that different studies included in the analysis may have different true effect sizes and that the observed variation reflects both within-study and between-study sampling error.
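The model-selection rule just described can be made concrete: compute Cochran's Q from inverse-variance weights, derive I² from it, and compare against the 25% threshold. The effect sizes and variances below are hypothetical, chosen only to illustrate the calculation:

```python
def heterogeneity(effects, variances):
    """Cochran's Q and the I^2 index for study effect sizes
    with known sampling variances."""
    w = [1.0 / v for v in variances]                       # inverse-variance weights
    mean = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    Q = sum(wi * (e - mean) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    I2 = max(0.0, (Q - df) / Q) * 100.0 if Q > 0 else 0.0  # percent
    return Q, I2

effects = [0.10, 0.05, 0.12, 0.08]     # hypothetical Hedges' g values
variances = [0.01, 0.01, 0.01, 0.01]   # hypothetical sampling variances
Q, I2 = heterogeneity(effects, variances)
# I2 is below 25% here, so a fixed-effect model would be justified
```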

Bayesian analysis

A model comparison Bayesian analysis of an experiment pits a specified experimental hypothesis (H1) against the null hypothesis (H0) by calculating the odds that H1 rather than H0 is true–p(H1)/p(H0)–or the reverse.

The analysis assumes that each person comes to the data with a subjective prior value for these odds and then adjusts them on the basis of the data to arrive at his or her posterior odds.

A Bayesian analysis can be summarized by a number called the Bayes Factor (BF), which expresses the posterior odds independent of any particular individual’s prior odds.

For example, a BF of 3 indicates that the observed data favor the experimental hypothesis over the null hypothesis by a ratio of 3:1.
The posterior odds for a particular individual can then be calculated by multiplying his or her prior odds by BF.

For example, a mildly psi-skeptical individual might initially assign complementary probabilities of .2 and .8 to H1 and H0, respectively, yielding prior odds of .25.

If BF = 3 then the Bayesian formula indicates that this individual’s posterior odds should be .75.
If BF were to exceed 4, then the posterior odds p(H1)/p(H0) would exceed 1, implying that this individual now favors the experimental hypothesis over the null.
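The odds-updating arithmetic in this example is simple enough to state in a few lines of code; the sketch below reproduces the mildly skeptical reader's numbers from the text:

```python
def posterior_odds(p_h1: float, bf: float) -> float:
    """Posterior odds p(H1)/p(H0): prior odds multiplied by the Bayes Factor."""
    prior_odds = p_h1 / (1.0 - p_h1)
    return prior_odds * bf

# The mildly psi-skeptical reader: p(H1) = .2, p(H0) = .8, prior odds = .25
odds_bf3 = posterior_odds(0.2, 3.0)   # .25 * 3 = .75, still favoring H0
odds_bf5 = posterior_odds(0.2, 5.0)   # .25 * 5 = 1.25, now favoring H1
```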

Jeffreys (1998) has suggested the following verbal labels for interpreting BF levels of p(H1)/p(H0):

  • BF = 1 — 3: Worth no more than a bare mention
  • BF = 3 — 10: Substantial evidence for H1
  • BF = 10 — 30: Strong evidence for H1
  • BF = 30 — 100: Very Strong evidence for H1
  • BF > 100: Decisive evidence for H1

To perform a Bayesian analysis, one must also specify a prior probability distribution of effect sizes across a range for both H0 and H1. Specifying the effect size for H0 is simple because it is a single value of 0, but specifying H1 requires specifying a probability distribution across a range of what the effect size might be if H1 were in fact true.

This specification can strongly impact the subsequent estimates of BF and, in fact, was the major disputed issue in the debate over Bem’s original experiments (Bem et al., 2011; Rouder & Morey, 2011; Wagenmakers et al., 2011).

For purposes of meta-analysis, Rouder & Morey (2011) argue that one should use the Jeffrey, Zellner and Siow (JZS) prior probability distribution (see, also, Bayarri & Garcia-Donato, 2007).

That distribution is designed to minimize assumptions about the range of effect sizes and, in this sense, constitutes what is known as an “objective” prior (Rouder et al., 2009).

Moreover, the resulting BF is independent of the measurement scale of the dependent variable, is always finite for finite data, and is consistent in the sense that as sample size increases, BF grows to infinity if the null is false and shrinks to zero if it is true–a consistency that does not obtain for p values.

Researchers can also incorporate their expectations for different experimental contexts by tuning the scale of the prior on effect size (designated as r).

Smaller values of r (e.g., 0.1) are appropriate when small effects sizes are expected; larger values of r (e.g., 1.0) are appropriate when large effect sizes are expected.

As r increases, BF provides increasing support for the null.
For these several reasons, we have adopted the JZS prior probability distribution for our Bayesian analysis.

For the estimation of Bayes Factors, we used the meta.ttest function of the BayesFactor package (Morey & Rouder, 2014).
In the expectation that the effect size will be small, we set r = 0.1.

To estimate the overall effect size and τ², a measure of between-studies variance, we employed the DiMaggio (2013) script, which uses the R2jags package to run the “BUGS” program (Bayesian Analysis Using Gibbs Sampling).

This provides a Monte Carlo Markov Chain simulation approach to parameter estimation using a normally distributed prior with a mean of 0.1 and a wide variance of 10⁵.

The program chooses samples using either Gibbs or Metropolis-Hastings algorithms.
Because this is a simulation-based approach, we repeated many draws or iterations and evaluated whether the chain of sample values converged to a stable distribution, which was assumed to be the posterior distribution in which we are interested.

We ran two Markov Chain Monte Carlo chains of 20,000 iterations each, starting with different and dispersed initial values for the model.
We based our results on the final 20,000 iterations and assessed whether the chain of values had converged to a stable posterior distribution by monitoring and assessing a graph of the chain and by calculating the Brooks, Gelman, and Rubin statistic, a tool within the CODA package of R programs for this purpose.

The results are presented as mean values of the posterior distributions and their 95% credible intervals (CrI).
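The convergence check above relied on CODA's Brooks–Gelman–Rubin diagnostic in R. As a minimal illustration of the underlying idea, here is the classic univariate Gelman–Rubin statistic (without the later refinements) in Python; the simulated chains are ours:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for equal-length MCMC chains.
    Values near 1 indicate the chains have mixed into the same distribution."""
    chains = np.asarray(chains, dtype=float)
    n = chains.shape[1]                          # draws per chain
    B = n * chains.mean(axis=1).var(ddof=1)      # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()        # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n            # pooled variance estimate
    return np.sqrt(var_hat / W)

# two chains sampling the same distribution should give R-hat close to 1
rng = np.random.default_rng(0)
rhat = gelman_rubin([rng.normal(0, 1, 20000), rng.normal(0, 1, 20000)])
```

Chains stuck in different regions inflate the between-chain term B and push R-hat well above 1, which is the graphical and numerical signal of non-convergence the text describes.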


Results and discussion

The complete database comprises 90 experiments conducted between 2001 and 2013.
These originated in 33 different laboratories located in 14 countries and involved 12,406 participants.

The full database with corresponding effect sizes, standard errors, and category assignments is presented in Table S1 along with a forest plot of the individual effect sizes and their 95% confidence intervals.

Dataset 1. Table S1.
Experiments in the meta-analysis, N, task type, effect size, standard error, peer-review and replication classifications (Tressoldi et al., 2015).


The first question addressed by the meta-analysis is whether the database provides overall evidence for the anomalous anticipation of random future events.

As shown in the first and second rows of Table 1, the answer is yes: The overall effect size (Hedges’ g) is 0.09, combined z = 6.40, p = 1.2 × 10⁻¹⁰.

The Bayesian BF value is 5.1 × 10⁹, greatly exceeding the criterion value of 100 that is considered to constitute “decisive evidence” for the experimental hypothesis (Jeffreys, 1998).

Moreover, the BF value is robust across a wide range of the scaling factor r, ranging from a high value of 5.1 × 10⁹ when we set r = 0.1 to a low value of 2.0 × 10⁹ when r = 1.0.


[Table 1 values not reproduced here; the database comprises 90 experiments overall and 69 independent replications.]


Table 1. Meta-analytic results for all experiments and for independent replications of Bem’s experiments.


The second question is whether independent investigators can successfully replicate Bem’s original experiments.
As shown in the third and fourth rows of Table 1, the answer is again yes: When Bem’s experiments are excluded, the combined effect size for attempted replications by other investigators is 0.06, z = 4.16, p = 1.1 × 10⁻⁵, and the BF value is 3,853, which again greatly exceeds the criterion value of 100 for “decisive evidence.”

The fifth and sixth rows of Table 1 show that the mean effect sizes of exact and modified replications are each independently significant and not significantly different from each other (Mean diff = 0.025; 95% CI [-0.04, 0.09]; z = 0.87, ns).

The seventh and eighth rows show that the mean effect sizes of replications conducted before and after the January, 2011 (online) publication of Bem’s article are each independently significant and not significantly different from each other (Mean diff = 0.042; 95% CI [0.02, 0.10]; z = 0.37, ns).

And finally, the bottom two rows of Table 1 show that the mean effect sizes of peer reviewed and not-peer-reviewed replications are each independently significant and identical to each other.
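The combined effect sizes and z values in Table 1 are standard inverse-variance aggregates of the study-level statistics. A minimal fixed-effect sketch in Python (the paper also reports random-effects quantities; the three studies below are invented for illustration):

```python
import numpy as np

def combine_fixed(g, se):
    """Inverse-variance fixed-effect summary of study effect sizes (Hedges' g)
    with a combined z test. g, se: per-study effects and standard errors."""
    g, se = np.asarray(g, float), np.asarray(se, float)
    w = 1 / se**2                            # precision weights
    g_bar = np.sum(w * g) / np.sum(w)        # weighted mean effect size
    se_bar = np.sqrt(1 / np.sum(w))          # standard error of the summary
    return g_bar, g_bar / se_bar             # summary g and combined z

# three hypothetical studies
g_bar, z = combine_fixed([0.20, 0.05, 0.10], [0.10, 0.05, 0.08])
```

Because each study is weighted by the inverse of its squared standard error, very large studies dominate the summary, which is why the high-N online sessions discussed below can pull a protocol's mean effect size down so sharply.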

Table 2 displays the meta-analysis of the complete database as a function of experiment type and divided post-hoc into fast-thinking and slow-thinking protocols.


[Table 2 values not reproduced here; protocol rows include the precognitive detection of erotic stimuli (14 experiments), the precognitive avoidance of negative stimuli (8), and retroactive protocols (15).]

Table 2. Meta-analytic results as a function of protocol and experiment type.


As shown in Table 2, fast-thinking protocols fared better than slow-thinking protocols: Every fast-thinking protocol individually achieved a statistically significant effect, with an overall effect size of 0.11 and a combined z greater than 7 sigma.

In contrast, slow-thinking experiments achieved an overall effect size of only 0.03, failing even to achieve a conventional level of statistical significance (p = .16).

One possible reason for the less successful performance of the slow-thinking experiments is that 12 of the 27 attempted replications of Bem’s retroactive facilitation of recall experiment were modified replications.

The 15 exact replications of that protocol yielded an overall effect size of 0.08, but the 12 modified replications yielded a null effect size (-0.00).

For example, Galak et al. (2012) used their own software to conduct seven of their 11 modified replications in which 87% of the sessions (2,845 of 3,289 sessions) were conducted online, thereby bypassing the controlled conditions of the laboratory.

These unsupervised sessions produced an overall effect size of -0.02.
Because experiments in a meta-analysis are weighted by sample size, the huge N of these online experiments substantially lowers the mean effect size of the replications: When the online experiments are removed, the mean ES for this protocol rises to 0.06 [0.00, 0.12]; z = 1.95, p = .05.

Nevertheless, we still believe that it is the fast/slow variable itself that is an important determinant of the lower success rate of the slow-thinking experiments.

In particular, we suspect that fast-thinking protocols are more likely to produce evidence for psi because they prevent conscious cognitive strategies from interfering with the automatic, unconscious, and implicit nature of psi functioning (Carpenter, 2012).

This parallels the finding in conventional psychology that mere exposure effects are most likely to occur when the exposures are subliminal or incidental because the participant is not aware of them and, hence, is not prompted to counter their attitude-inducing effects (Bornstein, 1989).

Finally, Table 2 reveals that the clear winner of our meta-analytic sweepstakes is the precognitive detection of erotic stimuli (row 1), the time-reversed version of psychology’s time-honored Law of Effect.

The fourteen experiments using that protocol–conducted in laboratories in four different countries–achieve a larger effect size (0.14), a larger combined z (4.22), and a more statistically significant result (p = 1.2 × 10⁻⁵) than any other protocol in the Table.

This protocol was also the most reliable: If we exclude the three experiments that were not designed to be replications of Bem’s original protocol, 10 of the 11 replication attempts were successful, achieving effect sizes ranging from 0.12 to 0.52.

The one exception was a replication failure conducted by Wagenmakers et al. (2012), which yielded a non-significant effect in the unpredicted direction, ES = -0.02, t(99) = -0.22, ns.

These investigators wrote their own version of the software and used a set of erotic photographs that were much less sexually explicit than those used in Bem’s experiment and its exact replications.

The results of our meta-analysis do not stand alone.
As we noted in the introduction, Bem’s experiments can be viewed as conceptual replications of the presentiment experiments in which participants display physiological arousal to erotic and negative photographs a few seconds before the photographs are selected and displayed (Mossbridge et al., 2012).

The parallel is particularly close for the two protocols testing the precognitive detection of erotic stimuli and the precognitive avoidance of negative stimuli (Protocols 1 and 2 in Table 2).

Together those two protocols achieve a combined effect size of 0.11, z = 4.74, p = 1.07 × 10⁻⁶.

File-drawer effects: Selection bias and P-hacking

Because successful studies are more likely to be published than unsuccessful studies–the file-drawer effect–conclusions that are drawn from meta-analyses of the known studies can be misleading.

To help mitigate this problem, the Parapsychological Association adopted the policy in 1976 of explicitly encouraging the submission and publication of psi experiments regardless of their statistical outcomes.

Similarly, we put as much effort as we could into locating unpublished attempts to replicate Bem’s experiments by contacting both psi and mainstream researchers who had requested his replication packages or had otherwise expressed an interest in replicating the experiments.

As we saw in Table 1, this all appears to have had the desired effect on the current database: Peer-reviewed experiments yielded the same results as experiments that were not peer-reviewed.

There are also several statistical techniques for assessing the extent to which the absence of unknown studies might be biasing a meta-analysis.

We consider nine of them here.

Fail-safe calculations

One of the earliest of these techniques was the calculation of a “Fail-Safe N,” the number of unknown studies averaging null results that would nullify the overall significance level of the database if they were to be included in the meta-analysis (Rosenthal, 1979).

The argument was that if this number were implausibly large, it would give us greater confidence in the conclusions based on the known studies.

The Rosenthal Fail-Safe N, however, has been criticized as insufficiently conservative because it does not take into account the likely possibility that unpublished or unretrieved studies might well have a mean non-zero effect in the unpredicted direction.

Thus the estimate of the Fail-Safe N is likely to be too high.
(For the record, the Rosenthal Fail-Safe N for our database is greater than 1,000.)
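Rosenthal’s calculation follows directly from the Stouffer combination rule: adding N studies with a mean z of zero leaves the summed z intact but inflates the denominator √(k + N). A sketch in Python with invented z scores (not the database’s):

```python
def rosenthal_failsafe_n(z_scores, z_crit=1.645):
    """Number of unretrieved studies averaging z = 0 needed to drag the
    Stouffer combined z, sum(z) / sqrt(k + N), down to z_crit (one-tailed .05).
    Solving sum(z) / sqrt(k + N) = z_crit gives N = (sum(z) / z_crit)^2 - k."""
    s, k = sum(z_scores), len(z_scores)
    return max(0.0, (s / z_crit) ** 2 - k)

# ten hypothetical studies, each with z = 2.0
n_failsafe = rosenthal_failsafe_n([2.0] * 10)
```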

An alternative approach for estimating a Fail-Safe N focuses on the effect size rather than the p value (Orwin, 1983).
The investigator first specifies two numbers: The first is an average effect size for missing studies which, if added to the database, would bring the combined effect size under a specified “trivial” threshold–the second number that must be specified.

If we set the mean effect size of missing studies at .001 and define the threshold for a “trivial” effect size to be .01, then the Orwin Fail-Safe N for our database is 544 studies.

That is, there would have to be 544 studies missing from our database with a mean effect size of .001 to reduce its overall effect size to .01.
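Orwin’s calculation is a one-line formula. Note that plugging in the rounded values quoted in the text (k = 90, mean effect 0.09) yields a larger N than the reported 544, which was evidently computed from the unrounded weighted summary effect; the sketch below therefore illustrates the formula itself, not the published figure:

```python
def orwin_failsafe_n(k, es_observed, es_trivial, es_missing):
    """Orwin's fail-safe N: how many missing studies with mean effect size
    es_missing are needed to pull k studies averaging es_observed down to
    the 'trivial' threshold es_trivial."""
    return k * (es_observed - es_trivial) / (es_trivial - es_missing)

# rounded inputs from the text: 90 studies, mean g = 0.09,
# trivial threshold 0.01, missing-study mean effect 0.001
n_orwin = orwin_failsafe_n(90, 0.09, 0.01, 0.001)
```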

Correlations between study size and effect size

Another set of indices for assessing selection bias are various correlational measures for assessing the relationship between the size of a study and its effect size.

The most direct is the Begg and Mazumdar’s rank correlation test, which simply calculates the rank correlation (Kendall’s tau) between the variances or standard errors of the studies and their standardized effect sizes (Rothstein et al., 2005).

If this correlation is significantly negative, that is, if small underpowered studies have larger effect sizes than larger studies, then there is reason to suspect the presence of publication or retrieval bias in the database.

For our database, Kendall’s tau is actually slightly positive: τ = +0.10; z = 1.40, implying that our database is not seriously compromised by selection bias.
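The Begg–Mazumdar statistic is simply a Kendall rank correlation between study imprecision and effect size. The Python sketch below simulates 90 unbiased studies and shows the correlation hovering near zero; the simulation parameters are ours:

```python
import numpy as np
from scipy.stats import kendalltau

# simulate 90 studies with a true effect of 0.09 and no selective publication
rng = np.random.default_rng(1)
se = 2 / np.sqrt(rng.integers(30, 300, size=90))   # bigger study -> smaller SE
g = rng.normal(0.09, se)                           # observed effect sizes
tau, p_value = kendalltau(se, g)                   # expect tau near 0
```

Under selective publication, small studies would survive only when their effects are large, distorting this rank correlation away from zero.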

More recent publications (e.g., Jin et al., 2015; Rücker et al., 2011; Schwarzer et al., 2010; Stanley & Doucouliagos, 2014; Stanley & Doucouliagos, 2015) have urged the adoption of more complex indices of selection bias:

  • 1. The Copas method (Copas, 2013; Schwarzer et al., 2010) is based on two models, the standard random effects model and the selection model, which takes study size into account.
  • 2. The Limit meta-analysis (Schwarzer et al., 2014) is an extended random effects model that takes account of possible small-study effects by allowing the treatment effect to depend on the standard error.
  • 3. The Precision Effect Test (PET, Stanley, 2008; Stanley & Doucouliagos, 2014) is a variant of the classical Egger regression test (Sterne & Egger, 2005), which tests the relationship between study size and effect size.
  • 4. The Weighted Least Squares analysis (Stanley & Doucouliagos, 2015) provides estimates that are comparable to random effects analyses when there is no publication bias and are identical to fixed-effect analyses when there is no heterogeneity, providing superior estimates compared with both conventional fixed and random effects analyses.
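Of these, the PET regression (item 3) is the simplest to sketch: a weighted least-squares fit of effect size on standard error, whose intercept estimates the effect adjusted for small-study effects. A minimal Python version with exact synthetic data (library implementations of PET and the Egger test add refinements):

```python
import numpy as np

def pet(g, se):
    """Sketch of a PET-style weighted regression: g_i = b0 + b1 * se_i with
    weights 1/se_i^2. The intercept b0 estimates the effect size adjusted for
    small-study effects; the slope b1 is the Egger-type asymmetry term."""
    g, se = np.asarray(g, float), np.asarray(se, float)
    X = np.column_stack([np.ones_like(se), se])   # design: intercept + SE
    WX = X / se[:, None] ** 2                     # apply 1/se^2 weights
    return np.linalg.solve(X.T @ WX, WX.T @ g)    # weighted normal equations

# exact linear data recovers the coefficients
se = np.linspace(0.05, 0.3, 20)
b0, b1 = pet(0.05 + 0.5 * se, se)
```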

Table 3 summarizes the results of applying these four additional tests to our database.



Table 3. Copas method, Limit meta-analysis, Precision Effect Test and Weighted least squares results for the overall and the “fast-thinking” database.


As Table 3 shows, three of the four tests yield significant effect-size estimates for our database after correction for potential selection bias; the PET analysis is the only test in which the 95% confidence interval includes the zero effect size.

As Sterne & Egger (2005) themselves caution, however, this procedure cannot assign a causal mechanism, such as selection bias, to the correlation between study size and effect size, and they urge the use of the more noncommittal term “small-study effect.”

Trim and fill

Currently the most common method for estimating the number of studies with low effect sizes that might be missing from a database is Duval & Tweedie’s (2000) Trim-and-Fill procedure.

It is based on a graphic display of the correlation between sample size and effect size called the “funnel” plot, which plots a measure of sample size on the vertical axis as a function of effect sizes on the horizontal axis. The funnel plot for our database is displayed in Figure 1, which uses the reciprocal of the standard error as the measure of sample size.
Figure 1. Funnel Plot of the observed studies (white circles) and the imputed missing studies (black circles) under a random-effects model.



If a meta-analysis has captured all the relevant experiments, we would expect the funnel plot to be symmetric: Experiments should be dispersed equally on both sides of the mean effect size.

If the funnel plot is asymmetric, with a relatively high number of small experiments falling to the right of the mean effect size and relatively few falling to the left, it signals the possibility that there may be experiments with small or null effects that actually exist but are missing from the database under consideration.

Using an iterative procedure, the trim-and-fill method begins by trimming experiments from the extreme right end of the plot (i.e., the smallest studies with the largest effect sizes) and then calculating a new mean effect size.

It then reinserts the trimmed studies on the right and inserts their imputed “missing” counterparts symmetrically to the left of the new mean effect size.

This produces a revised, more symmetric funnel plot centered around the newly revised mean effect size.
This process continues until the funnel plot becomes symmetric.

At that point, the plot is centered around a final corrected estimate of the effect size and displays the number of imputed “missing” experiments to the left of the unbiased mean effect size.
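Estimating the number of missing studies k0 is the subtle part of trim-and-fill (Duval & Tweedie give iterative rank-based estimators for it), but the “fill” step itself is simple mirroring, sketched here with invented values:

```python
import numpy as np

def fill_mirrored(y, se, mu, k0):
    """The 'fill' step of trim-and-fill: reflect the k0 largest effect sizes
    across the corrected mean mu to impute their missing left-side twins.
    (Estimating mu and k0 is done beforehand by the iterative trimming.)"""
    right = np.argsort(y)[-k0:]                       # the k0 largest effects
    return (np.concatenate([y, 2 * mu - y[right]]),   # mirrored effects
            np.concatenate([se, se[right]]))          # same standard errors

# one extreme right-side study (0.5) mirrored around a corrected mean of 0.1
y_new, se_new = fill_mirrored(np.array([0.0, 0.1, 0.2, 0.5]),
                              np.array([0.1, 0.1, 0.1, 0.1]), mu=0.1, k0=1)
```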

Figure 1 displays the funnel plot for our complete database after it has been modified by the trim-and-fill procedure.
The unfilled diamond under the horizontal axis marks the original observed effect size (0.09, see Table 1) and the black diamond marks the corrected estimate of the effect size: 0.07 [0.04, 0.10].

The unfilled circles identify the 90 actual experiments in the meta-analysis; the black circles identify the imputed missing experiments.
As Figure 1 shows, there are only eight potentially missing studies.

As noted above, the Orwin Fail-Safe estimate of how many missing experiments with low effect sizes would be required to nullify the overall effect size of the database is 544.

P-curve analysis

All the analyses discussed above presume that selection bias is driven by effect-size considerations, but Simonsohn et al. (2014a; 2014b) have argued that it is actually more likely to be driven by the p = .05 significance level.

They have also demonstrated empirically that the trim-and-fill procedure is inadequate for estimating the true effect size present in the database (Simonsohn et al., 2014b).

In its place, they and other authors (van Assen et al., 2015) have recently proposed a very different approach called p-curve analysis.
P-curve is the distribution of significant (p < .05) results among the experiments in a meta-analysis. “It capitalizes on the fact that the distribution of significant p values... is a function of the true underlying effect.

Researchers armed only with sample sizes and test results of the published findings can correct for publication bias” (Simonsohn et al., 2014b, p. 666).

In addition to assessing selection bias, p-curve analysis can also assess the presence of “p-hacking,” questionable practices of selective reporting that illegitimately enable an investigator to claim results that meet the coveted p < .05 threshold (Simonsohn, et al., 2014a; Simonsohn, et al., 2014b).

In our database, 17 (19%) of the 90 studies reported results that were statistically significant at the .05 level.
The solid blue line in Figure 2 displays the p-curve distribution of those studies.
Figure 2. Distribution of the significant p values across experiments in the meta-analysis.



The dotted horizontal red line (“Null of zero effect”) is the distribution expected if there is no effect in the data.
In that case, 5% of all p values will be below .05, 4% will be below .04, 3% will be below .03, 2% will be below .02, and 1% will be below .01.

Thus there will be as many p values between .04 and .05 as between .00 and .01, and the shape of the p-curve is a uniform, straight horizontal line with 20% of the significant values within each of the 5 intervals on the horizontal axis.
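This uniform null distribution is easy to verify by simulation. The sketch below runs 20,000 two-group t tests with no true effect and bins the significant p values; all parameters are illustrative:

```python
import numpy as np
from scipy import stats

# Under a true null, p values are uniform on [0, 1], so the significant ones
# (p < .05) are uniform on [0, .05]: about 20% in each .01-wide bin.
rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, (20000, 30))    # group 1, 20,000 simulated studies
b = rng.normal(0.0, 1.0, (20000, 30))    # group 2, identical population
p = stats.ttest_ind(a, b, axis=1).pvalue
sig = p[p < 0.05]
counts, _ = np.histogram(sig, bins=[0, .01, .02, .03, .04, .05])
share = counts / sig.size                # flat p-curve: each share near 0.20
```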

If a genuine non-zero effect exists, however, then p-curve’s expected distribution will be right-skewed:

  • We expect to observe more low significant p values (p < .01) than high significant p values (.04 < p < .05) (Simonsohn et al., 2014b, pp. 666–667)... A set of significant findings contains evidential value when we can rule out selective reporting as the sole explanation of those findings. Only right-skewed p-curves... are diagnostic of evidential value. P-curves that are not right-skewed suggest that the set of findings lacks evidential value, and curves that are left-skewed suggest the presence of intense p-hacking (Simonsohn et al., 2014a, p. 535).

Table 4 presents the skewness analysis of our database (Simonsohn et al., 2014b).



Table 4. Skewness tests on the distribution of significant p values across experiments in the database.


As shown in the first row of Table 4, the right-skew of the p-curve is equivocally significant (p = .048, p = .056).
When this is the case, Simonsohn et al. (2014a) propose applying a second test to see if the studies lack evidential value because they are flatter than an underpowered (33%) p-curve–depicted by the dashed green line.

As shown in the second row of the Table, the observed p-curve is not flatter than the null at 33% power, so we cannot conclude that the evidential value of the database is inadequate.

And finally, the bottom row shows that the p-curve is clearly not left-skewed, implying that the database has not been strongly p-hacked.
Because the right-skew of the p-curve is only equivocally significant, we turned to a more direct algorithm called p-uniform (van Assen et al., 2015), which directly tests the degree to which the observed curve differs from the uniform “no-effect” distribution. (If there is a substantial amount of heterogeneity in the meta-analysis, this method should be used as a sensitivity analysis.)

The p-uniform test confirms that there is, in fact, a significant effect in our database (p = .005) and that there is no evidence for selection bias (p = .857).

In sum, eight of the nine statistical tests we have applied to our database support the conclusion that its overall statistical significance has not been compromised by either selection bias or by p-hacking.

P-curve and the true effect size

One of the counterintuitive derivations from p-curve analysis–confirmed by extensive simulations–is that when the distribution of significant p values is right-skewed, the inclusion of studies with nonsignificant p levels (p > .05) in a meta-analysis actually underestimates the true effect size in the database (Simonsohn et al., 2014b).

Based on the Simonsohn et al. p-curve analysis, the estimate of the true effect size for our database is 0.20, virtually identical to the mean effect size of Bem’s (2011) original experiments (0.22) and the mean effect size of the presentiment experiments (0.21) (Mossbridge et al., 2012).

A comparable calculation cannot be legitimately derived from the p-uniform algorithm because it assumes that the population effect size is fixed rather than heterogeneous (van Assen et al., 2015, p. 4).

As shown in Table 1, our population effect size is heterogeneous.

The complementary merits of exact and modified replications

Our meta-analysis reveals that both exact and modified replications of Bem’s experiments achieve significant and comparable success rates (Table 1).

This is reassuring because the two kinds of replication have different advantages and disadvantages.
When a replication succeeds, it logically implies that every step in the replication “worked.”

When a replication fails, it logically implies that at least one or more of the steps in the replication failed–including the possibility that the experimental hypothesis is false–but we do not know which step(s) failed.

As a consequence, even when exact replications fail, they are still more informative than modified replications because they dramatically limit the number of potential variables that might have caused the failure.

There is, of course, no such thing as a truly exact replication.
For example, the experimenter’s attitudes and expectations remain uncontrolled even in a procedurally exact replication, and there are now more than 345 experiments demonstrating that experimenter attitudes and expectations can produce belief-confirming results, even in simple maze experiments with rats as subjects (Rosenthal & Rubin, 1978).

Exact replications also serve to guard against some of the questionable research practices that can produce false-positive results, such as changing the protocol or experimental parameters as the experiment progresses, selectively reporting comparisons and covariates without correcting for the number examined, and selectively presenting statistical analyses that yielded significant results while omitting other analyses that did not (Simmons et al., 2011).

By defining an exact replication in our meta-analysis as one that used Bem’s experimental instructions, software, and stimuli, we ensure that the experimental parameters and data analyses are all specified ahead of time.

In other words, an exact replication is a publicly available, pre-specified protocol that provides many of the same safeguards against false-positive results that are provided by the preregistration of planned experiments.

Despite the merits of exact replications, however, they cannot uncover artifacts in the original protocol that may produce false positive results, whereas suitably modified replications can do exactly that by showing that an experiment fails when a suspected artifact is controlled for.

Modified replications can also assess the generality of an experimental effect by changing some of the parameters and observing whether or not the original results are replicated. For example, the one failed replication of the erotic stimulus detection experiment (Wagenmakers et al., 2012) had substituted mild, non-explicit erotic photographs for the more explicit photographs used in Bem’s original experiment and its exact replications.

As we noted in the introduction, Judd et al. (2012) have recently suggested that psychologists should begin to treat stimuli statistically as a random factor the same way we currently treat participants.

This would constitute a way of testing the generalizability of results in psychological experiments.
This would, however, also represent a major change in current practice in psychology, and none of the experiments in our database treated stimuli as a random factor.

Nevertheless, the stimuli used in Bem’s experimental protocols do achieve some generality.
In those involving erotic photographs, for example, different stimulus sets are used for men and women and all participants are given the choice of viewing opposite-sex or same-sex erotica.

Experiments using words as stimuli (e.g., retroactive priming experiments) were successfully replicated in languages other than English.
The fact that exact and modified replications of Bem’s experiments produced comparable, statistically significant results thus implies generality across stimuli, protocols, subject samples, and national cultures.

Moreover, the different protocols can themselves be viewed as conceptual replications of the overarching hypothesis that individuals are capable of anomalously anticipating random future events.

General discussion

As Bem noted in his original 2011 article, psi is a controversial subject, and most academic psychologists do not believe that psi phenomena are likely to exist.

A survey of 1,188 college professors in the United States revealed that psychologists were much more skeptical about psi than respondents in the humanities, the social sciences, or the physical sciences, including physics (Wagner & Monnet, 1979).

Although this survey is now several years old, many psi researchers have observed that psychologists continue to be the most psi-skeptical subgroup of academics.

As Bem further noted, there are, in fact, justifiable reasons for the greater skepticism of psychologists.
Although our colleagues in other disciplines would probably agree with the oft-quoted dictum that “extraordinary claims require extraordinary evidence,” we psychologists are more likely to be familiar with the methodological and statistical requirements for sustaining such claims and aware of previous claims that failed either to meet those requirements or to survive the test of successful replication.

Even for ordinary claims, our conventional frequentist statistical criteria are conservative: The p = .05 threshold is a constant reminder that it is worse to assert that an effect exists when it does not (the Type I error) than to assert that an effect does not exist when it does (the Type II error). (For a refreshing challenge to this view, see Fiedler et al., 2012).

Second, research in cognitive and social psychology over the past 40 years has sensitized us psychologists to the errors and biases that plague intuitive attempts to draw valid inferences from the data of everyday experience (e.g. Gilovich, 1991; Kahneman, 2011).

This leads us to give virtually no weight to anecdotal or journalistic reports of psi, the main source cited in the survey by our colleagues in other disciplines as evidence for their more favorable beliefs about psi.

One sobering statistic from the survey was that 34% of psychologists in the sample asserted psi to be impossible, more than twice the percentage of all other respondents (16%).

Critics of Bayesian analyses frequently point out the reductio ad absurdum case of the extreme skeptic who declares psi or any other testable phenomenon to be impossible.

The Bayesian formula implies that for such a person, no finite amount of data can raise the posterior probability in favor of the experimental hypothesis above 0, thereby conferring illusory legitimacy on the most anti-scientific stance.

More realistically, all an extreme skeptic needs to do is to set his or her prior odds in favor of the psi alternative sufficiently low so as to rule out the probative force of any data that could reasonably be proffered.

This raises the following question: On purely statistical grounds, are the results of our meta-analysis strong enough to raise the posterior odds of such a skeptic to the point at which the psi hypothesis is actually favored over the null, however slightly?

An opportunity to calculate an approximate answer to this question emerges from the Bayesian critique of Bem’s original experiments made by Wagenmakers et al. (2011).

Although they did not explicitly claim psi to be impossible, they came very close by setting their prior odds at 10²⁰ against the psi hypothesis.

As shown in Table 1, the Bayes Factor for our database is approximately 10⁹ in favor of the psi hypothesis, which implies that our meta-analysis should lower their posterior odds against the psi hypothesis to 10¹¹.

In other words, our “decisive evidence” falls 11 orders of magnitude short of convincing Wagenmakers et al. to reject the null. (See a related analysis of their prior odds in Bem et al., 2011.)
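The arithmetic here is just Bayes’ rule in odds form, posterior odds = prior odds × Bayes factor:

```python
# Wagenmakers et al.'s prior odds of 10^20 against psi, updated by the
# meta-analytic Bayes factor of roughly 10^9 in favor of psi (Table 1)
prior_odds = 10.0 ** -20       # odds in favor of psi under their prior
bayes_factor = 10.0 ** 9
posterior_odds = prior_odds * bayes_factor   # ~10^-11: still 10^11 against
```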

Clearly psi-proponents have their work cut out for them.
Beyond this Bayesian argument, a more general reason that many psychologists may find a meta-analysis insufficiently persuasive is that the methodology of meta-analysis is itself currently under intense re-examination, with new procedural safeguards (e.g. preregistration of all included studies) and statistical procedures (e.g., treating stimuli as a random factor, p-curve analysis) appearing almost monthly in the professional literature.

Even though our meta-analysis was conceived and initiated prior to many of these developments, we were able to make use of many of them after the fact (e.g., p-curve analysis), but not others (e.g., preregistration, stimuli treated as a random factor).

We thus hope that other researchers will be motivated to follow up with additional experiments and analyses to confirm, disconfirm, or clarify the nature of our findings.

Perhaps the most reasonable and frequently cited argument for being skeptical about psi is that there is no explanatory theory or proposed mechanism for psi phenomena that is compatible with current physical and biological principles.

Indeed, this limitation is implied by the very description of psi as “anomalous,” and it provides an arguably legitimate rationale for imposing the requirement that the evidence for psi be “extraordinary.”

We would argue, however, that this is still not a legitimate rationale for rejecting proffered evidence a priori.
Historically, the discovery and scientific exploration of most phenomena have preceded explanatory theories, often by decades (e.g., the analgesic effect of aspirin; the anti-depressant effect of electroconvulsive therapy) or even centuries (e.g., electricity and magnetism, explored in ancient Greece as early as 600 BC, remained without theoretical explanation until the Nineteenth Century).

The incompatibility of psi with our current conceptual model of physical reality may say less about psi than about the conceptual model of physical reality that most non-physicists, including psychologists, still take for granted–but which physicists no longer do.

As is widely known, the conceptual model of physical reality changed dramatically for physicists during the 20th Century, when quantum theory predicted and experiments confirmed the existence of several phenomena that are themselves incompatible with our everyday Newtonian conception of physical reality.

Some psi researchers see sufficiently compelling parallels between certain quantum phenomena (e.g., quantum entanglement) and characteristics of psi to warrant considering them as potential mechanisms for psi phenomena (e.g., Broderick, 2007; Radin, 2006).

Moreover, specific mechanisms have been proposed that seek to explain psi effects with theories more testable and falsifiable than simple metaphor (e.g., Bierman, 2010; Maier & Buechner, 2015; Walach et al., 2014).

A recent collection of these theories is presented in May & Marwaha (2015).
Although very few physicists are likely to be interested in pursuing explanations for psi, the American Association for the Advancement of Science (AAAS) has now sponsored two conferences of physicists and psi researchers specifically organized to discuss the extent to which precognition and retrocausation can be reconciled with current or modified versions of quantum theory.

The proceedings have been published by the American Institute of Physics (Sheehan, 2006; Sheehan, 2011).
A central starting point for the discussions has been the consensus that the fundamental laws of both classical and quantum physics are time symmetric:


  • They formally and equally admit time-forward and time-reversed solutions.... Thus, though we began simply desiring to predict the future from the present, we find that the best models do not require–in fact, do not respect–this asymmetry.... [Accordingly,] it seems untenable to assert that time-reverse causation (retrocausation) cannot occur, even though it temporarily runs counter to the macroscopic arrow of time (Sheehan, 2006, p. vii).

Ironically, even if quantum-based theories of psi eventually do mature from metaphor to genuinely predictive models, they are still not likely to provide intuitively satisfying descriptive mechanisms for psi because quantum theory itself fails to provide such mechanisms for physical reality.

Physicists have learned to live with that conundrum in several ways.
Perhaps the most common is simply to ignore it and attend only to the mathematics and empirical findings of the theory–an approach derisively called the “Shut Up and Calculate” school of quantum physics (Kaiser, 2012).

As physicist and Nobel Laureate Richard Feynman (1994) advised, “Do not keep saying to yourself... ‘but how can it be like that?’ because you will get...into a blind alley from which nobody has yet escaped. Nobody knows how it can be like that (p. 123).”

Meanwhile the data increasingly compel the conclusion that it really is like that.
Perhaps in the future, we will be able to make the same statement about psi.

Data availability

F1000Research: Dataset 1. Table S1, 10.5256/f1000research.7177.d105136

 
How Many Universes Are There?
The World May Finally Know




Ranga-Ram Chary, project manager for the Planck Data Center, has discovered a “glow” using the Cosmic Microwave Background (CMB).
These spots of light are an estimated 4,500 times brighter than was expected.

The data from this discovery was determined via a map using information from the European Space Agency (ESA) space telescope.
Chary surmised that this glow was another universe colliding with our own, which, if confirmed, would support the hypothesis that our universe is located in a region containing other universes.

This discovery lends support to the multiple-universe theories that have so far been difficult to prove.
Cosmic inflation, the theory of expansion since the Big Bang, was the catalyst for the multiple-universe hypothesis.

But before the scientific community gets too excited, Chary said that there is a 30% chance this light “is nothing out of the ordinary”.


[video=youtube;Ywn2Lz5zmYg]https://www.youtube.com/watch?feature=player_embedded&amp;v=Ywn2Lz5zmYg[/video]
 
[video=youtube;Bf5TgVRGND4]https://www.youtube.com/watch?feature=player_embedded&amp;v=Bf5TgVRGND4[/video]
[MENTION=2578]Kgal[/MENTION]​
 
@Kgal Or anyone else who has a clue??


So last night as I was driving home from dropping off my Son at his Mom’s…I’m going down the freeway and it’s slightly raining/misting…anyhow…
I notice that the lights on this truck up ahead are all triangles, but it’s a bit of a ways up and the lights are blurred slightly from the mist/night glare/flare that you get.
Then I look at another truck…and another car…all triangle light flares…as I get closer, they were all round…this is the first time in my life I think I have ever noticed this?
Has this happened to you? It was like driving behind hundreds of red triangles.

Usually (and now I’m not sure of myself) the lights at night would flare or glare and still be round…it was bizarre.
What do you think?

Later that night I was walking outside and it wasn’t happening.

WTF?
 
Johns Hopkins Psilocybin Project:
Psychedelics Research History & the Mystical Experience


[video=youtube;samrfbevoWI]https://www.youtube.com/watch?feature=player_detailpage&amp;v=samrfbevoWI[/video]

Roland R. Griffiths, Ph.D., is Professor in the Departments of Psychiatry and Neurosciences at the Johns Hopkins University School of Medicine.
His principal research focus in both clinical and preclinical laboratories has been on the behavioral and subjective effects of mood-altering drugs.

His research has been largely supported by grants from the National Institutes of Health, and he is the author of over 300 journal articles and book chapters.
He has been a consultant to the National Institutes of Health, and to numerous pharmaceutical companies in the development of new psychotropic drugs.

He is also currently a member of the Expert Advisory Panel on Drug Dependence for the World Health Organization.
He has an interest in meditation and is the lead investigator of the psilocybin research initiative at Johns Hopkins, which includes studies of psilocybin occasioned mystical experience in healthy volunteers and cancer patients, and a pilot study of psilocybin-facilitated smoking cessation.

Roland Griffiths. Alicia Danforth. Charles Grob. Matthew Johnson. Albert Garcia-Romeu. Tony Bossis. Stephen Ross. David Nichols.
 

I am hearing from others about sightings of the platonic solids appearing in their view. They make up the stuff we're made of and everything else.
It would seem you are starting to see the 4th dimension appearing through the 3rd overlay.

Keep in mind we are a hologram. We are projecting our world "out there" from within our selves. Each of us creates our own world and at the same time agree to co-create THE world following game rules with others around us. Now...you might be thinking that you're not agreeing with others....yet to verify that idea we must look to our beliefs. For example: we co-create the idea with others that Trees have leaves and bark and tend to put roots in the ground while reaching for the sky.

Herein lies our power to create a different world with others. We can choose to project the world we wish to see while attracting others to co-create it with us. Still...we are agreeing to create within the rules of the game...or the software...or the program running this hologram world. So if the Elemental Tree wishes to self express in this world as having roots and leaves...then if we wish to co-create a world with Tree in it - we agree to see Tree with leaves and roots. Otherwise - Tree will not be in our world.

You are seeing through the veil of illusion of the 3rd frequency level, revealing the fabric of the Universe upon which all the frequencies exist.

EDIT: I forgot to mention I see translucent purple when I'm driving...especially when I'm driving over water or the trees grow close to the edge of the roads. Amazing to see this stuff - isn't it? :)
 
[video=youtube;Bf5TgVRGND4]https://www.youtube.com/watch?feature=player_embedded&amp;v=Bf5TgVRGND4[/video]
@Kgal

Bahahahahaha.... I laughed sooo hard!

May even reduce catanoia? :pound:
 

Another person here suggested it could be visual hallucinations brought on by the aura of a certain type of ocular migraine.
Which is an interesting suggestion, but it was not a free standing thing. And I did not get, nor have I ever had an ocular migraine - but it was a smart suggestion!
The description just doesn’t make sense compared to what I saw.

Every light past a certain distance flared into a perfect triangle instead of the expected starburst glare/flare pattern you usually see.
I thought - is this always how it is and I just never noticed until now? But the more I thought about it, I couldn’t remember ever seeing that before, and if it were normal, other people would see it too and I could look up the phenomenon - for which I have had zero success.

I have been mentally, emotionally, spiritually, etc, etc, preparing for this entheogenic experience I plan to embark on next week.
I have been doing meditations that I don’t normally do lately, including a good portion on astral projection.
And I have been asking for guidance…to “show” me, to help me know/understand, to kick me out of the rut of this skipping record that I feel I’ve been trapped on going around in circles for far too long now.
It’s not my eyes…I actually have better than 20/20 vision…so I’ve ruled that out.
I could have a brain tumor, but I would expect more symptoms than what I saw and it would be more consistent.
I wasn’t on drugs.

So, perhaps I did see past something for a little while, I must admit that I have been trying to see auras on people and to notice when there is a spirit around.
This probably sounds cuckoo to someone on the outside reading this, but I have no other explanation that helps to explain what I saw except for the one you have offered.

Gives me something to ponder for a while.
Thank you for your detailed response as always!

Thanks [MENTION=2578]Kgal[/MENTION] [MENTION=251]Wyote[/MENTION]
 

Really? Me too!!! I've been relaxing my eyes to see if I can see auras. Lately - as I've been watching the crows wing their way silently across the sky I have started seeing a darker contrast around them. It's as if they're flying in a bubble that morphs and moves with them.

Are you going to try for the Lion's Gateway Nov 11th?

That's a very powerful day. B had died the day before and on the night of the 11th he materialized in physical form into my room. I attribute it to the energies from the Lion's Gateway.

I too am gearing up for a meditation experience soon as I have been told to prepare. Yesterday and today I am on lime water fasts with only one meal a day. No beer either. :( ( i do love the home brew)
They told me the Hops interferes....

I wish you a wonderful astonishing energy expanding experience Skarekrow!!! :hug:
 
Just wanted to say thank you to everyone!
This thread now has over 150,000 views!
So awesome!
 
NASA confirms that the ‘impossible’ EmDrive thruster really works, after new tests


Engineer Roger Shawyer’s controversial EmDrive thruster jets back into relevancy this week, as a team of researchers at NASA’s Eagleworks Laboratories recently completed yet another round of testing on the seemingly impossible tech.

Though no official peer-reviewed lab paper has been published yet, and NASA institutes strict press release restrictions on the Eagleworks lab these days, engineer Paul March took to the NASA Spaceflight forum to explain the group’s findings. In essence, by utilizing an improved experimental procedure, the team managed to mitigate some of the errors from prior tests – yet still found signals of unexplained thrust.

Isaac Newton should be sweating.

Flying in the face of traditional laws of physics, the EmDrive makes use of a magnetron and microwaves to create a propellant-less propulsion system.
By pushing microwaves into a closed, truncated cone and back towards the small end of said cone, the drive creates the momentum and force necessary to propel a craft forward.

Because the system is a reaction-less drive, it goes against humankind’s fundamental comprehension of physics, hence its controversial nature.
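For scale, it helps to remember that even an ideal photon rocket, which does obey conservation of momentum by exhausting light itself, produces a thrust of only F = P/c. A back-of-envelope sketch (standard physics; the 1 kW power figure is just illustrative):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def photon_thrust(power_watts: float, reflected: bool = False) -> float:
    """Ideal thrust from emitting (or perfectly reflecting) a light beam.

    A photon carries momentum p = E/c, so a beam of power P exerts
    F = P/c on the emitter; perfectly reflecting an incoming beam
    doubles the momentum transfer, giving F = 2P/c.
    """
    return (2.0 if reflected else 1.0) * power_watts / C

# 1 kW of radiated light pushes back with only ~3.3 micronewtons,
# which is why any measurable propellant-less thrust at modest
# power draws such intense scrutiny.
thrust = photon_thrust(1_000.0)  # ~3.34e-6 N
```

Any cavity that produced more thrust per watt than this photon limit, without expelling anything, would be operating outside the known momentum budget of electromagnetism.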



On the NASA spaceflight forums, March revealed as much as he could about the advancements that have been made with EmDrive and its relative technology. After apologizing for not having the ability to share pictures or the supporting data from a peer-reviewed lab paper, he starts by explaining (as straightforward as rocket science can get) that the Eagleworks lab successfully built and installed a 2nd generation magnetic damper which helps reduce stray magnetic fields in a vacuum chamber.

The addition reduced magnetic fields by an order of magnitude inside the chamber, and also decreased Lorentz force interactions.

However, despite ruling out Lorentz forces almost entirely, March still reported a contamination caused by thermal expansion.

Unfortunately, this reported contamination proves even worse in a vacuum (i.e. outer space) due in large part to its inherently high level of insulation.
To combat this, March acknowledged the team is now developing an advanced analytics tool to assist in the separation of the contamination, as well as an integrated test which aims to alleviate thermally induced errors altogether.

While these advancements and additions are no doubt a boon for continued research of the EmDrive, the fact that the machine still produced what March calls “anomalous thrust signals” is by far the test’s single biggest discovery.

The reason why this thrust exists still confounds even the brightest rocket scientists in the world, but the recurring phenomenon of direction-based momentum does make the EmDrive appear less a combination of errors and more like a legitimate answer to interstellar travel.

Eagleworks Laboratories’ recent successful testing is the latest in a long line of scientific research allowing EmDrive to slowly shed its “ridiculous” title.
Though Shawyer unveiled the device in 2003, it wasn’t until 2009 that a group of Chinese scientists confirmed what he initially asserted to be true – that is, that filling a closed, conical container with resonating microwaves does, in fact, generate a modest amount of thrust towards the wide end of the container.

Although extremely cautious about the test, the team in China found the theoretical basis to be correct and that net thrust is plausible.


The thing is, the theory was initially met with polite skepticism, especially in the West.
Though the published work showed the results to be consistent with theoretical calculations, the test was conducted at such low power that the results were widely deemed to be useless.

Luckily, this didn’t stop the good folks over at NASA from giving the EmDrive a spin, resulting in an official study that was conducted in August of 2013.
After deliberating on the findings, the space agency officially published its judgment in June of the following year before presenting it at the 50th Joint Propulsion Conference in Cleveland, Ohio.

NASA concluded the RF resonant cavity thruster design does produce thrust “not attributable to any classical electromagnetic phenomenon.”
In other words, NASA confirmed Shawyer’s initial prognosis (much like the team of Chinese scientists), but couldn’t come up with a reasonable explanation as to why the thing works outside of, “it just does.”

Moving forward, NASA’s short term objective is to conduct a diverse array of tests on a quantum vacuum plasma thruster (a similar propellantless engine flatter in shape than the EmDrive), in hopes of gaining independent verification and validation of the thruster.

Initial IV&V testing will be supported by the Glenn Research Center in Cleveland, Ohio, making use of a low-thrust torsion pendulum housed in a stainless steel vacuum chamber, an instrument capable of detecting forces at the single-digit micronewton level.

After that, a similar round of low-thrust torsion pendulum tests will then be conducted at NASA’s Jet Propulsion Laboratory before comparing the findings.
It’s also reported that the Johns Hopkins University Applied Physics Laboratory has contacted the lab about conducting Cavendish Balance-type testing of the IV&V shipset.

Ideally, this test would allow Johns Hopkins to measure the amount of gravitational force exerted in propellantless engines.

At this time, it’s unknown when Eagleworks Laboratories intends to officially publish its peer-reviewed paper, but even so, just hearing of the EmDrive’s advancements from one of its top engineers bodes well for the future of this fascinating tech.

 
The Lady ParaNorma

[video=youtube;9UFjqwRj76o]https://www.youtube.com/watch?feature=player_detailpage&amp;v=9UFjqwRj76o[/video]

A short film written and directed by artist Vincent Marcone aka "My Pet Skeleton" with narration by Peter Murphy...
 
A very nice mindfulness meditation by Alan Watts.
Enjoy!


[video=youtube;jPpUNAFHgxM]https://www.youtube.com/watch?feature=player_detailpage&amp;v=jPpUNAFHgxM[/video]

Alan Watts - Guided Meditation (Awakening The Mind)


 
Physicists Say Consciousness May Be A State Of Matter:
The Non-Physical Is Indeed Real




“Looking for consciousness in the brain is like looking in the radio for the announcer.” — Nassim Haramein, director of research for the Resonance Project

It’s been more than one hundred years since Max Planck, the theoretical physicist who originated quantum theory, which won him the Nobel Prize in Physics, said that he regards “consciousness as fundamental,” that he regards “matter as a derivative from consciousness,” and that “everything we talk about, everything that we regard as existing, postulates consciousness.”

He is basically saying that the immaterial ‘substance’ of consciousness is directly intertwined with what we perceive to be our physical, material world; that consciousness is required for matter to be; that matter comes after consciousness…and he’s not the only physicist to believe that.

“It was not possible to formulate the laws of quantum mechanics in a fully consistent way without reference to consciousness.” —
Eugene Wigner, theoretical physicist and mathematician, who received a share of the Nobel Prize in Physics in 1963

Scientists have been urging the mainstream scientific community, which today is littered with scientific fraud and industry influence as well as invention secrecy, to open up to a broader view regarding the true nature of our reality.

“The day science begins to study non-physical phenomena, it will make more progress in one decade than in all of the previous centuries of its existence.” —
Nikola Tesla

Not long ago, a group of internationally recognized scientists came together to stress this fact and how it’s overlooked by the mainstream scientific community.
It’s “post-materialist” science, an area of study dealing with the “non-physical” realm, and it’s challenging the modern scientific worldview of materialism that has dominated mainstream science.

The idea that matter is not the only reality is finally starting to gain some merit.
The summary of this report presented at the International Summit On Post-Materialist Science can be found HERE.

“The modern scientific worldview is predominantly predicated on assumptions that are closely associated with classical physics. Materialism–the idea that matter is the only reality–is one of these assumptions. A related assumption is reductionism, the notion that complex things can be understood by reducing them to the interactions of their parts, or to simpler or more fundamental things such as tiny material particles.” —
Manifesto for a Post-Materialist Science

Max Tegmark, a theoretical physicist at the Massachusetts Institute of Technology in Cambridge, is one of the latest to attempt to explain why he believes consciousness is a state of matter.

He believes that consciousness arises out of a certain set of mathematical conditions, and that there are varying degrees of consciousness — just as certain conditions are required to create varying states of vapor, water, and ice.

As PBS emphasized, “understanding how consciousness functions as a separate state of matter could help us come to a more thorough understanding of why we perceive the world the way we do.” (source)

Tegmark describes this as “perceptronium,” which he defines as the most general substance that feels subjectively self-aware; this substance should not only be able to store information, but do so in a way that forms a unified, indivisible whole.

“The problem is why we perceive the universe as the semi-classical, three dimensional world that is so familiar. When we look at a glass of iced water, we perceive the liquid and the solid ice cubes as independent things even though they are intimately linked as part of the same system. How does this happen? Out of all possible outcomes, why do we perceive this solution?” —
Tegmark (source)

This new way of thinking about consciousness has been spreading throughout the physics community at an exponential rate within the past few years.
Considering consciousness as an actual state of matter would be huge, considering the fact that modern day definitions of matter require a substance to have mass, which consciousness does not have.

What it does have, however, is some sort of effect on our physical material world, and the extent of this effect and how far it goes is the next step for science.
The quantum double slit experiment is a very popular experiment used to examine how consciousness and our physical material world are intertwined.

It is a great example that documents how factors associated with consciousness and our physical material world are connected in some way.

One potential revelation of this experiment is that “the observer creates the reality.”

A paper published in the peer-reviewed journal Physics Essays by Dean Radin, PhD, explains how this experiment has been used multiple times to explore the role of consciousness in shaping the nature of physical reality.

The study found that factors associated with consciousness “significantly” correlated in predicted ways with perturbations in the double slit interference pattern. (source)

“Observation not only disturbs what has to be measured, they produce it. We compel the electron to assume a definite position. We ourselves produce the results of the measurement.” (source)
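The “interference pattern” referred to above is the familiar two-slit fringe pattern. As a minimal, idealized sketch (narrow slits, small-angle approximation; the wavelength and geometry below are just example values):

```python
import math

def two_slit_intensity(x: float, wavelength: float,
                       slit_sep: float, screen_dist: float) -> float:
    """Ideal two-slit interference intensity, normalized to 1 at center.

    x: position on the screen relative to the central maximum.
    Uses the small-angle approximation sin(theta) ~ x / screen_dist.
    """
    path_diff = slit_sep * x / screen_dist      # d * sin(theta)
    phase = math.pi * path_diff / wavelength    # half the phase difference
    return math.cos(phase) ** 2

# 532 nm green laser, slits 0.1 mm apart, screen 1 m away:
lam, d, L = 532e-9, 1e-4, 1.0
fringe = lam * L / d                              # fringe spacing: ~5.3 mm
peak = two_slit_intensity(0.0, lam, d, L)         # central bright fringe
dark = two_slit_intensity(fringe / 2, lam, d, L)  # first dark fringe
```

Experiments of the kind Radin describes look for small perturbations of exactly these fringes; the sketch shows only the uncontroversial textbook pattern itself, not any consciousness-related effect.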

For a physicist to brush off the idea that understanding consciousness is necessary for advancing our understanding of the nature of reality is not as common as it used to be; but, despite the empirical success of quantum theory, even the suggestion that it could be true as a description of our reality is greeted with harsh cynicism, incomprehension, and even anger.

“A fundamental conclusion of the new physics also acknowledges that the observer creates the reality. As observers, we are personally involved with the creation of our own reality. Physicists are being forced to admit that the universe is a “mental” construction. Pioneering physicist Sir James Jeans wrote: “The stream of knowledge is heading toward a non-mechanical reality; the universe begins to look more like a great thought than like a great machine. Mind no longer appears to be an accidental intruder into the realm of matter, we ought rather hail it as the creator and governor of the realm of matter. Get over it, and accept the inarguable conclusion. The universe is immaterial-mental and spiritual.” —
R.C. Henry, Professor of Physics and Astronomy at Johns Hopkins University, “The Mental Universe,” Nature 436:29 (2005) (source)

Thanks for reading.
 
The other picture of my Dad in this thread broke so…
In honor of him on Veterans Day yesterday…I miss him every day!

Hi Scarecrow!
Just wanted to share with you the first guided meditation that I was able to let go completely with. I've tried the Alan Watts one above - enjoyed it, but there is always something that holds me back with these guided meditations. It seems that I am better on my own.

https://www.youtube.com/watch?v=N8QiIWS0pRU
 


Thanks for the meditation!
I actually have a couple of meditations that I recorded myself…in my own voice; I would recommend it.
It gives it a new dynamic that is interesting.

Alan Watts can go off on tangents sometimes that seem to interrupt the flow of things…I understand what you mean.
 
I think my problem with guided meditations is that I don't want to be led into something, or perhaps I am afraid to give someone power over me. Meditation music is a better choice for me, but
I am willing to try (and curious) if you are willing to share the ones that you recorded.

My mom told me that once, when she was young, a 'magician' chose her along with a few others from the audience and tried to hypnotize them. He was able to hypnotize everyone but her, even after trying a couple of times. I don't know how much that is related to meditation - maybe just that feeling of letting go of control? I am too chicken to try hypnosis, even though I would like to try past life regression. Do you have any experience with it?
 

Well, you certainly should try and find whatever works best for you.
But I must say that one of the points of meditating is to “let go”…to just let things “be” without putting a label or expectation on it.
When you find yourself consumed by outside thoughts, acknowledge them and dismiss them…refocus yourself with a mantra or by breathing exercises.
It’s almost equivalent to having “faith”…not an easy task for many, and even harder when your mind thinks in a certain way.
I haven’t personally had a past life regression but my SO has and enjoyed the experience immensely!
But yes, for something like that, it doesn’t require hypnosis (though most practitioners do it this way because it’s the easiest way to get there), and you should always do such things IMO with someone that you fully trust (i.e. - not a stage magician haha) and feel comfortable with.
Remember, meditation is something you practice…it is always difficult at first, especially when our heads have been filled with the terrible tragedies happening around the world…our media and TV constantly show you that you should - be afraid, you aren’t good enough, materialism brings happiness…etc.
There really is no wrong way to start meditating if you are sincere and make the effort - you will find it becomes easier with time, but you should try to make it a daily activity, if only for 10 minutes at a time.

Here is one to try yourself…it’s just a mental exercise to try.

First breathe...
Imagine you are a droplet of water…the water is who you see yourself as, this includes your problems and difficulties, your fears…they are all concentrated in this droplet of water. Which may seem small to us, but under a microscope it can be another universe entirely…size is and isn't relevant here.
What happens if you drop that droplet into a glass of water? Imagine your problems, fears, worries, etc. diluting into that glass.
Remember to breathe…
That glass of water may still be pretty strong depending on what your droplet contains….so then pour it into the bath….it’s more dilute - and your problems are less significant floating around in here - so then imagine pouring that water into a pond….into a lake…into a flowing river that runs to the Oceans…make it as big as you need it to be until those problems are so diluted that they no longer affect you…if one in particular keeps popping into your head, then be specific with that problem and use the same visualization to deal with it.
Expand your consciousness to the corresponding size of the body of water you have reached until you cannot anymore.
Don’t be surprised the first few times if this is very difficult to maintain concentration over.
Don’t be discouraged…like every practice, you get better, it gets easier, you gain insight and wisdom and learn to gain some power over the fear thoughts that dominate us if we let them.

I hope that helps!
 
I, psychonaut.

I will tell you firstly that I’m not advocating that anyone break any laws in their states or countries, but I will however say that it is YOUR consciousness not theirs!

First steps…taking the leap.

There is no way around it, I mean you can pussyfoot around it and just take a little bit and see what happens and get disappointed and underwhelmed…but why bother?
I took enough…made into a ginger tea, to give me at least a level 3 experience -

Level 3
Very obvious visuals, everything looking curved and/or warped patterns and kaleidoscopes seen on walls, faces etc. Some mild hallucinations such as rivers flowing in wood grained or "mother of pearl" surfaces. Closed eye hallucinations become 3 dimensional. There is some confusion of the senses (i.e. seeing sounds as colors, etcetera). Time distortions and "moments of eternity".

I felt the Remeron I had done a quick taper off of was possibly still in my system and, though I’m not disappointed with the outcome, it could have blocked some of the effects - but WTF do I know? It was my first time…I took the leap and popped my cherry, if I can be so crass.
First of all…I prepared myself for a loooong time for this, in every aspect that I could think to prepare.
Going out and making the person to person connections that facilitated this experience has introduced me to a whole group of incredibly caring and sincere individuals.
So preparing myself yesterday I tried to avoid too many negative things (why I don’t do that all the time was the first realization), though it’s practically forced upon us in every way possible.
Like the little pamphlet that someone inched through the door jamb over the course of 5 mins. - this guy is standing on my doorstep, clearly having seen the No Solicitation sign and deciding not to ring my doorbell (which is wise for many coming to preach at me), still…it eventually gets through enough to fall inside my house. What does it say? “End of the World!!!” “You can only be saved from Hell fires if you convert!”
Well fuck you very much for shoving your fear and hate into my sanctuary…clearly the Lionel Richie No Solicitation sign needs to be more specific than “Hello?…It’s NOT me you’re looking for! - No Soliciting.” I guess it should also include - no shoving shit through my doorjamb!
My point being…we’re practically water-boarded with fear and anxiety on a daily basis and we ignore it for the most part…and sometimes when you ignore something long enough and you don’t face that fear being fed to you - it becomes true. It has given you a fear of that thing you weren’t afraid of before. Remember how fearless you could be as a kid…that’s not ignorance of things and lack of knowledge…that was you.
That was the real you that wasn’t afraid of this or that…and now I want to replace the word “fearless” with the word “stillness”.
The stillness of the innocent heart.
Isn’t that what we all crave in our daily lives? Some form of stillness…for our mind to not be lost in the past, recreated over and over, sometimes exaggerated upon…for it not to be lost in the infinity of possible futures, which are most definitely exaggerated!
For our mind to be fully present and to maintain that presence is no easy task for anyone who has undertaken the challenge…or for some the necessity.
That is what I got the most from this experience - it forced me to be present.
For some, I would imagine that would be uncomfortable, but it wasn’t…it was okay for everything to be how it is…and even this morning it still is.

So the time came that I had set for myself and I made the worst tasting ginger tea ever…but it was also very strangely familiar both in taste and smell. (Why no, I have never eaten a gym sock, why do you ask?)
Beforehand I lit some of my favorite incense and also smudged my house with sage and dragon’s blood.
I said a prayer of gratitude to the spirits of the tea and chugged it down along with the bits….no stomach discomfort.
After 15 mins. I get slightly light-headed.
After 40 mins. I decide to go lie down, put my eye-shades on (last thing I looked at was my alarm clock reading 5:55), and listen to the music/nature sounds/etc/etc. playlist that I put together earlier.
The geometric patterns you faintly notice when you close your eyes are very noticeable with eyes open or closed.
The shapes quickly change from the familiar geometric designs to more fluid shapes…like paisley but a bit more squiggly, with some colors…purple, green, yellow, white.
The music and sounds feel as if they are echoing around in my head…stereo is given a whole new dimension ;-)
It’s probably an hour and a half in now…same as before, though more intensified.
I cannot close my eyes though I feel as if I am falling asleep, I see slow-building pulses of white light that begin in my lower periphery and fade out as they reach up and to the sides.
I wonder if this is my Qi?
The Qi of the substance?
My aura? (at least it wasn’t black)
Any remaining nervousness, uncertainty, and expectations are let go now, I feel comfortable and surprisingly in control of myself and my senses.
Throughout the whole experience though - I am present.

I think it was about 8pm when I felt like getting back up again…though I said little about my experience to Sensiko as I think I was still processing this myself.
So far so good.

Had a bit of a time getting to sleep and staying asleep last night, but didn’t have any wild dreams.

First steps - done.

Next step - go deeper.
I’m ready.
 