
JOURNAL OF SCIENTIFIC EXPLORATION
A Publication of the Society for Scientific Exploration

Volume 14, Number 1

Research Articles
  1  Investigating Deviations from Dynamical Randomness with Scaling Indices
     Harald Atmanspacher and Herbert Scheingraber
 19  Valentich Disappearance: New Evidence and a New Conclusion
     Richard F. Haines and Paul Norman
 35  Protection of Mice from Tularemia Infection with Ultra-Low, Serial Agitated Dilutions Prepared from Francisella tularensis-Infected Tissue
     Wayne B. Jonas and Debra K. Dillner
 53  The Correlation of the Gradient of Shannon Entropy and Anomalous Cognition: Toward an AC Sensory System
     Edwin C. May, S. James P. Spottiswoode, and Laura V. Faith
 73  Contributions to Variance in REG Experiments: ANOVA Models and Specialized Subsidiary Analyses
     R. D. Nelson, R. G. Jahn, Y. H. Dobyns, and B. J. Dunne
 91  Publication Bias: The "File-Drawer" Problem in Scientific Inference
     Jeffrey D. Scargle

Essay
107  Remote Viewing in a Group Setting
     Russell Targ and Jane E. Katra

Guest Column
115  The Sovereignty of Science

Book Reviews
121  Best UFO Cases-Europe by Illobrand von Ludwiger
     Richard F. Haines
124  The Last Laugh by Raymond Moody
     Robert Almeder
129  The Discovery of the Cold Fusion Phenomenon by Hideo Kozima
     John O'M. Bockris
132  The Truth in the Light by Peter Fenwick and Light and Death by Michael Sabom
     Ian Stevenson
135  Psychedelic Drugs Reconsidered by Lester Grinspoon and James B. Bakalar
     Charles Eisenstein
137  The Meaning of Consciousness by Andrew Lohrey
     John Beloff
139  Erratum

SSE News Items
141  Nineteenth Annual SSE Meeting Announcement
145  Fifth Biennial SSE European Meeting
149  Corrected Index for Volume 13

JOURNAL OF SCIENTIFIC EXPLORATION
A Publication of the Society for Scientific Exploration

Volume 14, Number 2

Research Articles
163  Overview of Several Theoretical Models on PEAR Data
     York H. Dobyns
195  The Ordering of Random Events by Emotional Expression
     Richard A. Blasband
217  Energy, Fitness, and Information-Augmented Electromagnetic Fields in Drosophila melanogaster
     Michael J. Kohane and William A. Tiller
233  A Dog That Seems to Know When His Owner Is Coming Home: Videotaped Experiments and Observations
     Rupert Sheldrake and Pamela Smart

Essay
257  What Can Elementary Particles Tell Us About the World in Which We Live?
     Ronald A. Bryan
275  Modern Physics and Subtle Realms: Not Mutually Exclusive
     Robert D. Klauber

Book Reviews
281  The User Illusion: Cutting Consciousness Down to Size by Tor Nørretranders
     Arnold L. Lettieri, Jr.
282  Lamarck's Signature by Edward J. Steele, Robyn A. Lindley, and Robert V. Blanden; Epigenetic Inheritance and Evolution by Eva Jablonka and Marion J. Lamb
     Michael Levin
287  Authentic Knowing: The Convergence of Science and Spiritual Aspiration by Imants Baruss
     James A. Scott
289  Alien Abductions: Creating a Modern Phenomenon by Terry Matheson
     Damien Broderick
293  At the Threshold by Charles F. Emmons
     Charles Eisenstein
295  Minds in Many Pieces: Revealing the Spiritual Side of Multiple Personality Disorder by Ralph B. Allison
     Robert F. Creegan
296  Earth Under Fire: Humanity's Survival of the Apocalypse by Paul LaViolette
     John L. Petersen

SSE News Items

JOURNAL OF SCIENTIFIC EXPLORATION
A Publication of the Society for Scientific Exploration

Volume 14, Number 3, 2000

303  Tribute
     Peter A. Sturrock
304  Editorial
     Henry Bauer

Research Articles
307  Plate Tectonics: A Paradigm Under Threat
     David Pratt
353  The Effect of the "Laying On of Hands" on Transplanted Breast Cancer in Mice
     William F. Bengston and David Krinsley
365  The Stability of Assessments of Paranormal Connections in Reincarnation-Type Cases
     Ian Stevenson and Jürgen Keil
383  ArtREG: A Random Event Experiment Utilizing Picture-Preference Feedback
     R. G. Jahn, B. J. Dunne, Y. H. Dobyns, R. D. Nelson, and G. J. Bradish
411  Can Population Growth Rule Out Reincarnation? A Model of Circular Migration
     David Bishai

Commentary
421  The Mars Effect Is Genuine: On Kurtz, Nienhuys, and Sandhu's Missing the Evidence
     Suitbert Ertel and Kenneth Irving
431  Bulky Mars Effect Hard to Hide: Comment on Dommanget's Account of the Belgian Skeptics' Research
     Suitbert Ertel

Essay
447  What Has Science Come to?
     Halton Arp

Letters to the Editor
455  Is the Activity of the Cerebral Cortex a "Brake" on the Functioning of the Brainstem?
     Jesse Hong Xiong
458  Understanding the Nature of Racial Prejudice: Comment on Hoyle and Wickramasinghe's Article, with Response
     Richard S. Hahn

Book Reviews
461  With the Tongues of Men and Angels: A Study of Channeling by Arthur Hastings
     Stephan A. Schwartz
463  Reaching for Reality: Seven Incredible True Stories of Alien Abduction by Constance Clear
     Robert Nordberg
465  Project Mindshift: The Re-Education of the American Public Concerning Extraterrestrial Life, 1947 to the Present by Michael Mannion
     Alexander Imich
468  Cryptozoology A to Z: The Encyclopedia of Loch Monsters, Sasquatch, Chupacabras, and Other Authentic Mysteries of Nature by Loren Coleman and Jerome Clark
     Larissa Vilenskaya

(continued on next page)

JOURNAL OF SCIENTIFIC EXPLORATION
A Publication of the Society for Scientific Exploration

Volume 14, Number 4, 2000

497  Editorial

Research Articles
499  Mind/Machine Interaction Consortium: PortREG Replication Experiments
     R. Jahn, J. Mischo, D. Vaitl, et al.
557  Unusual Play in Young Children Who Claim to Remember Previous Lives
     Ian Stevenson
571  A Scale to Measure the Strength of Children's Claims of Previous Lives: Methodology and Initial Findings
     Jim B. Tucker
583  Reanalysis of the 1965 Heflin UFO Photos
     Ann Druffel, Robert M. Wood, and Eric Kelson
623  Should You Take Aspirin to Prevent Heart Attack?
     Joel M. Kauffman

Letters to the Editor
643  A Further Note on Walach and Schmidt's "Empirical Evidence for a Non-Classical Experimenter Effect"
     Joop M. Houtkooper, Dieter Vaitl, and Ulrich Timm
644  Reply to "A Further Note on Walach and Schmidt's 'Empirical Evidence for a Non-Classical Experimenter Effect'"
     Stefan Schmidt and Harald Walach
646  Response to "Valentich Disappearance: New Evidence and a New Conclusion," by Richard F. Haines and Paul Norman
     Philip J. Klass
647  Authors' Reply to Philip Klass
     Richard F. Haines and Paul Norman

Book Reviews
649  Mysterious Flame: Conscious Minds in a Material World by Colin McGinn
     Michael Levin
654  Dancing Naked in the Mind Field by Kary Mullis
     Henry H. Bauer
657  Consciousness by D. Rakovic and D. Koruga
     J. O'M. Bockris
659  Searching for Eternity: A Scientist's Spiritual Journey to Overcome Death Anxiety by Don Morse
     Alexander Imich
662  Gardner's Whys & Wherefores by Martin Gardner
     Robert L. Raymond
663  Paranormal Experience and Survival of Death (SUNY Series in Western Esoteric Traditions) by Carl B. Becker
     John Beloff
664  Scientific Development and Misconceptions through the Ages, A Reference Guide by Robert E. Krebs
     Robert F. Creegan

SSE News Item
665  Twentieth Annual SSE Meeting Announcement and Registration Form

669  Index: Volume 14

JOURNAL OF SCIENTIFIC EXPLORATION
A Publication of the Society for Scientific Exploration
(ISSN 0892-3310)

Editor-in-Chief, Bernard Haisch*, Henry Bauer**
Executive Editor, Marsha Sims*; Managing Editor, Sarah Cunningham**

* For material published through Vol. 14, No. 2 and for several articles in later issues.
** For all new material.

Associate Editors, Stephen Braude, Dean Brown, Michael Epstein, Dean Radin, and Mark Rodeghier

Editorial Office and Manuscript Submission: Journal of Scientific Exploration, Attn: Sarah Cunningham,
Allen Press, 810 East 10th St., Lawrence, KS 66044 (scunningham@allenpress.com); 1-800-627-0629 ext. 284; Fax: 785-843-1274

World Wide Web - http://www.jse.com
Society for Scientific Exploration Website - http://www.scientificexploration.org

Executive Book Review Editor, Michael Epstein
Analytical Chemistry Division, MS 8391, National Institute of Standards and Technology, 100 Bureau Drive, Gaithersburg, MD 20899-0001 (michael.epstein@nist.gov)

Associate Book Review Editor, David Moncrief

Editorial Board
Dr. Mikel Aickin, Center for Health Res., Kaiser Permanente, Portland, OR
Prof. Rémy Chauvin, Sorbonne, France
Prof. Olivier Costa de Beauregard, University of Paris, France
Dr. Steven J. Dick, U.S. Naval Observatory, Washington, DC
Dr. Peter Fenwick, Institute of Psychiatry, London, UK
Dr. Alan Gauld, Dept. of Psychology, Univ. of Nottingham, UK
Prof. Richard C. Henry (Chairman), Dept. Physics & Astronomy, Johns Hopkins Univ.
Prof. Robert Jahn, School of Engineering, Princeton University
Prof. W. H. Jefferys, Dept. of Astronomy, University of Texas
Dr. Wayne B. Jonas, National Institutes of Health, Bethesda, MD
Dr. Michael Levin, Cell Biology Dept., Harvard Medical School
Dr. David C. Pieri, Jet Propulsion Laboratory, Calif. Inst. Technology, Pasadena, CA
Prof. Juan Roederer, University of Alaska-Fairbanks
Prof. Kunitomo Sakurai, Institute of Physics, Kanagawa University, Japan
Prof. Ian Stevenson, Health Sciences Center, University of Virginia
Prof. Peter Sturrock, Ctr. for Space Science & Astrophysics, Stanford University
Prof. Yervant Terzian, Dept. of Astronomy, Cornell University
Prof. N. C. Wickramasinghe, School of Mathematics, Univ. College Cardiff, UK

SUBSCRIPTIONS AND BACK ISSUES: Please use the order forms in the back.

COPYRIGHT: Authors retain the copyright to their writings. However, when an article has been submitted to the Journal of Scientific Exploration for consideration, the Journal holds first serial (periodical) publication rights. Additionally, the Society has the right to post the article on the Internet and make it available via electronic as well as print subscription. The material must not appear anywhere else (including on an Internet Website) until it has been published by the Journal (or rejected for publication). After publication in the Journal, authors may use the material as they wish but should make appropriate reference to the prior publication in the Journal, for example: "This paper (material) first appeared in the Journal of Scientific Exploration, vol. ... no. ... pp. ... under the title ... ."

Journal of Scientific Exploration (ISSN 0892-3310) is published quarterly in March, June, September, and December by the Society for Scientific Exploration, Allen Press, 810 East 10th St., Lawrence, KS 66044. Private subscription rate: US $50.00 per year (plus $5.00 postage outside USA). Institutional and Library subscription rate: US $100.00 per year. Periodical postage paid at Lawrence, KS, and additional mailing offices. POSTMASTER: Send address changes to: Journal of Scientific Exploration, Allen Press, 810 East 10th St., Lawrence, KS 66044.

Journal of Scientific Exploration, Vol. 14, No. 1, pp. 1-18, 2000

0892-3310/00 © 2000 Society for Scientific Exploration

Investigating Deviations from Dynamical Randomness with Scaling Indices

Harald Atmanspacher
Institut für Grenzgebiete der Psychologie, Wilhelmstrasse 3a, D-79098 Freiburg, Germany
and
Max-Planck-Institut für extraterrestrische Physik, Giessenbachstrasse, D-85740 Garching, Germany

Herbert Scheingraber
Max-Planck-Institut für extraterrestrische Physik, Giessenbachstrasse, D-85740 Garching, Germany

Abstract-The information contained in any given experimental time series can be utilized more exhaustively when transition probabilities between states, rather than state probabilities alone, are studied. Using advanced techniques of time series analysis, it is shown that deviations from dynamical randomness indicate evidence for unexpected temporal correlation features in selected data sets taken from a mind-matter experiment conducted at Freiburg (Germany). The techniques of analysis and a proper error estimation are briefly described, and some preliminary results are presented. They encourage further inquiry into processual aspects of deviations from randomness in addition to more straightforward analyses of state probabilities.

Keywords: transition probabilities - scaling indices - deviations from randomness - temporal correlations in mind-matter data

1. Introduction
Among other problem areas in the general framework of mind-matter research there is the basic question as to whether there are presently unknown relationships between physical systems and mental activities of human agents. Avoiding any speculations with respect to specific concepts of correlations, interactions, and causal or other influences between physical systems and mental systems, the mere formulation of this question presupposes a minimal methodological (not ontological) dualism of mind and matter without which it could not even be phrased. It has to be specified in order to lead to researchable problems. The special type of question addressed here is whether the randomness of a physically random system changes when human agents are asked to carry out certain "intentional tasks" that are defined so as to correspond to deviations from randomness in the behavior of the system. Randomness is itself a concept that needs to be considered in more detail. First, there is the much discussed question of whether randomness or chance is

an "ontic" property of the material world, without any reference to the knowledge of observers, or whether it is an "epistemic" expression of our limited knowledge about an ontic world. Another most obvious problem with which any working scientist is familiar arises from the fact that for empirical purposes the mathematical, measure theoretic definition of probability a la Kolmogorov has to be backed up by an interpretation in terms of a limit of infinitely many independent realizations of an event. In practice, this is impossible to achieve. We always deal with finitely many events, they are never identical in every respect, and it has to be checked carefully whether and in what sense they are really independent of each other. The finiteness of any empirical collective of events has the consequence that many mathematical theorems about probabilities (based on the limit N -+ o o ) must be applied with caution. For instance, in the area of complex systems research, many examples are known for which limit theorems (e.g., laws of large numbers) are not naively applicable and ergodicity must not be implicitly presupposed. Novel approaches and techniques of modern statistics such as second order statistics or large deviations statistics (Atmanspacher, Rath, and Wiedenmann, 1997) offer new insights beyond a standard characterization of a distribution in terms of its first two moments. For appropriate methods of analyzing finite collectives in such situations, it has been proven necessary to characterize the concept of randomness with reference to the range of N with which one is empirically dealing. Another important topic is the independence of events as members of a random collective. It is well known that distributions of states of a system can be perfectly random in the sense that they are perfectly represented, e.g., by a Gaussian distribution, but nevertheless there can be non-random features, e.g., correlations, as far as the transition probabilities between states (i.e., the dynamics of the system) are concerned. In such a case, individual events are not independent. They depend on their prehistory and show significant temporal correlations. Correlations are significant if they deviate (at a level to be specified) from the amount of correlations expected due to the finiteness of the random collective. A number of measures that characterize and distinguish these different kinds of randomness have been analyzed and compared with each other by Wackerbauer et al. (1994). For an extensive study addressing dynamical randomness due to different kinds of stochastic processes see Gaspard and Wang (1993). In Section 2 we give some basic arguments as to why dynamical aspects of randomness are important. Section 3 introduces a method of time series analysis capable of detecting deviations from (dynamical or non-dynamical) randomness with high sensitivity. This method is applied to data from an experiment described in Section 4.1. In Section 4.2 we sketch some details about proper error estimates for the analysis, and Section 4.3 discusses some first results from a small set of data involving human agent intention. Section 4 is a condensed version of a more comprehensive presentation in Atmanspacher et al. (1999). Section 5 concludes the paper with some perspectives.

2. Why Transition Probabilities?
Let us consider a state space $A$ with a (homogeneous or non-homogeneous) partition $P = \{A_i\}_{i=1}^{N}$ such that $\bigcup_i A_i = A$ and $A_i \cap A_j = \emptyset$ for all $i \neq j$. If we assume that states ω of a system can be represented as points in A, then each state ω can be assigned to a cell A_i of the partition. Given an overall probability measure μ on A, the probability of finding the state ω in cell A_i, briefly denoted the state probability, is

$$ p_i = \mu(A_i). \qquad (1) $$

Since the state is assumed to be somewhere in A, the state probabilities are normalized,

$$ \sum_{i=1}^{N} p_i = 1. \qquad (2) $$

Supposing that the states ω are not stationary but evolve dynamically in A, each ω has a predecessor and a successor. Let p_ij be the joint probability that the sequence ω ∈ A_i → ω ∈ A_j occurs in any two successive steps in the dynamics, for instance described by a stochastic process (for more details see Doob, 1953). Then

$$ p_{i \to j} = \frac{p_{ij}}{p_i} \qquad (3) $$

is the conditional probability that, given ω ∈ A_i, the successor of ω is ω ∈ A_j. The matrix in Equation (3) is called a transition matrix, its entries being the transition probabilities. Since each state has to be somewhere in the subsequent step, the transition probabilities have to be normalized in each row of the matrix,

$$ \sum_{j=1}^{N} p_{i \to j} = 1 \quad \text{for each } i. \qquad (4) $$

In another terminology, p_i→j is a forward transition probability, designed for purposes of predicting a future state given the present. One can similarly define backward transition probabilities, designed for purposes of retrodicting a past state given the present, by p_i←j = p_ij / p_j. This would be the conditional probability that, given ω ∈ A_j, the predecessor of ω is ω ∈ A_i. In general,

$$ p_{i \to j} \neq p_{i \leftarrow j}. $$

After these definitions, it is easy to answer the question formulated in the title of the present section. Stochastic processes with different transition matrices p_i→j can produce the same state distribution p_i. This is to say that

information about the transition matrix can help to distinguish between different processes generating the same state distribution. If there are reasons to believe that (almost) identical state distributions are generated by different processes, then, in order to distinguish them, one has to look for differences in the transition probabilities. To discuss some straightforward examples, we consider two classes of stochastic processes, namely uniformly and doubly stochastic processes.
Uniformly stochastic processes are characterized by an equidistribution of states,

$$ p_i = \frac{1}{N} \quad \text{for all } i. \qquad (5) $$

Doubly stochastic processes are characterized by the fact that transition probabilities in both rows and columns are normalized,

$$ \sum_{j} p_{i \to j} = 1 \quad \text{and} \quad \sum_{i} p_{i \to j} = 1. \qquad (6) $$

Each doubly stochastic process is uniformly stochastic, but not vice versa (Feller, 1950, p. 399). For instance, a transition matrix of the form

$$ p_{i \to j} = \frac{1}{N} \quad \text{for all } i, j \qquad (7) $$

implies that p_i = 1/N for all i, hence uniform stochasticity. Well-known examples are the fair coin (N = 2) or the fair die (N = 6). Other examples, for which the partition is usually much more refined, are "fully developed chaos" in the logistic map (at r = 4), or any white noise process, i.e., processes of Markov order zero with no temporal correlations. Not quite so simple are doubly stochastic processes with transition matrices satisfying Equation (6) with different entries. For example, a two-state process with

$$ (p_{i \to j}) = \begin{pmatrix} p & 1-p \\ 1-p & p \end{pmatrix}, \qquad p \neq \tfrac{1}{2}, \qquad (8) $$

also implies uniform stochasticity with p_i = 1/2 for both states. More involved examples of this kind of doubly stochastic behavior are given by colored noise processes, i.e., higher order Markov processes with temporal correlations.

Doubly stochastic processes include the special case of strictly deterministic (e.g., periodic) processes. If

$$ \sum_{i} p_{i \to j} \neq 1 \quad \text{for some } j, \qquad (9) $$

the corresponding stochastic process is not doubly stochastic. This is the general case, and it is easy to imagine that in this case identical state distributions can be generated by processes with totally different transition matrices. Some well-studied examples are the golden mean process, the even process, or the Misiurewicz process. For more details, see Young and Crutchfield (1994).
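To make the distinction concrete, the following Python sketch (not part of the original paper; the function names and the two example matrices are purely illustrative) estimates state and transition probabilities from a simulated symbol sequence. Both example chains are doubly stochastic and therefore share the same uniform state distribution, yet their transition matrices, and hence their dynamics, differ clearly.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_chain(P, n_steps, rng):
    """Generate a state sequence from a transition matrix P (rows sum to 1)."""
    n_states = P.shape[0]
    states = np.empty(n_steps, dtype=int)
    states[0] = rng.integers(n_states)
    for t in range(1, n_steps):
        states[t] = rng.choice(n_states, p=P[states[t - 1]])
    return states

def estimate_probabilities(states, n_states):
    """Empirical state probabilities p_i and transition probabilities p_{i->j}."""
    p = np.bincount(states, minlength=n_states) / len(states)
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    p_trans = counts / counts.sum(axis=1, keepdims=True)
    return p, p_trans

# Two doubly stochastic matrices: an uncorrelated ("white noise") chain and a
# strongly correlated chain; both have the uniform state distribution (1/2, 1/2).
P_white = np.array([[0.5, 0.5], [0.5, 0.5]])
P_corr = np.array([[0.9, 0.1], [0.1, 0.9]])

for P in (P_white, P_corr):
    s = simulate_chain(P, 20_000, rng)
    p, p_trans = estimate_probabilities(s, 2)
    print("state distribution :", np.round(p, 3))         # ~[0.5, 0.5] in both cases
    print("transition matrix  :\n", np.round(p_trans, 3))  # clearly different
```

The point of the example is exactly the one made in the text: the state distributions are indistinguishable, so only the estimated transition matrices reveal that the two processes differ.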

3. Scaling Indices
There are a number of standard techniques of time series analysis which have originally been developed for the analysis of linear systems. Tools such as autocorrelation functions and power spectra have been widely used in many fields of application. Particularly due to more recent interest in the non-linear dynamics of complex systems, there is a considerable amount of literature on non-linear time series (see, e.g., Priestley, 1988; Tong, 1990; Kantz and Schreiber, 1997); for applications see additionally Deco, Schittenkopf and Schürmann (1997), Kantz, Kurths and Mayer-Kress (1998), and Schreiber (1999). A particularly illuminating example for the power of advanced time series techniques, enabling the detection of phase correlations in non-linear systems, has been reported by Rosenblum, Pikovsky and Kurths (1996).

Among general topics such as optimal prediction, noise reduction, and tests for stationarity and linearity, it is one of the central problems for many applications to distinguish between random and non-random contributions in a given time series. An extremely sensitive technique in this regard is the scaling index analysis, a procedure strongly inspired by the concept of multifractals (Mandelbrot, 1974; Halsey et al., 1986). Scaling indices essentially represent a fairly sophisticated and compact measure of correlations between data points (Paladin and Vulpiani, 1987). For a given point set they describe how the local density of the set around each point changes with increasing distance from that point. A convenient way to visualize this is in terms of a histogram N(α) showing the number N of points in the set as a function of the scaling index α. The scaling index α (sometimes also denoted "crowding index"; Grassberger, Badii and Politi, 1988) is defined

$$ \alpha_i = \frac{\log n_i(\epsilon_2) - \log n_i(\epsilon_1)}{\log \epsilon_2 - \log \epsilon_1}, $$

where n_i(ε) is the number of points within a box of size ε, to be considered

around point i of the entire point set (i = 1, 2, ..., N_tot), and ε_1 < ε_2. The scaling index is thus defined for a specific range of ε, which in turn defines a locality criterion with respect to which correlations are characterized by α_i. Counting those boxes (to be constructed around points) that give rise to α_i, a histogram N(α) is obtained. For more details and additional references, see Atmanspacher, Scheingraber and Wiedenmann (1989) and Atmanspacher et al. (1999).

For perfectly random distributions in a d-dimensional space, the ideal N(α) histogram for infinitely many points is a δ-function at α = d. Due to the restrictions imposed by a finite number of points, non-zero ε, and finite binning of α, N(α) is broader and its mean is shifted toward values of α smaller than d. The same happens for regular patterns with topological dimension n < d, where the ideal N(α) for infinitely many points is a δ-function at α = n. It is intuitively clear that a scaling index analysis can be useful to discriminate between non-random features with n < d and a random background. For examples in various scientific areas see Atmanspacher, Scheingraber and Wiedenmann (1989), Morfill, Demmel and Schmidt (1994), Atmanspacher, Wiedenmann and Amann (1995), Rath and Morfill (1997), Wiedenmann, Scheingraber and Voges (1997), and Atmanspacher et al. (1999). For faint non-random contributions dominated by a random distribution such a task is difficult since, depending on their nature, they typically appear as small deviations in the left wing of an N(α) histogram characterizing randomness. For such situations, it is crucial to take care in selecting a good locality criterion (the range of ε over which α_i is calculated) and in estimating errors in an appropriate manner.

If a correlation analysis based on a scaling index analysis is to be carried out for a time series (temporal sequence of data) rather than a spatial pattern, then it is necessary to represent the time series in a space with dimension greater than one. A standard way to achieve such a representation has been proposed by Packard et al. (1980) in order to reconstruct attractors of dissipative dynamical systems and extract their invariants. Our goal in the present study is less ambitious since we do not look for invariants of a real physical process but simply for correlations in the temporal sequence of data points in a time series. Inspired by the delay coordinate technique of Packard et al., we use the original time series φ(t) to construct a number (d - 1) of additional time series, each delayed by a temporal interval Δt with respect to its predecessor. In this way, new coordinates x_i are obtained according to

$$ x_1(t) = \phi(t), \quad x_2(t) = \phi(t + \Delta t), \quad \ldots, \quad x_d(t) = \phi(t + (d-1)\Delta t). $$


In this manner, the original (one-dimensional) time series can be represented in a d-dimensional space, and the corresponding histogram N(α) can be calculated. This allows us to discriminate regular (deterministic) contributions from random noise in the behavior of dynamical systems. In particular, non-random features in the transition probabilities (i.e., correlations) between individual members of the time series can be detected in addition to non-random state probabilities. Such a task is fairly straightforward in the case of low-dimensional attractors (fixed points, limit cycles); compare, e.g., Siffling (1996). For investigations of less prominent non-random temporal features in a random background process, the analysis becomes more sophisticated. In this case, one would ideally proceed to embedding dimensions d as high as possible since the α-range of N(α) characterizing random contributions increases with d, whereas the α-range of the non-random part of N(α) drops back with increasing d. This means that faint non-random contributions appearing as deviations in the left wing of the dominating random part of N(α) become more pronounced and thus can be better discriminated as d is increased. However, severe restrictions on the size of d are imposed by the length of the original time series. For as few as 1000 data points per time series, the point distribution resulting from the delay technique in d = 4 is not dense enough to admit a statistically reasonable analysis. For 10,000 data points, d = 4 is a reasonable upper limit for d. In addition, it is mandatory to carry out a proper error estimate since the N(α) histogram for a random distribution of not more than 10,000 points deviates significantly from its limit for infinitely many points. Non-vanishing correlations are to be expected for random time series of finite length. The question is whether the scaling histogram of an empirically observed time series of finite length differs significantly from the expected histogram for a random time series with the same finite length. It makes absolutely no sense to compare empirically obtained N(α) histograms for finite time series with N(α) histograms as they are theoretically expected in the limit of infinitely long time series.

Significant deviations from a random distribution of points as detected by a scaling index analysis of a time series embedded in a d-dimensional space can have different origins. In particular, such deviations are not necessarily due to deviations from dynamical randomness, i.e., temporal correlations in the time series. Moreover, the delay technique generating the embedding space can lead to delicate superpositions of different kinds of correlations that are difficult to separate from each other. It is therefore inevitable to apply additional procedures if one wants to disentangle genuinely temporal correlations from others; this issue will be taken up later on.
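As a concrete illustration of the procedure just described, the following Python sketch (not from the paper; the surrogate data and parameter values are placeholders, and the neighbour counting is done by brute force rather than with an optimized search) embeds a scalar series in d dimensions with delay coordinates and computes the scaling indices α_i together with the histogram N(α).

```python
import numpy as np

def delay_embed(x, d, delta_t=1):
    """Represent a 1-D series in d dimensions via delay coordinates (Packard et al., 1980)."""
    n = len(x) - (d - 1) * delta_t
    return np.column_stack([x[k * delta_t : k * delta_t + n] for k in range(d)])

def scaling_indices(points, eps1, eps2):
    """alpha_i = [log n_i(eps2) - log n_i(eps1)] / [log eps2 - log eps1], where
    n_i(eps) counts the points inside a box of size eps around point i
    (boxes realized here as Chebyshev neighbourhoods)."""
    diff = np.abs(points[:, None, :] - points[None, :, :]).max(axis=-1)
    n1 = (diff <= eps1).sum(axis=1)
    n2 = (diff <= eps2).sum(axis=1)
    return (np.log(n2) - np.log(n1)) / (np.log(eps2) - np.log(eps1))

# Surrogate for REG-like raw data: integers scattered around 100.
rng = np.random.default_rng(1)
x = rng.binomial(200, 0.5, size=1000).astype(float)

emb = delay_embed(x, d=4, delta_t=1)
alpha = scaling_indices(emb, eps1=4.6, eps2=12.7)   # locality criterion of Section 4.2
N_alpha, bin_edges = np.histogram(alpha, bins=np.arange(0.0, 5.0, 0.01))
print("mean scaling index:", alpha.mean())           # below d = 4 for a finite point set
```

The brute-force distance matrix limits this sketch to a few thousand points; for the 10,000-point series analyzed in the paper one would use a spatial index or block-wise computation, but the definition of α_i is unchanged.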

H. Atmanspacher & H. Scheingraber 4. Data Analysis
4.1 Experimental

The prototype of the experimental setup from which data are taken for analysis goes back to Schmidt (1970). It has been refined and applied to a large range of empirical questions in the work of the Princeton Engineering Anomalies Research (PEAR) project within the last two decades. The general question behind that work is whether some "intentional activity" of human agents changes the output of physical systems which are expected to produce random events. The work of PEAR suggests that there are significant deviations from randomness when human agent intention is involved (Jahn et al., 1997; see also Utts, 1991). A replication of the PEAR studies has been started at the Institut für Grenzgebiete der Psychologie und Psychohygiene (IGPP) at Freiburg in 1996. The present investigations refer to a small subset of data obtained in the IGPP replication study.

The material core of the experiment is a random event generator (REG) whose output (after some processing) consists of binary sequences. In the IGPP replication study, a portable random event generator, different from the REG originally developed and used by PEAR, was utilized. More details concerning these two sources of randomness can be found in Nelson, Bradish and Dobyns (1989) and Bradish (1993). Both sources are semiconducting devices, producing a mixture of quantum and thermal noise. After 200 bits (0s and 1s) are generated by the REG, the number of 1s is counted and the result is taken as the outcome of a single "trial" of the experiment. Sequences of 100 and 1000 trials, respectively, constitute experimental "runs" consisting of successive integer "raw data" x_i scattered around a mean value of 100 with a standard error of √50. The trials of a run can be graphically represented (depending on feedback options) on a monitor. This representation t_i is obtained from the raw data x_i by

$$ t_i = \sum_{k=1}^{i} (x_k - 100), $$


or, recursively, by

$$ t_i = t_{i-1} + (x_i - 100), \qquad t_0 = 0. $$

The series t_i is thus cumulative in the sense that a constant value above (below) 100 in the raw data would produce a monotonically increasing (decreasing) graph for t_i on the monitor. If the REG produces random numbers,

the overall expectation (for N → ∞) is that the curve t_i does not significantly deviate from the baseline 0. According to the experimental protocol, human agents are asked to intentionally try to cause the curve to rise above or fall below the baseline, or to be intentionally neutral with respect to the appearance of the curve in different runs. (They are not explicitly asked to achieve deviations from randomness.) The corresponding "modes of intention" are denoted as "high," "low," and "baseline." It is left to the agents how to realize each of these modes of intention cognitively. The particular intention per run is preselected randomly or by the agents themselves. An experimental "session" consists of 1000 trials per intention. For runs consisting of 100 (1000) trials, a session therefore amounts to 10 runs (1 run) per intention, i.e., 30 (3) runs in total. For the analysis described below, data from agents with 10 sessions each have been used, i.e., 10,000 trials per agent and intention. To avoid artificial correlation effects, the analysis has been based on the raw data x_i rather than the cumulative data t_i. Regarding the values assumed by x_i (roughly between 70 and 130) as states, the histogram of those states represents a state distribution.

The most prominent result obtained from analyses of the data pool collected by PEAR is that the mean value of the distribution of state probabilities (integer values x_i) shows small systematic shifts above and below the expected mean of 100.0 for experimental conditions with high and low intention, respectively. The precise values as given by Table 1 in Jahn et al. (1997) are 100.026 for high, 99.984 for low, and 100.013 for baseline. Although the corresponding high-low separation of 0.042 is tiny, Jahn et al. (1997) report an overall z-score of 3.81 (with p = 7 × 10⁻⁵) for the entire set of individual sessions. (For more details on the importance of z-scores and effect sizes in this and related studies see Utts (1991) and Delanoy (1996).) This is not the only result that was found by PEAR. After perusing the available literature, it appears that the analyses that have been done are all based on state probabilities and do not explicitly take transition probabilities into account. As shown in Section 2, it is interesting to focus on those transition probabilities between states since they can reveal information that is not available in the distribution of state probabilities alone.

In the following, we describe a scaling index analysis of time series obtained from a physical random event generator (REG) as described above under two different conditions:
1. The output of the REG is sampled without any additional constraints on the experimental setup; the corresponding data are used for calibration and allow us to derive a reasonable error estimate;
2. The output of the REG is analyzed for situations in which human agents are asked to carry out "intentional tasks" that are defined so as to correspond to non-random contributions to the activity of the REG.

The main conceptual difference between the scaling index analysis and the analyses carried out by PEAR so far consists of the fact that we do explicitly

address transition probabilities between states together with state probabilities. (Of course, there are other possibilities to realize such a purpose; see, e.g., Kurths et al., 1995.) As mentioned above, the data used for this purpose are data from the IGPP replication study. This study is not yet finished, implying that any assignment of individual runs to agent intentions is kept hidden so far. Therefore, our analysis is strictly bound to an analysis of deviations from randomness and does not refer to specific agent intentions.
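For concreteness, the short Python sketch below (not part of the original study; it uses a pseudo-random generator in place of the physical noise source, and the cumulative formula follows the verbal description given above) reproduces the trial structure: each trial is the number of 1s among 200 bits, and the feedback series t_i accumulates the deviations of the raw data x_i from the theoretical mean of 100.

```python
import numpy as np

rng = np.random.default_rng(42)

def reg_trials(n_trials, bits_per_trial=200):
    """Each trial counts the 1s among 200 generated bits: mean 100, with a
    single-trial spread of sqrt(50), as quoted in the text."""
    bits = rng.integers(0, 2, size=(n_trials, bits_per_trial))
    return bits.sum(axis=1)

x = reg_trials(10_000)            # raw data x_i (10,000 trials per agent and intention)
t = np.cumsum(x - 100)            # cumulative feedback series t_i = sum_k (x_k - 100)

print("mean of raw data      :", x.mean())        # close to 100 for an unbiased REG
print("spread of single trials:", x.std(ddof=1))   # close to sqrt(50) ~ 7.07
print("final cumulative value :", t[-1])           # fluctuates around 0
```

The subsequent scaling index analysis is then carried out on the raw series x, not on the cumulative series t, for exactly the reason given in the text: the cumulation itself would introduce artificial correlations.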
4.2 Errors

In experiments such as those briefly sketched in Section 4.1, one should by no means expect easily detectable, major deviations from a perfectly random distribution. For this reason, extremely careful error estimates are mandatory for a sound scaling index analysis. These error estimates have to be based on the same parameters as used in the analysis of those data taken under the influence of human agents. For time series consisting of 10,000 data points, Δt = 1 and d = 4 have been selected. Scaling indices α are binned in steps of 0.01. For an optimal locality criterion, ε_1 = 4.6 and ε_2 = 12.7 have been determined by minimizing the differences between N(α) histograms for calibration data (Atmanspacher et al., 1999). This means that we look for correlations on the mentioned distance scale only. Correlations on larger scales, e.g., approaching the diameter of the point distribution as a whole, are left out of consideration. Calibration data are taken from the experimental random number device as described in Section 2. Figure 1 shows an example of a resulting N(α) histogram.

Fig. 1. Histogram of scaling indices N(α) for a set of 10,000 calibration data obtained from the IGPP replication of the PEAR experiment. The embedding dimension is d = 4, so that the theoretical expectation for N → ∞ (and Δα → 0) is a δ-function at α = 4.


The relative differences (normalized with respect to the total number N_tot of points)

between any two histograms N(α)_1 and N(α)_2 (for specifications see Section 4.3) will be shown and discussed in integral form according to

$$ \Delta_{1,2}(\alpha') = \frac{1}{N_{\mathrm{tot}}} \int_0^{\alpha'} \left[ N(\alpha)_1 - N(\alpha)_2 \right] d\alpha . $$

Such an integral representation¹ has the advantage that consistent trends in the differences δ over extended ranges of α become clearly visible even if they are small and noisy. Values α_ext at which Δ is maximal (minimal) correspond to the onset of negative (positive) differences δ after an extended range α < α_ext of positive (negative) differences δ. Hence, an extremum in Δ indicates that the differences δ giving rise to it are at α < α_ext.

As an example, the dotted line in Figure 2 shows the differences δ_1,2 of two histograms. They are calculated for each α bin separately and normalized with respect to the total number N_tot of points.

Fig. 2. Example of differential and integral representations of the differences between two N(α) histograms. Dotted line: relative normalized differences δ_1,2 for each bin. Solid line: relative normalized integral differences Δ_1,2 (same as dotted line in Figure 4).
¹As always, integrals are numerically evaluated by a sum of finitely many terms, here given by the values of N(α) within bins of Δα = 0.01.
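Numerically, the differential and integral comparisons just defined reduce to simple array operations. The Python fragment below is an illustrative sketch (not the authors' code); it computes the per-bin differences δ and their running sum Δ for two histograms built on common α bins and locates the extremum α_ext discussed in the text. The synthetic input histograms are placeholders.

```python
import numpy as np

def integral_difference(N1, N2, n_total):
    """Per-bin normalized differences delta and their running sum Delta, i.e. the
    integral difference evaluated as a finite sum over alpha bins (cf. footnote 1)."""
    delta = (N1 - N2) / n_total
    Delta = np.cumsum(delta)
    return delta, Delta

# Two synthetic N(alpha) histograms on common bins of width 0.01 (placeholders).
rng = np.random.default_rng(3)
bins = np.arange(2.0, 4.5, 0.01)
alpha_1 = rng.normal(3.40, 0.20, 10_000)   # stand-in for a calibration set of scaling indices
alpha_2 = rng.normal(3.38, 0.20, 10_000)   # stand-in for an "intention" set
N1, _ = np.histogram(alpha_1, bins)
N2, _ = np.histogram(alpha_2, bins)

delta, Delta = integral_difference(N1, N2, n_total=10_000)
i_ext = np.argmin(Delta)                   # extremum marking the onset of the sign change in delta
print(f"extremum of Delta at alpha = {bins[i_ext]:.2f}, value = {Delta[i_ext]:.4f}")
```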

The solid line in Figure 2 shows the corresponding integral plot of Δ_1,2, again as relative deviations. For α < α_ext = 3.18, the differences δ_1,2 are consistently negative. This trend changes at α_ext = 3.18, where Δ_1,2 is minimal and the differences become positive. The relative deviation at α_ext = 3.18 is negative and amounts to 3.56%.

In order to obtain reasonable error estimates for the analysis, the fluctuations of such histograms for 100 different realizations of calibration data were considered. Subtracting each of the 100 individual N(α) histograms from a mean N(α) histogram, a mean fluctuation ⟨Δ_cal⟩ was determined as a function of α (see Figure 3). The applied procedure, described in detail by Atmanspacher et al. (1999), is basically heuristic; further work to back it up formally is in progress. Insofar as the calibration data can be regarded as surrogate data (Theiler et al., 1992), this mean fluctuation represents a proper error estimate for the histograms. Values of ⟨Δ_cal⟩ that differ from zero quantify how much the density of an individual set of calibration data fluctuates around the density of an average distribution of calibration data with N_tot = 10,000 on purely statistical grounds (given that the overall density is constant). Deviations in the left wing of N(α) refer to "overdense" regions in the embedding space. They indicate that local correlations in an individual set of calibration data deviate from those of an average distribution of calibration data, i.e., are more or less homogeneous than that average distribution (with a finite number of points). "Underdense" regions such as voids (characterized by "anti-correlations") correspond to the right wing of N(α), where errors typically are much larger and prevent any significant detection (Atmanspacher, Scheingraber and Wiedenmann, 1989). In another publication (Atmanspacher et al., 1999), other sets of surrogate

Fig. 3. Mean values (solid line) and standard deviations (shaded area) of Δ(α_ext) for the maxima of Δ as a function of α_ext.


data such as fitted Gaussian distributions and various kinds of Monte Carlo permuted data were studied in addition. It turned out that fluctuations as obtained from the calibration data are both most conservative and most realistic for an error estimate with respect to our present purposes. The detailed procedures and arguments can be found in Atmanspacher et al. (1999).

4.3 Some First Results

The empirical material analyzed was taken from a series of experiments carried out at the Institut für Grenzgebiete der Psychologie und Psychohygiene (IGPP) at Freiburg. The purpose of the IGPP study is a replication of the results obtained by PEAR as briefly summarized in Section 4.1. Only those agents were selected for the analysis who had finished 10 sessions within the IGPP replication study by the end of September 1997. Due to the experimental protocol, the corresponding amount of data is 10,000 trials per mode of intention and per agent. Therefore, individual agents were coded by numbers (1, ..., 16), and modes of intention were randomly coded by numbers (1, 2, 3) for each agent separately before the data were released for analysis. Hence the analysis itself amounts to nothing else than an analysis of deviations from randomness. In particular, it does not make use of any information concerning correlations with psychological observables.

Time series provided by sequences of 10,000 data points per intentional mode for each agent have been analyzed, regardless of any other possible discriminations. This is to say that we disregarded any information other than the distinction between different but unknown (not yet decoded) modes of intention. In other words, we assume that each sequence of 10,000 data points represents a statistical ensemble. The reason is that pilot studies showed that possible deviations from randomness have to be expected to be so small that a sensible scaling index analysis requires data series of at least 10,000 points to be analyzed in at least four dimensions, d = 4. (Other parameters are as given in Section 4.2.) More detailed studies (with shorter time series) may be possible if agents can be identified for which major deviations from randomness are observed.

The first step of the analysis was the determination of the relative integral differences between the N(α) histogram for a suitable representative of a random sequence and the three N(α) histograms corresponding to the three modes of intention for each agent. Selecting the mean histogram over all calibration data (Section 4.2) as such a suitable representative,

$$ \Delta_{cal,int}(\alpha') = \frac{1}{N_{\mathrm{tot}}} \int_0^{\alpha'} \left[ \langle N(\alpha) \rangle_{cal} - N(\alpha)_{int} \right] d\alpha $$

was calculated for each agent and mode of intention. The solid line in Figure 2 represents Δ_cal,int as a function of α for intentional mode #2 of agent #16. There is a prominent negative peak of 3.56% at α = 3.18, where the corresponding error is 1.1% (see Figure 3). This means that the deviation from

randomness in this example is significant at 3.2σ (corresponding to p = 0.0017). It represents the most pronounced effect resulting from an analysis of the extrema of Δ_cal,int.

The significances of differences Δ might be underestimated if one considers only the extrema of Δ. This is due to the fact that the errors in the critical range 2.9 < α < 3.3 depend strongly on α. As a consequence, deviations smaller than their extrema may be more significant than those at the extrema, particularly if they are located at small enough values of α. For this reason, it is interesting to check the dependence of the significance of deviations on α in addition to the deviations themselves. For instance, the significance of deviations for agent #16/intention #2 turns out to be highest at α = 2.98, where an overwhelming 9.52σ is obtained. Significances of deviations from randomness and additional features due to other agents and modes of intention are summarized in Figure 6 and Table 2 of Atmanspacher et al. (1999).

It is remarkable that those data points contributing to significant deviations from randomness refer almost exclusively to states or transitions between states, respectively, located within a small stripe (≈ ±5) around the mean ⟨x⟩ = 100. This implies that major deviations are not caused by (transitions between) states far away from ⟨x⟩. Lucadou (1986) reported a similar observation with respect to his preferred observable "pragmatic information." The relationship between this observable and dynamical randomness remains to be clarified.

Due to the fact that the deviations for agent #16 are so prominent, it has been checked whether they can also be observed with 3-dimensional rather than 4-dimensional embedding. Reducing the embedding dimension to three makes it possible to analyze the 10 sessions (comprising 10,000 data points altogether) individually. The characteristic features indicating non-random contributions remain pronounced for all 10 sessions with 1000 data points each; quantitative significances have not yet been determined. It is remarkable that the characteristic features are homogeneously distributed over the sessions. This underlines the option to further analyze the empirical material concerning agent #16.

Since the analysis as performed in this study is sensitive to deviations from randomness in the transition probabilities as well as state probabilities, it is desirable to distinguish between these different types of non-randomness in the experimental data. If the deviations are due to temporal correlations, they should vanish (or at least decrease) for a Monte Carlo randomization of the sequence of states. This can be tested by generating random permutations of the time series taken under agent intention and subtracting the resulting N(α) histogram from the mean histogram of the calibration data. It turned out that all significant deviations drop well below 2σ if the corresponding sequences are randomized. The hypothesis that we are dealing with deviations from randomness in the transition probabilities rather than randomness with respect to states can be

further backed up by a simulation of small temporal correlations within an otherwise random time series. As an example, one can replace a small proportion, say 60 randomly selected sets of three successive data points ("triplets"), in a random series of calibration data with N_tot = 10,000 by 60 ordered triplets of the form {91, 93, 95}. Analyzing the modified time series according to the procedure described above leads to deviations Δ from the mean calibration distribution, which are plotted as a solid line in Figure 4. The dotted line in the same figure reproduces (from Figure 2) the deviations obtained for agent #16/intention #2.
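Both checks described here, the Monte Carlo randomization of the sequence and the seeding of a random series with a small number of ordered triplets, are easy to express in code. The Python sketch below is illustrative only; the triplet values {91, 93, 95} and the counts (60 triplets, 10,000 points) are taken from the text, while the surrogate calibration series is generated with a pseudo-random binomial model rather than the physical REG, and the triplet positions are restricted to non-overlapping slots for simplicity.

```python
import numpy as np

rng = np.random.default_rng(7)

# Surrogate calibration series: 10,000 REG-like trials (number of 1s among 200 bits).
x = rng.binomial(200, 0.5, size=10_000).astype(float)

# Check 1: Monte Carlo randomization keeps the state distribution but destroys
# temporal correlations; deviations due to transition probabilities should vanish.
x_shuffled = rng.permutation(x)

# Check 2: replace 60 randomly chosen (here non-overlapping) triplets of successive
# values by the ordered triplet {91, 93, 95}, simulating weak temporal correlations.
x_triplets = x.copy()
starts = rng.choice(np.arange(0, len(x) - 2, 3), size=60, replace=False)
for s in starts:
    x_triplets[s:s + 3] = [91.0, 93.0, 95.0]

# Each of the three series (x, x_shuffled, x_triplets) would then be delay-embedded
# and compared to the mean calibration histogram N(alpha), as in Sections 3 and 4.2.
```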
5. Concluding Remarks

The applied analysis was basically used as a non-parametric procedure to check the hypothesis of randomness in the data. There are strong indications that this hypothesis is rejected for some agents. Therefore, our results do strongly encourage further investigations of deviations from randomness in physically random devices and their relationship to the intention of human agents with appropriate techniques. In particular, it seems worthwhile to focus on the dynamical aspects of such deviations in addition to more straightforward analyses of state probabilities. There are some immediate options for future lines of research.

1. As soon as the identification of agents and modes of intention in the IGPP study is revealed, it will be interesting to see in which way

Fig. 4. Solid line: relative normalized integral differences Δ between the N(α) histograms of a distribution with 60 ordered triplets and a mean random calibration distribution. Dotted line: relative normalized integral differences Δ between the N(α) histograms of the distribution corresponding to agent #16/intention #2 and a mean random calibration distribution (same as solid line in Figure 2).

intentional modes (and/or other psychological observables) correlate with deviations from dynamical randomness. Moreover, it has to be investigated whether those agents (e.g., #16) generating such deviations are able to reproduce them.

2. Experimental results from more agents than studied so far are to be analyzed. In addition to material that can be provided by IGPP and a similar replication study at the University of Giessen, there is a tremendously rich data pool at PEAR. These data will be systematically surveyed in the near future.

3. The scaling index analysis allows us to identify those data points contributing to the observed deviations from randomness. Since they are typically located in a small stripe around the mean ⟨x⟩ = 100, a search for patterns in those points is considerably facilitated. It is not yet clear whether such patterns are of the type giving rise to the solid line in Figure 4.

4. The relationship between significantly non-random features in the scaling indices and the underlying stochastic process is both highly nontrivial and non-unique. Further work is necessary to explore this issue in more detail.

Finally, it should be emphasized that the results presented so far are not to be understood in terms of a "proof" for the "reality" of yet unknown relationships between physical and mental systems. Statistical analyses can indicate evidence for something but never "prove" its "reality." Furthermore, we do not think that adding more results of the same kind to the existing material would change the situation with respect to its broader acceptance. Such results will not be accepted in the corpus of serious scientific knowledge unless plausible concepts concerning the context under which they occur can be presented. For these reasons there is no particular value in an analysis of the overall evidence for deviations from dynamical randomness for the full sample of 16 agents with three modes of intention each. Our research strategy focuses on a step-by-step procedure oriented toward understanding the origin of and the boundary conditions for those deviations for which the applied analysis indicates evidence. If at all, this can be done most effectively with those agents who seem to be capable of generating them.

Acknowledgments
We are grateful to Werner Ehm, Jürgen Kurths, and Gerda Wiedenmann for helpful comments.

References
Atmanspacher, H., Bosch, H., Boller, E., Scheingraber, H., & Nelson, R. D. (1999). Deviations from physical randomness due to human agent intention? Chaos, Solitons, & Fractals, 10, 935-952.
Atmanspacher, H., Rath, C., & Wiedenmann, G. (1997). Statistics and meta-statistics in the concept of complexity. Physica A, 234, 819-829.


Atmanspacher, H., Scheingraber, H., & Wiedenmann, G. (1989). Determination of f(α) for a limited random point set. Physical Review A, 40, 3954-3963.
Atmanspacher, H., Wiedenmann, G., & Amann, A. (1995). Descartes revisited: the endo/exo-distinction and its relevance for the study of complex systems. Complexity, 1(3), 15-21.
Bradish, G. J. (1993). PEAR portable REG prescription for operation. Technical Note of June 15, 1993.
Deco, G., Schittenkopf, C., & Schürmann, B. (1997). Dynamical analysis of time series by statistical tests. International Journal of Bifurcations and Chaos, 7, 2629-2652.
Delanoy, D. L. (1996). Experimental evidence suggestive of anomalous consciousness interactions. In Biomedical and Life Physics, ed. by N. Ghista. Braunschweig: Vieweg, 397-410.
Doob, J. L. (1953). Stochastic processes. New York: Wiley.
Feller, W. (1950). An introduction to probability theory and its applications, Vol. 1. New York: Wiley.
Gaspard, P., & Wang, X. J. (1993). Noise, chaos, and (ε, τ)-entropy per unit time. Physics Reports, 235, 291-343.
Grassberger, P., Badii, R., & Politi, A. (1988). Scaling laws for invariant measures on hyperbolic and nonhyperbolic attractors. Journal of Statistical Physics, 51, 135-178.
Halsey, T. C., Jensen, M. H., Kadanoff, L. P., Procaccia, I., & Shraiman, B. I. (1986). Fractal measures and their singularities: the characterization of strange sets. Physical Review A, 33, 1141-1151.
Jahn, R. G., Dunne, B. J., Nelson, R. D., Dobyns, Y. H., & Bradish, G. J. (1997). Correlations of random binary sequences with pre-stated operator intention: a review of a 12-year program. Journal of Scientific Exploration, 11, 345-367.
Kantz, H., Kurths, J., & Mayer-Kress, G., eds. (1998). Nonlinear analysis of physiological data. Berlin: Springer.
Kantz, H., & Schreiber, T. (1997). Nonlinear time series analysis. Cambridge: Cambridge University Press.
Kurths, J., Voss, A., Saparin, P., Witt, A., Kleiner, H. J., & Wessel, N. (1995). Quantitative analysis of heart rate variability. CHAOS, 5, 88-94.
von Lucadou, W. (1986). Experimentelle Untersuchungen zur Beeinflußbarkeit von stochastischen quantenphysikalischen Systemen durch den Beobachter. Frankfurt: Herchen, p. 225.
Mandelbrot, B. B. (1974). Intermittent turbulence in self-similar cascades: divergence of high moments and dimension of the carrier. Journal of Fluid Mechanics, 62, 331-358.
Morfill, G. E., Demmel, V., & Schmidt, G. (1994). Der plötzliche Herztod. Neue Erkenntnisse durch die Anwendung komplexer Diagnoseverfahren. Bioscope, 2/94, 11-19.
Nelson, R. D., Bradish, G. J., & Dobyns, Y. H. (1989). Random event generator qualification, calibration, and analysis. Technical Note PEAR 89001, April 1989.
Packard, N. H., Crutchfield, J. P., Farmer, J. D., & Shaw, R. S. (1980). Geometry from a time series. Physical Review Letters, 45, 712-716.
Paladin, G., & Vulpiani, A. (1987). Anomalous scaling laws in multifractal objects. Physics Reports, 156, 147-225.
Priestley, M. B. (1988). Non-linear and non-stationary time series analysis. London: Academic Press.
Rath, C., & Morfill, G. (1997). Texture detection and texture discrimination with anisotropic scaling indices. Journal of the Optical Society of America A, 14, 3208-3215.
Rosenblum, M. G., Pikovsky, A. S., & Kurths, J. (1996). Phase synchronization of chaotic oscillators. Physical Review Letters, 76, 1804-1807.
Schmidt, H. (1970). A PK test with electronic equipment. Journal of Parapsychology, 34, 175-181.
Schreiber, T. (1999). Interdisciplinary application of nonlinear time series methods. Physics Reports, 308, 1-64.
Siffling, R. (1996). Phasenraumbetrachtungen bei synchronen Zeitreihen. Diploma thesis, Universität München.
Theiler, J., Eubank, S., Longtin, A., Galdrikian, B., & Farmer, J. D. (1992). Testing for nonlinearity in time series: the method of surrogate data. Physica D, 58, 77-94.
Tong, H. (1990). Non-linear time series analysis. Oxford: Oxford University Press.
Utts, J. (1991). Replication and meta-analysis in parapsychology. Statistical Science, 6, 363-403.
Wackerbauer, R., Witt, A., Atmanspacher, H., Kurths, J., & Scheingraber, H. (1994). A comparative classification of complexity measures. Chaos, Solitons, & Fractals, 4, 133-173.


Wiedenmann, G., Scheingraber, H., & Voges, W. (1997). Source detection with the scaling index method. In Data Analysis in Astronomy 5, eds. di Gesù, V. et al. Singapore: World Scientific, 203-211.
Young, K., & Crutchfield, J. P. (1994). Fluctuation spectroscopy. Chaos, Solitons, & Fractals, 4, 5-39.

Journal of Scientific Exploration, Vol. 14, No. 1, pp. 19-33, 2000

0892-3310/00 © 2000 Society for Scientific Exploration

Valentich Disappearance: New Evidence and a New Conclusion

Richard F. Haines
325 Langton Avenue, Los Altos, CA 94022

Paul Norman
Victorian UFO Research Society, P.O. Box 43, Moorabbin, 3189 Australia

Abstract-This paper presents new evidence regarding the now-famous disappearance of Frederick Valentich, who was flying a Cessna airplane on the evening of October 21, 1978, somewhere near Cape Otway SW of Melbourne. The testimony of three witnesses is given, each of whom claims to have seen an airplane descending at a steep angle with a much larger object with green lights flying just above it. A plot of the most probable flight path is also included. Based on this new evidence, taken in conjunction with the pilot's own in-flight reporting of sighting events, we conclude that there is sufficient evidence to suggest that Valentich's airplane probably crashed into the sea SE of Cape Marengo between 3 and 12 miles offshore. The nature of the large object with green lights that accompanied the airplane during its steep descent remains to be identified.

Keywords: pilot disappearance - accident analysis - UFO - crash investigation

Introduction
The in-flight disappearance of Frederick Valentich over Bass Strait, Australia, on October 21, 1978, has become one of the most well-publicized mysteries of aviation since Amelia Earhart disappeared on July 3, 1937. Accounts of this tragic event may be found elsewhere (International UFO Reporter, 1978; Bass Strait mystery, 1979; Haines, 1987; Norman, 1979; Pinkney and Ryzman, 1980; Valentich, 1980). Despite the coordinated efforts of private pilots and the Australian government's search-and-rescue airplanes immediately following the event, no trace of Cessna DSJ (its registration letters: "Delta Sierra Juliet") of any kind was ever found. What has made this event such a perennial and popular mystery is the existence of an air-to-ground radio (voice) transmission between young Valentich and a flight service specialist, Steve Robey, who was working at Melbourne International's "Tullamarine" airport at the time of the disappearance. Other pilots overheard this transmission and, because of intense and immediate pressure on the civil aviation authorities, the Department of Transport


(DOT) released a printed transcript of the conversation long before the official accident report was issued. The authors have also listened carefully to this tape recording; the eyewitness description and other sounds therein should be of interest to those who are truly interested in UFO phenomena (Haines, 1981). A detailed account of the entire event is found elsewhere (Haines, 1987). There is nothing in this 13-minute audio tape that contradicts the new evidence presented below. Other than a short article published in Australia (Norman, 1991), there has been no new evidence that relates directly to this reported aerial encounter and subsequent aircraft disappearance. Aircraft pacing and other forms of reported interference with airplanes by unusual, nonaerodynamically shaped objects are not exceptional. One of the authors (R.H.) has compiled the following list of such events for the general period 1948-1989: 55 cases involving airplane pacing, 15 cases in which the aerial object completely circled the airplane one or more times, 12 cases in which the object suddenly disappeared from the pilots' sight, 22 cases involving a head-on approach to the airplane and a near-miss by objects that did not appear to be airplanes, and scores of incidents in which on-board electromagnetic hardware was affected only when the UFO was nearby (Haines, 1992; Sturrock et al., 1998).

Selected Background Information
Pilot Frederick Valentich, 20, made arrangements with Southern Air Service, located at the Moorabbin airfield SSW of Melbourne city center, to rent a Cessna 182L model, single-engine, propeller-driven airplane for his night flight. He submitted his flight plan to the briefing officer at the airfield at 5:20 p.m. and finally took off alone at 6:19 p.m. for what was to be a "full-reporting" flight. This means that he was supposed to check in by radio with flight service personnel at certain defined checkpoints for safety reasons. His destination was King Island, about halfway between the Australian mainland and the tip of Tasmania (see lower left inset in Figure 1). Flying at 120 miles per hour (neglecting wind effects), the journey from Cape Otway to the nearest point of land on King Island would be about 48 miles (24 minutes of flight) flying at 4,500 feet altitude. The sun would set at 6:48 p.m.; but it was almost 7:00 p.m. when Valentich finally reached his designated (radio) reporting point near Cape Otway. This conclusion is based on a complete flight path reconstruction, including prevailing wind conditions. His radio call at 9:00:29 stated, "Melbourne, Delta Sierra Juliet. (Now at) Cape Otway, descending for King Island." He was right on time. (Note the 2-hour time difference between local and GMT used in the official transcript. We will use GMT for the remainder of this paper). According to his flight plan, Valentich planned to climb to at least 4,500 feet altitude for his water crossing (for safety and visibility reasons). We assume


he did so. Eyewitnesses observed his blue and white Cessna from the resort town of Apollo Bay as it flew SW over the water at an unspecified distance. Several local pilots have pointed out that it is normal procedure to "cut the corner" at the cape when flying to King Island (i.e., not to fly all the way to Point Franklin, Crayfish Bay, or the lighthouse itself before turning left for King Island; Figure 1). Valentich had flown this same route in the past, and presumably, he cut the corner on this flight as well. Doing so would shorten his trip by about 6 miles, saving both time and fuel. Indeed, Norman (1991) interviewed fishermen who had camped along the Parker River (south of Point Lewis; Figure 1) that night. They apparently saw the Cessna make this turn

Fig. 1. Enlarged scale chart of region from Apollo Bay to Cape Otway. (Chart annotations include the probable flight path, shown as a dashed line; an assumed altitude of 3,000 feet; the 9:03 position; the last radio contact; the point at which the engine begins to lose power, 9:09:00; and the probable location of initial visual contact. Times refer to the voice transcript.)


about 3 to 4 miles ENE of the Cape Otway lighthouse. The eastern sky was now dark, although the western sky still possessed some orange glow from sunset. The scattered ground lights visible off Valentich's right side likely helped him maintain his general flight path direction up to this point. After changing his heading to the left, he probably continued on out over Bass Strait toward the nondirectional beacon (NDB; inset, Figure 1) on King Island (a magnetic heading of 154.5°). Flying at 4,500 feet altitude and between 110 and 120 miles per hour (there was a tail wind of about ten knots out of the NW), he reported by radio to Steve Robey, who was handling this particular air sector that evening, that he saw "a large aircraft below 5,000" (feet altitude). The time was exactly 9:06:14 according to the official transcript of this interchange. Table 1 presents the DOT voice transcript as it may possibly relate to the new evidence presented here. The key for Table 1 defines the speech timing and inflection symbols that were added based on a detailed analysis of the original voice tape by R.H.

In the years following this event, one of the authors (P.N.) succeeded in locating and interviewing a number of people traveling or living in the region along Great Ocean Road, which runs north and south through Apollo Bay. Reports were obtained from 20 eyewitnesses in this region, describing an erratically moving green light in the sky at that same time of evening as Valentich's flight. In addition, P.N. learned of three primary eyewitnesses who shed valuable new light on this event. Their testimony is recounted here. They saw both the lights of a small aircraft and a very large green light traveling directly above it. The primary witness, Mr. Ken Hansen (pseudonym), who was 47 years old at the time, told his wife of what he and his two nieces had just seen on their way home, but she laughed at his story. The following morning at work he told his fellow employees, who believed what he said about seeing the airplane, but not about the large green object flying above it, the details of which are given below. Of course, at this early date, he could not have known anything about Valentich's description of a green light flying near him. Hansen decided to drop the subject to avoid further ridicule. Years later, he happened to discuss his sighting with a local policeman, who later mentioned the story to Guido Valentich, father of the missing pilot. Guido Valentich then told author P.N., who interviewed Hansen and his two nieces. Both girls gave the same basic details as their uncle.

Site Visit to Apollo Bay

During a visit to the area between Cape Otway and the resort town of Apollo Bay on March 17, 1998, both authors had an opportunity to meet Mr. Ken Hansen (pseudonym), who was then age 67. Hansen lives in the resort town of Apollo Bay. As he had told author P.N. in 1991, he said that he had seen, with his two nieces, an odd aerial event the same night that Valentich had disappeared. We asked if he would take us to his original observation site so that we

TABLE 1
Official Voice Transcript Between Flight Service (FS) and the Cessna Aircraft (DSJ)

DSJ to FS: Melbourne, this is Delta Sierra Juliet. Is there any known traffic below five thousand?'
FS to DSJ: Delta Sierra Juliet--No known traffic.
DSJ to FS: Delta Sierra Juliet. I am-seems (to) be a large aircraft below 5,000.
FS to DSJ: Delta Sierra Juliet-What type of aircraft is it?
DSJ to FS: Delta Sierra Juliet-I cannot affirm. It is four bright... it seems to me like landing lights.
FS to DSJ: Delta Sierra Juliet. [This statement affirms to the pilot that the person on the ground heard his transmission.]
DSJ to FS: Melbourne, this (is) Delta Sierra Juliet. The aircraft has just passed over me at least a thousand feet above.
FS to DSJ: Delta Sierra Juliet-Roger-and it, it is a large aircraft-confirm?
DSJ to FS: Er, unknown due to the speed it's travelling... is there any airforce aircraft in the vicinity?'
FS to DSJ: Delta Sierra Juliet. No known aircraft in the vicinity.
DSJ to FS: Melbourne... it's approaching now from due east-towards me.
FS to DSJ: Delta Sierra Juliet.
DSJ to FS: //Open microphone for two seconds//
DSJ to FS: Delta Sierra Juliet. It seems to me that he's playing some sort of game.'-He's flying over me two-three times at a time at speeds I could not identify.'
FS to DSJ: Delta Sierra Juliet-Roger. What is your actual level?
DSJ to FS: My level is four and a half thousand, four five zero zero.
FS to DSJ: Delta Sierra Juliet... And confirm-you cannot identify the aircraft.
DSJ to FS: Affirmative.'
FS to DSJ: Delta Sierra Juliet-Roger... standby.
DSJ to FS: Melbourne-Delta Sierra Juliet. It's not an aircraft'... it is //open microphone for two seconds// [This duration measured as three seconds. No information appears to have been removed from the tape.]
FS to DSJ: Delta Sierra Juliet-Melbourne. Can you describe the... er-aircraft?
DSJ to FS: Delta Sierra Juliet... as it's flying past it's a long shape' //open microphone for three seconds// (cannot) identify more than that. It has such speed //open microphone for three seconds//. It is before me right now Melbourne.'
FS to DSJ: Delta Sierra Juliet-Roger. And how large would the-er-object be?
DSJ to FS: Delta Sierra Juliet-Melbourne. It seems like it's (stationary). [Author R.H. has determined that this word should be "chasing me" based on special filtering.] What I'm doing right now is orbiting, and the thing is just orbiting on top of me also'... It's got a green light,' and sort of metallic (like). It's all shiny (on) the outside.
FS to DSJ: Delta Sierra Juliet.
DSJ to FS: Delta Sierra Juliet //open microphone for 5 seconds// [measured as 3 seconds] It's just vanished.'
FS to DSJ: Delta Sierra Juliet.
DSJ to FS: Melbourne would you know what kind of aircraft I've got?' It is (a type) military aircraft?'
FS to DSJ: Delta Sierra Juliet. Confirm the... er-aircraft just vanished.
DSJ to FS: Say again.
FS to DSJ: Delta Sierra Juliet. Is the aircraft still with you?'
DSJ to FS: Delta Sierra Juliet... It's ah... Nor //open microphone for two seconds// (now) approaching from the southwest.
FS to DSJ: Delta Sierra Juliet.
DSJ to FS: Delta Sierra Juliet-The engine is, is rough idling.-I've got it set at twenty three-twenty four... and the thing is-coughing.
FS to DSJ: Delta Sierra Juliet-Roger. What are your intentions?
DSJ to FS: My intentions are-ah... to go to King Island-Ah, Melbourne, that strange aircraft is hovering on top of me again //open microphone for two seconds// it is hovering and it's not an aircraft.
FS to DSJ: Delta Sierra Juliet.
DSJ to FS: Delta Sierra Juliet-Melbourne //open microphone for 17 seconds// [A very strange pulsed noise is also audible during this transmission.]
FS to DSJ: Delta Sierra Juliet, Melbourne.

End of official DOT transcript

Note: - = a normal pause in communications (based on the first author's flying experience); ... = a longer than normal pause (i.e., several seconds); ' = an upward ending voice inflection (such as an interrogative question); ` = a downward voice inflection. Parentheses ( ) enclose words that are open to interpretation because they are not clearly audible. Brackets [ ] enclose the authors' comments.


might reconstruct each step of his sighting. He gladly agreed to do so, during which time he gave us the following information. Sighting details obtained from Mr. Hansen. Mr. Hansen and his two nieces had been shooting rabbits in the late afternoon of October 21, 1978, in the hills about 2 km west of Apollo Bay in the direction of Marriners Falls. He said that it was dusk, but he could not recall the exact time. They were in his four-wheel-drive vehicle driving east on Barham Valley Road toward his home on the southern outskirts of the town. Figure 2 shows an enlarged scale drawing of the road on which they were travelling when they sighted the lights in the sky. Hansen was driving (in the left front seat), and one niece, Tracy, was sitting in the right front seat. His other niece was in the back right seat. Tracy first sighted colored lights in the sky on their right side. The automobile was travelling about 30 miles per hour at the time in the left lane. Suddenly, she said, "What is that light in the sky?" Point A of Figure 2 shows their location at this time. As the automobile continued, Hansen craned his neck to look out the right side window in the direction that she was pointing. He caught sight of some lights and said to her, "Those are only the lights of an airplane." "No," she replied, "I mean that other large green light above it!" He drove on and then turned to look again some 10 to 15 seconds later. At that point, he also was able to make out two separate sets of lights in the clear but darkening sky. They were now near point B in Figure 2. They continued down the road, although Mr. Hansen was now slowing down because of the left turn ahead and because he wanted to better see the strange set of aerial lights. Mr. and Mrs. Hansen live near a small airstrip located just south of Apollo Bay, and he is knowledgeable about aircraft and the appearance of their lights at night. He noted clearly the familiar lights of a small airplane (white navigation light; red wingtip light) that were visible. He told us that these colored lights on the aircraft

Fig. 2. Sighting area of Mr. Hansen and his nieces, SW of Apollo Bay, Australia (north is up). (Annotations mark where the witness stood and the 200-deg (right-hand photo) and 160-deg (center photo) viewing directions.)


were separated by about the same angle that is subtended by a marble (0.65 inches) held at arm's length (approximately 22 inches from the eye), or about a 1.7° arc. Both aerial objects had passed through a 30° arc toward the east during this initial sighting interval, which lasted about 28 seconds. Not wanting to stop on the small bridge crossing Barham River, he drove on at about 20 miles per hour and finally decelerated to zero at point C of Figure 2. The car's measured transit time from point B to point C was no more than 45 seconds. Although it is not uncommon to see the lights of small airplanes in the vicinity, the presence of the large green light was so unusual that Hansen decided to pull over, stop, and get out of his automobile. He said that when he did so, he clearly saw a second, large, greenish, circular light "like it was riding on top of the airplane." Its angular diameter was equivalent to that of a tennis ball held at arm's length (approximately a 6.8° arc), for an angular ratio for the two objects of about one to four. Its color was similar to the navigation lights on an airplane. He also said that it kept a constant distance above and slightly behind the airplane's lights at all times. He stood watching for another 15 to 20 seconds until both lights disappeared from sight. Thus, the entire sighting from point A to point C-3 lasted about 93 seconds. Figure 3 is a photo-collage taken by author R.H. at the three locations along Barham Valley Road referred to in Figure 2. Mr. Hansen said that it was so dark, he was barely able to make out the tops of the trees and hill against the southeast sky. Both the airplane and accompanying light (which appeared to fly in parallel with the Cessna) seemed to descend at an apparent 30 to 40° angle (measured from the horizontal) along a straight line approximately as shown by the dashed line in Figure 3. Both lights eventually disappeared behind the hilltop at a magnetic bearing of about 126° from Hansen's location (left section of Figure 3). No sounds were heard coming from the direction of the lights at any

Fig. 3. Three contiguous photographs of the sighting area (from 126 to 200° magnetic bearing).


time during this sighting. The witnesses never saw the airplane strike the ground or the sea. Estimating Airplane Position Although there are too many unknowns to calculate a definitive flight path, we felt that some attempt should be made to estimate the position of the Cessna if it had continued downward on a relatively linear path, as described by the three witnesses. One difficulty in this regard arises from the possibility that the airplane and the accompanying light may not have been flying in a plane of travel normal to the line of sight but obliquely toward or away from the witness's location to some degree. If this was the case, then even a level flight path could appear to descend toward the distant horizon when viewed from the ground. This well-known optical illusion would make it appear as if the airplane was descending when it was not. Of course, there is no way to test this possibility in regard to Valentich's flight. If the plane of travel was not normal to the witness's line of sight and the airplane was descending, then the location of the "splash point" could extend over a wide range of angles and distances or may not have occurred at all. All that is known for certain is that the airplane and accompanying green light traveled somewhere within the arc defined by the two lines C-200 and C-126 in Figure 1. Other difficulties have to do with the accuracy of the perception of temporal duration itself and memory accuracy long after an event. As Hawkinds and Meyer (1965) found, most people tend to underestimate duration when they personally attend to a task and overestimate duration if they were not personally involved in a task. Although individual differences make it difficult to apply these findings to this case specifically, it is likely that Mr. Hansen underestimated the total duration of his sighting to some degree. In spite of the above difficulties, we attempted, in each of the following sections, to estimate what might have taken place using both the eyewitnesses' testimony and Valentich's in-flight DOT narrative. Our objective was to try to establish the most likely "splash down" point of the airplane in Bass Strait, assuming that the airplane continued to descend along a linear path after it disappeared from the view of the three witnesses. Probable flight path using the witness's observing time estimates. If the small airplane Mr. Hansen and the girls saw was traveling at 100 miles per hour and was seen for a total of 93 seconds (between points A and C-3 in Figure 2), it would have traveled a distance of 2.58 miles. Indeed the splash point would be only 1.2 miles off the shoreline. This linear distance is plotted on Figure 1 as line A-N near Cape Marengo. If the Cessna had been flying more slowly, say 80 miles per hour, it would have traveled a total distance of only 2.1 miles, placing it even nearer to the shoreline. Although there were many tourists in town at the time and the spring weather was relatively warm, clear, and calm, only the three witnesses reported seeing anything at that time of the

evening. For these reasons, the above position estimates appear to be too close to the shore.

Estimating the distance between the eyewitnesses and the airplane based on the airplane's subtended visual angle (from memory). We then attempted to estimate how far Hansen actually was from the airplane using his memory estimate of the angular size of its lights. The linear distance between the red wingtip light and the white tail light on a Cessna 182L airplane, viewed from the side, is about 12 feet, and this was said to be equivalent to the angle subtended by a marble held at arm's length. Therefore, the calculated distance to the airplane would be only 404 feet. This distance is clearly in error for several reasons: (a) a large-scale topographic chart of the area shows that the hills seen in the left part of Figure 3, behind which the airplane allegedly was seen to disappear, are about 3,000 feet away; and (b) engine sounds from a small airplane would have been heard at such close range, regardless of wind velocity and direction, yet no sound was heard. It is more likely that his use of a "marble" as a reference object is too large. Valentich's engine was running at this time and is heard on the voice tape. Assuming this angular estimate is 50% too large, we are left with a subtended angle of a 51 min arc between the two airplane lights and a calculated separation distance from point C (Figure 2) to the airplane of just over 800 feet, which is still too small a value for the same reasons discussed above. Indeed, if the airplane had disappeared just behind the indicated hill and had not leveled out, it would have impacted the ground and not the ocean. Its wreckage would have been found immediately. Thus, Mr. Hansen's recollection of the angular size of the airplane's lights is too large by perhaps several orders of magnitude. What might the maximum distance be between the witness and the airplane? Another estimate can be made by knowing the distance acuity for unaided vision by someone who does not wear prescription lenses. This is reasonable because Mr. Hansen did not need to wear corrective eyeglasses at the time.

Estimated distance between the eyewitnesses and the airplane based on normal visual acuity. The human visual system can correctly discriminate two point lights at night as being separate at very small angles (0.3 min arc or less; Haines, 1980). If we use this lower visual acuity threshold for the above calculations (i.e., the angular separation between the two colored lights on the airplane that can be correctly perceived as separate), we find a practical maximum separation distance between the witness at point C (Figures 1 and 2) and the airplane of 137,457 feet (26 miles). This maximum distance estimate is far too large, considering the relatively short amount of time the lights were in view and the impossibly high velocity the Cessna would have had to fly to cover this total visual angle.

Estimating airplane altitude and range at disappearance. The Cessna disappeared behind low hills to the SE of the witnesses, which were about 180 feet above sea level. Site measurements indicated that these hills ranged from 5-10° arc above the local horizontal. If the Cessna were 26 miles away, it would have been at an altitude of either 12,027 or 24,233 feet, re-


spectively. There is little reason to accept either value as correct for several reasons. First, other witnesses saw the small plane pass overhead earlier toward the south at an altitude of no more than 5,000 feet. Second, Valentich himself indicated that his altitude at 9:09:06 was 4,500 feet (less than 4 minutes before his final disappearance); indeed, this airplane could not have climbed fast enough to reach such altitudes in the available amount of time. Finally, the eyewitnesses' total viewing duration of about 93 seconds was much too short to account for an airplane flying at this large a distance and altitude. In short, the theoretical, maximum distance to the airplane of 26 miles is, again, far too large.

Estimating distance to airplane by its assumed altitude. If the Cessna was at an altitude of 4,500 feet when it disappeared below the line of hills south of Apollo Bay (along line C-3 in Figure 2) and these hills were about 180 feet high (determined from the topographic chart), then the horizontal distance to the airplane would have had to be about 14 miles. This point would lie along an extension of line C-126° (Figure 1). Interestingly, Mr. Hansen's estimate of the distance to the airplane was from 10 to 12 miles. We can assume that a lower initial aircraft altitude reduces this distance. Because the eyewitnesses saw the airplane descending at a fairly rapid rate, let us assume it was at an altitude of only 1,000 feet when it disappeared behind these same hills; this yields a horizontal range of only about 2.5 miles from the witnesses (point N on Figure 1). Point N is only about 1 mile farther from the shoreline than the extension of point C-3 (discussed above).

Viewing duration, airplane velocity, and distance traveled. The following time and distance estimates are based on the eyewitness testimonies and lie between viewing lines B-1 and C-3 of Figure 2. Assuming that the airplane and the strange light were (a) flying in a plane normal to the line of sight, (b) flying at 100 miles per hour (8,800 ft/min), and (c) viewed for 65 seconds, they would have traveled a distance of 9,504 feet, or 1.8 miles. Next, assuming that the airplane was at an altitude of only 2,000 feet when first seen by Mr. Hansen (from point B, Figure 2) and descended at a constant 30° angle, it would have descended the 4,000-foot glide path distance to the ground in only 27.3 seconds. The airplane was viewed for a significantly longer period than this (more than two times longer) before it disappeared. One or more of the following factors may explain this anomaly: The assumed descent angle is too steep; the velocity of the airplane was less than 100 miles per hour; its altitude, distance, or both is in error; the airplane was not flying in a plane that is normal to the line of sight; or the airplane leveled out after it disappeared behind the foreground hills. For example, if we repeat this calculation using a smaller descent angle of 20°, a speed of 80 mph, and an initial altitude of 2,000 feet, the airplane would travel the 5,847-foot-long flight path to the ground in about 50 seconds, which is more nearly equivalent to what was described by the eyewitnesses. In summary, the authors are more inclined to accept the nearer distance estimates (i.e., 3 miles to sea) than the farther distance estimates (26 miles) of

the airplane's final disappearance point into the sea because they are more in line with the eyewitnesses' temporal estimates than with the angular estimates. The minimum-controlled-flight (stall) speed for this model Cessna is 48 knots with no flaps, zero bank angle, and center of gravity in its most forward position. Traveling at this velocity for the total viewing duration of 93 seconds at a descent angle of 30° along line B-X (Figure 1) yields a distance traveled of only 1.43 miles (to the point of aircraft disappearance behind the distant hills). This distance is clearly too small (Figure 1). The descent angle that yields a flight path length that is most compatible with all of the above facts (2,000 feet initial altitude) is between 5 and 10°. A descent angle of 10° yields a distance of 11,517 feet to the surface of the sea. Traveling at 48 knots, the Cessna would require 142 seconds to travel this distance. A descent angle of 5° yields a distance of 22,946 feet and a flight time of 4 minutes, 43 seconds, to sea impact. Yet another estimate of airplane flight path can be derived from knowledge of its engine-off glide path ratio, which is between 7:1 and 8:1, yielding descent angles of 8° and 7.4° below the horizontal, respectively. Here, the airplane would glide (at 7:1) the 2.73 miles to the earth's surface from a starting altitude of 2,000 feet in 2.4 minutes, assuming a speed of 60 knots. The corresponding glide distance (at 8:1) is 2.94 miles in 2.6 minutes; both duration values are reasonable. Nonetheless, engine sounds can be heard in the background of the audio tape throughout this period, which suggests a higher forward velocity than 60 knots.

The Valentich audio tape transcript. At 9:09:52, Valentich stated that the unknown aerial object near his airplane appeared to be "a long shape." At 9:10:20, he said, "... it's got a green light and sort of metallic like, it's all shiny on the outside." Then, almost two minutes later, at 9:12:09, he said, "My intentions are-ah... to go to King Island-ah, Melbourne. That strange aircraft is hovering on top of me again //open microphone for two seconds// it is hovering-and it's not an aircraft." These were the last words ever heard from the young pilot according to the audio tape. Note that these descriptions by Valentich correspond in color and general size with the testimony of the primary eyewitness on the ground near Apollo Bay. Significantly, the signal strength and audio quality of Valentich's radio transmission did not change at any time during the entire tape, indicating that his altitude remained at or above approximately 3,000 feet. Line-of-sight transmission to Melbourne is blocked below this approximate altitude at this distance.

Estimating UFO size. It is reasonable to assume, on the basis of psychophysical research data, that Mr. Hansen's angular estimates were basically accurate in comparing the sizes of the two aerial objects because they were seen side by side at the same time. Thus, the UFO's apparent angular diameter was about 4 times larger than the distance between the airplane's two external lights. Using this ratio and the known dimension separating the two airplane lights (12 feet), we find that the


UFO would be about 48 feet across, assuming it was at the same distance as the airplane.

A hypothetical aircraft flight path. The dashed line in Figure 1 presents one possible flight path for Cessna DSJ, which is consistent with the voice transcript. (Unfortunately, the transcript does not contain any references to particular spatial locations after 9:00:29, and even this location is not known exactly.) Tick marks are approximately 1 minute apart (assuming an airspeed of 110 miles per hour and taking the wind differential into account). The main objective of this flight path reconstruction is (a) to bring the aircraft's position into correspondence with where Mr. Hansen and the girls said they saw it located and (b) to identify a general "splash down" area in Bass Strait from which search operations should commence. This flight path reconstruction is based on the possibility that at about 9:06:30, Valentich either became disoriented and frightened and banked back toward the mainland for reasons of safety, or the presence of the unidentified aerial object somehow affected his compass so that he thought he was continuing on to his original destination. A number of other magnetic compass interference cases in aircraft have been reported (Haines, 1992; Sturrock et al., 1998). We assume the following:

1. The Cessna began to descend at about 9:10:30, shortly after Valentich began to fly in circles ("orbit"). His engine began to malfunction at 9:11:52 (engine trouble is audible on the audio tape).
2. The airplane continued to descend at approximately 500 feet per minute so that it was at 2,000 feet altitude upon reaching point B (Figure 1).
3. Radio transmission ceased after 9:12:45 because of progressive line-of-sight signal loss caused by the earth's curvature.

Of course, there is no way to determine the accuracy of this hypothetical flight path. A small dark oval (UFO) object with a dashed trail is also drawn in at various locations on Figure 1 in accordance with Valentich's description. It is obvious that his aircraft was the focus of attention of this strange object.
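The distance, descent-path, and timing figures quoted throughout this section follow from simple line-of-sight trigonometry. The short sketch below reproduces a few of them; the speeds, angles, and altitudes are the illustrative values assumed in the text, the flat-terrain, straight-path geometry is an idealization, and the function names are ours rather than part of any published analysis.

```python
import math

# Distance to an object from the linear size of a feature and the visual
# angle it subtends (flat-terrain, straight-line geometry).
def distance_from_angular_size(size_ft: float, angle_deg: float) -> float:
    return size_ft / (2.0 * math.tan(math.radians(angle_deg) / 2.0))

# Length of a straight descent path from a given altitude at a given
# descent angle below the horizontal.
def glide_path_length(altitude_ft: float, descent_angle_deg: float) -> float:
    return altitude_ft / math.sin(math.radians(descent_angle_deg))

def seconds_to_fly(path_ft: float, speed_ft_per_s: float) -> float:
    return path_ft / speed_ft_per_s

MPH_TO_FPS = 5280.0 / 3600.0      # 1 mph  = 1.467 ft/s
KNOTS_TO_FPS = 6076.12 / 3600.0   # 1 knot = 1.688 ft/s

if __name__ == "__main__":
    # The 12-ft light separation seen at the recalled 1.7 deg angle -> ~404 ft.
    print(round(distance_from_angular_size(12.0, 1.7)))
    # A 30-deg descent from 2,000 ft at 100 mph -> a 4,000-ft path in ~27 s.
    path = glide_path_length(2000.0, 30.0)
    print(round(path), round(seconds_to_fly(path, 100 * MPH_TO_FPS), 1))
    # A 10-deg descent from 2,000 ft at the 48-knot stall speed -> ~142 s.
    path = glide_path_length(2000.0, 10.0)
    print(round(seconds_to_fly(path, 48 * KNOTS_TO_FPS)))
```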

Summary
Based on what is already known about his flight plan and what can be learned from the new eyewitness evidence, we conclude that Frederick Valentich likely crashed into Bass Strait. The ground witness testimony places the airplane's approximate flight path somewhere within the arc defined by the lines C-200° and C-126° in Figure 1, ESE of Cape Marengo. The most likely range of distances from the witnesses is from 3 to 12 miles, as discussed above. Consider the following lines of evidence.

First, no wreckage of Valentich's airplane has been found. If he had crashed


on land, search-and-rescue personnel would have found crash debris in the 20 years following the disappearance. Locating a crash at sea is a far more difficult task. Second, Valentich's airplane was seen flying in a southerly direction east of Apollo Bay around 9:00 p.m. (based on certain reasonable assumptions). Third, Valentich was clearly disoriented by 9:10 p.m. at the latest and probably earlier. Many pilots will not admit this to the authorities for fear of pending medical investigations that might be required, which would put their flying career in jeopardy. Fourth, the Cessna could have flown a distance of 27.5 miles at 110 miles per hour during these 15 minutes. Of course, the main question is, in which direction was Valentich flying? Fifth, Valentich was flying in circles by 9:10:20 (and possibly earlier) and admitted to being confused about the relative magnetic bearings to the UFO by 9:11:23. Clearly, he did not know where he was at that point. It is possible that he could have flown back toward the mainland at some time after 9:06, either deliberately or by mistake. Perhaps he was somehow captivated by the strange object he saw flying near his airplane. Sixth, at 9:12:09, Valentich reported that the strange aircraft was "... hovering on top of (him) again ... and (was) not an aircraft." Is this what Mr. Hansen and his nieces witnessed from the ground minutes later? If so, then the Cessna was descending within the area shown in Figure 1 and may have impacted the water somewhere within the dashed area. Underwater search activities should begin in this region of Bass Strait.

According to an Australian Marine Research report (Ocean Currents, 1997), Bass Strait is a shallow continental shelf with an average depth of 50 to 70 m. Tide and wind action results in the mixing of the Bass Strait and the Tasman Sea, causing the saltier, colder (1-3°C) surface waters to sink (downwelling) and fall, much like a waterfall down the continental shelf slope, "beginning midway between Flinders Island and the Victorian coast and extending north almost to Jervis Bay" (Ocean Currents, p. 5). The Bass Strait Cascade pours toward the east. Tides in Bass Strait "originate from the tidal wave traveling southward down the east coast of Australia. As the wave passes the eastern entrance of Bass Strait, some of its water is deflected into it, slowing down to 80 km per hour in the shallower water. The rest of the wave continues at high speed around Tasmania in a clockwise direction to reach the western entrance to Bass Strait some 3 hours later. The wave front entering from the west meets the wave front entering from the east, causing large tides along a north-south line in the middle of Bass Strait." Because of the velocity and force of these currents, it is likely that underwater debris may be carried a long distance. The relatively low mass aluminum structure of Valentich's Cessna airplane would not sink quickly, nor would it dig into the bottom surface very far as would an anchor or the hull of a heavy ship. It might be possible to locate a particular area where such debris would accumulate over time. Computer simulations should be run to develop estimates of the debris field on the sea bottom, given tides and currents in the region.


We may never know exactly what happened to Frederick Valentich. Nevertheless, an attempt should be made to locate the airplane. An underwater search should be mounted, despite the 20 years that have elapsed since the event took place.

References
The Bass Strait mystery. (1979, February). Australian UFO Bulletin, pp. 7-11.
Foreign forum. (1978, December). International UFO Reporter, 3, 2-10.
Haines, R. F. (1980). Observing UFOs. Chicago: Nelson-Hall.
Haines, R. F. (1981). Results of sound spectrum analysis of a tape recorded radio transmission between Cessna VH-DSJ and Melbourne Flight Service: I. The metallic noises. The Journal of UFO Studies, 3, 14-23.
Haines, R. F. (1987). Melbourne episode: Case study of a missing pilot. Los Altos, CA: LDA Press.
Haines, R. F. (1992). Fifty-six aircraft pilot sightings involving electromagnetic effects. Proceedings of the 1992 MUFON UFO Symposium, pp. 102-128.
Hawkinds, N. E., & Meyer, M. E. (1965). Time perception of short intervals during finished, unfinished and empty task situations. Psychonomic Science, 3, 473-474.
Norman, P. (1979). Mystery deepens in pilot disappearance case. The MUFON Journal, 141, 5.
Norman, P. (1991, March). On the UFO trail. Australian UFO Bulletin, 5.
Ocean currents around Tasmania. (1997). CSIRO Marine Research [on-line]. Available: http://www.marine.csiro.au/LeafletsFolder/tascurrents.html.
Pinkney, J., & Ryzman, L. (1980). Alien Honeycomb. Sydney: Pan Books, pp. 81-85.
Sturrock, P. A., et al. (1998). Physical evidence related to UFO reports: The proceedings of a workshop held at the Pocantico Conference Center, Tarrytown, New York, September 29-October 4, 1997. Journal of Scientific Exploration, 12, 179-229.
Valentich, G. (1980, September). My son Frederick. The Australian UFO Bulletin, 14.

Journal of Scientific Exploration, Vol. 14, No. 1, pp. 35-52, 2000

0892-3310/00 © 2000 Society for Scientific Exploration

Protection of Mice from Tularemia Infection with Ultra-Low, Serial Agitated Dilutions Prepared from Francisella tularensis-Infected Tissue

Wayne B. Jonas
Department of Family Medicine, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814
e-mail: wjonas@mxa.usuhs.mil

Debra K. Dillner
The Department of Chemistry, United States Naval Academy, Annapolis, MD

Abstract-Reports of immunomodulation with serial agitated dilutions (SADs) of cytokines, hormones, minerals, and whole tissue led to this inquiry as to whether exposure to a complex SAD preparation produced from Francisella tularensis-infected mice could alter the immune response and the effects of subsequent challenge with this pathogen in vivo. Six SAD preparations of reticuloendothelial tissue from F. tularensis-infected C3H/HeN mice were produced through a process of serial log10 and log100 dilutions in 70% ethanol interspersed with 30-second agitation. SAD preparations were analyzed for protein content and for contamination with 1H-NMR spectroscopy. Three preparations contained detectable protein by Lowry and NMR analysis, and three were diluted beyond detection of protein. These preparations were administered orally for 1 month to 147 animals randomly assigned to SAD or diluent control groups. All animals were then challenged with a lethal dose (LD50 or LD75) of F. tularensis and evaluated for time to death and total mortality. In a series of 15 trials, the SAD preparations consistently produced increased mean times to death (MTD; MTD SAD = 18.6 days [range, 12.9-25.6]; MTD controls = 13.7 days [range, 11.6-15.6]) and decreased mortality (SAD: 53%; control: 75%) when compared with matched control groups given the diluent only. Protection was not related to the level of dilution, the number of times vortexed, or the presence or absence of original substance from the tissue. Active and inactive solutions could be distinguished from one another using 1H NMR-spectroscopy. Two preparations induced specific anti-tularemia IgG antibody production before challenge. This anomalous finding needs independent repetition and further investigation.

Keywords: homeopathy - nosodes - tularemia - animal model - immunology - vaccination - ultra-low dilutions - SADs

Introduction

Dose-dependent reverse effects (DDREs), or hormesis, refers to the observation that when living organisms are exposed to very low doses of infectious or toxic agents, their reaction is often the opposite of that observed after exposure to high doses (Calabrese, McCarthy, and Kenyon, 1987; Luckey, 1980). Biological and physical agents of all types (including potent toxins, essential nutrients, pesticides, and penicillin) demonstrate DDREs on an equally broad range of living systems, from single-cell bacteria and fungi to multicellular organisms and across a variety of phyla and species, including humans (Calabrese and Baldwin, 1997; Fursi, 1987; Neafsey, 1990; Stebbing, 1987). These reverse effects are assumed to occur because low-level exposures to otherwise damaging agents induce protective and reparative responses in cells (such as induction of heat shock proteins) without causing cellular damage (Boxenbaum, Neafsey, and Fourier, 1988; Stebbing, 1982; van Wijk and Wiegant, 1997).

Recent studies report that under certain conditions, however, these effects occur with solutions diluted beyond the point when sufficient molecules remain to provide any specific molecular stimulus (Bastide, 1997; Endler and Schulte, 1994; Schulte and Endler, 1998). The majority of these studies report that vortexing the solution in a serial fashion during the dilution process can enhance or preserve this activity. Diverse effects from these serial agitated dilutions (SADs) have been reported in clinical studies (usually in reference to homeopathy; Boissel et al., 1996; Kleijnen, Knipschild, and ter Riet, 1991; Linde et al., 1997), in toxicology (Anderson et al., 1990; Bascands et al., 1987; Cazin et al., 1987; Fisher and Capel, 1982; Fisher et al., 1987; Larue et al., 1985; Linde et al., 1994; Wagner, Kreher, and Jurcic, 1988), and in immunology (Bastide et al., 1987; Bastide, Doucet-Jaboeuf, and Daurat, 1985; Benveniste et al., 1991; Carriere, Dorfman, and Bastide, 1988; Daurat et al., 1986; Daurat, Dorfman, and Bastide, 1988; Davenas et al., 1988; Harisch and Kretschmer, 1988; Maddox, Randi, and Stewart, 1988; Poitevin, Davenas, and Benveniste, 1988; Sainte-Laudy and Belon, 1996b).

Immunomodulating effects include increased cytotoxic and antibody-secreting cell production with SAD preparations of cytokines (Bastide, Doucet-Jaboeuf, and Daurat, 1985; Carriere, Dorfman, and Bastide, 1988; Daurat et al., 1986; Daurat, Dorfman, and Bastide, 1988), NK cell stimulation with hormones (Bastide et al., 1987; Daurat et al., 1986), histamine release, basophil degranulation and CD subset shifting with histamine and anti-IgE antibodies (Benveniste et al., 1991; Davenas et al., 1988; Harisch, Kretschmer, and von-Kries, 1987; Poitevin, Davenas, and Benveniste, 1988; Sainte-Laudy and Belon, 1996a, 1996b; others

This research was conducted according to the principles set forth in the National Institutes of Health (NIH) Publication No. 85-23, Guide for the Care and Use of Laboratory Animals, and the Animal Welfare Act of 1986, as amended. The views, opinions and assertions expressed in this article are those of the authors and do not reflect official policy of the Department of Defense, the Department of Health and Human Services or the U.S. Government.


have reported no such effect from anti-IgE SADs, however: Hirst et al., 1993; Ovelgonne et al., 1992), and macrophage and mast cell activation with SAD preparations from minerals (Davenas, Poitevin, and Benveniste, 1987; Harisch and Kretschmer, 1988; Harisch and Kretschmer, 1989), nonspecific adjuvants (Poitevin, Davenas, and Benveniste, 1988), and whole tissue homogenates (Sainte-Laudy, Haynes, and Gershwin, 1986). Jacques Benveniste, a prominent immunologist and one of the foremost proponents of this ultrahigh SAD phenomenon, also claims that biologically active signals from SAD preparations can be captured and transmitted electronically to biological systems (Benveniste et al., 1997; Hadji, Arnoux, and Benveniste, 1991).

Interestingly, homeopathic physicians since the 18th century have claimed that administration of SAD preparations of infectious tissue (called "nosodes" in that literature) can prevent the spread of epidemic diseases (Boenninghausen, 1887; Campbell, 1909; Castro and Nogueira, 1975; Davis, 1904; English, 1987; Fox, 1987; Gibson, 1958; Krishnamurty, 1970; Linn, 1904; Morgan, 1899; Rastogi and Sharma, 1992; Taylor-Smith, 1950). In addition, studies done in the first half of this century reported that SADs of infected tissue could alter immune reactions as evidenced by alteration of the Schick test (a now outdated method of screening for diphtheria immunity; Chavanon, 1932; Paterson and Boyd, 1941; Shephard, 1967). With rare exceptions, these studies have major flaws that make it impossible to judge the efficacy of this approach despite persistent claims.

There are numerous speculative hypotheses as to how such information might be captured and stored in SAD preparations, if this indeed occurs. These hypotheses include differences in minority oxygen singlets (Berezin, 1990), water cluster formation in solvents (Anagnostatos, Pissis, and Viras, 1995), electromagnetic signal transfer (Endler et al., 1997), informational content of solutions (Bastide and Lagache, 1997), the interaction of "systemic memory" with complex feedback dynamics of living systems, and others (see Jonas and Jacobs, 1996, pp. 85-91 for a short summary). One current testable hypothesis is that it occurs through the arrangement of minority oxygen singlets in water or alcohol (Berezin, 1990). Consistent with this hypothesis are reports that the action of SAD-enhanced preparations is reduced or eliminated by extremely high or low temperatures, after exposure to electromagnetic radiation, and when prepared in the absence of oxygen (Benveniste et al., 1991; Cazin et al., 1991). In addition, broadening in the hydroxyl region and other changes are said to occur in the nuclear magnetic resonance (NMR) spectra of these preparations (Demangeat, Gries, and Poitevin, 1997; Weingartner, 1989).

In the present study, we analyzed whether a SAD preparation of infected tissue, when given to mice, can induce protection against an infectious challenge by the same organism. We chose murine tularemia (Francisella tularensis, live vaccine strain [LVS]) as the model because infection with this organism has been characterized in our lab and is rapidly lethal for C3H/HeN mice, providing clear endpoints (death and days to death) as outcome measures (Fortier et


al.). Our null hypothesis was that mice given SAD preparations of tissue from F. tularensis-infected mice would respond to a subsequent challenge with a lethal dose of F. tularensis no differently than would control mice given only the diluent.

Methods
SAD Preparations

Two specific pathogen-free, 6-week-old, male C3H/HeN mice (Harlan Sprague Dawley, Indianapolis, IN) were infected intranasally with a sublethal dose (10^3 colony forming units [CFUs]) of F. tularensis, LVS (ATCC, Rockville, MD). On Day 7 postinfection, the mice were sacrificed, and target organs for the infection (lungs, liver, and spleen) were removed and processed aseptically. These organs were placed in 15 ml gel/saline media and homogenized in a sterile Ten Broek homogenizer. Tissue homogenates were serially diluted in phosphate-buffered saline (PBS) and plated on cystine heart agar plates with Isovitalex (Difco, Detroit, MI) and Fe phosphate (Sigma, St. Louis, MO) in triplicate. Plates were incubated at 37°C for 72 hours and colonies counted to assess bacterial numbers in infected tissues. Tissue from two additional LVS-infected mice was homogenized in a 500 ml Wedgewood mortar with 60 mg of Lactose, U.S.P. (Pfanstiehl Lab., Waukegan, IL), for 1 hour and mixed with the liquid portion of tissue homogenate. Six SAD preparations were made from the mixed tissue homogenate in the following manner:
1. 0.1 ml of infected tissue homogenate was mixed with 0.9 ml of 70% ethanol in sterile, glass, 5-ml tubes. This mixture was agitated (as described below) for 30 seconds. This tube was labeled Ft/SAD-1d (F. tularensis/agitated ultra-high dilution, 1st log10 [decimal] dilution).
2. 0.1 ml of Ft/SAD-1d then was mixed with 0.9 ml of 70% ethanol in a sterile 5-ml glass tube. This mixture was agitated again for 30 seconds. This tube was labeled Ft/SAD-2d.
3. The above procedure was repeated until serial agitated dilutions were prepared at the 3d, 7d, and 14d levels, each number representing the log10 dilution of the original preparation.
4. Three more preparations were made using a homogenate-to-solute ratio of 1:100. This was done by mixing 0.01 ml of infected tissue homogenate with 0.99 ml of 70% ethanol in sterile, glass, 5-ml tubes and agitating for 30 seconds. This tube was labeled Ft/SAD-1c (F. tularensis/agitated ultra-high dilution, 1st log100 [centesimal] dilution). This procedure then was repeated as in the log10 preparations until the 30c, 200c, and 1000c levels were obtained. Each number in this series represents the log100 dilution of the original preparation.

Immunomodulation has been reported with these solutions in vitro, in vivo, and in clinical trials (Bastide, Doucet-Jaboeuf, and Daurat, 1985; Bastide et al., 1987; Benveniste et al., 1991; Carriere, Dorfman, and Bastide, 1988; Daurat et


al., 1986; Daurat, Dorfman, and Bastide, 1988; Davenas et al., 1988; Ferley et al., 1989; Harisch and Kretschmer, 1988; Linde et al., 1997; Papp et al., 1998; Poitevin, Davenas, and Benveniste, 1988; Reilly et al., 1994; Sainte-Laudy and Belon, 1996b). All agitation was performed manually (except the 1000c preparation) by vertically shaking the solution vial vigorously onto a firm surface 60 times at the rate of two succussions per second. The 1000c preparation was prepared commercially using the 3c as a starting dilution (Quinn Pharmaceuticals, Berkeley, CA). Control groups received 70% alcohol agitated 60 times in an identical fashion.

Analysis of SAD and Control Preparations

Each SAD preparation was divided into three portions for analysis of bacterial count, protein content, and proton nuclear magnetic resonance (1H NMR) spectroscopy.

Bacterial count. We determined bacterial count (CFU) on the original homogenate as described under SAD Preparations. Each of the subsequent dilutions made with 70% alcohol had no viable bacteria on culture. Estimated nonviable bacterial concentration was made from the original homogenate.

Protein analysis. Protein concentration was estimated by the Lowry spectrophotometric analysis method (Markwell et al., 1978). The sensitivity of this method is 0.01 mg/ml of protein.

1H NMR-spectroscopy. Samples were analyzed for 1H NMR-spectral differences in an attempt to distinguish between active and control solutions and to look for possible paramagnetic contaminants. NMR spectroscopy was done at the Department of Chemistry of the U.S. Naval Academy, Annapolis, Maryland. Three cleaned and dried NMR sample tubes (Wilmad 528-PP, 5 mm, Wilmad Glass Co., Buena, NJ) were filled with 0.8 ml of the test SAD preparation or control alcohol solution. The alcohol analyzed was the same 70% ethanol solution used to prepare the SADs. A single, clean, coaxial inner cell (Wilmad WGS-5BL) was filled with 0.2 ml of deuterium oxide (Isotec, Inc., Miamisburg, OH) and inserted into the sample tubes to provide the locking solvent for external referencing with each spectrum. Recording was done with a General Electric NMR QE-300 spectrometer. Sample tubes were washed three times between each analysis with the solution to be evaluated. The central peak of CH3 was defined at 1.5 parts per million (ppm). The spectral areas of interest for this study were the H2O and OH signals between 4.5 and 6 ppm, where evidence of increased proton exchange in the hydroxyl region of the solution was detected (Cooper, 1980). Spectral broadening was quantified by calculating the ratio of OH curve height to width at midaxis for controls and by subtracting the same ratio from each sample.

Animal handling. All experiments were conducted at the barrier animal facility of the Department of Cellular Immunology, Walter Reed Army Institute of Research, Rockville, Maryland. The barrier facility is designed to inhibit


spread of infectious agents from one animal cage to another, as well as from investigators to animals and from animals to personnel. Animal handling was performed in a laminar flow hood to ensure safety and prevent contamination of the animals. All animals received uniform handling, identical feed, water, and environmentally controlled conditions, including constant temperature, humidity, and a 12-hour light-dark cycle.

Randomization and matching. The animals were age-matched, 5- to 6-week-old, male, C3H/HeN mice (Harlan Sprague Dawley). Seven SAD preparations were tested in a total of 15 trials using 8-12 mice per trial. For each trial, animals were randomly divided by an animal caretaker into two groups of 4-6 mice each and placed in separate cages. These cages then were randomly assigned to either the experimental (SAD) or control (70% ethanol) group. All animals were treated in a uniform way, receiving the preparation (SAD or ethanol) at a dose of 0.03 ml per mouse given orally through a sterile pipette. All preparations (including controls) were shaken for 10 seconds before being administered.

Blinding. Expectation control of the experiment was done over 2 months by having research assistants who were completely blind to the nature of the experiment and to the hypothesized "active" and "inactive" preparations administer the SADs. Blinding of the tularemia challenge (the only maneuver theoretically possible to influence the outcome) was done by having a third party cover all cage labels for the animals to be challenged and having a single individual (W.J.) deliver all challenges in a uniform way on randomly selected cages. In this way, both topical and group blinding was maintained for the critical elements of the experiment.

Prophylaxis of animals. The prechallenge dosage regime was one dose (0.03 ml), twice per day for 3 days, followed by one dose every 3 days for 10 doses. After challenge, mice were given the SAD or control preparation twice a day for 20 days. The total number of doses was approximately 16 per mouse delivered over the 30 days prior to challenge and 40 doses delivered in the 20 days after challenge. The animals readily took this amount orally, obviating the need for direct gastric delivery, anesthesia, or multiple injections.

Challenge procedure. On Day 30 from the start of prophylaxis (3 days after the previous SAD or control dose), all mice were challenged intranasally (i.n.) with either an LD50 or LD75 dose of LVS (10^3 or 10^4 CFU, respectively). This dose will kill 50 to 75% of untreated mice within 20 days. Animals were anesthetized with 0.06 mg of Telazol (Robbins, Richmond, VA), i.m., and then inoculated i.n. with LVS diluted in 0.05 ml PBS to the desired concentration. Bacterial counts were done on the challenge solution in duplicate both pre- and postchallenge. Two trials were conducted for each SAD preparation (three for Ft/SAD-14d) using a high (LD75) and low (LD50) challenge dose. In addition, two trials were conducted in mice given a mixed SAD prophylaxis regime that dosed animals with ascending dilutions (Ft/SAD-14d, 30c, 200c, 1000c each given twice for 1 week). These groups were called the ascending groups.
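As a quick bookkeeping check of the design just described, the short sketch below tallies the trials and the dosing schedule. The per-preparation trial counts are inferred from the text (two per preparation, three for the 14d level, plus two ascending trials), and the calendar spacing of the "every 3 days" doses is an assumption; the counts themselves match the approximate totals stated above.

```python
# Bookkeeping sketch of the trial design and oral dosing schedule.
trials_per_prep = {"3d": 2, "7d": 2, "14d": 3, "30c": 2, "200c": 2,
                   "1000c": 2, "ascending": 2}
print(sum(trials_per_prep.values()))   # 15 trials in total

pre_challenge = 2 * 3 + 10             # twice daily x 3 days, then 10 more doses
post_challenge = 2 * 20                # twice daily for 20 days after challenge
print(pre_challenge, post_challenge)   # ~16 and 40 doses of 0.03 ml each
```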

Outcome Measures

Antibody screening. Before starting SAD prophylaxis and again before infectious challenge, blood was drawn from each group through the lateral tail vein, pooled, and centrifuged, and the serum frozen at -20°C. Serum was analyzed using enzyme-linked immunosorbent assay (ELISA) for antibody titers to whole organism LVS using the following procedure. Immulon plates were coated with 5 × 10^6 organisms and incubated overnight at 4°C. Plates then were washed three times with saline-Tween solution, blocked with 200 µl of 10% fetal bovine serum, and rewashed. Each serum batch was screened for the presence of antibodies by adding serial 10-fold dilutions of serum (100 µl/well), incubated for 90 minutes at 37°C, and washed. Then 0.1 µg/well of peroxidase-labeled goat anti-mouse Ig antibody was added and incubated for 90 minutes at 37°C. Substrate (KPL solution A and B, 1:1) was then added at 100 µl per well and color read at 410 nm in 30 minutes. A reading was considered positive if optical density of the test serum was equal to or greater than positive control serum produced by LVS-vaccinated mice.

Mortality and time to death. The two primary outcome measures were time to death and mortality. Animals were checked twice daily after infectious challenge and the death day for each animal recorded. The experiment was terminated 30 days postchallenge.

Data analysis. Data were entered into a spreadsheet format (Microsoft Excel, Microsoft Corp., Redmond, WA) and analyzed by standard statistical software packages (StatView II, SuperAnova; Abacus Concepts, Berkeley, CA; and JMP, SAS Institute, Cary, NC). To control for the influence of censored data from mice that survived challenge, both parametric and nonparametric methods were used. The difference between harmonic mean death day (the mean reciprocal death time, MRDT) of matched groups was evaluated using a two-tailed, paired t test and the Wilcoxon signed rank test. The MRDT was the sum of one over each death day divided by the total number of mice in the group. This calculation gave an estimate of resistance to infection in the group based on time to death and overall mortality while controlling for censored and outlying data from surviving animals. The value obtained was between 0 and 1, in which 0 = total resistance and 1 = death of all animals on the first day. Death-day data were also analyzed without this transformation using the Mann-Whitney U nonparametric test. Mortality ratios were analyzed with standard chi-square methods, and stratified mortality was evaluated using the Mantel-Haenszel rate ratio method. One-way analysis of variance (ANOVA) was used to evaluate the influence of SAD treatment while controlling for challenge dose with both death day and MRDT as outcome measures.
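The MRDT description above can be written as a one-line function. The sketch below is a minimal illustration, assuming that mice surviving to the end of the observation period contribute zero to the sum (the reciprocal of an effectively unbounded death day) while still counting in the denominator; the function name and the example group are hypothetical.

```python
from typing import Optional, Sequence

def mrdt(death_days: Sequence[Optional[int]]) -> float:
    """Mean reciprocal death time (harmonic-mean-based resistance index).

    death_days holds the death day for each mouse, or None for a survivor.
    Survivors add nothing to the numerator but count in the denominator,
    so 0 = total resistance and 1 = all mice dead on day 1.
    """
    total = sum(1.0 / d for d in death_days if d is not None)
    return total / len(death_days)

# Illustrative (hypothetical) group: three deaths and two survivors.
print(round(mrdt([10, 14, 20, None, None]), 3))
```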


Results
Bacterial Count

Bacterial counts done on the stock homogenate showed these tissues harbored 10⁷ CFU of LVS. No viable bacteria were found in any of the SAD preparations or in the two alcohol control preparations that were cultured. Estimates of the number of nonviable bacteria in the SADs indicated that only the Ft/SAD-3d and 7d preparations contained bacteria: 10⁴ and 1 organism, respectively (Table 1, column 2).
Protein Analysis

Protein concentration was 0.55 mg/ml in the initial homogenate, 0.098 mg/ml in the Ft/SAD-3d preparation, and less than 0.001 mg/ml in all other SAD preparations and the two controls tested. Estimated protein content for the Ft/SAD-7d and 14d preparations was 1 ng/ml and 0.1 fg/ml, respectively (Table 1, column 3). The calculated amount of protein given to each mouse over the 45-day experiment was 16.5 micrograms, 1.68 nanograms, and 0.17 femtograms for the 3d, 7d, and 14d levels, respectively (Table 1, column 4). Preparations in the centesimal series had no remaining protein, either detectable or theoretical.
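The statement that the centesimal preparations contain no protein even in theory follows from the nominal dilution arithmetic. The sketch below (Python, an illustration rather than the authors' calculation) assumes the conventional reading of each "c" step as a 1:100 dilution and uses a deliberately generous starting amount.

    from math import log10

    AVOGADRO = 6.022e23

    # Even starting from a generous 1 mol of protein in the stock, the expected
    # number of molecules surviving a centesimal series (nominal factor of
    # 10**(-2 * level)) is negligible; work in log10 to avoid underflow.
    start_molecules = AVOGADRO
    for level in (30, 200, 1000):
        log_expected = log10(start_molecules) - 2 * level
        print(f"{level}c: expected molecules ~ 10^{log_expected:.0f}")
    # 30c already gives ~10^-36 expected molecules, i.e., effectively zero.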
NMR Spectroscopy

The ¹H NMR spectra of the SAD groups were easily distinguished from those of the control groups regardless of the dilution level (Table 2). All SAD preparations had a moderate to large amount of OH spectral broadening compared with the control groups. The average height-to-width ratio of the SAD preparations was 3.4 (range 1.8-4.8), compared with the control preparations, where the height-to-width ratio was 11.

TABLE 1
Analysis of SAD Preparations

Preparation    CFU count    Protein        Protein/mouse
Homogenate     10⁷          550 µg/ml      NA
3d             (10⁴)        9.8 µg/ml      16.5 µg
7d             (1)          (1 ng/ml)      (1.68 ng)
14d            0            (0.1 fg/ml)    (0.17 fg)
30c            0            0              0
200c           0            0              0
1000c          0            0              0
Control 1      0            0              0
Control 2      0            0              0

Note: CFU = colony forming units. Analysis is of CFU, protein concentration, and total protein received per mouse during prophylaxis. Numbers in parentheses are calculations based on analysis of the stock homogenate.

TABLE 2
NMR Spectra of SAD Preparations

Preparation    Height-to-width ratio    OH curve broadening^a
3d
7d
14d
30c
200c
1000c
Mean
Control

Note: Results of NMR spectroscopy evaluating OH spectral curve broadening between 4.5 and 6 ppm. ^a Height-to-width ratio of the control preparation minus the height-to-width ratio of the SAD preparation. Higher numbers indicate greater broadening.

The 30c and 200c preparations had height-to-width ratios of 4.8 and 3.0, respectively. NMR spectroscopy was not done on the commercial SAD preparation because the alcohol concentration of this preparation was not identical to that of the other SADs (50% vs. 70% ETOH).

Evaluation of Prophylaxis

Antibody screening. Serum from the 30c- and 200c-treated mice was positive for anti-tularensis antibodies after Week 4 of SAD treatment. Anti-tularensis antibodies were not found in the other SAD-treated groups. Titer endpoints were 1:100 or less before prophylaxis and greater than 1:10,000 just before challenge in the 30c- and 200c-treated mice and in the LVS-vaccinated

TABLE 3
Anti-LVS Antibodies in SAD Groups

Preparation         Pre-AUHD treatment    Pre-LVS challenge
3d
7d
14d
30c
200c
1000c
Ascending
Positive control
Negative control

Note: LVS = live vaccine strain; AUHD = agitated ultra-high dilutions; ND = not done. Endpoint anti-tularensis antibody titers of pooled serum from mice before SAD treatments and just prior to LVS challenge. ^a This sample was drawn 3 days after the start of SAD treatment. ^b This sample was drawn 15 days before LVS challenge.


mice (positive controls). Titer endpoints were always 1:100 or less in the 3d, 7d, 14d, 1000c, and ascending groups and in the normal (untreated) mice (Table 3).

MRDT. The MRDT was calculated for SAD-treated and control mice for each matched trial. MRDT ranged from 0 to 0.16 for all trials and, as expected, was higher in the trials with a higher challenge dose. SAD-treated mice consistently showed reduced MRDT (increased resistance to infection) over their

Fig. 1. Mean reciprocal death times (MRDT) of SAD-treated mice (first columns) and their matched control groups (second columns) over 15 trials. Lower MRDT scores indicate increased resistance to infection. The first trial listed for each SAD level is the LD50 challenge; the second and third trials listed are the LD75 challenge. (Vertical axis: mean reciprocal death time; horizontal axis: SAD level.)

TABLE 4
Analysis of Mortality After SAD Treatments

                                          Mortality
Preparation             No. in group    SAD    Control    Relative risk^a
3d
7d
14d
30c
200c
1000c
Ascending
Dilutions (3d-14d)
Solutions (30-As)
All Groups

Note: Mortality of mice treated with each SAD preparation and matched control group. ^a Relative risk after stratification. Mantel-Haenszel chi-square for all groups (95% CI 1.23-2.92, p < .005).

matched control group (mean difference = .03). All but two of the 15 trials showed a reduced MRDT in the SAD group (Wilcoxon sign rank test, z = 3.408, p = .0007; paired t test, t = 7.518, p = .0001, two-tailed). There was no relationship between level of dilution in the SAD preparations and protection. All preparations except Ft/SAD-1000c demonstrated protection (Figure 1).

These results were checked directly against actual death day using the raw data for all SAD groups with Day 30 as the cutoff. Mean time to death for SAD-treated mice was 18.6 days (range 12.9-25.6) and for controls was 13.7 days (range 11.6-15.6). The Mann-Whitney U test using death day for all groups gave a z of 3.087 (p = .002), consistent with the MRDT evaluation. SAD treatment delayed death by approximately 5 days. One-way ANOVA showed that SAD treatment had a significant impact on death day (F = 11.51, p = .0009) and MRDT (F = 4.96, p = .037) even after controlling for challenge dose (Table 5).

Mortality. Each group was monitored daily for deaths up to 30 days postchallenge. Relative risk was above 1 in all groups except Ft/SAD-1000c. The Mantel-Haenszel rate ratio (after stratification into treatment groups) was not different from the crude rate ratio for all groups, indicating that no individual preparation disproportionately influenced the overall result (M-H RR 1.89; 95% CI 1.23-2.92, two-tailed p = .0037; Table 4). Overall mortality was 53% in the SAD group and 75% in the matched control group, with little difference between solution and dilution groups. The chi-square statistic for the proportion of deaths in each group was 8.41 (p = .0037). SAD treatment prevented 22% of deaths. The number of organisms required to kill 50% of the animals (LD50) in the control group was 6,000 CFU, compared with 20,000 CFU in the SAD-treated group (Table 5).
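The stratified analysis above uses the Mantel-Haenszel pooled rate ratio. A generic sketch of that pooling is given below (Python); the counts in the example are placeholders for illustration only and are not the study's data.

    def mantel_haenszel_rr(strata):
        """Mantel-Haenszel pooled risk ratio across strata.  Each stratum is
        (deaths_ctrl, n_ctrl, deaths_sad, n_sad); the ratio is the risk in
        controls relative to the risk in SAD-treated animals, so values above
        1 indicate protection by the SAD preparation."""
        num = sum(d_c * n_s / (n_c + n_s) for d_c, n_c, d_s, n_s in strata)
        den = sum(d_s * n_c / (n_c + n_s) for d_c, n_c, d_s, n_s in strata)
        return num / den

    # Hypothetical strata (one per preparation); NOT the study's counts.
    strata = [(8, 10, 5, 10), (7, 10, 6, 10), (9, 12, 6, 12)]
    print(mantel_haenszel_rr(strata))   # ~1.41 for these placeholder counts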

TABLE 5
Summary Evaluation of SAD Effects

Outcome                            SAD       Control    Evaluation    p value
MDT (days)                         18.6      13.7       z = 3.08      .002^a
MRDT                               0.07      0.10       t = 7.518     .0001^b
  By nonparametric test                                 z = 3.41      .0007^c
  Controlled for challenge dose                         F = 4.96      .037^d
Mortality                          53%       75%        χ² = 8.41     .0037
  Prevented fraction of deaths by SAD preparations = 22%
Challenge dose (CFU)               7300      7300
LD50 (CFU)                         20,000    6000

Note: CFU = colony forming units; MDT = mean time to death; MRDT = mean reciprocal death time. Summary evaluations of mortality (MDT) and MRDT in all groups. ^a Mann-Whitney U test. ^b Student's two-tailed t test. ^c Wilcoxon sign rank test. ^d One-way ANOVA on mean time to death after controlling for challenge dose.

Conclusion
The results in this series of experiments did not confirm the null hypothesis that ultra-low SAD dilutions prepared from infected tissue act like chemically identical controls in their ability to protect against subsequent bacterial challenge by the same organism. SADs reduced overall mortality by 22%, delayed death by 5 days, and tripled the number of organisms required to kill 50% of the animals. Two SADs (30c and 200c) stimulated the production of specific anti-tularensis antibodies in animals after 1 month of exposure. No protein was detected in the SAD preparations, but they could be distinguished from control solutions with ¹H NMR spectroscopy.

All SAD preparations, except the 1000c, consistently imparted some protective effect. Environmental exposures while in transit or differences in the method of preparation of the commercial SAD may have altered the 1000c solution, rendering it different from those produced in our lab. Data from studies using the SADs made at our lab suggest that protection was not related to level of dilution, number of times vortexed, or absence of bacteria or protein in the preparations.

The lack of relationship between response and dilution level is probably a reflection of the wide gap between the dilutions selected for testing. In vitro work on SADs in which multiple serial 10-fold dilutions are examined usually reports a quadratic or sinusoidal (rather than a linear or logarithmic) dose-response relationship (Boxenbaum, Neafsey, and Fournier, 1988; Davenas, Poitevin, and Benveniste, 1987; Davenas et al., 1988; Poitevin, Davenas, and Benveniste, 1988; Schulte and Endler, 1998). Peaks and valleys of effect are found when multiple and closely spaced dilutions are tested. We tested only six widely spaced dilutions and so may have missed an existing dose-response relationship.


SAD solutions could be distinguished from control solutions using ¹H NMR spectroscopy; however, the relevance of this finding is unclear. Broadened hydroxyl band width can arise from a variety of causes, including contamination with paramagnetic particles or increased proton exchange in the OH bonding region. This study was not designed to control for these possibilities, and therefore no conclusions can be drawn about this finding.

Reports in the literature testing the SAD prophylaxis hypothesis in laboratory models or in veterinary populations have yielded mixed results. Taylor reported no prophylactic effect from a SAD preparation derived from the husk of the lungworm Dictyocaulus viviparus in experimental lung infection in cows, after there had been promising reports from veterinarians (Taylor, Mallon, and Green, 1989). That experiment used the 30c preparation, dosed the animals four times with the SAD over a month, and gave no doses after challenge. Study power was low. Oberbaum, Weismann, and Bentwich (1989) reported on a study of a retroviral infection with a SAD preparation. This study used SAD preparations (range 6d-30d) derived from a C-type, B-tropic retrovirus (LP-BM5 MuLV), which produces a murine AIDS-like syndrome (MAIDS). Groups of mice were treated either orally or by injection (i.p.) three times a week, but, in contrast to Taylor, SAD preparations were given only after the infectious challenge. Animals given the 12d SAD prepared from stock containing less than 5 infectious units of virus had significant reductions in spleen weights (the primary marker of the disease) compared with saline-alcohol-treated animals. No other level of SAD demonstrated an effect (Oberbaum, Weismann, and Bentwich, 1989). Day reported that a nosode prepared from parvovirus stopped an epidemic of kennel cough in dogs (Day, 1987). No control group was included, however, and therefore no comparison with the natural course of the epidemic could be made. The use of nosodes and other SAD and homeopathic preparations is apparently popular among European animal farmers for the control of epidemic outbreaks. Several reports describe SAD use for the control of bovine mastitis (Day, 1986; Merck, Sonnenwald, and Rollwage, 1989). Langer reported on a series of field tests and controlled experiments on bovine mastitis using various mixtures of nosodes, SADs, and homeopathic remedies that were both protective and therapeutic (Langer, 1990).

What is the mechanism of SAD-induced protection? Besides artifact or experimental error, the most likely explanation is that SAD preparations of LVS-infected tissue contain sufficient immunogenic signals to stimulate an immune response in recipients. Protection does not seem dependent on humoral immunity, because anti-tularemia antibodies were detected only in serum from 30c- and 200c-treated mice (Table 3). This is not unusual in tularemia infection, however, in which the primary protective mechanism is thought to be cellular rather than humoral (Tarnvik, 1989). Specific antibody induced by SAD treatment, then, cannot account for the protection seen in all SAD-treated groups.


Induction of a specific antibody response by SAD treatments in other systems is also variable. Davies reported no antibody production 2 weeks after three doses of a SAD preparation from influenza A2 (Davies, 1971). Weisman and colleagues, on the other hand, have reported highly significant modulation of specific antibody generation in mice using SAD preparations of keyhole limpet hemocyanin (KLH) given three times a week for 8 weeks (Weisman et al., 1997). Neither of these investigators included cytokines, tissue protein, or other adjuvants reported in vitro to modulate immune function when prepared as SADs. If ultra-low-dose SADs do have specific effects, one can hardly expect those effects to be large or to be induced after only a few doses over a short time period. No research has been done to systematically investigate these questions.

What is the active agent in the final SAD solutions? Of the many speculative hypotheses previously discussed, one is that stable isotopic positional correlations (IPCs) of minority oxygen singlets are produced around molecules in the stock solutions and subsequently "locked in" during agitation by the polaronic self-stabilization properties of nonlinear media (Berezin, 1990). IPCs, then, might mimic the original molecules through interaction with receptors and enzyme systems on or in the cell. This hypothesis takes into consideration current information about SAD action and is testable by manipulating the concentration of oxygen isotopes in the solution as it is prepared. In addition, the molecular organization of these solutions could be further evaluated using techniques such as thin-layer crystallography, NMR, infrared, Raman, or other spectrographic methods (del Giudice, Preparata, and Vitiello, 1988; Popp, Li, and Mei, 1989). One currently popular hypothesis is that water can hold and deliver specific electromagnetic signals (with coherent wavelengths similar to a laser) to which biological systems respond (Benveniste et al., 1997). Others claim these observed effects are all attributable to experimental error and artifact (Maddox, 1988; Metzger, 1988; Plasterek, 1988; Seagrave, 1988). We are currently attempting to replicate these findings under more stringent conditions (e.g., total blinding, uniform aerosol dosing methods, additional control groups, etc.).

Whatever the mechanism, the protection produced by SAD preparations in this study was unexpected yet consistent over 15 trials. If these effects represent a real and general phenomenon, SAD preparations might provide a simple and rapid method of protecting against infectious agents for which we do not yet have adequate prophylactic or therapeutic regimes, or against agents with emerging resistance to current treatments. This is especially important as the world becomes increasingly mobile and the chance of rapid spread of unusual or resistant infectious pathogens increases. Further investigation into this phenomenon is warranted.

Acknowledgments
The authors thank Terri Western, Department of Cellular Immunology, for her help in blinding, randomization, and SAD administration; Robert Burge, Division of Biometrics, for his help with statistical analyses of our data; August Salvado, past Director of WRAIR, and Ann Fortier and Carol A. Nacy, previously at the Department of Cellular Immunology, for their support and critical analysis of the project.

References
Anagnostatos, G. S., Pissis, P., & Viras, K. (1995). Possible water cluster formation by dilution and succussions. In Anagnostatos, G. S., & von Oertzen, W. (Eds.), Atomic and nuclear clusters (pp. 215-217). Heidelberg, Germany: Springer-Verlag.
Anderson, D., Fisher, P., Francis, A. J., Philipps, B. J., & Jenkinson, P. C. (1990). Studies of the adaptive repair response in cultured cells. In Proceedings of the First International Congress on Ultra Low Doses, 1, A4.2. Bordeaux, France: University of Bordeaux.
Bascands, J. L., Girolami, J. P., Cabos, C., Pecher, C., Bompart, G., Suc, J. M., & Manuel, Y. (1987). Low dose of cadmium (Cd) protects rat glomerular cell culture against the direct toxic effect of cadmium. Heavy Metals in the Environment, 2, 118-120.
Bastide, M. (1997). Signals and images. Dordrecht, The Netherlands: Kluwer Academic Publishers.
Bastide, M., Daurat, V., Doucet-Jeboeuf, M., Pelegrin, A., & Dorfman, P. (1987). Immunomodulator activity of very low doses of thymulin in mice. International Journal of Immunotherapy, 3, 191-200.
Bastide, M., Doucet-Jaboeuf, M., & Daurat, V. (1985). Activity and chronopharmacology of very low doses of physiological immune inducers. Immunology Today, 6, 234-235.
Bastide, M., & Lagache, A. (1997). A communication process: A new paradigm applied to high-dilution effects on the living body. Alternative Therapies in Health and Medicine, 3, 35-39.
Benveniste, J., Davenas, E., Ducot, G., Cornillet, B., Poitevin, B., & Spira, A. (1991). L'agitation de solutions hautement diluées n'induit pas d'activité spécifique. Comptes Rendus de l'Académie des Sciences Paris, 312, 461-466.
Benveniste, J., Jurgens, P., Hsueh, W., & Aissa, J. (1997). Transatlantic transfer of digitized antigen signal by telephone link. Journal of Allergy and Clinical Immunology [abstract], 99, S175.
Berezin, A. A. (1990). Isotopical positional correlations as a possible model for Benveniste experiments. Medical Hypotheses, 31, 43-45.
Boenninghausen, C. (1887). Prophylaxis. Homoeopathic Physician, 7, 63-64.
Boissel, J. P., Cucherat, M., Haugh, M., & Gauthier, E. (1996). Critical literature review on the effectiveness of homoeopathy: Overview of data from homoeopathic medicine trials. Report to the European Commission, Brussels. Homoeopathic Medicine Research Group, Sussex, UK.
Boxenbaum, H., Neafsey, P. J., & Fournier, D. J. (1988). Hormesis, Gompertz functions, and risk assessment. Drug Metabolism Reviews, 19, 195-229.
Calabrese, E. J., & Baldwin, L. A. (1997). The dose determines the stimulation (and poison): Development of a chemical hormesis database. International Journal of Toxicology, 16, 545-559.
Calabrese, E. J., McCarthy, M. E., & Kenyon, E. (1987). The occurrence of chemically induced hormesis. Health Physics, 52, 531-541.
Campbell, N. (1909). Reasons for having faith in the nosodes. Medical Advances, 47, 633-644.
Carriere, V., Dorfman, P., & Bastide, M. (1988). Evaluation of various factors influencing the action of mouse α,β interferon on the chemiluminescence of mouse peritoneal macrophages. Annual Review of Chronopharmacology, 5, 9-12.
Castro, D., & Nogueira, G. (1975). Use of the nosode meningococcinum as a preventive against meningitis. Journal of the American Institute of Homeopathy, 68, 211-219.
Cazin, J. C., Cazin, M., Chaoui, A., & Belon, P. (1991). Influence of several physical factors on the activity of ultra low doses. In Doutremepuich, C. (Ed.), Ultra low doses (pp. 69-80). Washington, DC: Taylor & Francis.
Cazin, J. C., Cazin, M., Gaborit, J. L., Chaoui, A., Boiron, J., Belon, P., Chenuault, Y., & Papapanayotou, C. (1987). A study of the effect of decimal and centesimal dilutions of arsenic on the retention and mobilization of arsenic in the rat. Human Toxicology, 6, 315-320.
Chavanon, P. (1932). La diphtérie. Niort, France: Imprimerie St.-Denis.
Cooper, J. W. (1980). Spectroscopic techniques for organic chemists. New York: John Wiley & Sons.
Daurat, V., Carriere, V., Douylliez, C., & Bastide, M. (1986). Immunomodulatory activity of thymulin and α,β interferon on the specific and the nonspecific cellular response of C57BL/6 and NZB mice. Immunobiology, 173, 188.
Daurat, V., Dorfman, P., & Bastide, M. (1988). Immunomodulatory activity of low doses of interferon α,β in mice. Biomedicine and Pharmacotherapy, 42, 197-206.
Davenas, E., Beauvais, J., Oberbaum, M., Robinzon, B., Miadonna, A., Tedeschi, A., Pomeranz, B., Fortner, P., Belon, P., Sainte-Laudy, J., Poitevin, B., & Benveniste, J. (1988). Human basophil degranulation triggered by very dilute antiserum against IgE. Nature, 333, 816-818.
Davenas, E., Poitevin, B., & Benveniste, J. (1987). Effect on mouse peritoneal macrophages of orally administered very high dilutions of silica. European Journal of Pharmacology, 135, 313-319.
Davies, A. E. (1971). Clinical investigations into the action of potencies. British Homoeopathic Journal, 60, 37-41.
Davis, J. J. (1904). Some experience in the prophylaxis of variola. Medical Advances, 42, 225-227.
Day, C. (1986). Clinical trial in bovine mastitis. British Homoeopathic Journal, 75, 11-14.
Day, C. E. I. (1987). Isopathic prevention of kennel cough: Is vaccination justified? International Journal of Veterinary Homeopathy, 2, 45-50.
del Giudice, E., Preparata, G., & Vitiello, G. (1988). Water as a free electric dipole laser. Physical Review Letters, 61, 1085-1088.
Demangeat, J. L., Gries, P., & Poitevin, B. (1997). Modification of 4 MHz N.M.R. water proton relaxation times in very high diluted aqueous solutions. In Bastide, M. (Ed.), Signals and images (pp. 95-110). Dordrecht, The Netherlands: Kluwer Academic Publishers.
Endler, P. C., Pongratz, W., Smith, C. W., Schulte, J., Senekowitsch, F., & Citro, M. (1997). Non-molecular information transfer from thyroxine to frogs. In Bastide, M. (Ed.), Signals and images (pp. 149-160). Dordrecht, The Netherlands: Kluwer Academic Publishers.
Endler, P. C., & Schulte, J. (1994). Ultra high dilution: Physiology and physics. Dordrecht, The Netherlands: Kluwer Academic Publishers.
English, J. M. (1987). Pertussin 30: Preventive for whooping cough? British Homoeopathic Journal, 76, 61-65.
Ferley, J. P., Zmirou, D., D'Adhemar, D., & Balducci, F. (1989). A controlled evaluation of a homoeopathic preparation in influenza-like syndromes. British Journal of Clinical Pharmacology, 27, 329-335.
Fisher, P., & Capel, I. (1982). The treatment of experimental lead intoxication in rats by penicillamine and plumbum metallicum. Journal of the Society of Ultramolecular Medicine, 1, 30-31.
Fisher, P., House, I., Belon, P., & Turner, P. (1987). The influence of the homoeopathic remedy plumbum metallicum on the excretion kinetics of lead in rats. Human Toxicology, 6, 321-324.
Fortier, A. H., Slayter, M. V., Ziemba, R., Meltzer, M. S., & Nacy, C. A. (1991). Live vaccine strain of Francisella tularensis infection and immunity in mice. Infection and Immunity, 59, 2922.
Fox, A. D. (1987, April). Whooping cough prophylaxis with pertussin 30. British Homoeopathic Journal, 69-70.
Furst, A. (1987). Hormetic effects in pharmacology: Pharmacological inversions as prototypes for hormesis. Health Physics, 52, 527-530.
Gibson, D. M. (1958). Nosodes and prophylaxis. Homoeopathy, 8, 111-124.
Hadji, L., Arnoux, B., & Benveniste, J. (1991). Effect of dilute histamine on coronary flow of guinea-pig isolated heart: Inhibition by a magnetic field. Federation of American Societies for Experimental Biology Journal, 5, 7040.
Harisch, G., & Kretschmer, M. (1988). Smallest zinc quantities affect the histamine release from peritoneal mast cells of the rat. Experientia, 44, 761-762.
Harisch, G., & Kretschmer, M. (1989). Histamine release from rat peritoneal mast cells after oral doses of homeopathically prepared minerals and disodium cromoglycate. Journal of Applied Nutrition, 41, 45-49.
Harisch, V. G., Kretschmer, M., & von-Kries, U. (1987). Beitrag zum Histamin-Release aus peritonealen Mastzellen von männlichen Wistar-Ratten. DTW Deutsche Tierärztliche Wochenschrift, 94, 497-540.
Hirst, S. J., Hayes, N. A., Burridge, J., Pearce, F. L., & Foreman, J. C. (1993). Human basophil degranulation is not triggered by very dilute antiserum against human IgE. Nature, 366, 525-527.
Jonas, W. B., & Jacobs, J. (1996). Healing with homeopathy. New York: Warner.
Kleijnen, J., Knipschild, P., & ter Riet, G. (1991). Clinical trials of homoeopathy. British Medical Journal, 302, 316-323.
Krishnamurty, P. S. (1970). Report on the use of influenzinum during the outbreak of epidemic in India in 1968. Hahnemannian Gleanings, 37, 225-226.
Langer, P. H. (1990). Summary of evidence for the efficacy of Para Plus in the treatment of mastitis. Ontario, Canada: Dorchester Laboratories.
Larue, F., Dorian, C., Cal, J. C., Guillemain, J., & Cambar, J. (1985). Influence de prétraitement de dilutions infinitésimales de mercurius corrosivus sur la mortalité induite par le chlorure mercurique. Néphrologie, 6, 86.
Linde, K., Clausius, N., Ramirez, G., Melchart, D., Eitel, F., Hedges, L. V., & Jonas, W. B. (1997). Are the clinical effects of homoeopathy placebo effects? A meta-analysis of placebo-controlled trials. Lancet, 350, 834-843.
Linde, K., Jonas, W. B., Melchart, D., Worku, F., Wagner, H., & Eitel, F. (1994). Critical review and meta-analysis of serial agitated dilutions in experimental toxicology. Human and Experimental Toxicology, 13, 481-492.
Linn, A. M. (1904). Variolinum, a prophylaxis against smallpox. Medical Advances, 42, 431-436.
Luckey, T. D. (1980). Hormesis with ionizing radiation. Boca Raton, FL: CRC Press.
Maddox, J., Randi, J., & Stewart, W. W. (1988). "High-dilution" experiments a delusion. Nature, 334, 287-290.
Markwell, M. A. K., Hass, S. M., Bieber, L. L., & Torbert, N. E. (1978). Modification of the Lowry procedure to simplify protein determination in membrane and lipoprotein samples. Analytical Biochemistry, 87, 206-210.
Merck, C., Sonnenwald, B., & Rollwage, H. (1989). The administration of homeopathic drugs for the treatment of acute mastitis in cattle. Berliner und Münchener Tierärztliche Wochenschrift, 102, 266-272.
Metzger, H. (1988). Only the smile is left. Nature, 334, 375-376.
Morgan, W. L. (1899). Epidemics, endemics, and contagions. Homoeopathic Physician, 19, 479-488.
Neafsey, P. J. (1990). Longevity hormesis: A review. Mechanisms of Ageing and Development, 51, 1-31.
Oberbaum, M., Weismann, Z., & Bentwich, Z. (1989). Treatment of experimentally-induced AIDS in mice by very high dilutions of virus: A preliminary study. In Bastide, M. (Ed.), Signals and images (pp. 127-129). Paris, France: Atelier Alpha Bleue.
Ovelgonne, J. H., Bol, A., Hop, W., & van Wijk, R. (1992). Mechanical agitation of very dilute antiserum against IgE has no effect on basophil staining properties. Experientia, 48, 504-508.
Papp, R., Schuback, G., Beck, E., Burkard, G., Bengel, J., Lehrl, S., & Belon, P. (1998). Oscillococcinum in patients with influenza-like syndromes: A placebo-controlled, double-blind evaluation. British Homoeopathic Journal, 87, 69-76.
Paterson, J., & Boyd, W. E. (1941). Potency action: A preliminary study of the alteration of the Schick test by a homoeopathic potency. British Homoeopathic Journal, 31, 301-309.
Plasterek, R. (1988). Explanation of Benveniste. Nature, 334, 285-286.
Poitevin, B., Davenas, E., & Benveniste, J. (1988). In vitro immunological degranulation of human basophils is modulated by lung histamine and Apis mellifica. British Journal of Clinical Pharmacology, 25, 439-444.
Popp, F. A., Li, K. H., & Mei, W. P. (1989). Physical aspects of biophotons. Experientia, 44, 576-685.
Rastogi, D., & Sharma, V. (1992). Study of homoeopathic drugs in encephalitis epidemic (1991) in Uttar Pradesh (India). Central Council for Research on Homeopathy Quarterly Bulletin, 14, 1-11.
Reilly, D., Taylor, M. A., Beattie, N., Campbell, J. H., McSharry, C., Aitchison, T., Carter, R., & Stevenson, R. D. (1994). Is evidence for homeopathy reproducible? Lancet, 344, 1601-1606.
Sainte-Laudy, J., & Belon, P. (1996a). Analysis of immunosuppressive activity of serial dilutions of histamine on human basophil activation by flow cytometry. Inflammation Research, 45, S33-S34.
Sainte-Laudy, J., & Belon, P. (1996b). Inhibition of basophil activation by high dilutions of histamine. Agents Actions, Supplements, 38, C245-C247.
Sainte-Laudy, J., Haynes, D., & Gerswin, G. (1986). Inhibition effects of whole blood dilutions on basophil degranulation. International Journal of Immunotherapy, 2, 247-250.
Schulte, J., & Endler, P. C. (1998). Fundamental research in ultra-high dilutions and homeopathy. Dordrecht, The Netherlands: Kluwer Academic Publishers.
Seagrave, J. C. (1988). Evidence of non-reproducibility. Nature, 334, 559.
Shephard, D. (1967). Homeopathy in epidemic diseases (G. E. Robinson, Ed.). Rustington, England: Health Science Press.
Stebbing, A. R. (1987). Growth hormesis: A by-product of control. Health Physics, 52, 543-547.
Stebbing, A. R. D. (1982). Hormesis: The stimulation of growth by low levels of inhibitors. Science of the Total Environment, 22, 213-234.
Tarnvik, A. (1989). Nature of protective immunity to Francisella tularensis. Review of Infectious Disease, 11, 440-451.
Taylor, S. M., Mallon, T. R., & Green, W. P. (1989). Efficacy of a homoeopathic prophylaxis against experimental infection of calves by the bovine lungworm Dictyocaulus viviparus. Veterinary Record, 124, 15-17.
Taylor-Smith, A. (1950). Poliomyelitis and prophylaxis. British Homoeopathic Journal, 40, 65-77.
van Wijk, R., & Wiegant, F. A. C. (1997). The similia principle in surviving stress: Mammalian cells in homeopathy research. Utrecht, The Netherlands: Homeopathy International.
Wagner, H., Kreher, B., & Jurcic, K. (1988). In vitro stimulation of human granulocytes and lymphocytes by pico- and femtogram quantities of cytostatic agents. Arzneimittel-Forschung (Drug Research), 38, 273-275.
Weingartner, O. (1989). NMR-Spektren von Sulfur-Potenzen. Therapeutikon, 3, 438-442.
Weisman, Z., Oberbaum, M., Topper, R., Harpaz, N., & Bentwich, Z. (1997). High dilutions of antigens modulate the immune response to KLH. In Bastide, M. (Ed.), Signals and images (pp. 179-190). Dordrecht, The Netherlands: Kluwer Academic Publishers.

Journal of Scientific Exploration, Vol. 14, No. 1, pp. 53-72, 2000

0892-3310/00 © 2000 Society for Scientific Exploration

The Correlation of the Gradient of Shannon Entropy and Anomalous Cognition: Toward an AC Sensory System

The Laboratories for Fundamental Research, 330 Cowper Street, Suite 300, Palo Alto, CA 94301. Email: may@ P o r g

Abstract - In this study, we hoped to replicate earlier findings that have demonstrated strong evidence for anomalous cognition (AC), as well as a significant correlation between the quality of the AC and the gradient of Shannon entropy, but not with the entropy itself. We created a new target pool and a more sensitive analytical system compared with those of earlier studies. We then invited five experienced receivers (i.e., experiment participants) to contribute 15 trials each. In addition to the usual rank-order analysis, two other methods were used to assess the quality of the AC. The first of these was a 0 to 7 rating scale that has been used in the earlier studies. The second, a figure of merit, was based on a fuzzy-set encoding of the targets and responses. The primary hypotheses were (a) that a significant correlation would be seen between the figure-of-merit quality assessment and the gradient of Shannon entropy for the associated target and (b) that the correlation using the rating assessment would be consistent with earlier findings. A secondary hypothesis was that the figure-of-merit quality would not correlate with the entropy of the associated target. All hypotheses were confirmed. Our results are part of the growing evidence that AC is mediated through a sensory channel.

Keywords: anomalous cognition - pattern analysis - entropy gradient

Introduction
Lantz, Luke, and May (1994) reported on two experiments, the first of which was conducted in 1992, to test sender condition and target type in an anomalous cognition (AC) experiment. The hypotheses in these studies addressed whether a sender is necessary for AC information transfer and whether AC performance differs when the targets are static photographs or dynamic material, such as videotape. Lantz, Luke, and May (1994) found that a sender is not a necessary component in a successful AC experiment and that the data supported, but not significantly, a target-type preference in favor of static material. Because there were no significant interactions in a 2 x 2 analysis of variance (ANOVA), the data were combined across the sender versus no-sender condition. Blind ranking achieved a sum of ranks for the static targets of 265, in which the chance expectation was 300, leading to an effect size of 0.248 and p = .007. The analysis


of the 100-trial dynamic targets led to a sum of ranks of 300, an effect size of 0.000, and p = .500.

A second experiment was conducted one year later and was also reported in Lantz, Luke, and May (1994). In that study, a sender was not used, and the protocol differed considerably from the first experiment. Four participants contributed a total of 45 trials in two target-type conditions, leading to a combined effect size of 0.550 for the static targets and the same value for the dynamic ones.

Lantz, Luke, and May (1994) discussed the apparent contradiction between the results of their two studies. They speculated that the static targets were better in their first study because of a lack of content parity between the static and dynamic targets. Nonetheless, this does not explain the similarity between static and dynamic targets in their second study. In addition, their results are inconsistent with those of some of the Ganzfeld research regarding static versus dynamic targets. Bem and Honorton (1994) found that dynamic targets produced better results in the Ganzfeld experiment than did static targets.

The data from both of Lantz, Luke, and May's (1994) studies were analyzed to investigate whether AC performance depended on the gradient of Shannon entropy of the targets (May, Spottiswoode, and James, 1994a). This idea arose from our laboratory's anecdotal evidence that AC functioned particularly well when targets were especially dynamic, that is, when targets involved large changes of energy or entropy, such as underground nuclear explosions, particle accelerators, or rocket launches. In several instances, AC was outstanding when targets underwent massive changes in energy or entropy in a very short period of time during the session. Bem and Honorton's finding also suggests that an entropic change in the target might lead to better results. As a possible explanation for these observations, consider that AC may be mediated through a specialized sensorial system and that this system might behave similarly to the five known sensorial systems. We might reasonably expect, then, that AC would correlate positively with changes in the sensor-input signal and correlate less well with the level of the sensor-input signal itself. In vision, for example, the system is sensitive to changes in brightness across the field, but relatively insensitive to the absolute level of illumination. Analogously, we hypothesized that the AC system might be sensitive to changes in the level of information content across a target, but insensitive to the absolute level of that measure.

In the first experiment, May, Spottiswoode, and James (1994a) found a significant correlation between the gradient of the Shannon entropy of the target and the quality of AC (Spearman rank-order correlation coefficient r_s = .452, df = 26, t = 2.58, p = 7.0 × 10⁻³) for static photographic targets. Unfortunately, with dynamic targets, there was little evidence of AC and a resulting small correlation with the gradient of the entropy. In the second experiment, they found strong evidence for AC in both static and dynamic targets; for the two target types combined, the correlation between AC performance and entropy gradient was r_s = .337, df = 31, t = 1.99, p = .028. As predicted, the correlation with the entropy itself was considerably smaller (r_s = .234, t = 1.34, df = 31, p = .095). The correlation for the combined static targets from both studies was r_s = .161, df = 41, p = .152. Because of the different target systems and protocols in these studies, the results remain somewhat ambiguous. This report provides a detailed description of an experiment to replicate the May, Spottiswoode, and James (1994a) entropy findings.

Hypotheses
The primary hypotheses were:

1. A significant correlation exists between the quality of AC, as measured by a fuzzy-set technique, and the gradient of Shannon entropy of its associated target.

2. The correlation of the gradient with the quality of AC, as measured by the upper half of the rating scale shown below in Figure 2, will be consistent with the static-target correlation seen in the earlier experiments (May, Spottiswoode, and James, 1994a).
The concept of fuzzy sets was first applied to the analysis of AC data by May et al. (1990). A fuzzy-set definition of a target is similar to the commonly used descriptor lists in which an analyst is asked to ascribe the presence or absence of each element in a list of items. Instead of a forced yes or no to the presence of an element, such as water, a fuzzy approach allows for a quantitative coding of a subjective impression. For example, water might be 30% visually impacting in a target and therefore is coded as 0.3 rather than either 1 or 0. A response is coded in a similar way. Three quantities are defined from the fuzzy-set representation of a target and a response. The accuracy is defined as the percent of the target that was described correctly; the reliability is defined as the percent of the response that was correct; and the figure of merit is defined as the product of the two. Because the fuzzy-set measure is less granular than a rating measure, being a ratio rather than an ordinal scale, we chose it as the primary measure. The rating scale correlation was included as a historical link to our earlier experiments. A more detailed description of the technique can be found in the AC Data Analysis section, below. We also hypothesized that the correlation of the figure of merit with the total entropy of the target would be much less than the correlation with the gradient of the entropy.

Experiment Protocol
In contrast with the majority of our earlier AC studies, we designed a protocol in which the receivers were physically located from 5 to 4,500 km from the


laboratory. In addition, many aspects of the experiment were handled automatically by two separate computers.

Target Pool Construction

For this experiment, we developed a completely new target pool based exclusively on the Corel Stock Photo Library of Professional Photographs. This library of copyright-free images is provided in digital form and comprises 100 images on each of 200 CD-ROMs. Each image is approximately 18 MB in size, which corresponds to a landscape-format picture of 3200 x 1875 pixels in 24-bit color. Corel also publishes a booklet of thumbnail images of the complete set.

Selection Criteria

The first stage in constructing the target pool consisted of creating a design specification of the type of photographs that would qualify as a potential AC target. Based on earlier experience (May et al., 1990), we adopted a series of guidelines. First, the photographs had to possess common properties.

Thematic coherence: Each photograph had to be a real scene, as opposed to a collage. Where possible, the photographs also possessed elements that could be sketched easily.

Size homogeneity: The photographs did not contain any surprises with regard to size. For example, there would not be a photograph of a brick followed by one of a mountain range.

Pool coherence: All of the photographs would depict outdoor scenes.

The following elements would not be included in the pool, by construction or by photographic editing: people; transportation devices (e.g., boats, cars, etc.); and small human artifacts (e.g., tools, toys, etc.). We made every effort to remove these kinds of items, although they may have been present in some photographs. If so, they were difficult to see and were insignificant relative to the rest of the scene. Finally, we would not allow odd camera angles, unusual or distorted perspectives, or odd or unusual lighting conditions. Aside from the above restrictions, the target pool photographs could show any scene at any location. Following these guidelines, we rejected approximately half the original set of 20,000 photographs by visual inspection of the thumbnail images.

Our long-standing earlier target pool consisted of 100 photographs divided into 20 packets of five dissimilar images each.

In that pool, a target for a trial was determined by first choosing a random integer between one and 20 to select a packet and then choosing a random integer between one and five to select a target. The remaining four targets within the selected pack then served as decoys for a blind analytical assessment by rank ordering. For the development of this new pool, we chose a different approach; namely, the analysis decoy target images would be determined after the AC trial was complete. To assure that we could do this in a blind and algorithmic fashion, we adopted a hierarchical design of groups, categories, and images. A group consisted of five categories, and each category contained five images. The images within a category would be as much alike one another as possible, although they had to be of different scenes. Differing perspectives of the same scene were not included. Thus, a single category of "waterfalls," for example, would contain five similar, but different, waterfalls. In contrast, we made every attempt to choose categories within a group to be as different from one another as possible, in other words, to make them orthogonal. For example, we would not have a "river" category in the same group as a "waterfall" category. The number of different groups was determined by the remaining 10,000 images that survived the first cut. Two laboratory personnel examined all 10,000 images on a high-resolution computer display, and approximately 800 candidate photographs met the above acceptance criteria. After some digital editing, we identified from this set of 800 photographs 12 groups of 25 images for a total of 300 targets. Table 1 shows the categories that were identified for each of the 12 target groups. No attempt was made to force the categories to be orthogonal across groups. Figure 1 shows an example of the digital editing of an image that was not selected as part of the pool to illustrate the capability to modify an image to conform to the construction guidelines. In the temple scene, nearly all the people were removed by making reasonable guesses as to what the image would have

TABLE 1
Categories for Each Target Group

Group ID    Category 1    Category 2    Category 3     Category 4    Category 5
1           Bridges       Canyons       Cities         Structures    Waterfalls
2           Bridges       Cities        Fields         Mountains     Structures
3           Bridges       Lakes         Mountains      Structures    Towns
4           Bridges       Mosques       Mountains      Roads         Waterfalls
5           Bridges       Churches      Deserts        Mountains     Pyramids
6           Fields        Islands       Roads          Ruins         Waterfalls
7           Cities        Coasts        Deserts        Waterfalls    Windmills
8           Coasts        Fields        Lighthouses    Mountains     Rivers
9           Buildings     Coasts        Pyramids       Vineyards     Waterfalls
10          Buildings     Coasts        Fences         Lakes         Rocks
11          Fields        Structures    Rivers         Ruins         Streets
12          Coasts        Mountains     Roads          Ruins         Towns

Note: All Structures in the table represent Asian structures.


Fig. 1. An example of digital editing.

TABLE 2
Universal Set of Elements (USE)

Buildings
Villages, towns, cities
Ruins
Roads
Pyramids
Windmills
Lighthouses
Bridges
Coliseums
Hills, cliffs, valleys
Mountains
Land-water interface
Lakes, ponds
Rivers, streams
Coastlines
Waterfalls
Glaciers, ice, snow
Vegetation
Deserts
Natural
Manmade
Prominent, central
Textured
Repeat motif
been behind each individual. As the final step in preparing an image for the target pool, the picture was cropped, if necessary, and resized to 800 x 600 pixels, each having 24 bits of color information.

Fuzzy-Set Encoding

To facilitate subsequent computer analysis of AC trials, the images were encoded using a system of descriptive elements. Each element was assigned a fuzzy-set membership value for each image. We created a universal set of elements (USE), including 50 elements that we selected from the original set of 131 elements used in our earlier work (May et al., 1990). We also added elements for features that were unique to this particular set of photographs. Six individuals each coded all 300 images against this USE. As in earlier work, the coding criterion was the degree to which each element was visually impacting to the general scene. The range of visual impact ran from 0 to 1 in steps of 0.1. For example, in the bottom image in Figure 1, we might code 0.6 for "buildings" and 0.3 for "repeat motif." The principal investigator selected 24 elements out of the 50 and qualitatively condensed the scorings from the six coders to a single "consensus" fuzzy-set representation of the targets. These 24 elements were selected on the basis of extensive experience, as well as on the formal analysis of a single study. The principal criterion used in the selection was that the elements should not be too "low level," such as lines and geometric shapes, nor should they be too "high level," such as an office building. These 24 elements were an attempt to strike a compromise between these two extremes. Table 2 shows the 24 elements that comprised the final fuzzy-set USE.¹

Receiver Selection

Five experienced receivers participated in this experiment. They were chosen on the basis of their availability, their willingness to participate in a
¹ A detailed description of the target pool construction and its associated fuzzy-set encoding is currently being considered as a separate paper.


lengthy AC study, and especially on their previous and sustained superior performance.

Number of Trials
The total number of trials for this study was 75 (i.e., 15 for each receiver) and was determined, in advance, by receivers' availability and statistical power considerations. We used the average effect size of 0.550 from a previous similar experiment (Lantz, Luke, and May, 1994) to compute a statistical power of 68% to reach significance (i.e., p = .05) for a single receiver and a power of 99% to reach a significant study.
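These power figures can be reproduced with a simple normal approximation. The sketch below (Python) assumes a one-tailed test at p = .05 and is offered as an illustration rather than the authors' original calculation.

    from math import sqrt
    from statistics import NormalDist

    def power(effect_size, n_trials, alpha=0.05):
        """Power of a one-tailed z test for a given per-trial effect size."""
        z_crit = NormalDist().inv_cdf(1 - alpha)
        return NormalDist().cdf(effect_size * sqrt(n_trials) - z_crit)

    print(power(0.55, 15))   # ~0.68 for a single receiver (15 trials)
    print(power(0.55, 75))   # ~0.99 for the whole study (75 trials)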

Trial Protocol
We designate Experimenter 1 and Experimenter 2 as E1 and E2, respectively. E1 was located in the laboratory in Palo Alto, California, and E2 was located in a laboratory in Los Angeles. The complete target pool was independently installed on E1's and E2's computers. Note that all communication between E1 and E2 occurred only by e-mail. At a prearranged scheduled time, the following events took place in the order shown:

- E1 requested that E2 generate a target for the upcoming trial.
- E2 invoked a computer program that first randomly selected one of the 12 groups, then randomly selected one of the five available categories in that group, and then randomly selected a target image from within that category of five images. The program saved its choice to a binary file and did not notify E2 about any aspect of the selection (a sketch of this selection and decoy logic appears at the end of this section).
- E2 notified E1 that the selection process was complete.
- E1 telephoned the scheduled receiver and acted as a monitor for an AC session lasting from 5 to 15 minutes. The receiver drew and wrote the impressions and faxed them to E1 at the end of the session.
- E1 requested that E2 generate a decoy set.
- E2 invoked a second computer program that read the binary file containing the target information and randomly selected a target image from each of the four remaining categories within the selected group. The four decoy target numbers and the intended target number were randomly ordered and then automatically e-mailed to E1.
- E1 analyzed the session and e-mailed the results to E2. At this point, nobody was aware of the selected target, and the analysis was complete before the receiver obtained feedback.
- E2 invoked a third program that read the original binary file and e-mailed the actual target number to E1.

- E1 posted the target photograph on a Web site to which only the receiver had access and then telephoned the receiver to provide verbal feedback and to prompt the receiver to access the Web site for visual feedback.

All transactions were logged, and session and analysis details were automatically stored in a database. Typically, such a trial would be complete in 30-60 minutes. Furthermore, in contrast to our earlier studies, the analysis was completed on each trial before anyone was aware of the intended target.
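The selection and decoy-generation steps above lend themselves to a compact illustration. The sketch below (Python) is not the experimenters' software; the file name, constants, and the use of a pseudorandom generator are assumptions made only for illustration.

    import random, pickle

    # Target pool structure: 12 groups x 5 categories x 5 images.
    N_GROUPS, N_CATEGORIES, N_IMAGES = 12, 5, 5

    def select_target(path="trial_choice.bin"):
        """E2's first program: pick a group, then a category, then an image,
        and save the choice to a binary file without displaying it."""
        choice = (random.randrange(N_GROUPS),
                  random.randrange(N_CATEGORIES),
                  random.randrange(N_IMAGES))
        with open(path, "wb") as f:
            pickle.dump(choice, f)

    def make_analysis_pack(path="trial_choice.bin"):
        """E2's second program: read the saved choice, draw one decoy image
        from each of the four remaining categories in the same group, and
        return the five target identifiers in random order."""
        with open(path, "rb") as f:
            group, category, image = pickle.load(f)
        pack = [(group, category, image)]
        for decoy_cat in range(N_CATEGORIES):
            if decoy_cat != category:
                pack.append((group, decoy_cat, random.randrange(N_IMAGES)))
        random.shuffle(pack)
        return pack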

AC Data Analysis

We had decided to perform three separate analyses on the AC data. The first of these was a standard rank ordering of the target pack, which consisted of four decoys and the intended target. E1 was presented with the words and drawings along with the target pack associated with the trial, and the task was to rank order the targets from the best to the worst match to the response. After all N trials were analyzed for a single receiver, a continuity-corrected effect size was computed as

    ES = (3.0 − R_avg − 0.5/N) / √2,

where R_avg is the average rank over the N trials, 3.0 is the mean chance expectation of the rank and √2 its standard deviation for five equally likely ranks, and the last term in the numerator is a continuity correction for small N. The z score associated with this effect size is given by

    z = ES × √N.

The rank-order analysis was designated, in advance, as the primary indicator of AC in the study. Because the primary goal of the experiment was to explore the relationship between the gradient of Shannon entropy and the quality of the remote viewing, we performed two additional analyses. Lantz, Luke, and May (1994) showed that assessing AC performance by rank ordering is not optimal for correlation studies, for two reasons. First, the rank number is strongly dependent on the degree to which the photographs in the analysis pack differ from one another. Second, the ranking method discards information about the absolute quality of the match; it only describes the relative closeness of the match in comparison to the decoys. Consequently, a perfect match between a response and a target would be assigned the same first-place rank as a response that corresponded far less closely but nonetheless was sufficient to allow the analyst to assign a first-place match. For historical reasons and for comparison with earlier entropy experiments, we used a slightly modified version of the 0 to 7 rating scale. Figure 2 shows a screen capture image of the scale that was presented to E1 during the analysis. To be assigned a given assessment value, the correspondence between target and response must meet one of the criteria shown in Figure 2. As before (May,


Fig. 2. The 0 to 7 Assessment Scale.

Spottiswoode, and James, 1994a), the scale was divided into two sections: an assessment of four and above indicated possible AC contact with the target, and three and below indicated no contact. We recognized a number of difficulties with the rating scale. The assessment values are granular; that is, they are integers with no possibility of values in between. More important, the scale does not account for the amount of material in the response. For example, the response could be simply the word "city" and receive a value of 7 for the match to a city target, even though there might be many elements, such as a river, a bridge, and a mountain background, in addition to the city in the target. Because of its extensive previous use, this rating scale was included in the analysis but defined only as a secondary measure to be used in the entropy correlation analysis. The primary measure of the absolute correspondence of a response to its intended target was the fuzzy-set-based figure of merit, whereas the rank-order statistic was used as the primary measure for overall AC.

May et al. (1990) provided a partial solution to the problem associated with the rating scale through the use of fuzzy sets. We used a fuzzy-set measure for an assessment of the degree of correspondence between a response and a target. We defined the figure of merit (FM) as the accuracy times the reliability. The accuracy is the percentage of the target image elements that were described correctly, and the reliability is the percentage of the response elements that were correct.


Although neither accuracy nor reliability alone is a sufficient measure of AC, the product of the two is. Formally, the accuracy is defined by

    accuracy = [ Σ min(T_j, R_j) ] / [ Σ T_j ],        (0 ≤ accuracy ≤ 1)

where the sums run over j = 1 to N and N is the number of elements in the USE. Similarly, reliability is defined by

    reliability = [ Σ min(T_j, R_j) ] / [ Σ R_j ],     (0 ≤ reliability ≤ 1)

where T_j and R_j are the fuzzy-set membership values for the target and response, respectively, and min(T_j, R_j) means the minimum of the two quantities. The figure of merit (FM) is the accuracy × reliability.

The fuzzy-set analysis for each trial occurred as follows. After the response was received by fax and while blind to the target, E1 scored each element in Table 2 as to the degree to which that element was contained in the response. If the response contained the word "waterfall," then by definition, the waterfall element would receive a score of one. If, however, there was a vague sketch that might look slightly like a waterfall, then that element might only be scored as 0.3. Thus the entire USE was scored before E1 was shown the analysis target pack. E1 then displayed the five targets for the trial, performed the rank ordering and the 0- to 7-point scale assessment, and finally entered the two target-dependent fuzzy-set elements. All results were inserted into a database for subsequent analysis.

To summarize, the fuzzy-set elements for the targets were assigned, before the experiment, to represent the degree to which each element was visually impacting in the scene. The response elements were scored as to the degree to which the element was contained in the response. At this time, we have little evidence that a receiver is capable not only of recognizing that an element is in a target but also of determining its visual impact. For example, we rarely received a statement such as, "There is a river in the target but it is hardly noticeable." Thus, the target fuzzy-set encoding contained more information than could be obtained easily with AC. At this stage of our understanding of AC, we must be content with a simple recognition on the part of the receiver as to the presence or absence of a particular element. Therefore, before the calculation of the accuracy, reliability, and FM, we converted the target fuzzy set to a crisp set, containing only 1 for presence and 0 for absence for the membership values of the elements in the USE.


This process is called an alpha cut in fuzzy-set parlance. That is, we specify a threshold for the fuzzy-set membership value so that an element equal to or above that threshold is converted to a 1 and is set to 0 otherwise. We adopted a threshold value of 0.2 to remove some of the noise "clutter" of 0.1-encoded elements. This value was empirically determined to be a reasonable one (May et al., 1990). An alpha cut was not applied to the response, because the fuzzy element represented the degree to which the analyst felt that the given element was represented in the response. Finally, we added two additional elements, which were independently scored for each target in the analysis pack, to the USE shown in Table 2. The element "visual" was an assessment of the degree to which the drawings, independent of the labels or other written material, matched a target image. The element "analytic" was the degree to which the written material, independent of the drawings, matched a target image. By definition, these elements were scored as 1 for all targets and were added to the consensus-scored fuzzy-set representation of the targets. Thus, in the equations for accuracy and reliability, N is equal to 26, with 24 elements coming from Table 2 and two coming from these additional elements.
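For concreteness, the figure-of-merit computation described above can be sketched as follows (Python). This is an illustration, not the authors' analysis code; the toy membership values are invented, and the formula follows the accuracy and reliability definitions given earlier.

    def figure_of_merit(target, response, alpha=0.2):
        """Fuzzy-set figure of merit for one trial.

        `target` and `response` are equal-length lists of membership values
        (0-1) over the universal set of elements.  The target is reduced to a
        crisp set with an alpha cut at 0.2; accuracy is the fraction of the
        crisp target covered by the response, reliability the fraction of the
        response that is correct, and FM = accuracy * reliability.
        Assumes at least one target element survives the alpha cut and at
        least one response element is nonzero."""
        crisp_target = [1.0 if t >= alpha else 0.0 for t in target]
        overlap = sum(min(t, r) for t, r in zip(crisp_target, response))
        accuracy = overlap / sum(crisp_target)
        reliability = overlap / sum(response)
        return accuracy * reliability

    # Toy example with a 4-element USE (values are illustrative only).
    target   = [0.6, 0.3, 0.1, 0.0]
    response = [1.0, 0.0, 0.3, 0.0]
    print(figure_of_merit(target, response))   # ~0.38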

Entropy Analysis
An entropic analysis of a photographic image is an assessment only of intensity patterns and does not include any cognitive information. In this context, the gradient means transitions between light and dark regions. The details of how such an analysis is conducted can be found in May, Spotiswoode, and James (1994a). We will, however, summarize the approach here. The Shannon entropy for a single color plane with a depth of eight bits is given by:
S = -\sum_{j=0}^{255} p_j \log_2 p_j ,
where p_j is the probability of observing an intensity value of j. The following discussion holds separately for each of the three color planes. The total entropy is the sum of the three color entropies. We computed this entropy for all targets in the following way: Each image was divided into m × n patches, where we constrained the patch size to be evenly divisible into 800 and 600, the standardized target size. The patch sizes chosen were 4, 8, 20, 40, and 100 pixels square. For a given size, we computed the entropy for each patch across the photograph. For example, using a patch size of 20, we would compute the entropy for each of the 40 × 30 different patches. The p_j values were determined from the empirical intensity counts contained in each patch. Finally, using standard numerical techniques, we computed the average absolute magnitude of the gradient² in this 2-dimensional entropy space.

² The gradient is a formal measure of the "steepness" of the "hills" and "valleys" in the entropy space.



Fig. 3. Low and high entropy gradient images (top). Entropy per patch for two images (bottom).

Figure 3 shows images that have a low entropy gradient, such as a pyramid, and those that have a high entropy gradient, such as a bridge. Both entropy plots have the same vertical scaling of 20 bits.³ The steeper gradient of the "hills" and "valleys" in the bridge plot results in that image having a 365% greater entropy gradient than the pyramid image has. The entropy gradient calculation was performed for all of the patch sizes shown above for all targets. Additionally, we calculated what we call the total entropy, in which we computed a single value for the entire picture. That is, the single patch size was 800 × 600. All results were stored in the database for later analysis.
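A minimal sketch of the patchwise entropy and gradient computation, assuming the procedure summarized above (8-bit color planes, non-overlapping square patches, empirical intensity histograms, and a numerical gradient over the resulting patch grid); the random test image is a stand-in for an actual target photograph.

```python
import numpy as np

def patch_entropy(plane, patch=20):
    """Shannon entropy (bits) of each non-overlapping patch of one 8-bit color plane."""
    h, w = plane.shape
    rows, cols = h // patch, w // patch
    ent = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = plane[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            counts = np.bincount(block.ravel(), minlength=256)
            p = counts / counts.sum()
            p = p[p > 0]
            ent[i, j] = -(p * np.log2(p)).sum()
    return ent

def mean_gradient_magnitude(entropy_map):
    """Average absolute magnitude of the numerical gradient of the entropy surface."""
    gy, gx = np.gradient(entropy_map)
    return np.mean(np.hypot(gx, gy))

# Illustrative 800 x 600 image; the total entropy map is the sum over the three color planes.
rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(600, 800, 3), dtype=np.uint8)
maps = [patch_entropy(image[:, :, c], patch=20) for c in range(3)]
total_map = sum(maps)                          # 30 x 40 grid of patch entropies
print(mean_gradient_magnitude(total_map))
```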

Correlation Analysis

To closely approximate the patch size that was used in our earlier studies (May, Spottiswoode, and James, 1994a), we adopted a patch size of 20 × 20 as

³ The maximum entropy is 24 bits, given that the sum is over all three 8-bit color planes.



the primary value for the correlation calculations. We did, however, examine any effects as a function of patch size. Similarly, we divided the assessment scale in half and used only the upper half for the correlation calculations, but we examined any effects as a function of scale division. For the correlation of the gradient and entropy with the rating scale, we used the conservative, nonparametric Spearman's r method and converted the observed correlation to a standard normal deviate with Fisher's Z transform. The FM values are more nearly continuous as a consequence of the algorithm that produces them. Nonetheless, even in this case, we used the more conservative Spearman's r to compute the correlation.
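For reference, the conversion from a Spearman correlation to a standard normal deviate can be sketched as follows (SciPy's spearmanr plus the Fisher transform Z = arctanh(r)·sqrt(n − 3)); the data below are simulated, and we do not know what software the authors actually used.

```python
import numpy as np
from scipy.stats import spearmanr, norm

def spearman_z(x, y):
    """Spearman correlation converted to a standard normal deviate via Fisher's transform."""
    r, _ = spearmanr(x, y)
    z = np.arctanh(r) * np.sqrt(len(x) - 3)
    return r, z, norm.sf(z)          # single-tailed p value

# Simulated example: 75 trials with a weak positive relationship.
rng = np.random.default_rng(2)
gradient = rng.normal(size=75)
fm = 0.2 * gradient + rng.normal(size=75)
print(spearman_z(gradient, fm))
```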

Results
The results fall into the two categories of evidence for AC and correlation effects. All p values are quoted as single-tailed.
AC Results

Table 3 shows the average rank, continuity-corrected effect size, and associated p value for the five participants' 15 trials. The results, using the rank-order statistic illustrated in Table 3, show no AC in this study, either for individual receivers or overall. The effect size falls below what we have come to expect from this group of receivers. We shall return to this point in the Discussion section. Note that the effect size for the total is not the average of the effect sizes for the individual receivers. This is because, taken as a study with 75 trials, the continuity correction is different.
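For readers unfamiliar with this statistic, the sketch below shows one common way of computing a continuity-corrected z score and effect size from blind ranks against five-target packs (expected rank 3, variance 2 under the null); the ±0.5 correction convention and the example ranks are our own assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def rank_order_stat(ranks, n_targets=5):
    """Sum-of-ranks z score, per-trial effect size, and single-tailed p value."""
    ranks = np.asarray(ranks, dtype=float)
    n = ranks.size
    mu = (n_targets + 1) / 2.0           # 3.0 for five-target packs
    var = (n_targets ** 2 - 1) / 12.0    # 2.0 for five-target packs
    # Low rank sums indicate hitting; apply a 0.5 continuity correction (assumed convention).
    z = (n * mu - ranks.sum() - 0.5) / np.sqrt(n * var)
    return z, z / np.sqrt(n), norm.sf(z)

print(rank_order_stat([1, 3, 2, 5, 4, 3, 2, 1, 4, 3, 5, 2, 3, 1, 4]))  # 15 invented ranks
```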
Entropy Results

For our primary hypothesis, which requires a correlation test with the figure of merit, we find a Spearman's r of 0.212, with 73 degrees of freedom, for the average magnitude of the gradient of Shannon entropy correlated with the figure of merit. This corresponds to Z = 1.83, p = .034. Figure 4 shows the scatter diagram for the gradient versus the figure of merit. Although the sum-of-rank statistic did not show significant evidence for AC, the primary hypothesis was confirmed. That is, the quality of the AC, as measured by the figure of merit, significantly correlated with the gradient of Shannon entropy for a patch size of 20.
TABLE 3
AC Results
(Columns: Receiver, Average Rank, Effect Size, p value; one row each for receivers 8, 127, 221, 497, and 937, plus a Totals row.)



Fig. 4. Correlation of FM with entropy and its gradient.

The points for the gradient are shown as triangles, and the regression line is shown as the solid line in Figure 4. Next, we examined the correlation of the gradient of Shannon entropy with the assessment scale, using values greater than three. With a patch size of 20, this correlation most closely replicates the earlier work. The second hypothesis was also confirmed. The combined static-target results from the previous two studies produced a positive correlation (r_s = .161, df = 41, p = .152). In this study, as measured by the upper half of the rating scale shown in Figure 2, the gradient of Shannon entropy did correlate with the quality of AC (r_s = .146, df = 23, p = .246) at nearly the same level as before. Finally, the secondary hypothesis was confirmed as well: the correlation between the total entropy and the figure of merit was small (r_s = .042, df = 73, p = .362). The points for the entropy are shown as × in Figure 4, and the regression line is shown as dashed.

Discussion
AC Result

Utts (1995) has shown that our experienced receivers exhibit a consistency of AC effect size over time, a result consistent with our own observations.



Because of this, we have come to expect significant results when, as in this study, we have sufficient statistical power to observe a significant result. When a study fails to exhibit significant evidence for AC, we are usually suspicious that some aspect of the protocol was responsible for the decrease in study effect size. There were several protocol differences between this study and our usual method: a new target pool was used, the primary analysis was completed before feedback was given, and the subjects were physically remote from the lab. We do not expect that the first two points are a factor, but the last probably is. This is not to suggest that distance between target and receiver is a modulating variable. Rather, we suggest that it is a matter of attention to the task.

In the past, we have invited our receivers to the laboratory for their sessions. Many times, this involved flying them across the United States for a week-long visit on two or three separate occasions. In these cases, when a receiver was present in the laboratory, they had our full attention for the trial. All activity in the laboratory was focused on that single trial. In this study, the monitor called each receiver at a prearranged time and conducted a short session by telephone. The trials therefore amounted to relatively brief interludes in the otherwise busy schedules of both the receivers and the experimenters. These are psychological conditions unlike the intense focus during trials in our earlier studies. There are numerous laboratory anecdotes about excellent AC performance under high attention. The "Put to the Test" AC trial that was shown on national U.S. television is just one example.⁴ In this example, approximately 10 people had their full attention on a trial that cost an estimated $100,000; the result was a near-perfect correspondence between responses and targets.

These kinds of arguments can only be speculation, of course. One of the benefits of working with the same receivers over a protracted period of time is that our observed individual performance consistency allows such speculation. In this case, all of the receivers have been participating in experiments of this nature for more than 15 years. Further studies in which the receivers are in the laboratory will test this possible explanation.
Entropy Result

Figure of merit assessment of the quality of the AC. As shown in Figure 4, we observed a significant correlation between the gradient of Shannon entropy and the quality of the AC as measured by the figure of merit. This correlation, however, was observed at a patch size of 20 × 20 pixels. A question arises about a possible dependency of the correlation on patch size. Table 4 shows the patch size, Spearman's r (df = 73), its associated Z score, and the p value tested against r_s = 0. Except for patch sizes of 40 and 100, we see a consistent correlation as a function of the patch size.
⁴ … Productions, Sherman Oaks, CA; November 28, 1995.

TABLE 4
Patch-Size Dependence
(Columns: Patch, Spearman's r, Z Score, p Value.)

Perhaps the decrease for the larger patches occurs because the details of the intensity features are lost as they become an increasing fraction of the picture. For example, consider an 800 × 600 pixel image. Patches of size 40 and 100 correspond to 0.3% and 2.1%, respectively, of the total area. These numbers intimately depend on the details of the target pictures in the study and do not generalize.

A more important consideration, however, is to determine what other circumstances might induce an apparent correlation between the entropy gradient and the figure of merit. Because the targets were chosen randomly, the probability of matching a given response to the intended target is 20%, regardless of response bias on the part of the receiver or judging bias on the part of the analyst. In particular, analyst bias cannot systematically affect the figure of merit values because of this blind assessment. Thus, a number of potential artifacts are eliminated because of the differential match and the random selection of the target. Nonetheless, it might be that there is some variable that correlates independently both with the gradient of the entropy and with the figure of merit. One such candidate is the cognitive complexity of the target. If the gradient of the entropy correlated significantly with some measure of cognitive complexity, and the figure of merit did so as well, then the observed correlation of the gradient of the entropy with the figure of merit would contain an artifact.

As May et al. (1990) showed, a reasonable estimate of the cognitive complexity is the fuzzy-set sigma count for each target. The sigma count is simply the sum of the membership elements in the fuzzy-set representation of the target. The USE, as shown in Table 2, represents high-level cognitive elements, whereas the USE that has been used in the past contained a large number of nonobject features such as ambiance, color, and low-level linear features. May, Spottiswoode, and James (1994a) reported a small and nonsignificant correlation of target sigma count and the gradient (r_s = -.028, df = 98, p = .609). In our current USE, however, the elements are all features that might contain significant intensity patterns and thus might show an overall correlation of gradient with sigma count. As expected, therefore, for all 300 targets in the pool, we observed a significant correlation between the gradient of Shannon entropy, computed for a patch size of 20, and the sigma count (r_s = .199, df = 297, p = 2.59 × 10⁻⁴).



For the 75 targets that were selected in the study, the correlation is larger (r_s = .359, df = 72, p = 7.10 × 10⁻⁴). The correlation of the sigma count with the figure of merit is small, however (r_s = .0017, df = 72, p = .494). To determine the impact of these two correlations on the correlation of the gradient with the figure of merit, we consider a general case. Suppose that there is a significant correlation between variables X and Y, r(X, Y). Suppose further that X and Y both independently correlate with a third variable, Z. We must determine the conditions on the magnitudes of these independent correlations under which the observed r(X, Y) would be an artifact. Assume that r(Y, Z) is unity (i.e., completely correlated). We then can replace Z with Y and consider r(X, Z) as r(X, Y). In this case, r(X, Y) is completely determined by r(X, Z). If r(Y, Z) is less than unity, then the contribution to r(X, Y) from the independent correlation with Z will be smaller than it is in the unity case. In our case, the correlation of the figure of merit with the sigma count is r_s = .0017, and the correlation of the gradient with the sigma count is r_s = .217 < 1. The contribution to the observed correlation of the gradient with the figure of merit from this potential artifact is therefore less than .0017.

Rating assessment of the quality of the AC. For historical and replication reasons, we examined the correlation of the gradient with the upper half of the blind rating scale shown in Figure 2. The Spearman's r was .146 (df = 23, p = .246), which was consistent with the combined correlation of r_s = .161, df = 41, p = .152 for the static targets in the earlier two studies (May, Spottiswoode, and James, 1994a).
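Returning to the artifact argument above: under a linear common-cause model, in which the gradient and the figure of merit are related only through the sigma count, the spurious correlation they would display is the product of their separate correlations with the sigma count. A minimal sketch, using the values quoted in the text:

```python
# Under a linear common-cause model (X and Y related only through Z),
# the implied correlation is r(X, Y) = r(X, Z) * r(Y, Z).
r_gradient_sigma = 0.217   # r(gradient, sigma count), quoted in the text
r_fm_sigma = 0.0017        # r(figure of merit, sigma count), quoted in the text

spurious = r_gradient_sigma * r_fm_sigma
print(f"implied artifactual correlation: {spurious:.5f}")  # about 0.0004, far below the observed 0.212
```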

Conclusions
The primary and secondary hypotheses were confirmed. That is, the gradient of Shannon entropy of the target appeared to correlate with the quality of AC, whereas the quality did not correlate with the entropy itself, a result that is suggestive of a sensory system.

We legitimately might ask how it is possible to see no AC in the study, as defined by the accepted rank-order technique, yet see a significant correlation with the gradient of the entropy. One way to understand this apparent contradiction is to examine closely the underlying assumptions of the two AC measurements that are involved, rank order and figure of merit. It is clear that the rank-order technique is a relative measure, which is strongly dependent on the orthogonality of the set of photographs in a judging pack. As an example, let us assume that the target is a small cabin next to a stream in the woods and that a minimal response includes flowing water but does not include the cabin or the woods. In the best-case scenario, suppose the pack orthogonality was such that only one picture contained any water at all. In this case, with a modest amount of AC, an analyst would have no trouble making a first-place match.



In the worst-case scenario, suppose the pack contained all five pictures with flowing water of various types, but only one also contained a cabin in the woods. In this case, the analyst would, on average, end up with a third-place match. Thus, we can see that the rank statistic for the same response strongly depends on the photographs in the judging pack. In the figure of merit analysis of this same example, the scores for all the photographs in the worst-case scenario might be identical, say 0.15, because the response matches each target with about the same small level of correspondence. In the best-case scenario, the figure of merit analysis will give the same score for the stream-and-cabin-in-the-woods target (i.e., 0.15), but all the decoy targets will score lower. The point is that the figure of merit for the intended target is independent of the content of the judging pack.

Many trials in the best-case scenario would likely yield a significant rank-order statistic, whereas in the worst case the rank order is exactly at chance. For the small amount of AC that was assumed in the example, one might come to either of these conclusions, depending on the judging-pack orthogonality. In our case, there is no question that the AC is far below what we have come to expect from our established receivers. Second, by the nature of our target-pool bandwidth (May, Spottiswoode, and James, 1994b), it is difficult to assure strong orthogonality. In the past, these types of targets have done well when the AC functioning is strong; however, when the functioning is weak, as in this experiment, a rough threshold of AC is needed to produce a significant rank-order statistic. As we have illustrated with the example, it is unlikely that the threshold is zero. That is, small amounts of AC might still produce a rank-order statistic near chance. A correlation, in contrast, is algebraically not sensitive to the absolute level of one or both of its variables; we could add a large constant to either the gradient or the figure of merit and would find exactly the same correlation. Therefore, we believe that we have replicated the earlier finding in which the quality of AC is correlated with the gradient of Shannon entropy and not with the entropy itself. This result is part of the growing and compelling evidence that AC is mediated through a sensory channel. This might be either some combination of the known senses or an additional one. Functional brain imaging may resolve this question by allowing us to directly observe neural functioning during anomalous cognition.
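The dependence of the rank statistic, but not the figure of merit, on pack orthogonality can be illustrated with a toy calculation (all numbers invented): the same weak "flowing water" response is judged against an orthogonal pack, in which only the target contains water, and a non-orthogonal pack, in which every decoy does as well.

```python
import numpy as np

def rank_of_target(fm_scores, target_index=0):
    """Rank (1 = best) of the intended target within a judging pack, by figure of merit."""
    order = np.argsort(-np.asarray(fm_scores, dtype=float))   # descending FM
    return int(np.where(order == target_index)[0][0]) + 1

# Intended target is index 0 in both packs; FM values are invented for illustration.
orthogonal_pack    = [0.15, 0.02, 0.01, 0.03, 0.02]  # only the target contains water
nonorthogonal_pack = [0.15, 0.16, 0.14, 0.17, 0.13]  # every decoy also contains water

print(rank_of_target(orthogonal_pack))     # 1: a first-place match
print(rank_of_target(nonorthogonal_pack))  # 3: a near-chance rank for the very same target FM
```

In both packs the intended target's figure of merit is 0.15, so a correlation with a target property is unaffected, while the rank statistic swings from a hit to chance.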

Acknowledgements
We thank Maggie Blackman, Bob Bourgeois, Nicola Kerr, and Lisa Woods of the Rhine Research Center for their heroic and tireless effort in coding the 50 fuzzy elements for each of the 300 targets. We also thank Laura Faith at the Cognitive Sciences Laboratory for her significant contribution in the selection of the target pool and for her coding expertise. Finally, we are deeply appreciative because this work would not have been possible without the generous support of the Fundação Bial of Porto, Portugal.

References
Bem, D. J., & Honorton, C. (1994). Does psi exist? Replicable evidence for an anomalous process of information transfer. Psychological Bulletin, 115, 4-18.
Lantz, N. D., Luke, W. L. W., & May, E. C. (1994). Target and sender dependencies in anomalous cognition experiments. The Journal of Parapsychology, 58, 285-302.
May, E. C., Spottiswoode, S. J. P., & James, C. L. (1994a). Shannon entropy: A possible intrinsic target property. Journal of Parapsychology, 58, 384-401.
May, E. C., Spottiswoode, S. J. P., & James, C. L. (1994b). Managing the target-pool bandwidth: Possible noise reduction for anomalous cognition experiments. Journal of Parapsychology, 58, 303-313.
May, E. C., Utts, J. M., Humphrey, B. S., Luke, W. L. W., Frivold, T. J., & Trask, V. V. (1990). Advances in remote-viewing analysis. Journal of Parapsychology, 54, 193-228.
Utts, J. (1995). In Mumford, M. D., Rose, A. M., & Goslin, D. A. (Eds.), An evaluation of remote viewing research and applications. The American Institutes for Research report, September 29.

Journal of Scientific Exploration, Vol. 14, No. 1, pp. 73-89, 2000

0892-3310/00 © 2000 Society for Scientific Exploration

Contributions to Variance in REG Experiments: ANOVA Models and Specialized Subsidiary Analyses

Princeton Engineering Anomalies Research
School of Engineering/Applied Science
Princeton University, Princeton, NJ 08544

Abstract- Judicious application of a complementary set of sophisticated analytic techniques to large databases from human/machine anomalous interaction experiments can extract subtle structural features that might elude more simplistic analyses. The combination of a multi-factor analysis of variance (ANOVA) with various subsidiary, ad hoc approaches suggested by the ANOVA or directly by the data, can establish an instructive hierarchy of salient physical and subjective parameters and illuminate some of their specific details. In this particular study, the dominant finding is a significant correlation of anomalous effects with prescribed intentions of the human operators, compounded of small contributions from many individuals across many experimental conditions. The grand concatenation, which includes all combinations of successful and unsuccessful parameters or conditions, shows a chance probability for this correlation with intention on the order of 10⁻⁴. The effect apparently is confined to non-deterministic devices; i.e., deterministic pseudo-random sources show no overall effect. The correlation with intention for non-deterministic sources alone has a chance probability of 10⁻⁶. Beyond operator intention, most of the other technical, procedural, and subjective parameters explored show unimpressive contributions to the overall variance, with a few notable exceptions that are clarified in the subsidiary analyses. For example, individual differences among operators are indicated, but there is a relatively normal distribution of effect sizes, within which a few participants are distinguished by consistent achievement over large databases. The temporal development of effect sizes shows a consistent pattern of initial success that declines but then recovers. There is essentially no evidence for a dependence of effect size on spatial or temporal separation, supporting other indications that ordinary physical variables have little impact on the anomalous interactions. In sum, although the composite ANOVA models explain less than 1% of total variance, implying very small and subtle effects, the analysis provides strong evidence that the anomalies are statistically robust; they are not due to chance fluctuations, but are demonstrably correlated with definable subjective factors.

Keywords: anomalies - ANOVA - consciousness - electronic random event generator - mind/machine interactions - models - REG - RNG

Introduction
Beginning about four decades ago, electronic random event generators or random number generators (REG or RNG) have been used in a wide range of laboratory experiments designed to test the hypothesis that human consciousness might interact directly with labile physical systems.



The results provide clear statistical evidence that the behavior of these devices deviates from chance expectation in correlation with the pre-defined intentions of participants in the experiments (Radin and Nelson, 1989). In 1979, the Princeton Engineering Anomalies Research Laboratory (PEAR) began collecting large databases in REG experiments using particularly rigorous controls and a variety of optional parameters to assess the character and replicability of such anomalous mind/machine interactions. Over a 12-year period of primary investigation, ten physical and psychological conditions were examined as possible mediating variables in the experimental results. A number of extensions and variations of the basic protocol were explored, using different REG sources as well as a selection of other physical systems, the performance of which was dependent on some form of random process. The experiments had accompanying calibrations which confirmed that the random sources were of high quality, producing data that consistently conformed to theoretical expectations in non-experimental conditions. In the active experiments, however, the data sequences and distributions were significantly correlated with the experimental variables, especially the operators' intentions to shift the means of the REG output distributions, and showed structure that could not be accounted for by chance fluctuations. This paper presents a compact summary of a formal analysis of variance (ANOVA), and a number of subsidiary ad hoc analyses of the results from these REG studies, comprising 1338 replications of the experiment, including a small set of variations on the basic protocol.

Equipment
The PEAR program has used three generations of random event generators, utilizing different primary sources of white noise but maintaining important common features of design. An original benchmark experiment employed a commercial random source sold by Elgenco, Inc., the core of which is proprietary. Elgenco's engineering staff describe this module as solid state junctions with precision preamplifiers, implying processes that rely on quantum tunneling to produce an unpredictable, broad-spectrum white noise in the form of low-amplitude voltage fluctuations (Nelson, Bradish, and Dobyns, 1989). A much simpler and more compact portable REG was based on Johnson noise in resistors, or so-called thermal noise, which also is a quantum-level phenomenon that produces a well-behaved, broad-spectrum fluctuation (Nelson, Bradish, and Dobyns, 1992). A later-generation device called the PEAR Micro-REG used a field effect transistor (FET) for the primary noise source, again relying on quantum tunneling to provide uncorrelated fundamental events that compound to an unpredictable voltage fluctuation. In all cases, the design begins with some white-noise frequency distribution. For example, the benchmark REG presents a flat spectrum ±1 dB from 50 Hz to 20 kHz.



A low-end cutoff at 1000 Hz attenuates frequencies at and below the data-sampling rate. This filtering, followed by appropriate amplification and clipping, produces an approximately rectangular wave train with unpredictable temporal spacing. Gated sampling, typically at a constant 1 kHz rate, yields a regularly spaced sequence of random bits, suitable for rapid counting. Other sources have been constructed that allow higher sampling rates, up to 2 MHz (Ibison, 1998), but this paper summarizes data from the standard unit only. Analog and digital processes are isolated by temporally alternating these operations to exclude contamination of the analog noise train by the digital pulses. To eliminate biases of the mean that might arise from such environmental stresses as temperature change or component aging, an exclusive or (XOR) mask is applied to the digital data stream. This is either a regularly alternating 1/0 pattern or a more complex mask comprising a randomly ordered array of all 8-bit bytes with equal occurrence of 1/0. The latter procedure also excludes all short-lag bit-to-bit and byte-to-byte auto-correlations. Finally, data for the experiments are presented and recorded in "trials" that are the sum of N samples (typically 200 bits) from the primary sequence, thus further mitigating any residual short-lag auto-correlations. The final output of the benchmark REG thus is a sequence of conditioned bits, and in the later devices, of bytes presented to the computer's serial port, which then are formed into a sequence of trials, usually generated at approximately 1 per second. Calibrations on all of the devices closely conform to statistical expectations for the mean, variance, skew, and kurtosis of the accumulated count distributions, and expectations for time-series of independent events (Nelson, Bradish, and Dobyns, 1989; Nelson, 1993; Nelson et al., 1997).
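A minimal sketch of this bit-conditioning stage (alternating 1/0 XOR mask to cancel mean bias, followed by summing 200 samples per trial); the raw bias and sequence length are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Raw sampled bits with a small hypothetical mean bias (e.g., from component drift).
raw_bits = (rng.random(200_000) < 0.51).astype(np.uint8)

# Regularly alternating 1/0 XOR mask: inverts every other bit, cancelling any constant
# bias of the mean while leaving the genuine bit-to-bit randomness untouched.
mask = np.tile([1, 0], raw_bits.size // 2).astype(np.uint8)
conditioned = raw_bits ^ mask

# Form 200-bit trials by summing consecutive conditioned samples (theoretical mean 100).
trials = conditioned.reshape(-1, 200).sum(axis=1)
print(raw_bits.mean(), conditioned.mean(), trials.mean())
```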

Experimental Design
The basic experimental designs also embody protocol-level protection against artifactual sources of apparent effect. Following a "tripolar" protocol, participants generate data under three conditions of pre-specified intention, namely to achieve high (HI) or low (LO) mean values, or to generate baseline (BL) data. With the exception of the intention held in the participant's mind, which is pre-recorded in computer files, these three conditions are otherwise the same; all potentially influential variables are maintained constant within an experimental session or series. In addition to the primary variable of tripolar intention, a number of secondary parameters are available as options that can be explored in separate sessions, and assessed as factors that may contribute to the experimental outcomes. These include:

1. Human variables such as the identity of the individual operators (participants), their gender, and whether they are "prolific," i.e., have performed sufficient replications of the experiment to permit robust comparisons. In this category we also include the replication number or serial position of the session as a factor that reflects operator experience.



2. Physical variables such as the different noise sources, including not only the true random sources described earlier, but also various hardware and algorithmic pseudo-random generators, designated as non-deterministic and deterministic sources, respectively.
3. Operational variables, including spatial separation of the operator from the machine (up to thousands of miles) and separations in time (up to several hours or, in a few cases, one to four days); information density (bits per second); the number of trials in automatically sequenced "runs;" the instruction mode (volitional or instructed); and the type of feedback provided to the operator. (A hypothetical record combining these parameters is sketched below.)
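To make the factor structure concrete, one session's record might be organized as follows; the field names and values are ours, purely illustrative, and not the PEAR database schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SessionRecord:
    """One tripolar session: the pre-recorded intention plus the secondary parameters."""
    operator_id: int                 # human variables
    gender: str
    serial_position: int
    source: str                      # physical variable: "non-deterministic" or "deterministic"
    location: str                    # operational variables
    assignment: str                  # "volitional" or "instructed"
    run_control: str                 # "automatic" or "manual"
    feedback: str
    intention: str                   # primary variable: "HI", "LO", or "BL"
    trials: List[int] = field(default_factory=list)   # 200-bit trial sums

example = SessionRecord(
    operator_id=10, gender="F", serial_position=3,
    source="non-deterministic", location="local",
    assignment="volitional", run_control="automatic",
    feedback="digital", intention="HI", trials=[101, 99, 100],
)
```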

Analysis of Variance
The benchmark REG database was accumulated over a period of 12 years, with contributions from 108 individual operators, 30 of whom met our criterion for prolific operators by generating a minimum of 10,000 trials per condition (the equivalent of 10 experimental series or replications). The primary database of 1262 independent replications, comprising a total of 5.6 million 200-bit trials, was analyzed by a regression-based ANOVA, previously reported (Nelson et al., 1991). Since this phase was completed, two smaller data sets have been added, for a total of 1338 replications, and new versions of two factors included in the original analysis have been defined (the random source variable is simplified; operator identification is now enhanced with gender specification) to help understand the results of the original analysis and those of other special-purpose, detailed analyses. Although they are defined a posteriori, they provide legitimate assessments of the questions they represent, within the context of the full complex of potential influences in this experiment. The discussions that follow thus rely on the original analysis, supplemented by additional information derived from the expanded assessment using the new factor definitions.

The REG database is complex, involving nine analytical factors with two to five levels, and a tenth (operators) with more than 100. The analytical matrix has unbalanced cells, requiring that the ANOVA be based on multiple regression modeling, employing a "model-comparison" procedure for partitioning the regression sum of squares. In addition to the comprehensive model and a similar analysis addressing only the contributions of prolific operators, a number of smaller models have been used to examine the details of variance contributions in particular subsets and in individual operator databases. Details of these may be found in the earlier technical report (Nelson et al., 1991).

The formal models indicate that anomalous effects appear as small statistical signals in a background of noise; in most cases the amount of variance



explained is on the order of one percent or less. Hence, it is only through the accumulation of large databases that these small effects can be examined.

Full Database Analysis

The tables, figures, and comments in this section summarize the major results of the original 1991 analysis of variance (Nelson et al., 1991), based on models for all (1262) replications, the prolific operator subset, and a selection of restricted subsets relevant to particular questions. Each regression model is evaluated in terms of its sum of squares (SSR), degrees of freedom (df), associated p-value (p), and the proportion of total variance explained by the parameters used in the model (R²). Following this, a partitioning of the regression sum of squares reveals the contributions due to intention and to each of the secondary parameters. The latter combine the main effect and the interaction with intention for each parameter, and include an "unaccounted" entry that indicates the average over- or underestimation of factor contributions resulting from the model-comparison approach to the breakdown of the regression sum of squares. This is on the order of 1% or less of the total residual variance and is not significant.

Tables 1-6 all have the same format, showing the composite sum of squares, degrees of freedom, F-ratio, and a p-value indicating the significance of each factor's contribution, where "p-value" refers to the chance of an outcome more extreme than the observed one, assuming the null hypothesis. In subsequent discussions, the terms "suggestive" or "marginal" refer to p-values between 0.10 and 0.05, and "significant" to those less than 0.05. Figures 1-4 illustrate the overall effect of parameters, usually by displaying the mean shift, or effect size, as a function of intention, with one-sigma error bars. The unit for the effect size is the equivalent number of bits per 50 trials, and corresponds to the number of excess bits per 10,000 binary events.

The model for the entire data set, with the decomposition of the total sum of squares due to the experimental factors, is presented in Table 1. The corresponding overall mean shifts are shown in Figure 1. All data and parameters
TABLE 1
All Data (All Parameters Except Operators)
Model: SSR = 199220; df = 50, 113703; p = 0.0050; R² = 0.00070
(Columns: Parameter, Sum of Squares, df, F-Ratio, Probability; rows: Intention, Device, Location, Protocol, Series, Runlength, Assignment, Control, Feedback, Unaccounted.)


Fig. 1. Correlation of mean shift with intention. All data are from the comprehensive model including all factors except operators. Units for the mean shift are the number of bits per 50 trials, which is equivalent to the number of bits per 10,000. Point estimates (circles) with one-sigma error bars.

are included, with the exception of the operators factor, whose interactions with intention comprise more than 300 degrees of freedom, entailing calculations that exceed the computational system limits. This factor is addressed later in the prolific operators model. The regression model for the full database across all operators and parameters is significant, clearly indicating that the combined influence of the experimental parameters adds information to the nominally random distribution. The model explains less than 0.1% of the variance, however, which is consistent with the very small ratio of anomalous effect to stochastic "noise." We will see in subsequent analyses that certain well-defined subsets have an R² an order of magnitude larger, but none of the models explain much more than 1% of the variance.

As predicted, intention is the primary contributor to the regression, with a mean squared deviation (and F-ratio) four times that of the next largest parameter, and, as the figure shows, the relationship between intention and mean shift is consistent with the experimental hypothesis. While HI and LO are properly displaced relative to BL, the mean shift relative to theoretical expectation is greater for HI than for LO. This imbalance appears consistently in various subsets, but the differences are not statistically significant. The BL data also show a tendency for high deviations that are fairly consistent, although not significant, in curious contrast to the calibrations, which properly do not exhibit any consistent trends (Nelson, 1993). "Device" (type of random source) and "location" are significant and marginal secondary parameters, respectively, suggesting a need for separate examination of the various levels of each.



Briefly, such subsidiary analyses show that results with the diode source and with a non-deterministic shift-register-based device are similar to each other, but differ from those with the deterministic or algorithmic pseudo-random devices. The shift-register device originally was classified as pseudo-random, but subsequently was determined to be a combination of pseudo-random and truly random components, hence qualifying as a non-deterministic source (Jahn et al., 1997). With respect to "location," the subsidiary analysis shows a second-order interaction: local and remote data are similar, while the "B" location (operator in the next room, delayed feedback) differs, but the latter effect is driven by a confounding interaction with the device type. This will be discussed further, but we should note in this context that the database for the "B" location is comparatively small and hence any inferences must be tentative.

To examine the effect of large spatial separations on the anomalous effects more thoroughly, a special-purpose analysis using linear regression was applied. The results show no trend in scoring associated with increasing separation of operator from machine, up to several thousands of miles (Dunne and Jahn, 1992).

In the full model, "assignment" (whether the instruction was random or volitionally chosen) and "control" (automatic vs. manual sequencing of trials) are not represented, precisely because 16 early series mixed these parameters. When these series are excluded, "assignment" is associated with a p-value of 0.52, and "control" with a p-value of 0.29; hence neither factor is a contributor in the grand concatenation. The "protocol" (the experiment's length and purpose, particular questions, etc.), "series" (series position), "runlength," and "feedback" factors are all non-significant in the main analysis. For "series," however, other detailed assessments indicate a clear, albeit complex, non-linear structure (Dunne et al., 1994), which will be discussed later.

In qualification tests of the multiple regression modeling procedure, the corresponding analysis applied to arbitrarily assigned calibration data in the full database model yielded a non-significant regression, and parameter contributions that were well within chance variation: one factor had a marginal p-value (0.063), but none reached the nominal significance level (Nelson et al., 1991). Thus, the appearance of structure in the experimental database cannot be attributed to artifacts of the modeling procedure.

Prolific Operator Results

Table 2 and Figure 2 show the results for a subset of the full original database consisting of 1060 series produced by the 30 "prolific" operators, each of whom generated 10,000 or more trials per intention. In this model, which constitutes approximately 80% of the full database, the operator factor can be assessed, and because the various parameter levels are represented more evenly, the results may be interpreted with higher confidence. This prolific operator model is highly significant and explains more than twice as much of the variance (0.2%) as the full model, suggesting that the inclusion of both multiple


TABLE 2
Prolific Operators (All Data and All Parameters)
Model: SSR = 499986; df = 137, 97135; p = 4.4 × 10⁻…; R² = .0020
(Columns: Parameter, Sum of Squares, df, F-Ratio, Probability; rows: Intention, Operators, Device, Location, Protocol, Series, Runlength, Assignment, Control, Feedback, Unaccounted.)


Intentions Fig. 2. Correlation of mean shift with intention. Data are from the prolific operators model including all factors. Mean shift units are the number of bits per 50 trials, which is equivalent to the number of bits per 10,000. Point estimates (circles) with one-sigma error bars.

replications by individuals and a factor that represents individual differences may serve to clarify the effects of the parameters. Again, intention is highly significant and is the primary contributor to the regression. The operator parameter also is significant, indicating individual differences in performance and the need to consider each operator's data separately to assess individual responses to parameters. Such analyses have been detailed in the earlier technical report (Nelson et al., 1991), and will be considered further in discussion of the updated ANOVA. Figure 3 shows the results of a cluster analysis of the intention-linked performance of the 30 prolific operators. There are three well-defined clusters: one for success in the direction of intention (HI - LO), one for a small number of operators with large effects in the direction opposite to intention, and a third cluster for whom intention was not a significant contributor to the regression model.





Fig. 3. Clustering of prolific operator effect sizes. The difference between HI and LO mean shift (arbitrary units) is plotted against the proportion of the sum of squares attributable to intention. Operator numbers are printed in a point size proportional to the operator's database size. The computed clusters are indicated by dotted lines around a group with large effect sizes in the intended direction (upper group) and opposite to intention (lower group). The remaining operators did not generate effects correlated with intention.

Both device and location appear as important secondary parameters, similar to the grand concatenation shown in Table 1, but for location we must again apply the previously mentioned caveats concerning the device interaction stemming from the small "B" data set. Feedback shows a marginal contribution, and examination of the subset means suggests, surprisingly, that the simple digital feedback and non-feedback conditions produce somewhat higher scores than the more engaging and informative graphic feedback mode; detailed examination shows this to be driven largely by early trials, which had larger effect sizes but which were done only with digital feedback. Subsequent studies suggest that feedback may indeed be an important parameter.

Although the series position contribution to the multiple regression is non-significant, the assessment of subset means reveals a complex pattern of results as a function of series or replication number. As shown in Figure 4, the highly significant composite result in the first series declines to non-significance, and then recovers to a significant effect in later series. The figure shows that this non-linear progression of effect size is superimposed on a weakly defined linear trend, thus explaining why the contribution of this parameter to the simple regression model is modest. This pattern has been the subject of a more specific and detailed regression analysis (Dunne et al., 1994) that confirms and extends the indication of a strong influence of series position (corresponding to developing operator experience), despite the modest contribution from this factor in the primary ANOVA model.




Fig. 4. Sequential development of effect size as a function of series position. Units of effect size are bits per 50 trials, and the differences between the HI and LO intentions for each series are plotted as point estimates (circles), with one-sigma error bars. The point labeled 5+ includes series 5 and all subsequent series.

The subsidiary analysis reveals that a similar pattern of strong early performance followed by a decline and subsequent recovery obtains in both the high and low intentions. The quadratic component of the regression is significant (p = 0.016) for the high-low difference, while the linear component is negligible. No such pattern is evident in the baseline data.

Most of the other secondary parameters in the ANOVA model are not overall contributors across operators, but a number of individuals respond differently to the experimental variations. Detailed analyses of the associated patterns are beyond the scope of this paper, but may be found in the technical report (Nelson et al., 1991). The unaccounted variance in the prolific operator model is negative, indicating an average overestimation of contributions, but this is inconsequential in magnitude. Again, a corresponding analysis using arbitrarily assigned calibration data yields a non-significant regression and no evidence of non-chance variation. One parameter has a marginal p-value (0.086), but it is a factor different from that with a marginal p-value in the full database model, giving further evidence that calibration data show only normal chance fluctuations, and that the modeling procedure does not produce artifactual indications of parameter effects.

ANOVA Update

The specialized analyses of the REG database using the ad hoc tools and perspectives mentioned above revealed some influential variables not included



in the original 1991 analysis, and prompted the development of corresponding parameters to be included in an update of the analysis of variance. Of course this is an a posteriori procedure, in the sense that the questions arose out of specific analyses of the data, but the new factors are reasonable extensions or modifications of those in the original ANOVA.

In this updated analysis, a factor called "gender" is represented with three levels: male (59 operators), female (50 operators), and co-operators (18 pairs). Although Dunne found instructive, significant differences in the average effects of men and women (Dunne, 1991), this factor is not a significant contributor in either the new overall model (see Table 3) or the corresponding prolific operators model. The finding is consistent with Dunne's observation that the compounded effects (composite Z-scores) do not differ significantly for men and women, although their average effect sizes do differ (Dunne, 1998). This is explained by the disproportionate contributions in very large databases by three female operators whose results differ significantly from the other forty-seven.

The co-operator data were not included in the 1991 analysis, but they have been assessed directly in a specialized, ad hoc analysis (Dunne, 1991). In particular, the effect sizes have been compared with those of single operators and appear to be larger by a factor of two or more, depending on the composition of the pairs. Despite the comparatively large mean shift for the co-operators, this database is too small relative to those for the individual male and female operators to produce a significant difference across the gender factor. We made no attempt to confirm the striking "bonded-pair" results found by Dunne because this would require a further dilution of cell populations in the co-operator subset of the database.

The redefinition of the device parameter as a deterministic or non-deterministic "source," and the inclusion of "gender," have two other notable effects. The previous indication that location was a significant or marginal parameter is mitigated here, as a result of clarifying the device distinction.
TABLE 3
Revised Model, New Parameter Definitions: Source (Random vs. Deterministic), Gender (Female, Male, Co-operators)
Model: SSR = 204391; df = 56, 120987; p = 0.0147; R² = 0.00067
(Columns: Parameter, Sum of Squares, df, F-Ratio, Probability; rows: Intention, Source, Location, Protocol, Series, Runlength, Assignment, Control, Feedback, Gender, Unaccounted.)



Secondly, due to the clearer definition of the device influences, as well as possible effects of gender differences, the series position parameter now is marginally significant.

A separate, special-purpose definition of the gender factor using four levels includes, in addition, a special category that segregates the contributions of three high-performing female operators with moderate to large effect sizes and extremely large databases, resulting in disproportionate contributions to the overall effect. Their operator numbers (10, 78, 80) are notable in Figure 3, where the font size is proportional to the database size. This four-level "gender" factor is obviously defined a posteriori, and should be regarded as a means to confirm findings of other, independent analyses (Jahn et al., 1997), within the context of the full analysis of variance. The full ANOVA model using this four-level parameter shows it to be a highly significant contributor (p = 3.4 × 10⁻…), indicating that results for the small, selected group of operators clearly differ from the general pattern (see Table 4 and Figure 3). Although this conclusion must be tempered by the a posteriori nature of the analysis, it is clear that the combination of very large databases with moderate, positive effects contributes powerfully to the experimental outcome. Understanding the contributions of this source of variance may help to interpret other aspects of the experiment; it underscores the importance of large individual databases, where interactions with other variables are not confounded by significant individual differences.

The updated regression models also include a newly defined device factor, called "source," with two levels: non-deterministic and deterministic. The former comprises the Elgenco-based REG and the first version of a hardware "pseudo-random" source, which was, in fact, non-deterministic because it employed randomly varying shift-register steps. A revised hardware pseudo-random source subsequently was developed, as well as an algorithmic pseudo-random source.

TABLE 4
Revised Model, All Data: Contribution of High Performers
Model: SSR = 278012; df = 59, 120984; p = 0.000051; R² = 0.00092
(Columns: Parameter, Sum of Squares, df, F-Ratio, Probability; rows: Intention, Source, Location, Protocol, Series, Runlength, Assignment, Control, Feedback, Gender+hi-perf, Unaccounted.)



The latter two are properly deterministic, and data generated using them are compared with those from the truly random, non-deterministic devices by means of the new source factor. The updated models show this parameter to be significant, with a p-value of 0.020 in the grand, overall regression, and p = 0.0071 in the prolific operator model. When the deterministic sources, also shown in other analyses to yield no anomalous effect (Jahn et al., 1997), are excluded (see Table 5), the explanatory power of the model increases greatly, to nearly 1% of the variance, and the significance of the intention factor increases to the order of 10⁻⁶. No other factor except that representing series position is a prominent contributor to this model; without the relatively noisy deterministic source data, the series parameter becomes significant. It is also worth noting that in this model location shows no suggestion of differentiation, confirming other evidence that anomalous effects are not a function of spatial separation. Table 6 presents a model restricted to the deterministic sources alone, where we find no indication that the intention factor contributes to an explanation of variance.
TABLE 5
Revised Model, Non-Deterministic Sources Only
Model: SSR = 197238; df = 53, 90210; p = 0.012; R² = 0.00087
(Columns: Parameter, Sum of Squares, df, F-Ratio, Probability; rows: Intention, Location, Protocol, Series, Runlength, Assignment, Control, Feedback, Gender, Unaccounted.)

TABLE 6
Revised Model, Deterministic Sources Only
Model: SSR = 128858; df = 47, 30732; p = 0.313; R² = 0.0017
(Columns: Parameter, Sum of Squares, df, F-Ratio, Probability; rows: Intention, Location, Series, Runlength, Assignment, Control, Feedback, Gender, Unaccounted.)



Only the location factor is significant in this case, while the regression model itself is non-significant. Thus, the indications of differentiation by location can be attributed to the large, though inconsistent, effects in the small "B" and "C" subsets of the deterministic data.

Combining these two specialized questions in a single analysis, we generate a model that includes the exploratory parameter segregating the three high-performing operators, and excludes data from deterministic sources (which show no effect). The variance explained in this model increases to well over 1% (see Table 7). Intention is the primary contributor, followed by the four-level gender factor, the explanatory power of which is about one-third that of intention. Of the remaining parameters, only series position appears as a marginally significant contributor.
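For readers unfamiliar with the model-comparison procedure used throughout these analyses, the following sketch shows how one factor's contribution can be assessed by comparing nested regression models; the formula interface is statsmodels, and the data are simulated stand-ins, not the PEAR database.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import f as f_dist

# Simulated trial-level data: a tiny intention-linked shift buried in binomial-scale noise.
rng = np.random.default_rng(4)
n = 30_000
df = pd.DataFrame({
    "intention": rng.choice(["HI", "BL", "LO"], size=n),
    "source": rng.choice(["nondet", "det"], size=n),
})
shift = df["intention"].map({"HI": 0.02, "BL": 0.0, "LO": -0.02})
df["trial"] = 100 + shift + rng.normal(0, np.sqrt(50), size=n)

full = smf.ols("trial ~ C(intention) + C(source)", data=df).fit()
reduced = smf.ols("trial ~ C(source)", data=df).fit()

# The factor's contribution is the extra sum of squares explained when it enters the model.
# (Note: statsmodels' attribute `ssr` is the residual sum of squares, not the paper's SSR.)
ss_factor = reduced.ssr - full.ssr
df_factor = reduced.df_resid - full.df_resid
F = (ss_factor / df_factor) / (full.ssr / full.df_resid)
print(F, f_dist.sf(F, df_factor, full.df_resid))
```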

Conclusions
The most important finding in both the original and the updated analyses is a significant correlation of outcome with the pre-assigned intentions, compounded largely of small contributions from many individual operators across most of the experimental conditions, but with a disproportionate contribution from the three high-performing operators. Depending upon the particular subset of the large and complex database, the statistical significance ranges up to a few parts per million, with the grand concatenation, which includes all combinations of successful and unsuccessful parameters or conditions, showing a probability for the correlation with intention on the order of 2 × 10⁻⁴. These comprehensive ANOVA models provide compact summaries of the major findings of the REG experiments, with the combined effect of all measured parameters taken into account. In conjunction with the complementary analyses addressing specific questions, this approach leads to a number of well-defined conclusions:
TABLE 7
Revised Model: Segregation of Non-Deterministic Sources and Contribution of High Performers
Model: SSR = 291671; df = 56, 90207; p = 4.0 × 10⁻…; R² = 0.0013
(Columns: Parameter, Sum of Squares, df, F-Ratio, Probability; rows: Intention, Location, Protocol, Series, Runlength, Assignment, Control, Feedback, Gend+hi-perf, Unaccounted.)



1. Overall, the correlation with intention is a small-magnitude effect, equivalent to a distribution mean shift of about 1 part in 10,000, ranging up to an order of magnitude larger in certain subsets (see the sketch following this list). This finding differs little between the original and updated analyses (p = 0.00015 and p = 0.00024, respectively). This result also is similar to that in Jahn et al. (1997) for the benchmark REG (p = 0.00007), although a direct comparison is not appropriate, since the ANOVA result considers the structuring effect of all three intentions, while the standard analysis regards only the differential between the HI and LO conditions.

2. The broad generality of the finding across most of the different combinations of parameters suggests a mechanism that operates at a very fundamental level. For example, a separate, specialized analysis indicates that the anomalous effect can be modeled most simply as an alteration of the fundamental binary probability of the random events (Jahn, Dobyns, and Dunne, 1991).

3. The effect is apparently confined to non-deterministic devices. The updated version of the ANOVA confirms that deterministic pseudo-random sources do not change behavior in correlation with the operators' intentions, while non-deterministic random sources incontrovertibly do so. The full regression model indicates a significant contribution of the "source" parameter; a model limited to data taken with non-deterministic sources is highly significant (p = 4.0 × 10⁻…), with the intention parameter also significant (p = 1.0 × 10⁻⁶), while the corresponding model for data from deterministic sources alone is not significant, nor is the intention parameter. (We note, however, that location is significant and control is marginal in the latter model [p = 0.002 and 0.061, respectively], and since other researchers have reported effects with pseudo-random sources (Radin and Nelson, 1989), a strong conclusion discounting deterministic sources would be premature.)

4. Although there are individual differences, there is a relatively normal distribution of effect sizes across individuals, with no indication of outliers suggesting special performance in the sense that only certain "gifted" individuals can produce the anomalous effect. Nonetheless, consistent positive achievement over large databases does distinguish a few individuals. While the clear differentiation for the selected group of three operators is predetermined by the selection process, their distinctive performance is instructive. Inspection of their effect sizes shows them to fall within the range of the full distribution of operators, indicating that the differential is driven mainly by the large database sizes.

5. The gender variable, necessarily represented in the ANOVA by composite rather than average operator scores, does not show a significant contribution to the variance. Specialized analyses addressing the average effect size reveal that there are gender differences, however, comprising small variability and regular correlation with intention for men, and larger variability and less consistent correlations for women (Dunne, 1998).


6. The temporal development of effect sizes shows a consistent pattern, with initial significant success that declines but then recovers. A specialized analysis reveals that this is a broadly distributed pattern in the intentional conditions, which does not appear in baseline data (Dunne et al., 1994).

7. The overall findings show essentially no evidence for a dependence on spatial or temporal separation, complementing other indications that ordinary physical variables have little impact on these anomalous interactions. A specialized analysis has supplemented this conclusion within the REG and other databases (Dunne and Jahn, 1992).
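The scale referred to in conclusion 1 can be made concrete with a back-of-the-envelope calculation of our own (round, illustrative numbers): for a shift of one excess bit per 10,000 binary events, the composite z score grows only as the square root of the number of bits, which is why databases of millions of trials are needed to resolve the effect.

```python
import numpy as np

def composite_z(delta_p, n_bits):
    """z score for a per-bit probability shift delta_p accumulated over n_bits (p0 = 0.5)."""
    return 2.0 * delta_p * np.sqrt(n_bits)

delta_p = 1e-4                                   # one excess bit per 10,000 binary events
for n_trials in (1_000, 100_000, 2_000_000):     # 200-bit trials; counts are illustrative
    print(n_trials, round(composite_z(delta_p, 200 * n_trials), 2))
# -> 0.09, 0.89, and 4.0: only the multi-million-trial database resolves the shift.
```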

In summary, we conclude that both the comprehensive ANOVA technique and the more sharply focused ad hoc analyses can play important roles in the assessment of complicated databases of this sort. On the one hand, analysis of variance allows an examination of large and complex data sets as a whole, with a perspective that displays the relative strength of effects from all the measured variables in the experiment. But ANOVA has limitations, especially in a database where the cell populations in the analytical matrix vary as much as do those in the REG ANOVA. The practical requirements of the experimental program have dictated an ad lib accumulation of data under various combinations of parameters in order to explore the engineering questions motivating the experiment, while also maintaining a viable psychological context for the human operators. In the course of the experimental development, emphases have changed to include a progressively more incisive examination of subjective factors. Thus, the REG experiment, although based on an unchanging fundamental design, has a complex and unbalanced, non-orthogonal set of variables. This prevents easy assessment of higher-order interactions, even when it is apparent that interactions among the various conditions may be important contributors to the explanation of variance.

It thus follows that specialized, detailed analyses can help reveal the structure of the data in delimited subsets of the database, without compromising the integrity of interpretations. For example, the implications of spatial separation are critical to modeling the anomalous effects, and an analysis that examines both the linear and higher-order regression of effect size on distance provides essential information. Likewise, the serial position effects must be important indicators of psychological factors bearing on the results, and these can be detailed only in ad hoc formats. In this context, ANOVA not only provides guidance for focused assessments, showing, for example, that a specialized analysis must address the non-deterministic and deterministic sources separately, but it also confirms the legitimacy of the ad hoc studies. Thus, in these and other cases previously discussed, we see a complementary balance between the global perspective provided by ANOVA and the sharply focused, incisive answers provided by well-posed supplementary analyses.

Acknowledgments

The Princeton Engineering Anomalies Research program has been supported by a number of foundations and individuals, including the Institut für Grenzgebiete der Psychologie und Psychohygiene, the Lifebridge Foundation, the Ohrstrom Foundation, Mr. Richard Adams, Mr. Laurance Rockefeller, and Mr. Donald Webster. We acknowledge also the contributions of the volunteer operators, and the editorial assistance of Elissa Hoeger.

References
Dunne, B. J. (1991). Co-operator experiments with an REG device. PEAR Technical Report 91005. Princeton Engineering Anomalies Research, Princeton, NJ.
Dunne, B. J. (1998). Gender differences in human/machine anomalies. Journal of Scientific Exploration, 12, 1, 3-55.
Dunne, B. J., Dobyns, Y. H., Jahn, R. G., & Nelson, R. D. (1994). Series position effects in random event generator experiments, with appendix by Angela Thompson. Journal of Scientific Exploration, 8, 2, 197-216.
Dunne, B. J., & Jahn, R. G. (1992). Experiments in remote human/machine interaction. Journal of Scientific Exploration, 6, 4, 311-332.
Ibison, M. (1998). Evidence that anomalous statistical influence depends on the details of the random process. Journal of Scientific Exploration, 12, 3, 407-424.
Jahn, R. G., Dobyns, Y. H., & Dunne, B. J. (1991). Count population profiles in engineering anomalies experiments. Journal of Scientific Exploration, 5, 2, 205-232.
Jahn, R. G., Dunne, B. J., Nelson, R. D., Dobyns, Y. H., & Bradish, G. J. (1997). Correlations of random binary sequences with prestated operator intention: A review of a 12-year program. Journal of Scientific Exploration, 11, 3, 345-367.
Nelson, R. D., Bradish, G. J., & Dobyns, Y. H. (1989). Random event generator qualification, calibration and analysis. PEAR Technical Report 89001. Princeton Engineering Anomalies Research, Princeton, NJ.
Nelson, R. D., Bradish, G. J., & Dobyns, Y. H. (1992). The portable PEAR REG: Hardware and software documentation. PEAR Internal Document 92-1. Princeton Engineering Anomalies Research, Princeton, NJ.
Nelson, R. D. (1993). CONTCAL: Continuous automatic calibrations, pseudo-intentions, and active experiments. PEAR Internal Document 93-4. Princeton Engineering Anomalies Research, Princeton, NJ.
Nelson, R. D., Jahn, R. G., Dunne, B. J., Dobyns, Y. H., & Bradish, G. J. (1997). FieldREG II: Consciousness field effects, replications and explorations. Journal of Scientific Exploration, 12, 3, 425-454.
Nelson, R. D., Dobyns, Y. H., Dunne, B. J., & Jahn, R. G. (1991). Analysis of variance of REG experiments: Operator intention, secondary parameters, database structure. PEAR Technical Report 91004. Princeton Engineering Anomalies Research, Princeton, NJ.
Radin, D. I., & Nelson, R. D. (1989). Consciousness-related effects in random physical systems. Foundations of Physics, 19, 12, 1499-1514.

Journal of Scientific Exploration, Vol. 14, No. 1, pp. 91-106, 2000

0892-3310/00 © 2000 Society for Scientific Exploration

Publication Bias: The "File-Drawer" Problem in Scientific Inference

Jeffrey D. Scargle

Space Science Division, National Aeronautics and Space Administration, Ames Research Center, MS 245-3, Moffett Field, CA 94035-1000
e-mail: jeffrey@sunshine.arc.nasa.gov

It is human nature for "the affirmative or active to effect more than the negative or privative. So that a few times hitting, or presence, countervails ofttimes failing or absence." -Francis Bacon, The Advancement of Learning

Abstract- Publication bias arises whenever the probability that a study is published depends on the statistical significance of its results. This bias, often called the file-drawer effect because the unpublished results are imagined to be tucked away in researchers' file cabinets, is a potentially severe impediment to combining the statistical results of studies collected from the literature. With almost any reasonable quantitative model for publication bias, only a small number of studies lost in the file drawer will produce a significant bias. This result contradicts the well-known fail-safe file-drawer (FSFD) method for setting limits on the potential harm of publication bias, widely used in social, medical, and psychic research. This method incorrectly treats the file drawer as unbiased and almost always misestimates the seriousness of publication bias. A large body of not only psychic research, but medical and social science studies as well, has mistakenly relied on this method to validate claimed discoveries. Statistical combination can be trusted only if it is known with certainty that all studies that have been carried out are included. Such certainty is virtually impossible to achieve in literature surveys.

Key words: publication bias - statistics - meta-analysis - file drawer effect

1. Introduction: Combined Studies
The goal of many studies in science, medicine, and engineering is the measurement of a quantity to detect a suspected effect or to gain information about a known one. Observational errors and other noise sources make this a statistical endeavor in which one obtains repeated measurements to average out these fluctuations. If individual studies are not conclusive, improvement is possible by combining the results of different measurements of the same effect. The idea is to



perform statistical analysis on relevant data collected from the literature¹ to improve the signal-to-noise ratio (on the assumption that the noise averages to zero). (For clarity and consistency with most of the literature, individually published results will be called studies throughout this paper, and the term analysis will refer to the endeavor to combine two or more studies.) Two problems arise in such analyses. First, experimenters often publish only statistical summaries of their studies, not the actual data. The analyst then is faced with combining the summaries, a nontrivial technical problem (e.g., Rosenthal, 1978, 1995). Modern communication technology should circumvent these problems by making even large data arrays accessible to other researchers. Reproducible research (Buckheit and Donoho, 1995; Claerbout, Schwab, and Karrenbach, 1999) is a discipline for doing this and more, but even this methodology does not solve the other problem, which is publication bias.

¹In some fields, this is called meta-analysis, from the Greek meta, meaning behind, after, higher or beyond, and often denoting change. Its usage here presumably refers to new issues arising in the statistical analysis of combined data, such as the file-drawer effect itself. It is used, mostly in scientific terminology, to imply a kind of superior or oversight status, as in metaphysics. I prefer the more straightforward term combined analysis.

2. Publication Bias
The second problem facing combined analysis is that studies collected from the literature often are not a representative sample. Erroneous statistical conclusions may result from a prejudiced collection process or if the literature is itself a biased sample of all relevant studies. The latter, publication bias, is the subject of this paper. The bibliography contains a sample of the rather large literature on the file-drawer effect and publication bias. The essence of this effect, as described by nearly all authors, can be expressed in statistical language as follows: A publication bias exists if the probability that a study reaches the literature, and is thus available for combined analysis, depends on the results of the study. What matters is whether the experimental results are actually used in the combined analysis, not just the question of publication. That is, the relevant process is the following sequence, in its entirety, following the initiation of the study:

1. Study is carried out to a predefined stop point.
2. All data are permanently recorded.
3. Data are reduced and analyzed.
4. A paper is written.
5. The paper is submitted to a journal.
6. The journal agrees to consider the paper.
7. The referee and author negotiate revisions.
8. The referee accepts the paper.


9. The editor accepts the paper for publication.
10. The author still wishes to publish the paper.
11. The author's institution agrees to pay page charges.
12. The paper is published.
13. The paper is located during a literature search.
14. Data from the paper are included in combined analysis.

I refer to this concatenated process loosely as publication, but it is obviously more complex than what is usually meant by the term. Some of the steps may seem trivial, but all are relevant. Each step involves human decisions and so may be influenced by the result of the study. Publication probability may depend on the specific conclusion,² on the size of the effect measured, and on the statistical confidence in the result.

²The literature of social sciences contains horror stories of journal editors and others who consider a study worthwhile only if it reaches a statistically significant, positive conclusion; that is, an equally significant rejection of a hypothesis is not considered worthwhile.

Here is the prototype for the analyses treated here: Each of the publishable studies consists of repeated measurements of a quantity, say x. The number of measurements is, in general, different in each such study. The null hypothesis is that x is normally distributed, for example:

x ∼ N(μ, σ);

this notation means that the errors in x are normally distributed with mean μ and variance σ. The results of the study are reported in terms of a shifted (i.e., μ is subtracted) and renormalized (i.e., σ is divided out) standard normal deviate Z = (x − μ)/σ. The null hypothesis, usually that μ is zero, one half, or some other specific value, yields

Z ∼ N(0, 1).


This normalization removes the dependence on the number of repeated measurements in the studies. In this step, it is important that σ be well estimated, often a tricky business. A common procedure for evaluating such studies is to obtain a p value from the probability distribution P(Z) and interpret it as providing the statistical significance of the result. The discussion here is confined to this approach because it has been adopted by most researchers in the relevant fields. Nonetheless, as noted by Sturrock (1994, 1997) and others (Jefferys, 1990, 1995; Matthews, 1999), this procedure may yield incorrect conclusions, usually overestimating the significance of putative anomalous results. The Bayesian methodology is probably the best way to treat publication bias (Biggerstaff, 1995; Biggerstaff, Tweedie, and Mengersen, 1994; Givens, Smith, and Tweedie, 1995, 1997; Tweedie et al., 1996).
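As a minimal sketch of this convention (not code from the paper; the numerical inputs and function names are invented for illustration), the standardization and one-sided p value can be computed as follows:

# Sketch of the standardization and p-value convention described above (illustrative only).
from math import erfc, sqrt

def standard_normal_deviate(x_bar, mu0, sigma):
    """Shifted and renormalized deviate Z = (x_bar - mu0) / sigma."""
    return (x_bar - mu0) / sigma

def one_sided_p(z):
    """P(Z' >= z) under the null hypothesis Z ~ N(0, 1)."""
    return 0.5 * erfc(z / sqrt(2.0))

z = standard_normal_deviate(x_bar=0.53, mu0=0.5, sigma=0.01)  # hypothetical study numbers
print(z, one_sided_p(z))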





This section concludes with some historical notes. Science has long known the problems of biased samples. An interesting historical note (Petticrew, 1998) nominates an utterance by Diagoras of Melos in 500 BC as the first historical mention of publication bias.³ See also Dickersin and Min (1993) for other early examples of awareness of publication bias. Publication bias is an important problem in medical studies (e.g., Allison, Faith, and Gorman, 1996; Dickersin, 1997; Earleywine, 1993; Faber and Galloe, 1994; Helfenstein and Steiner, 1994; Kleijnen and Knipschild, 1992; LaFleur et al., 1997; Laupacis, 1997; Persaud, 1996; Sacks et al., 1983), as well as in other fields (see Bauchau, 1997; Fiske, Rintamaki, and Karvonen, 1998; Riniolo, 1997; Silvertown and McConway, 1997; and Stanton and Shadish, 1997, for examples). The negative conclusions we shall soon reach about the commonly used procedure to deal with this problem yield a discouraging picture of the usefulness of combined analysis in all of these contexts. On the other hand, the application of modern, sophisticated methods such as those listed above (see also Taylor, 2000) is encouraging.

³Early Greek sailors who escaped from shipwrecks or were saved from drowning at sea displayed portraits of themselves in a votive temple on the Aegean island of Samothrace, in thanks to Neptune. Answering a claim that these portraits are sure proof that the gods really do intervene in human affairs, Diagoras replied, "Yea, but ... where are they painted that are drowned?"

3. The "Fail Safe File-Drawer" Calculation
Rosenthal's influential work (Rosenthal, 1979, 1984, 1990, 1991, 1995) is widely used to set limits on the possibility that the file-drawer effect is causing a spurious result. One of the clearest descriptions of the overall problem, and certainly the most influential in the social sciences, is:

researchers and statisticians have long suspected that the studies published in the behavioral sciences are a biased sample of the studies that are actually carried out. ... The extreme view of this problem, the "file drawer problem," is that the journals are filled with the 5% of the studies that show Type I errors, while the file drawers back at the lab are filled with the 95% of the studies that show nonsignificant (e.g., p > .05) results. (Rosenthal, 1979, p. 638)

A Type I error is rejection of a true null hypothesis. (Type II is failing to reject a false one.) This lucid description of the problem is followed by a proposed solution:

In the past, there was very little we could do to assess the net effect of studies tucked away in file drawers that did not make the magic .05 level. ... Now, however, although




no definitive solution to the problem is available, we can establish reasonable boundaries on the problem and estimate the degree of damage to any research conclusions that could be done by the file drawer problem. The fundamental idea in coping with the file drawer problem is simply to calculate the number of studies averaging null results that must be in the file drawers before the overall probability of a Type I error can be just brought to any desired level of significance, say p = .05. This number of filed studies, or the tolerance for future null results, is then evaluated for whether such a tolerance level is small enough to threaten the overall conclusion drawn by the reviewer. If the overall level of significance of the research review will be brought down to the level of just significant by the addition of just a few more null results, the finding is not resistant to the file drawer threat. (Rosenthal, 1984, p. 108) [The italic emphasis is original; I have indicated what I believe is the fundamental flaw in reasoning with boldface.]

By its very definition, the file drawer is a biased sample. In the nominal example given, it is the 95% of the studies that have 5% or greater chance of being statistical fluctuations. The mean Z in this subsample is

Z̄_filed = −e^(−(1.645)²/2) / [0.95 √(2π)] ≈ −0.11.

It is not zero. As we will see below, Rosenthal's analysis explicitly assumed that Z̄ = 0 for the file-drawer sample. Because this assumption contradicts the essence of the file-drawer effect, the quantitative results are incorrect.

I now recapitulate the analysis given in Rosenthal (1984), using slightly different notation. For convenience and consistency with the literature, we refer to this as the fail-safe file-drawer, or FSFD, analysis. The basic context is a specific collection of published studies having a combined Z that is deemed to be significant; that is, the probability that the Z value is attributable to a statistical fluctuation is below some threshold, say 0.05. The question Rosenthal sought to answer is, How many members of a hypothetical set of unpublished studies have to be added to this collection to bring the mean Z down to a level considered insignificant? As argued elsewhere, this does not mirror publication bias, but this is the problem addressed. Let N_pub be the number of studies combined in the analysis of the published literature (Rosenthal's K), and N_filed be the number of studies that are unpublished, for whatever reason (Rosenthal's X). Then N = N_pub + N_filed is the total number of studies carried out. The basic relation (Equation 5.16 of Rosenthal [1984]) is

1.645 = N_pub Z̄_pub / √(N_pub + N_filed).     (4)

This is presumably derived from

Z = (N_pub Z̄_pub + N_filed Z̄_filed) / √(N_pub + N_filed),     (5)

i.e., an application of his method of adding weighted Z's as in Equation 5.5 of Rosenthal, by setting the standard normal deviate of the file drawer, Z̄_filed = 0. This is incorrect for a biased file drawer.



Equation 4 can be rearranged to give

N_filed = N_pub [ N_pub Z̄²_pub / (1.645)² − 1 ],     (6)

which is the equation used widely in the literature, and throughout this paper, to compute FSFD estimates.

What fundamentally went wrong in this analysis, and why has it survived uncriticized for so long? First, it is simply the case that the notion of null, or insignificant, results is easily confused with Z̄ = 0. While the latter implies the former, the former (which is true, in a sense, for the file drawer) does not imply the latter. Second, the logic behind the FSFD is seductive. Pouring a flood of insignificant studies (with normally distributed Zs) into the published sample until the putative effect submerges into insignificance is a great idea. What does it mean, though? It is, indeed, a possible measure of the statistical fragility of the result obtained from the published sample. On the other hand, there are much better and more direct ways of assessing statistical significance; I do not believe that FSFD should be used even in this fashion. Third, I believe some workers have mechanically calculated FSFD results, found that it justified their analysis or otherwise confirmed their beliefs, and were therefore not inclined to look for errors or examine the method critically.

A simple thought experiment makes clear the threat of a publication bias with N_filed on the same order as N_pub: Construct a putative file-drawer sample by multiplying all published Z values by −1; the total sample then has Z̄ of exactly zero, no matter what.

The two-sided case often is raised in this context. That is to say, publication bias might mean that studies with either Z > 0 or Z < 0 are published in preference to those with small |Z|. This situation could be discussed, but it is a red herring in the current context (e.g., Iyengar and Greenhouse, 1988). Further, none of the empirical distributions I have seen published show any hint of the bimodality that would result from this effect, including those in Radin (1997).

In addition, I was at first puzzled by statements such as, FSFD analysis assesses "tolerance for future null results." The file-drawer effect is something that has already happened by the time one is preparing a combined analysis. I concluded that such expressions are a kind of metaphor for what would happen if the studies in the file drawers were suddenly made available for analysis. But even if this imagined event were to happen, one would still be left to explain how a biased sample was culled from the total sample. It seems to me that whether the result of opening this Pandora's file drawer is to dilute the putative effect into insignificance is of no explanatory value in this context, even if the calculation were correct.

In any case, the bottom line is that the widespread use of the FSFD to conclude that various statistical analyses are robust against the file-drawer effect



is wrong because the underlying calculation is based on an inapplicable premise. Other critical comments have been published, including Berlin (1998) and Sharpe (1997). Most important is the paper by Iyengar and Greenhouse (1988), which makes the same point but further provides an analysis that explicitly accounts for the bias (for the case of the truncated selection functions discussed in the next section). These authors note that their formulation "always yields a smaller estimate of the fail-safe sample size than does" Rosenthal's.
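The following sketch translates the FSFD bookkeeping of Equations 5 and 6 into Python, together with the sign-flipping thought experiment mentioned above; it is illustrative only, the simulated "published" sample is hypothetical, and the form of Equation 6 as reconstructed above is assumed.

# Sketch of the FSFD bookkeeping criticized above (illustrative only, not from the paper).
import numpy as np

Z_CRIT = 1.645  # one-sided p = .05

def combined_z(z_values):
    """Rosenthal-style weighted combination: sum(Z) / sqrt(number of studies)."""
    z = np.asarray(z_values, dtype=float)
    return z.sum() / np.sqrt(z.size)

def fsfd_n_filed(z_published):
    """Fail-safe N as in Equation 6: filed studies assumed to average Z = 0."""
    z = np.asarray(z_published, dtype=float)
    n_pub = z.size
    return n_pub * (n_pub * z.mean() ** 2 / Z_CRIT**2 - 1.0)

rng = np.random.default_rng(0)
z_pub = rng.normal(loc=0.5, scale=1.0, size=40)   # hypothetical published sample
print("combined Z:", combined_z(z_pub))
print("fail-safe N:", fsfd_n_filed(z_pub))

# Thought experiment from the text: a file drawer holding sign-flipped copies of the
# published Z values drives the combined Z of the total sample to exactly zero.
print("combined Z with mirrored file drawer:", combined_z(np.concatenate([z_pub, -z_pub])))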

4. Statistical Models
This section presents a specific model for combined analysis and the file-drawer effect operating on it. Figure 1 shows the distribution of N = 1,000 samples from a normal distribution with zero mean and unit variance, namely

G(Z) = (1/√(2π)) e^(−Z²/2).     (7)

Fig. 1. Histogram corresponding to the null hypothesis: normal distribution of Z values from 1,000 independent studies. Those 5% with Z ≥ 1.645 are published (open bars); the remainder are "tucked away in file drawers," i.e., unpublished (solid bars). Empirical and exact values of Z̄ (cf. Equation 14) are indicated for the filed and published sets.
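The situation depicted in Figure 1 can be reproduced with a few lines of simulation; the following sketch is illustrative only (the random seed is arbitrary) and is not the code used to generate the figure.

# Sketch simulating the situation in Figure 1: 1,000 null studies split into
# published (Z >= 1.645) and filed (Z < 1.645) sets (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
Z = rng.standard_normal(1000)
published = Z[Z >= 1.645]
filed = Z[Z < 1.645]

print(len(published), published.mean())            # roughly 50 studies, mean near 2.06
print(len(filed), filed.mean())                    # roughly 950 studies, mean near -0.11
print(published.sum() / np.sqrt(len(published)))   # spuriously huge combined Z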



Publication bias means that the probability of completion of the entire process detailed in Section 2 is a function of the study's reported Z. Note that I am not assuming this is the only thing that it depends on. I use the notation

S(Z) = publication probability     (8)

for the selection function, where 0 ≤ S(Z) ≤ 1. Note that S(Z) is not a probability distribution over Z; for example, its Z integral is not constrained to be 1. This model captures what almost all authors describe as publication bias (e.g., Hedges, 1992; Iyengar and Greenhouse, 1988). In view of the complexity of the full publication process (see Section 2), it is unlikely that its psychological and sociological factors can be understood and accurately modeled. I therefore regard the function S as unknowable. The approach taken here is to study the generic behavior of plausible quantitative models, with no claim to having accurate or detailed representations of actual selection functions.

Taken with the distribution G(Z) in Equation 7, an assumed S(Z) immediately yields three useful quantities: the number of studies published,

N_pub = N ∫ S(Z) G(Z) dZ;     (9)

the number consigned to the file drawer,

N_filed = N − N_pub = N ∫ [1 − S(Z)] G(Z) dZ;     (10)

and the expected value of the standard normal deviate Z, averaged over the published values,

Z̄ = ∫ Z S(Z) G(Z) dZ / ∫ S(Z) G(Z) dZ.     (11)

The denominator in the last equation takes into account the reduced size of the collection of published studies (N_pub) relative to the set of all studies performed (N). Absent any publication bias [i.e., S(Z) = 1], Z̄ = 0 from the symmetry of the Gaussian, as expected.
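A minimal numerical check of Equations 9 through 11 (a sketch, not the author's code) can be carried out by quadrature; the smooth "soft cutoff" form chosen for S(Z) below is only an illustrative assumption.

# Numerical check of Equations 9-11 for an arbitrary selection function S(Z)
# (illustrative sketch; the logistic form of S is only an example choice).
import numpy as np
from scipy.integrate import quad

def G(z):                        # standard normal density, Equation 7
    return np.exp(-z**2 / 2.0) / np.sqrt(2.0 * np.pi)

def S(z, z0=1.645, width=0.25):  # smooth "soft cutoff" selection function (illustrative)
    return 1.0 / (1.0 + np.exp(-(z - z0) / width))

N = 1000
pub_fraction, _ = quad(lambda z: S(z) * G(z), -10, 10)
zbar_numerator, _ = quad(lambda z: z * S(z) * G(z), -10, 10)

N_pub = N * pub_fraction                  # Equation 9
N_filed = N - N_pub                       # Equation 10
Z_bar = zbar_numerator / pub_fraction     # Equation 11
print(N_pub, N_filed, Z_bar)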

Cutoff Selection Functions

Consider first the following choice for the selection function:

S(Z) = 0 for Z < Z_0,  1 for Z ≥ Z_0,     (12)

where, in principle, Z_0 can have any value, but small positive numbers are of most practical interest. That is to say, studies that find an effect at or above the significance level corresponding to Z_0 are always published, whereas those that do not are never published. Putting this into the above general expressions, Equations 9 and 11, gives

N_pub = (N/2) erfc(Z_0/√2),     (13)



where erfc is the complementary error function. The mean Z̄ for these published studies is

Z̄ = [e^(−Z_0²/2)/√(2π)] / [½ erfc(Z_0/√2)].     (14)

Consider the special value Z_0 = 1.645, corresponding to the classical 95% confidence level and obtained by solving for Z the equation

p = ½ erfc(Z/√2)     (15)

(with p = .05). This case frequently is taken as a prototype for the file drawer (Iyengar and Greenhouse, 1988; Rosenthal, 1979). Equation 13 gives N_pub = 0.05N; that is (by design), 5% of the studies are published and 95% are not published. Equation 14 gives Z̄ = 2.0622. Let the total number of studies be N = 1,000, of which 50 are published and 950 are not. We have chosen this large value for N, here and in Figure 1, for illustration, not realism. For the 50 published studies, the combined Z, namely √(N_pub) Z̄ ≈ 14.6, in Equation 15 gives an infinitesimal p value, highly supportive of rejecting the null hypothesis. The FSFD estimate of the ratio of filed to published experiments (see Equation 6) is about 78 for this case, an overestimate by a factor of around 4 of the true value of 19. The formula of Iyengar and Greenhouse (1988) discussed above gives 11.4 for the same ratio, an underestimate by a factor of about 1.7.

Finally, Figure 2 shows the behavior of the filed and published fractions as a function of Z_0, including the fraction of the studies in the file drawer predicted by FSFD analysis, given by Equation 6 above. Two things are evident from this comparison: the FSFD prediction is a strong function of N, whereas the true filed fraction is independent of N; and the FSFD values are far from the truth, except accidentally at a few special values of N and Z_0.
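These numbers are easy to verify; the following sketch (not the author's code) reproduces the quantities quoted above for Z_0 = 1.645 and N = 1,000.

# Sketch reproducing the Z0 = 1.645, N = 1000 numbers quoted above (illustrative only).
from math import erfc, exp, pi, sqrt

Z0, N = 1.645, 1000
Q = 0.5 * erfc(Z0 / sqrt(2.0))                    # published fraction, Equations 13 and 15
N_pub = N * Q                                     # ~50 studies published
N_filed = N - N_pub                               # ~950 studies filed
Z_bar = exp(-Z0**2 / 2.0) / sqrt(2.0 * pi) / Q    # Equation 14, ~2.06

true_ratio = N_filed / N_pub                      # ~19
fsfd_ratio = N_pub * Z_bar**2 / 1.645**2 - 1.0    # Equation 6, ~78
print(N_pub, Z_bar, true_ratio, fsfd_ratio)

combined_Z = Z_bar * sqrt(N_pub)                  # ~14.6, giving the infinitesimal p value
print(combined_Z, 0.5 * erfc(combined_Z / sqrt(2.0)))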
Step Selection Functions

Because it is zero over a significant range, the selection function considered in the previous section may be too extreme. The following generalization allows any value between zero and one for small Z:

S(Z) = S_0 for Z < Z_0,  1 for Z ≥ Z_0   (0 ≤ S_0 ≤ 1).     (16)


Fig. 2. Plot of the fraction of the total number of studies that are published (thick solid line) and filed (thick dashed line) in the model given by Equation 12. The FSFD predictions for the filed fraction are shown as a series of thin dashed lines, labeled by the value of N, the total number of studies carried out.

Much as before, direct integration yields

N_pub = N [ S_0 + (1 − S_0) · ½ erfc(Z_0/√2) ]     (17)

and

Z̄ = (1 − S_0) [e^(−Z_0²/2)/√(2π)] / [ S_0 + (1 − S_0) · ½ erfc(Z_0/√2) ].     (18)

Equation 17 directly determines the ratio of the number of filed studies to the number of those published under the given selection function. The value of Z̄ is the quantity that would be (incorrectly) used to reject the hypothesis that there is no effect present and the sample is drawn from a normal distribution. Figure 3 shows this function. The basic result is that almost all of the relevant


Fig. 3. This figure shows the dependence of R = N_filed/N_pub on the two parameters of the step selection function. Contours of log10 R are indicated. R is large only for a tiny region at the bottom right of this diagram.



part of the S_0-Z_0 plane corresponds to a rather small number of unpublished studies. In particular, R > 1 only in a small region, namely where simultaneously S_0 ≈ 0 and Z_0 ≫ 0. The behavior of Z̄ is simple: roughly speaking, it is governed by a function g(Z_0) that is on the order of unity or larger for all Z_0 > 0. Hence, the bias brought about by even a small file drawer is large unless S_0 ≈ 1 (to be expected, because for such values of S_0 the selection function is almost flat).

Smooth Selection Functions

It might be objected that the selection functions considered here are unrealistic in that they have a discrete step, at Z_0. What matters here, however, are integrals over Z, which do not have any pathologic behavior in the presence of such steps. Nevertheless, I experimented with some smooth selection functions and found results that are completely consistent with the conclusions reached in the previous sections for step-function choices.
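The step-function formulas are equally easy to tabulate; the sketch below (illustrative only, with an arbitrary grid) evaluates Equations 17 and 18 over the S_0-Z_0 plane and prints the region where R = N_filed/N_pub exceeds 1, echoing the point made by Figure 3.

# Sketch evaluating the step-selection-function model (Equations 17 and 18)
# on a grid of (S0, Z0), echoing Figure 3 (illustrative, not the original code).
import numpy as np
from scipy.special import erfc

def step_model(S0, Z0):
    Q = 0.5 * erfc(Z0 / np.sqrt(2.0))        # P(Z >= Z0) under the null hypothesis
    pub_fraction = S0 + (1.0 - S0) * Q       # Equation 17, divided by N
    R = (1.0 - pub_fraction) / pub_fraction  # N_filed / N_pub
    Z_bar = (1.0 - S0) * np.exp(-Z0**2 / 2.0) / np.sqrt(2.0 * np.pi) / pub_fraction  # Eq. 18
    return R, Z_bar

for S0 in np.linspace(0.0, 1.0, 11):
    for Z0 in np.linspace(0.0, 3.0, 7):
        R, Z_bar = step_model(S0, Z0)
        if R > 1.0:                          # the "large file drawer" corner is small
            print(f"S0={S0:.1f}  Z0={Z0:.1f}  R={R:6.2f}  Zbar={Z_bar:.3f}")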

5. Conclusions
Based on the models considered here, I conclude the following:

1. Apparently significant, but actually spurious, results can arise from publication bias, with only a modest number of unpublished studies.
2. The widely used fail-safe file-drawer (FSFD) analysis is irrelevant because it treats the inherently biased file drawer as unbiased and gives grossly wrong estimates of the size of the file drawer.
3. Statistical combinations of studies from the literature can be trusted to be unbiased only if there is reason to believe that there are essentially no unpublished studies (almost never the case!).

It is hoped that these results will discourage combined ("meta") analyses based on selection from published literature but encourage methodology to control publication bias, such as the establishment of registries (to try to render the probability of publication unity once a study is proposed and accepted in the registry). The best prospects for establishing conditions under which combined analysis might be reasonable even in the face of possible publication bias seem to lie in a fully Bayesian treatment of this problem (Givens, Smith, and Tweedie, 1997; Smith, Givens, and Tweedie, 1999; Sturrock, 1994, 1997). It is possible that the approach discussed in Radin and Nelson (1989) can lead to improved treatment of publication bias, but one must be cautious when experimenting with ad hoc distributions and be aware that small errors in fitting the tail of a distribution can be multiplied by extrapolation to the body of the distribution.


Acknowledgements

I am especially grateful to Peter Sturrock for guidance and comments in the course of this work. This work was presented at the Peter A. Sturrock Symposium, held on March 20, 1999, at Stanford University. I thank Kevin Zahnle, Aaron Barnes, Ingram Olkin, Ed May, and Bill Jefferys for valuable suggestions and Jordan Gruber for calling my attention to the book The Conscious Universe (Radin, 1997), and its author, Dean Radin, for helpful discussions. None of these statements are meant to imply that any of the persons mentioned agrees with the conclusions expressed here.

References
Allison, D. B., Faith, M. S., & Gorman, B. S. (1996). Publication bias in obesity treatment trials? International Journal of Obesity, 20, 931-937. Discussion of publication bias in obesity treatment studies, mentioning the FSFD method.
Bauchau, V. (1997). Is there a "file drawer problem" in biological research? OIKOS, 79, 407-409. Concludes that the extent of the file-drawer problem in biology is an open question.
Berlin, J. A. (1998). Publication bias in clinical research: Outcome of projects submitted to ethics committees. In Perman, J. A., & Rey, J. (Eds.), Clinical Trials in Infant Nutrition. Nestlé Nutrition Workshop Series, Vol. 40. Philadelphia: Vevey/Lippincott-Raven Publishers. Interesting overview of the subject of publication bias. "The existence of publication bias is clearly demonstrated by the studies described." (That is, various health and nutrition studies.) A clear restatement of the FSFD analysis is followed by the statement that "this approach is of limited utility for two reasons: First, because it uses only Z statistics and ignores quantitative estimates of effects (for example odds ratios), and second, because the assumption that all the unpublished studies have a Z statistic of exactly zero is unrealistic." The paper is followed by an interesting, free-wheeling discussion that touches on publication in obscure journals (something that could be added to the discussion in Section 2; something called reference bias), electronic journals, and registries.
Biggerstaff, B. J. (1995). Random effects methods in meta-analysis with application in epidemiology. Unpublished doctoral dissertation, Colorado State University, Ft. Collins. Comprehensive discussion of a number of issues in combined studies, including random effects and Bayesian methods with application to environmental tobacco smoke and lung-cancer studies.
Biggerstaff, B. J., Tweedie, R. L., & Mengersen, K. L. (1994). Passive smoking in the workplace: Classical and Bayesian meta-analyses. International Archives of Occupational and Environmental Health, 66, 269-277. Discussion and comparison of classical and Bayesian approaches to combined analysis.
Buckheit, J., & Donoho, D. (1995). WaveLab and reproducible research. In Antoniadis, A., & Oppenheim, G. (Eds.), Wavelets and statistics. Lecture Notes in Statistics No. 103. Springer-Verlag.
Claerbout, J., Schwab, M., & Karrenbach, M. (1999). Various documents at http://sepwww.stanford.edu/research/redoc/.
Csada, R. D., James, P. C., & Espie, R. H. M. (1996). The "file drawer problem" of non-significant results: Does it apply to biological research? OIKOS, 76, 591-593. An empirical study that quantitatively supports the suspicion that publication bias is important in biology.
Dickersin, K. (1997). How important is publication bias? A synthesis of available data. AIDS Education and Prevention, 9A, 15-21. Empirical study that reaches the conclusion that "publication is dramatically influenced by the direction and strength of research findings." Found that editors were less enthusiastic about including unpublished studies in combined analysis.
Dickersin, K., & Min, Y. (1993). Publication bias: The problem that won't go away. Annals of the New York Academy of Sciences, 703, 135-146.



Earleywine, M. (1993). The file drawer problem in the meta-analysis of subjective responses to alcohol. American Journal of Psychiatry, 150, 1435-1436. Applies the FSFD calculation to a combined analysis of 10 published studies by V. Pollock of subjective responses to alcohol and risk for alcoholism. The resulting FSFD N is 26. In a response, Pollock agrees with the (dubious) conclusion that this means the analysis can be considered resistant to the file-drawer problem.
Faber, J., & Galloe, A. M. (1994). Changes in bone mass during prolonged subclinical hyperthyroidism due to L-thyroxine treatment: A meta-analysis. European Journal of Endocrinology, 130, 350-356. Analysis combining 13 studies of a thyroid treatment; the FSFD calculation yielded 18 as the "fail-safe N," which seems to be taken as evidence of the resistance of the results of the analysis to publication bias.
Fiske, P., Rintamaki, P. T., & Karvonen, E. (1998). Mating success in lekking males: A meta-analysis. Behavioral Ecology, 9, 328-338. Combined analysis of zoological data. "When we used this (the FSFD) method on our significant effects, the number of 'hidden studies' needed to change our results ranged from 15 ... to 3183 ..., showing that most of our results are quite robust." Robust against including a set of unbiased unpublished results, but not against the relevant biased set.
Givens, G. H., Smith, D. D., & Tweedie, R. L. (1995). Estimating and adjusting for publication bias using data augmentation in Bayesian meta-analysis (Technical Report 95131). Ft. Collins, CO: Colorado State University, Department of Statistics. A Bayesian analysis of publication bias, using the data augmentation principle, with simulations and application to data from 35 studies of the relation between lung cancer and spousal exposure to environmental tobacco smoke.
Givens, G. H., Smith, D. D., & Tweedie, R. L. (1997). Publication bias in meta-analysis: A Bayesian data-augmentation approach to account for issues exemplified in the passive smoking debate. Statistical Science, 12, 221-250. A Bayesian analysis of publication bias, based on choosing a model for the selection function and marginalizing over the parameters of the model.
LaFleur, B., Taylor, S., Smith, D. D., & Tweedie, R. L. (1997). Bayesian assessment of publication bias in meta-analysis of cervical cancer and oral contraceptives. University of Colorado reprint. Application of a Bayesian method for assessing publication bias to studies of the possible effects of oral contraceptive use on the incidence of cervical cancer. They conclude that publication bias, probably caused by the explicit disregard of "low quality" studies, yielded a spurious statistical connection between oral contraceptive use and the incidence of cervical cancer in a previous study.
Hedges, L. V. (1992). Modeling publication selection effects in meta-analysis. Statistical Science, 7, 246-255. Interesting overview, with comments on the nature of the publishing process and results that are "not statistically significant." Introduces a quantitative model for publication bias that is much like that given here and offers a quantitative test for the presence of publication bias.
Helfenstein, U., & Steiner, M. (1994). Fluoride varnishes (Duraphat): A meta-analysis. Community Dentistry and Oral Epidemiology, 22, 1-5. The authors apply the FSFD method to a combined analysis of studies designed to detect the cavity-preventive effect of a fluoride varnish called Duraphat. Application of the FSFD method yields the conclusion (unwarranted, based on my conclusions) that "It is very unlikely that underreporting of non-significant results could reverse the conclusion into an overall null result."
Iyengar, S., & Greenhouse, J. B. (1988). Selection models and the file-drawer problem. Statistical Science, 3, 109-135. An excellent overview and nearly definitive study using much the same selection function approach as used here, reaching more or less the same critical conclusions. In addition, these authors' Equation 4 offers a presumably improved basis for the FSFD estimate. The paper is followed by extensive comments by Larry Hedges; Robert Rosenthal and Donald Rubin; Nan Laird, G. Patil and C. Taillie; M. Bayarri; C. Radhakrishna Rao; and William DuMouchel, all followed by a rejoinder from Iyengar and Greenhouse. Much of this discussion seems to evade the simple issue raised here.



Jefferys, W. H. (1990). Bayesian analysis of random event generator data. Journal of Scientific Exploration, 4, 153-169. This excellent paper criticizes the classical use and interpretation of "p values" in the analysis of data from psychic random number generator experiments and argues for a Bayesian alternative.
Jefferys, W. H. (1995). Letter to the editor. Journal of Scientific Exploration, 9, 121-122, 595-597. Incisive comments on the meaning of "p values."
Kleijnen, J., & Knipschild, P. (1992). Review articles and publication bias. Arzneimittel Forschung/Drug Research, 42, 587-591. Overview of publication bias in drug research.
Laupacis, A. (1997). Methodological studies of systematic reviews: Is there publication bias? Archives of Internal Medicine, 157, 357-358. Brief suggestion that publication bias should be considered in medical research.
Matthews, R. A. J. (1999). Significance levels for the assessment of anomalous phenomena. Journal of Scientific Exploration, 13, 1-7.
Persaud, R. (1996). Studies of ventricular enlargement. Archives of General Psychiatry, 53, 1165. Criticism of a combined study of ventricular enlargement in patients with mood disorders for misinterpretation of the results of a FSFD calculation. In the case at issue, a "fail-safe N" of 10 results for an analysis with 11 studies. The closing statement, "A few studies that are highly significant, even when their combined P value is significant, may well be misleading because only a few unpublished, or yet to be published, studies could change the combined significant result to a nonsignificant result," seems to capture what people must have in mind for the significance of the FSFD computation.
Petticrew, M. (1998). Diagoras of Melos (500 BC): An early analyst of publication bias. The Lancet, 352, 1558.
Radin, D. I. (1997). The conscious universe: The scientific truth of psychic phenomena. New York: HarperEdge. A number of combined studies are displayed and offered as evidence for the reality of psychic phenomena. The possibility that these results are spurious and attributable to publication bias is considered and then rejected because the FSFD calculation yields huge values for the putative file drawer. It is my opinion that, for the reasons described here, these values are meaningless and that publication bias may well be responsible for the positive results derived from combined studies.
Radin, D. I., & Nelson, R. D. (1989). Evidence for consciousness-related anomalies in random physical systems. Foundations of Physics, 19, 1499-1514. Combined study of 597 experimental random number generator studies. An innovative method of dealing with the file drawer effect is used, based on fitting the high-end tail of the Z distribution. I question the use of an exponential model for this tail, justified by the comment that it is used "to simulate the effect of skew or kurtosis in producing the disproportionately long positive tail."
Riniolo, T. C. (1997). Publication bias: A computer-assisted demonstration of excluding nonsignificant results from research interpretation. Teaching of Psychology, 24, 279-282. Describes an educational software system that allows students to perform statistical experiments to study the effects of publication bias.
Rosenthal, R. (1978). Combining results of independent studies. Psychological Bulletin, 85, 185-193. Comprehensive and influential (158 references) analysis of the statistics of combined studies.
Rosenthal, R. (1979). The "file drawer problem" and tolerance for null results. Psychological Bulletin, 86, 638-641. This is the standard reference, cited in almost all applied research in which the file-drawer effect is at issue.
Rosenthal, R. (1984). Applied Social Research Methods Series, Vol. 6. Meta-analytic procedures for social research. Newbury Park, CA: Sage Publications. The treatment of the file-drawer effect (Chapter 5, Section II.B) is essentially identical to that in Rosenthal (1979).
Rosenthal, R. (1990). Replication in behavioral research. In Handbook of replication research in the behavioral and social sciences [Special issue]. Journal of Social Behavior and Personality, 5, 1-30.

Overview of the issue of replication, including a brief discussion of the FSFD approach.
Rosenthal, R. (1991). Meta-analysis: A review. Psychosomatic Medicine, 53, 247-271. Comprehensive overview of various procedural and statistical issues in analysis of combined data. There is a section on the file-drawer problem (p. 260), more or less repeating the analysis in Rosenthal (1979) with an illustrative example.
Rosenthal, R. (1995). Writing meta-analytic reviews. Psychological Bulletin, 118, 183-192. A broad overview of combined studies.
Sacks, H. S., Reitman, D., Chalmers, T. C., & Smith, H., Jr. (1983). The effect of unpublished studies (the file drawer problem) on decisions about therapeutic efficacy. Clinical Research [abstract], 31, 236. This abstract applies the FSFD calculation to a number of medical studies, dealing with anticoagulants for acute myocardial infarction and other treatments. The computed file-drawer sizes are not large in most cases, but the concluding statement, "This is a useful measure of the strength of the published evidence on any question," is wrong for reasons given in this paper.
Sharpe, D. (1997). Of apples and oranges, file drawers and garbage: Why validity issues in meta-analysis will not go away. Clinical Psychology Review, 17, 881-901. This is an extensive review of a number of problems in combined analysis and includes an extensive critical discussion of the FSFD approach, commenting that the rule behind it is arbitrary, but falling short of saying that the method is wrong.
Silvertown, J., & McConway, K. J. (1997). Does "publication bias" lead to biased science? OIKOS, 79, 167-168. Short commentary on publication bias in biology. The authors mention several techniques for studying the presence and scope of publication bias and generally downplay the seriousness of the problem.
Smith, D. D., Givens, G. H., & Tweedie, R. L. (1999). Adjustment for publication and quality bias in Bayesian meta-analysis. In Berry, D., & Stangl, D. (Eds.), Meta-analysis in medicine and health policy (Chapter 12). New York: Marcel Dekker. Development of a data augmentation technique to assess potential publication bias.
Stanton, M. D., & Shadish, W. R. (1997). Outcome, attrition, and family-couples treatment for drug abuse: A meta-analysis and review of the controlled, comparative studies. Psychological Bulletin, 122, 170-191. Combined analysis of a large number of psychological studies; mentions the FSFD problem but does not appear to do anything about it.
Sturrock, P. A. (1994). Applied scientific inference. Journal of Scientific Exploration, 8, 491-508. A clear and incisive discussion of a Bayesian formalism for combining judgements in the evaluation of scientific hypotheses from empirical data, such as in investigations of anomalous phenomena.
Sturrock, P. A. (1997). A Bayesian maximum-entropy approach to hypothesis testing, for application to RNG and similar experiments. Journal of Scientific Exploration, 11, x-xx. Cogent critique of the standard "p value" test in applied scientific inference. Shows that the Bayesian formalism, based on a clear statement and enumeration of the relevant hypotheses, is superior to and different from "p value" analysis.
Taylor, S. (2000). Several papers available at http://www.stat.colostate.edu/~taylor/ and http://www.stat.colostate.edu/~duvall.
Taylor, S., & Tweedie, R. (1999). Practical estimates of the effect of publication bias in meta-analysis. University of Colorado preprint.
Tweedie, R. L., Scott, D. J., Biggerstaff, B. J., & Mengersen, K. L. (1996). Bayesian meta-analysis, with application to studies of ETS and lung cancer. Lung Cancer, 14(Suppl. 1), S171-S194. A comprehensive analysis of publication bias, comparing a Markov chain Monte Carlo technique for implementing Bayesian hierarchical models with random effects models and other classical methods.

Journal of Scientific Exploration, Vol. 14, No. 1, pp. 107-114, 2000

0892-3310/00 © 2000 Society for Scientific Exploration

Remote Viewing in a Group Setting

Interval Research Corporation Palo Alto, CA 94304 e-mail: radiant@pacbell.net

1010 Harriet St. Palo Alto, CA 94301

Abstract- Remote viewing (RV) is a perceptual ability whereby individuals are able to describe and experience objects, pictures, and locations that are blocked from ordinary perception, either by distance, shielding, or time. RV is usually carried out as a team effort, consisting of a viewer who is attempting to describe a target and an interviewer who assists the viewer in exacting images and sensations from his or her subconscious process. We report an RV experiment carried out at a conference in Arco, Northern Italy, with a class of 24 participants, many of whom were healers and "energy workers." Based on previous work of the authors, great attention was given to creating a feeling of community and coherence of intention within the group during the three-day class. In the fourth of the five sessions of the class, a formal RV experiment was conducted with class members working in pairs, wherein each person served alternately as viewer and interviewer. Viewers were asked to describe a picture of an outdoor scene, encased in an opaque, sealed envelope, which they would be shown immediately after the session. The interviewer then was directed to take the viewer's sketches and written impressions to the front of the room and rank order the material (from 1 to 4) against the four possible pictures from a preset target package. In this blind-ranking protocol, 6 first-place matches would be expected by chance from the 24 viewers. Instead, 14 first-place matches were achieved. The binomial probability of this outcome is 5 × 10⁻⁴, with an effect size Z/(N)^(1/2) = 0.64.
Keywords: remote viewing-psi-ESP

Introduction

The remote viewing (RV) protocol that was developed in 1972 by scientists at Stanford Research Institute has now been in the public domain for more than 25 years. This perceptual processing technique pertains to the acquisition and description by mental means of verifiable information about the physical universe that is blocked from ordinary sensory perception by distance or shielding (Puthoff and Targ, 1976). The authors have many years of experience conducting RV studies, in which effect sizes Z/(N)^(1/2) of 0.6 and greater



are not unusual. We often have attributed this degree of success to the energy and positive expectation that the experimenters bring to each session. This experimental ambiance and communicated expectation were described in detail in a 1990 Parapsychological Association Conference panel, "Increasing Psychic Reliability" (Targ et al., 1991). ESP experiments in group and classroom settings have traditionally had low effect sizes, 0.2 or less. This is attributable principally to a lack of attention, coherence of feelings, seriousness of purpose, and motivation in the group, combined with the use of unselected and untrained subjects and a lack of trial-by-trial or otherwise timely feedback to the subjects (Honorton and Ferrari, 1989). The purpose of the experiment described here was to determine if we could overcome these obstacles and carry out a successful experiment in a group setting with people who were previously unknown to each other.

The Arco Experiment
For a phenomenon thought in many circles not to occur (Hyman, 1996), we have learned a great deal about how to increase and decrease the accuracy and reliability of RV. Remote viewers often can contact, experience, and describe a hidden object or a remote natural or architectural site based on the presence of a cooperative person at the location, geographical coordinates, or some other target demarcation, which we call an address. Shape, form, and color are described much more reliably than the target's function or other analytical information. In addition to this vivid visual imagery, viewers sometimes describe associated feelings, sounds, smells, and even electrical or magnetic fields. Blueprint accuracy sometimes can be achieved, and reliability in a series can be as high as 70%. With practice, people become increasingly able to separate out the psychic signal from the mental noise of memory, analysis, and imagination. Targets and target details as small as 1 mm can be perceived. Again and again, we have seen that accuracy and resolution of RV targets are insensitive to variations in distance (Targ and Katra, 1998).

With this goal in mind, the authors accepted an invitation to conduct a 15-hour RV workshop at the 20th International Astra Meeting, called "Rights of Passage," in Arco, Italy (October 12-15, 1999). Astra publishes a widely read metaphysical magazine in Italy and conducts an annual conference on a variety of esoteric subjects in cooperation with residents and city officials of the town of Arco in the foothills of the Italian Alps. We accepted the invitation to introduce a class of 24 Italian students to spiritual healing and to teach them how to perform RV.

Outline of the Workshop
We had five 3-hour sessions with our 24 students. Everything that we wished to communicate to our students had to be translated into Italian, sentence by sentence. In the first morning session, we described our proposed program



and introduced the students to the idea of remote viewing and spiritual healing. An overview of the material was presented, together with numerous slides from previous experiments, showing what can and cannot be expected from RV. We discussed the necessity of separating the so-called psychic signal from mental noise. We shared our belief that RV is a natural and widely distributed ability for which everyone, to a greater or lesser degree, has the potential. The emphasis of this session was on how to do the mental processing for real-time RV with immediate feedback. The session ended with each participant doing RV of a "small, interesting object" that the authors had brought for them to psychically observe and describe. This was, of course, not a double-blind trial because the person guiding the students in their efforts knew the object. The purpose of the exercise was to show the students the variety of questions that an interviewer can ask regarding the shape, texture, size, weight, type of material, color, possible use, and so forth, as he or she leads the viewer to look for surprising mental images. After the trial, the students were each given a small opaque paper bag, asked to put a small object into it, and bring it to the next morning's class.

The afternoon session was experiential and dealt with meditation, group coherence, and spiritual healing. There was great attention given to building rapport and trust, both between the experimenters and the students and among the students themselves. To achieve this, we conducted a lengthy, guided meditation with music and a guided experience of "energy sharing" among pairs of students.

In the third session, the second morning, the students divided themselves into pairs. They took turns being interviewers and viewers for the objects each had brought to the session. This activity also was not a double-blind trial, but it gave the students another opportunity to look for mental pictures that correspond to something outside their experience. We did not want to use pictures for this training, because we hoped to keep their mental slates "clean" for the pictures we would use in the formal experiment in the next session.

The fourth session (held on October 14, 1999, from 3 to 5 p.m.) was a formal experiment, described below. The fifth and final session was carried out the next morning. It included a discussion of the experiment and the spiritual implications of psychic abilities. We asked, what do the spiritual healer, the mystic, and the psychic have in common? We proposed that they are all in touch with their nonlocal, interconnected mind and their community of spirit. In the spirit of the conference, we suggested that as we approach the millennium, in every area of human activity, we are experiencing a climax in which science and religion are finally becoming coherent in the exclamation of a single, unified truth. Recent research in areas as different as distant healing and quantum physics is in agreement with the oldest of spiritual teachings of the sages of India, who taught that "separation is an illusion," suggesting that we have an inner knowledge of time and space. In this final session, we observed that the in-flow of information, which is



the hallmark of RV, and the out-flow of intention, which plays a part in facilitating distant healing, are on either side of the quiet mind and the stillness that can arise between them. Perhaps narrowly focusing on the omniscience of ESP is simply a trap that prevents us from discovering who we really are and how we might direct our life's attention. Whenever any one person demonstrates an ability beyond the ordinary, it can be seen as an inspiration to the rest of us, indicating an immense and still largely undeveloped human potential.

Experimental Protocol
The formal experiment in the fourth session was a demonstration-of-ability test to determine if the students could actually show some RV capability. Before leaving for Italy, we had prepared 24 file folders, each with four target pictures. The 8 × 10 color pictures were carefully selected from the 20,000 Corel Professional Photos available on a set of 200 CD-ROMs, which were made available to us by Dr. Edwin May of the Laboratories for Fundamental Research. The pictures each contained a central focus, such as a mountain, waterfall, lighthouse, windmill, bridge, tall building, ruins of various descriptions, pyramid, trees, coastline, and so forth. Each group of four pictures was carefully assembled so as to have as few overlapping pictorial elements as possible. One picture from each group then was put into an opaque, tamper-resistant envelope, and then the envelope was sealed. These pictures were selected randomly and then filtered to provide a representative mixture of possible targets, to avoid any accidental stacking that could occur if, for example, we had an overrepresentation of waterfalls or bridges. We keyed each envelope by number to the target folder to which it belonged.

To carry out the experiment, the group again divided themselves into pairs. From each pair, the person who was to be the first interviewer came to the front of the large, dimly lit meeting room and was given a sealed envelope containing a picture. Each interviewer then proceeded to elicit from their partner his or her impressions of the picture that was in the envelope. They also could describe their impression of the same picture because it would be shown to them for feedback right after their session. The interviewers then asked their partners to draw sketches and to write down any key words, both of which were to reflect their mental image of the target picture. When interviewers felt that they had a coherent description from their partner viewer, they brought the remarks and sketches to the front of the room and gave their material to one of the two assistants. The sealed envelope was then carefully opened under the front table, out of sight, and the picture inside was randomized into the folder with the other three pictures of its set. Because these pictures had all been used previously, many of them had little wrinkles around the edges; any wrinkles caused by handling in this experiment were not thought to be a factor. The folder then was given to an assistant, who spread the four pictures out on a table. The interviewer was then asked to rank the



four pictures from 1 to 4 in accordance with their estimation of best to worst match to their viewer's description. Neither of the assistants working with the interviewer had any knowledge of which of the four pictures was the target picture. After the assignment was made, the correct picture was identified by the independent scientist tracking the target pictures that were selected from the target folders. The interviewer then took the correct target picture (regardless of its rank) back to the viewer for feedback.

Results
The first group of 12 viewers received eight first-place matches (p = .0028, h = .863). The second group of 12 obtained six first-place matches (p = .0544, h = .52). The overall result of the experiment found 14 first-place matches for the 24 students (p = .0005, h = .69), with a 58.3% hitting rate. There were two second-place matches, four third-place matches, and four fourth-place matches.

In Figure 1, we show the sketch produced by Viewer 1 to finish the RV task. Viewer 1 was a highly regarded Italian energy healer; his wife was his interviewer, and she was known as a psychic practitioner in her own right. Within 1 minute of the interviewer receiving the target picture in its envelope, she returned with Viewer 1's sketch. After seeing the four possible pictures, it took the interviewer no time at all to identify the correct one, an image with pillars. In the illustration, the word cielo is Italian for sky. It is interesting to note that Viewer 1 was in no way limited in his drawing by the edges of the paper he was given. Figure 2 shows a drawing made by Viewer 2, a psychotherapist, who was interviewed by a good friend. This interviewer also had no difficulty choosing the correct picture from the four offered, which was a picture with the domed buildings and cross-hatched windows.
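
The reported probabilities and effect sizes are consistent with an exact binomial tail (a chance hit rate of 1 in 4) and with Cohen's h for the difference between the observed and chance hit rates. The Python sketch below is our own check of that arithmetic, not part of the original analysis; the function names are ours.

```python
from math import comb, asin, sqrt

CHANCE = 0.25  # one target among four pictures

def exact_p(hits, trials, chance=CHANCE):
    """One-tailed exact binomial probability of at least `hits` first-place matches."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(hits, trials + 1))

def cohens_h(hit_rate, chance=CHANCE):
    """Cohen's h effect size between the observed hit rate and chance."""
    return 2 * asin(sqrt(hit_rate)) - 2 * asin(sqrt(chance))

groups = [("First group", 8, 12), ("Second group", 6, 12), ("Overall", 14, 24)]
for label, hits, trials in groups:
    print(f"{label}: {hits}/{trials}, p = {exact_p(hits, trials):.4f}, "
          f"h = {cohens_h(hits / trials):.2f}")
# Reproduces, to rounding, the values reported above:
# p = .0028 / .0544 / .0005 and h = .86 / .52 / .69.
```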

Discussion
Teaching RV is one thing, but teaching it entirely through a translator seemed like a daunting task because of our belief in the importance of intimacy and coherence in the process. We present this experiment here to function as a possible aid to other researchers who are called upon to demonstrate or teach psychic abilities in a group setting. We did not carry out a double-blind comparison of this approach with other possible methodologies. Nonetheless, what we describe here reflects many years of success in eliciting psi from inexperienced students. It was the success of this experiment that made us feel it was worthwhile to describe our approach. We believe that the success of this experiment can be attributed to several factors. Perhaps most important, all of the participants were self-selected to take part in an RV training program for which they had to pay in advance. Also, the 20 women and 4 men in the class all considered intuition at least a moder-

Fig. 1. Sketch by Viewer 1 at top, together with actual target picture (lower left) and three decoys.

Fig. 2. Sketches by Viewer 2 at top, together with actual target picture (lower right) and three decoys.

physicians. We further believe it was helpful to have found a way to give the students practice in RV with an interviewer through the use of small objects, which did not contaminate their mental imagery with pictures resembling their target pictures. Thus, we were able to work with "first timers" who actually had some practice in RV. It is likely that the use of large, clear, colorful, easy-to-describe targets was an additional helpful element. Finally, we wish to point out that the effect sizes seen in this experiment are comparable with the effect sizes seen in the recently published future-forecasting experiment by the authors (Targ and Katra, 1998) and in the 36-trial experiment carried out many years ago with six army volunteers at Stanford Research Institute (Targ, 1994). These intelligence officers achieved an overall effect size of 0.63, comparable to the 0.69 seen in this experiment. We consider these results typical for a well-conducted RV experiment. These experiments differed from many of the usual RV cases in that no one knew the correct answer at the time of the experiment. Therefore, this study would be considered one of the clairvoyance type, with only the final feedback providing a possible precognitive channel.

Acknowledgements
We wish to sincerely thank Dr. Dean Radin for his thoughtful help in the initial design of the formal experiment described here and also for his technical assistance in preparation of the paper for publication. If any measure of success is to be claimed for our approach to teaching remote viewing, an equal measure of credit must be given to our enormously talented and intuitive translator, Giorgio Cerquetti, who conceived and organized our participation in the conference. We also very gratefully acknowledge the generous support of the Astra Meeting, which underwrote the costs of the entire program.

References
Honorton, C., & Ferrari, D. C. (1989). Future-telling: A meta-analysis of forced-choice precognition experiments. Journal of Parapsychology, 53, 281-308.
Hyman, R. (1996, March/April). The evidence for psychic functioning: Claims vs. reality. The Skeptical Inquirer.
Puthoff, H. E., & Targ, R. (1976). A perceptual channel for information transfer over kilometer distances: Historical perspective and recent research. Proceedings of the Institute of Electrical and Electronics Engineers, 64, 329-354.
Targ, R. (1994). Remote viewing replication evaluated by concept analysis. Journal of Parapsychology, 58, 271-284.
Targ, R., Braud, W., Schlitz, M., Stanford, R., & Honorton, C. (1991). Increasing psychic reliability: A panel discussion presented at the 33rd annual conference of the Parapsychological Association, Chevy Chase, Maryland (August 16-20, 1990). Journal of Parapsychology, 55, 59.
Targ, R., & Katra, J. (1998). Miracles of mind: Exploring nonlocal consciousness and spiritual healing. Novato, CA: New World Library.
Targ, R., Katra, J., Brown, D., & Wiegand, W. (1995). Viewing the future: A pilot study with an error-detecting protocol. Journal of Scientific Exploration, 9, 367-380.

Journal of Scientific Exploration, Vol. 14, No. 1, pp. 115-120, 2000

0892-3310/00 © 2000 Society for Scientific Exploration

GUEST COLUMN: THE SOVEREIGNTY OF SCIENCE
R. A. McConnell
430 Kennedy Avenue
Pittsburgh, PA 15214

The Pronouncements of Science
In today's world, most knowledgeable persons concede the authority of science. Since the findings of science are vast, knowledgeable persons, including scientists, accept the pronouncements of science in lieu of a personal mastery of its evidential structure. The "pronouncements of science" are the consensus generalizations expressed by the leaders of science, either as individuals or through their professional organizations or without dispute in most of the relevant textbooks. The pronouncements of science, thus understood, are accepted by persons who wield power in the First World. Being intelligent but often scientifically illiterate, the movers and shakers have little choice but to accept the sovereignty of science in all those areas where science makes pronouncements. In most areas of science, "pronouncements" are broadly based on empirical observations that have been widely and repeatedly made and that are often embodied in unifying theories of great eloquence. In some cases, however, the pronouncements of science may depend on observations that have been made by only a few specialists whose findings are tentatively accepted by the scientific community because of the credentials of those specialists. In other words, science allows itself some laxity in awarding its approval.

The Opposition of Science to Psi
With regard to the occurrence of psi (psychic) phenomena, however, science makes a clear pronouncement, namely, that such phenomena do not occur (McConnell and Clark, 1991). What needs explanation is why this negative consensus of science persists despite the volume and quality of evidence favoring the phenomena and the outstanding credentials of some of those who vouch for that evidence (Radin, 1997). What demands attention, however, is not the absence of a consensus that psi phenomena occur but the presence of a consensus that they do not! It is, to say the least, unusual that science should make a pronouncement on the nonoccurrence of a phenomenon in any area of current observation (as opposed to theory, e.g., Creationism). Moreover, in view of the supporting evidence and the importance of psi, if real, it might have been expected that some

of the leaders of science would have disassociated themselves from this know-nothing attitude of their colleagues. The only living leader who has done so is Nobel Prize winner Brian D. Josephson, Professor of Physics at Cambridge University (McConnell, 2000, pp. 364-365). Scientifically pretentious denunciations of belief in psi as a superstitious pox upon humanity usually end with the justification, "There is insufficient evidence to prove the occurrence of the phenomena." This suggests that the evidence, while currently inadequate, would be accepted if it were not deficient. My intention is to explain why the evidence can never be "adequate" unless, psychologically speaking, the modern world morphs to a very different future state. While positive evidence for psi occasionally is reviewed in a respected journal of science, it is usually done by an incompetent critic and, as a result, is generally scientifically misrepresented, often with ideological fervor (McConnell, 2000, pp. 359-364). Any realistic examination of the literature will show that the competent spokesmen for the evidence favoring psi are denied access to the respected journals of popular science, such as Scientific American, and to the leading research journals of general science (Science and Nature), while at the same time there is an international organization devoted to the disparaging of "paranormal phenomena." This organization is called the Committee for the Scientific Investigation of Claims of the Paranormal (CSICOP). Among its 77 listed sponsoring Fellows are five Nobel Prize winners in science. CSICOP publishes The Skeptical Inquirer: The Magazine for Science and Reason with a circulation of 50,000. In the past, this magazine has attacked all psi phenomena. Recently, it has softened its attacks but remains generally unfavorable to parapsychology as the serious study of psi. If The Skeptical Inquirer were to adopt a neutral attitude, e.g., by fairly reviewing Radin (1997), it would offend most of its subscribers and risk economic collapse. To understand the subtle role of this journal in blocking the progress of science in this area, it will be necessary to go back a long way.

Religion and Psi
From the beginning of consciousness, man has encountered phenomena and situations whose cause or significance was not apparent to him. Fearing the unknown as a threat to his existence, he invented reassuring explanations to calm his mind and allow him to think constructively or even enjoyably about his immediate activities. Many of his feared experiences seemed related, and these he lumped together and explained by a variety of otherworldly myths. This led to organized religion, which reached the peak of its political power among Western European people during the Middle Ages. As men became more sophisticated in their thinking, beginning in the 16th and 17th centuries, religion was rejected in favor of materialist science,

which, however, was unable to encompass certain of the nonmaterial aspects of experience. These aspects, therefore, remained in the domain of religion. By the time of the 18th Century Enlightenment, materialist science was in its ascendancy, while the beliefs of religion were increasingly recognized as fraught with self-contradictions. An intellectual movement was set afoot to dispense with religion. In our own time, the majority of educated people in the First World reject most of the superficial, irrelevant aspects of religion, which, however, continue to hold sway among the less educated. This has led to the interclass religious conflict in which we are embroiled today. The author believes that the ultimate solution for this dilemma is for the better educated people to recognize the reality of psi phenomena, and, under the guidance of scientific method, to make a place for them in their own belief systems. Then, having regained credibility with the masses, they might hope to eradicate truly superstitious beliefs. This is what your author believes should be the long-range goal of parapsychology.

Science and Psi
In this paper, the author will explain how the materialists, in their struggle to be free of anything smacking of superstition, have adopted a defensive ideology that denies reality and betrays their own intellectual principles. The author has subscribed to The Skeptical Inquirer for many years and regards CSICOP as the foremost instrument of the arch materialists in their fight against superstition. In their zealotry, they have opposed those few scientists today who are struggling with the difficult, perhaps impossible, task of discovering the nature of nonmaterialist phenomena and of explaining them in the materialist language that we speak. These visionaries seek no less than the reconciling, through the use of the scientific method, of the material and the nonmaterial: of science and religion! Parapsychologists have no quarrel with the antisuperstition program of CSICOP and The Skeptical Inquirer, but they are frustrated by CSICOP's implicit denial of the traces of psi phenomena that are all around us.

Spontaneous Psi: Its Characteristics
Psi occurs spontaneously, as well as in the laboratory, and the evidence for its spontaneous occurrence (as well as for its laboratory occurrence) overwhelmingly affirms its reality (McConnell, 2000). In this paper, spontaneous psi is where we shall focus our attention. From all accounts, it would appear that spontaneous psi occurs more often among less educated persons, and this is commonly used as an argument that the evidence supporting psi must be fallacious. On the other hand, it is possible that psi experiences occur equally at all levels of education but are recognized more easily by the less educated, who are not subservient to the sovereignty of science and are under little social pressure to accept the common wisdom of science, according to which psi does not occur. Spontaneous psi occurs in many forms and with all degrees of intensity and frequency, depending on the person. The most common form of psi is extrasensory perception (ESP), in which a percipient acquires information through other than sensory-motor means. Spontaneous ESP is most obvious when it reveals information from a distance or about the future.

How Science Ignores Psi
If, indeed, psi occurs equally at all levels of education, why is its reality not obvious to the world? The answer is multifold. For most people, ESP occurs only as a weak and infrequent effect, readily deniable as coincidence if one so wishes or readily recognizable as anomalous if one has low evidential standards. Also, those professional psychics who make a living using psi, which is at best a thoroughly undependable phenomenon, must often supplement their performance by cheating to maintain their business reputations. This leads to the supposition that all exhibitions of spontaneous psi are mistaken or fraudulent. Add to this the treatment of psi as entertainment on television. Powers far beyond those scientifically demonstrated to be possible are daily fare on television, enhancing the disbelief in psi among educated people. Some parapsychologists believe that psi occurs more readily in minds that have a limited ability to think analytically but whose mental forte lies in their right cerebral hemisphere, i.e., in those who are artistically inclined rather than in those who are logical by nature. If so, this merely strengthens the prejudice of scientists against the reality of psi. The real puzzle is why ESP is not universally acknowledged when it occurs, as it occasionally does with some people, as a dramatic and undeniable effect. For high-powered psychics who happen to be highly educated, there is an escape hatch from the embarrassment they might otherwise cause to their professional-class friends. To avoid cognitive dissonance, these individuals usually join their less educated fellow psychics and quietly assign psi to a realm of spiritual belief that is commonly conceded to be outside the realm of science. Thus, as regards psi experience, religion serves two purposes. For those deeply devoted to an organized sect, psi may be considered either as a gift of God or as a manifestation of His presence. For those rare others, seeking escape from their own undeniable psi, religion provides a haven to put psi safely out of mind. In both functions, psi strengthens the legitimacy of religion. Quite aside from those who recognize themselves as psychic, the existence of a nonphysical realm-call it what you will-must ultimately strengthen the

Rewriting the Textbooks
There is another and more encompassing explanation for the failure of the Scientific Establishment to examine carefully the empirical evidence for the occurrence of psi. To understand the undying opposition of scientists to psi, one must consider the hypothetical consequences of psi's universal acceptance. All general textbooks of psychology and physics would have to be rewritten. For physics, this might require no more than an acknowledgment that there exists a nonphysical realm with which the physical realm can interact, both spontaneously and experimentally. The exploration of these interactions would attract the interest of theoretical physicists even while experimental physicists might be frustrated in their misguided attempts to demonstrate psi as though it were a new form of electromagnetic radiation. In psychology, the fallout from a universal recognition of the reality of psi would be catastrophic. Most present and past psychological research would be recognized at once as trivial. Much of the rest would lose its claim to validity because of the probable confounding effects of ESP or of psychokinesis (e.g., by the direct mental action of "wishing" on measuring instruments). Experimental psychology as now practiced would be destroyed as a scientific enterprise. Psychiatry, for its part, would have to go back and start from the beginning.

A Revisionist Worldview
The foregoing practical considerations do not, however, come to grips with the philosophic heart of the matter, namely, the shattering of the materialist worldview of physics. This worldview governs not only the physical sciences, but also psychology and postmodernism, and it justifies the aggressive behavior of much of the First World's ruling class. It is not possible for the average layman to understand what the acceptance of psi would mean to a thoughtful scientist. The acknowledgement of the reality of psi by physics departments, implying as it would, philosophic dualism, would be comparable to the denying by Rome of the reality of God. Any attempt by a thoughtful scientist to reconcile the established facts of parapsychology with his understanding of his philosophic commitment to his profession would encounter an emotional block. Direct, uninferable knowledge of the future, for example, which is accepted matter-of-factly by parapsychologists, would be horrifying to the scientist who accepts the materialist view of reality.

Conclusion
Since the sovereignty of science extends to most persons who wield power in the First World, including most of those in middle management, its collapse could result in social chaos. The materialist scientist's adamant denial of the

existence of psi can perhaps best be explained by his fear of the consequences that might follow in the event of its acceptance.

References
McConnell, R. A. (2000). Joyride to infinity: A scientific study of the doomsday literature. Washington, DC: Scott-Townsend.
McConnell, R. A., & Clark, T. K. (1991). National Academy of Sciences' opinion on parapsychology. Journal of the American Society for Psychical Research, 85, 333-365.
Radin, D. I. (1997). The conscious universe: The scientific truth of psychic phenomena. San Francisco: HarperCollins.

In 1943, by visiting Harvard's library, Dr. McConnell ascertained that ESP occurs, although beyond explanation by known physics and psychology. After entering parapsychology full time in 1947, he devoted his efforts primarily to the question: "Why do scientists reject ESP?" This paper gives the answer to his 50-year search. To the pursuit of this question, Dr. McConnell brought an unusual breadth of experience. He holds a doctorate in physics. During World War II, he led a radar development group at M.I.T. He is a life senior member of the Institute of Electrical and Electronics Engineers. In 1957, he was the founding president of the Parapsychological Association, which was admitted to affiliation with the American Association for the Advancement of Science (AAAS) in 1969. He is a Fellow of the American Psychological Society, a Research Professor Emeritus of Biological Sciences, and a Fellow of the AAAS.

Journal of Scientific Exploration, Vol. 14, No. 1, pp. 121-138, 2000

0892-3310/00 © 2000 Society for Scientific Exploration

BOOK REVIEWS
Best UFO Cases-Europe by Illobrand von Ludwiger. Las Vegas, NV: National Institute for Discovery Science, 1998. 173 pp. $19.95, paper. ISBN 0-9666077-0-8. Available from: The National Institute for Discovery Science, 1515 East Tropicana Avenue, Suite 400, Las Vegas, NV 89119.
Although there is a plethora of books in English about unidentified flying objects (UFOs), the majority deal with the North American scene, with several covering phenomena in the United Kingdom and other English-speaking countries. Best UFO Cases-Europe helps fill a void in the libraries of investigators who are interested in central Europe. The author, Illobrand von Ludwiger, is well qualified to present this wide array of physical evidence, having been employed as a systems analyst at Daimler-Benz Aerospace AG-DASA in Ottobrunn, Germany, since 1964. He has participated in rocket launches (1966) as a member of the European Launcher Development Organization (ELDO) and was the project manager responsible for the simulation of traffic for new transportation systems. He has studied UFO phenomena for almost 30 years, and he founded the MUFON UFO Network, Central European Section, in 1974, now named the Society for the Scientific Exploration of Anomalous Atmospheric and Radar-Phenomena, MUFON-CES, Inc., which has employed groups of technically trained and equipped field investigators. Best UFO Cases-Europe begins with three forewords written by John F. Schuessler, Bruce Maccabee, and Richard F. Haines. Each provides different views of ufology in general and of this volume in particular. As Schuessler points out, this work "is especially valuable because it provides data showing a level of UFO activity comparable to what has been going on over the United States, South America, Australia, Japan and other parts of the world." The professional reader of Best UFO Cases-Europe will find much to analyze, but the book will interest even casual readers. Chapter One provides a historical look at UFO cases in central Europe from the 16th through the 18th centuries, as well as more recent phenomena such as the so-called Foo Fighters of WWII and "Ghost Rockets." There are several previously unpublished cases presented. Chapter Two presents five cases involving well-defined UFO shapes that have been reported, mostly from the 1970s through the 1990s. Readers discover that the same object shape names were reported in central Europe as were reported in America. The huge number of triangular-shaped object sightings merits its own chapter, and thus Chapter Three begins with the Belgian flap of November 1989 through April 1991, when at least 3,500 separate UFO sightings were reported. All of these were seemingly of the same large, round-cornered, triangular object. Another 13 new cases are also reviewed, three witnesses of which were MUFON-CES members. Numerous

helpful drawings in both color and black and white are included. In Chapter Four, objects with complex structures are reviewed (eight cases). These cases are important because such highly detailed yet unusual objects do not appear very often. They become more persuasive when citizens in one part of the world independently report seeing the same or similarly shaped objects. The author gives several such examples. The two main sections and 10 subsections of Chapter Five are perhaps the most troubling in this book, for they deal not only with landed UFOs but also their alleged "occupants." The author begins with four cases (1914 to 1954) from Germany and France in which none of the witnesses knew each other, yet the UFOs and "entities" still had the same basic shape. These "entities did not conform to the description of UFO occupants as handsome humans with long hair then in vogue." Three more occupant cases are presented in detail, with emphasis given to aftereffects, various physical and psychological tests given, and artist drawings and photographs of witnesses and locales. Chapter Six is devoted to a single CE-2 nighttime case in Vaddo, Sweden, on November 11, 1956. This case involved two men who saw a silent, flattened, elliptical, metallic sphere approach their car and illuminate the entire surrounding countryside as bright as day. The object seemingly caused their engine to sputter and die and their headlights to go out as it landed directly ahead of them in the roadway. The UFO (estimated at about 8 m in width) was wider than the two-lane road. After some 10 minutes, it took off rapidly. The two men searched the landing area and found that the grass had been flattened on each side of the road and also discovered a small chunk of metal the size of a matchbox. It was still very hot to the touch. Details of the eight separate laboratory chemical analyses that were performed on this sample are given, along with drawings and macrophotographs. It was found to consist of pulverized wolfram (tungsten) carbide and cobalt. Three major photographic cases (Greifswald Lights of August 24, 1990; Guiseppe Lucifora photos of June 19, 1987; Rudi Nagora photos of May 23, 1971) are discussed in Chapter Seven. Thirty color plates present these mostly daylight disc images. Relatively little technical information is included about them. I consider the major contribution of this book to be found in Chapter Eight, "Traces of Unidentified Flying Objects on Military Radar Devices over Central Europe." Within these 34 pages of text and radar plot diagrams are found numerous examples of discontinuous flight paths, very sharp corners, ultrahigh-velocity flight, and other anomalous data. Von Ludwiger also includes the almost mandatory discussions of false returns and the various kinds of radar in use today. He makes the intriguing statement here: "During daily operating procedures, such points or short tracks are interpreted as disturbances and ignored. Every employee of military airspace control is familiar with them. But nobody can satisfactorily explain what they are. They are considered to be radar or the computer program mistakes or some kind of atmospher-

ic phenomenon. They are not reported because of fear of being reproached by colleagues and superiors for (having an) insufficient working knowledge of the radar system." In short, European air-traffic controllers behave the same way as controllers do elsewhere in this regard. Also provided are almost 30 different radar traces of highly provocative aerial flight paths by UFO and unidentified aerospace vehicles. Nonetheless, not even America's F-117A Stealth Fighter can perform most of the maneuvers shown here. Anyone who claims that no radar traces of highly anomalous flight behavior are available should study this chapter in detail. One drawback is the lack of the background technical information that one would need to fully understand these traces. Of course, each radar contact case could fill a book of this size. In Chapter Nine, we are treated to a brief but informative discussion on magnetism and selected details of a "Fluxgate" magnetometer designed and built by two MUFON-CES members. Sample X, Y, Z channel output tracings are given for two 24-hour periods but are not related to UFO phenomena. The following 14 pages (Chapter Ten) are devoted to various data catalogues and statistical analyses, with special emphasis given to physical interactions between UFOs and the local environment. Results of a study of electromagnetic and "gravity" cases by Adolf Schneider are presented (1,319 total events for the period 1930-1982); these data are broken down in many useful ways for the interested reader. Donald Johnson's work published in the Journal for UFO Studies (1983) is also included for comparison. Three other cases are reviewed in moderate detail as well (August 13, 1970; August 14, 1973; and January 8, 1981). Chapter Eleven is, in my view, the second-most valuable contribution of this book. It presents nine proposed explanatory hypotheses for UFO phenomena, although it can be argued whether some actually nestle together in fewer categories. I suggest that only four basic groups are needed: (1) extraterrestrial origin (ET visitation, parallel universes), (2) intra-cranial origin (psychological, paranormal, psychic projection), (3) environmental origin (tectonic strain, earth lights), and (4) exotic origin (interdimensional, the author's "projector theory," which deals with hyperdimensional, "transcendental" manifestations, time travel). Of course, how one subdivides this theoretical "pie" is actually a matter of interpretation, definitions, and personal taste, rather than hard-and-fast science. Nevertheless, such discussions are valuable because they force readers to make explicit what they expect of the data collected about UFO phenomena. In the final chapter, von Ludwiger perhaps may be forgiven his personal interpretations of earlier occupant evidence presented in which he appears either to go beyond the evidence or to disregard still other data. For example, he states, "The fact that the UFO occupants are very human-like speaks for the Earth as their home planet. Sometimes their appearance doesn't differ from that of a European, and witnesses will have heard that these occupants also

UFO occupants were from a time that is still to come. In that case they must avoid any close contact to the people who are born in a different time period, because all actions could influence future events and therefore undesired reactions to the chain of events for the occupants" (p. 157). The author seems to have overlooked a large and continually growing number of cases describing (a) nonhumanoid creatures and (b) narrative interviews with people from around the world who claim to hear occupants speak in no earthly language but with other strange utterances. He also invokes the well-known "categorical imperative" from science fiction that one race should not interfere with the development of another, even itself seen from the future. These few difficulties notwithstanding, Best UFO Cases-Europe is a serious and positive contribution to the literature on a variety of UFO phenomena and is likely to remain so for many years to come. Its rather minor problems with English grammar (English is not the author's native language) are far outweighed by its emphasis on, and clear presentation of, hard data from Central Europe. Illobrand von Ludwiger is to be commended on collecting, translating, and presenting these data for the benefit of many others around the English-speaking world. The National Institute for Discovery Science is also to be commended for bringing this work into print.

Richard F. Haines
325 Langton Avenue
Los Altos, CA 94022

The Last Laugh by Raymond Moody. Charlottesville, VA: Hampton Roads Publishing Company, 1999, paper. 210 pp.
This well-written and entertaining book bears the subtitle A New Philosophy of Near-Death Experiences, Apparitions, and the Paranormal. The author proclaims that this book is an obligatory addendum to his original celebrated work Life After Life. As such, it consists of the thoughts that commercial publishers edited out of his works in a 20-year period. Indeed, the author claims that the publishers over the years hacked out so much of his work that he does not recognize his work anymore. He objects, for example, to covers of books stamped with untruthful exclamations such as "Scientific Proof of Life After Death!" Although he objected in various ways to such extravagant claims about his work at the time, he felt it nonetheless was important to have the work published, especially because publishers were not interested in the work without the addition of such hype. These publisher's tactics were a constant headache for Moody-and a continuing source of embarrassment. In fact, the author frequently claims in this book, and elsewhere, that the idea of proving by appeal to paranormal phenomena, scientific or otherwise, any form of life after death is a waste of time because it cannot, in his view, be done. He is emphatic, however, in insisting that it cannot be disproved either.


Moody's skepticism on this score embraces the view that one can neither prove nor disprove the existence of life after death by appeal to the paranormal, and this is the story he has not been able heretofore to defend in writing. As he says, "The publishers rake in the cash, and I'm the one left to answer the critic's objections-the very objections I had anticipated and resolved in the passages editors cut out of what I wrote, or altered" (viii). To that end, the author claims that, in issuing this book, he is declaring Life After Life null and void. He will accept responsibility for Life After Life only insofar as it is read and interpreted in the broader context provided by this new book. By way of the implications of the subtitle of the book (i.e., A New Philosophy of Near-Death Experiences, Apparitions, and the Paranormal), Moody declares early that this book challenges settled thinking about the supernatural and that it is, in fact, a secession from encrusted discussions on the paranormal. On this point, he is intent on dislodging three distinct sects of true believers who have long dominated learned discussion about these things, and he seeks to do as much by developing an alternative theory about the nature of the paranormal. The three distinct types of true believers are parapsychologists, members of CSICOP (i.e., the Committee for the Scientific Investigation of Claims of the Paranormal), and fundamentalist Christians. After stating that it has been a grave mistake on the part of the media to typecast him as a parapsychologist, he proceeds to characterize parapsychologists as people who typically:
masquerade as scientists, alleging they can prove mind-reading, prophetic abilities or life after death by laboratory techniques or, more generally, by rational procedure. In fact, parapsychologists are pseudo-scientists, which means that they espouse a system of methods and assumptions they erroneously regard as scientific. (ix)

He classifies them with rhapsodists, pipe dreamers, lotus eaters, and wool gatherers; they are all pleasant enough, but their basic assumption is seriously in error. Their basic error, of course, is to think that there is some way of scientifically establishing their claims of the paranormal or of life after death based on the paranormal. He also thinks that those groups who think they have scientific proof of the falsity of the paranormal (groups such as those humanists who are members of CSICOP) are equally problematic. Of them he says:
Many self-styled skeptics about the paranormal join a fringe social movement which advertises itself as a scientific organization while at the same time underhandedly representing itself as a para-law enforcement agency. Let me clarify, parenthetically, that I am not making this up! The members of this social movement call themselves the sigh-cops.. .. For now, suffice it to say first, that sigh cops aren't skeptics but believers in a particular ideology about what knowledge is and how it is acquired, and second, that there are good reasons for believing that their ideology is mistaken. The Last Laugh is an example of philosophical skepticism about the paranormal. I push skepticism far beyond the limits sigh-cops set for their inquiries. By comparison to mine, theirs is a weak-kneed and wimpy approach. (x)

In addition to the parapsychologists and the "sigh-cops" (both of whom are wrong for thinking that one can prove or disprove anything at all about the truth of paranormal experience), we have a third group of true believers, the fundamentalist Christians. This latter group, as one might expect, garners the lion's share of Moody's mirthful but deadly serious attack on true believers. Of this third bickering batch, he says: They are the goshawful deadfannies, stiffs, bores, nuisances, uptight dogmatists, broken records, and wet blankets, the fundamentalist Christians, Religious Right, Bible Brigade, "JAY-zus"-Sayers, Brimfire and Hellstoners, Swaggartists, Falwellers, Bakker-Boosters, Pat Robertsonians, or whatever you would like to call them. Moody claims that all three groups are full of nonsense and that The Last Laugh demolishes all three of these standard approaches to the paranormal and erects a better, more comprehensive and pragmatic system of thinking in their place. With regard to the latter, his positive approach is predicated on the thesis that, while one can neither prove nor disprove any claims about the paranormal or life after death, the value of the paranormal and claims about it falls squarely into the realm of entertainment; he therefore advocates the view of "playful paranormalism." Under this new view, the paranormal is not to be made the battleground in metaphysics, but to be enjoyed and played with joyfully for whatever it might inspire. Apart from whether the paranormal has any epistemic significance in seeking the truth about life after death, for Moody there is something inherently charming, entertaining, enchanting, and funny about it all, and we should not look for anything more in it than a source of fun and entertainment. More on this shortly. The chapters of the book include the following: 1. The Experience of Dying; 2. Play and the Paranormal; 3. Breaking up the Logjam: Unriddling the Controversy about the Paranormal; 4. Miracles, Meanings and Merriment; 5. Believing the Unbelievable Believably; 6. Knowing the Unknowable; 7. Classifying the Paranormal; 8. Justifying the Paranormal or Even the Study of It; 9. The Rhetoric of Dysbelief; 10. The Only Way a Serious Study of the Paranormal Will be Legitimatized; 11. A Treasure Chest Waiting to Be Opened; 12. Coming Full Circle: Back to the Subject of Life After Life; 13. Having the Last Laugh. At every turn in the book, Moody criticizes, for what seem to be altogether persuasive reasons, the mental set and practices of the "sigh-cops" and the fundamentalist Christians. Indeed, it is difficult not to smile when his psychological characterizations hit squarely on the target in expressing his thinly veiled disdain for the epistemic and outdated ideology parading itself as serious science in the practices of the "sigh-cops." A serious critic of the book will note in passing the humorous ad hominems, but then again, Moody's thesis is that ruling against ad hominems seems to presuppose we should take all this very seriously. That is, according to Moody, just the sort of thing we should not do

because it suggests that we are committed to a logical analysis of things in the interest of proving something about the metaphysical import of the paranormal. Moody has lost his patience with what he sees as a fundamentally worthless debate about the significance of the paranormal for understanding the nature of human nature. He laments the fact that this debate has actually detracted from a relaxed examination of, and participation in, the paranormal simply as a source of entertainment and edification. For example, when describing the role of the paranormal discourse vis-a-vis its epistemic import for understanding human nature, he declares himself a playful paranormalist, and says:
"Dysbelieving" about the paranormal is a pastime for shut-ins and stay-at-homes; it is the paranormal as seen from an armchair. Our society is caught up in what playful paranormalists categorize as a couch-potato model. Viewers sit passively on a sofa and watch television panel discussions about the paranormal. There are a couple of people on the panel who describe their own, personal paranormal experiences of perimortal visions, apparitions of the deceased, dreams that came true, or whatever. Then two or more "dysbelieving" experts slug it out before the cameras, pretending to explain what the paranormal experiences means, i.e. whether or not they "prove" life after death, precognition and so on. Viewers are supposed to make up their minds by evaluating what the panel of experients and "dysbelieving" experts say. "Dysbeliever" experts aren't expected to be able to perform the paranormal, or to be able to enable others to experience the paranormal, as Ancient Greek experts were. Playful paranormalists say, to change the study of the paranormal for the better, change the model of participation. Operating under the unific principle that the paranormal is entertainment, it is possible to symphonize anomalistic psychology, social history, and clinical know-how to enable people to have their own, first hand paranormal experiences. Then they will be in a better position to make up their minds about what such experiences mean. That would be to resolve the pivotal dilemma. (1528.)

It appears that Moody believes having one's own paranormal experiences, and doing whatever is required to have them, will provide more illumination on their significance than any objective inquiry. This is not at all unlike the religious mystic's claim that one cannot, by the light of reason alone, prove the existence of God. If one lives as the mystic does, however, then there is a good chance he or she will have the experience of God. In that experience, the need to prove the existence of God dissolves and passes as a juvenile, futile, and worthless activity. Of course, Moody does not say explicitly that one will know or see the metaphysical significance of the paranormal as a legitimate and reliable source of belief in some form of life after death. Nonetheless, I sometimes suspect it is implied in the thesis of playful paranormalism. In addition, it is not far removed from the epistemological thesis that there are some things we know and are justified in believing even when we cannot say precisely what we know or how we are justified in believing it. Whether one should accept this Wittgensteinian thesis or not is a long story not to be taken

up here. Other philoso-
phers have argued the same basic view-if not about the experience of the paranormal, then at least about experience in general. In a continuing set of comments clearly revealing what playful paranormalism amounts to, he says:
Many of those who study the supernatural are chronically distressed because neardeath experiences, ghosts, premonitions and the like are spontaneous and unpredictable happenings that cannot be reproduced under set conditions conducive to scientific examination. So investigators are reduced to sifting through a rubble of reports made sometime after the purportedly paranormal occurrences themselves - retrospective narratives the scientific skeptics belittle as anecdotes. If visionary reunions with the departed, perimortal visions, or apparent foreseeings of the future are what they purport to be, they would involve transaction between ordinary reality and what presumably would be other, alternate levels or dimensions of reality. Science has been extraordinarily successful in part, because of its steadfast and commendable determination to confine its deliberations to this ordinary reality in which we find ourselves. However, the performing arts routinely and reliably effect transactions between ordinary reality and intriguing alternate realities. Playgoers, moviegoers, and concertgoers, for example, regularly are transported, still in their seats, into seemingly different realms of being. They are made to feel that they are in the midst of an entirely different order of things. So by modeling themselves partly on the performing arts, playful paranormalists can entertain the prospect actually of reproducing experiences or phenomena that, when occurring spontaneously, often are deemed paranormal. (p. 153)

In the end, some of us are probably a bit more sanguine about scientific or empirical proof for some form of personal survival after death, even if we have not had such paranormal experiences. For that reason, we might remain difficult to persuade that there is no possible way to empirically confirm, via appeal to the paranormal, some form of personal survival after death. In other words, for those of us who agree with Moody's general characterizations of the three classes of true believer, there still may be some lingering suspicion that his new theory of the paranormal may be predicated on an assumption that needs a bit more persuasive defending, namely, that it is a waste of time to try to prove empirically in any way that some essential aspect of human personality sometimes survives bodily death. His concept of "proof" could be discussed a bit more fully in this regard as well. Otherwise, the claim that it cannot be done could turn out to be another form of the "true believing" that Moody so richly criticizes. Even so, the interesting epistemological point here may be that there is nothing logically contradictory between knowing that something is so by directly experiencing it and knowing that something is so because one has empirical proof of it. At any rate, readers should see nothing wrong with Moody's suggested playful paranormalism as a possible source of reliable belief formation. Nonetheless, the evidence for the reliability of such beliefs, if they are to be items of public knowledge rather than religious belief, requires some empirical confirmation. In the absence of the latter, we could all agree that there are things some people know privately that other people do not know and cannot

be expected to know. This is so simply because some people have experiences that others do not have, and perhaps never will. But this is a long story for another time. In sum, this book is a fun read intended for a literate and large audience. It is replete with wonderful insights and on-target assessments from a person whose original playful paranormalism has contributed so much to an important discussion on human nature, a discussion that might never have taken place so forcefully otherwise. For that, we should all be grateful, even if only for the book's entertainment value. Although one could disagree with Moody's major points, there is much to recommend in this book to anybody interested in the topic of the paranormal and the merits of discussions on it.

Robert Almeder
Department of Philosophy
Georgia State University
Atlanta, GA 30303

The Discovery of the Cold Fusion Phenomenon by Hideo Kozima. Tokyo, Japan: Ohotake Shuppan, 1998. 370 pp. $42.00 (in the USA). ISBN 4-87186046-2.
Hideo Kozima's remarkable book is the first textbook describing cold fusion phenomena. With its more than 400 references and 70 diagrams of experimental results, thorough readers would have difficulty supporting the contention of university physicists that no such phenomenon exists. The author is careful to explain in the introductory chapter how cold fusion is a term based on misunderstandings in early work. Many of the phenomena do not involve fusion, but reactions with neutrons and protons within the solid material in which most of the phenomena occur. The book consists of 18 chapters. Only 10 of them explain experimental work. An unusually large number of chapters, four in all, are about the author's theory and its detailed numerical application to the varied phenomena of so-called cold fusion: the neutrons, the tritium, the heat, and the gamma rays. (Incidentally, he does not mention x-ray emission from electrodes, which has been reliably reported.) Kozima's idea (and ideas of this kind are at the cutting edge of present theories of cold fusion) is connected with neutrons inside the solid lattice. His contention is that there are trapped neutrons. They originate in the atmosphere by the interaction of cosmic rays with nitrogen. They arrive on a solid lattice at about 10² cm⁻² sec⁻¹, and thereafter they can take part in a large number of nuclear reactions inside the solid. In the later chapters, Kozima works out what would be the number of neutrons per cc to be consistent with the results that he examines and comes to the conclusion that, by and large, consistency is reached if the concentration of trapped neutrons is between 10⁸ and 10¹³ per cc.

The first four chapters are introductory and rather lightweight. They talk about the infamous ERAB (Energy Research Advisory Board) Report, in which the cold fusion researchers' 1989 work was interrogated by a number of scientists appointed by DOE (U.S. Dept. of Energy) in a style that would suggest a prosecuting attorney's examination. They touch on the general idea of catalysts and how enzymes react in the body (the appropriateness of some sections here in Chapter 3 seems rather doubtful). Chapter 4 treats nuclear fusion reactions in a classical sense. In Chapter 5, we begin to take off and fly with a discussion of the rediscovery of the cold fusion phenomenon in modern times. The author presents Fleischmann, Pons, and Hawkins (1989) as the discoverers. Nonetheless, the book also makes clear, in later chapters, that several papers reported nuclear reactions in solids before Fleischmann, Pons, and Hawkins. Most remarkable of all, and something new to the reviewer, Kozima quotes a recent book by Kushi (1994) in which a U.S. Army report from the Material Technology Laboratories, dated 1978, is described. This report concerns Energy Development from Elemental Transmutation in Biological Systems. It is said to have validated the work done up to that time as proving transmutation in the cold and also the production of nuclear energy. Other works carried out before that of Fleischmann and Pons include a study of nuclear reactions at U.S. national laboratories, conducted by passing high currents through wires (producing neutrons), and the work of Borghi in 1943 in which neutrons were produced by a passage of high currents through a klystron. Chapter 6 has the essence of the phenomenological descriptions for systems involving deuterium, and Chapter 7 reports similar material for hydrogen-containing systems, although it is far less in extent. In Chapter 8, the idea of thermal neutrons is considered in a general manner. The summary of the experimental data is given in Chapter 9. Then, in Chapter 10, facts with respect to biotransmutation are given, and Kozima implies that it is a general phenomenon in nature. The ticklish subject of reproducibility is presented and discussed. The facts here are clear. If an investigator tries to start a cold fusion experiment on any given day, there is about one chance in five that he or she would see a positive result. If one is willing to wait a few days and try again, a successful experiment may be seen. At the same time, each of the results discussed in this book has been repeated many times in many laboratories in many countries. Therefore, the results are repeatable, but not reproducible in the normal sense. Scientists will have to get used to looking at phenomena like this. Chapter 11 contains the main testing out of the trapped-neutron model. The author produces a noncontroversial equation for the rate at which the reaction of neutrons occurs, but it has an adjustable parameter, that is, the concentration of neutrons in the solid. As stated above, most of the phenomena can be accounted for numerically if the concentration of neutrons in the solid is between 10⁸ and 10¹³ cc⁻¹. It is diffi-

cult to understand how the concentration of neutrons in a given solid would vary so much. In fact, if one studies the table of all the results that have been matched, the range required to get a fit is much greater, more like 10²-10¹³. The author is not forthcoming in commenting on this range; he seems to have an easygoing attitude toward acceptability. By Chapter 12, the reader is immersed in the trapped-neutron theory, and it is compared with various other theories using discussions of the corresponding Mössbauer effect and the role of the electrolyte. Chapter 13 further develops seven other theories and how they compete with the trapped-neutron theory; Chapters 14 through 18 are "postscript chapters." The book really ends after Chapter 13. Chapter 14 is about the energy crisis and how cold fusion might solve it. Chapter 15 is a general chapter about revolutions in paradigms. Chapter 16 presents the views on the field of a number of Japanese scientists. Chapter 17 is about symbols and units, and Chapter 18 is the reference list. To call this a book about cold fusion is perhaps too much. It is particularly oriented toward a presentation of the author's theory. Thus, the typical examples of the reactions of trapped neutrons with the constituents of the lattice are:

n + ⁶Li = ⁴He (2.1 MeV) + t (2.7 MeV),

n(ε) + d = n(ε′) + d(ε″),   d(ε″) + d = ³He (0.82 MeV) + n (2.45 MeV),

t (2.7 MeV) + d = ⁴He (3.5 MeV) + n (14.1 MeV).

The product particles of these trigger reactions carry higher energies than the thermal reaction does and can induce successive nuclear reactions (i.e., breeding reactions). In a qualitative way, the trapped-neutron theory explains a great deal. Once one is convinced that free neutrons are inside the solid, one can see that several transmutation reactions might well take place and produce energy. The major problem is providing convincing evidence for the large number of trapped neutrons that the theory demands. It is difficult to measure in an independent way, and the author uses it as an adjustable parameter. Were Kozima able to establish agreement with the experiment at, for example, 10⁷-10⁹ neutrons per cc, the reader might be able to believe in the model and swallow the discrepancies in the concentration of trapped neutrons present in palladium. Much greater ranges are needed to obtain a fit, however, and one has to ask why. Is this the origin of the famous irreproducibility? Could it be that various pieces of palladium have had various trapping times for neutrons? It is difficult to see that neutrons from the atmosphere, over the several years in which most of the pieces of palladium have existed, could build up to the necessary values. Thus, 10² neutrons per cm² per second implies the need for 10⁴ years to build up 10¹³ neutrons cc⁻¹ (even if all were trapped). Another issue that raises doubts about the model is that Kozima always

stresses the importance of LiOD (or LiOH) electrolyte and the diffusion of lithium into the palladium. It is true that lithium does this, as was shown by Oliver Murphy at Texas A&M in 1990, but other works have used sodium or hydrogen as cations in the electrolyte, and these have led to cold fusion too. Other phenomena that the trapped-neutron theory would seem difficult to accommodate are those observed by Chien at Texas A&M in 1992. He found that whenever he added fresh D₂O to LiOH, the production of tritium stopped and then started again spontaneously after some hours. Correspondingly, it is not easy to see, on the neutron theory, why the potential of the electrode alters the rate at which tritium is produced. Finally, the impact method of provoking nuclear change (little known but now verified) does not have an obvious interpretation in terms of neutrons. Nevertheless, one feels that Dr. Kozima has provided a useful text by placing his attention on neutrons. The details need further working out, particularly the origin of the neutrons and the concentration that he has to assume, but his work clearly strengthens the neutron case. It is increasingly necessary to consider the sociology of this new phenomenon, which has been lurking in the literature for more than 50 years but came to prominence in the 1990s. There are now more than 2,000 positive papers in the literature written after 1990. It is scandalous that one still has to go to specialist journals to obtain acceptance for publication (i.e., the "establishment journals" of physics and chemistry still refuse to publish cold fusion papers, and the U.S. patent office will not accept patents for devices based on cold fusion). This is a historically important fact because it indicates a frozen physics. New phenomena may not be accepted if they disagree with the present paradigm. Thus, the importance of Dr. Kozima's book goes beyond his providing a compendium of cold fusion facts with an interesting attempt at interpretation. It should act as a clanging bell to scientists in general that something is wrong with textbook perceptions in nuclear physics, and perhaps with the lack of perception by physicists that there is always a next step.

John O'M. Bockris
4973 Afton Oaks Dr.
College Station, TX 77845

The Truth in the Light by Peter Fenwick (with Elizabeth Fenwick). New York: Berkley Books, 1997. 278 pp. $12.00, paper. ISBN 0-425-15608-7.
Light and Death by Michael Sabom. Grand Rapids, MI: Zondervan Publishing House, 1998. 240 pp. $12.99, paper. ISBN 0-310-21992-2.
No one can complain about a dearth of books on near-death experiences (NDEs), but many of the authors of the books published on this topic have either written autobiographical accounts or declared themselves researchers and


cobbled a book from the results of an informal survey. Some qualified physicians have even engaged ghostwriters to help them appeal to a wider audience, suggesting that writing a nonfiction bestseller seemed more important than searching for the truth in these unusual experiences. My disappointment with nearly all the books in this field made me both surprised and pleased to find the two I recommend to critical readers here. Both the authors are physicians, and both have a record of publishing serious scientific research. Peter Fenwick (of London, England) is a neuropsychiatrist, a psychiatrist who is also well informed in neuroscience. He based his book on the results of a questionnaire sent to persons who learned about his research on NDEs primarily through television programs. More than 350 persons completed the questionnaire. Although Fenwick had a large sample, its participants selected themselves, and he did not claim otherwise. He interviewed only a few of his respondents. He takes his readers, chapter by chapter, through the principal features of the near-death experience, such as the sense of serenity and enhanced vitality, engulfment by a bright but not blinding light, looking down on one's body from above, reviewing one's life, and losing one's fear of death. Earlier books on near-death experiences have made these features familiar to many readers. Yet such earlier books (I discount autobiographical ones, which too often betray missionary zeal or more venal motives) rarely contain lengthy accounts of individual cases. Fenwick includes many long extracts from his respondents' own accounts of their experiences. In the last four chapters, Fenwick exhibits his mastery of neuroscience in an excellent discussion of alternative interpretations of the near-death experience. He acknowledges our dependence on the brain for our perceptions and nearly everything else that we experience. "Nearly everything" is, however, not everything. How can some persons who are ostensibly unconscious, if not dead, nevertheless perceive their bodies from above and sometimes become aware of events occurring outside the reach of their ordinary senses? Above all, how can our memories depend solely on our brains when persons whose brains seem not to be functioning during these experiences afterward remember them much better than they remember most events that occurred when they were in their normal states? If this book sells as widely as it deserves, a second edition will provide an opportunity to correct some errors in the spelling of names. Those of Melvin Morse (an investigator of children's near-death experiences), Andrew Greeley (an American sociologist), and Richard Bucke (a Canadian psychiatrist and writer on mystical experiences) are misspelled. A philosopher, easily identifiable from the context as Thomas Nagel, is cited, both in the text and in the index, as Tom Nagal.

In 1982, Dr. Michael Sabom, a cardiologist, published one of the few books on near-death experiences that I can recommend to other physicians. It is called Recollections of Death: A Medical Investigation (New York: Harper and Row). Now he has written a second admirable book.


In his latest work, Sabom describes results from a smaller sample of persons who have had an NDE (47 in all) than the sample on which Fenwick based his work. Sabom's investigation, however, does offer three features that are lacking in Fenwick's research. First, most of the patients he studied were from his own practice, and he could appraise their physical conditions himself. Second, he compared the patients who had unusual experiences when critically ill with more than 100 cardiac patients who had comparable physical conditions but had had no unusual experience. Third, he interviewed all 160 of the participants in his study. Michael Sabom particularly inquired about the aftereffects of the NDE on attitudes toward life and death. Using a Life Changes Questionnaire and the previously developed Religious Motivation Scale, he found significant differences between the patients who had had an NDE and those who had not. The life changes noted included increased faith in God, increased belief that life has meaning, and increased positive involvement with other persons. The scale of religious motivation showed an increase in the influence of religion on persons who had had an NDE as well, something quite different from mere attendance at church.

Light and Death also describes some remarkable cases, including one that is quite extraordinary. The patient underwent a delicate neurosurgical operation for the ablation of an aneurysm of the brain. The operation required her entire body to be chilled (hypothermia) and temporarily exsanguinated. She was as dead as anyone could possibly be unless they were "really dead." And yet she perceived events of the operation, including details it seemed most unlikely (but not quite impossible) that she could have learned about normally.

Michael Sabom would not mind if I describe him as an ardent Christian. He frankly states that his investigation derived from his religious beliefs as much as from his interest in science. Some readers may think that his religion may have interfered with his science. I am sure it did not. Everyone has religious beliefs, even atheists, and I find it refreshing that Michael Sabom told his readers explicitly about his.

Near-death experiences remain anomalous in science because few scientists, including physicians, think that their investigation would result in any useful information warranting the efforts required. Why is this? Could the plethora of popular and "best-selling" books, many asserting extravagant claims, have repelled scientists by making the study of such experiences appear ridiculous? Why waste one's time, some scientists must have thought, in examining further the scanty evidence based on genuine scientific inquiries? Sherwin Nuland, in How We Die: Reflections on Life's Final Chapter (New York: Knopf, 1994), took less than two pages to reassure himself, and presumably his readers, that metabolic changes in a failing brain can satisfactorily account for the features of near-death experiences. To his credit, Nuland added a paragraph in which he mused about the inscrutability of God and concluded that perhaps, after all.... God, however, sometimes shows Himself in data


obtained by scientists, and that is why I urge them to read these two books. Those who do so will at least see the need for more investigations of near-death experiences.

Ian Stevenson
Division of Personality Studies
Box 152, Health Sciences Center
University of Virginia
Charlottesville, VA 22908

Psychedelic Drugs Reconsidered by Lester Grinspoon and James B. Bakalar. New York: The Lindesmith Center, 1997 (originally published 1979). 385 pp. ISBN 0-9641568-5-7.
When this book was written in 1979, the intense debate and sensational rhetoric about psychedelic drugs were still recent memory. The time was surely ripe for a book such as Psychedelic Drugs Reconsidered, which struck a welcome note of sobriety after a decade or more of polemics. Psychedelic Drugs Reconsidered is a comprehensive survey of all aspects of the psychedelic question. All its subject matter is accessible to the lay reader, encompassing history, sociology, pharmacology, subjective and medical effects, and therapeutic use of psychedelic plants and synthetic drugs. Although the scope of this book is vast, its concise writing style, careful organization, and judicious editing make it an eminently accessible survey of the field, circa 1977. Because it is a reprint and not a genuinely new edition, some of the book is significantly dated. The pharmacological information it provides is easily updated for those so inclined and generally is consistent with contemporary knowledge. More serious is the sociological datedness. This book was written before the Reagan era, before the War on (some) Drugs and the concomitant mushrooming of the prisons and erosion of civil liberties, before the explosion of the rave scene. Psychedelic drugs occupy a different niche in society than they did in the 1970s. To understand the cultural significance of these substances in the 1990s, one will have to look elsewhere.

Psychedelic Drugs Reconsidered scrupulously maintains at least a superficial objectivity throughout. The authors themselves recognize that impartiality is a problematic goal, as virtually all the language used in the psychedelic debate has become freighted with associations that embody the views of one side or another. Even the label "psychedelic," meaning "soul manifesting," is controversial. Advocates of psychedelic drugs obviously prefer it to "psychotomimetic" (simulating psychosis), an earlier term with less-than-benign connotations. The authors mitigate this difficulty through consistently high standards of scientific integrity. They are not afraid to cite studies that go against the general direction of their conclusions, and they ruthlessly criticize flaws in studies


in their favor. If anything, they tend to be overcautious, offering only qualified conclusions when their evidence seems overwhelming. Such intellectual probity lends credibility to their reportage and conclusions, as do their careful citations and impeccable documentation. The bibliography can only be described as "compendious"; especially useful is the separate annotated bibliography of several hundred titles, which, although dated, would be an invaluable resource for those pursuing a deeper knowledge of the field.

Despite the scrupulous impartiality of the authors in attempting to present only the facts, one sometimes suspects that they are concealing their true opinions. They go to great lengths to make the book palatable to a skeptical audience (according to the jacket cover, even Carl Sagan approved). The rigor of its arguments and its careful documentation make the book ideal for the open-minded scientist with little understanding of the field. True believers will be disappointed, however, for Psychedelic Drugs Reconsidered assiduously avoids committing to any paradigm-shattering conclusions. For example, the work of Stanislav Grof is cited extensively, but even though Grof (who started his career as an orthodox psychiatrist) argues persuasively that the evidence of psychedelic psychotherapy demands a radical rethinking of the present scientific worldview, Grinspoon and Bakalar shy away from such conclusions. They prefer to sit on the fence with assertions such as, "When someone says that under LSD he has relived a past life as an ancient Egyptian embalmer and produces an accurate formula for constructing a mummy (Grof, 1975, p. 170), he has probably read the formula somewhere, even if he cannot remember doing so." Yet one detects an underlying sympathy for the views of Grof and countless others who believe that the psychedelic experience cannot be subsumed under the rubric of conventional science.

They are less equivocal about advocating a resumption of psychedelic research and therapy. Thousands of studies were published in the 1950s and 1960s, many in mainstream scientific journals, only to disappear entirely by the 1980s. Today, when a handful of fledgling ibogaine (a powerful psychedelic alkaloid derived from the African shrub Tabernanthe iboga) and LSD studies are fighting for survival, their arguments could well prove useful to advocates of these studies. They remain current, if only because virtually no new research has been conducted since the book was published. If the authors are right, then the politically impelled cessation of psychedelic research was profoundly unscientific.

Although the overall tone of the book is scholarly and reserved, the authors introduce anecdotes and extended quotations that convey something of the power and wonder of the psychedelic experience. These users' stories animate an extremely sophisticated discussion of the subjective effects of psychedelics. Withholding judgment about the more mystical psychedelic revelations, they persuasively argue that "These phenomena deserve to be placed in a theoretical framework rather than dismissed with casual psychiatric epithets and ad-hoc explainings-away on the one hand or affirmed as self-evident


revelations of the highest truth on the other" (p. 153). Any reader who conceives of LSD as a "hallucinogen" will find their old preconceptions untenable.

For 25 years now, any hint of tolerance or advocacy for psychedelic drugs has been anathema to established political and public opinion. Therefore, in spite of the authors' carefully cultivated objectivity and balance, it is unlikely their work will have any greater impact on policy than it did in 1979. For most of the ideological spectrum, psychedelics are a closed issue. This is unfair. Today, even more than in 1979, aspects of the psychedelic experience are cropping up in fields as diverse as ufology, holistic medicine, death and reincarnation studies, transpersonal psychology, and anomalies research, demanding that we do as the book's title suggests: reconsider psychedelics.
Charles Eisenstein
104 Matthew Circle
State College, PA 16801
chuck@statecollege.com

The Meaning of Consciousness by Andrew Lohrey. Ann Arbor, MI: The University of Michigan Press, 1997. 301 pp. ISBN 0-472-10821-2.
To be fair to the author, I may not be the right critic to provide an unbiased review of his book. I am, to put my cards on the table, an unreconstructed Cartesian dualist when it comes to the mind-brain problem and a follower of Popper and Eccles, who come in for criticism in this book. The author, on the other hand, as Katherine Hayles makes clear in her foreword, "develops a perspective radically different from a presumed separation between subject and object. Instead of seeing a world 'out there' separate from an observer, Lohrey emphasizes the mutual construction of world, subject and discourse" (p. xii). And our author, in his own introduction, confirms what she says when he writes: "a science of consciousness should rely, not on reason or empiricism and their attendant conventions, which asks how consciousness arises from matter, but on a perspective paradigm which reverses the question to ask how matter arises from consciousness" (p. 2). Later, he adds that his book both develops and works from a new, non-rationalist and non-materialist framework or paradigm which challenges much in the scientific and Cartesian tradition of analysis. This paradigm, he points out, owes more to the new discipline of discourse analysis. Having thus laid his cards on the table, the author presents a glossary so that we should be in no doubt as to exactly how he is using the 22 key words or phrases that will crop up in the text. The book is scholarly, and the author knows the literature and is familiar with the controversies it has generated. He has read his Sheldrake, although Sheldrake is taken to task for "his lack of a definition of and a lack of structural content for his notions of habit and morphic resonance."


It is unfortunate that there is no reference to the work of David Chalmers, author of The Conscious Mind, but, as that was published only in 1996, it presumably came too late for the present book, which appeared in 1997. As far as this reviewer is concerned, the parting of the ways comes with Chapter 9, "The Unity of Mind and Body." It is here that Lohrey challenges the pessimism of Karl Popper and John Eccles, who expressed doubts as to whether we shall ever be able to understand "the relation between our bodies and our minds." The author has no such doubts:
I maintain that an isomorphic paradigm, which predisposes to recursive symmetry and therefore unity, can provide a positive answer to this relationship. The nature of this positive answer is that neurological activity in the form of excitation and inhibition are the actual semantic relationships of symmetry, nonsymmetry, and asymmetry and that through these the recursive systems in general of subjectivity are established. The unity of "mind" and body is therefore complete and absolute. (p. 180)

The quotes around the word "mind" say a great deal about the way the author approaches the problem, which is to downplay the distinction, which to many of us seems so unbridgeable, between mind and consciousness on the one hand, and body and matter on the other. But the author is even willing to summon to his aid U. T. Place, who famously argued that consciousness is just a brain process. It is noteworthy that there is nowhere a discussion of the "paranormal," so we do not know whether the author acknowledges or denies the reality of parapsychological abilities. Yet ever since J. B. Rhine, the existence of such abilities has been cited as demonstrating the independence of mind from material constraints. It may be that, given my own philosophic allegiances, I could not be expected to see matters from the author's point of view. But, as I said at the beginning, I may not be the reviewer that the author deserves.

John Beloff
6 Blacket Place
Edinburgh EH9 1RL
Scotland, UK

Journal of Scientific Exploration, Vol. 14, No. 1, pp. 139, 2000
0892-3310/00
© 2000 Society for Scientific Exploration

ERRATUM

The references accidentally were omitted for the book review of Time, Temporality, Now, edited by Harald Atmanspacher and Eva Ruhnau, Journal of Scientific Exploration, 13, 4, 1999, pp. 695-703. The references, cited by reviewer Karl Gustafson, are listed below.

References
Atmanspacher, H., Primas, H., & Wertenschlag-Birkhäuser, E. (1995). Der Pauli-Jung-Dialog und seine Bedeutung für die moderne Wissenschaft. Berlin: Springer.
Bernasconi, J., & Gustafson, K. (1998). Contextual quick-learning and generalization by humans and machines. Network: Comput. Neural Syst., 9, 85.
Brent, J. (1993). Charles Sanders Peirce: A life. Bloomington, IN: Indiana University Press.
Calude, C., Casti, J., & Dinneen, M. (Eds.). (1998). Unconventional models of computation. Berlin: Springer.
Davies, E. (1976). Quantum theory of open systems. London: Academic Press.
Gustafson, K. (1990). Reversibility in neural processing systems. In L. Garrido (Ed.), Statistical mechanics of neural networks. Lecture Notes in Physics 368, 269.
Gustafson, K. (1997a). Lectures on computational fluid dynamics, mathematical physics, and linear algebra. Singapore: World Scientific.
Gustafson, K. (1997b). Operator spectral states. Computers Math. Applic., 34, 467.
Gustafson, K. (1998). Internal sigmoid dynamics in feedforward neural networks. Connection Science, 10, 43.
Lahti, P., & Mittelstaedt, P. (1985, 1987, 1990). Symposium on the foundations of modern physics. Singapore: World Scientific.
Pauli, W. (1953). Die Klavierstunde. ETH, Zurich. Unpublished manuscript.
Richmond, B., Optican, L., Podell, M., & Spitzer, H. (1987). Temporal encoding of two-dimensional patterns by single units in primate inferior temporal cortex. I. Response characteristics. II. Quantification of response wave form. III. Information theoretic analysis. Journal of Neurophysiology, 57, 132, 147, 162.
Schwartz, D. (1981). Isomorphism of Spencer-Brown's laws of form and Varela's calculus for self-reference. International Journal of General Systems, 6, 239.
Spencer-Brown, G. (1969). Laws of form. London: Allen and Unwin.
Webster's third new international dictionary. (1971). Springfield, MA: G. C. Merriam, Publishers.
Wiener, N. (1948). Cybernetics. New York: Wiley.

In addition, the correct keywords for "Experimental Systems in Mind-Matter Research" by Robert Morris, Journal of Scientific Exploration, 13, 4, 1999, pp. 561-577, should be: statistical evidence, p-values, meta-analysis, and re-

Journal of Scientific Exploration, Vol. 14, No. 1, pp. 141-148, 2000
0892-3310/00
© 2000 Society for Scientific Exploration

SSE 19th Annual Meeting at King's College in London, Ontario, Canada, June 8-10, 2000

Announcement and Call for Papers

Program
The Society for Scientific Exploration will hold its 19th Annual Meeting in London, Ontario, Canada, on the King's College campus, June 8-10, 2000. The SSE local host is Imants Baruss. A number of distinguished scholars have accepted invitations to speak on themes that embody SSE's focus on the frontiers of science. This year's program, at the beginning of a new millennium, will give special attention to the future of science and the contributions made by researchers investigating unusual, elusive, and anomalous topics.

Contributed Papers
Contributed presentations from SSE members and from researchers sponsored by members will be welcomed by the Program Committee, which consists of Roger Nelson (Chair), Brenda Dunne, Patrick Huyghe, Wayne Jonas, and Marcel Kuijsten. We especially encourage contributions from young investigators. Abstracts no longer than 400 words should be submitted by April 15, preferably by e-mail and in plain ASCII text, to: Roger Nelson, C-131, School of Engineering, Princeton University, Princeton, New Jersey 08544, USA. E-mail: rednelson@princeton.edu; Tel: 609-258-53709; Fax: 609-258-1993.

Invited Speakers
A number of distinguished scholars have accepted invitations to address themes embodying SSE's purpose as a forum for frontier science. This year's program, at the beginning of a new millennium, will focus on the future of science and the special contributions that can be made by researchers investigating unusual, elusive, and anomalous topics. Talks by our invited speakers fall into the following overlapping categories:
I. The Evolution of Science: Philosophy, Technology, Purpose
R. G. Jahn, Princeton University: Plenary, A Future Science
John Peterson, Arlington Institute: A Futurist Assesses Engineering
Seth Shostak, SETI Institute: Search for Signals from the Universe

II. Biosciences and Medicine: Anomalies in a Pragmatic Framework
Wayne Jonas, Univ. Uniformed Services: Homeopathy, Research Strategies for Hard Problems
Elisabeth Targ, U. C. San Francisco: Distant Healing in AIDS Patient Population


Alexander Berezin, McMaster University: High Dilutions: Isotopicity, Quantum Coherence

III. Language and Consciousness: Window and Mirror
Imants Baruss, King's College: Alterations of Consciousness
Evan Pritchard, Marist College: Comprehensivism: The Way of the Future (and Past)

IV. Forum: Creative, Constructive, Critical Anomalies Research

Logistical Information
See registration form. The following information gives a brief orientation. Further information is available on the Society website: www.scientificexploration.org. For questions and detailed information, please contact the local host, Imants Baruss. Tel: 519-433-3491; Fax: 519-433-0353; E-mail: baruss@julian.uwo.ca.

Airport
International airport with flights from Toronto, Detroit, and Pittsburgh. Ground transportation from Detroit and Toronto.

Hotel
Station Park All Suite Hotel; group reservation number 30214; 242 Pall Mall Street, London, Ontario; Telephone: 1-800-561-4574; Fax: 519-642-2551. Canadian $110 (about $78 U.S.), parking included. Station Park is located downtown at Richmond Row, London's premier business, shopping, dining, and entertainment district. It is 10 minutes away by car from the University campus. Shuttle to campus. Make reservations by April 28.

Accommodations at King's College
Campus rooms, spacious, shared bathrooms, hot breakfast included. $35 (U.S.) per day single occupancy; $30 per day per person for double occupancy. To reserve rooms in the student residences, use the MEETING REGISTRATION FORM. For questions about the campus and campus rooms, contact baruss@julian.uwo.ca.

Field Trip
Friday afternoon: Pinery Provincial Park on Lake Huron, one hour away, near Grand Bend; box lunch; kilometres of hiking and beach. Bring swimwear AND rainwear, sweater or windbreaker, if you stay on to see the sunset, considered among the most beautiful in the world. Alternate trip: one hour to the Stratford Shakespeare Festival.

Society Banquet
Saturday evening at Sunningdale Golf Club (established in 1901 and ranked as one of the top 20 golf courses in the world): outdoor patio, lounge, the formal dining room is all ours, grand piano entertainment and singing after dinner, beautiful grounds, woods, view of creek. Dress code: no jeans or halters.


REGISTRATION & PAYMENT INFORMATION FOR SSE 2000 MEETING

Please Print. All amounts are given in US dollars. (Please copy this form as needed and fill it out for each additional person.)

Name:
Address:
Telephone (Work):          (Home):          FAX:
E-mail:
Name for Badge:
Badge Affiliation (or City):
Number in Party:

Early Meeting Registration Fee: $100                                        $______
Registration Fee After May 5, 2000: $125                                    $______
Daily Registration Fee: $35 early; $45 after May 5, 2000                    $______

Accommodation in King's College Residence (with shared bathrooms and hot breakfast)
One person in room: $35 per night; two people in room: $30 per person per night    $______
Indicate which nights: [ ] Wednesday  [ ] Thursday  [ ] Friday  [ ] Saturday

Lunch, with some choice, in King's College dining room: $5 per day                 $______
(otherwise you hike 2.5 km to downtown restaurants)
Indicate which days: [ ] Thursday  [ ] Friday  [ ] Saturday
Indicate if vegetarian: [ ] vegetarian

Friday evening picnic at Pinery Provincial Park on Lake Huron: $25                 $______
(includes one-hour bus trip, park entrance fee, and box lunch)
Choose: [ ] meat  [ ] fish  [ ] egg  [ ] vegetarian

Saturday evening banquet at Sunningdale Golf Club: $30                             $______
(includes 15-minute bus trip, pre-dinner bagpipes on terrace, and after-dinner lounge piano)
Choose: [ ] prime rib  [ ] puff pastry salmon en croûte  [ ] vegetarian

Total Paid                                                                          $______

Please return with check payable to SSE or with Visa/Mastercard information to: Charles Tolbert, PO Box 3818, Charlottesville, VA 22903. FAX: 804-924-3104.
Card Number: ______-______-______-______    Expiration Date: ______


Fifth Biennial SSE European Meeting

Announcement and Call for Papers
The Fifth Biennial SSE European Meeting will be held Oct. 20-22, 2000, at the University of Amsterdam in the Netherlands. Dick Bierman and Chris Duif are hosting the meeting and coordinating registration. Ezio Insinna is the chairman of the program committee.

Program
A number of distinguished scholars have been invited to speak on the theme: Unorthodox Science: Past, Present, and Future. The following speakers have accepted at the time of publication:
H. B. G. Casimir (Netherlands): On the Nature of Vacuum (video interview).
S. J. Doorman (Dutch Skeptical Society): On the Future of Unorthodox Science.
F. H. van Lunteren (University Utrecht, Netherlands): On the History of Unorthodox Science.
W. Peschka (Germany): Kinetobaric Effects and Bioinformational Transfer.
Pyatnitsky (Moscow Institute for High Temp., Russia): Effects of Consciousness on the Structure of Water.
S. E. Shnoll (Moscow State University, Russia): Realization of Discrete States during Fluctuations in Macroscopic Processes.
R. Sheldrake (UK): Animal Communication.
R. van Wijk (University Utrecht, Netherlands): High Dilutions Research.
K. Zioutas (University Thessalonike, Greece): Evidence of Dark Matter from Biological Observations.

Further information on the program will be announced on the SSE website (www.scientificexploration.org/meetings/euro5.html).

Call for Papers
All interested European SSE members are encouraged to submit abstracts for short contributed presentations to the program committee, consisting of Ezio Insinna, Brenda Dunne, and Dick Bierman. Abstracts no longer than 500 words must be sent before August 15, preferably by e-mail to emi2@worldnet.fr. Address: Ezio Insinna, 18 allee des Freres Lumiere, 77600 Bussy Saint


LOGISTICAL INFORMATION

Traveling
The International Schiphol Airport can be reached easily from almost everywhere. Train connections from the airport to the city (Central Station) run every 15 minutes. Travel time from the airport to Central Station is about 15 minutes. If you are traveling by automobile, please note that there is no free parking around the conference location. Check with your hotel or the local organizer, Dick Bierman, about how to arrange for parking.

Hotels
Main hotel (~$90): Lancaster Hotel, Plantage Middenlaan 22, Amsterdam, the Netherlands. Phone: +31 (0) 20 5356888. E-mail: res.lancasterhotel@edenhotelgroup.nl. Mention group code UVA-SSE when making reservations.
Budget hotel, backpacker style (~$45): Arena Budget Hotels, Gravezandestraat 51, 1092 AA Amsterdam, the Netherlands. Phone: +31 (0) 20 6947444. E-mail: infor@hotelarena.nl.
Information on other smaller hotels within walking distance will be provided on the website. The hotels are situated in a relatively green area near the Zoo and can be reached by taking either a cab from the Central Station or tram nr. 9 (get off at the Zoo for the Lancaster or the Tropenmuseum for the Arena).

Social Activities
The University of Amsterdam is situated near the center of the city, and the program will be set up in such a way that attendees have opportunities to enjoy the cheerful atmosphere and picturesque canals of Amsterdam. A reception and registration will take place on Thursday evening (October 19) at a nearby cafe (to be announced), within a two-minute walk from the Lancaster Hotel. The Society banquet is planned for Saturday evening at the Zoo restaurant, with a splendid view over the animals and gardens. The banquet will be preceded by a trip through the Amsterdam canals. The dress code is informal.

Sightseeing
Friday evening is free for exploring the city. Sunday afternoon might be used to visit the museums, which have large collections of Rembrandt and van Gogh paintings. The Anne Frank house is open for visitors on Sunday afternoon. For more information on local arrangements contact Dick Bierman, e-mail: bierman@psy.uva.nl, fax: +31-20-639 1656.

Registration
Copy this registration form and send it to:
Dick Bierman
University of Amsterdam
Roetersstraat 15
1018 WB Amsterdam
The Netherlands
Or register using the web registration form at our website.


Registration Form: European SSE 2000 Conference

Contact Information
Affiliation for Badge:
Street Address:
City/State/ZIP/Country:
Phone:
E-mail:

Registration Fee
Select registration type:
[ ] Early Fee (before Sept. 1): $85 / Dfl. 190,-
[ ] Late Fee (after Sept. 1): $100 / Dfl. 220,-
[ ] Daily Registration (1 day): $40 / Dfl. 100,-
[ ] Daily Registration (2 days): $75 / Dfl. 170,-
[ ] 50% DISCOUNT: check this if you are an enrolled student.

Lunches and Banquet
[ ] On-Site Lunches ($20 for 3 days)
[ ] Friday Night Banquet ($35) (includes trip through canals)
[ ] Check this if you are a vegetarian

Payment Method
Total amount (the sum of the registration fee, lunches, and banquet): $______
Select payment method:
[ ] Credit Card    Card Number: ______-______-______-______    Expiration Date: ______    Signature:
[ ] Check (make check payable to Euro SSE 2000)
[ ] Bank transfer to PostBank Account 8417584 of Euro SSE 2000, the Netherlands

Please mail registration form to Dr. Dick J. Bierman, SSE 2000, University of Amsterdam, Roetersstraat 15, 1018 WB Amsterdam, The Netherlands

Journal of Scientific Exploration, Vol. 14, No. 1, pp. 149-161, 2000
0892-3310/00
© 2000 Society for Scientific Exploration

Corrected Index for Volume 13
The following is the revised set of comprehensive subject and name indices for Vol. 13, No. 1 through Vol. 13, No. 4 of the Journal. The indexers apologize for the accidental omission of JSE Vol. 13, No. 3 in the version which appeared at the back of JSE Vol. 13, No. 4. The Name Index includes, by author, all articles and book reviews that have appeared during 1999, as well as books by authors that have been reviewed by someone else. Substantive citations to, and quotes from, an author's work have also been indexed; routine references have not. The Subject Index includes a list, sorted alphabetically by author, of all books reviewed in the Journal, under the header "Book reviews." The name of the reviewer is included in parentheses at the end of each entry. The Editors wish to thank Dawn Hunt and James Matlock for compiling the indices.

Name Index
Adams, John Couch, 259,260 Ajello, Elena, 463 Akers, Charles, 627 Alcock, James E., 317,318 Altschuler, M. D., 201 Amann, Anton, 559,696, Introductory Remarks on Large Deviation Statistics, 639-64 (co-author) Anderson, Jane, The Shape of Things to Come, 552-53 (book review) Antoniou, Ionnis, 696 Arechi, F. Tito, 696 Arp, Halton, book review by, 341-43 Arvidson, P. Sven, Intuition: The Inside Story, 337-4 1 (book review) Atmanspacher, Harald, 559,696-97 Guest Editorial: Data Analysis in Mind-Matter Research, 557-60 Introductory Remarks on Large Deviation Statistics, 639-64 (co-author) Time, Temporality, Now, 695-703 (co-author) (book review) Babinski, J., 465 Bacon, Francis, 266 Banting, Sir Frederick, 500 Barbour, Ian G., Religion and Science: Historical and Contemporary Issues, 333-35 (book review) Barbour, Julian, 696 Batcheldor, Kenneth, 247 Bayer, Resat, 190, 193 Beloussov, L. V., book review by, 346-48 Bendernagel, John A., North America's Great Ape: The Sasquatch, 549-51 (book review) Benveniste, J., 146 Bernard, Claude, 266 Birkhoff, George, 594,597 Black Elk, 243,255 Blasband, Richard A., book review by, 706-9 Blofeld, J., 10-11 Bockris, John O'M., book review by, 343-46 Bodner, A. G., 495 Bohm, David, 106 Bohr, Niels, 104-5,500 Boller, Emil, A Rescaled Range Analysis of Random Events, 25-40 (co-author) Bondi, Hermann, 282 Boole, George, 586 Born, Max, 582

Family History: Pedigree and Segregation Analyses, 351-72 Colby, William, 72,73 Comfort, A., 489 Condon, Edward U., 422,454 Conte, 344,345 Coon, C., 551 Costa de Beauregard, O., Comment on "The Timing of Conscious Experience" by F. A. Wolf, 323-25 Cournot, Antoine Augustine, 581,585 Cox, Edward, 232,234,243,247,248, 250,252 Cram&-,H., 642,643 Crater, Horace W. Mound Configurations on the Martian Cydonia Plain, 373-96 (co-author) Reply to P. A. Sturrock's Criticisms, 398-400 (co-author) Response to "Geomorphology of Selected Massifs on the Plains of Cydonia" by David Pieri, 413-19 (co-author) Creegan, Robert F., book review by, 333-35 Curran, Pearl, 250,255 Dalenoort, Gerhard, 696 Darwin, Charles, 262,346-48,349-50, 551 David-Neel, Alexandra, 247 Davis, Elizabeth, 340 Davis-Floyd, Robbie, Intuition: The Inside Story, 337-41 (book review) Davy, John, The Marriage of Sense and Thought: Imaginative Participation in Science, 335-37 (book review) de Broglie, L., 45 de Finetti, Bruno, 584 Dean, D., 147 DeGracia, Donald J., 152-53 Report of Referee on "The Effect of 'Healing with Intent' on Pepsin Enzyme Activity," 149-52 DeMeo, James, The Orgone

Boschinger, Christoph, 116 Boucouvalas, Marcie, 338 Bourron, Michel, 117 Bradish, G. J., Response to Stanley Jeffers, 329-31 (co-author) Brame, E. G., 147 Brandenberg, J. E., 418 Braud, William G., 516-17,518 Braude, Stephen, 3 18 Bridgman, P. W., 557 Broughton, Richard S., 232 Bunnell, Toni, 149-52 Bunnell's Response to deGracia, 152-53 The Effect of "Healing with Intent" on Pepsin Enzyme Activity, 139-48 Burneko, Guy, 339 Bush, George, 80 Butler, Alisa, 236,237,244 Calder, N., 282-83 Calvin, Steve, 232,234,235 Cann, R. L., 496 Carlotto, Mark J., 378 Response to "Geomorphology of Selected Massifs on the Plains of Cydonia" by David Pieri, 413-19 (co-author) Carnap, Rudolf, 584-85 Carpenter, James, 575 Cartier, Th&reseJosephine, 480 Castaneda, Carlos, 10 Chai, C. K., 494 Charcot, Jean-Jaques, 464 Chauvin, Remy Le darwinisme ou la$n d'un mythe [Darwinism, or the End of a Myth], 346-48,349-50 (book reviews) Psychological Research and its Alleged Stagnation, 317-22 Childerhose, Robert J., 207-8 Church, Alonzo, 597-98 Ciochon, R., 551 Clarke, J. M., 494 Clayton, Craig, book review by, 122-24 Cohn, Shari A., Second Sight and

Accumulator Handbook, 706-9 (book review) Diaconis, Percy, 557,563 Dibble, Walter E., Jr., Electronic Device-Mediated pH Changes in Water, 155-76 (co-author) Dirac, P. A. M., 47, 185-87,287,324 Dobbins, Yvonne H., Response to Stanley Jeffers, 329-31 (co-author) Dorit, R. L., 496 Dumas, G., 461 Dunne, Brenda J., 105,108,339-40,569 Response to Stanley Jeffers, 329-31 (co-author) Dyson, Frank Watson, 273,275,276, 278,282


State-of-the-Universe(s) Report, 341-43 (book review) Feynman, 397 Fithian, Elly, 235,236,237,244,249 Flammarion, Camille, 303-4 Fleck, Ludwik, 266 Florshiitz, Gottlieb, Swedenborg's Hidden Influence on Kant: Swedenborg and Occult Phenomena in the View of Kant and Schopenhauer, 545-49 (book review) Flournoy, Thomas, 267 Fogarty, Quentin, 433 Fonkin, V. A., 146 Fox, Roy, 543 Freud, Sigmund, 93-94,3 19

Earman, John, 582 Eddington, A. S., 260, 271,273, 275-78,280-82,286,287,288 Edelglass, Stephen, The Marriage of Sense and Thought: Imaginative Participation in Science, 335-37 (book review) Edelstein, Stuart J., book review by, 349-50 Edmonds, James D., Jr., Variations on the Foundations of Dirac's Quantum Physics, 177-88 Ehlers, Jiirgen, 696 Eibesfeld, Eibl, 3 19-20 Einstein, Albert, 182, 183,260,264, 271,272,278,279-89,328,342 Eldridge, Don, book review by, 552-53 Endler, 541 Erjavec, James L., Response to "Geomorphology of Selected Massifs on the Plains of Cydonia" by David Pieri, 413-19 (co-author) Eshleman, Von R., 423-24,448 Estebany, Oskar, 145 Euler, Manfred, 696 Everret, Hugh, 553 Falconer, D. S., 490 Feder, J., 26,28,31 Fernandes, Marco, 697 Ferris, Timothy, The Whole Shebang: A

Galle, Johann Gottfried, 259 Gebert, Hans, The Marriage of Sense and Thought: Imaginative Participation in Science, 335-37 (book review) Gebser, Jean, 297,298 Gehrcke, Ernst, 283-84 Gemmeli, A., 466 Gerber, P., 283-85,289 Gibbs, Josiah Edward, 580 Gillmor, D. S., 454 Goerres, J. J., 466 Goethe, J. W. von, 335-36 Gonzales, B. M., 494 Gould, Stephen J., 342 Grad, Bernard, 147,321 Grattan-Guiness, I., Real Communication? Report on a SORRAT LetterWriting Experiment, 231-56 Graudenz, Dirk, 696 Griffin, David Ray, Unsnarling the World Knot - Consciousness, Freedom, and the Mind-Body Problem, 121-22 (book review) Gross, Paul R., Higher Superstition: The Academic Left and Its Quarrels with Science, 554-56 (book review) Guest, J. E., 407 Gustafson, Karl, book review by, 695-703

tor Intention: A Review of a 12Year Program" by R. G. Jahn, B. J. Dunne, R. D. Nelson, Y. H. Dobbins and G. J. Bradish, 327-29 Jeffrys, Harold, 583 Johnson, Gregory R., book review by, 545-49 Jonas, Wayne B., 541-42 Non-locality and Homeopathy, 539-41 Josephus, Flavius, 488 Jung, Carl Gustav, 94,292,298,301-5, 308,572,603 Kamber, Franz, 601 Kant, 667 Kant, Immanuel, 545-49 Kaplan, Joseph, 687 Katra, Jane, Miracles of the Mind: Exploring Non-local Consciousness and Spiritual Healing, 129-31 (co-author) (book review) Kauffman, Stuart, 347 Keil, Jiirgen, Do Cases of the Reincarnation Type Show Similar Features Over Many Years? A Study of Turkish Cases a Generation Apart, 189-98 (co-author) Kelleher, Colm A., Retrotransposons as Engines of Human Bodily Transformation, 9-24 Kelvin, Lord, 261 Kenyon, K., 485 Kepler, J., 262-63 Keyhoe, Donald, 686,690 Keynes, John Maynard, 584 Khrennikov, Andrei, 559 p-adic Information Spaces, Infinitely Small Probabilities and Anomalous Phenomena, 665-79 Kiefer, Claus, 696,699-700 Klass, Philip, 443-45,447,448 Klass, Phillip, 201,209 Klose, Joachim, 695 Koestler, Arthur, 89 Kolmogorov, Andrei Nikolaevich, 559, 588-89,590-92,594,595,597, 598,599,600,601,602

Guth, Alan, The InJEationaly Universe, 341-43 (book review) Hahnemann, 540 Halley, Edmund, 258-59,260 Hamilton, Jennifer L., book review by, 703-4 Hamilton, R., 178 Hansen, George P., 232 Harbort, Bob, 340 Harman, W., 500 Harman, Willis W., Biology Revisioned, 703-4 (co-author) (book review) Head, J. W., 411 Hemenway, Donald, 455 Hetherington, N. S., 264 Hiley, Basil, 697 Home, D. D., 318 Honorton, Charles, 626-27,628,629 Horrobin, D. F., 267 Houran, James, "The Jim Ragsdale Story" Revisited: An Appraisal of Early Unpublished Testimony, 113-16 (co-author) Howe, Elias, 500 Hoyle, Fred, 341 Towards and Understanding of the Nature of Racial Prejudice, 681-84 (co-author) Hunt, Dawn Elizabeth, book review by, 124-29 Hurst, H. E., 25-26,31,32 Hyman, Ray, 617,626-27,628,629, 632,634-35 Hynek, J. A., 116,117 Imbert-Gourbeyre, A., 462,466 Ireland, W., 447,448 Jahn, Robert G., 105, 108,569 Response to Stanley Jeffers, 329-31 (co-author) Jawer, Michael A., Human Factors and Anomalous Experience: A Survey Investigation, 542-43 Jeffers, Stanley, Comment on "Correlations of Random Binary Sequences with Pre-Stated Opera-

Konig, H., 593 Koopman, Bernard Osgood, 584-85, 596 Krengel, U., 557 Kress, Kenneth A., 89 Parapsychology in Intelligence: A Personal Review and Conclusions, 69-87 Targ response, 87-90 Krippner, Stanely, book review by, 554-56 Krishna, Gopi, 20 Kronz, Frederick, 695 Kuhn, Thomas, 265 Kull, Andreas, 695,698 Kura, Ertan, 190 Lakotas, I., 266 Laplace, Pierre Simon, 266-67,579-80, 584 Laszlo, E., 142-43 Lateau, Louise, 463 Laughlin, Charles D., 338 Leibnitz, G. W., 297,298-99 LeLorier, J., 631 Leverrier, Urbain, 259,260,272 Levin, Michael, Commercial Funding for Frontier Science, 537 Levitt, Norman, Higher Superstition: The Academic Left and Its Quarrels with Science, 554-56 (book review) Linde, Klaus, 540,541 Links, Jonathan M., book review by, 335-37 Lorentz, Konrad, 182, 183,285,319-20 Lyre, Holger, 695 Maccabee, Bruce Atmosphere or UFO? A Response to the 1997 SSE Review Panel Report, 421-59 Optical Power Output of an Unidentified High Altitude Light Source, 199-211 Maddox, J., 271-72,273,287,288 Mahler, Giinter, 697 Maier, Georg, The Marriage of Sense


and Thought: Imaginative Participation in Science, 335-37 (book review) Malin, Shimon, 695 Mandelbrot, B. B., 27 Manning, Matthew, 142, 145,321 Margnelli, Marco, An Unusual Case of Stigmatization, 461-82 Marnard Smith, J., 494 Martin-Lof, Per, 599,659 Mathews, Olga, 237,244 Matthews, Robert A. J., 630 Significance Level for the Assessment of Anomalous Phenomena, 1-7 Mauge, Claude, Response to the Pocantico Report, 116-20 Maxwell, J. C., 179, 181, 182, 183, 184 McCausland, Ian, Anomalies in the History of Relativity, 271-90 McClenon, James, 232,234,235,243, 254 McClintock, Barbara, 9-10, 11, 19 McCraty, R., 146 McDaniel, Stanley V. Mound Configurations on the Martian Cydonia Plain, 373-96 (co-author) Reply to P. A. Sturrock's Criticisms, 398-400 (co-author) Response to "Geomorphology of Selected Massifs on the Plains of Cydonia" by David Pieri, 413-19 (co-author) McDermott, Zachary, Exploring the Limits of Direct Mental Influence: Two Studies Comparing "Blocking" and "Co-operating" Strategies, 515-35 (co-author) McDonald, James, 209,689 McKague, Lee, Methuselah: Oldest Myth, or Oldest Man?, 483-97 McLaughlin, Robert, 687-88,691-92 McMahon, John, 88 McNally, 205 Mehra, Jagdish, 697 Mendeleev, 259-60 Menzel, Donald, 689,691

Pauli, Wolfgang, 178,298,301-2,304, 318,586,603,698,700 Pearl, R., 493-94 Peierls, Rudolf, 697 Peirce, Charles Sanders, 305-7 Perrin, Jean Baptiste, 595 Pert, Candace, 553 Pieri, David C., 413-18 Geomorphology of Selected Massifs on the Plains of Cydonia, Mars, 401-12 Pineault, Ann, 338 Poher, C., 118 Polanyi, Karl, 265 Polat, Can, 190 Pollard, Frank G., Comments on Reincarnation, 116 Poppel, Ernst, 695-96 Porter, Stephen, "The Jim Ragsdale Story" Revisited: An Appraisal of Early Unpublished Testimony, 113-16 (co-author) Post, Emil Leon, 598 Price, Pat, 72-73,74-79,82,83-84,87, 88-89,130 Primas, Hans, 696,697,698,699,700 Basic Elements and Problems of Probability Theory, 579-614 Prince, Walter Franklin, 250 Puthoff, Harold E., 70,71,75, 80, 87-88 Putigny, Thkrese Putigny, 481 Pyatnitsky, L. N., 146 Ragsdale, Jim, 113-15 Randall-May, Cay, Analysis of Technically Inventive Dream-Like Mental Imagery, 499-513 (co-author) Randle, Kevin D., 114, 115 Ratnoff, 0 . D., 466 Ravenscroft, John, Exploring the Limits of Direct Mental Influence: Two Studies Comparing "Blocking" and "Co-operating" Strategies, 515-35 (co-author) Rees, Martin, Before the Beginning, 341-43 (book review)

Methuselah, 483-96 Miley, George, 344,345 Mill, John Stuart, 260 Miller, A. I., 288 Miller, R., 146, 147 Milton, Julie, 629-30 Miollis, Madame, 480 Mizuno, Tadahiko, Nuclear Transmutation: The Reality of Cold Fusion, 343-46 (book review) Monsay, Evelyn H., 339 Moore, Charles, 687-88 Morgan, T. H., 265 Morris, Robert L., 559 Experimental Systems in Mind-Matter Research, 561-77 Murphy, Michael, 10 Napier, J., 551 Narlikar, Jayant, 342 Neihardt, John G., 231-32,240,243, 250,252,254-55 Nelson, Roger D., Response to Stanley Jeffers, 329-31 (co-author) Neppe, Vernon M, 353 Neumann, Therese, 463 Newton, Sir Isaac, 183,259,298 Nordberg, Robert B., book review by, 121-22 OIKeefe,Eleanor, 236 Ockham, William, 296 Oster, Eileen F., The Healing Mind: Your Guide to the Power of Meditation, Prayer, and Rejlection, 122-24 (book review) Owen, A. R. G., 247 Pais, Abraham, 279-80,283,288 Palladino, Eusapia, 318 Pallikari, Fotini, A Rescaled Range Analysis of Random Events, 25-40 (co-author) Palmer, John, 369 Parker, S. L., 493 Parker, T. J., 411 Partridge, Michael D., book review by,

Reich, Wilhelm, 706-9 Reichenbach, Baron von, 59-60 Rein, G., 142, 145, 146,321 RCnyi, AlfrCd, 589 Reverdit, 480 Rhine, J. B., 232,318,321 Rhunke, Lothar, 455 Richards, John Thomas, 232-33,234, 235,236,243,244,248,252,254, 255 Richet, Charles, 557 Ring, Kenneth, 19 Robertson, Cloretta, 463 Rodeghier, Mark, 114, 115 Roll, W. G., 542-43 Roncalli, Lucia, 340 Rose, Charles, 89 Ruhnau, Eva, 695 Time, Temporality, Now, 695-703 (co-author) (book review) Ruppelt, Edward, 686 Russek, Linda G. S., Registration of Actual and Intended Eye Gaze: Correlation with Spiritual Beliefs and Experiences, 213-29 (co-author) Russell, Bernard, 583 Ryle, Martin, 342 Sagan, Carl, 413,693 Sahtouris, Elisabet, Biology Revisioned, 703-4 (co-author) (book review) Saint-Laudy, 541 Savage, Leonard J., 583 Schlaug, G., 352 Schlitz, Marilyn J., 516-17,518 Schmitt, Donald R., 114 Schnorr, Claus-Peter, 599 Scholem, Gershom, 292,304 Schopenhauer, A., 298 Schopenhauer, Arthur, 548 Schuessler, J. F., 118 Schwartz, Berthold E., 232,237 Schwartz, C. J., 150,153 Schwartz, Gary E. R., Registration of Actual and Intended Eye Gaze: Correlation with Spiritual Beliefs and Experiences, 213-29 (co-author)


Schwartz, Stephen A., book review by, 129-31 Sciama, D. W., 272,273,282,287 Shattuck, Roger, Forbidden Knowledge: From Prometheus to Pornography, 124-29 (book review) Sheldrake, Rupert, 538,555 Sheridan, Joe, 338 Shklovskii, I. S., 692-93 Silberstein, Ludwig, 279-81 Simpson, C. J., 467 Smith, Cyril, 543 Smith, J., 145, 146 Smith, W., 117 Spencer-Brown, G., 557,695,698 Sprague, Roderick, 551 Stanford, Rex, 572 Steiner, Rudolf, 335-36 Stevenson, Ian, 116 book review by, 705 Do Cases of the Reincarnation Type Show Similar Features Over Many Years? A Study of Turkish Cases a Generation Apart, 189-98 (co-author) What are the Irreducible Components of the Scientific Enterprise?, 257-70 Stokes, Douglas M., The Nature of Mind: Parapsychology and the Role of Consciousness in the Physical World, 705 (book review) Stone, Marshall Harvey, 588 Strange, J. F., 400 Strasenburgh, Gordon, book review by, 549-51 Stringfield, Len, 691 Sturrock, Peter A., 5, 116-20 Referee Report on "Mound Configurations on the Martian Cydonia Plain," 397-98 Suchanecki, Zdaslaw, 696 Swedenborg, Emanuel, 545-49 Swords, Michael D., Clyde Tombaugh, Mars and UFOs, 685-94 Szelag, Elzbieta, 696 T., Anna Maria, 467-81

Vandenberg, Hoyt, 686 Velasco, Jean-Jacques, 421-22 Venn, John, 581,585 Verbrugghe, G. P., 488 Virchow, R., 463 von Ludwiger, Illobrand, 421,422 von Mises, Richard, 582-83,586,597 von Neumann, J., 596 von Weizsacker, Carl Friedrich, 585, 695,697,698 Waismann, Friedrich, 582,591 Walach, Harald, 5 39-40 Magic of Signs: A Nonlocal Interpretation of Homeopathy, 291-315 Reply to Jonas, 541-42 Watt, Caroline Exploring the Limits of Direct Mental Influence: Two Studies Comparing "Blocking" and "Co-operating" Strategies, 5 15-35 (co-author) Rupert Sheldrake and the Objectivity of Science, 538-39 (co-author) Whitehead, Alfred North, 279,299-300 Wickersham, J. M., 488 Wickramasinghe, Chandra, Towards an Understanding of the Nature of Racial Prejudice, 681-84 (co-author) Wiener, Norbert, 594,595,596-97 Wiseman, Richard, 232,563,629-30 Rupert Sheldrake and the Objectivity of Science, 538-39 (co-author) Wolf, F. A. Reply to Costa de Beauregard, 325-27 Wright, S., 493 Yost, Casper, 250,255 Ziman, J. M., 263-64

Targ, Russell, 70-7 1,75,80 Comments on "Parapsychology in Intelligence: A Personal Review and Conclusions," 87-90 Miracles of the Mind: Exploring NonLocal Consciousness and Spiritual Healing, 129-31 (co-author) (book review) Tart, Charles T., 297,515 Tattersal, I., 551 Teller, Edward, 687 Tesla, Nikola, 499 Thomson, J. A., 495 Thurston, H., 11 Thurston, Herbert, 466 Tiller, William A. Electronic Device-Mediated pH Changes in Water, 155-76 (co-author) Towards a Predictive Model of Subtle Domain Connections to the Physical Domain Aspect of Reality, 41-67 Tinel, J., 463-64 Tombaugh, Clyde, 686-93 Tornier, E., 557 Torun, Erol, 378 Towe, Bruce C., Analysis of Technically Inventive Dream-Like Mental Imagery, 499-513 (co-author) Trizna, Dennis, 455 Truzzi, Marcello, 571 Turing, Alan Mathison, 598 Turner, Stanfield, 89 Uexkull, Thure von, 306-7 Ullman, Montague, Dreaming Consciousness: More than a Bit Player in the Search for Answers to the MindIBody Problem, 91-112 Utts, Jessica, 557,558 The Significance of Statistics in Mind-Matter Research, 615-38 Valentich, Frederick, 432-33 Vallee, Jacques, 117 Van Allen, James, 687,691-92


Subject Index
Academy of Consciousness Studies, 337 Altered states of consciousness crcativity and, 499-513 Alternative healing, 122-24, 129-31, 215 in clinical setting, 123 pepsin enzyme activity and, 139-48, 149-52,152-53 research on, 130-31 Anomalous phenomena. See also Orgonomy causality and, 292-97 significance testing of, 1-7 Astronomy, 258-59,264 alternative cosmological models, 271-90 UFO sightings and, 685-94 Atmospheric phenomena. See under UFOs and ufology Bigfoot, 549-51 Bilocation transposons and, 20 Biology, 703-4. See also Genetics Darwinism, 34648,349-50 DNA, spiritual healing and rewinding of, 146 Book reviews Anderson, J., The Shape of Things to Come, 552-53 (D. Eldridge) Atmanspacher, H. and Ruhnau, E., Time, Temporality, Now, 695-703 (K. Gustafson) Barbour, I. G., Religion and Science: Historical and Contemporary Issues, 333-35 (R. F. Creegan) Bendernagel, J. A., North America's Great Ape: The Susquatch, 549-51 (G. S trasenburgh) Chauvin, R., Le darwinisme ou l a j n d'un mythe [Darwinism, or the End of a Myth], 346-48 (L. V. Beloussov), 349-50 (S. J. Edelstein) Davis-Floyd, R., and P. Sven Arvidson, Intuition: The Inside Story, 337-41 ( M . D. Partridge) DeMeo, J., The Orgone Accumulator Handbook, 706-9 (R. A. Blasband) Edelglass, S., The Marriage of Sense and Thought: Imaginative Participation in Science, 335-37 ( J . M . Links) Ferris, T., The Whole Shebang: A State of the Universe(s)Report, 341-43 ( H . Arp) Florshiitz, G., Swedenborg's Hidden InJEuenceon Kant, 545-49 (Gregory Johnson) Griffin, D. R., Unsnarling the World Knot - Consciousness, Freedom and the Mind-Body Problem (R. B. Nordberg), 121-24 Gross, P. R. and N. Levitt, Higher Superstition: The Academic Left and Its Quarrels with Science, 554-56 (S. Krippner) Guth, A., The Injationary Universe, 341-43 ( H . Arp) Harman, W. W. and Sahtouris, E., Biology Revisioned, 703-4 ( J . Hamilton) Mizuno, T., Nuclear Transmutation: The Reality of Cold Fusion, 343-46 ( J . O'M. Bockris) Oster, E. F., The Healing Mind: Your Guide to the Power of Meditation, Prayer, and Rejection (C. Clayton), 122-24 Rees, M., Before the Beginning, 341-43 (H. Arp) Shattuck, R., Forbidden Knowledge: From Prometheus to Pornography (D. E. Hunt), 124-29 Stokes, D. M., The Nature ofMind: Parapsychology and the Role of Consciousness in the Physical World, 705 (I. Stevenson)

random event generators, 327-239, 329-31 Entropy, 639-64 Evolution, 258,265,346-48,349-50 creationism and, 495-96 racial prejudice and, 681-84 role of retrotransposons in, 13,21 Experimenter effect, 558,573 in mainstream scientific research, 538-39 Extra-sensory perception (ESP), 262, 562,565,566,570 belief in, and eye gaze, 222-24,227, 228 Einstein on, 328 experimental designs, 568-70 Eye gaze spiritual beliefs and experiences and, 213-29 Faith healing. See Alternative healing Fund for UFO Research, 114 Genetics. See also DNA aging studies and, 494-95 inbreeding depression and life span, 489-94 inheritance of psi ability, 351-72 racial prejudice and, 681-84 retrotransposons, and enlightenment, 9-24 Homeopathy, 263,291-315 non-locality and, 539-41,541-42 Hypnotism, 553 Intention. See also Engineering anomalies research; Parapsychology eye gaze and, 213-29 pepsin enzyme activity and, 139-48, 149-52,152-53 pH changes in water and, 155-76 Intuition, 337-41 J. Allen Hynek Center for UFO Studies, 114 Journal of ScientiJic Exploration, 268

Targ, R and Katra, J., Miracles of the Mind: Exploring Non-Local Consciousness and Spiritual Healing (S. Schwartz), 129-31 Central Intelligence Agency parapsychology and, 69-85,87-90 Chaos theory, 595-600 Clairvoyance, 552-53 of Swedenborg, 545-47 Cold fusion, 343-46 College of Psychic Studies, 232 Consciousness, 17-18. See also Alternative healing dreaming, 91-112 intuition and, 337-41 mind-brain relationship question and, 121-22,557,705 parapsychology and, 705 pervasiveness of, 703-4 quantum mechanics and, 104-6, 109 self-healing and, 122-24 spatial-temporal boundaries and, 323-25,325-27 spirituality and, 553 Consciousness-related phenomena. See also Psychophysiology effects of thoughts, prayers and wishes on, 139-48 eye-gaze and, 213-29 human-machine interactions, 155-76 quantum physical correlations of, 92, 104-7 Cosmology, 341-43 Cryptozoology. See Bigfoot DNA mitochondrial, and human ancestry, 495-86 Dreams and dreaming consciousness and, 91-112 precognitive, 552-53 Engineering anomalies research Princeton Engineering Anomalies Research (PEAR) lab, 108, 327-29,329-3 1

Kant, Immanuel Swedenborg and, 545-49 Koestler Parapsychology Unit, 562 Levitation transposons and, 20 Life spans Old Testament. 483-97 Mars "face" on, 401-2,404,407-8,413-18 Cydonian plain formations, 373-96, 397-98,398-400,401-12,413-19 UFOs and, 691-93 Meditation spiritual healing and, 122-24 transposons and, 20,22 Mediumship physical, 231-56 Mental imagery scientific invention and, 499-513 Mind-brain relationship question, 9 1, 121-22,557,705 non-Boolean logic and, 557 Mind-matter interactions alternative interpretations of, 563-65 defined, 562 Mind-matter research, 557-60,56 1-77. See also Parapsychology information spaces and, 671-72 statistics and methodology in, 558-59,572-75,615-38 study design in, 565-71 Near-death experiences (NDEs) retrotransposons and, 19,20,22 Neurolinguistic programming (NLP), 552-53 Orgonomy, 706-9 Parapsychology, 1, 129-30,228,557, 562. See also Mind-matter research consciousness and, 705 criticism of, 634-36 fraud in, 563,568 ganzfeld, 626-30,63 1-33, 636


intelligence work and, 69-85,87-90 limits of direct mental influence, 515-35 replicability in, 539-40,630-3 1 state of research in, 317-22 Philosophy psi phenomena and, 545-49 Physics de Broglie pilot waves, 45-48,49,54 dual four-spaces, 48-65 electromagnetism, 41,43-45,54-62, 65-67 magnetic anomalies, 43-45 multiple universes, 553 non-locality, 300-301 quantum mechanics, 42,43, 104-6, 109 quantum theory, 177-88,300-301, 337,552,602-3,672-75 theory of relativity, 271-90 Precognition dreams and, 552-53 Princeton Engineering Anomalies Research (PEAR) laboratory, 108, 327-29,329-31 conference sponsored by, 337,339, 341 Psi ability inheritance of, 351-72 Psi phenomena. See also Clairvoyance; Precognition dreaming consciousness and, 103-4, 107-8,109 environmental illness and, 542-43 philosophy and, 545-49 Psi research. See Engineering anomalies research; Parapsychology Psychokinesis (PK), 28-39,562,565-66 experimental designs, 570-7 1 large-scale, 567 macro, 231-32, 233 micro, 537 Psychological tests Minnesota Multiphasic Personality Inventory, stigmatization and, 479 Psychology state of research in, 317-22

160

Subject Index: Volume 13
Service dlExpertisedes Phenomenes des Phenomenes de Rentrees Atmospheriques (SEPRA), 422 Sitter groups, 231-56 Society for Planetary SETI Research (SPSR), 399,413,414,417-18 Society for Psychical Research (SPR), 232,234,236 Society for Research in Rapport and Telekinesis (SORRAT), 231-56 Society for Scientific Exploration (SSE), 265,267-68 Pocantico workshop, 421-22,455 Solar energy, 502-9,5 11 SORRAT. See Society for Research in Rapport and Telekenesis SRI International, 633 Stanford Research Institute (SRI) CIA-sponsored psi research at, 7 1-74,79,80-8 1,129,130 Statistics and methodology, 149-52, 152-53,557-60 Bayesian analysis, 2-6,559 Boolian logic, 586-89,600-603 fractional Brownian motion, 27-28, 39 Hurst effect, 25-40 information spaces, 665-79 large deviation statistics, 639-64 meta-analysis, 531,540,541,559, 624-25 Percentage Score Index (PSI), 518-19 probability theory, 557-58,559, 579-613,665-79 range analysis, 25-40 replication, 539-40,541-42 segregation analysis, 363-66 simple radar equation, 456 Stigmata and stigmatization, 461-82 Subtle energies, 41-67 Survival after death, 10, 11. See also Reincarnation belief in, and eye gaze, 223,228 Swedenborg, Emanuel Kant and, 545-49 Synchronicity, 572,603 homeopathy and, 301-5

Psychophysical Research Laboratories (PRL), 627,629,633 Psychophysiology limits of direct mental influence on, 515-35 stigmata and, 461-82 Racial prejudice color perception and, 681-84 Random event generators, 228 Reincarnation, 116,262 cases in Turkey a generation apart, 189-98 Religion Old Testament life spans, 483-97 stigmata and stigmatization, 461-82 Remote perceptionlviewing, 129-30, 228,569,624,632,633,636 Royal Society, 267 1919 joint meeting with Royal Astronomical Society, 278-82 Sasquatch. See Bigfoot Science censorship of discordant results, 267, 27 1-90 commercial funding for frontier, 537 definitional problem, 257-70 epistemology, 125-29,335-37 experimenter effects in research, 522, 525,530,538-39 history of, 557 intuition in, 339-40 limits on research, 125-29 mental imagery and invention in, 499-513 philosophy of, 257-70,335-37,557, 703-4 religion and, 333-35 sociology of, 554-56 Search for Extraterrestrial Intelligence (SETI), 402,414. See also Society for Planetary SETI Research (SPSR) Second sight inheritance of, 351-72 Semiotics homeopathy and, 305-10

Subject Index: Volumes 13
precognitive dreams and, 552-53 Time conscious experience and, 323-25, 325-27 philosophy of, 695-703 quantum mechanics and, 695-703 UFO sightings, 685-94 New Zealand, 422,431-58 Roswell (New Mexico), 113-16 Switzerland, 422,424-31 Ubatuba (Brazil), 1 17 UFOs and ufology abductions, 551 atmospheric phenomena, 421-59 Colorado UFO report, 422 experiencers, 22 GENAP, 117 GEPANISERPA project, 1 17 Hessdalen Project, 1 19 photographs, image analysis of, 199-211 Pocantico workshop, 116-20, 421-22,455 retrotransposons and, 20,22 Unicat Project, 117

Society for Scientific Exploration Officers
Prof. Peter A. Sturrock, President Varian 302 Stanford University Stanford, CA 94305-4060 Ms. Brenda Dunne Executive Vice President for Education C 131, School of Engineering &Applied Science, Princeton University, Princeton, NJ 08544-5263 Prof. Robert Jahn, Vice President D334, School of Engineering & Applied Science Princeton University Princeton, NJ 08544-5263 Prof. L. W. Fredrick, Secretary Department of Astronomy P. 0.Box 3818 University of Virginia Charlottesville, VA 22903-08 18 Prof. Charles R. Tolbert, Treasurer Department of Astronomy P. 0.Box 3818 University of Virginia Charlottesville, VA 22903-0818

Council

Dr. Marsha Adams 1100 Bear Gulch Rd. Woodside, CA 94062 Prof. Henry H. Bauer Chemistry, 234 Lane Hall VPI & su Blacksburg, VA 2406 1-0247 Dr. Stephan Baumann fMRI Project Director Psychology Software Tools, Inc. 2050 Ardmore Blvd. Pittsburgh, PA 15221 Prof. John Bockris Department of Chemistry Texas A&M University College Station, TX 77843 Dr. John S. Derr Albq. Seismology Center Albuquerque, NM 871 15

Dr. Roger D. Nelson C 131, Engineering Quad. Princeton University Princeton, NJ 08544 Dr. Harold E. Puthoff Institute for Advanced Studies-Austin 4030 W. Braker Ln., Suite 300 Austin. TX 78759-5329 Dr. Beverly Rubik Institute for Frontier Science 6114 LaSalle Avenue, #605 Oakland, CA 94611 Dr. Robert M. Wood 1727 Candlestick Ln. Newport Beach, CA 92660

Journal o Scientific Exploration, Vol. 14, No. 2, pp. 163-194,2000 f

0892-3310100
O 2000 Society for Scientific Exploration

Overview of Several Theoretical Models on PEAR Data

Princeton Engineering Anomalies Research Princeton University, Princeton, NJ 08544

Abstract-Existing PEAR datasets on microelectronic experiments are reviewed for their consistency with several theoretical models. The analysis includes a comparison of the observed data with predictions from bitwise-effect (BIT), Decision Augmentation Theory (DAT), time-normalized (TIM), and teleological (TEL) models for the phenomena. Methods for constructing a model comparison are discussed, together with the merits of relative versus absolute tests and the sensitivity of various test representations. Final results are presented in both frequentist (p-value) and Bayesian (hypothesis odds) formats. Keywords: humanlmachine interaction-Decision Augmentation Theorymodeling methods-analysis methods

1. Terminology The following terms will be encountered frequently: Z score. This is a normalized score used as a statistical figure-of-merit for the degree to which an observation differs from a theoretical prediction. In general, a Z score is defined by Z = (x, - xt)lat, where x, is the observed value, x, is the theoretically predicted value, and a, is the theoretical uncertainty (standard deviation) of the measurement. If the measurement error is normally distributed and the theory correctly describes the experimental situation, one expects Z to follow the standard normal distribution with mean 0, variance 1. A large value of Z is evidence against the theoretical model under which it is calculated. When different models make different predictions for the same observation, a Z score can be calculated for each model. EfSect size. If some number N of identical, independent observations is made, the mean of those observations has a theoretical standard deviation of/*, where at is the theoretical standard deviation of a single observation. proTherefore the Z score of the collected database will tend to grow as vided the same effect (i.e., the same quantitative degree of departure from the theoretical prediction) is present in each observation. This means the Z score is not directly useful for comparisons between experiments because differences in Z may simply reflect different amounts of data in situations where the same phenomenon is observed. To get around this problem one customarily defines

n,

I

164

York H. Dobyns

an effect size c = Z I ~ It .is obvious that E will remain constant as N increases, provided the repeated observations remain mutually consistent. Dataset and database. In general database refers to the entire collection of experimental data under consideration, while a dataset is a subset of the total database extracted according to some analytical criterion. For example, a parameter relevant to a particular theory, which takes on several discrete values in the database as a whole, may be used to segregate the data into several datasets for analysis.
2. Introduction

The current analysis focuses on a subset of data from PEAR published in the Journal of Scientific Exploration (Jahn et al., 1997). Of those data, this analysis will examine only those obtained from microelectronic random sources; in the interests of a consistent database, we shall not include the data from deterministic sources or those from the macroscopic, mechanical experiments. A partial reproduction of Table 2 in Jahn et al. (1997), presenting the relevant data, is given as Table 1 below. The primary consideration in an anomalies experiment is, of course, the question of whether an anomaly is present. Our main tool for examining this will be a Z score against the null-hypothesis prediction. However, different models of the nature of the phenomenon lead to different formulas for calculating the composite Z score across several datasets, for reasons that will be discussed below. Thus, when there are competing explanations, it may be necessary to evaluate them against each other before attempting to make a definitive, quantitative evaluation against the null hypothesis. We shall examine four hypotheses that have been proposed as models for the observed phenomena. Any database of this sort is necessarily a combination of efforts from many individual experimental sessions. There are variables that cannot be controlled effectively in experiments in which the data must be generated by human volunteers at their own convenience. A single experiment combines data from many individuals with different personal characteristics, at different times of day and year, and so forth. In attempting to compare overall models of the phenomena, one must assume that such uncontrollable variables are not conse-

TABLE 1 Data by Experiment Experiment Original Remote Alternate source Co-operator REG2000 REG20 Series 522 212 46 45 44 20 Bits 3.35 x 1.83 x 4.94 x 3.62 x 3.25 x 1.64 x lo8 lo8 lo7 lo7 lo8 lo6
Z

3.809 2.214 2.765 1.635 2.7 18 -0.956

Models on PEAR Data

165

quential to the phenomenon, or at least that their effects are averaged out over the accumulation of a large database. When we say that a model predicts that some measure or other should be "constant across all experiments," we naturally mean this to include the assumption that there are no consequential factors that systematically change between one dataset and another. In fact, we will illustrate one instance in which the presence of such a confound can lead to an incorrect model evaluation if it is not properly taken into account. All of the models discussed here are phenomenological; that is, they discuss what takes place under certain experimental conditions, without addressing the question of how it takes place. For example, the bitwise-effect model explains the anomalous observation as the result of a change in binary probabilities; that change is assumed, and no explanation is offered for how a human consciousness could alter the probability of a physical outcome. It seems advisable to compare such proposed models, even though they offer no deep theoretical insight, because it is unlikely that the deep questions can be addressed adequately while we are still uncertain about the phenomenology.

3. The Models
Bitwise effect. The observed phenomenon consists of a shift in the mean output value of a binary random number generator, in accordance with the human operator's prestated intention. This can be modeled as a change in the probability of the random process, which favors the outcome in accordance with the operator's intention. The bitwise effect model simply states that this is in fact what happens; the probability of a binary outcome is changed by some small amount in accordance with intention. The natural unit for the bitwise model is the bit. The effect size measure, under the bitwise model, is Z I ~ where Nb , is the number of bits. The bitwise model predicts that this measure should be constant across experiments. For brevity, we will use the mnemonic BIT to refer to this model. Decision Augmentation Theory. This theory, commonly abbreviated DAT even by its authors, suggests that no change to the performance of the experimental apparatus actually takes place (May, Utts, and Spottiswoode, 1995). Instead, they suggest that the human operator is able to foresee, unconsciously and by anomalous means, the outcome of an experimental run. The operator thus can choose a moment to start the run when random fluctuations will produce a favorable outcome. The natural unit for this model is the "DAT event" of a single start-of-data-collection by the operator. The effect size measure is Z I ~where N D is the number of DAT events in a single dataset; DAT pre, dicts that this should be constant across experiments. Time normalization. This model suggests that the critical variable in an anomalous experiment is the amount of time, or equivalently the amount of effort, contributed by the human operator (Nelson, 1994). The natural effect size measure is where N, is the number of seconds spent generating data.

~/a,

166

York H. Dobyns

The time-normalization model predicts that this should be constant across experiments. The three-letter mnemonic we will use for this model is TIM. Teleological. Several models have been discussed internally at PEAR that predict a decline over the course of a single experiment, specifically that the effect size should decline as l / f l , where N, is the total amount of accumulated data in the experiment to date. These models are collectively called "teleological" since the first such proposal was based on teleology at the level of individual experiments: It proposed that any experiment rapidly attains a "characteristic" overall Z score, which then holds more or less steady no matter how much further accumulation of data takes place. Unlike the previous models, this one does not have a "natural" effect size associated with it; rather, it predicts that the locally measured effect should decline in a specific way as data accumulate. It also does not make predictions across experiments, but it can be evaluated only by examining the sequential evolution within a single experimental dataset. The mnemonic for this model is TEL.
4. Comparing Models

It is easiest to compare models in pairwise fashion. In general, such a comparison requires identifying a variable such that (a) data with different values of this variable are present in the database and (b) the models being compared make different predictions for the functional dependence of the effect on the variable. We shall make all our pairwise comparisons between theoretical models by comparing the BIT model against another model. This is primarily a matter of convenience and historical preference: The original data collection systems used an implicit bitwise model in their assumptions, and this is still the most convenient representation for analyses. For the comparison of BIT against DAT, the key variable is the sequence length, or number of bits per DAT event. The data under consideration have eight different sequence lengths, ranging from 20 to 200,000. We shall refer to this sequence length variable as n; it is related to Nb, the number of bits in a given dataset, and ND,the number of DAT events, by the obvious Nb = nND. As noted above, BIT and DAT identify different increments of data as fundamental, and the specific functional dependence upon n will depend on which of these is used. BIT predicts that the bitwise effect size, tb= Z I ~ is constant across datasets, while DAT predicts the same of the DAT effect A comparison of these formulas will immediately show size tD= that cb = tD/l/i;. Thus, if we adopt ebas our measure, BIT predicts that the effect is constant, and DAT predicts that the effect varies as I/+. Conversely, if we measure effects by ED,then DAT predicts constancy and BIT predicts an effect that increases as &.We will have some further comments on this duality of representation later. A precisely similar situation applies between BIT and TIM except that the key variable is now B, the number of bits per second, forming the connection

,

~/m.

Models on PEAR Data

167

Nb = BN,. As above, each model predicts that its own measure is constant; TIM predicts that rb oc 1 / a , while BIT predicts c, oc For the comparison of BIT versus TIM, we have three available bit rates: 20, 200, and 2,000 bits per second. The data in the first and last of these categories are somewhat scanty, so we may expect that this issue will be resolved much less clearly than the previous comparison. TEL is somewhat different in that it does not present an alternative fundamental unit for the local interpretation of data; rather, it predicts a specific course of historical change over the accumulation of data in a particular experiment. This prediction applies to any of the effect size measures discussed in the previous paragraphs. For a pairwise test of BIT versus TEL, the obvious choice is to use rb; then BIT predicts that the value should be constant, while TEL predicts rb oc I/&. Finally, there is the question of a null-hypothesis comparison. The null hypothesis simply predicts that there is no effect: rb = ED = c, = 0 . Nonetheless, because the different models weight contributing datasets differently, the confidence with which one can assert E # 0 differs from model to model. The procedure we will develop for the pairwise comparison between models includes, as a byproduct, the generation of a Z score against the null hypothesis for each model involved. Our ultimate candidate for an interpretive test against the null hypothesis should be the preferred model emerging from the pairwise comparisons.

a.

5. Comparison Procedure
In each of the pairwise comparisons, we are confronted with a set of empirical data and a key independent variable: One model predicts that the data should be independent of that variable, and the other predicts that there should be a specific functional dependence. The simplest evaluative procedure at this point is a goodness-of-fit test. We know, for example, that BIT predicts rb = kb, while DAT predicts eb = k D / f i , where kb and kD are undetermined constants of proportionality. We can find empirical values for these constants by minimizing the total squared error between the model and the observed data, and then calculate the x2 of the residual differences between the observations and the fitted model. This is not, however, the most sensitive test available in the current situation. A standard and often-used procedure for such tests is to linearize the model and then perform a linear regression to identify the slope. In this particular case, "linearizing" means not discarding higher-order terms, but constructing a new variable such that the model predictions are linear functions of that variable. We will continue to use the BIT versus DAT comparison as an initial example. If we work in the eb representation, then the DAT model predicts a dependence on 1/fi. Therefore, let us define x = I/&; then the BIT prediction

168

York H. Dobyns

described above remains unchanged, and the DAT prediction becomes Eb = k D x . These models then can be compared directly with a weighted linear regression performed on the observations of cb versus X . Indeed, because we must perform a least-squares fit to find the empirical constants of proportionality, each of the model fits is a special case of a constrained linear regression. The general linear regression will find parameters b l , a slope, and bo, an intercept, such that the line y = bo + b l x minimizes the overall squared error C ( c - y ) 2 . The constant model is a special case of a linear least-squares fit with the slope constrained to be 0, and the model proportional to x is a special case of a linear model with intercept 0. The parameter values found by the linear regression, as optimal fits to the data, can be compared with the values found for the constrained models to establish which model is the better fit for the data. To compare the sensitivity of the linear-regression test with the goodnessof-fit test, a Monte Carlo procedure was used comparing the efficacy of both tests on synthetic data generated from both constant and linear models. For the linear model, a Z score for the slope against a nominal value of 0 was used. This was found to be correctly distributed in constant models and to grow with the size of the effect in linear models. The x2goodness-of-fit test also correctly identified which model had been used to generate the data but generally achieved a lesser degree of statistical significance than the regression test on the same data. This loss of significance was equivalent to that produced by a 17% deflation of the Z score in the regression test. It therefore was decided that the linear regression tests would be used as the primary evaluator for these hypothesis comparisons. The x2goodness-of-fit tests, however, retain a secondary utility. The regression test must always report some values for its parameters, whether the data are actually linear or not. Comparing these with the model predictions can identify a preferred model but will not give a warning if neither model is a good fit to the data. The x2tests, on the other hand, can identify cases in which neither model fits the data well. If x2is large for both models, some further source of variation, not well described by either model, is present.
5.1 Formulas

Each subdivision of the overall database produces a group of datasets, each with its own Zi and Ni, where the Ns may count bits, DAT events, or seconds as appropriate to the effect size measure in use. The ith dataset thus has an effect size observation ci = Z i / f l . In the description of Z scores above, it was noted that Z is always normalized to the standard deviation, or measurement uncertainty, of the observation in question. Thus, a Z score must by construction have a standard deviation of 1. The uncertainty in the effect size ci is therefore ai = 1 I f l . In addition, there is a set xi of values of the key model-discriminating

Models on PEAR Data

169

parameter. After suitable definition of x, we are always comparing two models, one of which predicts ri constant, while the other predicts ri cx xi. The constant model is

E" = kc

where kc = -

1

(1)

Here o(kc) denotes the overall uncertainty on the value of kc that propagates from the individual measurement uncertainties of the ri. The alternative model always predicts a linear dependence E" cx x; we will call this the slope model to distinguish it from the linear model produced by the empirical two-parameter regression. The formulas for this model are:

i = ksx

where k, =

c xi yi
x+;

1 , 0'

and o(ks) =

1

The general two-parameter regression, on the other hand, involves formulas that are rather complicated and can be understood more readily in an incremental presentation. If we first define the intermediate quantities,

we then can simply define the linear parameters:

E" = bo
where
i

+ blx,
bl=Ccl,i~i;
i

bo=C~o,i~i and

o ( b o ) = J m ;

o ( b l ) = J m .

5.2 Problem with the Simplest Approach It might seem that the linear regression provides a direct and immediate test between the two models; because one predicts a slope and the other does not, we need only examine the slope parameter (bl) for statistical significance. This approach has even been used in other work comparing linear models (May, Utts, and Spottiswoode, 1995). However, Figure 1 illustrates a disconcerting feature of such a direct comparison. In Figure la, the eight datasets at different values of the DAT sequence length n are plotted against I/&; the vertical axis is the bitwise effect size rb. Figure l b uses the dual representation, plotting rd against &. The two dotted lines in each graph show the two models; the solid slanting line shows the regression fit. In the first graph, the slope of the regression line seems rather am-

170

York H. Dobyns

0

100

200 300 400 Square root of sequence length DAT=Constant, BIT=Slope

Fig. 1 . Model comparison in two representations.

biguous between the two predictions; in the second, it is extremely close to one of the predictions. It would seem that the representation in the first graph allows no very strong preference between models, while the representation in the second graph definitely supports one. But it seems very odd to arrive at such different conclusions, when one considers that we are examining the same two models on exactly the same data; only the representations differ. Some thought will demonstrate that a simple comparison, whether visual or numeric, between the slopes of the regression line and the slope model is suspect. Both slopes are obtained from a least-squares fit to the same data; a com-

Models on PEAR Data

171

parison between them, using the error figures in the formulas above, is liable to overestimate the variance of the difference and thus underestimate the significance of that difference. That this is indeed the case was established by a systematic Monte Carlo evaluation of the slope-comparison procedure. The normal statistical interpretation of a Z score requires that it be normally distributed with mean 0 and variance 1 when the model it is testing is true. So to determine the validity of a slope-comparison Z score, the Monte Carlo program constructed synthetic datasets according to the requirements of a slope model. Figure 2 shows the result of a 10,000-iteration Monte Carlo evaluation of the slope-comparison Z score (bl - k , ) / a (bl) for data spaced as in the BIT versus DAT comparison. The points with error bars show the empirical population density, the solid curve shows the corresponding normal fit, and the dotted curve shows the standard normal probability density. We can see that the Z score remains normally distributed, but its standard deviation is much too small, about .48 rather than 1. 5.3 Vector Formulation How, then, can we obtain reliable tests of the two models? A certain amount of analytical development is needed to demonstrate the proper tests. Let us, first, reprise the situation: For any model comparison, after identifying the key discriminator variable in the database, we have a set of observations ci of the effect size, each with uncertainty ai, obtained at a key variable value xi. We have, in essence, four models to consider. The null hypothesis contends that

-2

-1

0 Z score

1

2

Fig. 2. Monte Carlo population density for slope-comparison Z score.

172

York H. Dobyns

there is no effect and that all of the observations differ from 0 only by noise. The constant and slope hypotheses have been discussed in the previous section, and the linear regression formulae were given in Equation 3 amount to the assumption of a two-parameter model. By explicitly adding a noise term v i , assumed to be normally distributed with mean 0 and standard deviation ai, we may express the models in terms of predictions for individual observations: (Null) (Constant) (Slope) (Regression)
:
€i

= Vi = kc = kSxi
Vi

: :
:

Ei Ei
€i

+ vi = bo + blxi + vi.

(4)

Now let us change notation by dividing each of these equations by ai. Defining the new variables,

allows us to re-express Equation 4: (Null) (Constant) (Slope) (Regression)
: zi = ~i
: zi = kc(c,i

:

Z i = ks(s,i : zi = bo(c,i

+

+ Ei
Ei

(6)

+ bl(s,i+ E i .

Note that the new noise terms, c i , all have variance 1, from their definition in Equation 5. Also, because we assumed that the initial noise terms, v i , were normally distributed, the normality of the error terms will continue to propagate through subsequent analyses. (This assumption of normality is quite accurate for the empirical data presently under consideration.) To alleviate an excess of subscripts, we may take advantage of the fact that the model equations can readily be written in terms of vectors rather than individual indexed equations. Thus, for example, 2 is the vector whose individual elements are the zi corresponding to individual observations: 2 = { z i ) = {zl, z2, . . . , z m ) ,where there are a total of m observations. With similar notations for the other indexed variables we may write (Null) : (Constant) : (Slope) : (Regression) :
=

i = kc& + 2 i = k,& + 2
2 = bogc + bl gS+ 2.

(7)

Before proceeding further, let us recall the basic operations of vector arithmetic. If we have two vectors x' = { x i } and y' = { y i } ,the vector sum, x' + y' = {xi + y i ), is the vector obtained by adding the components of the two vectors.

Models on PEAR Data

173

The product of a scalar (simple number) with a vector, as in kx' = {kxi),is the vector obtained by multiplying each element of x' individually by k. The inner product, -? . 9 = Cixiyi, is the result of multiplying each element of 2 by the corresponding element of y' and adding up all the resulting products. The length of a vector (x'( can be found by taking the square root of its inner product with itself: -? - x' = 1-?12. more useful result is not a vector identity but One follows from applying these formulas. Because the zi defined in Equation 5 have been normalized to the observational uncertainty, they all have a variance 1. The vector 2 used in Equation 7 therefore has a variance 1 in each element: 02[i] . Such a vector has the property that the variance of its inner =i product with any constant (i.e.,nonrandom or zero-variance) vector x' is

This can readily be verified by explicitly writing out the inner product and applying the standard rules for variances of sums and products. All of the formulas in Equations 1 through 3 can be reexpressed in this vector notation, with exactly the same values for the model parameters, kc, k,, bo, and bl. Let us begin by considering a least-squares fit to the constant model. We note from Equation 7 that both the constant model and the slope model have exactly the same functional form, 2 = kg + Z , differing only in a change of subscript; therefore, a single derivation will do for both. Because the noise tern, 2, is unknown, we can solve for it as the error between the model prediction and the empirical value: 2 = 2 - k$. The optimal value of k is the one that minimizes the length, or equivalently minimizes the squared length, of this error. This squared length is

To find the minimizing value of k, we take the partial derivative a(l;12)/ak and set it to zero:

By substituting the definitions of Equation 5 for zi and (c,i, we find that this gives us Equation 1 for kc; by instead substituting {sti, we recover Equation 2 for k,. By applying the variance identity (Equation 8) to the last line of equation 10 above, we find

York H. Dobyns

which because l$cl = and I$,1 = appropriately reproduces Equation 1 for (kc) or Equation 2 for o(ks), depending on whether we , insert f C or $ for $ in the last line of Equation 11. The vector formulas for bo and bl are likewise identical, whether one rewrites Equation 3 in terms of 2, gc, and or performs a least-squares-fit calculation directly on the model formula in Equation 7. Because the vector notation avoids the tedious enumeration of indices and summation signs, we present the vector forms directly rather than through the intermediate step of the co,1 coefficients:

,/= ,/6

= y' . x'), we may note from Because the inner product is commutative (x' Equation 12 that bo and a(bo) differ from bl and o(bl) solely through the systematic exchange of c and s subscripts. There is one more set of quantities of interest that can be computed directly from Equations 10 through 12. These equations give values and uncertainties for the four parameters kc, k,, bo, and b l . For each parameter, we can therefore calculate a Z score against the hypothesis that its value is 0. From Equations 10 and 11, we can calculate that

which can be applied to either k simply by attaching the appropriate subscript. The values of Zo and Z1 corresponding to tests of nonzero value on bo and bl are from Equation 12:

Models on PEAR Data

175

, As we might expect from the earlier observation on Equation 12, Zo t Z1 upon substituting & t* &.

5.4 Abstract Modeling Problem
Let us temporarily take our leave of the complicated least-squares-fit formulas and consider a very abstract representation of modeling data in multiple dimensions. We start with the assumption that we have a set of observations, which have been normalized such that each observation has unit variance. We summarize the set of observations as a vector ? with as many dimensions as we have observations. Because ? has been constructed to have unit variance, we can express it as two components, Z = (2) + 2, with a deterministic part (?) and a random c_omponent 2 defined as in the previous section: (2) = 0 , a2[2] = a2[1] = 1. Let us presume further that we have two vectors, G1 and G2, which are selected on theoretical grounds as being plausible models for the phenomena. The observed results might be explained by the first model, by the second, or by some linear combination of the two. We can summarize all three possibilities, along with the null hypothesis, by these four formulas for (2): Ho : (Z) = 0; HI : (2) = alG1; H2 : (1) = a2G2; H3 : (2) = BiGi

+ 8232.

Figure 3 illustrates a two-dimensional example of this formalism, with the observation vector ? and the theory vectors GI and 32 shown in solid black, while one speculative possibility for the model vector (2) and the noise vector 2 are drawn in gray. The equations (Equation 15) form a hierarchy, in the sense that lower-numbered models are contained within higher-numbered ones as special cases: Ho is a special case of either H I or H2, while either of the latter is a special case of H3. In each case the simpler model is produced by setting a parameter of the more complex model to 0. The questions we wish to answer are all of the form of asking whether this in fact holds true. Within the context of HI or H2, is a = 0 so that Ho is actually true? Within the context of the two-parameter model H3, is one of the parameters actually 0 so that the model reduces to HI or H2? It is relatively straightforward to find an appropriate test for the null hypothesis Ho within the context of either H I or H2. First, noting that H I and H2 are

176

York H. Dobyns

Fig. 3. Example of theory and observation vectors.

identical in functional form, differing only in subscript, we may for purposes of the derivation drop the subscript and simply work with (2) = a;. Since the observation vector 2 is constructed to have unit variance, a properly distributed Z score for Ho can be constructed by taking the parallel projection of ? along the hypothesis vector: that is, the component of the length 121 in the direction defined by the hypothesis vector v' . This projection is

We can readily verify that this formula gives us a valid Z score for Ho. The expectation and variance of Equation 16 are given by:

making use of the variance identity (Equation 8) and the model substitution (2) = a;. We see that the variance of Z is 1, as required, while the expectation of Z is proportional to a, so that it is 0 when Ho is true and non-zero when Ho is false. The question of evaluating whether the general two-parameter model H3 reduces to a one-parameter model is more involved. If one simply looks at the projection of along one of the hypothesis vectors, one is repeating Equation

Models on PEAR Data

177

16 and computing a test statistic for Ho within H 1 or H2, rather than computing the desired test statistic for H 1 or H2 within If3. The fundamental problem is that we have no guarantee that the vectors S1 and G2 are orthogonal. (Indeed, in our intended application, they are not.) Given nonorthogonal hypothesis vectors, simple projections o f ? along and G2 will both have nonzero expectation if either has a nonzero coefficient in reality. If, let us say, the actual model is H I , so that (Z) = alGl by hypothesis, the expected value of Z i = 2 51 / 1 GI I is (Zl) = a 1 I GI I. The expected value of Z2, on the other hand, is (2) . ;2/1521 = a 1 Gl ;2/1 s 2 1 ; this vanishes only if GI - G2 = 0, i.e., if the two hypothesis vectors are orthogonal. For the general case with nonorthogonal hypotheses, both (Z1)# 0 and ( 2 2 ) # 0 regardless of which hypothesis is true. Clearly neither Z 1 nor Z2 can be a satisfactory test against the hypothesis that one of the p parameters in H3 is 0. For definiteness, let us consider the case in which we wish to test whether p2 = 0, so that H3 reduces to H I . (Since the notation is completely symmetric, we can do our derivations once and then get the formulas for the other case by interchanging 1 t 2.) To obtain a Z score against this possibility, we need to , know the extent to which the observation ? requires a component not simply proportional to G1 ; that is, we need the projection of ? not along G2, but along that component of s2orthogonal to 3 1 . Let us label this orthogonal component 6 2 ; its value is given by

s1

One can readily verify that Gl . 6 2 = 0, so that G2 is indeed, as we require, orthogonal to G l . The Zscore against p2 = 0 is then just the projection of? in the G2 direction:

Let us verify that this does in fact have variance 1 and expectation 0 if, and only if, p2 = 0.

We see that the proposed Z score against p2 = 0 has a variance 1 always and has an expectation value proportional to p2; therefore, it is Z-distributed if and only if /32 = 0, as required. If we choose to expand 6 2 in terms of its definition (Equation 18), so as to express all quantities in terms of the original hypothesis vectors and s 2 , the form of ZB2from Equation 19 becomes

s1

178

York H. Dobyns

zp2= --

2 I"'

ij2

i

[ G ~.(
+

(
[G2 -

-

)

-

Hjl)

il.i2

+

-2

If we multiply the numerator and denominator of the last line of Equation 21 by (GI 12,we finally obtain

1
I

with a corresponding Zp obtained by interchanging 1

t ,

2 in Equation 22:

Equations 22 and 23 should look very familiar in form. The equations in Equation 15 were deliberately laid out to suggest the form of the equations in Equation 7; Ho, HI, H2, and H3 are identical to the null, constant, slope, and regression models, respectively, if one identifies the symbols:

These identifications make it very clear that Equation 16, giving the Z score for a one-parameter model against the null hypothesis, is the same as Equation 13 derived from least-squares parameter evaluation, and that likewise Equations 22 and 23 for the Z scores of the parameters of the two-component model are identical to Equation 14 for the Z scores of the regression parameters. These identities, particularly those equating Equations 22 and 23 with Equation 14, are critically important. We have already derived and demonstrated the fact that Zgl and Zp2 are correctly distributed, properly normalized, and independent Z scores for the evaluation of their respective hypotheses. We now see that the least-squares method of parameter evaluation automatically generates the extra, orthogonalizing terms required to make sure that Zgl tests only the parameter Dl regardless of the value of P2, and vice versa. Because we made no particular assumptions about the values of or G2 in the derivation from Equations 15 through 23, this derivation of the validity of the Z formulas is entirely general; in particular, it holds despite the fact that our regression formulas (Equation 3) use the origin of coordinates, rather than the mean of the xi values, as the origin of the regression. One small subtlety needs to be kept in mind when evaluating these models.

Models on PEAR Data

179

The Z scores of Equations 22 and 23 are Z scores against the null hypothesis that their respective parameters are 0. To consider Zg2 (which is Z1 in the notation of Equation 14), this is a Z score against the hypothesis that p2 = 0 (bl = 0), in which case H3 (regression) reduces to H1 (constant). Thus, the Z score of the slope is the test that can refute the constant model, by demonstrating a nonzero value of the parameter that must be 0 if the constant model is true. Likewise, the Z score of the intercept (Zgl, Z o ) is the Z score against H2 (slope model). Despite the power of the vector formalism, clarity of visualization requires that the original notation of €i, xi, and a be retained for illustrative purposes. i Because graphs on paper are inherently two-dimensional, and most readers are three-dimensional, it grows very difficult to render graphically vectors in spaces of four or more dimensions. It is true that any two arbitrary vectors can define a plane, so that a representation such as Figure 3 can show the relations of the theory vectors to each other and to the projection of the data into their common plane. However, such a compressed representation makes it impossible to visualize the actual data for most analyses. Therefore, the analyses in subsequent sections will retain the familiar graphs in xi, Ci space as well as the vector representation developed here.

5.5 Resolution of the Quandary
With this new approach to calculating the Z score between the linear fit and each model, we find a fully symmetric duality between the two representations of the data in any model comparison. If we switch from one model's definition of the fundamental effect size to the other, the role of constant model and slope model are exchanged. But all of the statistical parameters also change places in a symmetrical fashion. The Z scores of the model parameters against the null hypothesis, of the linear regression against each model, all change places so that the same statistical figure-of-merit is associated with the same model. Even a X 2 goodness-of-fit test against the residuals from the model displays the same behavior. All of the statistical measures are associated with a specific model, regardless of the data representation used (see Table 2).
TABLE 2 Duality of Representations

I

i

Parameter

z/A effect
l/fi
BIT DAT 5.80 3.97 1.30 4.43

z

effect I

~

X axis Constant model Slope model Constant Z vs. null Slope Z vs. null Regression Z vs. constant Regression Z vs. slope X$ vs. constant x;vs. slope

DAT BIT 3.97 5.80 4.43 1.30

fi

180

York H. Dobyns

This exact equivalence between representations is, of course, further evidence that we are justified in using the techniques discussed in the preceding section: The comparison between two models and a set of data should be independent of the choice of representation. Having established that both representations are exactly equivalent when properly analyzed, we are now free to choose either representation without concern that our choice biases the conclusion. The remaining figures therefore will display the BIT representation exclusively.

6. Model Test Outcomes
Although we have already seen some of the DAT versus BIT comparison results, since it was the example used in the previous section on developing the comparison methodology, we will cover the matter in more detail here. First, there is the matter of the raw data. The experimental summary in Table 1 is inadequate, because several of the experiments enumerated collected data at more than one sequence length, while in other cases more than one experiment used the same sequence length. Table 3 shows the result of collecting the data according to sequence length, the key variable for the DAT versus BIT comparison. When one calculates the statistical parameters for the comparison between models-which, as noted in the previous section, are the same whether one
TABLE 3 Data for DAT vs. BIT n
20 200 1,000 2,000 10,000 20,000 100,000 200,000
Nb

ND lo5 lo7 lo6 lo7 lo8 lo7 lo8 10' 1,600 163,350 1,200 24,340 15,182 3,522 2,040 2,106

Z

3.2 x 3.267 x 1.2 x 4.868 x 1.5182 x 7.044 x 2.04 x 4.212 x

Note: DAT = Decision Augmentation Theory; BIT = bitwise effect. TABLE 4 BIT versus DAT Comparison Comparison BIT vs. null (2) DAT vs. null (Z) BIT vs. regression (2) DAT vs. regression (2)
2 X7 on BIT 2 X7 on DAT

Score

P

Note: BIT = bitwise effect; DAT = Decision Augmentation Theory.

Models on PEAR Data

181

uses z/A Z or J as one's measure of effect size-one finds the values ~ presented in Table 2, represented here in Table 4 for more convenient model identification. These outcomes are summarized graphically in Figures 4 and 5. Figure 4a shows the familiar regression graph, and 4b shows the various model parameters directly so as to facilitate comparison. Figure 4b plots model slope against model intercept, in units of the empirical uncertainty associated with each measurement. In other words, it plots the

Z vs Constant: 1.303 ;

Z vs Slope: 4.427

0

2 4 6 Intercept Z score

8

Fig. 4. Model comparison: BIT = constant and DAT = slope.

182

York H. Dobyns

(b): Normalized Residuals

Fig. 5. BIT constant and DAT slope as vectors.

Z scores of the two models. The null hypothesis is represented by the filled dot at slope 0, intercept 0. The horizontal dotted line represents the family of all constant (zero-slope) models. Recall that in the current data representation, these are BIT models. The open circle on this line shows the actual model in this family that is the best fit to the data. The vertical line shows the family of all DAT-like, zero-intercept models. The open diamond on this line shows the actual DAT model that best fits the data. The two-parameter linear regression fit also is shown, but not directly as a Z score. To provide a valid visual comparison, its slope and intercept are plotted to the same scale as the slope of the slope model and the intercept of the con-

Models on PEAR Data

183

stant model. Because the two-parameter regression in general has larger uncertainties on each parameter than the one-parameter models, this means that the position of the regression fit cannot simply be read off the axes to show a Z score. To provide a visual cue to the degree of uncertainty in the regression, 1 o and 2ocontours have been drawn around the fit point. We can clearly see that the general linear regression is quite close to the constant, BIT model, and very different from the entire DAT family of models. Figure 5 illustrates the vectorial representation of the problem. As noted in section 5, any two (nonparallel) vectors define a plane, however high the dimensionality of the space in which the vectors themselves exist. Figure 5a presents a view of this "theory plane" defined by for BIT and & for DAT. The two theory vectors themselves are drawn at unit length (the absolute scaling of this space is irrelevant to the theoretical comparison), with dots drawn at every unit increment of the vector length to help the eye project the _vector's direction. Shown as a closely dotted line is the observation vector Z, or rather its two-dimensional projection into the theory plane. This representation, free of the visual biases induced by a regression plot, allows an immediate appreciation of how much more closely the observation falls along the direction than the direction. Finally, Figure 5b deals with the residual vector 2 = ? - (bogc bigs), that part of the observation vector ? that is not accomodated by eitker model at all. A few moments of vector algebra will allow one to verify that R is perpendicular to both and and is therefore perpendicular to the entire theory plane. The viewer is assumed to be sighting along the theory plane, so that it forms the dotted line at y = 0.The solid line embedded in that dotted line is the projection o f ? into the theory plane, while the vertical line and the vector .? itself are shown extending slightly into a perpendicular dimension. If one pretends, for a moment, that Figure 5a is a three-dimensional object rather than a flat plot on paper, the view in Figure 5b is what one would achieve by rotating the page slightly clockwise, so as to make the vector ? shown be level rather than inclined, and then tilting the page away from oneself so that one is sighting along the paper, rather than looking down on it. It must be noted, however, that this lower graph is labeled as showing the "normalized residuals," rather than simply the "residuals." The reason for this is to avoid a visual deception arising from our accustomed three-dimensional experience. In the three-dimensional space of our everyday experience, a given plane has exactly one perpendicular axis. This is not so in more dimensions. The eight-dimensional space that Figure 5 attempts to summarize permits six mutually perpendicular directions all of which are also perpendicular to the theory plane. Under the hypothesis that (?) lies in the theory plane, the actual observation Z is expected to depart by a Z-distributed amount in each of t_hos_esix orthogonal directions, so the squared length of the residual vector, R R, is clearly a with+a number of degrees of freedom equal to the number of dimensions in which R exists. Therefore, the visual presentation normalizes this length. Figure 5b reports the x2 value, and its degrees of freedom, explicitly:

tc

ts

tc

+

zc

ts

x2

184

York H. Dobyns

but, because the eye only expects one perpendicular to a plane, rather than six, the vertical distance shown is the Z score that has the same two-tailed p-value as the X 2 calculation. The vertical axis can be read off directly as a Z score; it represents the length that the residual vector would have if the entire vector space were confined to the three dimensions we are accustomed to visualizing. Turning now to the comparison of BIT with TIM, Table 5 presents the raw data when segregated according to the key variable of bits per second. These figures result in the model comparisons summarized in Table 6. Figures 6 and 7 illustrate the model comparison between BIT and TIM in the same way as Figures 4 and 5 do for BIT versus DAT. In Figure 6a, the x axis is, of course, 1 / a . As it happens, we have only three data points for this plot, and one of them has rather large error bars. This is the reason for the very wide error contours visible on the lower plot; despite the fact that both the BIT and TIM models are almost 6 0 away from the origin, the ability of the regression to discriminate is very much weaker, so that the 2 0 contour covers almost the whole plot. Nevertheless, the regression fit is visibly much closer to the BIT model than to the TIM model. Figure 7a illustrates the same point: The acute angle between the two theory vectors shows why ? has little power to distinguish between them, despite being much closer to one. Figure 7b once again shows no consequential residual vector. It is worth noting that because we actually have only three dimensions in this vector space, the normalization of Figure 7b is one-to-one and we see the residual vector at its true length. The TEL model refers to a historical evolution within a single experiment, rather than across experiments. The model parameter becomes the amount of

TABLE 5 BIT vs. TIM Comparison Data

20 200 2,000

1.64 x lo6 6.0409 x lo8 3.246 x lo8

8.2 x lo4 3.02045 x lo6 1.623 x lo5

-0.9558 5.2477 2.7179

Note: BIT = bitwise effect; TIM = time normalization. TABLE 6 BIT vs. TIM Comparison Results Test BIT vs. null (Z) TIM vs. null (2) BIT vs. regression (Z) TIM vs. regression (Z) Score
P

Xivs. BIT

2.27 5.57

.32 .062

X; vs. TIM

Note: BIT = bitwise effect; TIM = time normalization.

Models on PEAR Data

185

Z vs Constant: 0.515 :
0.0015 0.0010 Z0.0005 0

Z vs Slope: 1.887

(a)

~0.0005 -0.0010 -0.0015 0.0
1 1 1

Q)

1)

A

1

0.05 0.10 0.15 0.20 Inverse square root of (bits per second)

- 4 - 2 0 2 4 6 Intercept Z score

8 1 0

Fig. 6 . Model comparison: BIT = constant and TIM = slope.

data accumulated in the experiment to date. To test this model, we therefore must perform evaluations within single experiments. Furthermore, because the hypothesis predicts a progressive diminution in effect from series to series, a tabular presentation of the raw data would be excessively cumbersome, with as many lines as the total of series enumerated in Table 1. Therefore, Table 7 presents only analysis results: the Z score versus the null hypothesis, the Z score against the linear regression, and the x2 fit test, for both BIT and TEL models, for the six experiments summarized in Table 1.

186

York H. Dobyns

(b): Normalized Residuals

0

2

4

6

Fig. 7. BIT constant and TIM slope as vectors.

Here we have a rather disconcerting lack of unity. Some of the experimental datasets show a strong preference for TEL and some show a modest preference for BIT. The result cannot be attributed entirely to statistical power issues, as the two largest datasets show oppositely inclined preferences. The largest of these datasets, furthermore, contains a known confound. As was noted in section 2, an analysis of this sort proceeds on the implicit assumption that conditions affecting the experimental yield, which are not related to the hypothesis under consideration, are uniformly or at least randomly distributed across the subsets defined by the hypothesis parameter. This is

Models on PEAR Data
TABLE 7 BIT versus TEL Summaries
-

Experiment

Znb

znt

zrb

z,
-0.626 1.534 0.072 0.628 -0.523 -1.762

X: (df
571.6 (521) 200.5 (21 1) 57.12 (45) 39.68 (44) 45.41 (43) 18.31 (19)

X: (df )
555.9 (521) 202.8 (21 1) 54.49 (45) 40.03 (44) 41.17 (43) 19.17 (19)

Original 3.809 Remote 2.214 Alternative source 2.765 Co-operator 1.635 REG2000 2.7 18 REG20 -0.956

5.405 1.615 3.205 1.525 3.410 -0.229

3.886 -0.244 1.624 0.218 2.125 1.498

= Note: BIT = bitwise effect; TEL = teleological; Znb BIT Z score versus null; Zn,= TEL Z score versus null; Zrb= BIT Z score versus regression fit; Z, = TEL Z score versus regression fit; &df) = Goodness-of-fit x2 for BIT (degrees of freedom). &df) = Goodness-of-fit X 2 for TEL (degrees of freedom).

known not to be the case for a specific subset of the "Original REG" data with respect to the TEL parameter. As has been discussed in fuller detail elsewhere (Dobyns and Nelson, 1998), the earliest REG experiments were run with a slightly different experimental protocol, which distinguishes itself by a much larger effect size than the subsequent continuation of the experiment. Because the change in protocols is a historical one and distinguishes part of the earliest data from the remainder of the dataset, it is directly confounded with the measure of historical progression that defines the TEL hypothesis. Merely identifying the presence of a confounding factor, however, cannot by itself determine the status of the two hypotheses. The fact that the difference between the early data (designated as the X protocol) and the subsequent data was well established long before the TEL hypothesis was considered prompts an inclination to consider the support for TEL a confounding result of the difference in protocols, but this reasoning is spurious. It is equally plausible that the unexplained difference in the protocols is driven by the hitherto-unrecognized effects of teleological decline. To resolve this ambiguity, we would like ideally to compare data generated under the two protocols at the same teleological status. This unfortunately is impossible, because "teleological status" is a historical parameter that is distinct for any two series. We can do the next best thing, however, by segregating the two conditions and examining each independently for teleological effects. There is a small subtlety involved in this segregation, however. When we remove the X protocol, we are taking away the earliest series in the Original REG dataset. Did the protocol change "restart" the teleological "clock," so that the new data represented a new beginning? Or should the subsequent data be evaluated under the assumption that they are continuing a teleological decline started in the early data? We do not know enough about the constraints of the TEL model to answer this question, so we have no alternative but to ana-

York H. Dobyns
TABLE 8 BIT vs. TEL Summaries, Reduced Datasets Experiment X only Continue Restart
Znb
znt Zrb zrt

~@f)
46.19 (15) 516.9 (505) 516.9 (505)

X (df :
34.69 (15) 519.0 (505) 521.3 (505)

3.519 3.102 3.102

4.886 2.734 2.286

4.331 0.034 -0.124

-2.696 1.466 2.101

Note: BIT = bitwise effect; TEL = teleological.

Z vs Constant: 4.332 ;

Z vs Slope: -2.696

-10

-5 0 5 10 Intercept Z score

15

Fig. 8. BIT constant versus TEL slope for X protocol only.

Models on PEAR Data

189

1

lyze the "remainder" data in both ways. The results of such analysis are presented in Table 8. It is plainly evident that while the X protocol shows a preference for TEL internally, neither model is a very good fit to the data in this unique dataset. Both x2 values are quite large, withp-values of 4.96 x and 2.72 x lo-', respectively. Neither Z score is within the range that would be considered plausibly consistent with model variation. This strongly suggests that this dataset contains some structure other than that explained by either model. Conversely, the data without the X protocol no longer show any support for TEL, regardless of the status of the "restart" assumption. Figures 8 and 9 display the X protocol data in the format used for Figures 4 and 5 and Figures 6 and 7. In Figure 8a, we note what appears to be almost a bifurcation in the experimental data, with the data points below x = 0.0004 seeming at least visually to cluster into two vertically separated populations. It is inevitable that even two-parameter linear regression fails to fit this dataset well, because no single-valued functional fit can possibly reproduce such a feature. In Figure 8b, the regression fit is conspicuous for its wide departure from both families of theoretical model. Figure 9, illustrating the vector space representation, drives home the same point. In the theory plane of Figure 9a, the observation vector is well outside the acute angle subtended by the two theory vectors, differing more from either theoretical prediction than the theories differ from each other. The residual graph in Figure 9b shows us a vector lying far outside the theory plane, with an equivalent Z score well over 2. Figures 10 and 11 give the same reports for the non-X protocol data, using the "restart" assumption. We can see that the results are now a very good fit to BIT, whether we look at the parameter plot in Figure 10 or the vectors of Figure 11. We also see a completely inconsequential residual in Figure 11b. The overall conclusion remains ambiguous. Computing a weighted Z across all of the datasets reported in Table 7, with the "Original REG" further broken down as summarized in Table 8, leads to the unenlightening results presented in Table 9. As in Table 6, two results are reported, depending on whether the transition from the X protocol to the standard protocol is considered a continuation of the Original REG experiment, or a transition to a new experiment. We are left with the conclusion that we have no good grounds for preferring either BIT or TEL on the basis of the current data.

TABLE 9
Overall BIT vs. TEL

X Treatment    Z vs. BIT    Z vs. TEL
Continue
Restart

Note: BIT = bitwise effect; TEL = teleological.


Fig. 9. BIT constant and TEL slope, as vectors, X protocol data only. (Panel (b), normalized residuals: chi-squared = 27.424, 14 d.f.)

7. Conclusions

We noted above that the choice of model can affect even the primary conclusion about the existence of an anomaly. One reason for this follows from the differences in the definition of the fundamental data unit. Let us suppose that we have several datasets, each of which contains Nᵢ observations and has a Z score Zᵢ, where i = 1, ..., m and m is the number of datasets. What is the overall Z score of the aggregate database? It is provably correct that this composite score is Z = (Σᵢ Zᵢ√Nᵢ) / √(Σᵢ Nᵢ), in the sense that this reproduces the Z score that would be calculated from pooling all of the raw data.


Fig. 10. BIT versus TEL with X data removed, restart version. (Regression Z vs. constant: -0.124; Z vs. slope: 2.101; axes: inverse square root of accumulated data total and intercept Z score.)

We have noted, however, that different models of the phenomenon may disagree on the definition of the fundamental data unit and therefore may assign different values of Nᵢ to the various subsets. In general, this changes the value of the composite Z. This is the reason for the different Z scores against the null hypothesis displayed by the different models in section 6. Therefore, when competing models of the effect have been proposed, it seems advisable to select the model that is the best fit to the data before attempting an absolute test for the existence of the effect.
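A small numerical sketch makes the dependence on the Nᵢ concrete. The composite score defined above reproduces the pooled-data Z, and reassigning the Nᵢ, as different models do, changes the composite. The dataset sizes and scores below are invented solely for illustration:

    from math import sqrt

    def composite_z(z_scores, sizes):
        """Weighted combination Z = sum(Z_i * sqrt(N_i)) / sqrt(sum(N_i)),
        equivalent to the Z score obtained by pooling all of the raw data."""
        num = sum(z * sqrt(n) for z, n in zip(z_scores, sizes))
        return num / sqrt(sum(sizes))

    z_scores = [2.0, 1.0, -0.5]       # hypothetical per-dataset Z scores
    sizes_a = [1000, 4000, 2000]      # one model's assignment of data units
    sizes_b = [4000, 1000, 2000]      # another model's assignment, different weights
    print(composite_z(z_scores, sizes_a))   # about 1.24
    print(composite_z(z_scores, sizes_b))   # about 1.62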

Fig. 11. BIT constant and TEL slope as vectors, X protocol data removed. (a) Projection of Z in theory plane; (b) normalized residuals.

We have done this, above, for the four models under consideration and find in each case that either the BIT model is preferred or that no preference can be established. If the BIT model is thus taken as the best candidate model of a real effect, we note that its Z score against the null hypothesis is 5.8, with a p-value of 3.3 × 10⁻⁹. We may be confident that the database examined here shows a real effect and that the hypothesis that this effect comprises a constant bitwise probability shift serves at least as well as any alternative thus far considered.


7.1 Bayesian Calculations
The foregoing material has employed a "frequentist" approach to the interpretation of the statistical scores, through the presentation of p-values. It would be inappropriate to reprise the entire theoretical debate between frequentist and Bayesian statistics here; rather than take sides in this dispute, PEAR has found it more productive to present results in both formalisms. The Z scores calculated above can be put to work immediately in a Bayesian hypothesis-testing framework.

Where possible, we prefer to present results as an odds ratio, or odds adjustment factor, which represents the relative support that a piece of empirical data provides to two well-defined hypotheses. Because this describes the proportional change that must be made in going from prior to posterior probabilities on the hypotheses concerned, it can be computed without reference to the prior probabilities. Table 10 summarizes the odds ratios for the pairwise comparisons made above. We see that we have strong grounds for supporting BIT against DAT, and much weaker but still positive support for preferring it to TIM. The two conflicting Bayes factors, both close to 1, for the "continuation" versus "restart" interpretation of the X protocol transition reflect the lack of conclusive information discussed in the previous section.

The Bayesian evaluation against the null hypothesis becomes slightly more complicated. In the pairwise comparisons above, the two-parameter linear regression is simply compared with two families of one-parameter models; no prior information about the model parameters is required, since they are handled in a perfectly symmetric fashion. However, once one of these models (in this case, BIT) is chosen as the favored alternative for comparison with the null, the actual odds adjustment one computes will depend on the prior assumptions one makes for the actual parameter values, rather than merely their constraint equations. Because we are using a BIT model for our comparison, the variable under consideration is the probability that a binary decision chooses the option targeted by human intention. The null hypothesis predicts that this probability is exactly 0.5.

TABLE 10
Pairwise Odds Adjustments

Comparison                  Odds Adjustment Favoring BIT
BIT vs. DAT
BIT vs. TIM
BIT vs. TEL (continue)
BIT vs. TEL (restart)

Note: BIT = bitwise effect; DAT = Decision Augmentation Theory; TIM = time normalization; TEL = teleological.


The broadest alternative simply supposes that human intention enters as one more of the relevant physical processes and that the probability might take on any value between 0 and 1. In other words, the effect of intention is a probability shift Δp in the range -½ < Δp < ½. This is in many ways a maximally conservative evaluation. Nonetheless, when this is used as the alternative hypothesis, one finds an odds adjustment factor of 1658 in favor of BIT.

More informed priors produce better odds in favor of the effect. For example, one can almost casually constrain the effect of human intention on random events by observing that casinos make steady profits despite the presumptive (and often quite obvious) intention and desire of gamblers to win. We may therefore presume that whatever the typical scale of intentional effects on binary probabilities, it is smaller than the half-percent house advantage, which seems to be the smallest value for a commonly played casino game. (Blackjack, played by optimal strategy but without card counting, has this house advantage level.) Taken as the bitwise alternative, this means a uniform prior probability density over -0.005 < Δp < 0.005, which produces an odds adjustment of 1.66 × 10⁵ in favor of BIT.

Finally, and most apropos, a previous meta-analysis of microelectronic PK experiments (Radin and Nelson, 1989) surveyed a substantial body of prior work in the field. Unfortunately, this survey includes some of the data in the current analysis; however, when the overlapping data are removed from the previous observation, the remainder provides a prior probability for the bitwise alternative that leads to an odds adjustment of 5.5 × 10⁶ in favor of the effect.
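The uniform-prior odds adjustments described above can be approximated directly from the composite Z score, using the normal approximation Z ~ N(2·Δp·√N, 1) for a database of N bits. The sketch below is an order-of-magnitude check only: the total bit count used here (about 2.3 × 10⁸) is a hypothetical figure chosen for illustration, not a value restated in this paper.

    from math import erf, exp, pi, sqrt

    def Phi(x):
        """Standard normal cumulative distribution function."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def phi(x):
        """Standard normal probability density function."""
        return exp(-0.5 * x * x) / sqrt(2.0 * pi)

    def odds_adjustment(z, n_bits, half_width):
        """Bayes factor for H1: dp uniform on (-half_width, +half_width)
        against H0: dp = 0, under the approximation Z ~ N(2*dp*sqrt(n_bits), 1)."""
        s = 2.0 * half_width * sqrt(n_bits)
        marginal = (Phi(z + s) - Phi(z - s)) / (2.0 * s)  # average likelihood under H1
        return marginal / phi(z)

    Z = 5.8              # composite Z score against the null (from the text)
    N_BITS = 2.3e8       # hypothetical total bit count, illustration only
    print(odds_adjustment(Z, N_BITS, 0.5))     # maximally conservative prior
    print(odds_adjustment(Z, N_BITS, 0.005))   # half-percent "casino" prior

With these assumptions the two results land near 1.7 × 10³ and 1.7 × 10⁵, and their ratio of 100 matches the ratio of the published figures, since the two priors differ only in width.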
Acknowledgments

The author gratefully acknowledges the advice and assistance of Werner Ehm of the Institut für Grenzgebiete der Psychologie und Psychohygiene (IGPP) in developing the analytical formalisms of section 5. Please direct reprint requests to York H. Dobyns, C-131 E-Quad, Princeton University, Princeton, NJ 08544-5263; e-mail ydobyns@princeton.edu. The PEAR laboratory is funded by donations from the IGPP, the Lifebridge Foundation, the Ohrstrom Foundation, and Laurance Rockefeller.

References

Dobyns, Y. H., & Nelson, R. D. (1998). Evidence against decision augmentation theory. Journal of Scientific Exploration, 12, 231-258.
Jahn, R. G., Dunne, B. J., Nelson, R. D., Dobyns, Y. H., & Bradish, G. J. (1997). Correlations of random binary sequences with pre-stated operator intention: A review of a 12-year program. Journal of Scientific Exploration, 11, 345-367.
May, E. C., Utts, J. M., & Spottiswoode, S. J. P. (1995). Decision augmentation theory: Applications to the random number generator database. Journal of Scientific Exploration, 9, 453-488.
Nelson, R. D. (1994). Effect size per hour: A natural unit for interpreting anomalies experiments. Technical Note PEAR 94003.
Radin, D. I., & Nelson, R. D. (1989). Evidence for consciousness-related anomalies in random physical systems. Foundations of Physics, 19, 1499-1514.

Journal of Scientific Exploration, Vol. 14, No. 2, pp. 195-216, 2000

0892-3310/00 © 2000 Society for Scientific Exploration

The Ordering of Random Events by Emotional Expression


Richard A. Blasband

Center for Functional Research
2392 Mar East Street
Tiburon, CA 94920
e-mail: RABlasband@aol.com

Abstract-The purpose of this experiment was to see whether any correlation could be found between the expression of emotions by patients in Reichian biopsychiatric therapy and the output of an electronic random event generator (REG). Videotaping and coding of therapy sessions were conducted in synchronization with the operation of an REG. Comparisons were made of REG output while patients spoke with neutral emotion and during the spontaneous expression of crying with anxiety, frustration, or sadness, and of anger. Statistical analysis revealed anomalous REG outputs during periods of emotional expression compared to periods of neutral talking. Periods of the expression of anger and periods of crying with sadness and/or anxiety were significantly correlated with marked elevations and depressions of REG output, respectively. Based on the concepts developed by Jahn and Dunne, as well as Reich, it is hypothesized that the observed effects are due to the establishment of resonance between the therapist/investigator, the emotionally expanding and/or contracting patient, and the REG.

Keywords: REG-emotion-resonance

Introduction
In their experiments on human-machine interactions, Robert Jahn and Brenda Dunne (Jahn and Dunne, 1987; Jahn et al., 1997) demonstrate that the distribution of impulses generated by a random event generator (REG) can be anomalously, marginally shifted from normal either locally or at a distance by active mental intention. They also found that many operators appeared to impose a signature on the REG output, i.e., the pattern of output when a particular operator was operating displayed similar directional trends on most runs on the electronic REG and other experimental REG devices. The effect appeared to be equally strong even if the operator was thousands of miles away from the device. The experimenters also found that the REG output during baseline (null-intention) runs was anomalous in that the obtained variance of the REG output was significantly narrower than the theoretical expectation (Jahn and Dunne, 1987). Later studies indicated that the baseline anomalies were strongly correlated with operator gender (Dunne, 1998). These findings indicate that something other than conscious intention can apparently affect the REG and do so at a distance.


In their model, Jahn and Dunne define consciousness as all that one identifies as oneself: thought, emotions, physical substance, etc. They account for their anomalous finding by hypothesizing a nonelectromagnetic field, through which a state of resonance is established between the operator and the machine. Given the above findings, and assuming that Jahn and Dunne were correct in including emotions in their definition of consciousness, we reasoned that if other-than-conscious intentions can order the output of an REG, then it was possible that spontaneous, undirected, emotional expression might, in some way, do the same. In the form of psychiatric therapy employed by the author (Reichian biopsychiatry), intense emotional expression by the patient occurs fairly regularly. It was therefore hypothesized that an REG set up in the therapy office 10 feet distant from a patient would be anomalously affected during those moments when the patient would express emotions compared to those times when patients would be emotionally neutral. The present study is an initial survey designed to see whether correlations can, indeed, be found between REG output and emotional expression.

I

Literature Review
Other than the quantum mechanical theoretical formulations of Jahn and Dunne (1987), there was no reason, based upon the findings and theories of classical biophysics, to expect any influence on the output of an REG by a spontaneously emoting individual many feet distant from the device. Although it was true, as stated above, that Jahn and Dunne had amply demonstrated that the output of the REG could be correlated with the conscious intention of operators, there was nothing in the literature at the time of beginning the experiments described in this paper that suggested effects of either local or nonlocal emotional output on such a device. Since then, the only relevant findings involve the effects of group expression on the REG (Nelson et al., 1996, 1998; Radin, Rebman, and Cross, 1996; Rowe, 1998; Schwartz et al., 1997).

Further, a review of the mainstream literature on the biophysical basis of emotions shows that although one can demonstrate myriad physiological parameters that correlate with feelings and expressed emotions, all of the measurements involved are made either within or on the surface of the body (Panksepp, 1998). Indeed, except for experiments performed by Wilhelm Reich and Harold Burr, I could find no research involving biophysical and physiological parameters of biological or psychobiological functions where measurements were made nonlocally.

Burr observed that an electrical field measured a small distance away from the surface of an unfertilized, biologically undifferentiated salamander egg appeared to have a determining effect on the establishment of the pattern of the future axis of the central nervous system (Burr, 1972). On the basis of his experimental observations, Burr, with F. S. C. Northrop, formulated an electrodynamic theory of life, maintaining that an electrical field was a primary property of protoplasm, sustaining pattern in the organism in the midst of physiochemical flux (Burr and Northrop, 1935).



The work of the contemporary biophysicist Mae-Wan Ho at the Open University, London, complements many of Burr's findings. Her analysis of her own and others' experimental work supports the hypothesis that the "organizing embryonic field is global in character, right down to individual macromolecules, and that its major axis is electrodynamic in character" (Ho and Saunders, 1994).

The radiation effects of electromagnetic (EM) fields generated by the acceleration of electrically charged inorganic, organic, and macromolecular ions within the body have been investigated experimentally and theoretically. The waves resulting from the movement of these ions through the body are variously emitted, decay, and are refracted and reflected at organ interfaces, producing interference patterns, which combine according to quantum theory's superposition principle. According to C. Zhang and F. A. Popp, these interference patterns create stable standing waves, which may have a great deal to do with holographic effects seen in the global functioning of the organism and its nonlocal treatment by such modalities as acupuncture (Rubik, 1995). But even in these studies involving EM fields, all measurements were made on or beneath the surface of the body.

Thus, we see that there is at least some basis for the existence of a biofield effect based upon electromagnetism. However, although it is possible that emotional expression could affect an REG by perturbing an EM field emanating from the body, or could be some component of the elements that generate such a field, the findings and theory of electromagnetism provide no basis for expecting that such a field would extend much further than a few inches from the body. Further, and most important, as Jahn and Dunne note, the experimental phenomena documented in their work with the REG and remote viewing, especially those involving anomalies of time, cannot be explained by electromagnetic theory (Jahn and Dunne, 1987).

A non-EM biophysical basis for the emotions was elucidated by Wilhelm Reich, the Austrian psychoanalyst, who devoted the latter part of his life to the investigation of the nature of life and life energy. In his clinical psychoanalytic and later vegetotherapeutic and orgone-therapeutic work, Reich proposed that emotions were a function of the patient either bioenergetically expanding toward or contracting away from the outer world (Reich, 1949). This amoeboid-like behavior confirmed an earlier postulate of Freud that never took serious root in later psychoanalytic theory. When Reich moved from the classical psychoanalytic technique of free association toward the more confrontive kinds of intervention of the method he called character analysis, he noted that patients more readily gave in to the expression of their previously blocked emotions. When this happened, he observed spontaneous pulsatile movements (clonisms) of the patient's body, which at times included the entire torso. These movements were greatly amplified when Reich added massage directed at physical release of the patient's chronic muscular tension (muscular armoring) to his


therapeutic armamentarium. Reich found that as the characterological and muscular armoring softened in the course of therapy, patients reported feeling electrical currents and sensations of something streaming through their bodies. This was usually associated with an increase in general vagotonic tone, flushing of the skin, brightening of the eyes, contraction of the pupils, slowing of the heart, and an increase in pleasurable sensations at the skin. The opposite of this state of bioenergetic expansion was one of bioenergetic contraction, usually brought on by fear or anxiety and characterized by a general autonomic sympathetic tone with pallor of the skin, narrowing of the eyes, dilatation of the pupils, acceleration of heart rate, and sensations of inner tension (Reich, 1942).

In order to objectify these observations, Reich measured bioelectric charge on the skin surface of subjects in a variety of emotional states. He found that the subjective perception of anxiety or sadness was directly correlated ("functionally identical," to use Reich's term) with a contractive movement of bioelectricity away from the skin surface toward the bioenergetic core of the organism, the autonomic neural plexes deep in the abdomen and pelvis. Anger, pleasure, and longing were correlated with an expansive movement of bioelectricity from the core out to the skin surface (Reich, 1937). These directions of movement were understood by Reich to be mediated by opposing domination of the two branches of the autonomic nervous system, the parasympathetic in bioenergetic expansion and the sympathetic in bioenergetic contraction. A more recent attempt to replicate Reich's study using modern equipment confirmed Reich's findings in many respects (Braid and Dew, 1988).

Reich found that a bioelectrical interpretation was not, however, sufficient to explain adequately all the phenomena observed in his bioelectrical studies. He then undertook a series of experiments on the sources of energy sustaining life, which ultimately suggested a nonelectromagnetic basis for living processes. In the course of his research, Reich reported experiments in which he postulated a field of bioenergy, orgone energy, surrounding and interpenetrating all living things. Reich's principal device for detecting this field, the orgone energy field meter (Reich, 1948, p. 125), could apparently detect the energy field of a lively human at distances up to 6 feet.¹ The effects of the spontaneous expression of emotions on the meter were not studied, as far as I know, although Reich did report that subjects who were more vegetatively alive (capable of the expression of intense emotions) could more readily affect the meter, compared to those who were vegetatively dead, such as a catatonic schizophrenic or a heavily armored obsessive-compulsive neurotic.

¹This device consisted of movable facing metal plates, one of which was connected to the different pole of the secondary coil of an induction apparatus. A 40-watt bulb connected between the plates glows when the primary current is at a certain intensity. The proximity of something living to the upper plate affects the intensity of the glow of the bulb. The more alive the object, the more intense the glow.


Confirmation of Reich's psychiatric, biophysical, and physical findings has been reported in those few contemporary journals devoted to his work (The Journal of Orgonomy, Annals of The Institute for Orgonomic Science) but has never been seriously challenged by reports in the mainstream literature.
Methodology

Reichian biopsychiatry is a so-called depth therapy, whose aim is to free the patient from his or her characterological and muscular armoring, or blocks, thus permitting the free flow of life energy through the organism (Reich, 1949). In the process, emotions are spontaneously released. The depth of the emotional release is a function of many things, including the layer of the personality or character being addressed at that moment in the therapy, the rigidity of the character structure of the patient (whether hysterical or compulsive, etc.), and the energetic charge of the patient in general and on that particular day. Patients usually engage in therapy for several to many years, motivated by the continuing improvement they experience in their sense of internal freedom and well-being, their increased capacity to feel pleasure, and a growing sense of inner strength, personal independence, and the capacity to accept greater responsibility for their lives.

The technique of therapy involves attention to and interventions in the process of verbal and nonverbal interchange between the therapist and the patient, plus detailed and consistent attention to the patient's characterological and muscular armoring. Characterological armoring is treated by the therapist's informing the patient, through either verbal interventions or mimicry, of the artificial ways in which the patient appears and behaves. Muscular armoring is the functional somatic counterpart of characterological armoring. Chronic spastic tensions in the striated and smooth musculature are released by deep massage combined with encouraging the patient to express any emotion bound by the armoring.

The first part of each session usually involves talking by the patient, as he/she describes whatever is on his/her mind, and verbal responses and interventions by the therapist. This is often, but not always, followed by having the patient, prone on the therapy couch, deeply sigh in order to build up a bioenergetic charge. This may, by itself, without any further interventions, be enough to trigger the overt expression of blocked feelings. When the therapist sees that there is little or no energetic movement, he may intervene by describing to the patient a characterologic attitude or state of bodily constraint, or by direct systematic work on the musculature to release armoring.

In therapy sessions during the experimental REG periods, the therapist intentionally took a more passive role than usual in order, as much as possible, to avoid adding an unnecessary variable to the experiment. This meant fewer verbal interventions relating to the patients' character and much less work than usual on the musculature. Patients participating as subjects in the experiment were rarely physically touched, and, in the few situations where it was


deemed necessary in order to advance therapy, not more than once during the session.

Study 1
The experiment was conducted in the therapist's (R.A.B.'s) office, a renovated trailer located 30 feet away from his house, in a semirural setting in Northern California, at least 1/8 mile away from the nearest neighbor. Patients were videotaped during each session, while a computer time-synched to the camcorder collected REG data. Patients were informed only that the therapist wished to conduct an experiment with a random event generator and that part of the experiment involved videotaping of their sessions. All patients gave their permission to proceed and accepted the experimental conditions with no observable inhibition throughout the course of the experiment.

Twelve patients, ranging in age from 25 to 60 years, were initially selected from the therapist's full caseload because they had been coming to treatment regularly, most of them weekly, for at least 1 year prior to the study, and their superficial resistances, including distrust of the therapist and the therapeutic process, had been well resolved. Of the 12 patients, three were men. Since it had been the author's clinical experience that emotional expression is much more difficult for men than for women, and in view of the fact that only three men were available for the study, it seemed best to limit the current study to women to optimize the possibilities of seeing some kind of correlation of emotional expression with the REG. It was anticipated that, when the therapist's caseload included more men who could qualify for the experiment, the study would be repeated using only men.² Owing to a technical problem, the data from one of the two therapy sessions of a female patient were not recorded, so she was dropped from the study. Of the remaining eight patient/subjects, characterological diagnoses included hysterical, phallic, and nonpsychotic catatonic schizophrenic character types, using Reich's (1949) and Baker's (1967) character typology.

Videotaping was done with a Sony 8-mm camcorder unobtrusively placed in the office. The camcorder was set to record the time and date of the beginning of the session and to continuously record elapsed time. A portable random event generator, similar to those used in field REG experiments (Nelson et al., 1996, 1998), was provided by the Princeton Engineering Anomalies Research (PEAR) laboratory, along with software to provide continuous REG recording to hard disk with built-in statistical and graphing capabilities. According to PEAR, in this device "the random event sequence is based on a low-level microelectronic white noise source which is amplified, limited, and ultimately compared with a precisely adjusted DC reference level.

²Dunne (1998) later reported gender differences in studies involving conscious intention on the REG.


At any instant of time the probability of the analog signal equaling or exceeding the reference threshold is precisely 0.5. This white noise signal is sampled 1,000 times per second, and the output of a comparator stage is clocked into a digital flip-flop, yielding a stream of binary events, 1 or 0, each with probability 0.5. This unpredictable, continuous sequence of bits is then compared with an alternating template using a logical XOR in hardware, and the matches are counted, thus precluding first-order bias of the mean due to short- or long-term drift in any analog component values by inverting every second bit. The resulting sequence is then accumulated as bytes that are transmitted to a serial port of the computer, where they are read and converted to REG data by dedicated software" (Nelson, 1996). Built-in fail-safe and calibration components guarantee the device's integrity against technical malfunctions and environmental disturbances. Whereas the original PEAR experiments with conscious intention used a tripolar protocol, the field REG version used in this experiment had a single null-intention protocol.

Data were fed into the computer by the REG in continuous 13-minute, 1,000-trial segments. Except for an indication that the computer was recording data, the screen was blank. Thus, investigator and patient were blind to any results emerging during the session. The device, plus an attached Zeos laptop computer, was located 10 feet away from the patients, out of their line of sight. The camcorder and REG were started within 5 seconds of each other and within 15 seconds of the beginning of the therapeutic session. This permitted two or three REG segments to be recorded over the course of a 30- or 45-minute session. Calibration runs were made at intervals during the several months of the experiment when the office was unoccupied during the day and, at times, through the night.

The task of the data analysis was to determine whether any correlation existed between the patient's overt emotional expression and the REG output. For these purposes we considered an emotionally neutral period of talking as one where there was no obvious elation, anger, anxiety, or depression being expressed by the patient as she spoke with the therapist at the beginning of each session. Segments registered as containing emotional expression were those where emotion was actually expressed by either spontaneous overt crying, screaming with fear, reaching out and/or sobbing with longing, or yelling and/or hitting in anger, all of which were seen during the course of the experiment. That is, we distinguished patients' subjective perceptions of emotional states from their overt expression of the state. Patients might feel as if they would like to or were going to cry, for example, but this was distinguished from the actual expression of crying with overt sobbing. The former was not considered overt expression, but the latter was. Where the patient was not overtly expressing an emotion such as crying or anger while talking, but the therapist could clearly sense a strong undercurrent of sadness or anger, the segment was excluded from the data analysis. The therapist was highly experienced at making such estimations, having done so for over 25 years of clinical work. Periods of remarkable pleasure or joy were not seen in any patient/subject during the experimental periods, although most patients reported considerable relief following the full expression of a blocked emotion.


It is usual in this form of therapy for a single emotion to dominate long periods of the session, and this held true for our experimental sessions: In nearly all therapy sessions the time period corresponding to a given 13-minute REG segment was clearly dominated by a single form of expression. This made it fairly easy to label the emotional qualities of most segments. For example, during an initial 5 to 10 minutes of discussion, a patient might evidence neutral affect, then, when lying down and gently sighing, might spontaneously begin to cry. This is not unusual for women in this kind of therapy. Usually sobbing (indeed, any emotion expressed in therapy) is expressed in pulses, that is, several periods of two to three minutes of sobbing separated by one or two minutes of simply sighing or verbalization without overt sobbing. Being constrained by the software to compute in 13-minute segments, we labeled such a segment as one of crying despite the fact that the patient did not cry during every minute of the segment.

The software available at the time of Study 1 provided only cumulative results of output for each segment; therefore, we could not precisely extract the REG output that correlated with each emotional period during segments where mixed emotions were expressed. In those segments where mixed emotions, such as anger and sorrow, were sequentially expressed, we counted the minutes associated with each emotion and labeled the segment according to the dominant emotion. Such segments were rare during the experimental periods. To minimize the possibility of feedback-driven effects on the REG by the investigator's intentionality, the data from all subjects were analyzed only at the completion of the experiment. The evaluation and labeling of the kinds of emotions expressed during each segment were made by the author.

The procedure for matching independent and dependent variables was as follows: The times of the beginning and end of each REG segment were noted, then the dominant emotional expression corresponding to this interval was paired with the REG output during that segment. By observation, we established in Study 1 that we were dealing with five main, easily differentiated categories of behavioral expression: emotionally neutral talking; talking with emotion; sighing without emotional expression; overt crying with fear, anxiety, frustration, or sadness, characterized by sobbing with tears; and anger, characterized by yelling and (often) hitting the couch and/or kicking. One patient also expressed longing through part of a segment of a session: This was characterized by her crying while reaching out with the arms and verbalizing "wanting mother."

We limited our quantitative analysis to a comparison of REG outputs for the segments of emotional expression that were most prevalent and clear cut: crying with sadness, fear, or anxiety (all labeled below as anxcry) and anger. REG outputs during segments of these emotional expressions were compared to outputs during neutral talking at the beginning of the sessions, and to each other, using one-way analysis of variance (ANOVA).
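To make the trial structure described above concrete, the following sketch simulates the scoring scheme in software: each trial compares 200 raw bits against an alternating 0/1 template and counts the matches, and 1,000 such trials make up one 13-minute segment. This is only an idealized model of the hardware described by PEAR, with Python's pseudorandom generator standing in for the electronic noise source:

    import random
    from statistics import mean, stdev

    def reg_trial(n_samples=200):
        """One trial: count matches between n_samples random bits and an
        alternating 0/1 template (the XOR-based debiasing step described above)."""
        return sum(1 for i in range(n_samples) if random.getrandbits(1) == i % 2)

    def reg_segment(n_trials=1000):
        """One 13-minute segment of 1,000 trials."""
        return [reg_trial() for _ in range(n_trials)]

    segment = reg_segment()
    print(mean(segment), stdev(segment))   # theoretical expectation: about 100 and 7.07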

Results


A. Calibration. The calibration sample totaled 979 13-minute segments over the test period. The mean number of counts per 200-sample trial over all the runs was 99.99, with a standard deviation (SD) of 7.06, indicating that the REG-computer setup was operating within the range expected theoretically (mean = 100, SD = 7.07) at a probability against chance of .40, consistent with previous calibrations at PEAR. A graph of 10,000 calibration trials (10 segments) is shown in Figure 1. The parabolic line above and below the baseline indicates displacement from the mean at the one-tailed 5% level of probability. One can see here that the REG takes a "random walk" about the baseline. Note that the graph crosses the parabolic boundary early on but quickly returns to and remains within the expected boundaries as the trials proceed.

B. Experimental. The eight patient/subjects selected for the study had a total of 39 therapy sessions during the experimental period in the spring of 1993. The number of recorded therapy sessions per subject varied considerably, ranging from one to eight. The 39 sessions yielded 76 13-minute REG segments. Of these, 33 were during periods of neutral talking, that is, talking without obvious emotion or emotional expression (neutalk on charts), 13 were during periods of anger, and 30 were during periods of crying. The following details from a few therapy sessions will give some idea of the

Fig. 1. Calibration. (Cumulative deviation over 10,000 trials; horizontal axis: number of trials.)
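The parabolic envelope shown in Figure 1 can be computed directly. Assuming it is the usual one-tailed 5% bound for a cumulative deviation of trial counts, its half-width after n trials is 1.645 times the theoretical trial standard deviation times the square root of n. A minimal sketch:

    from math import sqrt

    SIGMA = sqrt(200 * 0.5 * 0.5)   # theoretical SD of a 200-sample trial, about 7.07
    Z_ONE_TAILED_05 = 1.645         # one-tailed 5% critical value of the normal

    def envelope_half_width(n_trials):
        """Half-width of the parabolic 5% envelope for the cumulative deviation
        of trial counts from the theoretical mean of 100, after n_trials trials."""
        return Z_ONE_TAILED_05 * SIGMA * sqrt(n_trials)

    for n in (1000, 5000, 10000):
        print(n, round(envelope_half_width(n), 1))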


kinds of behavior observed in therapy, the REG results, and patterns of movement in the REG output. We will begin with data from two subjects who expressed a single emotion throughout most of their sessions.

Subject 6, a woman in her thirties, had a relatively unarmored character structure and was fluid and open in her expression of emotion. During her six experimental sessions she expressed essentially two kinds of behavior, unemotional (neutral) talking, and, after a few minutes of sighing, fear with screaming and deep sobbing. Figure 2, a cumulative graph of all of her sessions, shows an accumulating downward shift in the trial counts emitted by the REG, breaking through the border of the 5% probability parabola in several places, and terminating outside the envelope.

In contrast, Subject 8, who was also relatively unarmored, spent most of her experimental sessions spontaneously raging, with loud yelling and hitting the couch. Figure 3 shows the REG output for her six test sessions. The raging of sessions 1 to 5 is associated with a cumulative rise in REG output until the sixth session, when her anger gave way to crying. At this point the REG took a sudden drop.

The data in Figure 4 are from a subject (11) who expressed mixed emotions during the sessions. The cumulative graph is for a single segment of 1,000 trials. It is important to note that in the graphical representation, we are most

Fig. 2. REG output during anxious crying. (Subject 6, cumulative over 13,000 trials; horizontal axis: number of trials.)

Fig. 3. REG output during anger. (Subject 8, cumulative over 6,000 trials; horizontal axis: number of trials.)

concerned with sharp shifts in direction of movement, either up or down, rather than where the shift is taking place in relation to baseline. Thus a sudden shift upward, even though it begins well below the baseline, represents a greater than average number of correlation counts generated by the REG. This subject showed great lability in emotional expression throughout all her sessions. Talking or the slightest amount of sighing would often spontaneously lead to the expression of intense crying alternating with anger. The REG output correlated well with her emotional lability, with strong trends both above and below baseline at different times, although the final cumulative distribution of counts is well within chance.

Figure 4 shows the first of three 13-minute segments from her session of April 25, 1993. She spent the first 8 minutes of the session talking about recent events without much emotion, then, when instructed to sigh, spontaneously trembled, cried, and felt fear. When talking, the REG output was nominal, wandering above and below baseline in a typical random walk. When she began to cry, however (at about trial 666), the REG output rapidly dropped.
All Subjects

The basic unit of analysis for quantitative assessment in this study is the 13-minute segment, which consists of 1,000 trials, where each trial is scored by the number of counts observed in 200 samples of the 50-50 random process.


Fig. 4. REG output during anxious crying. (Single 1,000-trial segment; horizontal axis: number of trials.)

The trial counts are averaged for the 1,000 trials in the segment. For convenience, these segment means are then converted to z scores using the formula z = (MN - 100)/SE, where MN is the count mean for the 1,000 trials in the segment and SE is the standard error of the mean, computed as the standard deviation of the trial scores (SD) divided by the square root of 1,000. These z scores were used for all the analyses.

Table 1 shows the average REG scores (z scores) by subject and emotional state. The compounded data revealed in the totals show that the REG output during the expression of emotion is significantly different from that when patients are talking with neutral affect. Also, there is a significant upward shift in REG output when anger is expressed and a significant shift downward when the patient is crying. As can be seen in the table, this pattern is consistent across all eight subjects.
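This conversion is simple enough to transcribe directly into code; the sketch below takes a list of trial counts for one segment (of any length, so it applies equally to the variable-length segments of Study 2) and returns the segment z score:

    from statistics import mean, stdev

    def segment_z(trial_counts, expected_mean=100.0):
        """z = (MN - 100) / SE, where MN is the segment's mean trial count and
        SE is the trial standard deviation divided by sqrt(number of trials)."""
        n = len(trial_counts)
        se = stdev(trial_counts) / n ** 0.5
        return (mean(trial_counts) - expected_mean) / se

Applied to a simulated calibration segment such as the one sketched in the Study 1 methodology, this value should hover near zero.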

Study 2
The experimental setup was similar to that used in Study 1 except for a change in locale and the use of upgraded software from PEAR. The change in locale consisted of a move of the same trailer-office to a new location, a few miles away. The office was located approximately 50 feet from my home, and the closest other dwelling was at least one-quarter mile away.

TABLE 1
ANOVA of REG Output vs. Emotion: Individual Subjects and Totals

Subject    Neutalk (M, n)    Anxcrying (M, n)    Anger (M, n)    F-Test
1
4
5
6
7
8
10
11
13
Total

Note: The n refers to the number of segments. M refers to the mean z score for the various conditions for each subject. Each epoch is 1,000 trials, and each trial score is the number of hits in 200 attempts of a 50-50 random process. In each row, means that share a superscript do not differ significantly (by Fisher's PLSD multiple comparison test, 0.05 level).

The upgraded software permitted continuous generation of REG output, unconstrained by the earlier 1,000-trial limit, the use of computer function keys to mark events in real time, and much more precise statistical analysis of data segments by trials, timed to seconds, or as marked by function keys. Videotaping, therefore, was replaced by taking timed notes of events and pressing appropriate function keys to mark the times of the beginning and end of periods of talking, sighing, emoting, etc.

REG data were obtained during the treatment of nine female patients, two of whom had been subjects in Study 1. The number of therapy sessions per patient ranged from one to 12. The same criteria for the selection of subjects used in Study 1 were used in Study 2. In the new office, the REG was within arm's reach of the therapist, on the side away from the patient. The therapist could press the computer function keys and read computer time, but the REG readout was not displayed.
Procedure

At the moment the patient entered the office, the REG was started. As the session progressed, appropriate function keys were pressed to mark the beginning and end of events of overt emotional expression. These events and the time as displayed on the computer were simultaneously noted by the therapist on a pad and by pressing a function key. The end of the session was marked by turning off the REG.

I


The REG data were examined only at the end of the 8-month test period, which began in January 1995. To analyze the data, notable events were first listed for each patient, then the REG record was examined for that event time period. Calibration runs were conducted during periods throughout the experiment when the office was unoccupied. A one-way ANOVA was used to analyze the data.
Results

The mean of 299,514 calibration trials was 100.012, with an SD of 7.072, well within the parameters for calibration determined by the PEAR laboratory. The 86,789 trials produced during all patient sessions yielded 70 notable events fitting the description given in Study 1, above. Thirty events were of talking with neutral affect; 30 were of the expression of anxious, frustrated, or depressed crying; eight were of the expression of anger; and two were of the expression of longing. As in Study 1, the basic unit of analysis is the segment of REG output associated in time with each notable event. In Study 2, however, the segments consist of a varying number of trials. For each segment, a z score is computed in the same way as in Study 1, and these z scores are used for the analysis. The z scores for the segments associated with the 70 notable events are listed in Table 2.

An analysis of the data in Table 2 is seen in Table 3. It shows a summary of an overall comparison of REG output during conditions of neutral talking, anxious crying, anger, and longing using ANOVA. A significant overall difference is seen among the conditions [F(3, 66) = 3.956, p = .012]. This was due to the anxious crying condition being significantly lower than all of the other conditions in pairwise multiple comparison analysis [by Fisher's protected least significant difference (PLSD) statistic]. We can see from Table 3 that in Study 2 there was no significant difference in REG output between periods of neutral talking and the expression of anger, but there was a significant difference between periods of neutral talking and crying and between anger and anxious crying. As in Study 1, the direction of REG output is upward with the expression of anger and downward with the expression of anxious crying. The compounded data reveal highly significant differences between all variables. Data associated with longing were not included in this analysis, since that condition was not recorded in Study 1. Longing will, however, be discussed below.

Table 4 shows the results of a two-factor (2 × 3) ANOVA for the combined results of Studies 1 and 2, using study as the first factor (1 vs. 2) and condition as the second factor (neutral talking, anxious crying, anger). This analysis shows that the two studies did not differ significantly [F(1, 138) = 0.032, p = .8575]. Nor is there a significant study-by-condition interaction [F(2, 138) = 0.814, p = .4453]. It is therefore valid to pool the two studies for a combined analysis. Table 5 shows a comparison of the REG output for the different states of emotional expression using the pooled data from Studies 1 and 2.
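An analysis of this kind can be reproduced on any set of segment z scores in a few lines. The sketch below uses SciPy's one-way ANOVA; the z-score lists are invented placeholders, not the study data:

    from scipy.stats import f_oneway

    # Placeholder per-segment z scores for three conditions (illustration only).
    neutral = [0.2, -0.1, 0.5, 0.0, 0.3, -0.2]
    anxcry = [-0.8, -0.4, -1.2, -0.1, -0.6, -0.9]
    anger = [0.9, 0.4, 1.1, 0.6]

    f_stat, p_value = f_oneway(neutral, anxcry, anger)
    print(f_stat, p_value)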

TABLE 2 Data for 70 Events, Study 2
N


Mean

Neutral talking
1061 263 1399 261 929 1430 844 1127 157 362 686 416 210 255 158 155 548 1428 391 625 62 25 2 795 619 853 185 661 507 843 300

Anxious crying
2234 2263 422 1658 1288 894 1643 529 1619 1681 929 1352 1715 1089 1732 1482 1007 3154 328 1258 2118 1632

TABLE 2 (continued) Data for 70 Events, Study 2 N Mean SD

Z
-1.660 -0.121 -1.698 0.180 0.865 -1.197 -0.132 -0.935

1962 111 513 222 865 1595 1184 2283
Anger

99.735 99.919 99.470 100.086 99.869 99.788 99.973 99.862 100.188 100.136 99.848 100.476 100.233 100.163 100.488 100.381 100.620 100.1 10

6.988 7.675 7.064 7.492 7.186 7.290 7.082 7.046 7.418 6.979 6.761 6.687 6.536 7.738 7.144 6.832 7.007 7.214

622 863 683 607 45 1 338 162 452
Longing

0.663 0.563 -0.563 1.659 0.699 0.423 0.878 1.144 1.886 0.357

463 527

This analysis shows that when all of the data are considered there are significant differences between all the studied states of emotional expression and that anxious crying is associated with a downward shift of REG output and anger is associated with an upward shift in REG output. This may be illustrated by the graph of cumulative REG output shown in Figure 5.

Discussion
Although this study was designed simply to explore the possible relationship between emotional expression and REG output and to develop working hypotheses for future experimentation, the results from both studies appear remarkably robust. The data indicate that overt emotional expression on the part

TABLE 3
Study 2: Comparison of Four Conditions, All Data

Group      Count    Mean        SD       SE
Neutral     30       0.137a     0.810    0.148
Anxcry      30      -0.437b     1.276    0.233
Anger        8       0.696a     0.671    0.237
Longing      2       1.127a     1.099    0.777

Note: The count is the number of separate segments for each condition of emotional expression. Means with different superscripts differ significantly at the 5% level. Means that share a superscript do not differ significantly (by Fisher's PLSD multiple comparison test, 0.05 level).

TABLE 4
ANOVA, Combined Results of Studies 1 and 2

Source           DF    Sum Sq.    Mean Sq.    F test    P
Study (A)         1      0.024      0.024      0.032    0.8575
Condition (B)     2     32.626     16.313     21.750    0.0001
A × B             2      1.221      0.610      0.814    0.4453
Error           138    103.506      0.750

The data indicate that overt emotional expression on the part of female patients correlates significantly with shifts in the distribution of trial scores generated by an electronic random event generator. Furthermore, we find a highly significant correlation between the direction of the shift and the kind of emotion expressed: Periods where the patient was crying with fear, anxiety, frustration, or sadness were associated with a marked downward shift in REG output, whereas periods where the patient was expressing anger were associated with a marked upward shift.

In attempting to understand the basis for this phenomenon, we are faced with several serious problems, the first of which is that there is no satisfactory understanding, in traditional mechanistic biophysical terms, of what an emotion is, any more than there is an understanding of what the mind is.


Fig. 5. Combined cumulative deviation of REG output. (Labeled trace: "Anxious Crying: Z-score 4.47"; horizontal axis: thousands of trials.)

TABLE 5
Comparison of Conditions, Studies 1 and 2 Pooled

Group              Count    Mean        SD       SE
Neutral talking     63       0.218a     0.722    0.091
Anxious crying      60      -0.547b     1.039    0.134
Anger               21       0.793c     0.658    0.144

Note: The count is the number of separate segments included. Means with different superscripts differ significantly at the 5% level. Means that share a superscript do not differ significantly (by Fisher's PLSD multiple comparison test, 0.05 level).

Second, even if we understood the biophysical basis of emotions in traditionally accepted terms, i.e., as originating and functioning strictly within the physical body, there is no known basis for understanding how emotions could influence a random event generator either locally or at a distance.

A tentative, but incomplete, functional explanation is suggested, however, by combining Jahn and Dunne's concept of resonance in man-machine interactions and Reich's concept of bioenergetic pulsation in emotional expression. Jahn and Dunne (1987) hypothesize that consciousness operates primarily on a wave-mechanical basis and that the degree of resonance between the operator and the device is a function of the degree of superposition of the wavelike properties of the two. When resonance is established, they state, "molecular experiential patterns can arise whose observable characteristics differ significantly from the simple sum of their individual behaviors." At this point we must assume that human-machine resonance as Jahn and Dunne have defined it can be established whether or not there is conscious intention of the subject (operator, patient) toward the machine. Their field REG studies, cited above, indicate that this is possible.

As noted earlier, Reich proposed that anxiety and pleasure are antithetical functions with respect to the phenomenon of total organismic pulsation. For several reasons, Reich found that bioelectricity per se could not satisfactorily account for all the phenomena seen in his experiments (Reich, 1948). His proposition of a vital, nonelectromagnetic, bioenergetic force called orgone energy did, however, satisfy the necessary requisites for understanding the phenomena, and he recast his formulations in its terms. According to Reich, states of anxiety or depression were functionally identical to a contractive movement of orgone energy toward the core, pleasure to an expansion of energetic excitation toward the skin surface, and anger to an expansion toward the skin surface that stops at the musculature.

The results of the present studies indicate that, with respect to the direction of REG output, anger and anxiety/sadness are antithetical emotional expressions. We have found that the emotions described by Reich as expansive significantly correlate with an upward shift in REG movement, corresponding to a positive or constructive correspondence of the binary output with the regularly alternating template sequence. Those described as contractive are correlated


with a downward shift in output, or a destructive correspondence with the template. Both indicate increased order in the nominally random process. This finding is supported by an analysis of the REG output when patients were longing, an expansive emotion according to Reich. We found that longing, despite the fact that its expression in the study involved crying, was in all instances correlated with marked, highly anomalous upward shifts in REG output. Although there were only two events where longing was expressed (Study 2), their average z score was 1.27 (SD of 1.099).

Establishing that the expressed emotions correspond to the direction of REG response, and that this fits Reich's hypotheses with respect to bioenergetic movement within the physical boundaries of the organism, does not, however, explain the nonlocal effect on the REG. And, assuming that many of the same forces are operative in emotional expression and conscious intentionality, we cannot easily invoke electromagnetism as an explanatory mechanism, since Jahn and Dunne found similar results in their local and nonlocal experiments, as well as in experiments performed atemporally (Jahn and Dunne, 1987).

The issue is further complicated by the fact that directionality in REG output with the device used in this experiment is completely arbitrary: Once the original signal is electronically rectified, upward and downward shifts of the REG output do not mean that they are generated by physically greater or fewer noise-source pulses. So, as much as one might be tempted to hypothesize some expansive physical force being emitted during, say, the expression of anger, which then secondarily causes an increased generation of electrical impulses in the REG, such a causal mechanism would be impossible given the lack of linkage between the original generation of electronic signals by the REG and the final direction of its output.

The fact remains that we simply do not know enough at this time to explain these phenomena satisfactorily. We can, however, explore the following hypothesis for future experimentation. First we must answer the oft-asked question of the role of the experimenter in determining the outcome of the experiment. On beginning the first study, I was aware that I brought to it certain assumptions and intentions. These were my known beliefs related to the subject prior to beginning Study 1:

1. Conscious intention and passive participation in an experimental setup could effect anomalous changes in the output of machines.
2. Information could be anomalously transmitted from one person to another independent of time and distance.
3. Life processes, including the generation and expression of emotions, were not the result of a highly sophisticated mechanical concatenation of dead parts, but rather the expression of a spontaneously pulsatile life energy functioning within a membrane.
4. This same energy functioned at large, external to living systems, where it provided a medium from which electromagnetic impulses could emerge and through which they were transported. To my mind it


was possible, though definitely unproven, that it also served as the medium for the transmission of psychokinetic impulses, telepathic information, and other anomalous manifestations of consciousness.
5. If conscious intention and passive attention could anomalously affect an REG, so might spontaneous emotional expression. There was no conscious belief that opposite directions of REG output would be found to correlate with certain kinds of emotions, although I was certainly preconsciously aware of Reich's findings, noted above. (By preconscious I mean information that is accessible to conscious awareness, although not necessarily conscious at the time.)
6. At the beginning of Study 2, I was consciously aware of the bidirectionality of REG performance in correlation with emotional expression.

I believe that my knowledge and biases prior to undertaking the investigation may have been a significant factor in obtaining our results. My current working hypothesis is that the investigator, patient, and REG cofunction in a state of resonance and that the REG output is a manifestation of the functional unity of the triad.

REG anomalies in group situations, where members of the group exerted no conscious intention toward the device, were seen in PEAR studies reported by Nelson et al. (1996, 1998) and also independently by Radin, Rebman, and Cross (1996), Schwartz et al. (1997), and Rowe (1998). In their discussions of the possible cause/source of the observed anomalies, Nelson, Radin, and Schwartz postulate that the interacting participants of the group may generate a consciousness field "to which the REG responds via an anomalous decrease in the entropy of its nominally random output" (Nelson et al., 1996). Noting that the results of their (PEAR) benchmark REG and remote viewing experiments indicated a lack of dependence of the effects on time and distance, and assuming that the anomalous effects found in the group REG studies derive from the same basic phenomenon as the laboratory experiment, Nelson et al. conclude that "no conceptual models based on currently known physical fields with their usual 1/r² dependencies and very limited advanced and retarded signal capabilities are likely to suffice." They go on to suggest, in view of these facts and the further finding that "the basic effects are analytically tantamount to small changes in the elemental binary probabilities underlying the otherwise random distributions," that the "anomalies may be more informational than dynamical in their physical character."

³de Quincey states that such concepts as quantum field potentials and field consciousness "have far more to do with the nature of time and probabilities than with space. So-called quantum fields are not actually fields in any spatial sense. They are abstract mathematical descriptions of matrices of probabilities (of tendencies for certain events to occur). It is only the representations of such probabilities that take on the characteristics of fields. Probabilistic events, as tendencies of events to occur, are temporal, perhaps even psychological. In the end, statements of probability are statements about psychological expectations" (de Quincey, 1999).


Nelson, R. D., Jahn, R. G., Dunne, B. J., Dobyns, Y. H., & Bradish, G. J. (1998). FieldREG II: Consciousness field effects: Replications and explorations. Journal of Scientific Exploration, 12, 425-455.
Panksepp, J. (1998). Affective neuroscience. Oxford, England: Oxford University Press.
Radin, D. I., Rebman, M., & Cross, M. P. (1996). Anomalous organization of random events by group consciousness: Two exploratory experiments. Journal of Scientific Exploration, 10, 143-169.
Reich, W. (1937). Experimental investigation of the electrical function of sexuality and anxiety. Republished in The Impulsive Character. Koopman, B. G. (Trans.) New York: Meridian, 1974.
Reich, W. (1942). The function of the orgasm. New York: Meridian.
Reich, W. (1948). The cancer biopathy. New York: Orgone Institute Press.
Reich, W. (1949). Character analysis. New York: Orgone Institute Press.
Rowe, W. D. (1998). Physical measurements of episodes of focused group energy. Journal of Scientific Exploration, 12, 569-583.
Rubik, B. (1995). Can western science provide a foundation for acupuncture? Alternative Therapies, 1, 41-43.
Schwartz, G. E. R., Russek, L. G. S., Zhen-Su S., Song, L. Z. Y. X., & Xin, Y. (1997). Anomalous organization of random events during an international qigong meeting: Evidence for group consciousness or accumulated qi fields? Subtle Energies & Energy Medicine, 8, 55-65.

Journal of Scientific Exploration, Vol. 14, No. 2, pp. 217-231, 2000

0892-3310/00 © 2000 Society for Scientific Exploration

Energy, Fitness, and Information-Augmented Electromagnetic Fields in Drosophila melanogaster

Ditron LLC, P.O. Box 70, Excelsior, MN 55331

Department of Materials Science and Engineering, Stanford University, Stanford, CA 94305

Abstract: Exposure of developing larvae to a few specific electromagnetic fields (EMFs) and information-augmented EMFs modified (a) the expression of larval development time, a genetically based trait relevant to development and whole-organism fitness, and (b) a measure of energy metabolism, the [adenine triphosphate]/[adenine diphosphate] ([ATP]/[ADP]) ratio, in isofemale strains of Drosophila melanogaster. The study represents a compilation of approximately 10,000 larvae and 7,000 adults counted. The specific EM frequencies used in this study, 5.0, 7.3, 8.0, and 9.3 MHz at output power levels in the approximately 1 microwatt range, were produced by two small electronic devices of physically identical nature, but one was intentionally imprinted with specific information. Exposure periods varied from 4 hours to one life cycle. Larval development time was significantly shortened (approximately 10%) and the [ATP]/[ADP] ratio significantly increased (approximately 30%) in a Faraday cage without the EMFs compared to a Faraday cage with the specific EMFs. The Faraday cage represents a shielded environment that facilitates exposure to both fewer and specific electromagnetic frequencies. Larval development time results for development in the laboratory environment, which represents exposure to background EMFs of various frequencies, were intermediate. The information-augmented EMFs also gave intermediate results. Overall, there were no significant effects observed for the other measured fitness components: third instar larval weight, adult survival, and surviving adult weight. We discuss a thermodynamic model to account for our results and general bioelectromagnetic effects and attribute the change in development time to EMFs modifying electron transport chain activity and the [ATP]/[ADP] ratio via the influence of the EMF/magnetic vector potential upon electron availability and nicotinamide adenine dinucleotide levels.
Keywords: fitness experiments; [ATP]/[ADP] ratio; Faraday cages; information-augmented and normal EMFs; theoretical models


1. Introduction
Bioelectromagnetic studies have advanced considerably in the past decade (Goodman, Greenbaum, and Marron, 1995). For example, studies of nonthermal effects on cells of the immune system from exposure to electromagnetic fields (EMFs) in the extremely low frequency range (<300 Hz) indicate that stimulatory, inhibitory, and no field exposure effects exist even for identical field parameters. The results depend upon the degree of cellular activation and the physical and biochemical boundary conditions experienced (Eichwald and Walleczek, 1996). Furthermore, low frequency EMFs influence specific RNA transcripts in human cells and transcription in Drosophila melanogaster cells (Goodman, Wei, and Henderson, 1989; Goodman et al., 1992), and Ho et al. (1992) have shown that weak static magnetic fields cause abnormalities in first instar larvae of D. melanogaster.

A number of models have been proposed to account for EMF effects on biological systems. Ho et al. (1992) suggested that the weak static magnetic fields they studied must affect some cooperative process involved in pattern determination during critical stages of early Drosophila development. On the basis of the immune cell experiments, Eichwald and Walleczek (1996) suggested that external EMFs interact with cellular systems at the level of intracellular signal transduction pathways, specifically Ca-signaling processes. Nossol, Buse, and Silny (1993) reported magnetic field influences on the in vitro redox activity of cytochrome oxidase. Additionally, weak EMFs stimulate adenine triphosphate (ATP) synthesis and alter Na,K-ATPase activity (Blank, Soo, and Papstein, 1995; Lei and Berg, 1998). Thus, magnetic fields may influence cellular energy metabolism, and Menendez (1996) has suggested that an electromagnetic coupling process may explain the proton translocation mechanism in cellular energy transfer.

A significant association has been observed between stress, larval development time, and aspects of energy metabolism, the cofactor nicotinamide adenine dinucleotide (NAD), and the [ATP]/[ADP] ratio in D. melanogaster (Kohane, 1988, 1994). In the present paper, we use the theoretical and experimental approach presented in these earlier papers and expand it to incorporate the intention-imprinted electronic device (IIED) techniques of Dibble and Tiller (1999), and we investigate the hypothesis that both normal and information-augmented EMFs may influence fitness and energy metabolism. We study fitness and the [ATP]/[ADP] ratio under nonstressful nutrient conditions, in the presence and absence of small electronic devices that produce EMFs of frequencies much higher than those used in previous studies. We assess these frequency effects using exposure periods from 4 hours to one life cycle in order to detect EMF effects that may not be observed at more conservative levels of EMF frequency and exposure period.

Our results indicate that larval development time is significantly shortened and the [ATP]/[ADP] ratio significantly increased in a Faraday cage without the devices compared to a Faraday cage with the devices.


The Faraday cage represents a shielded environment with respect to electromagnetic (EM) radiation and facilitates exposure to fewer frequencies in the absence of devices and to specific EM frequencies in the presence of devices. In addition, the IIEDs gave significantly better results on larval development time than did the unimprinted control devices. Overall, there are no significant effects for the other fitness components assessed: third instar larval weight, adult survival, and surviving adult weight. Thus, a reduction in exposure to EMFs increases one component of fitness, suggesting that EMFs may act as a biological stress (Goodman, Greenbaum, and Marron, 1995; Smith, 1996). Although we acknowledge the fact that EMFs may modify the larval environment (e.g., the food), we attribute the effects of the specific EMFs in this study to the modified [ATP]/[ADP] ratio as a consequence of altered NAD levels, electron availability, and electron transport chain activity. Finally, we discuss a thermodynamic model that may rationalize our observed fitness and energy changes and general bioelectromagnetic effects.

2. Experimental Methods
(a) Strains

We studied larvae obtained from two isofemale strains, Strains 1 and 2 of Kohane (1994). Nonstressful food was used for strain culture and experiments, and the food composition was as follows: 36 g agar, 108 g sugar, 72 g dry yeast, 10 ml propionic acid, and 24 ml nipagen in 2,000 ml water (Kohane, 1987). Separate constant-temperature rooms (18°C, 55% relative humidity) were used for (a) device storage, (b) strain culture and unexposed adult culture, and (c) experiments.
(b) Faraday Cages

A standard Faraday cage consists of a copper mesh screen enclosing a given spatial volume. It is electrically grounded, so EM waves with wavelengths larger than the mesh size that impinge on the screen leak off to ground and only minimally penetrate the interior space. Thus, the interior space has a greatly reduced EM integrated power density in the wavelength range larger than the copper mesh spacing. The single-layer copper mesh cages used here (dimensions: 40 x 40 x 30 cm) can be expected to reduce the EM field strength by a factor of approximately 10.
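As a rough guide to what this attenuation means for exposure (a back-of-the-envelope note of ours, not a figure from the original measurements), the time-averaged power density of an EM wave scales with the square of the field amplitude, so a tenfold reduction in field strength corresponds to roughly a hundredfold reduction in integrated power density:

$$ \frac{S_{\text{inside}}}{S_{\text{outside}}} \;\approx\; \left(\frac{E_{\text{inside}}}{E_{\text{outside}}}\right)^{2} \;\approx\; \left(\tfrac{1}{10}\right)^{2} \;=\; 10^{-2}, $$

that is, about 20 dB of shielding ($20\log_{10}10$) over the wavelength range larger than the mesh spacing.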


(c) Electronic Devices

Our experiments used two electronic devices in order to assess exposure to multiple frequencies and to a single frequency, as follows: (d1), a triple oscillator device producing frequencies of 5.0, 8.0, and 9.3 MHz; and (d2), a single oscillator device producing a frequency of 7.3 MHz. Additionally, we studied two categories of EMFs produced by these devices. The first category involved devices (d1, o) and (d2, o), which had not been exposed to human informational influences. The second category involved devices (d1, j) and (d2, j), which had been exposed to human informational influences (see below). Thus, (d1, o) and (d1, j), and (d2, o) and (d2, j), constituted physically identical pairs of devices that differed only in the fact that one of each pair, (d1, j) and (d2, j), respectively, had been exposed to the human informational influence. The devices were individually wrapped in aluminum foil and stored in separate Faraday cages and were fabricated to be identical to those produced by Clarus Corporation (La Costa, CA). The triple oscillator device was powered by line voltage to 9 V DC, and the single oscillator device was previously charged for 24 h by line voltage to 9 V DC and used with battery power.
(d) Intentions

The actual imprinting procedure was as follows: (a) The device was placed, along with its current transformer, on a table around which the imprinters sat. (b) Four people (two men and two women), all accomplished meditators, coherent, inner-self managed, and readily capable of entering an ordered mode of heart function (Tiller, 1997) and sustaining it for an extended period of time, sat around the table ready to enter a deep meditative state. (c) A signal was then given to enter such an internal state, to cleanse the environment and create a sacred space for the intention; then a signal was given by one of the four to put attention on the tabletop objects and begin a mental cleansing process to erase any prior imprints from the device. (d) After 3 or 4 minutes, another signal was given to begin focusing on the specific prearranged intention statement for about 10-15 minutes. (e) Next, a final signal was given to shift focus to a closing intention designed to seal off the imprint into the device and minimize the leakage of the essential energy/information from the devices. This completed the process, so the four people withdrew from the meditative state and returned to their normal state of consciousness.

The specific intention was "to activate the indwelling consciousness of the device (d, j) so as to increase the concentration of NAD plus the activity of the available enzymes, dehydrogenases and ATP synthase in the mitochondria so that production of ATP is significantly increased relative to that produced in the unimprinted device (d, o)." This chemical transformation process in the cells of the fruit fly larvae was expected to significantly influence their fitness, which would manifest itself via a reduced larval development time for these larvae because they have a larger pool of ATP to work with (Kohane, 1994).

(e) Fitness

We conducted four similar experiments over a 6-month period and assessed EMF effects on the fitness of the strains using the above devices and different exposure periods. The experiments are summarized in Tables 1 and 2. We measured fitness at 18°C using the procedures given in Kohane (1988, 1994).

TABLE 1
Summary of Fitness Experiments

                                      Treatment and replicate number
Experiment   Strain(a)   Device(b)    C(c)   F(c)   (d, j)(d)   (d, o)(d)   Exposure period(e)        Date
1            2           d1           16     15     15          16          4 hours                   February 1997
2            1           d2           16     15     15          16          4 hours                   February 1997
3            1           d1           15     14     15          15          4 days                    July 1997
4(f, g)      1           d1           15     15     16          16          Life cycle plus 4 days    July 1997

Note: Fitness components assessed were larval development time, adult survival, and surviving adult weight.
(a) Strain 1 and Strain 2 refer to two isofemale strains.
(b) (d1) refers to a triple oscillator device producing frequencies of 5.0, 8.0, and 9.3 MHz; (d2) refers to a single oscillator device producing a frequency of 7.3 MHz. The output power of the devices at the exposure distances is expected to be less than 1 microwatt. Categories of EMFs produced by these devices were as follows: (d, o), devices that had not been exposed to human informational influences; (d, j), devices that had been exposed to human informational influences (an intention concerned with significantly increasing the [ATP]/[ADP] ratio and decreasing larval development time).
(c) (F) refers to culture in a Faraday cage without a device, and (C) refers to culture in the laboratory environment.
(d) (d, o) and (d, j) refer to culture in a Faraday cage with a single device.
(e) Experiments were conducted at 18°C. Experimental vial cultures involved 30 larvae (0-4 h old) transferred to a single vial containing nonstressful food.
(f) For Experiments 1-3, larvae were derived from unexposed adults; for Experiment 4, larvae were derived from exposed adults.
(g) The [ATP]/[ADP] ratio and third instar larval weight were assessed in Experiment 4 (see Materials and Methods).

Experiments were conducted at 18°C, since development time differences have been detected at this temperature for the strains studied here (Kohane, 1994). The treatments investigated in the experiments were as follows: (a) (C), culture in the random EMF environment of the laboratory; (b) (F), culture in the relatively reduced EMF environment of the Faraday cage without a device; (c) (d1, o) and (d2, o), culture in the relatively reduced EMF environment of the Faraday cage in the presence of a device that had not been associated with human informational influences; and (d) (d1, j) and (d2, j), culture in the relatively reduced EMF environment of the Faraday cage in the presence of a device that had been associated with human informational influences.

A single replicate involved 30 larvae (0-4 h old) transferred to a single vial containing nonstressful food (see above). For each experiment, all vials were established within a 3-hour period. Vials were transferred to Faraday cages, and the cages were placed, at the same time, immediately next to each other on the same bench in a constant-temperature room. Treatment (C) involved vials concurrently transferred to a tray, which was placed on the lid of the treatment (F) cage. Using a thermocouple in vial cultures, we could not detect temperature variation between treatments in each experiment.

TABLE 2
Results for Each Experimental Condition Described in Table 1 (treatments C, F, j, and o)

Larval development time (T50, in days)(a): Experiment 1, 16.276 (0.170); Experiment 2, 15.504 (0.097); Experiment 3, 15.964 (0.315); Experiment 4, 16.507 (0.087)
Adult survival(b): Experiment 1, 0.917 (0.833, 0.933); Experiment 2, 0.900 (0.842, 0.915); Experiment 3, 0.933 (0.830, 0.938); Experiment 4, 0.867 (0.833, 0.967)
Surviving adult weight (mg)(c): Experiment 1, 1.140 (0.049); Experiment 2, 1.155 (0.045); Experiment 3, 1.267 (0.099); Experiment 4, 1.254 (0.047)
Larval weight (mg), Experiment 4(d): 1.478 (0.157)
[ATP]/[ADP] ratio, Experiment 4(e): NAD assay, 1.305 (0.046); pure water assay, 0.486 (0.035)

(a) Larval development time (T50, in days). Means and standard deviations are given in parentheses.
(b) Adult survival, given as proportions for an input of 30 larvae. Medians are shown with the 25th and 75th percentiles in parentheses (25th before 75th), since the data distribution curves for each batch were not normal.
(c) Surviving adult weight (mg). Means and standard deviations are given in parentheses.
(d) Larval weight in mg. Means and standard deviations are given in parentheses.
(e) [ATP]/[ADP] ratio. Means and standard deviations are given in parentheses.

Exposure of vials to devices in Faraday cages was achieved as follows: the respective device was placed in the center of a Faraday cage, and vials/bottles were transferred to the cage and placed around the perimeter. The device was then removed at the end of the specified time period, and larval development proceeded until adults eclosed. The device was approximately 15 cm from the vials/bottles on average; at these distances the output power of the devices in their specific frequency ranges is expected to be less than 1 microwatt. Vials were monitored daily, and surviving adults were collected and weighed. Surviving adult weight was calculated for each vial as the total weight of the surviving flies divided by the number of flies surviving and is given in mg. Larval development time is given as T50, the time taken for half of the surviving adults to emerge.
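As an illustration of how these two per-vial summaries can be computed, the following minimal Python sketch tabulates T50 and mean surviving adult weight from daily eclosion counts. The function name, the day-resolution treatment of T50, and the example numbers are our own illustrative assumptions, not taken from the paper (the published T50 values are evidently finer-grained than whole days).

```python
import numpy as np

def t50_and_mean_weight(emergence_counts, total_weight_mg):
    """Per-vial summaries as described in the text: T50 is the day by which
    half of the surviving adults had emerged, and surviving adult weight is
    the total weight of the survivors divided by their number (mg per fly).
    `emergence_counts` maps day of development -> number of adults emerging."""
    days = sorted(emergence_counts)
    counts = np.array([emergence_counts[d] for d in days], dtype=float)
    survivors = counts.sum()
    cumulative = np.cumsum(counts)
    # First day on which the cumulative count reaches half of the survivors.
    t50 = days[int(np.searchsorted(cumulative, survivors / 2.0))]
    mean_weight = total_weight_mg / survivors
    return t50, mean_weight

# Hypothetical vial: 26 survivors from 30 input larvae, 30.0 mg total adult weight.
example = {15: 3, 16: 10, 17: 9, 18: 4}
print(t50_and_mean_weight(example, total_weight_mg=30.0))  # -> (16, ~1.15 mg)
```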


In Experiments 1-3 (Table 1), larvae were derived from unexposed adults. In Experiment 4 (Table 1), larvae were derived from exposed adults of Strain 1 as follows: replicate bottle cultures were exposed to all treatments, (C), (F), (d1, j), and (d1, o), for one life cycle. Emerging adults were synchronously collected from the respective cultures and used to generate larvae for experimental vials. These were then exposed to the same treatment as the parents, using a 4-day exposure period for devices.
(f) Energy Levels During Larval Development

The energy assay follows Kohane (1994) and primarily detects electron transport chain activity. We measured the [ATP]/[ADP] ratio in third instar larval homogenates in which the glucose-hexokinase reaction, ADP-regenerating reactions, and glycolysis and the citric acid cycle were not functional or were severely impaired (Kohane, 1994). Supplemental dH2O does not alter the [ATP]/[ADP] ratio in these homogenates. Supplemental 0.01 M NAD increases the [ATP]/[ADP] ratio as a consequence of altered electron transport chain activity (Kohane, 1994). Kohane (1994) stored larval homogenates supplemented with NAD and dH2O on ice for a reaction time of 7 minutes to facilitate metabolic activity, prior to determination of the [ATP]/[ADP] ratio.

In order to assess treatment, fitness, and the [ATP]/[ADP] ratio in the same experiment, we established experimental vials for energy assays concurrently with Experiment 4 of Table 1. We also adopted a more conservative approach and stored the homogenates, supplemented with NAD and dH2O, in their respective treatments for a longer reaction time of 40 minutes. Thus, the assay assesses in vivo and in vitro effects of EMFs on the [ATP]/[ADP] ratio (Kohane, 1994).

We avoided age-dependent effects on energy metabolism by preparing larval homogenates on the same day as follows: 15 third instar larvae were collected, transferred to microcentrifuge tubes, weighed (mg), and homogenized in 250 μl ice-cold dH2O, to which 250 μl of ice-cold 0.01 M NAD or dH2O were immediately added. The solutions were mixed and transferred to each treatment for a reaction time of 40 minutes to facilitate metabolic activity. The microcentrifuge tubes were then freeze-clamped in liquid nitrogen. ATP, ADP, and AMP (adenine monophosphate) were extracted using 4.2 M formic acid and 4.2 M ammonium hydroxide (Kohane, 1994). They were quantified using an automated Isco high performance liquid chromatography (HPLC) apparatus, a Vydac column (302 IC4.6), and a preprogrammed gradient from 0.025 M sodium phosphate monobasic, pH 2.8, to 0.5 M sodium phosphate monobasic, pH 2.8 (modified from Kohane, 1994). There were eight replicates per treatment for both supplemental NAD and dH2O. All extractions were completed within a 4-hour period. Replicates were stored at 2-4°C and assayed in random order during a 3-day period.


3. Results
(a) EMFs and Fitness

We have used visual inspection of Figure 1 to assess the homogeneity of the treatment results across experiments. Considering larval development time, (C) and (F) were not homogeneous, and the rankings for (C) were as follows: (C2) < (C3) ≤ (C1) < (C4). The rankings for (F) were as follows: (F3) < (F2) ≤ (F4) ≤ (F1). (d, j) was essentially homogeneous. Considering (d, o), the rankings were as follows: (o1) = (o2) = (o3) < (o4). All treatments appeared to be homogeneous across experiments for adult survival. Considering surviving adult weight, treatments were not homogeneous across experiments. Rankings were as follows: (C): (C1) ≤ (C2) < (C4) ≤ (C3); (F): (F1) = (F2) < (F3) = (F4); (d, j): (j1) = (j2) < (j4) < (j3); and (d, o): (o1) = (o2) < (o4) < (o3). Finally, larval weight was only assessed in Experiment 4.

On the basis of the observed heterogeneity, we have considered treatment comparisons for each experiment and have used visual inspection of Figure 1 to assess these comparisons. In our experiments, comparisons between (d, o) and (d, j) indicate the effects of exposure to the two categories of EMFs (the intention effect). The presence of (d, j) significantly decreased larval development time in comparison to (d, o) in all experiments. We did not observe a significant device effect for adult survival, surviving adult weight, or larval weight.

Comparisons between (C) and (F) indicate the effect of a reduction in exposure to random EMFs (the Faraday cage effect). The presence of the Faraday cage significantly decreased larval development time in comparison to (C) in all experiments. We did not observe a significant Faraday cage effect for adult survival or larval weight. Considering surviving adult weight, a significantly higher weight was observed in Experiments 3 and 4 for (F) in comparison to (C).

Comparisons between (F) and (d, o) and (d, j) indicate the effects of the addition of specific EMFs to the Faraday cage environment (the oscillator effect). In all experiments, the presence of the Faraday cage without a device (F) significantly decreased development time in comparison to both (d, j) and (d, o). As noted above, the effect of (d, j) was to decrease development time in comparison to (d, o). We did not detect any significant comparisons for either adult survival or larval weight. Considering surviving adult weight, no treatment effects were observed for Experiments 1, 2, and 3. However, in Experiment 4 the treatment rankings were as follows: (F) > (d, j) = (d, o).


Fig. 1. EMFs and fitness. Notched box plots are given for the experiments described in Table 1 as follows: (a) larval development time (the time taken for half of the surviving flies to emerge, T50, in days); (b) adult survival for an input of 30 larvae, given as a proportion; (c) surviving adult weight (mg); and (d) larval weight (Experiment 4 only, mg). The x-axis gives the treatment (C, F, j, and o) for each experiment (1, 2, 3, and 4) as described in Table 1. Note: A notched box plot provides a simple graphical summary of a batch of data and implements confidence intervals on the shown median values. The boxes are notched at the median and return to full width at the lower and upper confidence interval values. If the intervals around two medians do not overlap, one can be confident at about the 95% level that the two population medians are different. Outside values are represented by an asterisk and far outside values by an open circle.
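For readers who wish to reproduce this style of display, a minimal Python sketch using matplotlib's notched box plots is given below. The numbers are made up for illustration, and matplotlib's notch computation is a standard median confidence interval that may differ in detail from the plotting package used by the authors.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Hypothetical larval development times (days) for the four treatments
# of one experiment; the real values are summarized in Table 2 and Figure 1.
data = [rng.normal(mean, 0.3, size=16) for mean in (16.3, 15.2, 15.8, 16.1)]

fig, ax = plt.subplots()
ax.boxplot(data, notch=True)      # notches give a confidence interval on the median
ax.set_xticklabels(["C", "F", "j", "o"])
ax.set_xlabel("Treatment")
ax.set_ylabel("Larval development time (days)")
plt.show()
```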


(b) EMFs and the [ATP]/[ADP] Ratio

Results for the [ATP]/[ADP] ratio in the presence of both NAD and dH2O are given in Figure 2. Again, we have used visual inspection of the figure to assess the comparisons described above. Comparisons between (d, j) and (d, o) indicated significant differences for both assays, with a higher [ATP]/[ADP] ratio observed for (d, j). Comparisons between (F) and (C) indicated a dependence on the assay as follows: (F) > (C) for NAD and (F) = (C) for dH2O. Comparisons between (F) and (d, j) and (d, o) indicated that (F) produced a higher [ATP]/[ADP] ratio than (d, j), which was greater than (d, o). Differences between treatments were more evident for supplemental NAD than for dH2O. Finally, the Pearson correlation coefficient for the [ATP]/[ADP] ratio and larval development time in Experiment 4 was -0.856.
4. Discussion

A reduction in exposure to random laboratory EMFs and to two categories of EMF frequencies shortened larval development time and increased the [ATP]/[ADP] ratio in D. melanogaster. The magnitude of this effect appeared to be modified by the category of EMFs assessed, and we suggest that this may occur via modified and information-augmented EMFs (see below). This is another example of intention imprinting having a significant effect on physical reality (Dibble and Tiller, 1999). Differences observed for the [ATP]/[ADP] ratios were not large, but were significant, indicating that the EMFs studied here modified energy metabolism.


Fig. 2. EMFs and the [ATP]/[ADP] ratio in the presence of NAD and dH2O as assayed in Experiment 4 of Table 1. The x-axis gives the treatment (C, F, j, and o) for each assay (1: NAD and 2: dH2O).


Although our results for developing larvae indicated a decrease in the [ATP]/[ADP] ratio in the presence of EMFs, other studies have shown that EMFs (15, 50 Hz) may stimulate both ATP synthesis and cell proliferation in Corynebacterium glutamicum and Saccharomyces cerevisiae (Lei and Berg, 1998; Mehedintu and Berg, 1997). This contrast may be a consequence of the higher EMF frequencies studied here, the use of the Faraday cage, and the fact that both the biological system under study and its status influence the effects of EMFs (Goodman, Greenbaum, and Marron, 1995).

Larger treatment differences were observed for the [ATP]/[ADP] ratio with supplemental NAD, in comparison to supplemental dH2O (Figure 2). This result was in accord with Kohane (1994), where the [ATP]/[ADP] ratios were also higher, as a consequence of the shorter reaction time. As previously noted, supplemental NAD increases electron transport chain activity and the [ATP]/[ADP] ratio in larval homogenates. Hence, EMF effects on energy metabolism may be greater when the electron transport chain is active. EMF effects on energy metabolism may manifest both in vivo and in vitro, and we will assess details of this in future experiments.

A higher [ATP]/[ADP] ratio was observed for the treatments that produced a shorter development time, providing additional evidence for a relationship between energetics and fitness (Parsons, 1997; Watt, 1985). Our results also suggest that EMFs may function as a biological stress (Goodman, Greenbaum, and Marron, 1995; Smith, 1996), possibly via a change in energy metabolism, since exposure to EMFs lengthened larval development time, which is associated with total fitness: earlier sexual maturity leads to higher fitness (Cole, 1954; Hoffmann and Parsons, 1991; Parsons, 1997).

We detected significant effects on larval development time, but not on third instar larval weight, adult survival, and, in general, surviving adult weight. In this regard, Veicsteinas et al. (1996) studied the development of chicken embryos exposed to an intermittent EMF of 50 Hz. Statistical comparisons between exposed and sham-exposed values did not show significant differences for the variables studied: extracellular membrane components and histological examinations of brain, liver, and heart during development; egg fertility and egg weight; body weight of chickens at 90 days from hatching; and histological analysis of body organs. Furthermore, Morganato et al. (1995) concluded that continuous exposure to a 50 Hz magnetic field during 70% of the life span of rats prior to sacrifice did not significantly alter growth rate or the morphology and histology of liver, heart, lymph nodes, testes, and bone marrow. Hematology, hematochemistry, and the neurotransmitters dopamine and serotonin were also unaffected by the exposure. The significant results observed here for larval development time may be a consequence of the higher EMF frequencies, the Faraday cage, and the biological system under study (see above).

The decrease in larval development time observed here did not appear to be due to a trade-off with the measured fitness components but may also arise as a trade-off with other life-history traits (Hoffmann and Parsons, 1991; Odum, 1985).


In the present study, we assessed EMF frequency effects under nonstressful nutrient conditions and a relatively low larval density (Kohane, 1988, 1994). Fitness differences are enhanced under stress, and, therefore, we may both detect differences for additional fitness components and clarify the surviving adult weight results under more stressful nutrient conditions (Kohane, 1994). We also note that the EMF effects may be due to modification of the larval environment, in particular the food, and we will assess this idea in future experiments studying EMF effects on larval energy metabolism in vitro.

Treatment results across the four experiments were not homogeneous for larval development time and adult survival. Of interest were the following: (a) the relative homogeneity of the effect of (d, j) on larval development time, suggesting that the intention may produce a constrained larval development time, and (b) the shorter development time observed for all treatments in Experiment 3, in comparison to Experiment 4, suggesting that exposure period may influence this fitness component. Considering surviving adult weight, significant differences were only observed in Experiments 3 and 4, where the Faraday cage effect produced the highest weights (F). The results for Experiment 4 suggest that cross-generational exposure to EMFs may affect fitness in ways that differ from single-generation exposure. However, in general there did not appear to be specific patterns to the heterogeneity. Hence, at this time we cannot conclude much about strain, specific device (single or triple oscillator), and exposure period, and we will assess these aspects in detail in future experiments.

The thermodynamic basis for an EMF effect on biological systems is a consequence of the influence of the standard electrochemical potential upon biological processes. Standard thermodynamic theory (Tiller, 1990) shows us that the electrochemical potential, η_j, of species j in medium A, is given by
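The displayed equation (Equation 1) did not survive reproduction here. A plausible reconstruction, consistent with the variable list that follows, is sketched below; the grouping of the applied-field term is our assumption rather than a quotation of the authors' exact expression:

$$ \eta_{j}^{A} \;=\; \mu_{j}^{0,A} \;+\; kT\,\ln a_{j}^{A} \;+\; z_{j}\,e\,V^{A} \;+\; \Delta\mu_{j}^{\mathrm{EMF}}, $$

where the last term is the applied-EMF contribution, expected to scale with the field energy density $\tfrac{1}{2}\!\left(\varepsilon E^{2} + m H^{2}\right)$.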

Here, μ_j^0 = standard state chemical potential, T = temperature, k = Boltzmann's constant, a = chemical activity (a = γc, where c = concentration and γ = activity coefficient), e = electron charge, z = valence, V = electrostatic potential, ε = electrical permittivity, m = magnetic permeability, E = electric field, and H = magnetic field. One finds that z_j = +1 and -1, respectively, for the proton and electron. Thus, the last term represents the applied EMF contribution to the standard electrochemical potential and, hence, to various biological systems.

The particular physical variable, or quantum potential, that may be important in our study is the magnetic vector potential (MVP), denoted A, which creates an electric field E and a magnetic field H (Jackson, 1962; Kraus and Carver, 1973; Tiller, 1993, 1997). Therefore, we attribute the EMF and augmented EMF effects on larval development time to the influence of the MVP upon electron transport chain activity and the [ATP]/[ADP] ratio as follows: the MVP diminished electron transfer to the electron transport chain from the NAD pool and cellular electron availability.


The diminished electron transfer modified the redox potential, causing lowered electron transport through the electron transport chain and decreased proton pumping and energy availability. This effect may have decreased the proton motive force across the inner mitochondrial membrane, subsequently modifying cytosolic phosphorylation status (Brand and Murphy, 1987; Kohane, 1994). The above approach suggests that the MVP may (a) modify NAD levels and (b) interact with the electron transport chain at positions beyond electron availability. We will assess these hypotheses in future experiments studying EMF effects on larval energy metabolism in vitro.

Discussion of the results concerning the categories of EMFs and the possibility of augmenting EMFs requires that one look to a deeper understanding of the quantum vacuum, or so-called empty space. This has been established by quantum theory as a chaotic, virtual particle sea of boundless energy (an energy density equivalent to an enormous mass density when expressed in g/cc) at the quantum relativity level. An interaction occurs between this virtual particle sea and the fundamental particles of physical matter to ultimately determine the magnitude of μ_j^0 in Equation 1 (Feynman and Hibbs, 1965; Lee, 1988; Tiller, 1997). Change in characteristics of the physical vacuum should, in turn, change the ground state energy of fundamental particles, atoms, molecules, and biological moieties (Tiller, 1993). These changes may manifest in biological systems as a consequence of the relationship between the ground state energy of fundamental particles (or charge-carrying species) and the standard state electrochemical potential (μ_j^0 in Equation 1).

Tiller (1997) suggested that directed human intention is capable of altering characteristics of the physical vacuum. Thus, if a human intention can shift these characteristics even a tiny amount, the ground state energy of the electron would be appreciably altered. The primary effect of intention on biological systems may therefore occur by modification of this ground state energy for the charge-carrying species and, thus, of the standard state electrochemical potential for atoms containing such particles. Therefore, μ_j^0 in Equation 1 may be affected by directed human intention. The magnetic vector potential is thought to be involved with human intention (Tiller, 1993, 1997). If we call the incremental change in A associated with a specific focused intention ΔA, then standard electrodynamic equations yield changes ΔE and ΔH (Jackson, 1962; Kraus and Carver, 1973; these relations are recalled below), and the intention effect may both enter physical reality via Equation 1 and influence energy metabolism and fitness.

Although others have proposed a kinetic rate-limiting step for EMF effects on biological systems (Eichwald and Walleczek, 1996; see above), a sound basis also exists for a thermodynamic potential change as the rate-limiting step. The final apportioning of the total effect between thermodynamic and kinetic categories awaits further research. In conclusion, the process path from EMFs to a biological effect in this work is likely to be via an influence of the magnetic vector potential on electron transport chain activity. The intention-augmented EMF effect is likely simply to insert contributions (Δμ_j^0 and ΔA) into Equation 1.
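For reference, the standard electrodynamic relations invoked above are recalled here in their textbook form (Jackson, 1962; Kraus and Carver, 1973), written with m for the magnetic permeability as in the variable list for Equation 1; this is our restatement of the cited relations, not the authors' own derivation:

$$ \mathbf{E} \;=\; -\nabla V \;-\; \frac{\partial \mathbf{A}}{\partial t}, \qquad \mathbf{H} \;=\; \frac{1}{m}\,\nabla \times \mathbf{A}, $$

so that an incremental change ΔA, with the scalar potential held fixed, gives

$$ \Delta\mathbf{E} \;=\; -\frac{\partial\,(\Delta\mathbf{A})}{\partial t}, \qquad \Delta\mathbf{H} \;=\; \frac{1}{m}\,\nabla \times (\Delta\mathbf{A}). $$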


Acknowledgments
We wish to acknowledge Professor W. B. Watt, Department of Biological Sciences, Stanford University, for technical help and the loaning of facilities and equipment. We also acknowledge Durance Corporation and Ditron LLC for financial assistance and laboratory facilities.

References
Blank, M., Soo, L., & Papstein, V. (1995). Effects of low frequency magnetic fields on Na,K-ATPase activity. Bioelectrochemistry and Bioenergetics, 38, 267-273.
Brand, M. D., & Murphy, M. P. (1987). Control of electron flux through the respiratory chain in mitochondria and cells. Biological Reviews, 62, 141-193.
Cole, L. C. (1954). The population consequences of life-history phenomena. Quarterly Review of Biology, 29, 103-107.
Dibble, W. E., Jr., & Tiller, W. A. (1999). Electronic device mediated pH changes in water. Journal of Scientific Exploration, 13, 155-176.
Eichwald, C., & Walleczek, J. (1996). Activation-dependent and biphasic electromagnetic field effects: Model based on cooperative enzyme kinetics in cellular signaling. Bioelectromagnetics, 17, 427-435.
Feynman, R. P., & Hibbs, A. R. (1965). Quantum mechanics and path integrals. New York: McGraw-Hill.
Goodman, E. M., Greenbaum, B., & Marron, M. T. (1995). Effects of electromagnetic fields on molecules and cells. International Review of Cytology, 158, 279-338.
Goodman, R., Wei, L. X. J., & Henderson, A. (1989). Exposure of human cells to low-frequency electromagnetic fields results in quantitative changes in transcripts. Biochimica et Biophysica Acta, 1009, 216-220.
Goodman, R., Weisbrot, D., Uluc, A., & Henderson, A. (1992). Transcription in Drosophila melanogaster salivary gland cells is altered following exposure to low-frequency electromagnetic fields. Bioelectromagnetics, 13, 111-118.
Ho, M. W., Stone, T. A., Jerman, I., Bolton, J., Bolton, H., Goodwin, B. C., Saunders, P. T., & Robertson, F. (1992). Brief exposure to weak static magnetic fields during early embryogenesis cause cuticular pattern abnormalities in Drosophila larvae. Physics in Medicine and Biology, 37, 1171-1179.
Hoffmann, A. A., & Parsons, P. A. (1991). Evolutionary genetics and environmental stress. Oxford, England: Oxford University Press.
Jackson, J. D. (1962). Classical electrodynamics. New York: John Wiley and Sons.
Kohane, M. J. (1987). Fitness relationships under different temperature regimes in Drosophila melanogaster. Genetica, 72, 199-210.
Kohane, M. J. (1988). Stress, altered energy availability and larval fitness in Drosophila melanogaster. Heredity, 60, 273-281.
Kohane, M. J. (1994). Energy, development and fitness in Drosophila melanogaster. Proceedings of the Royal Society, Series B, 257, 185-191.
Kraus, J. D., & Carver, K. R. (1973). Electromagnetics. New York: McGraw-Hill.
Lee, T. D. (1988). Particle physics and introduction to field theory. London: Harwood Academic.
Lei, C., & Berg, H. (1998). Electromagnetic window effects on proliferation rate of Corynebacterium glutamicum. Bioelectrochemistry and Bioenergetics, 45, 261-265.
Mehedintu, M., & Berg, H. (1997). Proliferation response of yeast Saccharomyces cerevisiae on electromagnetic field parameters. Bioelectrochemistry and Bioenergetics, 45, 261-265.
Menendez, R. G. (1996). An electromagnetic coupling hypothesis to explain the proton translocation mechanism in mitochondria, bacteria and chloroplasts. Medical Hypotheses, 47, 179-182.
Morganato, V., Nicolini, P., Conti, R., Zecca, L., Veicsteinas, A., & Cerretelli, P. (1995). Biological effects of prolonged exposure to ELF electromagnetic fields in rats: II. 50 Hz magnetic fields. Bioelectromagnetics, 16, 343-355.
Nossol, B., Buse, G., & Silny, J. (1993). Influence of weak static and 50 Hz magnetic fields on the redox activity of cytochrome-C oxidase. Bioelectromagnetics, 14, 361-372.
Odum, E. P. (1985). Trends expected in stressed ecosystems. Bioscience, 35, 419-422.


Parsons, P. A. (1997). Stress-resistance genotypes, metabolic efficiency and interpreting evolutionary change: From living organisms to fossils. In Bijlsma, R., & Loeschcke, V. (Eds.), Environmental stress, adaptation and evolution (pp. 291-305). Basel, Switzerland: Birkhauser Verlag.
Smith, O. (1996). Cells, stress and EMFs. Nature Medicine, 2, 23-24.
Tiller, W. A. (1990). The science of crystallization: Microscopic interface processes. London: Cambridge University Press.
Tiller, W. A. (1993). What are subtle energies? Journal of Scientific Exploration, 7, 293.
Tiller, W. A. (1997). Science and human transformation: Subtle energies, intentionality and consciousness. Walnut Creek, California: Pavior Publishing.
Veicsteinas, A., Belleri, M., Cinquetti, A., Parolini, S., Barabato, G., & Molinari Tosatti, M. P. (1996). Development of chicken embryos exposed to an intermittent horizontal sinusoidal 50 Hz magnetic field. Bioelectromagnetics, 17, 411-424.
Watt, W. B. (1985). Bioenergetics and evolutionary genetics: Opportunities for new synthesis. American Naturalist, 125, 118-143.

Journal of Scientific Exploration, Vol. 14, No. 2, pp. 233-255, 2000

0892-3310/00 © 2000 Society for Scientific Exploration

A Dog That Seems to Know When His Owner Is Coming Home: Videotaped Experiments and Observations

20 Willow Road, London NW3 1TJ, UK; e-mail: ars@dircon.co.uk

173 Kay Brow, Ramsbottom, Bury, BL0 9AI, UK

Abstract: Many dog owners claim that their animals know when a member of the household is about to come home, showing their anticipation by waiting at a door or window. We have investigated such a dog, called Jaytee, in more than 100 videotaped experiments. His owner, Pam Smart (P.S.), traveled at least 7 km away from home while the place where the dog usually waited for her was filmed continuously. The time-coded videotapes were scored blind. In experiments in which P.S. returned at randomly selected times, Jaytee was at the window 4% of the time during the main period of her absence and 55% of the time when she was returning (p < .0001). Jaytee showed a similar pattern of behavior in experiments conducted independently by Wiseman, Smith, and Milton (1998). When P.S. returned at nonroutine times of her own choosing, Jaytee also spent very significantly more time at the window when she was on her way home. His anticipatory behavior usually began shortly before she set off. Jaytee also anticipated P.S.'s return when he was left at P.S.'s sister's house or alone in P.S.'s flat. In control experiments, when P.S. was not returning, Jaytee did not wait at the window more and more as time went on. Possible explanations for Jaytee's behavior are discussed. We conclude that the dog's anticipation may have depended on a telepathic influence from his owner.

Keywords: dog; anticipation; telepathy; human-animal bonds

Introduction
Many dog owners claim that their animal knows when a member of the household is about to come home. Typically, the dog is said to go and wait at a door, window, or gate while the person is on the way home (Sheldrake, 1994, 1999a). Random household surveys in Britain and the United States have shown that between 45% and 52% of dog owners say they have noticed this kind of behavior (Brown and Sheldrake, 1998; Sheldrake, Lawlor, and Turney, 1998; Sheldrake and Smart, 1997).


Dog owners often ascribe their animal's anticipations to telepathy or a sixth sense, but there could be more conventional explanations: First, the dog could be hearing or smelling its owner approaching. Second, the dog could be reacting to routine times of return. Third, it could be responding to subtle cues from people at home who know when the absent person is returning. Fourth, the animal may go to the place at which it waits for its owner when the person is not on the way home; the people at home may remember its apparent anticipation only when the person returns shortly afterward, forgetting the other occasions. Thus the phenomenon could simply be an artifact of selective memory. In order to test these possibilities, the dog should be capable of reacting at least 10 minutes in advance, the person to whom the dog responds should come home at nonroutine times, the people at home should not know when this person is coming, and the behavior of the dog should be recorded in such a way that selective memory can be ruled out (Sheldrake, 1994). This recording of the dog's behavior can be done most effectively by means of time-coded videotape. In this paper we describe a series of videotaped experiments and observations with a dog called Jaytee, belonging to Pamela Smart (P.S.).
Jaytee's Anticipatory Behavior

P.S. adopted Jaytee from Manchester Dogs' Home in 1989, when he was still a puppy, and soon formed a close bond with him. She lived in Ramsbottom, Greater Manchester, in a ground-floor flat, adjacent to the flat of her parents, William and Muriel Smart, who were retired. When she went out, she usually left Jaytee with her parents. In 1991, when P.S. was working as a secretary in Manchester, her parents noticed that Jaytee used to go to the French window in the living room almost every weekday at about 4:30 p.m., around the time she set off to come home. Her journey usually took 45-60 minutes, and Jaytee would wait at the window most of the time she was on her way. Since she worked routine office hours, the family assumed that Jaytee's behavior depended on some kind of time sense.

P.S. was laid off from her job in 1993 and was subsequently unemployed. She was often away from home for hours at a time and was no longer tied to any regular pattern of activity. Her parents usually did not know when she would be returning, but Jaytee still continued to anticipate her return. His reactions seemed to occur around the time she set off on her homeward journey.

In April 1994, P.S. read an article in the British Sunday Telegraph about the research Rupert Sheldrake (R.S.) was doing on this phenomenon (Matthews, 1994) and volunteered to take part. The first stage in this investigation was the keeping of a log by P.S. and her parents. Between May 1994 and February 1995, on 100 occasions she left Jaytee with her parents when she went out, and they made notes on Jaytee's reactions.


P.S. herself kept a record of where she had been, how far she had traveled (usually at least 6 km and sometimes 50 km), her mode of transport, and when she had set off to come home. On 85 of these 100 occasions, Jaytee reacted by going to wait at the French window in the living room before P.S. returned, usually 10 or more minutes in advance. When these data were analyzed statistically, a linear regression of Jaytee's waiting times against P.S.'s journey times showed that the times when Jaytee began waiting were very significantly (p < .0001) related to the times that P.S. set off (Sheldrake and Smart, 1998). It did not seem to matter how far away she was.

Jaytee's anticipatory reactions usually began when P.S. was more than 6 km away. He could not have heard her car at such distances, especially against the background of the heavy traffic in Greater Manchester and on the M66 motorway, which runs close to Ramsbottom. Moreover, Mr. and Mrs. Smart had already noticed that Jaytee still anticipated P.S.'s return when she arrived in unfamiliar vehicles. Nevertheless, to check that Jaytee was not reacting to the sound of P.S.'s car or other familiar vehicles, we investigated whether he still anticipated her arrival when she traveled by unusual means: by bicycle, by train, and by taxi. He did (Sheldrake and Smart, 1998).

P.S. did not usually tell her parents in advance when she would be coming home, nor did she telephone to inform them. Indeed, she often did not know in advance when she would be returning after shopping, visiting friends and relations, attending meetings, or an evening out. But it is possible that her parents might in some cases have guessed when she might be coming, and then, consciously or unconsciously, communicated their expectation to Jaytee. Some of his reactions might therefore be due to her parents' anticipation, rather than depending on some mysterious influence from P.S. herself. To test this possibility, we carried out experiments in which P.S. set off at times selected at random after she had left home. These times were unknown to anyone else. In these experiments, Jaytee started to wait when she set off, even though no one at home knew when she would be coming (Sheldrake and Smart, 1998). Therefore his reactions could not be explained in terms of her parents' expectations.

Our first series of investigations involved the recording of Jaytee's reactions in a notebook and, hence, necessitated a subjective assessment of his behavior. In this paper we describe a preplanned series of 12 experiments with randomly chosen return times in which Jaytee's behavior was recorded throughout the entire period of P.S.'s absence on time-coded videotape. We also discuss four videotaped experiments with randomly chosen return times carried out with Jaytee, at our invitation, by Wiseman, Smith, and Milton (Sheldrake, 1999b; Wiseman, Smith, and Milton, 1998).

In addition, we describe 95 videotaped observations of Jaytee's behavior in three different environments. We made these observations to find out more about the natural history of the dog's anticipatory behavior. On these occasions, P.S. did not return at randomly selected times, but rather at times of her own choosing.


She went out shopping, visiting friends or members of her family, attending meetings, or visiting pubs, and returned when she felt like it. Her journeys varied in distance between 7 and 22 km away from home. They took place at various times of the day or evening and followed no routine pattern. When she left Jaytee with members of her family, they were not informed when she would be returning, and she usually did not know in advance herself. On 50 occasions, Jaytee was left on his own. We also carried out a series of 10 control observations in which Jaytee was filmed continuously on evenings when P.S. was not returning home, or was returning unusually late.

Methods
In these experiments, when P.S. went out she left Jaytee either with her parents, William and Muriel Smart; or alone in her own flat in Ramsbottom, Greater Manchester, next door to her parents' flat; or with her sister, Cathie MacKenzie, in the nearby town of Edenfield. Having left Jaytee, P.S. traveled a minimum distance of 7 km. She recorded in a notebook the details of where she had been, when she set off to come home, how long her journey took, and her mode of transport. In some cases she traveled in taxis or in cars belonging to her sisters or friends, but in most cases she traveled in her own car, since we had already established that Jaytee's anticipatory behavior still occurred when she traveled in unfamiliar vehicles, and hence could not be explained in terms of the dog hearing her car (Sheldrake and Smart, 1998).

While P.S. was out, Jaytee's visits to the window and his absences from it were monitored continuously on videotape. The videotaping procedure was kept as simple as possible, so that the filming of Jaytee could be done routinely and automatically. The video camera was set up on a tripod and left running continuously in the long-play mode with a long-play film, with the time code recorded on it. In this way up to 4 hours of continuous observation was possible without anyone needing to attend to the camera. P.S. switched the camera on just before she left and switched it off when she returned. Because of the need to keep Jaytee's visits to the window under continuous surveillance, all experiments involved absences of less than 4 hours.

The camera pointed at the area where Jaytee usually waited. In both P.S.'s parents' flat and in P.S.'s own flat (a ground-floor flat adjacent to her parents') this was by the French window in the living room, through which he could see the road outside where P.S. drew up and parked her car. In P.S.'s sister's house, Jaytee jumped up onto the back of a sofa from which he could see out of the window.

Experiments With Randomly Selected Return Times

In a preplanned series of 12 experiments with randomly selected return times, Jaytee was left at P.S.'s parents' flat and neither P.S. nor her parents knew in advance when she would be returning.


In all these experiments, P.S. traveled in her own car. P.S. was beeped on a telephone pager when it was time to set off home. On most occasions, the random selection of the times and the beeping of P.S. were done by R.S., who was in London, over 300 km away. On two occasions (on November 19, 1996 and July 1, 1997) the selection of random times and the beeping was done by another person in London who was unknown to P.S. and Jaytee.

These beep times were within a prearranged period, between 45 and 90 minutes long. This period commenced 80 to 170 minutes after P.S. had gone out. The beep window was then divided into 20 equal intervals, and one of these was selected at random by throwing a die three times to determine the page, row, and column in standard random number tables (Snedecor and Cochran, 1967). Reading downward from this point and looking at the first two digits of each random number, the first pair of digits between 01 and 20 determined the time at which the beep was to be given. Three of the 12 experiments were carried out in the afternoon, with beeps at 2:22, 3:04, and 3:36 p.m.; the remaining experiments were carried out in the evening, with beeps at a range of times between 8:09 and 9:39 p.m.

Observations in Different Environments

We carried out a preplanned series of 30 observations in P.S.'s parents' flat between May 1995 and July 1996. Seven of P.S.'s absences were in the daytime, at various times in the morning and afternoon, with P.S.'s times of return ranging from 11:13 a.m. to 3:36 p.m. Twenty-three were in the evening, with P.S. returning at a range of times between 7:30 and 10:45 p.m. The length of her absences ranged from 85 to 220 minutes.

In P.S.'s parents' flat we also carried out a preplanned series of 10 control experiments on evenings when P.S. was not returning or was coming home unusually late. Her parents were not informed that she would not be returning during the 4-hour period that the videotape was running. This series of observations was made between July and November 1997, during the period when we were doing experiments with randomly selected return times.

We also carried out a preplanned series of 50 observations in P.S.'s own flat, where Jaytee was left on his own, between May 1995 and September 1997. On 15 of these occasions, P.S. went out and returned in the morning, with times of return ranging from 9:59 to 11:57 a.m.; on 34 occasions she returned in the afternoon, at a range of times between 12:20 and 4:50 p.m.; and on one occasion she returned in the evening, at 9:27 p.m. The length of her absences ranged from 81 to 223 minutes.

The five observations at P.S.'s sister's house were conducted between October 1995 and June 1996, two in the morning and three in the evening, with absences ranging from 93 to 199 minutes.


Analysis of Videotapes and Tabulation of Data

The videotapes were analyzed blind by Jane Turney and/or Dr. Amanda Jacks, who did not know when P.S. set off to come home or other details of the experiments. Starting from the beginning of the tape, they recorded the exact times (to the nearest second) when Jaytee was in the target area near the window and made notes on his activities there: for example, that he was barking at a passing cat, sleeping in the sun, or sitting looking out of the window for no apparent reason. In cases where the same tape was scored blind by both people, the agreement between their records was excellent, showing occasional differences of only a second or so. (Although the scoring was carried out blind, when the end of the tape was reached and P.S. was seen entering the room, the judges then knew at what time she had arrived, and hence were no longer blind. But by this time the data had all been recorded and were not subsequently altered.) Some of the videotapes were also scored independently by P.S. and R.S. to see how well their records corresponded to each other and to the blind scores by Jane Turney or Amanda Jacks. Again the agreement was excellent, with occasional differences of only a second or two.

For the tabulation of the data, two methods were used. First, all the visits of Jaytee to the window were included, even if he was there for reasons that seemed to be unconnected with his anticipatory behavior, for example if he was simply sleeping in the sun, barking at passing cats, or watching people unloading cars. In this way any selective use of data was avoided, although the data were noisy because they included irrelevant visits to the window that had nothing to do with P.S.'s returns. Second, those visits to the window that seemed to have nothing to do with Jaytee's anticipatory behavior were excluded. This set of data was cleaner but more dependent on subjective assessments. However, since these assessments were done blind, they should not have involved any systematic bias.
Statistical Analysis

We used two main methods of analyzing the data, both of which were preplanned. The first provided a simple way of averaging and comparing different experiments. For each experiment, the percentage of the time that Jaytee spent by the window was calculated for three periods:

1. The first 10 minutes after P.S. got into her car and started traveling homeward (the return period). In the case of experiments with randomly selected return times, this return period was deemed to begin at the time P.S. received the beep signaling that she should set off. All homeward journeys lasted at least 13 minutes. Thus Jaytee's reactions in the last 3 or more minutes of P.S.'s journey were omitted from the analysis, in case he could have been responding to the sounds of her car approaching. In fact most journey times were more than 15 minutes long, so more than five minutes of Jaytee's behavior were omitted. In cases where the journey time lasted 23 minutes or more, the percentage of time for the combined first and second 10-minute periods of the return journey was also calculated, and a separate statistical analysis was carried out for comparison with the usual method involving only 10-minute return periods.


2. The 10-minute period prior to the return period (the prereturn period).

3. The time when P.S. was absent prior to the prereturn period (the main period). Because the experiments varied in length, the length of the main period ranged between 50 and 200 minutes.

The percentage of the time that Jaytee spent by the window in these three periods was analyzed statistically by a repeated-measures analysis of variance (ANOVA), and comparisons of pairs of periods were made using the paired-sample t test.

The second method of analyzing the data also involved 10-minute return periods, but the main period was also divided up into 10-minute intervals, defined in relation to the time at which P.S. was beeped to come home. The total number of seconds that Jaytee spent by the window in each of these 10-minute periods was then plotted on graphs. In cases where P.S.'s return journey lasted 23 minutes or more, data for two 10-minute return periods are shown on the graphs, representing the first 20 minutes of her homeward journey.

A statistical analysis of the time-course data (including all visits to the window) was carried out for us by Dr. Dean Radin using a randomized permutation analysis (RPA) (Good, 1994; Hjorth, 1994). For each data set, he calculated the correlation between time at the window and the 10-minute segment number of the original data for all experiments (as plotted in the graphs in Figure 4). These correlations showed strong positive trends. The RPA calculations made the assumption that, under the null hypothesis, Jaytee should have spent about the same amount of time at the window in each of the 10-minute periods. The z scores were formed as z = (original correlation - average permuted correlation)/(standard deviation of permuted correlations), based on 500 random permutations. The RPA tests converged very rapidly; typically only about 100 random permutations were needed, so the estimated z scores with 500 permutations were quite accurate.
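A minimal sketch of this randomized permutation analysis is given below, using NumPy in place of the original software; the function and the example time course are our own illustration, not Dr. Radin's code.

```python
import numpy as np

def rpa_z_score(seconds_at_window, n_perm=500, seed=0):
    """Randomized permutation analysis as described in the text: correlate
    time at the window with the 10-minute segment number, then compare the
    observed correlation with correlations from randomly permuted segments."""
    rng = np.random.default_rng(seed)
    y = np.asarray(seconds_at_window, dtype=float)
    x = np.arange(1, len(y) + 1)            # 10-minute segment numbers
    r_obs = np.corrcoef(x, y)[0, 1]         # original correlation
    r_perm = np.array([np.corrcoef(x, rng.permutation(y))[0, 1]
                       for _ in range(n_perm)])
    return (r_obs - r_perm.mean()) / r_perm.std(ddof=1)

# Hypothetical experiment: little time at the window early on, rising
# sharply in the prereturn and return periods.
example = [0, 12, 0, 25, 5, 18, 30, 90, 250, 400]
print(round(rpa_z_score(example), 2))
```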

Results
Experiments With Randomly Selected Return Times

The overall results summarized in Figure 1 show that Jaytee was at the window far more when P.S. was on her way home than during the main period of her absence. When all Jaytee's visits to the window were included in the analysis (Figure 1A), he was at the window for an average of 55% of the time during the first 10 minutes of P.S.'s return journey, as opposed to 4% of the time during the main period of P.S.'s absence. During the 10-minute prereturn period he was at the window 23% of the time. These differences were highly significant statistically (repeated-measures ANOVA, F value [df 2, 22] = 20.46;



Fig. 1. The average percentage of time spent at the window by Jaytee during the main period of P.S.'s absence (main period), during the 10 minutes prior to her setting off to come home (prereturn), and during the first 10 minutes of her homeward journey (return). Standard errors are indicated by bars. (A) Data for all visits to the window. (B) Data excluding irrelevant visits.

p < .0001; paired-sample t test comparing main period with return period p = .0001). When Jaytee's irrelevant visits to the window were excluded from the analysis, the general pattern was very similar (Figure 1B), but the percentage of time at the window was of course somewhat lower. In the main period Jaytee spent 0.5% of the time by the window, in the prereturn period 18%, and in the return period 54%. The significance of these differences was higher than when all Jaytee's visits were included (repeated-measures ANOVA, F value [df 2, 22] = 24.36; p = 3 × 10⁻⁶). In six out of the 12 experiments, P.S.'s return journeys took more than 23 minutes and hence included two 10-minute periods rather than just one. In the analysis shown in Figures 1A and B, only the first 10-minute return period was included. When both 10-minute return periods from these experiments were included in the analysis, the average percentage of time at the window during the return period increased from 55% to 61% when all visits to the window


were included, and from 54% to 59% when irrelevant visits were excluded. The statistical significance of the differences was even higher than before [repeated-measures ANOVA, F values (df 2, 22) = 25.43 and 29.03, respectively]. The increased percentage of time that Jaytee spent at the window during the 10-minute prereturn period was statistically significant (paired-sample t test comparing main period with prereturn period for the data including all visits to the window, p = .04). The difference between the prereturn and return periods was very significant (p = .0009). However, Jaytee did not visit the window in the prereturn period in all experiments, but only in seven out of 12.

The detailed time courses for all 12 beep experiments are shown in Figure 2. The graphs show the duration of all Jaytee's visits to the window in each 10-minute period, both with and without the exclusion of irrelevant visits. In one of these experiments, Jaytee did not go to the window at all, but in all the others he was at the window for the highest proportion of the time when P.S. was on her way home. In six of these experiments, P.S. was beeped to come home in the first half of the beep window (early beep) and in the other six she was beeped in the second half (late beep). Inspection of the graphs shows that Jaytee responded in the prereturn period in only two of the early-beep experiments, whereas he did so in five of the late-beep experiments (three when irrelevant visits to the window were excluded).

Thirty Ordinary Homecomings

In order to observe how Jaytee behaved under more or less natural conditions, we made a preplanned series of 30 videotapes of Jaytee at P.S.'s parents' flat while P.S. went out and about. She returned at times of her own choosing, ranging from 11:13 a.m. to 10:45 p.m., with absences ranging from 85 to 220 minutes. P.S. did not tell her parents when she would be returning, and usually she did not know in advance herself. The overall results are shown in Figure 3A. The general pattern is clear. On average, Jaytee was at the window for the highest proportion of the time (65%) in the return period, when P.S. was on her way home. He was at the window 31% of the time in the 10-minute prereturn period, and only 11% of the time during the main period of her absence. These differences were highly significant statistically (p < .0001). The paired-sample t test (two-tailed) showed that the difference between the main period and return period was significant at p < .0001, between the prereturn and return period at p = .008, and between the main period and prereturn period at p = .0009.

A number of interesting details are hidden by this averaging process. First of all, although on 24 occasions Jaytee spent more time at the window when P.S. was on her way home, on six occasions he did not. On five (all in the evening) he did not go to the window at all during the first 10 minutes of her homeward journey. On the sixth (in the morning) he did so for only 10 seconds. On some of these occasions he was unusually inactive and may have been exhausted


[Figure 2: twelve time-course panels. Left-hand panels (EARLY BEEPS) include experiments dated 19/11/96, 25/3/97, 11/2/97, and 8/10/97; right-hand panels (LATE BEEPS) include experiments dated 7/5/97, 29/8/97, 1/7/97, and 21/9/97. Horizontal axes show the series of 10-minute periods; vertical axes show seconds spent at the window.]

Fig. 2. The time courses from all 12 experiments in which P.S. came home at randomly selected times in response to being beeped. The ordinate shows the total number of seconds that Jaytee spent at the window in each 10-minute period, the abscissa the series of 10-minute periods defined in relation to the time at which P.S. was beeped to come home. Data for all Jaytee's visits to the window, including irrelevant visits, are indicated by circles, and data from which irrelevant visits have been excluded are indicated by squares. The beep window is indicated by a line with two arrowheads, and this represents the period during which P.S. could have received the signal to come home. Experiments with beeps in the first half of the beep window (early beeps) are on the left, and those with beeps in the second half of the beep window (late beeps) are on the right. The points for the 10-minute periods immediately following the beep during which P.S. was returning are indicated by filled circles or squares.


Fig. 3. Percentage of time spent at the window by Jaytee during the main period of P.S.'s absence, during the 10 minutes prior to her setting off to come home (prereturn), and during the first 10 minutes of her homeward journey (return). Standard errors are indicated by bars. (A) Averages from 30 ordinary homecomings. (B) Comparison of experiments in the daytime (7) and in the evening (23). (C) Comparison of normal experiments (23) and noisy experiments (7) in which Jaytee was at the window for more than 15% of the time during the main period of P.S.'s absence. (D) Comparison of the first, second, and third groups of 10 experiments. (E) Comparison of long (13), medium (9), and short (8) experiments.

after long walks or sick. But irrespective of the reasons for his unresponsiveness, the fact is that he did not show his usual signs of anticipation on six out of 30 occasions. Second, in the daytime Jaytee was generally more active and alert than in the evening, and on average he was at the window more (Figure 3B). There was more activity outside for him to watch, and on sunny days he tended to snooze by the window in the sunlight. Third, the effect of data noise on the pattern of Jaytee's response can be examined directly by comparing noisy experiments with normal experiments (Figure 3C). Noisy experiments were defined as ones in which Jaytee spent more than 15% of the time at the window in the main period. By this criterion


seven out of the 30 experiments were noisy. Most noisy experiments occurred in the daytime when there was much activity outdoors that Jaytee went to the window to watch. Also, on sunny days he tended to lie down by the window in the sun and go to sleep. Nevertheless, in both normal and noisy experiments Jaytee was at the window least in the main period, more in the prereturn period, and most when P.S. was actually returning. These differences were highly significant for both normal and noisy experiments analyzed separately (p = .0004 and p = .0001, respectively).

Fourth, the question of whether Jaytee's pattern of response changed with time can be examined by comparing the average of the first 10 experiments (from May to September 1995) with the second (from September 1995 to January 1996) and third batches of 10 experiments (from January to July 1996). The pattern was similar in all three groups (Figure 3D).

Finally, the length of time that P.S. was away from home varied considerably. Did Jaytee behave in a similar way when she returned after short absences and after longer ones? To explore this question, we have divided the data up into three groups: long, medium, and short absences, defined respectively as 180 minutes or more, 110-170 minutes, and 80-100 minutes. The general pattern in all three groups was similar, but in the short absences the experiments were noisier, and Jaytee showed more anticipation in the prereturn period (Figure 3E).

Since Jaytee was at the window most in the final period, when P.S. was on the way home, could it be that Jaytee simply went to the window more and more when P.S. was out? If he did so, he would automatically be at the window most in the final period whatever the length of the experiment, and more in the penultimate period than in the previous periods. This "going to the window more and more" hypothesis can be tested by looking in more detail at the average time courses of long, medium, and short experiments in Figure 4. This figure shows data from all the experiments, as well as from the normal experiments after the exclusion of the minority of noisy experiments, which tended to obscure the usual pattern. The data in Figure 4 show that Jaytee's waiting at the window occurred soonest in the short experiments, later in the medium experiments, and latest in the long experiments. In other words, Jaytee's behavior was more closely related to P.S.'s impending return than to the amount of time that had elapsed since she went out. If Jaytee had simply gone to the window more and more as time went on, there should have been little or no difference between the time he spent there in the long, medium, and short experiments in any given period.

This can be tested statistically. (In the following analyses, all the data were included, with no exclusions of noisy experiments.) When P.S. was returning in the short experiments in period 8, Jaytee was at the window a significantly higher proportion of the time than in period 8 of the medium- and long-duration experiments (by a factorial analysis of variance, p = .004). Likewise, Jaytee spent a significantly higher proportion of the time at the window when P.S. was on the way home in the medium experiments in

Fig. 4. The time courses of Jaytee's visits to the window during P.S.'s long, medium, and short absences. The horizontal axis shows the series of 10-minute periods (p1, p2, etc.). The vertical axis shows the average number of seconds that Jaytee spent at the window in each 10-minute period. Data for all 30 experiments are shown, as well as data for normal experiments after the exclusion of the seven noisy experiments. The last period shown on the graph represents the first 10 minutes of P.S.'s return journey (ret); the point for this is indicated by a filled circle or square. The bars show standard errors.


period 11 than in period 11 of the long absences, when she would not be returning for more than another hour (p = .003).

In a randomized permutation analysis (RPA), including all experiments, the observed time courses were tested against the null hypothesis that Jaytee should have spent about the same amount of time at the window in each of the 10-minute periods. The probabilities that the observed pattern of data differed from the null hypothesis by chance were:

Long experiments: p < .0008
Medium experiments: p < .01
Short experiments: p < .008
Combined: p < .000003

Jaytee's Behavior When P.S. Was Not Returning

In order to study Jaytee's behavior when P.S. was not coming home, we filmed him at P.S.'s parents' flat on 10 evenings when P.S. was either spending the night away from home or coming home at least one hour after the filming period had terminated. Figure 5 shows the average time he spent at the window in the series of 10-minute periods between 6:30 and 10:00 p.m. In these control observations, Jaytee made a number of visits to the window for a variety of reasons, as usual, but he did not go to the window more and more as the evening went on.

Observations on Jaytee at P.S.'s Sister's House

P.S. sometimes left Jaytee at her sister's house, and here too he usually went to the window when she was coming home. P.S. did not tell her sister when she would be returning, but her sister usually knew when she was on her way because of Jaytee's behavior.


Fig. 5. Time spent by Jaytee by the window on evenings when P.S. was not coming home. The first of the 30 10-minute periods was from 6:30 to 6:40 p.m., the last from 9:50 to 10:00 p.m. The figures shown are averages from 10 evenings. The bars show standard errors.


In this house, in order to look out of the window Jaytee had to balance himself on the back of a sofa. Unlike the situation in P.S.'s parents' flat and in her own flat, Jaytee could not wait by the window comfortably and rarely stayed for long. Nevertheless, in a series of five videotaped experiments, his general pattern of response (Figure 6A) was similar to that in P.S.'s parents' flat (Figure 3), although the percentage of time spent at the window was lower, the variability was greater, and the differences were not statistically significant.

Observations on Jaytee Left on His Own
We carried out a preplanned series of 50 videotaped experiments in which Jaytee was left by himself in P.S.'s own flat while she went out. The overall pattern (Figure 6B) was similar to that in P.S.'s parents' flat (Figure 3) and her sister's house (Figure 6A). The differences were significant statistically (repeated-measures ANOVA, p < .01; paired-sample t test comparing the main period with return period, p < .005). But the average proportion of the time at the window was lower than in P.S.'s parents' flat. A closer analysis of the data revealed that Jaytee showed two different patterns of response. In most of the tests (35 out of 50) Jaytee did not go to the window when P.S. was on her way home. In fact he made few or no visits to the


Fig. 6. Percentage of time spent by the window by Jaytee during the main period, prereturn, and return periods. The bars show standard errors. (A) In P.S.'s sister's house (average of five experiments). (B) Alone in P.S.'s flat (average of 50 experiments).


window during the entire time she was absent. One reason may be that the view from the window was largely obscured by a bush, so there was not much scope for watching activities outside, although it was still possible to see the road on which P.S. approached in her car. By contrast, in 15 out of 50 experiments (30%), Jaytee behaved much as he did at P.S.'s parents' flat and showed his usual anticipatory waiting while P.S. was preparing to come home and while she was on her way.

An Independent Replication
During the course of our research with Jaytee, at our invitation Wiseman, Smith, and Milton carried out four experiments with Jaytee, three at P.S.'s parents' flat and one at her sister's house. During these experiments, Wiseman filmed Jaytee while Smith accompanied P.S. and returned with her at randomly selected times in cars unfamiliar to Jaytee (Wiseman, Smith, and Milton, 1998). In all three experiments at P.S.'s parents' flat, the pattern of response was very similar to the pattern we observed, with Jaytee at the window most when P.S. was returning. Using the same definition of the main, prereturn, and return periods used in Figure 1, the average proportion of the time that Jaytee spent at the window was 4% in the main period, 48% in the prereturn period, and 78% in the return period. The differences between the periods were significant (by repeated-measures ANOVA, p = .02; comparison of the main period with return period by the paired-sample t test, p = .03). When the time courses were plotted following the same method used in our Figure 2, they showed a very similar pattern (Figure 7). Wiseman, Smith, and Milton recorded Jaytee's behavior only during the experimental period during which P.S. could have been asked to go home, and have no data on his behavior during the preceding period, up to 90 minutes long, from the time that P.S. left home until the beginning of the experimental period. This is the main difference between the graphs from Wiseman, Smith, and Milton's experiments and our own. In Wiseman, Smith, and Milton's experiment at P.S.'s sister's house, the first time Jaytee went to look out of the window for no apparent reason coincided with P.S. setting off to come home. In spite of these striking effects, Wiseman, Smith, and Milton (1998, 2000) portrayed their results as a refutation of Jaytee's ability to anticipate P.S.'s returns. They arrived at this conclusion by the use of narrow and arbitrary criteria for Jaytee's signal, based on his going to the window for no apparent external reason for a brief period (less than a minute in one experiment, and for at least two minutes in the others). They disregarded the rest of their own data and did not plot graphs. Unfortunately Wiseman, Smith, and Milton based their criteria not on the waiting behavior of Jaytee that we had already observed and documented on more than 100 occasions before they carried out their tests (Sheldrake and


[Figure 7: three time-course panels in the same format as Figure 2; one of the panels is dated 13/6/95. Horizontal axes show the series of 10-minute periods; vertical axes show seconds spent at the window.]
Fig. 7. The time courses from the three experiments conducted by Wiseman and Smith with Jaytee at P.S.'s parents' flat. The data are taken from Wiseman, Smith, and Milton (1998); the graphs are plotted in the same way as those in Figure 2 and show the total amount of time that the dog spent at the window in successive 10-minute periods, defined in relation to the randomly selected time at which P.S. was told to return home. The final point on each graph, indicated by a filled circle, represents the first 10 minutes of P.S.'s return journey.


Smart, 1998), but rather on a claim made by the media about Jaytee's behavior. They showed, unsurprisingly, that statements on popular television shows are sometimes oversimplified. Ironically, the way their own skeptical conclusions were publicized in the media provided several striking examples of misleading claims (Sheldrake, 1999b, 2000).

Discussion
Normal Explanations of Jaytee's Behavior

The data presented in this paper imply that Jaytee's waiting by the window when his owner is coming home cannot be explained in terms of any of the following hypotheses:

1. Routine. Jaytee's anticipatory behavior when P.S. was coming home occurred at various times in the morning, afternoon, and evening and did not depend on a routine time of return. This was apparent in the series of 30 ordinary homecomings (Figures 3 and 4) as well as in our experiments with randomly selected return times (Figures 1 and 2; see also Sheldrake and Smart, 1998). The data from the experiments of Wiseman, Smith, and Milton (1998) with randomly selected return times replicate and confirm our own findings (Figure 7). Moreover, in control observations when P.S. was not coming home, Jaytee did not start waiting at a particular time (Figure 5).

2. Hearing a familiar vehicle. In many experiments, Jaytee's anticipatory behavior was already apparent in the prereturn periods (Figures 2, 3, 4, and 6) before P.S. had actually set off in a vehicle, and hence before he could have heard any characteristic sounds. When she was actually traveling home, Jaytee was waiting at the window when the vehicle was at least 7 km away, and in some cases more than 25 km. Although dogs can hear higher pitches than human beings, their general sensitivity to noise levels is similar to that of people (Munro, Paul, and Cox, 1997; Shiu, Munro, and Cox, 1997). It is not possible that Jaytee could have heard the sounds of familiar cars at such distances against all the background noises of Greater Manchester, and in a manner independent of the direction of the wind. Moreover, Jaytee also waited for P.S. in a similar way when she was traveling in taxis or other unfamiliar vehicles (Sheldrake, 1999a; Sheldrake and Smart, 1998), an effect replicated by Wiseman, Smith, and Milton (Figure 7).

3. Picking up clues from people at home. P.S. did not tell her parents or her sister when she would be coming home, and often did not know in advance herself. But perhaps in some of P.S.'s ordinary homecomings, her parents or her sister might have guessed approximately when she would return and consciously or unconsciously communicated their expectation to Jaytee. But this possibility cannot account for Jaytee's behavior in the trials with randomly selected return times (Figures 1, 2, and 7) nor when he was alone (Figure 6B).

4. Selective memory or selective reporting of data. The video recordings permitted all Jaytee's visits to the window to be recorded, and the data presented in this paper include all the visits he made, even when these were obviously


related to events going on outside, such as cats passing the window, or when he was sleeping by the window in the sunlight. The videotapes were analyzed blind by people who did not know the details of the experiments. Hence there was no scope for selective memory or selective reporting of data. The data from the experiments conducted with Jaytee by Wiseman, Smith, and Milton (1998) also show the same pattern of behavior by Jaytee as our own experiments (Figure 7).

5. Jaytee going to the window more and more the longer his owner was absent. The data in Figure 4 and the statistical analysis described above show that Jaytee's visits to the window were not explicable in terms of his going there more and more the longer P.S. had been absent. Nor did he go to the window more and more as time went on in the control experiments (Figure 5). His waiting by the window was related to P.S.'s returns, rather than to the length of time she had been away from home.

The Possibility of Telepathy

Jaytee seemed to be detecting P.S.'s intention to come home in a way that could not be explained in terms of any of the normal hypotheses considered above. Perhaps he was responding to her intentions or thoughts telepathically. The hypothesis of telepathy would not only agree with Jaytee's waiting behavior when P.S. was actually on her way home, but it could help to explain why Jaytee began to spend more time at the window before she set off. In real-life situations when P.S. returned home at nonroutine times of her own choosing, Jaytee's anticipations regularly began in the prereturn period, before she started driving home (Figures 3, 4, and 6; see also Sheldrake and Smart, 1998). This pattern of behavior is in good agreement with the telepathic hypothesis because prior to getting into a car and driving, or being driven, P.S. was forming the intention to go home and preparing to do so. If Jaytee was responding telepathically to her intention to return, he would be expected to show this anticipation before she actually got into the car.

But Jaytee also showed signs of anticipation in the experiments when P.S. returned at randomly selected times, before she received the signal to go home (Figures 1 and 2). How could he have anticipated when P.S. was going to be beeped? It is perhaps conceivable that Jaytee was telepathically picking up R.S.'s intention to beep P.S. from over 300 km away, but we do not take this possibility very seriously. On one occasion (on July 1, 1997) the beeping was done not by R.S. but by someone neither P.S. nor Jaytee had met, and Jaytee still responded in advance (Figure 2). It is also perhaps conceivable that Jaytee had a precognition of when P.S. would be beeped. But this would involve introducing another paranormal hypothesis in addition to the telepathic hypothesis. It is more economical to consider a possible explanation in terms of telepathy from P.S.

In all the experiments with randomly selected return times, P.S. knew that she would be beeped to come home within a particular time period. Ideally, her


mind would have been entirely engaged with other concerns until the beep came. But unavoidably she was sometimes thinking about the signal to go home before it came, especially if it came toward the end of the period of time in which she knew she would be beeped. Jaytee might have picked up these anticipatory thoughts, just as he seemed to respond to a fully formed intention to go home. If Jaytee was indeed responding to P.S.'s expectation that she would soon be receiving the signal to return, then this anticipatory effect would be expected to show up more when the beep came toward the end of the period in which she knew she would be beeped than at the beginning. In four out of six of the trials in which P.S. was beeped in the first half of the beep period (early beep), Jaytee did not show any anticipation prior to P.S. setting off (Figure 2). By contrast, there were signs of anticipation in all but one of the late-beep trials. The exception was a trial in which Jaytee did not go to the window at all throughout the entire experiment. Thus Jaytee's anticipation of the beep signaling P.S.'s return may have been related to her own anticipation of the beep, which tended to be greater the later the signal came. A similar anticipation of P.S.'s setting off occurred in the experiments conducted by Richard Wiseman and Matthew Smith (Figure 7). Here again, Jaytee's early response may well have taken place in response to P.S.'s anticipation. While she was with Smith waiting for him to tell her when to return, she found it impossible not to think about going home. Smith himself knew when they were going to set off because the randomly determined time had been set in advance (Wiseman, Smith, and Milton, 1998). He could well have communicated his anticipation to P.S. unconsciously, for example through an increasing tenseness as the predetermined time approached. Moreover, in all three experiments, the randomly selected return time was in the second half of the experimental period, corresponding to the late beeps in our own experiments (Figure 2B). This increasing anticipation by P.S. that she would soon be going home as the experimental period progressed was an unavoidable feature of the experimental design adopted both by ourselves and by Wiseman, Smith, and Milton.
Why Did Jaytee Sometimes Not React to P.S.'s Returns?

In all our series of experiments with Jaytee, on some occasions he did not show his usual anticipatory behavior. In our preliminary series of 100 observations, he failed to do so on 15 occasions. On some of these occasions he was tired after long walks; on some he was sick; on others he was distracted by a bitch in heat in a neighboring apartment (Sheldrake and Smart, 1998). But in a few cases there was no obvious reason for his failure to react. In our series of 12 experiments with randomly selected return times, he did not go to the window at all in one experiment (Figure 2). In the series of 30 ordinary homecomings, he did not show his anticipatory behavior in six experiments. When Jaytee was left in P.S.'s flat on his own, his lack of anticipatory behav-


ior was usual rather than exceptional. On most occasions he did not go to wait for her at the window or indeed visit the window at all. Nevertheless on 15 out of 50 occasions he showed his usual pattern of anticipation, waiting at the window when P.S. was returning. Thus he seemed capable of anticipating P.S.'s returns when he was on his own, but did not usually do so. Why not? Our guess is that it was a matter of motivation. His waiting at the window while P.S. was on her homeward journey may have been more for the sake of communicating his anticipation to members of P.S.'s family, as if to tell them she was on her way. When there was no one to tell, he was less motivated to wait at the window. Nevertheless, he sometimes did it anyway. The difference in his behavior in P.S.'s own flat and in her parents' was a matter of degree. In both places, he sometimes waited by the window when P.S. was returning, and sometimes failed to wait there. In P.S.'s parents' flat the ratio of occasions on which he waited to those he did not was around 80:20, whereas when he was alone in P.S.'s own flat it was 30:70.

Evolutionary Implications

The hypothesis that some dogs, such as Jaytee, can anticipate their owners' arrivals telepathically obviously needs to be tested further. We have already obtained comparable results with several other dogs. Similar anticipatory behavior is said by many animal owners to occur with other domesticated species, especially cats, parrots, and horses (Brown and Sheldrake, 1998; Sheldrake, 1999a; Sheldrake, Lawlor, and Turney, 1998; Sheldrake and Smart, 1997), and there is a need for experimental research on anticipatory behavior by animals of these species. It would also be worth investigating whether animals in the wild seem to know when members of their group are coming home: for example, do wolf cubs waiting at their den show signs of anticipation before the return of adults with food?

Although parapsychologists and psychical researchers have conducted much research on person-to-person telepathy (for a review, see Radin, 1997), there has been very little previous research on person-to-animal or animal-to-animal telepathy (Sheldrake, 1999a). If it turns out that telepathic communication does indeed occur among nonhuman animals, then this would imply a biological and evolutionary origin for person-to-person telepathy and would enable this paranormal phenomenon to seem more normal, at least in the sense that it is biological and has an evolutionary history.

Acknowledgements
We are grateful to Muriel Smart, the late William Smart, and Cathie MacKenzie for their invaluable cooperation in this research; to Amanda Jacks and Jane Turney for their analysis of the videotapes; and to Dean Radin for carrying out the randomized permutation analysis. We thank the Lifebridge Foundation and the Institute of Noetic Sciences for financial support.


References
Brown, D. J., & Sheldrake, R. (1998). Perceptive pets: A survey in north-west California. Journal of the Society for Psychical Research, 62, 396-406.
Good, P. (1994). Permutation tests: A practical guide to resampling methods for testing hypotheses. New York: Springer Verlag.
Hjorth, J. S. U. (1994). Computer intensive statistical methods. New York: Chapman and Hall.
Matthews, R. (1994, April 24). Animal magic or mysterious sixth sense? Sunday Telegraph, p. 17.
Munro, K. J., Paul, B., & Cox, C. L. (1997). Normative auditory brainstem response data for bone conduction in the dog. Journal of Small Animal Practice, 38, 353-356.
Radin, D. (1997). The conscious universe: The scientific truth of psychic phenomena. San Francisco: Harper.
Sheldrake, R. (1994). Seven experiments that could change the world: A do-it-yourself guide to revolutionary science. London: Fourth Estate.
Sheldrake, R. (1999a). Dogs that know when their owners are coming home. New York: Crown.
Sheldrake, R. (1999b). Commentary on a paper by Wiseman, Smith and Milton on the "psychic pet" phenomenon. Journal of the Society for Psychical Research, 63, 306-311.
Sheldrake, R. (2000). The 'psychic pet' phenomenon. Journal of the Society for Psychical Research, 64, 126-128.
Sheldrake, R., Lawlor, C., & Turney, J. (1998). Perceptive pets: A survey in London. Biology Forum, 91, 57-74.
Sheldrake, R., & Smart, P. (1997). Psychic pets: A survey in north-west England. Journal of the Society for Psychical Research, 61, 353-364.
Sheldrake, R., & Smart, P. (1998). A dog that seems to know when its owner is returning: Preliminary investigations. Journal of the Society for Psychical Research, 62, 220-232.
Shiu, J. N., Munro, K. J., & Cox, C. L. (1997). Normative auditory brainstem response data for hearing threshold and neuro-otological diagnosis in the dog. Journal of Small Animal Practice, 38, 103-107.
Snedecor, G. W., & Cochran, W. G. (1967). Statistical methods. Ames, IA: Iowa State University Press.
Wiseman, R., Smith, M., & Milton, J. (1998). Can animals detect when their owners are returning home? An experimental test of the 'psychic pet' phenomenon. British Journal of Psychology, 89, 453-462.
Wiseman, R., Smith, M., & Milton, J. (2000). The 'psychic pet' phenomenon: A reply to Rupert Sheldrake. Journal of the Society for Psychical Research, 64, 46-49.

Journal of Scientific Exploration, Vol. 14, No. 2, pp. 257-274, 2000

0892-3310/00 © 2000 Society for Scientific Exploration

ESSAY

What Can Elementary Particles Tell Us About the World in Which We Live?

Ronald A. Bryan
Department of Physics, Texas A & M University, College Station, TX 77843-4242
e-mail: bryan@physics.tamu.edu

Abstract-What kind of space-time do we live in? Does it extend beyond the four dimensions of ordinary space and time? In physicists' efforts to explain the origin of elementary Dirac particles, namely the 12 kinds of quarks and leptons, they find that they are driven to as many as 7 or 8 additional spacelike dimensions. They generally assume that the extra dimensions are curled up into tiny circles or generalizations thereof, with diameters of the order of meter. These models tend to be quite complex. However, I will argue that it may be much easier to model these quarks and leptons if we assume that the extra dimensions are flat, that is, stretch out to infinity. What keeps quarks and leptons (and us) from drifting off into the higher dimensions may be a local "well" in space (a soliton) generated by the particles' field equations. Furthermore, only four extra dimensions may be needed. If there really are four extra dimensions besides the ordinary four, then why don't we see them? It may be that many people who have had an out-of-body or near-death experience (NDE) have seen the extra dimensions. For example, the "tunnel" in the NDE may lead to another local universe like our own, only situated in another "well" in the extra dimensions. In the model that I will describe, quarks and leptons, which are accelerated to sufficiently high energies, can escape our local space-time "well" and travel freely in eight dimensions, as our consciousnesses seem to be able to do. Could it be that large configurations of these particles might even constitute spaceships, the UFOs that seem to come out of nowhere? Higher dimensions may also provide avenues for information transfer ascribed to ESP.
Keywords: elementary particle physics-extra dimensions-psychic phenomena

1. Introduction: Elementary Dirac Particles and Extra Dimensions
Consider the ordinary hydrogen atom, pictured in Figure 1 as Niels Bohr saw it, with the electron orbiting the proton. If you blew up the atom to be as large as the tip of your little finger, then the tip of your little finger, scaled up like-

This paper is an extension of a talk given at the annual meeting of the Society for Scientific Exploration, held at the University of Virginia, Charlottesville, VA, May 27-30, 1998.




Fig. 1. Electron orbiting proton in the Bohr Model of the hydrogen atom.

wise, would be as large as Saturn. The electron was the first elementary particle to be discovered (by J. J. Thomson in 1897, in a gas discharge tube, similar in principle to a neon sign). Today we know that there are (at least) six kinds of electrons: the e, ν_e, μ, ν_μ, τ, and ν_τ; namely, the electron, the electron neutrino, the muon, the mu neutrino, the tau, and the tau neutrino. These particles are like brothers and sisters, in that they all obey the Dirac equation of quantum mechanics. (The Dirac equation is a kind of theoretical quantum device that generates particle waves from space-time.) This means that these particles can also appear as antiparticles (e.g., the anti-electron is the positron) and all have an intrinsic spinning motion with angular momentum ℏ/2 (where ℏ is Planck's constant divided by 2π), which can point either up or down. The e, μ, and τ are identical (with the same negative electric charge) except that they have different masses. The neutrinos are identical, too, except for their masses, which might be extremely small (we aren't sure) (Particle Data Group, 1998). (Neutrinos have no charge and at typical reactor energies can travel 25 light-years in lead before deflecting.) These six kinds of electrons are called leptons. Leptons are incredibly small. If you blew up any one of them to be as big as the tip of your finger, then the tip of your finger, increased proportionally, would be much bigger than our solar system and stretch out toward the nearest star, or perhaps even beyond (we don't know) (Particle Data Group, 1998). The other constituent of the hydrogen atom, the proton, is far smaller than it appears in Figure 1. Nevertheless, it contains three quarks, as shown in Figure 2.
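For orientation, the Dirac equation referred to here can be quoted in its standard free-particle textbook form (in natural units, ℏ = c = 1):

\[
\left( i\gamma^{\mu}\partial_{\mu} - m \right)\psi(x) = 0, \qquad \mu = 0, 1, 2, 3,
\]

where the γ^μ are the 4 × 4 Dirac matrices and ψ is a four-component spinor field. Its solutions automatically carry spin ℏ/2 and come in particle and antiparticle pairs, consistent with the description of "particle waves" given above.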


Fig. 2. The quarks in the proton.

These quarks occupy as little volume as the electron does, although they are much heavier. They were discovered at the Stanford Linear Accelerator Center around 1968 in electron-proton scattering experiments. Other kinds of quarks have been discovered since then, and now we know of six types. They have the following amusing names: up, down, charm, strange, top, and bottom. They too are like brothers and sisters in that they all obey the Dirac equation of quantum mechanics, exist in particle or antiparticle mode, and have two possible spin directions. They are like cousins to the six kinds of electrons. They differ from these leptons in that they have never been seen alone; they only occur in triplets, like the proton, or in quark-antiquark pairs, like the pi meson. They are well described by a theory known as Quantum Chromodynamics.

I have plotted the leptons and quarks versus their rest-mass energies in Figure 3. (For example, the electron's rest-mass energy, as indicated on the vertical axis, is Mc² = 0.5 MeV = 0.5 million electron volts, corresponding to a particle mass M = 9 × 10⁻³¹ kg; c = speed of light.) A pair of quarks and a pair of leptons comprise each generation, and there are three generations (Particle Data Group, 1998) (see Figure 3).

Because there are so many kinds of leptons and quarks, many physicists tried to model them as bound states of smaller, more elementary particles, variously called rishons, preons, subquarks, etc. The idea was that just as the hydrogen atom has (infinitely) many excited states, perhaps the quarks and leptons are just different configurations of two or three smaller particles. However, these subparticle models turned out to be about as complicated as the quarks and leptons that they were supposed to predict, and in fact, predicted far too many other kinds of particles that have never been seen.
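As a quick check of the parenthetical conversion above, using the more precise electron rest-mass energy of 0.511 MeV:

\[
M = \frac{Mc^{2}}{c^{2}}
  \approx \frac{0.511 \times 1.602 \times 10^{-13}\ \text{J}}
               {\left(3.00 \times 10^{8}\ \text{m/s}\right)^{2}}
  \approx 9.1 \times 10^{-31}\ \text{kg}.
\]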


Fig. 3. The three generations of quarks and leptons plotted versus rest-mass energy; weak-isospin pairs are joined. Quarks come in three "colors" (not shown), and leptons in one. The electroweak bosons, W⁺ and W⁻, and the (massless) photon are also plotted. The masses of the neutrinos are not known, but their experimental upper limits are indicated as horizontal bars. For comparison, the rest-mass energy of the proton is 938 MeV.

These days, most physicists trying to model quarks and leptons turn to higher dimensions. They basically generalize a gravitational theory proposed by Kaluza and Klein in the 1920s, where these authors added another spacelike dimension to the three of space and one of time in an attempt to unify electromagnetism with gravity (see, e.g., Lee, 1984). This fifth dimension did not stretch out to infinity like ordinary dimensions but was taken to be a circle. In fact, an incredibly small circle. The radius had to be of the order of cm for the theory to predict the correct charge for the electron (Miller, 1980). To help visualize these higher dimensions, let's consider some projections. For definiteness, first consider ordinary three-dimensional space. I'll portray it in Figure 4 as a large block, but I will think of space as stretching out to infinity in each direction. (Actually, this is a model of space, i.e., Cartesian space; one assumes that space can be divided into little cubes, each identified by three coordinates.)


Fig. 4. Three-dimensional Cartesian space.

Now let's eliminate one spacelike dimension, say the (vertical) y-dimension, leaving just the horizontal x-z plane, as in Figure 5.

Fig. 5. Two-dimensional Cartesian space.

Let's eliminate the x dimension too. This leaves just one dimension, z, still understood to stretch out to infinity in both directions (Figure 6).

Fig. 6. One-dimensional Cartesian space.

Now let's add an extra dimension, a synthetic dimension not present in ordinary space-time, call it x̃ (see Figure 7). (I will embellish all extra dimensions,

Fig. 7. Two-dimensional Cartesian space consisting of one ordinary dimension (z) and one extra dimension (x̃).


such as x̃, with a tilde.) Here, we have recovered a plane, but it is not the usual kind of plane because leptons and quarks, and we ourselves, cannot ordinarily make excursions in the x̃ direction. In Kaluza-Klein theory (Lee, 1984), the extra dimension is mapped onto a tiny circle, as I mentioned above. This is illustrated in Figure 8 for the case of the extra dimension and one ordinary dimension, say z. It is like rolling the x̃-z plane up like a sheet of paper to form a tube. The extra dimension is shown in Figure 8 projected on a circle on the left and is denoted x̃. This theory was only meant to reproduce gravity and electromagnetism. However, since the 1920s, 11 quarks and leptons have been discovered in addition to the electron.

Fig. 8. One ordinary dimension and the extra Kaluza-Klein dimension x̃, which appears as a minuscule circle of radius of the order of m.
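The standard Kaluza-Klein bookkeeping (a textbook sketch, not taken from the original article) makes the consequence of such a circular dimension explicit: a field on the tube must be periodic in x̃, so it can be Fourier-expanded, and each mode looks in ordinary space-time like a particle with an extra contribution to its mass set by the circle's radius R,

\[
\Phi(x, \tilde{x}) = \sum_{n=-\infty}^{\infty} \phi_{n}(x)\, e^{\,i n \tilde{x}/R},
\qquad
m_{n}^{2} = m_{0}^{2} + \frac{n^{2}}{R^{2}} \quad (\hbar = c = 1),
\]

so the smaller the circle, the heavier, and hence the less visible, the excited modes.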

Now, it may be that gravity has no more to do with the mass spectrum of quarks and leptons than it has to do with the energy-level spectrum of the hydrogen atom (in contrast to a basic assumption of Kaluza-Klein and string theory). Nevertheless, the extra dimension can still be useful, as it provides natural space for additional elementary particles. Indeed, physicists have routinely employed two extra dimensions for decades to describe a feature of elementary particles called weak isospin. To understand this feature, picture an electron spinning about an axis in ordinary space. This spinning motion is a quantum-mechanical version of the spinning motion of, say, a ball, or the earth about an axis joining the north and south poles, as pictured in Figure 9. The electron is very small, of course, if it has any size at all. We confirm that it has this spinning motion (predicted by the Dirac equation) from the way that it interacts with other particles. The direction of the axis about which the electron is spinning can be recorded by two angles, say θ and φ, or equivalently, by the two coordinates of a small area on the surface of a large sphere (say, of unit radius) where the electron's extended spin axis touches the sphere. The latter is pictured in Figure 10. Now just as the electron has a spinning motion in ordinary space, it also behaves in reactions with other particles as if it had an additional spinning motion in a synthetic space (i.e., in extra dimensions). Physicists call this additional spin weak isospin. In Figure 11, I have sketched the electron's weak isospin, s̃, directed to a small area on the synthetic sphere (called the weak-isospin sphere or the isospin sphere). The surface of the isospin sphere can be considered to be two extra dimensions. In a classical (i.e., non-quantum-mechanical) theory, each electron pointing to a different point on the isospin sphere would correspond to a different particle in ordinary space because it would behave differently in its inter-


Fig. 9. Earth spinning about its north-south axis.


Fig. 10. Unit sphere concentric with electron, and small area on unit sphere where the electron's spin axis touches.


Fig. 11. Electron at center of (unit-radius) weak-isospin sphere; electron's weak-isospin axis, directed at small area on two-dimensional surface of sphere.

s",

actions with other particles. Thus, the surface's two extra dimensions would apparently turn one particle into an infinite number of different kinds of particles as seen in ordinary space. However, according to the rules of quantum mechanics, the electron may point only in two directions in ordinary space (up and down), and similarly it may point only in two directions in isospin space. Thus, isospin only doubles the number of electron types. The other member of the electron's doublet is the electron neutrino, ν_e, and its spin is usually taken to point "up." The electron's weak isospin is then taken to point "down." The e - ν_e doublet is plotted with all of the other doublets in Figure 3, and plotted alone in Figure 12.

Let us suppose that we are constructing a theory of elementary particles, and we incorporate weak isospin to double the number of electrons. Fine, but there are six Dirac-particle doublets in Figure 3. How can we account for all six doublets? Well, if we are going to take the two-dimensional surface seriously and say that the electron doublet "lives" on that surface, then I think that it is only reasonable to take the volume within the spherical surface seriously too and say that the electron lives in the entire volume. In this way, I introduce another artificial dimension, the radial dimension, which brings the number of extra dimensions to three. This, as it turns out, gives enough space for an extended Dirac equation (a Dirac equation in 4 + 3 = 7 dimensions) to generate the up and down quarks as well as the electron and its neutrino, that is, enough space to generate the whole first generation. The e - ν_e and up-down doublets are plotted in Figure 3 and repeated below in Figure 13.
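In the usual notation (added only as a reminder of the standard conventions), the spin direction singled out by the angles θ and φ is the unit vector n̂, and the quantized "up/down" weak-isospin assignments just described correspond to the familiar doublet labels T₃ = ±1/2:

\[
\hat{n} = (\sin\theta\cos\phi,\ \sin\theta\sin\phi,\ \cos\theta),
\qquad
\begin{pmatrix} \nu_{e} \\ e \end{pmatrix},\
\begin{pmatrix} u \\ d \end{pmatrix}:
\quad T_{3} = +\tfrac{1}{2}\ \text{(upper member)},\ -\tfrac{1}{2}\ \text{(lower member)}.
\]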



Fig. 12. Plot of the e - ν_e weak-isospin doublet versus rest-mass energy. Upper experimental limit of the electron neutrino's mass indicated by horizontal bar.

Although I have suggested that the Dirac particle is trapped within the spherical shell as a marble might rattle around inside a tin can, in fact a softer repelling surface is required to enable the Dirac equation to produce the kind of multiplets that experimentalists actually see. This surface allows the particle to penetrate somewhat and turns out to be what is called a harmonic-


Fig. 13. Up and down quarks, and electron-neutrino and electron, grouped as weak-isospin doublets versus rest-mass energy, Mc². These particles constitute the first generation.


oscillator "well." Mathematically, it is as if the Dirac particle were connected to the center of the isospin sphere with a rubber band. As I will explain in the next section, we can generate all three generations of quarks and leptons with a Dirac equation if we increase the number of extra dimensions from three to four and again restrain the Dirac particle with a harmonic-oscillator force. This is what I consider the particle-physics evidence that we live not in four, but rather in 4 + 4 = 8 dimensions (Bryan, 1998, 1999). It is reasonable to suppose that if four extra dimensions exist within a spherical shell, then these dimensions should exist outside the shell as well, perhaps stretching out to infinity in all extra directions. As far as I know, there is no elementary-particle-physics evidence (yet) for extra dimensions stretching out beyond the shell. However, there may be anecdotal evidence: the widely reported out-of-body experiences (Monroe, 1971), near-death experiences (NDEs) (Moody, 1975), existence of discarnate entities (Roberts, 1972), encounters with UFOs (Mack, 1994), and cases of extrasensory perception (Targ and Puthoff, 1977) suggest long-ranging extra dimensions. I will elaborate on this in the last section of this paper. In the next section, I give in more technical language the evidence that I have found for four extra dimensions. I will also discuss some limitations of the model. Readers not interested in the details can skip to the last section with little loss in continuity.

2. Some Technical Details
To reproduce the quarks and leptons, it is not at all necessary to take the radius of the extradimensional sphere to be as small as the Kaluza-Klein radius. A radius of the order of 10⁻¹⁹ m will suffice. If the surface of the sphere were impenetrable, then quantum mechanics (in this case, a Schrödinger equation acting in the three extra dimensions) would permit a particle to be trapped provided that its mass were m₁ or m₂ or m₃ ... This spectrum of possible mass states is sketched in Figure 14 as horizontal lines, drawn inside the walls of the "square well." (The masses' subscripts merely signify that they are distinct.)


Fig. 14. Quantum-mechanical mass-levels of a particle trapped within a hard spherical shell (a "square well") in three extra dimensions.
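The textbook result behind this sketch is that a particle of mass M confined by an impenetrable sphere of radius R can occupy only discrete levels; in the simplest (s-wave) case the allowed wave numbers satisfy k_n R = nπ, so that

\[
E_{n} = \frac{\hbar^{2} k_{n}^{2}}{2M} = \frac{n^{2}\pi^{2}\hbar^{2}}{2 M R^{2}},
\qquad n = 1, 2, 3, \ldots
\]

The analogous discrete eigenvalues, written in the figure as m₁, m₂, m₃, ..., appear as the horizontal lines; this nonrelativistic formula is only an illustration, since the author's actual calculation uses a Dirac equation in the extra dimensions.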


If the hard-shell square well is replaced by a harmonic-oscillator well (which means that the Dirac particle can "push" the wall out somewhat), then the trapped particles appear as 1, 3, 6, 10, ... multiplets with allowed masses m₁, m₂, m₃, ..., as plotted in Figure 15. (Again, the subscripts merely denote distinct masses.) The 1 comprises one particle, the 3 comprises three different kinds of particles, etc. These multiplets exhibit what is known as SU(3) symmetry. The triplet 3 has the symmetry of quarks (quarks come in three "colors") and the singlet 1 has the symmetry of leptons (leptons are "colorless"). The Schrödinger equation can also factor in the two isospin states that come with each multiplet to create a quark doublet and a lepton doublet. The 3 doublet and the 1 doublet then have the same quantum numbers as the physical u - d and the e - ν_e doublets, respectively, as in Figure 3 or 13. These two doublets, then, constitute the first generation, or family.

Fig. 15. Particle mass levels predicted by symmetric harmonic-oscillator well in three extra dimensions; SU(3) multiplicities 1, 3, 6 indicated.
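The multiplicities quoted here are simply the level degeneracies of a symmetric (isotropic) harmonic oscillator: counting the ways the excitation number n can be distributed over d extra dimensions gives

\[
g_{d}(n) = \binom{n + d - 1}{d - 1},
\qquad
g_{3}(n) = \tfrac{1}{2}(n+1)(n+2) = 1, 3, 6, 10, \ldots,
\qquad
g_{4}(n) = \tfrac{1}{6}(n+1)(n+2)(n+3) = 1, 4, 10, 20, \ldots,
\]

which reproduces the SU(3) pattern of Figure 15 and the SU(4) pattern discussed next.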

Note, however, that the harmonic-oscillator well also generates 6s, as well as 10s, etc. (not shown). These sextuplets and decuplets would be new kinds of Dirac particles, neither quarks nor leptons. Such particles have not been seen. However, astrophysicists see indirect evidence that at least ten times as much matter exists in the universe as has been observed with our optical and radio telescopes. For instance, stars somewhat beyond the edge of the Milky Way circle our galaxy at surprisingly high speeds, indicating that they are being gravitationally pulled toward the center of the galaxy by much more matter than is visible to our instruments. The 6s, 10s, and so on predicted by my model might be the source of this so-called "dark matter."

I have indicated how one generation of quarks and leptons can be generated by a harmonic-oscillator force acting on weak-isospin doublets. However, there are two more families to be accounted for. As I mentioned earlier, these can be generated by increasing the number of extra dimensions from three to four. A symmetric harmonic-oscillator well in four dimensions will yield SU(4) multiplets with multiplicities 1, 4, 10, ... In an SU(4) = SU(3) × U(1) de-


composition, the 1 stays a 1, the 4 breaks up into a 1 plus a 3, the 10 breaks up into a 1 plus a 3 plus a 6, etc. This results in the weight diagrams shown in Figure 16. Quarks and leptons suggested by these quantum numbers are indicated. In Figure 16, the quantum numbers ñ = 0, 1, 2 correspond to increasing mass levels. The chart is divided into two halves, with the weight diagrams on the left corresponding to particles with weak-isospin quantum number +1/2 (isospin "up") and the diagrams on the right to particles with weak-isospin quantum number -1/2 (isospin "down"). If the restraining force is that of a pure harmonic oscillator, then the multiplets go on past ñ = 2 to infinity.

I have calculated these particle multiplets using a quantum-mechanical Dirac equation in eight dimensions with a symmetrical harmonic-oscillator potential acting in the four extra dimensions (Bryan, 1998, 1999). The equation separates into two equations, a standard free-particle Dirac equation operating in ordinary four-dimensional space-time and a second Dirac equation acting in the four extra dimensions. An extremely large mass, M (not any particle's rest mass), is added to the potential so that the Dirac equation in the extra four dimensions reduces approximately to a type of Schrödinger equation. This is used to predict the particles noted in Figure 16.

Now some caveats: The Dirac equation of my model does not quite preserve SU(4) symmetry, or the SU(3) symmetry of the quarks, so it does not agree

Fig. 16. SU(4) multiplets generated by a symmetrical harmonic-oscillator well acting in a Schrödinger equation in four flat extra dimensions; the weight diagrams at left correspond to isospin "up," those at right to isospin "down."

Now some caveats: The Dirac equation of my model does not quite preserve SU(4) symmetry, or the SU(3) symmetry of the quarks, so it does not agree perfectly with Quantum Chromodynamics. Also, as can be seen in Figure 15, the model predicts masses rising at a constant rate, whereas experimentally, particles' masses seem to increase exponentially, as in Figure 3. However, "Higgs-Yukawa" terms might perhaps be introduced to generate the correct masses, as is done in the Standard Model. [The Standard Model is a gauge field theory that fits all of the known elementary-particle data with some 19 adjustable constants. It was invented by Sheldon Glashow (1961), Steven Weinberg (1967), and Abdus Salam (1968). For a less technical description of the Standard Model, see Coughlan and Dodd (1991) and the references listed therein.] There are two general classes of particles in nature: the Dirac particles (and their antiparticles) that I have been describing, and bosons. The W⁺, W⁻, Z⁰, and photon are bosons, plotted along with the Dirac particles in Figure 3. They mediate the forces between the Dirac particles. My model treats Dirac particles as real particles but takes bosons into account only when they interact with Dirac particles. Work is underway to include bosons as real dynamical entities. There has to be a mechanism to keep the particles in place in the extra dimensions, so they do not drift off and disappear from our universe. In most models, localizing the particles poses no problem because the assumed extra dimensions are tiny circles, or generalizations of circles, of extremely small diameter, so the particles have hardly any place to go. However, in models with infinitely extended extra dimensions, such as mine, some trapping mechanism has to exist. For the time being, I simply introduce a potential well "by hand" to trap the particles. However, in 1983 Rubakov and Shaposhnikov had already considered the possibility that the extra dimensions might be infinite in extent, and they solved the particle-trapping problem with an elegant mechanism, namely a soliton generated by the nonlinear field equations of their model (Rubakov and Shaposhnikov, 1983). However, they considered only the case of a single extra dimension. For this case, they trapped a boson in the well sketched in Figure 17.

Fig. 17. Potential well V in one extra dimension generated by a soliton (kink) in φ⁴ theory; the wave function of the Dirac particle trapped on the soliton is denoted Ψ₀.


The width of the well is determined by the mass parameter m. Note that the well does not rise indefinitely, but only as high as 2m². This particular well traps just one kind of particle, at the level m². Particles whose mass-squared exceeds 2m² lie in the continuum and can propagate freely in Rubakov and Shaposhnikov's extra dimension. If the Rubakov-Shaposhnikov model could be extended from one extra dimension to four extra dimensions, then it might provide the entrapment that I have simulated with the harmonic-oscillator well. Furthermore, a Rubakov-Shaposhnikov-type well would provide a physically natural way to limit the number of generations to just the three that have been seen, because the well rises only a finite amount. Particles with energies exceeding the top of the well would travel freely in all eight dimensions and would not be listed in Figure 3.
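For concreteness, the shape of such a trapping well can be sketched with the standard φ⁴ kink. The conventions below are an editorial illustration and are not necessarily those used by Rubakov and Shaposhnikov (1983); only the sech²-shaped dip and the finite plateau matter for the argument in the text.

```latex
% Editorial sketch, assuming the scalar potential
%   V(\phi) = (\lambda/4)\,(\phi^2 - m^2/\lambda)^2 ,
% whose kink along the extra coordinate y is
%   \phi_{\mathrm{kink}}(y) = (m/\sqrt{\lambda})\,\tanh(m y/\sqrt{2}).
% Small fluctuations about the kink feel the Schroedinger-like well
\[
  U(y) \;=\; V''\!\bigl(\phi_{\mathrm{kink}}(y)\bigr)
       \;=\; 2m^{2} \;-\; 3m^{2}\,\operatorname{sech}^{2}\!\Bigl(\frac{m y}{\sqrt{2}}\Bigr),
\]
% which plateaus at 2m^2 for large |y|: modes trapped in the dip lie below
% 2m^2, while anything heavier escapes into the continuum, as in the text.
```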

3. Some Speculations
I have outlined a model based on four extra dimensions stretching out to infinity, in addition to the usual four dimensions of space and time (Bryan, 1998, 1999). The quarks and leptons in this picture are nevertheless trapped in a very small region in these extra dimensions, about 10⁻¹⁹ m in diameter (which also happens to be the upper limit of the size of the electron in ordinary space). Presumably I could modify the model so that the extra dimensions themselves were little circles 10⁻¹⁹ m in diameter. However, the mathematics is much simpler if the extra dimensions stretch out indefinitely, and so, by Occam's razor, I choose such a model. ("The upper dimensions are just like the lower dimensions.") But if the extra dimensions do stretch out to infinity, then why haven't we seen them? Well, perhaps some of us have. It may be that some of the people who have had an out-of-body experience (Monroe, 1971) or an NDE (Moody, 1975) have seen, and perhaps traveled into, the extra dimensions. If we take ordinary space-time and reduce it to just one spacelike dimension, z, as in Figure 6, and then take the four extra dimensions of my model and reduce them to two extra dimensions, say x̄ and ȳ, then the particles are trapped in a kind of tube in both the ordinary and the extra dimensions, as illustrated in Figure 18.

Fig. 18. Eight-dimensional space-time reduced to one ordinary dimension (z) and two extra dimensions (x̄ and ȳ); quarks and leptons are shown as quantum-mechanical wave packets, trapped in a harmonic-oscillator well in the x̄ and ȳ dimensions.


The particles are free to move back and forth along the z axis but are constrained by the walls of the tube (representing the harmonic-oscillator well) from going off in the x̄ or ȳ direction. Because we are made of quarks and leptons, we are also trapped in the tube, according to this model. In one real dimension, we might be represented by a string of quarks and leptons, as illustrated in Figure 19.

Fig. 19. Human body depicted as a string of quarks and leptons in one ordinary dimension, confined in a tube in two extra dimensions.

On the other hand, our consciousnesses may not be trapped in the tube and perhaps are not fundamentally constrained at all (Roberts, 1972), as suggested by Figure 20. [For an interesting article that hints how a consciousness might control a physical body using energies no greater than those of visible photons (of the order of electron volts), see Firsoff (1975).]

Fig. 20. Human consciousness sketched as a kind of wave function, not necessarily confined to human body (depicted in one dimension as a string of quarks and leptons).

Now, if there is one tube representing our universe, then there is probably another tube somewhere else representing another universe. There might be an uncountable number of tubes, or universes. Perhaps one of the other tubes is very close to our tube. Then a "tunnel" might make it possible for a human consciousness to drift over to the other tube, and so entering, see a whole new universe. This is illustrated in Figure 21. Perhaps upon entering the other universe, the consciousness encounters the "light" and other consciousnesses


Fig. 21. Tunnel facilitating departure of human consciousness from our universe to another universe in higher dimensional space-time.

(Moody, 1975). The other universe need not have the same number of dimensions as our universe, and particles there could be pure energy without mass. As I mentioned earlier, if a particle has sufficient energy, then it might escape our universe (the tube) and propagate freely in all of the dimensions, ordinary and extraordinary. For example, if an electron and a positron were each accelerated to sufficiently high energies in a linear accelerator, then upon colliding they might escape the tube. This is illustrated in Figure 22. Perhaps the next generation of accelerators, in particular the LHC proton accelerator, being constructed at CERN in Geneva, Switzerland, will be able to accelerate particles to sufficiently high energies to escape.

Fig. 22. Electron (e⁻) and positron (e⁺) accelerated in a linear accelerator to sufficiently high energies to escape our universe and travel in the extra dimensions.

It may well be that if there are indeed long extra dimensions, then particles abound in that space. Probably they can bind to one another, just as particles in our four-dimensional space-time can bind to one another. If such particles exist, then perhaps they can be fashioned into UFOs. The UFOs would not be able to actually enter our space, as they would probably blow up upon falling into our "potential well." However, they might be able to approach very closely (perhaps even as close as a millimeter). Associated consciousnesses might


easily penetrate our space then. This might be the basis for reports of alien abduction (Mack, 1994). Finally, information might be able to propagate in ways not possible in ordinary space-time. For example, ordinary electromagnetic signals sent from submarines are quickly absorbed by the sea water, but if another kind of signal could be sent through the wall of our "tube" into the extra dimensions, then the signal might propagate quite freely in these higher dimensions, to finally return to a receiver somewhere else within our tube (Targ and Puthoff, 1977). This is illustrated in Figure 23.


Fig. 23. Signal leaving sender in ordinary space-time, travelling through higher dimensions and returning to receiver in ordinary space-time.

3.1. Note Added

Recently there has been considerable interest in the possibility that the extra dimensions, although neither flat nor infinite in extent, might still be circles of millimeter size. Only gravity waves would range that far out. All other particles would be confined to very much smaller distances in the extra dimensions. Arkani-Hamed, Dimopoulos, and Dvali (1998) and Antoniadis et al. (1998) have proposed models of this sort. These models have passed a gauntlet of tests, but many challenges remain. See, for example, Banks, Dine, and Nelson (1999).

Acknowledgments
I would like to thank Hal Puthoff and Robert Butts for much encouragement over the years. I would also like to thank Robert Monroe and John Mack for useful conversations, and Sunny Nash for helpful criticism.

References
Antoniadis, I., Arkani-Hamed, N., Dimopoulos, S., & Dvali, G. (1998). New dimensions at a millimeter to a fermi and superstrings at a TeV. Physics Letters B, 436, 257.
Arkani-Hamed, N., Dimopoulos, S., & Dvali, G. (1998). The hierarchy problem and new dimensions at a millimeter. Physics Letters B, 429, 263.
Banks, T., Dine, M., & Nelson, A. E. (1999). Constraints on theories with large extra dimensions. Journal of High Energy Physics (electronic journal), 06(1999), 014.
Bryan, R. A. (1998). Are quarks and leptons dynamically confined in four flat extra dimensions? Nuclear Physics B, 523, 232.


Bryan, R. A. (1999). Are the Dirac particles of the Standard Model dynamically confined states in a higher dimensional flat space? Canadian Journal of Physics, 77, 197 (hep-ph/9904218).
Coughlan, G. D., & Dodd, J. E. (1991). The ideas of particle physics: An introduction for scientists. Cambridge, UK: Cambridge University Press.
Firsoff, V. A. (1975). Life and quantum physics. In Oteri, L. (Ed.), Quantum physics and parapsychology (p. 109). New York: Parapsychology Foundation.
Glashow, S. L. (1961). Partial-symmetries of weak interactions. Nuclear Physics, 22, 579.
Lee, H. C. (Ed.). (1984). An introduction to Kaluza-Klein theories. Singapore: World Scientific.
Mack, J. E. (1994). Abduction: Human encounters with aliens. New York: Scribner's.
Miller, J. G. (1980). Kaluza and Klein's five-dimensional relativity. In Marlow, A. R. (Ed.), Quantum theory and gravitation (p. 221). New York: Academic Press.
Monroe, R. A. (1971). Journeys out of the body. New York: Doubleday.
Moody, R. A., Jr. (1975). Life after life. Atlanta, GA: Mockingbird.
Particle Data Group (1998). Review of particle physics. European Physical Journal C, 3, 1.
Roberts, J. (1972). Seth speaks: The eternal validity of the soul. Englewood Cliffs, NJ: Prentice-Hall.
Rubakov, V. A., & Shaposhnikov, M. E. (1983). Do we live inside a domain wall? Physics Letters B, 125, 136.
Salam, A. (1968). Weak and electromagnetic interactions. In Svartholm, N. (Ed.), Elementary particle theory: Relativistic groups and analyticity (pp. 367-377). Stockholm: Almqvist & Wiksell.
Targ, R., & Puthoff, H. (1977). Mind-Reach: Scientists look at psychic ability. New York: Dell.
Weinberg, S. (1967). A model of leptons. Physical Review Letters, 19, 1264.

Journal of Scientific Exploration, Vol. 14, No. 2, pp. 275-279, 2000

0892-3310/00
© 2000 Society for Scientific Exploration

ESSAY

Modern Physics and Subtle Realms: Not Mutually Exclusive
ROBERT D. KLAUBER
1100 University Manor Drive, 38B, Fairfield, IA 52556 e-mail: rklauber@netscape.net

Abstract: One facet of the change in worldview ushered in by the quantum mechanical revolution was de Broglie's discovery that all particles are actually wavelike and that, because of this, a plurality of particles can occupy the same region of space at the same time. This well-accepted and empirically validated principle is explored in the context of the quantum field theory of force field/particle coupling. It is then shown that subtle (nonphysical) realms could readily exist without being in any way contradictory to, or inconsistent with, modern physics. Keywords: interactions; coupling; subtle realms; four forces; other universes

There is a common misconception, held by many scientists and nonscientists alike, that the laws of physics preclude the existence of nonphysical entities and any concomitant metaphysical realms. This viewpoint, as it turns out, is a vestige of pre-quantum-mechanical scientific thinking and in no way represents a constraint imposed on reality by the postclassical physics of our modern age. A general pre-twentieth-century scientific adage (even an axiom) held that no two objects could occupy the same place at the same time. It was therefore implicit that apparitions and similar entities having the property of coexisting in time and space with physical structures such as doors and walls could not possibly exist. De Broglie's discovery of the wavelike nature of matter changed that perspective dramatically. Today, physicists regularly deal with wave functions of leptons, quarks, photons, and the like, which overlap and share identical regions of space and time. Just as two waves rolling over the ocean heading in opposite directions can pass through each other unscathed, though occupying for a time the same area of the water surface, so too can two subatomic wave/particles pass through one another unaltered, coexisting for a time in the same space. Trapped particles can, in fact, share a common "trap" indefinitely. If two such wave/particles jointly occupy a particular region of space-time and do not interact with each other, then neither changes in any way. Often, however, they do interact, and such interaction can change their energies, momenta, charge, and other properties. Interactions are mediated by force fields

276

Robert D. Klauber

between the particles, and these force fields carry properties such as energy, momentum, and charge from one particle to the other. In quantum field theory these force fields, the carriers of the properties between particles, are actually propagating waves. Because waves and particles are essentially one and the same, we commonly refer to the force fields as particles, or more precisely, as virtual particles. They are called virtual (in contrast to the real particles whose interactions they mediate) in part because they are singularly evanescent. For example, a first real particle such as an electron can emit a virtual particle such as a photon, which subsequently is absorbed by a second real particle such as a second electron. The two electrons change energy and momenta (i.e., each recoils from the other), and we can measure those changes. The photon, on the other hand, exists only very briefly, long enough for it to carry the appropriate amounts of energy and momentum from electron one to electron two. We can never measure the photon physically, and so we distinguish it from the real particles by calling it virtual.

According to our current understanding, all forces are mediated by such virtual particles. But particles are really waves, and these "wavicles" make up the entire universe. The reason our universe isn't simply an uninteresting collection of independent waves continually passing through one another unimpaired and immutable is that the various waves are coupled to one another via interactions (forces). The wavelike particles making up your hand do not pass through an object such as a door because the electrons (waves) in your hand and the electrons in the door interact; that is, they continually exchange copious numbers of virtual photons that effectively push the door away when the hand "touches" it. Without this interaction (the coupling of particles in the hand to other particles in the door), the hand would simply pass directly through the door, never feeling the sensation of touch and in fact never knowing the door was there.

It turns out that there are four interactions, or forces, known to modern physics. Two of these, the electromagnetic and gravitational interactions, are familiar in our macroscopic world, and two, the strong and weak forces, are predominantly subatomic. We presently believe that each of these four forces is mediated by a different type of virtual particle. The photon mediates the electromagnetic force; the graviton, the gravitational force; the gluon, the strong force; and the intermediate vector bosons, the weak force.

It is important to recognize that we know a particle exists (actually that anything at all exists) only because of the coupling (interactions) between particles. For example, an electron interacts with an electron detector by exchanging virtual photons with that detector. A detection signal occurs only because the electron being detected is coupled via the electromagnetic force to the electrons in the electronic circuitry of the detector. Similarly, if we feel a door with our hands, or perceive through any of our senses, it is only because the particles in our sense organs are coupled to the particles transmitting particular properties (information) from the object we perceive. If there is no coupling, there is no perception.


A real-world exemplar of this principle is the neutrino, a particle that has no electric charge and that therefore is not coupled to any charged particle via the electromagnetic force. A human skin cell or a particle detector that responds to virtual photons (i.e., is coupled to the electromagnetic force) could have many neutrinos passing through it but would never register a thing. Similarly, neutrinos have no coupling with the strong force. So a detector that might be sensitive to virtual gluons would likewise be transparent to, and unable to detect, neutrinos. The various particles in creation are coupled in different ways via different combinations of the four forces. For example, the electron has electromagnetic, gravitational, and weak coupling, but not strong coupling. Quarks are coupled to all four forces. Neutrinos, because they are massless, or extremely close to massless, have gravitational coupling that is far too small to measure, and hence they effectively possess only weak coupling. This singular characteristic of neutrinos makes them not only interesting, but also particularly relevant to the theme of this article. Note that the only way we can detect neutrinos is via the weak force. But the weak force is so named because it is feeble. In fact, it is so extremely feeble that more than 200 billion neutrinos have passed through your thumbnail in the time it has taken to read this sentence, yet you felt nothing. The weak force (the only way your nervous system could have detected the neutrinos' presence) is so slight that none of those neutrinos interacted with a single atom in your nail. In fact, the only way neutrinos are actually detected in experiments is by using huge volumes of matter over long periods of time. In typical experiments, a mere handful of actual interactions is detected over many months.
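The "200 billion" figure is easy to check at the order-of-magnitude level. The short script below is an editorial illustration; the flux, area, and reading-time values are assumptions, not numbers taken from the essay.

```python
# Order-of-magnitude check of the neutrino-count claim (editorial sketch).
# Assumed inputs: a solar-neutrino flux at Earth of very roughly 6e10 per
# cm^2 per second, a ~1 cm^2 thumbnail, and a few seconds of reading time.
flux_per_cm2_per_s = 6e10    # approximate solar-neutrino flux at Earth's surface
thumbnail_area_cm2 = 1.0     # rough thumbnail area
reading_time_s = 3.5         # rough time to read one sentence

count = flux_per_cm2_per_s * thumbnail_area_cm2 * reading_time_s
print(f"about {count:.0e} neutrinos")   # ~2e11, i.e. roughly 200 billion
```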

This near imperceptibility of weakly interacting neutrinos makes them almost ghostlike. They pass through matter virtually without our being aware of their presence. More remarkably, another property of the weak force may make certain neutrinos even more tenuous and even less a part of what we consider our universe. The weak force is restricted to particles physicists designate as having "left-handed chirality." In oversimplified terms, one can think of an electron, neutrino, or quark as spinning, typically with the spin aligned in the direction of travel (velocity). Consider that the spin can be thought of as either clockwise (right-handed) around that direction or counterclockwise (left-handed). Peculiar as it may seem, only left-handed neutrinos couple with the weak force. Right-handed ones are immune, and hence transparent, to its effects. So only a left-handed neutrino could interact via the weak force with another particle such as a quark, electron, or other neutrino. The key point is this: Right-handed electrons and quarks exist. We know because they have been detected via the electromagnetic force. But we cannot detect right-handed neutrinos in such a way because they do not interact electromagnetically. Because we cannot detect right-handed neutrinos weakly, there is essentially no way to know if these particles even exist. Yet, there could be untold trillions of them passing every minute through each of us and through every known detector. If left-handed neutrinos are almost ghostlike, then right-handed neutrinos are fully so.

Consider then that conscious beings in our universe are aware of each other, the rest of the universe, and at least some aspects of their own selves only because of interactions between the particles/waves of which physical objects are made. As noted, these interactions, as far as we know, are limited to four. Consider further the possible existence of a new family of diverse particles, similar to right-handed neutrinos in that none of them interacts via any of the four forces dominating our reality. This new family could consist of a limited number of types, each of which fills our known universe in immense numbers leading to significant densities. Consider further that this family might have three or four or five different interactions of its own, coupling its members in various ways. This family and its set of interactions could then behave in generally similar fashion to our own family of particles and force fields, although it would have unique types of interactions manifesting as a complexity and chemistry all its own. It might evolve, grow, and perhaps even produce intelligent beings. And it would never be detected by any of us, at least through our physical senses. We would coexist in the same space and time, yet because all quantum waves in that system would pass unperturbed through, and without perturbation of, our system, we would live our lives oblivious to this other independent cosmos.

If there is one such other family, why not many? In fact, why not a great many? The universe certainly favors unimaginably large numbers. If, as we suppose, there are an uncountable number of galaxies (including those beyond our horizon of visibility) and, as many theorists propose, an uncountable number of other possible universes, then why not an uncountable number of other independent particle families? In the very place where you now sit, there may now also sit a plethora of other sentient beings, some of whom might also be pondering the sensory limitations of their particular version of quantum field theory. In this context, the proposition that a heaven or hell coexists in space with us might start to seem rather plausible. So might reports of close UFO encounters in which alleged advanced civilizations seem capable of manipulating and moving between physical and nonphysical realms. The list readily expands to near-death tunnels, spirits, angels, auras, astral planes, other "dimensions," and various other concepts relegated by many mainstream scientists to the arena of fantasy. When certain individuals assert they perceive such things, perhaps the proper scientific response should be investigation, rather than the more common practice of disparagement and dismissal. Something in these people's physiologies may be somehow coupled, in presumably delicate fashion, to one or more other-worldly force fields. We know individual consciousness and its attendant physical body interact in ways we still do not fully understand.

Could that same consciousness not also interact, in still less understood ways, with all but impalpable, but nonetheless equally real, trans-physical bodies? In concluding, we note that we certainly have not proven that subtle realms actually exist. Yet we must bear in mind that in the long history of mankind's numerous metamorphoses in paradigm, the universe has repeatedly surprised us by being far more extraordinary and expansive in every regard than we had previously imagined (or even, as some have said, than we can imagine). Given such a history, it would seem prudent to proceed carefully and without prejudice in matters of purported metaphysical nature and to draw conclusions based on empiricism alone. In particular, no proponent of materialism should ever denounce as scientifically indefensible claims made by others regarding the possible existence of nonphysical realms. As we have seen, modern physics imposes neither a limit on the probability for existence of such transcendental worlds, nor restrictions on their nature, total number, or ultimate extent.

Journal of Scientific Exploration, Vol. 14, No. 2, pp. 281-299, 2000

0892-3310/00 © 2000 Society for Scientific Exploration

BOOK REVIEWS
The User Illusion: Cutting Consciousness Down to Size by Tor Nørretranders. Translated by Jonathan Sydenham. New York: Viking/Penguin Putnam, 1998. xii + 467 pp. $29.95 (c). ISBN 0-670-87579-1.
This book is not a turgid treatise in defense of objectivity in the so-called science wars. It does not discredit the emerging field of consciousness studies, and it does not call for the reinstatement of consciousness as the leading taboo topic in psychology. It is not a public broadside promoting biological materialism and reductionism. And it does not cut consciousness down to size by advocating either polarity of epistemology-realism or idealism. Here is what this book is about. The original title of this book in Danish, Maerk Verden, means "mark the world." This aptly summarizes Nonetranders' thesis, namely, that human perceptual processes carve out-or mark off-small segments of the physical world and represent them to our conscious awareness as reality. To do this, our sensory systems must filter and discard enormous quantities of information. The author terms this discarded information "exformation" (e.g., the first paragraph of this review is similar to exformation). In contrast, the English title, The User Illusion, is a somewhat oblique reference to a concept developed in the early days of the personal computer, which describes the intuitively designed interfaces between the owners of personal computers and their machines. Unfortunately, this is not fully explained until page 290. In a footnote, Norretranders rebuts effectively the criticism that the so-called user illusion is nothing more than the latest metaphor in a long line of technologically based ones about the mind. The legs of his thesis are four empirically verifiable neuropsychological phenomena: (a) the extremely limited "bandwidth" of our conscious awareness in the here and now, (b) the undervalued role of subliminal perception in everyday life, (c) the half-second delay between sensory inputs and conscious awareness, and (d) the backward referral in time of subjective experience. Yet, given these factors, the Danish title seems preferable, because it summarizes how consciousness establishes physical reality from a neuropsychological point of view. This book about consciousness does not include any discussion of consciousness-related anomalies. This is unfortunate, considering the light that the four phenomena cited above could shed on extended human abilities. Some readers will be vexed by the following elements: (a) The New York Times' Op-Ed style of editorializing about social implications woven through some of the later chapters, (b) the fact that the author is a prominent intellectual and journalist, but not an academician, and (c) the author's method of

282

Book Reviews

scholarship, which impresses me as a good example of Polanyi's notion of "personal knowledge." This book is worthy of being read, because it is well-written, the chapter notes, bibliography, and index are edifying, and specialists and nonspecialists will find the content satisfying. On balance, The User Illusion embodies an expression that Einstein reportedly used during his Princeton years whenever he was working on a particularly difficult problem: "I will a little think."

Arnold L. Lettieri, Jr.
PEAR Laboratory
C-131, Engineering Quadrangle
Princeton University
Princeton, NJ 08544

Lamarck's Signature by Edward J. Steele, Robyn A. Lindley, and Robert V. Blanden. Reading, MA: Perseus Books, 1998. xxi + 286 pp. $25.00 (c). ISBN 0-7382-0014-X. Epigenetic Inheritance and Evolution by Eva Jablonka and Marion J. Lamb. New York: Oxford University Press, 1995. x + 346 pp. $60.00 (c). ISBN 0-19-854062-0.
The incredible richness of the biosphere challenges us with obvious questions: What is the origin of diversity among plants and animals? Are natural kinds static, or do they change over geological time? What are the forces causing these changes, and are they directed toward specific ends? Is there any drive toward increasing complexity, and if so, what is the relationship of this process to entropy? What is the interplay between heredity, embryonic development, and ecology? A theory that addresses these issues is needed as a cornerstone of biology. We now know a significant amount about the processes at work during the morphogenetic events that turn a fertilized egg into an embryo and then an adult. In large part, these processes are directed by the genome-information that is instantiated by patterns within DNA molecules. This information is passed from parent to offspring and carries instructions telling molecular machines how to assemble and modify a growing embryo, as well as instructions for building the machines themselves. It is now appreciated that the genome is not simply "beads on a string" of DNA; functionally, it consists of logic elements, subroutine modules, addresses for information retrieval and modification, error-correction mechanisms, hierarchical organization, etc. (see Shapiro, 1991). The inheritance of acquired characteristics theory was a natural way to at-


tempt to explain cases of marvelous adaptation of organisms to features of their environment. This idea is most often associated with Lamarck, though it had adherents both before him and after him. Moreover, Darwin himself thought that inheritance of acquired characteristics did occur. This theory has the benefit of providing easily believable explanations. For example, the camel has rough pads on its knees because constant kneeling in gritty sand causes a thickening of the skin in that area. Unfortunately, in this most simplistic form, this theory faces a number of problems. For example, it would seem that most changes acquired by an organism within its lifetime are due to injury, disease, and age and are not beneficial to pass on to offspring. Likewise, it is often pointed out that thousands of years of circumcision have not produced a tendency for males to be born without foreskins. "Weismann's Barrier" provides a "genetic chastity belt around the reproductive organs" (Lamarck's Signature, p. 165). In most animals, offspring arise from a very small subset of the adult organism's cells, the germ line, whose cells are segregated long before adulthood, so changes occurring to somatic body cells are not passed on. It is possible to attempt to circumvent such objections. For example, Epigenetic Inheritance and Evolution discusses interesting complications such as the possibility of existing germ cells being subject to selection and competition; these processes may well be affected by environmental factors, simulating environmental inheritance when applied over the endogenous variation among germ cells. However, there is a much more basic reason why large-scale morphological changes wrought on the adult body cannot be passed on to offspring. The crucial factor is embryonic development. It is now known that embryos arise as the result of the complex interactions of chemicals generated in response to information contained in the DNA. The genetic material does not appear to contain descriptions of the outcomes of embryogenesis. The existence of reverse transcription (turning mRNA back into DNA), and even the possibility of reverse translation (turning a protein into mRNA), is simply not enough. The process of development is unidirectional; the laws of biochemistry are what drive the generation of form (such as a limb or nervous system) from basic ingredients such as proteins. There is no way to reverse the process and compute how to adjust every protein, including structural ingredients as well as controlling machinery, to result in a particular feature, such as a longer neck. The process of generating fractal images shares this property. Given a simple mathematical function such as z = z² + c, it is easy to produce fractal images of great complexity and beauty. The nature of chaos theory makes it impossible to reverse the process. Given a different image, it is simply not possible to know how to alter the function to produce the desired image. The same is true of embryonic development.
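The asymmetry between running the map forward and inverting it is easy to see in code. The snippet below is an editorial illustration of the z → z² + c iteration (a minimal Mandelbrot-style escape test), not something taken from either book.

```python
# Forward iteration of z -> z^2 + c is trivial to compute; recovering a
# desired image (i.e., the right c) from the output is the hard,
# effectively irreversible direction the reviewer describes.
def escapes(c, max_iter=100):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # orbit diverges: c lies outside the Mandelbrot set
            return True
    return False

print(escapes(complex(-1.0, 0.0)))  # False: orbit 0 -> -1 -> 0 -> ... stays bounded
print(escapes(complex(1.0, 0.0)))   # True: orbit 0 -> 1 -> 2 -> 5 -> ... escapes
```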
However, there are limited cases in which inheritance of acquired characters does appear to occur; these are discussed in fascinating detail in Epigenetic Inheritance and Evolution. For example, plants do not all have a germ line/soma distinction. In plants where a new organism can regenerate from any part of the adult, changes made to adult body parts can indeed become part of the offspring. Likewise, protozoa (single-cell organisms) can pass on altered features of their cytoskeleton, for example, because they reproduce by fission of the adult organism. Bacteria can acquire genetic material from the environment (in the form of "minichromosomes" such as episomes), and because these also reproduce by fission of the adult, such changes can become a permanent part of the reproducing line.

Given the powerful reasons why Lamarckian mechanisms are unlikely to play a role in large-scale morphological change, what alternatives remain? The Darwinian theory is a conclusion drawn from three facts, all of which were known long before Darwin's day. These basic axioms are as follows: (a) Offspring resemble their parents more than they resemble unrelated individuals (i.e., traits are hereditary), (b) the transmission of characters from parents to offspring occurs with high but imperfect fidelity, and (c) resources (such as food, mates, territory, etc.) are limiting because of the exponential growth of populations during times of plenty. The genius of Darwin was to realize (and support by painstaking observation) the claim that these three facts, when joined together, form a powerful system for explaining evolutionary change. Because of the intense competition for limited resources, even small advantages lead to increased chances for survival; thus, beneficial errors in hereditary transmission will accumulate and eventually dominate the population over geological timescales. This basic scheme, when carried over a sufficient number of individuals, over sufficiently great timescales, is an immensely powerful way to leverage random changes into adaptive results. This algorithm is so generally useful that it has been applied to explain the fine-tuning of everything from thoughts and concepts in human culture (Dawkins, 1976) to basic physical laws of the universe (Smolin, 1997) to computer programs that solve problems for which we have no algorithmic solution (Koza, 1992). In biology, as well as in applications of genetic strategies to other sciences, the basic issue is to define the unit of selection (genes versus organisms; e.g., Dawkins, 1982) and the fitness function, a way of assessing the quality of individuals.

It should be noted that the three basic axioms of modern evolutionary theory are not controversial. What may be questioned, although it usually isn't, is the issue of whether they are sufficient to explain the incredible richness of the biosphere. The success of mimicking the Darwinian strategy to design computer programs (known as Genetic Programming; Holland, 1992) suggests that our intuition far underestimates the creative power of this process. At the same time, progress in the field of Genetic Algorithms research seems to have slowed considerably on the hard problems, and it remains an open question as to whether a closer look at the mechanisms of development (the layer between genotype and phenotype) will be sufficient to successfully apply evolutionary strategies to those difficult problems.
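As an editorial aside, the selection-variation loop described above can be written in a few lines. The bit-counting fitness function and all parameter values below are illustrative assumptions, not anything from the books under review.

```python
import random

def fitness(genome):
    return sum(genome)          # illustrative fitness: count of 1-bits

def mutate(genome, rate=0.01):
    # heredity with imperfect fidelity: occasional copying errors
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(50)] for _ in range(100)]
for _ in range(200):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:50]       # limited "resources": only the fitter half reproduces
    population = [mutate(random.choice(parents)) for _ in range(100)]

print(max(fitness(g) for g in population))   # climbs toward 50 over the generations
```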


The idea that random changes can be harnessed by any mechanism to result in everything from viruses to human beings is repugnant to some, and a wellknown group of antievolutionists exists. They have argued that physical processes in and of themselves are not sufficient to result in the appearance of complex biological organisms. It is important to note that Lamarckian inheritance is of no use whatsoever to religious fundamentalist or dualist thinkers in arguing against the sufficiency of material evolution. A theory of environmentally caused changes that are inherited is just as mechanistic as neo-Darwinism; it is simply another way to explain adaptation without recourse to a grand design or creator. Of course, some will argue at this point that it is God who directs which somatic changes are carried into the genome. This move is useless, as then one might just as well posit that God controls what mutations occur in DNA, to later be subject to Darwinian selection. These are all interesting issues and are dealt with in these two volumes. Epigenetic Inheritance and Evolution is an excellent book for the amateur or professional biologist. It begins by reminding the reader that heredity is not transmission of characters but of instructions: A pure line of animals has identical genetic instructions, but the manifestation of their characteristics can be influenced environmentally. This book is packed with interesting special cases in which Lamarckian-like inheritance occurs. It covers inheritance systems other than the genome (epigenetic inheritance systems), such as episomes, phages, retroviruses, mitochondria, cytoskeleton (cilia, basal bodies, entrioles), chromatin marking, methylation, X-inactivation, mobile elements in DNA, etc., and discusses in excellent detail their role in sex, evolution, multicellularity, speciation, meiotic drive, hybrid inviability, etc. It has enough detail to satisfy, and it makes great reading for anyone who is interested in the details of real biology. Also included are viral shuttling of genetic info between organisms (horizontal inheritance), imprinting, parental age effects, and maternal control of gene expression. Issues such as segregation of germ line in various organisms and interactions between epigenetic systems and the genome are covered as well. Despite these fascinating examples of alternative types of inheritance, the authors point out early on (p. 20) that somatic changes are not put back into the germ line. Lamarck's Signature is also a very good read, aimed at more of a layperson audience. It is part of the Frontiers of Science series (edited by Paul Davies)a set that contains several other excellent books. This book doesn't cover as many examples of nonstandard inheritance as Epigenetic Inheritance and Evolution. The first six chapters are a good introduction to basic evolution and Lamarckism, molecular and cell biology, and immunology; they cover the basics in good detail: DNA, RNA, transcription, editing, viruses, recombination, reverse transcription, enzymes, etc. Only the last two chapters are about Lamarckism, and the first few set up the background. Lamarck's Signature focuses on the idea that the immune system instantiates an example of a Lamarckian inheritance mechanism. A cornerstone of the


book is the fact that the immune system is able to specifically react to a practically infinite number of antigens, which are not known in advance. It does this by a selectionist scheme, whereby specific pieces of DNA are randomly mutated and those cells containing mutations, which allow them to bind antigens effectively, proliferate. The basic idea of this book is that DNA mutation is induced within a subset of lymphocytes as an attempt to further optimize the binding of those molecules to the antigen. This in itself is an example of the environment's effects on changes in DNA (even though the mutation events themselves are still random). Moreover, by the action of reverse transcriptase (an enzyme, provided by endogenous retroviruses, that turns mRNA back into DNA) and homologous recombination within germ cells, such mutations can be inserted into the germ line and become inherited. The authors summarize, "Rapid random mutation involving the replacement of the original DNA sequence coupled to selection for a better one (encoding an antigen-binding site with higher affinity than the original antibody) effectively results in the appearance of 'directed mutation,' the complete antithesis of random mutation" (p. 162). The two books complement each other nicely and are recommended for anyone who wants a thorough, professional, well thought out discussion of often neglected aspects of biological inheritance. Epigenetic Inheritance and Evolution is more technical, containing lots of in-depth examples and references. Larnarck's Signature is probably accessible to a wider audience; it has a very good introduction to relevant issues, but expert readers will want to skip to the last few chapters, which focus on Lamarckian inheritance in the immune system. The word Lamarckian tends to arouse strong feelings. It should be noted that one might pick up either book expecting revolutionary models of the emergence of complex animal forms by non-Darwinian means. If so, one would be disappointed. Upon reading both books, it becomes clear that there are lots of examples in biology of simple, single-cell systems (and some plants) where inheritance of acquired characters does indeed occur, and there are lots of complex effects in higher animals where inheritance of acquired characters is mimicked by other processes. What is not found in these books (and nowhere else that I am aware of) is any evidence for the truly controversial claim of Lamarckism-that changes occurring to the adult bodies of organisms can be transmitted to offspring in such a way as to make them better adapted to their environments. The bottom line is that all of the controversy and excitement regarding Lamarckism and its "challenge to Darwinian orthodoxy" is not only about environmental influences on the genome-we know these exist-and the idea that stress or other environmental inputs can affect mutation rates, etc., isn't very frightening or damaging to the conventional paradigm. The truly threatening idea about Lamarckism is the question of teleology: Can the environment direct changes that are beneficial to the organism? Mutation may be non-


random with respect to location (hotspots in the DNA, etc.), but it seems to be always random with respect to adaptive advantage. Lamarckian inheritance is exceptionally exciting if and only if it can actually help to explain the origin and changes in major aspects of morphology, as well as the exquisite machinery of the cell. Although the importance of Darwinian models of evolution is incontrovertible, deep and interesting challenges remain, for this and other theories of life (Behe, 1996; Denton, 1986; Margulis, 1998).

References
Behe, M. J. (1996). Darwin's black box. New York: The Free Press.
Dawkins, R. (1976). The selfish gene. New York: Oxford University Press.
Dawkins, R. (1982). The extended phenotype: The gene as the unit of selection. San Francisco: Freeman.
Denton, M. (1986). Evolution: A theory in crisis. Bethesda, MD: Adler & Adler.
Holland, J. H. (1992). Adaptation in natural and artificial systems. Cambridge, MA: MIT Press.
Koza, J. R. (1992). Genetic programming: On the programming of computers by means of natural selection. Cambridge, MA: MIT Press.
Margulis, L. (1998). Symbiotic planet. New York: Basic Books.
Shapiro, J. A. (1991). Genomes as smart systems. Genetica, 84, 3-4.
Smolin, L. (1997). The life of the cosmos. New York: Oxford University Press.

Michael Levin
Cell Biology Department
Harvard Medical School
Boston, MA 02115

Authentic Knowing: The Convergence of Science and Spiritual Aspiration by Imants Baruss. Purdue University Press, 1996. 228 pp. $14.95 (paper), $32.00 (cloth). ISBN 1-55753-085-8.
Many of us believe that the signal task of human existence is to achieve a better understanding of who and why we are. Although the nature of such efforts has varied throughout history, in the past 400 years, they have consistently taken shape along a continuum defined by the polarities of religion and science. This admirably honest book attempts to harmonize these disparate elements into a common theme of man's efforts to define his place within the universe. Baruss, a professor of psychology, contends that both science and spirituality express a need for "authentic knowing," a concept derived from Heidegger's Eigentlichkeit. The author summarizes the concept of authenticity as "the effort to act on the basis of our own understanding." He expands on this idea in a concentrated foray through classical and quantum physics. In so doing, he draws attention to our tendency to compartmentalize scientific and religious beliefs such that they apply little to our everyday actions and


thoughts. Such incongruities lead him into a discussion of the philosophy and social psychology of science. Here, he notes the heterogeneous composition of the scientific community, despite its claims to an objective universality (something with which readers of the Journal of Scientific Exploration are no doubt familiar). He quotes an old Sufi story to observe that many scientists seek smaller truths that lie where the light of their methods is brightest, rather than looking in the "darker" places where more important findings may lie concealed. With this introduction, he distinguishes between what he calls authentic and inauthentic science. The latter Baruss refers to as "scientism," and lists some of its characteristics as materialism, an emphasis on the blind accumulation of facts, and a belief in a universal, inflexible scientific method that can guarantee truth. Here, I believe he distinguishes the scientific method as a useful tool for discovery from its worship as a secular icon. In a wide-ranging discussion, Baruss explores theories of the mind ranging from Freud and Jung to Assagioli and Maslow, drawing attention to the similarities between certain types of meditative practice and psychotherapy. I was interested in the analogy he draws between witnessing meditation and psychoanalytic training. He discusses several scientific anomalies and applauds the formation of the Society for Scientific Exploration, although noting that its formation has not solved problems of funding nontraditional areas of research or creating new academic curricula. Baruss stumbles a bit when he attempts to solve the important problem of subjectivity by somewhat obscurely citing "modes of understanding superior to rational thought." These may well exist, but we remain too far from their understanding to employ them as research tools. Baruss goes on to discuss human transcendence (of the bounds of perception and conation) as exemplified in the writings of Franklin Wolff. Wolff, a mathematician by training, after achieving "enlightenment," wrote two books in which he made a sincere effort to describe the ineffable using analogies taken from mathematics. Treading into more dangerous waters, Baruss investigates the theosophical model as a means of describing transcendent states. This topic, too, is given an even-handed and responsible treatment. He is clearly aware of the dangers of blind obedience to any system and notes, as perhaps did John of the Cross, that leaving the security of the conventional entails a certain loss of groundedness, with the temptations of both scientism and blind spiritual allegiance (to regain a footing) being the price of one's authenticity. The author makes many important points. He notes that the normal functioning of the mind may obliterate the transcendent and that the best of science and spiritual practice are performed in a search for authentic knowing. He observes a hidden intelligent dimension in ourselves that occasionally irrupts into our lives with moments of transcendent insight. Here, he reminded me of Mircea Eliade, the great historian of religions, who took greatest delight in works of fiction that attempted to explore the meaning of such moments. To close with a quote from the book, "Knowledge is the currency of science, and when it is freed from the black and white constraints of its inauthentic aspects,


science can be opened up to a colorful exploration of consciousness and the ultimate nature of reality." I recommend this book highly. The author does not claim to know the final answers, but he has formulated the questions clearly enough. I suspect Diogenes could rest easily in Professor Baruss' classroom.

James A. Scott
Massachusetts General Hospital
Department of Radiology
Harvard Medical School
Boston, MA 02114

Alien Abductions: Creating a Modern Phenomenon by Terry Matheson. Amherst, New York: Prometheus Books, 1998. 317 pp. $26.95 (c). ISBN 1-57392-244-7.
The UFO rumor, craze, myth, or narrative fad has taken a fresh lease on life in the last couple of decades. Perhaps its biggest boost came in 1988 with a vasteyed neotonized grey gazing out from hundreds of thousands of paperback covers of Communion, horror writer Whitley Strieber's purportedly true story of alien abduction. It had already entered mass culture in a big way with Spielberg's 1977 Close Encounters of the Third Kind and is now firmly entrenched via the X-Files and the National Enquirer. What's going on? Are aliens really abducting and anally probing or fetus-harvesting thousands of reluctant people, perhaps for their entire lifetime and even beyond death into endless hybrid rebirths? That is a less interesting question for Terry Matheson, English professor at the University of Saskatchewan, than the puzzle of this narrative's immense popularity in a culture ostensibly shaped by rational bureaucratic and scientific canons. Professor Matheson's enjoyable, plainly written study proposes that the UFO narrative gains its force from its adoption as a myth, an organizing structure that (borrowing from Northrop Frye, Roland Barthes, and other mythologists) blends dream and ritual to "reflect a culture's preoccupations and concerns as well as the things it fears," ordering and structuring (although hardly explaining) "certain aspects of the world that would otherwise remain unintelligible or objects of dread" (p. 285). Some of these elements are age-old-the nature of cosmic, social, and interpersonal forces often allegorized in astrological or sacred doctrines-while others are typical precisely of our technological and allegedly rational epoch. Ancient Greeks figured powerful ambiguities of lust, war, and political power in such mythologized tales as Leda's rape by Zeus and her subsequent triple impregnation with twins, Castor and Pollux, and the Helen fated to ruin two cultures in protracted battle. Today, Matheson argues, we find in the bleak, emotionless, and faceless UFO aliens a strikingly vivid figuration of late twentieth century postindustrial life at its worst; but also at its darkly numinous,


with technology's ceaseless parade of wonders and medical promises, we find its ambiguous offer that we might soon transcend human limitations. Matheson's approach is historical (technically, diachronic), tracing the slow waxing and waning of subordinate elements in this developing mythos (a narrative form often regarded as timeless or synchronic). While he claims to bracket the reality of UFO abductions, attending solely to the narrative or semiotic codes and forces in play, this is disingenuous. The book is published, after all, by Prometheus, notable for its sturdy inquisitorial list of texts by people friendly to CSICOP and other ardent skeptical organizations. His method is known in literary studies as close reading: In a forensic way, he examines the claims and descriptions in several notable books by and about abductees, testing them for consistency, plausibility, sequential influence, and their rhetorical design and devices. A brief introduction to the nature of narrative draws on some technical work from the 1980s and earlier by narratologists and morphologists. Written testimony is not a clear windowpane through which we can see the truth of reported events. Authors help form and reshape the tale they tell. Even in legal narrative, the courtroom "construction of truth" is "primarily a matter of the overall narrative plausibility" (p. 37), the case as a whole rather than the details. Still, he notes, realistic details help, and it is noteworthy how many get inserted later into abduction narratives as authors tidy the blurts and scraps or summarize floods of verbal and pictorial testimony. Avowals of integrity and probity are frequently used to fill the void left by the inevitable absence of hard or compelling evidence, so such books usually begin with "an introduction or preface written by a presumably objective third party who often possesses impressive academic credentials" (p. 38). Such strategies are not invoked to deceive but are part of the protocols and practice of writing and reading, which impose certain "resemblances.. .from account to account [that] may say more about the nature of a realistic narrative's inner logic" (p. 39) than speak to their true content. All this is compelling, yet Matheson's skeptical presumptions can lead him astray. Citing a certain blatant inconsistency in John Fuller's The Interrupted Journey, he remarks: "Because Fuller makes no attempt to resolve this, the Hills' credibility is bound to suffer" (p. 53). Yet it did not do so, on that basis at least, which is a surprising fact that Matheson's methods do not quite resolve. His investigation proper launches from that key 1966 text, which dealt with the celebrated case using hypnosis, in the early 1960s, to unpack (or perhaps instill) the terrifying Ur-abduction reports of Betty and Barney Hill. In the nearly four decades since, its core story has gathered fresh elements, dropped others, strengthened that early reliance on hypnosis while abandoning the stuffy requirement that trained specialists should do the honors, and in general has followed a course convincingly seen as the elaboration of a living myth. Matheson takes us through consecutive versions from Raymond Fowler (Betty Andreasson's abduction and space tour, replete with fundamentalist im-


agery and glossolalia appropriate to her Christian faith, and Fowler's own abduction, belatedly realized) to Travis Walton's self-authored testimony, Ann Druffel and D. Scott Rogo's treatment of several gay women who oddly enough were spared the usual phallic probing, the vatic arrival of Budd Hopkins with his menacing extraterrestrial aliens, and Strieber with his even more terrifying occult shapeshifters, Ray Fowler again with four men in a boat whose drawings and stories do not especially resemble each other, despite the extensive interpretative zeal and leading questions of their interrogator, concluding with the arrival of the big guns from academia, historian Dr. David Jacobs and Harvard psychiatrist Professor John Mack.

Matheson adopts an annoying tic in these analytic recountings: Often, sentences tell us that "many readers may be inclined to ..." (p. 86), "some readers may conclude that ..." (p. 117), "many readers will emerge from this section having concluded ..." (p. 208), and on and on. This is exactly the kind of narrative bullying Matheson discerns so frequently in the abduction documents. I found myself wondering whether the first draft was a straightforward skeptical demolition of these often woolly, dreamlike, perverse, and inconsistent narratives, lightly rejigged with slightly old-fashioned narratological gadgetry in order to gain a publishing niche.

Professor Matheson's account is persuasive, as far as it goes. According to structuralist Claude Lévi-Strauss, myth is an ideational and affective ensemble of stories that articulates schemata of expected behaviors while specifying and defusing its culture's antinomies or internal contradictions. Its aim is the coercive institution of order, regularity, and harmony, even if those ends are met, at times, through controlled ritual passages into frenzy, carnival, and hysteria. This is the type of analysis adverted to in the book, although rarely attempted in any subtlety. Occasional references to Thomas Bullard's 1982 Ph.D. thesis, the only work I know of other than Jacques Vallée's to look closely into the folkloric and mythological components of the emerging mythos, make one hunger for more detailed analytics: a far more exact semiotic unmasking of the codes at work, rather than vague if plausible generalizations about the impact of science fiction iconography on vulnerable people at the ends of their twentieth-century tethers.

Structuralism, of course, is now out of fashion, replaced in the humanities by variants of critical theory, Lacanian psychoanalysis, poststructuralism, and discourse theory, approaches that emphasize a shimmering uncertainty where earlier models sought a reliable if abstract binary algebra. Even so, like Lévi-Strauss's myths and Jung's archaic dreams risen from a postulated collective unconscious, discourses are held by contemporary literary theory to possess an eerie autonomy, indeed a preeminence over any thought or intention that one might suppose lies behind the utterances they "enable." Like cosmic radiation, discourses traverse the fragmented subject. Meanwhile, cognitive and experimental neurosciences offer similar accounts, replacing old folk images of a unified self with modular brains and multiple intelligences. Consciousness, individual and collective, is prey to memes, an inward ecology of

mental viruses that might indeed in their totality constitute the parliament of the self. On this view, a myth might be a kind of commensal package of memes, a roaming mental genome that infests us even as we learn some local patois and hear or speak its latest modish utterances. Can hermeneutic, narratological, semiotic, or discursive analyses take us much farther than Matheson manages in his intriguing but frustrating book? He provides a telling critique of the major texts in the abduction industry, but he does not go very deep or venture far from the books under consideration. There is no discussion of the murky undergrowth that preceded and paralleled the invasion of the grey gynecologists: the delicious semi-occult contortions of John Keel (especially) and Brad Steiger, let alone the subterranean foliations and filiations of mind cults and more reputable marginal belief systems: the Heaven's Gate dupes, Scientology with its space opera theogony, Theosophy and its Ascended Masters, all the charmingly crackpot scholarship that Desmond Leslie assembled in the 1950s in Flying Saucers Have Landed. The vimanas of ancient Atlantis and Mu might not seem direct ancestors of John Mack's aporias, half in this world and half beyond it, but I scent a narrative trail. And there remain endless apertures for an inventive reader of these accounts. Andreasson's friendly ufonauts passed along faux explanations Matheson finds 'unedifying,' cast in "pseudoscientific language" (p. 198). This is so, but consider the following: the hybrid-engineering aliens must "put their 'protoplasma' in the 'nucleus of the fetus and the paragenetic,'" and Betty mentions "balancing 'the oscillating telemeter wheels"' (ibid), although this latter task apparently relates to their propulsion or guidance systems. But Betty simply might have been confused; in the last two years, biologists have found they can significantly extend the lifespan of in vitro human fibroblasts by inserting into the nucleus a genetic package that codes for the recently discovered enzyme telomerase. That prompts the chromosomes to repair their ever-shortening telomeres (key to their replicative capacity), which very recently were found to form a wheel or loop at the ends of DNA strands. Surely that is not what the aliens were telling Betty, but I would not be surprised if the evolving mythos incorporates such a reading with a cry of recognition. This kind of urban myth is so charming, so B-movie sci-fi, that for decades I have gobbled down the revelations of Adamski, Mack, Jacobs, C. D. B. Bryan (that upmarket journalist). I wallowed in Whitley Strieber and his creepy, profitable concoctions or perhaps psychodramas and laughed my head off at Jim Schnabel's splendid travelogue among the beamed-up, Dark White. Somewhere in there, a curious prickle ran down my spine. I started making lists, drawn from these books, of the signs and symptoms of alien abduction. I recalled the primary school near Monash University, where a whole class and the teacher witnessed a close encounter of the third kind, just a kilometer or two from where I was studying in April 1966. I glanced back through my own science fiction novels and, one after another, out popped virtually the entire checklist: the investigation and probing on the floating slab, the wafted transi-

tion through a wall in a bubble, the mysterious mutant fetus, the transferred embryo, the creatures suspended in tubes, the occlusions of memory, the great-eyed animals with cold voices, the prophecies of doom or transformation, etc. I am not about to spring any unseemly revelations, leap from the UFO closet. But it did focus my amazed attention on the ubiquity of these narrative elements, the odd way in which they seem to have seeped into our dreams and our unconscious (or out of it), long before they were written in fat lurid paperbacks or dramatized for network television and Spielberg movies. I do not know their source, and nobody else does either. Matheson is surely correct: The abduction mythos is a culturally created phenomenon or perhaps a spontaneously emergent one, catching in its slowly shifting narratives the changing pressures, fears, and hopes of turn-of-millennium technological societies. I hope other scholars, drawing on more recent techniques and perspectives, soon open out the trail he has broken.
Damien Broderick
Department of English and Cultural Studies
University of Melbourne, Australia

At the Threshold by Charles F. Emmons. Mill Spring, NC: Wild Flower Press, 1997. xi + 268 pp.
Skeptics unfamiliar with the field often dismiss the UFO phenomenon with something like, "If it were true, then surely science would recognize it by now." And even for people sympathetic to ufology, the lack of mainstream acceptance is something of a puzzle. Surely, given the reported size of alien craft and the frequency of sightings, one would expect overwhelming evidence despite institutional resistance. At the Threshold goes a long way toward resolving this puzzle. Author Charles Emmons, a professor of sociology at Gettysburg College, adopts primarily a sociological perspective-but not to explain away the UFO phenomenon as myth and folklore. Instead, At the Threshold examines the social mechanisms through which orthodox opinion has come to reject UFOs and through which ufology has become a deviant field. It also surveys the spectrum of opinion-among scientists, skeptics, and believers-regarding UFOs and explores some of the challenges UFOs pose to science, both as a social institution and as a method of understanding reality. Indeed, there is much explaining to be done. Emmons overviews the entire spectrum of UFO phenomena-from sightings, to physical evidence, to abductions, to channeling and other associated paranormal phenomena, making it clear that there is no facile, obvious means to explain away any category of UFO events. In discussing false-memory syndrome, for instance, Emmons discusses the precautions that hypnotherapists use to test for suggestibility and

notes the corroborations between hypnotically induced memories and other evidence. At the Threshold gives the lay reader a good feel for the type of dialogue that exists among debunkers, skeptics, and true believers, Without going into the minutiae of the debates, it overviews each type of UFO phenomenon, the explanations typically used to debunk it, and the severe weaknesses of these explanations. Emmons makes a convincing case that although there may be no incontrovertible evidence for any single UFO event, taken as a whole they defy conventional explanation. The debunkers' position is portrayed as a hodgepodge of ad hoc explanations, ridicule of opponents, and out-of-hand rejection of evidence that "couldn't be true and therefore isn't true." Emmons refrains from impugning anybody's personal integrity, but his point that CSICOP and the debunkers are caught in a narrow "reality tunnel" is hard to resist. At the Threshold draws on an impressive array of secondary sources, representing skeptical positions as well as those of ufologists. He also has gathered some new data (on ufology, not UFOs) that will make this book relevant even to seasoned researchers in the field. The surveys and interviews with mainstream astronomers are particularly illuminating in understanding how ufology became a deviant science. Emmons has found that most astronomers are shockingly uninformed about ufology, despite their majority opinion that the existence of extraterrestrial intelligence is highly likely. As Emmons carefully elucidates through interviews and statistics, astronomers (and other scientists) shun the field mostly because there is no funding for UFO research and because an interest in UFOs could ruin their academic careers. Such institutional mechanisms are abetted by media sensationalism and (official) government insistence that it has no evidence of UFOs, to isolate ufology from the rest of (mainstream) science. The disagreement between skeptics and believers as to what constitutes the realm of scientific possibility is mirrored within ufology itself. At the Threshold devotes an entire chapter to the New Age versus nuts-and-bolts division in ufology. While giving a clear exposition of the factions and controversies among ufologists, At the Threshold laudably avoids delving too deeply into the Byzantine web of conjecture, rebuttal, and conspiracy theory that has grown up around many of ufology's famous cases and phenomena. Just as orthodox science has no justification for summarily excluding ufology from its domain, neither can one justify excluding any UFO phenomena, no matter how New Age or weird (and some are weird indeed!) from ufology itself. Thus, taking UFOs seriously calls into question the very boundaries of science-not just scientific fact, but scientific method as well. Of course, ufology is not the only field to pose such a challenge; Emmons argues that such a challenge may even be raised within science itself by the New Physics. Emmons' final message is thus a call for tolerance and a suspension of judgment-an appropriate attitude for any field of scientific inquiry, but particularly for one that so powerfully challenges so many fundamental assumptions about reality.

At the Threshold is written in a clear, accessible style, free from jargon and free from any obvious agenda to debunk UFOs or prove they exist. Moreover, because At the Threshold covers most major UFO-related issues, it makes a fine introduction both to the content of the field and to its sociology. Emmons has done a commendable job of placing ufology in a social context that takes its researchers and discoveries at face value.
Charles Eisenstein
104 Matthew Circle
State College, PA 16801
chuck@statecollege.com

Minds in Many Pieces: Revealing the Spiritual Side of Multiple Personality Disorder by Ralph B. Allison, M.D., with Ted Schwarz, 2nd ed. Los Osos, CA: CIE Publishing, 1998. 208 pp. ISBN 0-9668949-0-1.
The author of this book is probably the most experienced and honored investigator of Multiple Personality Disorder in the world, and certainly in the United States. His foundational papers on the topic were published in the 1970s, when Dr. Allison started a tradition of giving workshops on the diagnosis and treatment of Dissociative Identity Disorders at meetings of the American Psychiatric Association. Discoveries and conclusions of the last 20 years are featured in the current edition of this work. Certain tentative opinions on some borderline or parapsychological issues followed from Dr. Allison's research. Since the topic is, even today, not widely studied in clinical and therapeutic medicine, Allison's account of how he became interested, and of early vicissitudes and later conflicts, deserves careful attention by borderline investigators. There is a fairly standard aetiology to be discovered in most highly developed cases of multiple personality disorder. Very early trauma and continued conflict within the family would surprise no student of cases. Allison is most impressed by and offers detailed discussion of cases that are not typical. Some of those even suggest spirit possession or successive life histories of the same "essential" person. Dr. Allison, himself, does not endorse either concept. He notes, however, that therapeutic methods suggestive of exorcism have been used, and sometimes were found effective in his own experience. The recommended attitude is pragmatic. If a certain procedure helps the patient recover unity of personality, it may be used before any conclusive interpretation of the forces or mechanisms involved. Some of the methods of exorcism are like those of monotheistic religions while others are more animistic in character. At one point in his career, Allison was investigated by the medical staff of an institution with which he was associated, but any suspicion of unprofessional practice was judged to be unwarranted and totally without merit. The pragmatic viewpoint

prevailed, and successful therapy was accepted on its own terms by a majority of the senior staff. To the layman, the number of split-off or dissociated "personalities" in a given case is certainly surprising. Allison finds that once a major division has occurred, new temporary self systems may emerge or be constructed whenever the individual faces new stressful situations. In some cases, what emerges defies all expectations and may be bizarre or even terribly dangerous. For example, a small frail woman may become capable of combat with a few powerful security people; or again, a devoted puritan may become unspeakably vulgar in language and behavior; or a sober person may behave like one deeply intoxicated. Changes in the opposite direction also occur. Allison finds that some dissociated systems may be malevolent or persecutorial, dangerous even to the single organism, while others are benevolent and may play a constructive role. In certain cases, Allison thinks, it is correct to postulate an Inner Self Helper. Some factor in the dissociated personality will emerge to serve as the most effective therapist. This idea came from one of Allison's patients, whose recorded discussions seemed like a theatrical situation involving several actors, one of which was the Inner Self Helper, or prime essence of the patient herself. This has become a pivotal concept in Dr. Allison's theory of the psychotherapeutic process.
Robert F. Creegan
Philosophy
The University at Albany, SUNY
Albany, NY 12222

Earth Under Fire: Humanity's Survival of the Apocalypse by Paul LaViolette. Schenectady, NY: Starlane Publications, 1997. 360 pp. $25.95, (c). ISBN 0-9642025-1-4, starecode.aol.com.
Breakthrough understandings of reality axiomatically are derived from new perspectives-fresh mental frameworks that like the lens of a pair of eyeglasses allow the wearer to see the external world with a whole new degree of clarity. Most often, these new metaphors come from intellectual structures derived from some discipline other than the one being studied. Machine age imagery, for example, became a common product of the industrial age, providing the architecture for the development of social organizational systems, economic theories, and even extraordinary new areas such as molecular nanotechnology. The principles of quantum physics are now working their way into business theory, and the understandings of the science of complexity are being explored as training metaphors for the military. Although great new understandings have been derived from this approach,

at the highest level, there is an intrinsic shortcoming to this mental mechanism: It is unidimensional and essentially linear. It provides only a single intellectual structure for making sense out of a puzzling new situation. It is not multidimensional.. .not a systems approach. This is important, because unlike the structure and process of modem science of the last couple of centuries-one only ventured into the sacred space of other disciplines at the risk of certain professional assault-the future will evolve at the intersection of diverse disciplines. The trend is clear: The indicators abound, but it seems it will be some time before physicists, chemists, and biologists see themselves as colleagues observing the same reality from different perspectives, rather than warriors in the service of defending their own hierarchical scientific fiefdoms. Earth Under Fire is a significant though perhaps flawed attempt to weave together the threads of a number of disciplines into a fabric that provides a picture of a possible near-term future for the earth-one that is laden with disaster. If he is right, Paul LaViolette has done us all a profound service. If he is wrong, well, this is still a beneficial work, for it certainly pushes out our horizons and further provides a very practical example of how difficult it is to know something about a lot of things and make them all work together seamlessly. This is particularly so with disciplines as diverse as astronomy, geology, paleontology, climatology, astrology, and messages from the tarot. This list may be enough for many potential readers to summarily discount the book, particularly the last two sources. But first consider the underlying hypothesis; it might get your attention. The basic notion of this book is that there is evidence from various sources that suggests that every 26,000 years (plus or minus 3,000), a huge blast emanates from the center of our galaxy, sending our way a high intensity "superwave" shower of cosmic rays followed by a giant cloud of cosmic dust. Sometimes this happens on a 13,000-year cycle. The cloud sets up an abrupt change in global weather that generally wipes most things out (i.e., human and animal life). This might lead you to ask when the next cycle is due. Well, any day now, LaViolette suggests. For me at least, that was reason enough to work my way through the book. Think of it as insurance; you never know. I was not disappointed. Not yet convinced, perhaps, but certainly provoked. In my business of professionally thinking about the future, this would be called an "early indicator" of a low probability, high impact "wild card" event. It might not turn into anything, but if it does, it would be big. If the above hasn't piqued your attention, then probably nothing more that I could say about LaViolette's argument and its shortcomings would persuade you to dip into this volume. But if you are one who is now intrigued, then a couple of additional comments are in order. There are as many perspectives through which to view the future as there are scientific disciplines. If you add in nonscientific sources such as astrology and elements of tarot cards, then immediately the rigor that one assumes would accompany a science-based argument quickly begins to erode. In this book, that

problem shows up in the first few pages. The zodiac, LaViolette contends, contains a couple of important explicit messages from our ancestors. Starting with Taurus, each of the signs in sequence stands for a specific component of subquantum creation. The key to how everything physical was created is apparently in the night sky. Ah, but there is a problem. A couple of the signs are out of order, we are told, and if you rearrange them "correctly," then they tell the right story. A bit of a reach, it seemed to me. That same kind of logic continues with the assertion that "these six pairs of celestial bodies map out a temperature gradient extending from Leo (the Sun) at the warmest end to Aquarius and Capricorn (Saturn) at the coolest end." But there is another problem. "Although the Moon is cooler than Mercury, it is, by far, much brighter, so this minor exception to the rule may be overlooked," writes the author. Then the zodiacal message from the past also supposedly references the center of our galaxy as the source of regular extraordinary explosions because (among other things) Sagittarius is shooting his arrow straight toward the point that is the center of the Milky Way, our galactic home. Yes, but it really does not point directly at the center-kind of in the general direction of the hub of our spiral neighborhood. It would have been closer 18,200 years ago, but how does one even know that there is any symbolism in the direction that the archer is pointing his bow? But it gets better-really. The geology and climate chapters are quite compelling. Careful assessments of beryllium-10 deposits in ice corings in Antarctica (presumed to be the product of cosmic ray showers) show that the earth was washed near the end of the last ice age and on several earlier occasions-times that correlate with other indicators of a superwave event. LaViolette does an admirable job of linking various ancient myths with physical and geological indicators that suggest that the solar system has experienced major conflagrations from solar outbursts throughout its past. He relates these possible events with historical changes in the meltwater discharge from the Mississippi River by looking at cores in the Gulf of Mexico. There are other signs. The rates of disappearance of large and small mammals in North America over the eons line up. Destructive floods called glacier bursts or glacier floods also show up at the same times. The rather famous Siberian mammoths that died and froze so suddenly that they still had grass in their mouths and stomachs were not enclosed within ice, but within frozen silt, "or in other words, within glacial flood sediments." Because grass cannot grow in that part of the world now because of the cold climate, it is fair to posit that the climate changed suddenly. This theory has been offered as the basic evidentiary argument for magnetic pole shifts, when the earth's crust becomes temporarily unclutched from its molten core, but it can also be supported by the notion of glacier floods. It is only in recent years that a great deal about our geological and geographic

past has been learned from ice corings. Rapid climate change, for example, did not seem like a realistic possibility until a decade ago. It is now, therefore, provocative to think about other events signaled by minute deposits that remain after years. I do not pretend to understand, let alone believe in, astrology. I certainly do not know anything about the tarot. But that does not keep me from believing that someday we might learn something that puts these perennial pursuits in a more logical light. In the meantime, setting aside those aspects of the book, if one objectively looks at all of the evidence that LaViolette has collected from the traditional sciences, it is possible to come away saying, "This might be possible." That puts the future in a whole new light. This is a book that is both fantastic and reasonable, soaring and detailed, and theoretical and practical. It promotes one of the biggest ideas in quite some time and therefore merits critical study by others in many disciplines.
John L. Petersen is president of The Arlington Institute, a Washington, DC-area research institute, and author of Out of the Blue: Wild Cards and Other Big Future Surprises.
John L. Petersen
The Arlington Institute
1501 Lee Highway, Suite 204
Arlington, VA 22209-1109
johnp@arlingtoninstitute.org

SSE News

Fifth Biennial SSE European Meeting

Announcement and Call for Papers
The Fifth Biennial SSE European Meeting will be held from Oct. 20-22, 2000, at the University of Amsterdam in the Netherlands. Dick Bierman and Chris Duif are hosting the meeting and coordinating registration. Ezio Insinna is the chairman of the program committee.
Program
A number of distinguished scholars have been invited to speak on the theme: Unorthodox Science: Past, Present, and Future. The following speakers have accepted at the time of publication:
S. J. Doorman (Dutch Skeptical Society): On the Future of Unorthodox Science.
F. H. van Lunteren (University Utrecht, Netherlands): On the History of Unorthodox Science.
W. Peschka (Germany): Kinetobaric Effects and Bioinformational Transfer.
Pyatnitsky (Moscow Institute for High Temp., Russia): Effects of Consciousness on the Structure of Water.
S. E. Shnoll (Moscow State University, Russia): Realization of Discrete States during Fluctuations in Macroscopic Processes.
R. Sheldrake (UK): Animal Communication.
R. van Wijk (University Utrecht, Netherlands): High Dilutions Research.
K. Zioutas (University Thessalonike, Greece): Evidence of Dark Matter from Biological Observations.
Further information on the program will be announced on the SSE website (www.scientificexploration.org/meetings/euro5.html).

Call for Papers
All interested European SSE members are encouraged to submit abstracts for short contributed presentations to the program committee consisting of Ezio Insinna, Brenda Dunne, and Dick Bierman. Abstracts no longer than 500 words must be sent before August 15, preferably by e-mail to emi2@worldnet.fr. Address: Ezio Insinna, 18 allée des Frères Lumière, 77600 Bussy Saint Georges, France.

LOGISTICAL INFORMATION
Traveling
Schiphol International Airport can be reached easily from almost everywhere. Trains from the airport to the city (Central Station) run every 15 minutes; travel time is about 15 minutes. If you are traveling by automobile, please note that there is no free parking around the conference location. Check with your hotel or with the local organizer, Dick Bierman, about how to arrange for parking.

Hotels
Main Hotel (~$90): Lancaster Hotel, Plantage Middenlaan 22, Amsterdam, the Netherlands. Phone: +31 (0) 20 5356888. Email: res.lancasterhotel@edenhotelgroup.nl. Mention group code UVA-SSE when making reservations.
Budget hotel, backpacker style (~$45): Arena Budget Hotels, Gravezandestraat 51, 1092 AA Amsterdam, the Netherlands. Phone: +31 (0) 20 6947444. Email: infor@hotelarena.nl.
Information on other smaller hotels within walking distance will be provided on the website. The hotels are situated in a relatively green area near the Zoo and can be reached by taking either a cab from the Central Station or tram nr. 9 (get off at the Zoo for the Lancaster or the Tropenmuseum for the Arena).
Social Activities
The University of Amsterdam is situated near the center of the city, and the program will be set up in such a way that attendees have opportunities to enjoy the cheerful atmosphere and picturesque canals of Amsterdam. A reception and registration will take place on Thursday evening (October 19) at a nearby café (to be announced), all within a two-minute walk from the Lancaster Hotel. The Society banquet is planned for Saturday evening at the Zoo restaurant, with a splendid view over the animals and gardens. The banquet will be preceded by a trip through the Amsterdam canals. The dress code is informal.
Sightseeing
Friday evening is free for exploring the city. Sunday afternoon might be used to visit the museums, which have large collections of Rembrandt and van Gogh paintings. The Anne Frank House is open for visitors on Sunday afternoon. For more information on local arrangements contact: Dick Bierman, email: bierman@psy.uva.nl, fax: +31-20-6391656.
Registration
Copy this registration form and send it to:
Dick Bierman
University of Amsterdam
Roetersstraat 15
1018 WB Amsterdam
The Netherlands
Or: Register using the web registration form at our website.

Registration Form European SSE 2000 Conference

Contact Information
Name:
Affiliation for Badge:
Street Address:

Phone:

E-mail:

Registration Fee
Select Registration Type:
Early Fee (before Sept. 1): $85 / Dfl. 190,-
Late Fee (after Sept. 1): $100 / Dfl. 220,-
Daily Registration (1 day): $40 / Dfl. 100,-
Daily Registration (2 days): $75 / Dfl. 170,-
50% DISCOUNT. Check this if you are an enrolled student.

Lunches and Banquet
On Site Lunches ($20 for 3 days)
Friday Night Banquet ($35) (includes trip through canals)
Check this if you are a vegetarian

Payment Method
Total amount (the sum of the registration fee, lunches, and banquet): $
Select Payment Method:
Credit Card
Card Number:
Expiration Date:
Signature:
Check (make check payable to Euro SSE 2000)
Bank transfer to PostBank Account 8417584 of Euro SSE 2000, the Netherlands
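(For example, using the fees listed above: an early registrant taking the on-site lunches and the banquet would enter a total of $85 + $20 + $35 = $140.)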

Please mail registration form to Dr. Dick J. Bierman, SSE 2000, University of Amsterdam, Roetersstraat 15, 1018 WB Amsterdam, The Netherlands

Society for Scientific Exploration Officers
Prof. Peter A. Sturrock, President
Varian 302, Stanford University, Stanford, CA 94305-4060
Ms. Brenda Dunne, Executive Vice President for Education
C131, School of Engineering & Applied Science, Princeton University, Princeton, NJ 08544-5263
Prof. Robert Jahn, Vice President
D334, School of Engineering & Applied Science, Princeton University, Princeton, NJ 08544-5263
Prof. L. W. Fredrick, Secretary
Department of Astronomy, P.O. Box 3818, University of Virginia, Charlottesville, VA 22903-0818
Prof. Charles R. Tolbert, Treasurer
Department of Astronomy, P.O. Box 3818, University of Virginia, Charlottesville, VA 22903-0818

Council

Dr. Marsha Adams, 1100 Bear Gulch Rd., Woodside, CA 94062
Prof. Henry H. Bauer, Chemistry, 234 Lane Hall, VPI & SU, Blacksburg, VA 24061-0247
Dr. Stephan Baumann, fMRI Project Director, Psychology Software Tools, Inc., 2050 Ardmore Blvd., Pittsburgh, PA 15221
Prof. John Bockris, Department of Chemistry, Texas A&M University, College Station, TX 77843
Dr. John S. Derr, Albq. Seismology Center, Albuquerque, NM 87115

Dr. Roger D. Nelson, C131, Engineering Quad., Princeton University, Princeton, NJ 08544
Dr. Harold E. Puthoff, Institute for Advanced Studies-Austin, 4030 W. Braker Ln., Suite 300, Austin, TX 78759-5329
Dr. Beverly Rubik, Institute for Frontier Science, 6114 LaSalle Avenue, #605, Oakland, CA 94611
Dr. Robert M. Wood, 1727 Candlestick Ln., Newport Beach, CA 92660

Journal of Scientific Exploration, Vol. 14, No. 3, p. 303, 2000

0892-3310/00 © 2000 Society for Scientific Exploration

TRIBUTE
The current issue of our Journal, Volume 14, Number 3, is the first to be compiled under the direction of our new Editor, Professor Henry Bauer. The Council is most grateful to Henry for taking on this responsibility and we wish him well. However, it is also appropriate to reflect on the great achievement of the previous Editor-in-Chief, Dr. Bernhard M. Haisch, and the Executive Editor, Ms. Marsha Sims. They served as editors for so long that it is difficult to remember that there was an editor before them, but we are indebted also to Professor Ronald A. Howard who was our first editor, negotiating a contractual relationship with Pergamon Press and bringing the Journal into existence in 1987. Bernie took over the editorship of the Journal in 1988, and Marsha joined him in the editorial office in 1990. The relationship with Pergamon Press ended in 1992, when Allen Press took over. That was a critical juncture for the future of the Journal, and Bernie and Marsha set their sights high: They decided to publish four issues per year rather than two, and they aimed to expand both the length of each issue and the subscription base. They have had great success on all fronts: The last complete volume (Volume 13) totaled 724 pages, and the number of independent subscribers grew to 750 in 1999. These changes came about with no reduction in the quality of the Journal. Bernie instituted an Advisory Board that now has 18 members, and exercised tight control over the scientific standard of papers published in the Journal. It is hard to separate the growth of the Society from the growth of the Journal. This is a symbiotic relationship. In the last 10 years, the Society membership has grown from 303 to 729, comprising 371 associates, 260 full members, 44 emeritus members, and 54 student members. We now have members in over 40 countries scattered over the globe. This growth of the Society is due in no small measure to Bernie's and Marsha's vision and effort. We are also indebted to Bernie and Marsha that they instituted our current smooth and productive relationship with Allen Press, a relationship that is now even more important to the Society, because Allen Press will be handling more of our business, both for the Journal and for the Society. Our efficient and cooperative friends at Allen Press also deserve our thanks. Now that they have handed over the editorship and the management of the Journal, Bernie and Marsha may focus their energies on another exciting new enterprise. Bernie has created, and now directs, the California Institute of Physics and Astrophysics, where Marsha now serves as executive administrator. We wish them well and we hope that some of the results of their activities will be published in due course in the Journal of Scientific Exploration. Peter A. Sturrock

Journal of Scientific Exploration, Vol. 14, No. 3, pp. 304-305, 2000

0892-3310/00 © 2000 Society for Scientific Exploration

EDITORIAL
I feel honored to have been asked to serve as Editor-in-Chief. The task will be made possible by many people who have agreed to help in various ways and to whom I am enormously grateful: Stephen Braude, Dean Brown, Dean Radin, and Mark Rodeghier as Associate Editors; Michael Epstein, who has taken on book-review management and also serves as Associate Editor; David Moncrief, so assiduous in finding books worthy of review and in finding people to review them. Jill Franklin, our initial Managing Editor at Allen Press, has actually done most of the work for which I will be given credit. Journals often signal changes in editorships with some striking change in design to signal a revivification. I recall that with Chemical & Engineering News, with Science, and more recently, with American Scholar. The Journal of Scientific Exploration, however, stands in no need of revivification. My hope as I become Editor-in-Chief is that no change will be noticed in the substantive, physically attractive periodical that Bernie Haisch and Marsha Sims nurtured for so long; indeed, one might say created. So, in becoming Editor, I call for no new direction but rather recall our intellectual purpose: to provide for intellectually responsible discussion of scientifically anomalous topics not generally treated in mainstream scientific journals. One thinks at once of such subjects as UFOs, psychic phenomena, and cryptozoology. But we have equally provided a welcomed forum for unorthodoxies quite within the mainstream disciplines. The Society's Dinsdale Award has been accepted by Kilmer McCully, Halton Arp, William Corliss, Helmut Schmidt, and Ian Stevenson, so by any reckoning, the Society has honored work within the mainstream at least as often as work supposedly beyond the scientific pale. Mainstream orthodoxy routinely resists novelties that later become accepted. Throughout the 20th century there are examples: Bretz's Spokane flood, McClintock's recognition of "jumping genes", Mitchell's insights into biological energy mechanisms, Woese's Archaea, and McCully's homocysteine. Only late in the 20th century did science reluctantly grant that acupuncture can have some analgesic effect, that ball lightning exists, that the kraken is not myth but the real giant squid, that it is not foolish to look for intelligent life outside the Earth, that 5000-year-old megaliths incorporate substantial knowledge of astronomy, that human beings inhabited the Americas long before the days of the Clovis culture, and that living systems can sense not only electrical but also magnetic fields. Indeed, it may well be that the suppression of unorthodox views in science is on the increase rather than in decline. In Prometheus Bound (1994), John Ziman has outlined how science changed during the 20th century: traditionally (since perhaps the 17th century) a relatively disinterested knowledge-seeking activity, science progressively be-

came handmaiden to industry and government, and its direction of research is increasingly influenced by vested interests and self-interested bureaucracies, including bureaucracies supposedly established to promote good science such as the National Academies, the National Science Foundation, and the National Institutes of Health. Parkinson's Law, it may be, applies to science as to other human activities: no sooner has an organization become successfully established than it is by that token already an obsolescent nuisance. Even recently accepted theories may quickly become too dogmatically held. I had thought that plate tectonics, for example, which just a few decades ago justified Wegener's heresy, remained the generally accepted scientific wisdom, but to my surprise, there exists a group of dissenting geologists who publish a newsletter with discussions of the geological phenomena for which plate tectonics does not provide a satisfactory explanation. David Pratt reviews the situation in this issue of the Journal of Scientific Exploration. Again, it is less than 2 decades since the discovery was hailed of HIV as the cause of AIDS, yet that belief too may be coming apart at the seams after South Africa's President Mbeki challenged the scientific orthodoxy to engage in genuine discourse with the dissidents. Yet another contemporary instance: a book just published makes a compelling case that cold fusion is a real phenomenon, albeit its present name may be misleading. The Society for Scientific Exploration, then, is performing a service to science more immediate and more connected to the mainstream than was envisaged when the society was founded. At the same time, we do not neglect the more heretical subjects. In this issue, Suitbert Ertel reemphasizes how potent is the evidence for the quasi-astrological Mars effect. In a future issue, a major study of the replicability of mind-machine interactions will offer provocative food for thought. So the Journal and the Society fill a unique niche-a forum for disciplined investigative discussion of what may become accepted science later in the 21st century and beyond.
Henry Bauer

Journal of Scientific Exploration, Vol. 14, No. 3, pp. 307-352, 2000

0892-3310/00 © 2000 Society for Scientific Exploration

Plate Tectonics: A Paradigm Under Threat

David Pratt
Daal en Bergselaan 68, 2565 AG The Hague, The Netherlands
dp5@compuserve.com

Abstract-This paper looks at the challenges confronting plate tectonics-the ruling paradigm in the earth sciences. The classical model of thin lithospheric plates moving over a global asthenosphere is shown to be implausible. Evidence is presented that appears to contradict continental drift, seafloor spreading, and subduction, as well as the claim that the oceanic crust is relatively young. The problems posed by vertical tectonic movements are reviewed, including evidence for large areas of submerged continental crust in today's oceans. It is concluded that the fundamental tenets of plate tectonics might be wrong.
Keywords: plate tectonics - continental roots - age of seafloor - vertical tectonics - surge tectonics.

Introduction
The idea of large-scale continental drift has been around for some 200 years, but the first detailed theory was proposed by Alfred Wegener in 1912. It met with widespread rejection, largely because the mechanism he suggested was inadequate: the continents supposedly plowed slowly through the denser oceanic crust under the influence of gravitational and rotational forces. Interest was revived in the early 1950s with the rise of the new science of paleomagnetism, which seemed to provide strong support for continental drift. In the early 1960s, new data from ocean exploration led to the idea of seafloor spreading. A few years later, these and other concepts were synthesized into the model of plate tectonics, which was originally called "the new global tectonics." According to the orthodox model of plate tectonics, the earth's outer shell, or lithosphere, is divided into a number of large, rigid plates that move over a soft layer of the mantle known as the "asthenosphere" and interact at their boundaries, where they converge, diverge, or slide past one another. Such interactions are believed to be responsible for most of the seismic and volcanic activity of the earth. Plates cause mountains to rise where they push together, and continents to fracture and oceans to form where they rift apart. The continents, sitting passively on the backs of the plates, drift with them, at the rate of a few centimeters per year. At the end of the Permian, some 250 million years ago, all the present continents are said to have been gathered together in a single supercontinent, Pangaea, consisting of two major landmasses: Laurasia in

the north, and Gondwanaland in the south. Pangaea is widely believed to have started fragmenting in the early Jurassic-although this is sometimes said to have begun earlier, in the Triassic, or even as late as the Cretaceous-resulting in the configuration of oceans and continents observed today. It has been said that "a hypothesis that is appealing for its unity or simplicity acts as a filter, accepting reinforcement with ease but tending to reject evidence that does not seem to fit" (Grad, 1971, p. 636). Meyerhoff and Meyerhoff (1974b, p. 411) argued that this is "an admirable description of what has happened in the field of earth dynamics, where one hypothesis-the new global tectonics-has been permitted to override and overrule all other hypotheses." Nitecki et al. (1978) reported that in 1961 only 27% of western geologists accepted plate tectonics, but that during the mid-1960s a "chain reaction" took place, and by 1977 it was embraced by as many as 87%. Some proponents of plate tectonics have admitted that a bandwagon atmosphere developed and that data that did not fit into the model were not given sufficient consideration (e.g., Wyllie, 1976), resulting in "a somewhat disturbing dogmatism" (Dott and Batten, 1981, p. 151). McGeary and Plummer (1998, p. 97) acknowledge that "geologists, like other people, are susceptible to fads." Maxwell (1974) stated that many earth-science papers were concerned with demonstrating that some particular feature or process may be explained by plate tectonics, but that such papers were of limited value in any unbiased assessment of the scientific validity of the hypothesis. Van Andel (1984) conceded that plate tectonics had serious flaws and that the need for a growing number of ad hoc modifications cast doubt on its claim to be the ultimate unifying global theory. Lowman (1992a) argued that geology has largely become "a bland mixture of descriptive research and interpretive papers in which the interpretation is a facile cookbook application of plate-tectonics concepts... used as confidently as trigonometric functions" (p. 3). Lyttleton and Bondi (1992) held that the difficulties facing plate tectonics and the lack of study of alternative explanations for seemingly supportive evidence reduced the plausibility of the theory. Saull (1986) pointed out that no global tectonic model should ever be considered definitive, because geological and geophysical observations are nearly always open to alternative explanations. He also stated that even if plate tectonics were false, it would be difficult to refute and replace, for the following reasons: the processes supposed to be responsible for plate dynamics are rooted in regions of the earth so poorly known that it is hard to prove or disprove any particular model of them; the hard core of belief in plate tectonics is protected from direct assault by auxiliary hypotheses that are still being generated; and the plate model is so widely believed to be correct that it is difficult to get alternative interpretations published in the scientific literature. When plate tectonics was first elaborated in the 1960s, less than 0.0001% of the deep ocean had been explored and less than 20% of the land area had been mapped in meaningful detail. Even by the mid-1990s, only about 3%-5% of

the deep ocean basins had been explored in any kind of detail, and not much more than 25%-30% of the land area could be said to be truly known (Meyerhoff et al., 1996a). Scientific understanding of the earth's surface features is clearly still in its infancy, to say nothing of the earth's interior. Beloussov (1980, 1990) held that plate tectonics was a premature generalization of still very inadequate data on the structure of the ocean floor and had proven to be far removed from geological reality. He wrote:
It is ... quite understandable that attempts to employ this conception to explain concrete structural situations in a local rather than a global scale lead to increasingly complicated schemes in which it is suggested that local axes of spreading develop here and there, that they shift their position, die out, and reappear, that the rate of spreading alters repeatedly and often ceases altogether, and that lithospheric plates are broken up into an even greater number of secondary and tertiary plates. All these schemes are characterised by a complete absence of logic, and of patterns of any kind. The impression is given that certain rules of the game have been invented, and that the aim is to fit reality into these rules somehow or other (1980, p. 303).

Criticism of plate tectonics has increased in line with the growing number of observational anomalies. This paper outlines some of the main problems facing the theory.
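For a rough sense of scale, using only figures already cited: drift at a few centimeters per year, if sustained over the roughly 200 million years since a supposed early Jurassic breakup of Pangaea, would amount to displacements of several thousand kilometers (for example, 3 cm/yr x 200 Myr = 6,000 km), which is the order of magnitude of relative motion at issue in the sections that follow.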

Plates in Motion?
According to the classical model of plate tectonics, lithospheric plates creep over a relatively plastic layer of partly molten rock known as the "asthenosphere" (or low-velocity zone). According to a modern geological textbook (McGeary and Plummer, 1998), the lithosphere, which comprises the earth's crust and uppermost mantle, averages about 70 km thick beneath oceans and is at least 125 km thick beneath continents, while the asthenosphere extends to a depth of perhaps 200 km. It points out that some geologists think that the lithosphere beneath continents is at least 250 km thick. Seismic tomography, which produces three-dimensional images of the earth's interior, appears to show that the oldest parts of the continents have deep roots extending to depths of 400-600 km and that the asthenosphere is essentially absent beneath them (Figure 1). McGeary and Plummer (1998) say that these findings cast doubt on the original, simple lithosphere-asthenosphere model of plate behavior. They do not, however, consider any alternatives. Despite the compelling seismotomographic evidence for deep continental roots (Dziewonski and Anderson, 1984; Dziewonski and Woodhouse, 1987; Grand, 1987; Lerner-Lam, 1988; Forte, Dziewonski, and O'Connell, 1995; Gossler and Kind, 1996), some plate tectonicists have suggested that we just happen to live at a time when the continents have drifted over colder mantle (Anderson, Tanimoto, and Zhang, 1992), or that continental roots are really no

Fig. 1. Seismotomographic cross-section showing velocity structure across the North American craton and North Atlantic Ocean. High-velocity (colder) lithosphere, shown in dark tones, underlies the Canadian shield to depths of 250-500 km. (Reprinted with permission from Grand, 1987. Copyright by the American Geophysical Union.)

mantle material beneath them, giving the illusion of much deeper roots (Polet and Anderson, 1995). However, evidence from seismic-velocity, heat-flow, and gravity studies has been building up for several decades, showing that ancient continental shields have very deep roots and that the low-velocity asthenosphere is very thin or absent beneath them (e.g., Jordan, 1975, 1978; MacDonald, 1963; Pollack and Chapman, 1977). Seismic tomography has merely reinforced the message that continental cratons, particularly those of Archean and Early Proterozoic age, are "welded" to the underlying mantle, and that the concept of thin (less than 250 km thick) lithospheric plates moving thousands of kilometers over a global asthenosphere is unrealistic. Nevertheless, many textbooks continue to propagate the simplistic lithosphere-asthenosphere model and fail to give the slightest indication that it faces any problems (e.g., McLeish, 1992; Skinner and Porter, 1995; Wicander and Monroe, 1999). Geophysical data show that, far from the asthenosphere being a continuous layer, there are disconnected lenses (asthenolenses), which are observed only in regions of tectonic activation and high heat flow. Although surface-wave observations suggested that the asthenosphere was universally present beneath the oceans, detailed seismic studies show that here, too, there are only asthenospheric lenses. Seismic research has revealed complicated zoning and inhomogeneity in the upper mantle and the alternation of layers with higher and lower velocities and layers of different quality. Individual low-velocity layers are bedded at different depths in different regions and do not compose a single layer. This renders the very concept of the lithosphere ambiguous, at least that of its base. Indeed, the definition of the lithosphere and astheno-

sphere has become increasingly blurred with time (Pavlenkova, 1990, 1995, 1996). Thus, the lithosphere has a highly complex and irregular structure. Far from being homogeneous, "plates" are actually "a megabreccia, a 'pudding' of inhomogeneities whose nature, size, and properties vary widely" (Chekunov, Gordienko, and Guterman, 1990, p. 404). The crust and uppermost mantle are divided by faults into a mosaic of separate, jostling blocks of different shapes and sizes, generally a few hundred kilometers across, and of varying internal structure and strength. Pavlenkova (1990, p. 78) concludes: "This means that the movement of lithospheric plates over long distances, as single rigid bodies, is hardly possible. Moreover, if we take into account the absence of the asthenosphere as a single continuous zone, then this movement seems utterly impossible." She states that this is further confirmed by the strong evidence that regional geological features, too, are connected with deep (more than 400 km) inhomogeneities and that these connections remain stable during long periods of geologic time; considerable movement between the lithosphere and asthenosphere would detach near-surface structures from their deep mantle roots. Plate tectonicists who accept the evidence for deep continental roots have proposed that plates may extend to and glide along the 400-km, or even 670-km, seismic discontinuity (Jordan, 1975, 1978, 1979; Seyfert, 1998). Jordan, for instance, suggested that the oceanic lithosphere moves on the classical low-velocity zone while the continental lithosphere moves along the 400-km discontinuity. However, there is no certainty that a superplastic zone exists at this discontinuity, and no evidence has been found of a shear zone connecting the two decoupling layers along the trailing edge of continents (Lowman, 1985). Moreover, even under the oceans, there appears to be no continuous asthenosphere. Finally, the movement of such thick "plates" poses an even greater problem than that of thin lithospheric plates. The driving force of plate movements was initially claimed to be mantle-deep convection currents welling up beneath midocean ridges, with downwelling occurring beneath ocean trenches. Since the existence of layering in the mantle was considered to render whole-mantle convection unlikely, two-layer convection models were also proposed. Jeffreys (1974) argued that convection cannot take place because it is a self-damping process, as described by the Lomnitz law. Plate tectonicists expected seismic tomography to provide clear evidence of a well-organized convection-cell pattern, but it has actually provided strong evidence against the existence of large, plate-propelling convection cells in the upper mantle (Anderson, Tanimoto, and Zhang, 1992). Many geologists now think that mantle convection is a result of plate motion rather than its cause and that it is shallow rather than mantle deep (McGeary and Plummer, 1998). The favored plate-driving mechanisms at present are "ridge push" and "slab pull." The latter is considered

the dominant mechanism and refers to the gravitational subsidence of subducted slabs. However, it will not work for plates that are largely continental or that have leading edges that are continental, because continental crust cannot be bodily subducted due to its low density, and it seems utterly unrealistic to imagine that ridge push from the Mid-Atlantic Ridge alone could move the 120°-wide Eurasian plate (Lowman, 1986). Moreover, evidence for the long-term weakness of large rock masses casts doubt on the idea that edge forces can be transmitted from one margin of a "plate" to its interior or opposite margin (Keith, 1993). Thirteen major plates are currently recognized, ranging in size from about 400 by 2,500 km to 10,000 by 10,000 km, together with a proliferating number of microplates (over 100 so far). Van Andel (1998) writes:
Where plate boundaries adjoin continents, matters often become very complex and have demanded an ever denser thicket of ad hoc modifications and amendments to the theory and practice of plate tectonics in the form of microplates, obscure plate boundaries, and exotic terranes. A good example is the Mediterranean, where the collisions between Africa and a swarm of microcontinents have produced a tectonic nightmare that is far from resolved. More disturbingly, some of the present plate boundaries, particularly in the eastern Mediterranean, appear to be so diffuse and so anomalous that they cannot be compared to the three types of plate boundaries of the basic theory.

Plate boundaries are identified and defined mainly on the basis of earthquake and volcanic activity. The close correspondence between plate edges and belts of earthquakes and volcanoes is therefore to be expected and can hardly be regarded as one of the "successes" of plate tectonics (McGeary and Plummer, 1998). Moreover, the simple pattern of earthquakes around the Pacific Basin on which plate tectonics models have hitherto been based has been seriously undermined by more recent studies showing a surprisingly large number of earthquakes in deep-sea regions previously thought to be aseismic (Storetvedt, 1997). Another major problem is that several "plate boundaries" are purely theoretical and appear to be nonexistent, including the northwest Pacific boundary of the Pacific, North American, and Eurasian plates, the southern boundary of the Philippine plate, part of the southern boundary of the Pacific plate, and most of the northern and southern boundaries of the South American plate (Stanley, 1989).

Continental Drift
Geological field mapping provides evidence for horizontal crustal movements of up to several hundred kilometers (Jeffreys, 1976). Plate tectonics, however, claims that continents have moved up to 7,000 km or more since the alleged breakup of Pangaea. Measurements using space-geodetic techniques-very long baseline interferometry, satellite laser-ranging, and the global positioning system-have been hailed by some workers as having

Plate Tectonics: A Paradigm Under Threat

313

but do not provide evidence for plate motions of the kind predicted by plate tectonics unless the relative motions predicted among all plates are observed. However, many of the results have shown no definite pattern and have been confusing and contradictory, giving rise to a variety of ad hoc hypotheses (Fallon and Dillinger, 1992; Gordon and Stein, 1992; Smith et al., 1994). Japan and North America appear, as predicted, to be approaching each other, but distances from the Central South American Andes to Japan or Hawaii are more or less constant, whereas plate tectonics predicts significant separation (Storetvedt, 1997). Trans-Atlantic drift has not been demonstrated, because baselines within North America and western Europe have failed to establish that the plates are moving as rigid units; they suggest in fact significant intraplate deformation (James, 1994; Lowman, 1992b). Space-geodetic measurements to date have therefore not confirmed plate tectonics. Moreover, they are open to alternative explanations (e.g., Carey, 1994; Meyerhoff et al., 1996a; Storetvedt, 1997). It is clearly a hazardous exercise to extrapolate present crustal movements tens or hundreds of millions of years into the past or future. Indeed, geodetic surveys across "rift" zones (e.g., in Iceland and East Africa) have failed to detect any consistent and systematic widening as postulated by plate tectonics (Keith, 1993). Fits and Misfits A "compelling" piece of evidence that all the continents were once united in one large landmass is said to be the fact that they can be fitted together like pieces of a jigsaw puzzle. Many reconstructions have been attempted (e.g., Barron, Harrison, and Hay, 1978; Bullard, Everett, and Smith, 1965; Dietz and Holden, 1970; Nafe and Drake, 1969; Scotese, Gahagan, and Larson, 1988; Smith and Hallam, 1970; Smith, Hurley, and Briden, 1981 ;Tarling, 197 I), but none are entirely acceptable (Figures 2 and 3). In the Bullard, Everett, and Smith (1965) computer-generated fit, for example, there are a number of glaring omissions. The whole of Central America and much of southern Mexico are left out, despite the fact that extensive areas of Paleozoic and Precambrian continental rocks occur there. This region of some 2,100,000 km2 overlaps South America in a region consisting of a craton at least 2 billion years old. The entire West Indian archipelago has also been omitted. In fact, much of the Caribbean is underlain by ancient continental crust, and the total area involved (300,000 km 2) overlaps Africa (Meyerhoff and Hatten, 1974). The Cape Verde Islands-Senegal Basin, too, is underlain by ancient continental crust, creating an additional overlap of 800,000 km2. Several major submarine structures that appear to be of continental origin are ignored in the Bullard, Everett, and Smith (1 965) fit, including the FaeroeIceland-Greenland Ridge, Jan Mayen Ridge, Walvis Ridge, Rio Grande Rise, and the Falkland Plateau. However, the Rockall Plateau was included for the sole reason that it could be "slotted in." This fit postulates an east-west shear

David Pratt

Fig. 2. The Bullard fit. Overlaps and gaps between continents are shown in black. (Reprinted with permission from Bullard, Everett, and Smith, 1965. Copyright by The Royal Society.)

field geology does not support either of these suppositions (Meyerhoff and Meyerhoff, 1974a). Even the celebrated fit of South America and Africa is problematic, as it is impossible to match all parts of the coastlines simultaneously; e.g., there is a gap between Guyana and Guinea (Eyles and Eyles, 1993).

Like the Bullard, Everett, and Smith (1965) fit, the Smith and Hallam (1970) reconstruction of the Gondwanaland continents is based on the 500-fathom depth contour. The South Orkneys and South Georgia are omitted, as is Kerguelen Island in the Indian Ocean, and there is a large gap west of Australia. Fitting India against Australia, as in other fits, leaves a corresponding gap in the western Indian Ocean (Hallam, 1976). Dietz and Holden (1970) based their fit on the 1,000-fathom (2-km) contour, but they still had to omit the Florida-Bahamas platform, ignoring the evidence that it predates the alleged commencement of drift. In many regions, the boundary between continental and oceanic crust appears to occur beneath oceanic depths of 2-4 km or more (Hallam, 1979), and in some places, the ocean-continent transition zone is several hundred kilometers wide (Van der Linden, 1977). This means that any reconstructions based on arbitrarily selected depth contours are flawed. Given the liberties that drifters have had to take to obtain the desired continental matches,

Fig. 3. Computer-derived plate tectonic map for Permian time. (Reprinted with permission from Meyerhoff, 1995. Copyright by Elsevier Science.)

their computer-generated fits may well be a case of "garbage in, garbage out" (Le Grand, 1988).

The similarities of rock types and geological structures on coasts that were supposedly once juxtaposed are hailed by drifters as further evidence that the continents were once joined together. However, they rarely mention the many geological dissimilarities. For instance, western Africa and northern Brazil were supposedly once in contact, yet the structural trends of the former run north to south while those of the latter run east to west (Storetvedt, 1997). Some predrift reconstructions show peninsular India against western Australia, yet Permian Indian basins do not correspond geographically or in sequence to the western Australian basins (Dickins and Choi, 1997). Gregory (1929) held that the geological resemblances of opposing Atlantic coastlines are due to the areas having belonged to the same tectonic belt, but that the differences are sufficient to show that the areas were situated in distant parts of the belt. Bucher (1933) showed that the paleontological and geological similarities between the eastern Alps and central Himalayas, 4,000 miles apart, are just as remarkable as those between the Argentine and South Africa, separated by the same distance. The approximate parallelism of the coastlines of the Atlantic Ocean may be due to the boundaries between the continents and oceans having been formed by deep faults, which tend to be grouped into parallel systems (Beloussov, 1980). Moreover, the curvature of continental contours is often so similar that many of them can be joined if they are given the necessary rotation. Lyustikh (1967) gave examples of 15 shorelines that can be fitted together quite well even though they can never have been in juxtaposition. Voisey (1958) showed that eastern Australia fits well with eastern North America if Cape York is placed next to Florida. He pointed out that the geological and paleontological similarities are remarkable, probably due to the similar tectonic backgrounds of the two regions.
Paleomagnetic Pitfalls

One of the main props of continental drift is paleomagnetism, the study of the magnetism of ancient rocks and sediments. The inclination and declination of fossil magnetism can be used to infer the location of a virtual magnetic pole relative to the location of the sample in question. When virtual poles are determined from progressively older rocks from the same continent, the poles appear to wander with time. Joining the former averaged pole positions generates an apparent polar wander path. Different continents yield different polar wander paths, and from this, it has been concluded that the apparent wandering of the magnetic poles is caused by the actual wandering of the continents over the earth's surface. The possibility that there has been some degree of true polar wander, i.e., a shift of the whole earth relative to the rotation axis (the axial tilt remaining the same), has not, however, been ruled out.

That paleomagnetism can be unreliable is well established (Barron, Harrison, and Hay, 1978; Meyerhoff and Meyerhoff, 1972). For instance, paleomagnetic data imply that during the mid-Cretaceous, Azerbaijan and Japan were in the same place (Meyerhoff, 1970a)! The literature is in fact bursting with inconsistencies (Storetvedt, 1997). Paleomagnetic studies of rocks of different ages suggest a different polar wander path not only for each continent, but also for different parts of each continent. When individual paleomagnetic pole positions, rather than averaged curves, are plotted on world maps, the scatter is huge, often wider than the Atlantic. Furthermore, paleomagnetism can determine only paleolatitude, not paleolongitude. Consequently, it cannot be used to prove continental drift.
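The latitude inference just described rests on the dipole relation between magnetic inclination and site latitude. The following minimal sketch is not part of the original article; it simply illustrates, under the geocentric-axial-dipole assumption examined below, why a measured inclination constrains paleolatitude while leaving paleolongitude entirely undetermined:

```python
import math

def paleolatitude_from_inclination(inclination_deg):
    """Dipole relation tan(I) = 2 * tan(latitude).

    Under a geocentric axial dipole, the inclination I recorded in a
    rock fixes only the (paleo)latitude of magnetization; the
    declination gives an azimuth toward the pole, but no observation
    constrains the sample's original longitude.
    """
    inclination = math.radians(inclination_deg)
    return math.degrees(math.atan(math.tan(inclination) / 2.0))

# A measured inclination of 49 degrees corresponds to a paleolatitude
# of roughly 30 degrees (north or south, depending on polarity).
print(round(paleolatitude_from_inclination(49.0), 1))
```

Any error in the assumed field geometry therefore propagates directly into the inferred virtual pole positions.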

Paleomagnetism is plagued with uncertainties. Merrill, McElhinny, and McFadden (1996, p. 69) state that "there are numerous pitfalls that await the unwary: first, in sorting out the primary magnetization from secondary magnetizations (acquired subsequent to formation), and second, in extrapolating the properties of the primary magnetization to those of the earth's magnetic field." The interpretation of paleomagnetic data is founded on two basic assumptions: (a) when rocks are formed, they are magnetized in the direction of the geomagnetic field existing at the time and place of their formation, and the acquired magnetization is retained in the rocks at least partially over geologic time; and (b) the geomagnetic field averaged for any period of the order of 10^5 years (except magnetic-reversal epochs) is a dipole field oriented along the earth's rotation axis. Both these assumptions are questionable.

The gradual northward shift of paleopole "scatter ellipses" through time, and the gradual reduction in the diameters of the ellipses, suggest that remanent magnetism becomes less stable with time. Rock magnetism is subject to modification by later magnetism, weathering, metamorphism, tectonic deformation, and chemical changes. Moreover, the geomagnetic field today deviates substantially from that of a geocentric axial dipole. The magnetic axis is tilted by about 11° to the rotation axis, and on some planets, much greater offsets are found: 46.8° in the case of Neptune and 58.6° in the case of Uranus (Merrill, McElhinny, and McFadden, 1996). Nevertheless, because Earth's magnetic field undergoes significant long-term secular variation (e.g., a westward drift), it is thought that the time-averaged field will closely approximate a geocentric axial dipole. However, there is strong evidence that the geomagnetic field had long-term nondipole components in the past, though they have largely been neglected (Kent and Smethurst, 1998; Van der Voo, 1998). To test the axial nature of the geomagnetic field in the past, scientists have to use paleoclimatic data. However, several major paleoclimatic indicators, along with paleontological data, provide powerful evidence against continental-drift models, and therefore against the current interpretation of paleomagnetic data (see below).

It is possible that the magnetic poles have wandered considerably with respect to the geographic poles in former times. Also, if, in past geological periods, there were stable magnetic anomalies of the same intensity as the present-day East Asian anomaly (or slightly more intensive), this would render the geocentric axial dipole hypothesis invalid (Beloussov, 1990). Regional or semiglobal magnetic fields might be generated by vortexlike cells of thermal-magmatic energy, rising and falling in the earth's mantle (Pratsch, 1990). Another important factor may be magnetostriction, the alteration of the direction of magnetization by directed stress (Jeffreys, 1976; Munk and MacDonald, 1975). Some workers have shown that certain discordant paleomagnetic results that could be explained by large horizontal movements can be explained equally well by vertical block rotations and tilts and by inclination shallowing resulting from sediment compaction (Butler et al., 1989; Dickinson and Butler, 1998; Irving and Archibald, 1990; Hodych and Bijaksana, 1993). Storetvedt (1992, 1997) has developed a model known as "global wrench tectonics" in which paleomagnetic data are explained by in situ horizontal rotations of continental blocks, together with true polar wander. The possibility that a combination of these factors could be at work simultaneously significantly undermines the use of paleomagnetism to support continental drift.

Drift Versus Geology

The opening of the Atlantic Ocean allegedly began in the Cretaceous by the rifting apart of the Eurasian and American plates. However, on the other side of the globe, northeastern Eurasia is joined to North America by the Bering-Chukotsk shelf, which is underlain by Precambrian continental crust that is continuous and unbroken from Alaska to Siberia. Geologically these regions constitute a single unit, and it is unrealistic to suppose that they were formerly divided by an ocean several thousand kilometers wide, which closed to compensate
for the opening of the Atlantic. If a suture is absent there, one ought to be found in Eurasia or North America, but no such suture appears to exist (Beloussov, 1990; Shapiro, 1990). If Baffin Bay and the Labrador Sea had formed by Greenland and North America drifting apart, this would have produced hundreds of kilometers of lateral offset across the Nares Strait between Greenland and Ellesmere Island, but geological field studies reveal no such offset (Grant, 1980, 1992). Greenland is separated from Europe west of Spitsbergen by only 50-75 km at the 1,000-fathom depth contour, and it is joined to Europe by the continental Faeroe-Iceland-Greenland Ridge (Meyerhoff, 1974). All these facts rule out the possibility of east-west drift in the northern hemisphere. Geology indicates that there has been a direct tectonic connection between Europe and Africa across the zones of Gibraltar and Rif on the one hand, and Calabria and Sicily on the other, at least since the end of the Paleozoic, contradicting plate-tectonic claims of significant displacement between Europe and Africa during this period (Beloussov, 1990).

Plate tectonicists hold widely varying opinions on the Middle East region. Some advocate the former presence of two or more plates, some postulate several microplates, others support island-arc interpretations, and a majority favor the existence of at least one suture zone that marks the location of a continent-continent collision. Kashfi (1992, p. 119) comments:
Nearly all of these hypotheses are mutually exclusive. Most would cease to exist if the field data were honored. These data show that there is nothing in the geologic record to support a past separation of Arabia-Africa from the remainder of the Middle East.

India supposedly detached itself from Antarctica sometime during the Mesozoic, and then drifted northeastward up to 9,000 km, over a period of up to 200 million years, until it finally collided with Asia in the mid-Tertiary, pushing up the Himalayas and the Tibetan Plateau. That Asia happened to have an indentation of approximately the correct shape and size and in exactly the right place for India to "dock" into would amount to a remarkable coincidence (Mantura, 1972). There is, however, overwhelming geological and paleontological evidence that India has been an integral part of Asia since Proterozoic or earlier time (Ahmad, 1990; Chatterjee and Hotton, 1986; Meyerhoff et al., 1991; Saxena and Gupta, 1990). There is also abundant evidence that the Tethys Sea in the region of the present Alpine-Himalayan orogenic belt was never a deep, wide ocean but rather a narrow, predominantly shallow, intracontinental seaway (Bhat, 1987; Dickins, 1987, 1994c; McKenzie, 1987; Stocklin, 1989). If the long journey of India had actually occurred, it would have been an isolated island continent for millions of years, sufficient time to have evolved a highly distinct endemic fauna. However, the Mesozoic and Tertiary faunas show no such endemism but indicate instead close faunal ties with Asia and Antarctica (Chatterjee and Hotton, 1986). The stratigraphic, structural, and paleontological continuity of India with Asia and Arabia means that the supposed "flight of India" is no more than a flight of fancy.

A striking feature of the oceans and continents today is that they are arranged antipodally: The Arctic Ocean is precisely antipodal to Antarctica; North America is exactly antipodal to the Indian Ocean; Europe and Africa are antipodal to the central area of the Pacific Ocean; Australia is antipodal to the small basin of the North Atlantic; and the South Atlantic corresponds, though less exactly, to the eastern half of Asia (Bucher, 1933; Gregory, 1899, 1901; Steers, 1950). Only 7% of the earth's surface does not obey the antipodal rule. If the continents had slowly drifted thousands of kilometers to their present positions, the antipodal arrangement of land and water would have to be regarded as purely coincidental. Harrison et al. (1983) calculated that there is one chance in seven that this arrangement is the result of a random process.

Paleoclimatology
The paleoclimatic record is preserved from Proterozoic time to the present in the geographic distribution of evaporites, carbonate rocks, coals, and tillites. The locations of these paleoclimatic indicators are best explained by stable rather than shifting continents, and by periodic changes in climate, from globally warm or hot to globally cool (Meyerhoff and Meyerhoff, 1974a; Meyerhoff et al., 1996b). For instance, 95% of all evaporites (a dry-climate indicator) from the Proterozoic to the present lie in regions that now receive less than 100 cm of rainfall per year, i.e., in today's dry-wind belts. The evaporite and coal zones show a pronounced northward offset similar to today's northward offset of the thermal equator. Shifting the continents succeeds at best in explaining local or regional paleoclimatic features for a particular period and invariably fails to explain the global climate for the same period.

In the Carboniferous and Permian, glaciers covered parts of Antarctica, South Africa, South America, India, and Australia. Drifters claim that this glaciation can be explained in terms of Gondwanaland, which was then situated near the South Pole. However, the Gondwanaland hypothesis defeats itself in this respect because large areas that were glaciated during this period would be removed too far inland for moist ocean-air currents to reach them. Glaciers would have formed only at its margins, while the interior would have been a vast, frigid desert (Meyerhoff, 1970a; Meyerhoff and Teichert, 1971). Shallow epicontinental seas within Pangaea could not have provided the required moisture because they would have been frozen during the winter months. This glaciation is easier to explain in terms of the continents' present positions: nearly all the continental ice centers were adjacent to or near present coastlines, or in high plateaus and/or mountain lands not far from present coasts.

Drifters say that the continents have shifted little since the start of the Cenozoic (some 65 million years ago), yet this period has seen significant alterations in climatic conditions. Even since Early Pliocene time, the width of the temperate zone has changed by more than 15° (1,650 km) in both the northern and southern hemispheres. The uplift of the Rocky Mountains and Tibetan Plateau appears to have been a key factor in the late Cenozoic climatic deterioration (Manabe and Broccoli, 1990; Ruddiman and Kutzbach, 1989). To decide whether past climates are compatible with the present latitudes of the regions concerned, it is clearly essential to take account of vertical crustal movements, which can cause significant changes in atmospheric and oceanic circulation patterns by altering the topography of the continents and ocean floor, and the distribution of land and sea (Brooks, 1949; Dickins, 1994a; Meyerhoff, 1970b).

Biopaleogeography
Meyerhoff et al. (1996b) showed in a detailed study that most major biogeographical boundaries, based on floral and faunal distributions, do not coincide with the partly computer-generated plate boundaries postulated by plate tectonics. Nor do the proposed movements of continents correspond with the known, or necessary, migration routes and directions of biogeographical boundaries. In most cases, the discrepancies are very large, and not even an approximate match can be claimed. The authors comment, "What is puzzling is that such major inconsistencies between plate tectonic postulates and field data, involving as they do boundaries that extend for thousands of kilometers, are permitted to stand unnoticed, unacknowledged, and unstudied" (p. 3).

The known distributions of fossil organisms are more consistent with an earth model like that of today than with continental-drift models, and more migration problems are raised by joining the continents in the past than by keeping them separated (Khudoley, 1974; Meyerhoff and Meyerhoff, 1974a; Smiley, 1974, 1976, 1992; Teichert, 1974; Teichert and Meyerhoff, 1972). It is unscientific to select a few faunal identities and ignore the vastly greater number of faunal dissimilarities from different continents that were supposedly once joined. The widespread distribution of the Glossopteris flora in the southern continents is frequently claimed to support the former existence of Gondwanaland, but it is rarely pointed out that this flora has also been found in northeast Asia (Smiley, 1976).

Some of the paleontological evidence appears to require the alternate emergence and submergence of land dispersal routes only after the supposed breakup of Pangaea. For example, mammal distribution indicates that there were no direct physical connections between Europe and North America during Late Cretaceous and Paleocene times but suggests a temporary connection with Europe during the Eocene (Meyerhoff and Meyerhoff, 1974a). Continental drift, on the other hand, would have resulted in an initial disconnection with no subsequent reconnection. A few drifters have recognized the need for intermittent land bridges after the supposed separation of the continents (e.g., Briggs, 1987; Tarling, 1982). Various oceanic ridges, rises, and plateaus could have served as land bridges, as many are known to have been partly above water at various times in the past. It is also possible that these land bridges formed part of larger former landmasses in the present oceans (see below).

Seafloor Spreading and Subduction
According to the seafloor-spreading hypothesis, new oceanic lithosphere is generated at midocean ridges ("divergent plate boundaries") by the upwelling of molten material from the earth's mantle, and as the magma cools, it spreads away from the flanks of the ridges. The horizontally moving plates are said to plunge back into the mantle at ocean trenches or "subduction zones" ("convergent plate boundaries"). The melting of the descending slab is believed to give rise to the magmatic-volcanic arcs that lie adjacent to certain trenches.
Seafloor Spreading


The ocean floor is far from having the uniform characteristics that conveyor-type spreading would imply (Keith, 1993). Although averaged surface-wave data seemed to confirm that the oceanic lithosphere was symmetrical in relation to the ridge axis and increased in thickness with distance from the axial zone, more detailed seismic research has contradicted this simple model. It has shown that the mantle is asymmetrical in relation to the midocean ridges and has a complicated mosaic structure independent of the strike of the ridge. Several low-velocity zones (asthenolenses) occur in the oceanic mantle, but it is difficult to establish any regularity between the depth of the zones and their distance from the midocean ridge (Pavlenkova, 1990).

Boreholes drilled in the Atlantic, Indian, and Pacific Oceans have shown the extensive distribution of shallow-water sediments ranging from Triassic to Quaternary. The spatial distribution of shallow-water sediments and their vertical arrangement in some of the sections refute the spreading mechanism for the formation of oceanic lithosphere (Ruditch, 1990). The evidence implies that since the Jurassic, the present oceans have undergone large-amplitude subsidences, and that this occurred mosaically rather than showing a systematic relationship with distance from the ocean ridges. Younger, shallow-water sediments are often located farther from the axial zones of the ridges than older ones, the opposite of what is required by the plate tectonics model, which postulates that as newly formed oceanic lithosphere moves away from the spreading axis and cools, it gradually subsides to greater depths. Furthermore, some areas of the oceans appear to have undergone continuous subsidence, whereas others underwent alternating subsidence and elevation (Figure 4). The height of the ridge along the Romanche fracture zone in the equatorial Atlantic is 1-4 km above that expected by seafloor-spreading models. Large segments of it were close to or above sea level only 5 million years ago, and subsequent subsidence has been one order of magnitude faster than that predicted by plate tectonics (Bonatti and Chermak, 1981).
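For orientation, the quantitative expectations being contradicted here can be sketched roughly as follows. This is an illustrative calculation only, using a hypothetical uniform half-spreading rate and the commonly quoted empirical depth-age relation for young ocean floor (about 2,500 m plus 350 m times the square root of the age in millions of years); neither the numbers nor the code come from the article itself.

```python
import math

def expected_age_ma(distance_km, half_rate_mm_per_yr=20.0):
    """Age the spreading model assigns to crust at a given distance from
    the ridge axis, assuming a constant half-spreading rate (mm/yr is
    numerically the same as km/Myr). The rate used here is hypothetical."""
    return distance_km / half_rate_mm_per_yr

def expected_depth_m(age_ma):
    """Commonly quoted empirical depth-age relation for ocean floor
    younger than roughly 70 Ma: depth increases with the square root of
    age as the lithosphere cools and subsides."""
    return 2500.0 + 350.0 * math.sqrt(age_ma)

# With a 20 mm/yr half-rate, crust 1,000 km from the axis "should" be
# about 50 Ma old and lie near 5 km depth.
age = expected_age_ma(1000.0)
print(age, round(expected_depth_m(age)))  # 50.0 4975
```

Reports of anomalously old rocks, shallow-water sediments far from ridge crests, or subsidence an order of magnitude faster than expected are anomalies relative to predictions of this general form.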


Fig. 4. Vertical movements of the ocean bed during the last 160 million years: (1) according to the seafloor-spreading model, (2) the real sequence of vertical movements at the corresponding deep-sea drilling sites. The curves in the upper scaleless part of the diagram are tentative. (Reprinted from Ruditch, 1990.)

According to the plate model, heat-flow values should be highest along ocean ridges and fall off steadily with increasing distance from the ridge crests. Actual measurements, however, contradict this simple picture: Ridge crests show a very large scatter in heat-flow magnitudes, and there is generally little difference in thermal flux between the ridge and the rest of the ocean (Keith, 1993; Storetvedt, 1997). All parts of the Indian Ocean display a cold and rather featureless heat-flow picture except the Central Indian Basin. The broad region of intense tectonic deformation in this basin indicates that the basement has a block structure and presents a major puzzle for plate tectonics, especially since it is located in a "midplate" setting.

Smoot and Meyerhoff (1995) have shown that nearly all published charts of the world's ocean floors have been drawn deliberately to reflect the predictions of the plate-tectonics hypothesis. For example, the Atlantic Ocean floor is unvaryingly shown to be dominated by a sinuous, north-south midocean ridge, flanked on either side by abyssal plains, cleft at its crest by a rift valley, and offset at more or less regular 40-to-60-km intervals by east-west-striking fracture zones. New, detailed bathymetric surveys indicate that this oversimplified portrayal of the Atlantic Basin is largely wrong, yet the most accurate charts now available are widely ignored because they do not conform to plate tectonic preconceptions.

According to plate tectonics, the offset segments of "spreading" oceanic ridges should be connected by "transform fault" plate boundaries. Since the late 1960s, it has been claimed that first-motion studies in ocean fracture
zones provide overwhelming support for the concept of transform faults. The results of these seismic surveys, however, were never clear cut, and contradictory evidence and alternative explanations have been ignored (Storetvedt, 1997; Meyerhoff and Meyerhoff, 1974a). Instead of being continuous and approximately parallel across the full width of each ridge, ridge-transverse fracture zones tend to be discontinuous, with many unpredicted bends, bifurcations, and changes in strike. In places, the fractures are diagonal rather than perpendicular to the ridge, and several parts of the ridge have no important fracture zones or even traces of them. For instance, they are absent from a 700-km-long portion of the Mid-Atlantic Ridge between the Atlantis and Kane fracture zones. There is a growing recognition that the fracture patterns in the Atlantic "show anomalies that are neither predicted by nor ... yet built into plate tectonic understanding" (Shirley, 1998a, 1998b).

Side-scanning radar images show that the midocean ridges are cut by thousands of long, linear, ridge-parallel fissures, fractures, and faults. This strongly suggests that the ridges are underlain at shallow depth by interconnected magma channels, in which semifluid lava moves horizontally and parallel with the ridges rather than at right angles to them. The fault pattern observed is therefore totally different from that predicted by plate tectonics, and it cannot be explained by upwelling mantle diapirs as some plate tectonicists have proposed (Meyerhoff et al., 1992a). A zone of thrust faults, 300-400 km wide, has been discovered flanking the Mid-Atlantic Ridge over a length of 1,000 km (Antipov et al., 1990). Because it was produced under conditions of compression, it contradicts the plate-tectonic hypothesis that midocean ridges are dominated by tension. In Iceland, the largest landmass astride the Mid-Atlantic Ridge, the predominant stresses in the axial zone are likewise compressive rather than extensional (Keith, 1993). Earthquake data compiled by Zoback et al. (1989) provide further evidence that ocean ridges are characterized by widespread compression, whereas recorded tensional earthquake activity associated with these ridges is rarer. The rough topography and strong tectonic deformation of much of the ocean ridges, particularly in the Atlantic and Indian Oceans, suggest that instead of being "spreading centers," they are a type of foldbelt (Storetvedt, 1997).

The continents and oceans are covered with a network of major structures or lineaments, many dating from the Precambrian, along which tectonic and magmatic activity and associated mineralization take place (Anfiloff, 1992; Gay, 1973; Katterfeld and Charushin, 1973; Dickins and Choi, 1997; O'Driscoll, 1980; Wezel, 1992). The oceanic lineaments are not readily compatible with seafloor spreading and subduction, and plate tectonics shows little interest in them. GEOSAT data and SASS multibeam sonar data show that there are NNW-SSE and WSW-ENE megatrends in the Pacific Ocean, composed primarily of fracture zones and linear seamount chains, and these orthogonal lineaments naturally intersect (Smoot, 1997b, 1998a, 1998b, 1999). This is a physical impossibility in plate tectonics, as seamount chains supposedly
indicate the direction of plate movement, and plates would therefore have to move in two directions at once! No satisfactory plate-tectonic explanation of any of these megatrends has been proposed outside the realm of ad hoc "microplates," and they are largely ignored. The orthogonal lineaments in the Atlantic Ocean, Indian Ocean, and Tasman Sea are also ignored (Choi, 1997, 1999a, 1999c).
Age of the Seafloor

The oldest known rocks from the continents are just under 4 billion years old, whereas, according to plate tectonics, none of the ocean crust is older than 200 million years (Jurassic). This is cited as conclusive evidence that oceanic lithosphere is constantly being created at midocean ridges and consumed in subduction zones. There is in fact abundant evidence against the alleged youth of the ocean floor, though geological textbooks tend to pass over it in silence.

The oceanic crust is commonly divided into three main layers: Layer 1 consists of ocean floor sediments and averages 0.5 km in thickness; Layer 2 consists largely of basalt and is 1.0 to 2.5 km thick; and Layer 3 is assumed to consist of gabbro and is about 5 km thick. Scientists involved in the Deep Sea Drilling Project (DSDP) have given the impression that the basalt (Layer 2) found at the base of many deep-sea drill holes is basement, and that there are no further, older sediments below it. However, the DSDP scientists were apparently motivated by a strong desire to confirm seafloor spreading (Storetvedt, 1997). Of the first 429 sites drilled (1968-1977), only 165 (38%) reached basalt, and some penetrated more than one basalt. All but 12 of the 165 basalt penetrations were called "basement," including 19 sites where the upper contact of the basalt with the sediments was baked (Meyerhoff et al., 1992a). Baked contacts suggest that the basalt is an intrusive sill, and in some cases this has been confirmed, as the basalts turned out to have radiometric dates younger than the overlying sediments (e.g., Macdougall, 1971). One hundred and one sediment-basalt contacts were never recovered in cores, and therefore never actually seen, yet they were still assumed to be depositional contacts. In 33 cases, depositional contacts were observed, but the basalt sometimes contained sedimentary clasts, suggesting that there might be older sediments below. Indeed, boreholes that have penetrated Layer 2 to some depth have revealed an alternation of basalts and sedimentary rocks (Anderson et al., 1982; Hall and Robinson, 1979). Kamen-Kaye (1970) warned that before drawing conclusions on the youth of the ocean floor, rocks must be penetrated to depths of up to 5 km to see whether there are Triassic, Paleozoic, or Precambrian sediments below the so-called "basement."

Plate tectonics predicts that the age of the oceanic crust should increase systematically with distance from the midocean ridge crests. Claims by DSDP scientists to have confirmed this are not supported by a detailed review of the
drilling results. The dates exhibit a very large scatter, which becomes even larger if dredge hauls are included (Figure 5). On some marine magnetic anomalies, the age scatter is tens of millions of years (Meyerhoff et al., 1992a). On one seamount just west of the crest of the East Pacific Rise, the radiometric dates range from 2.4 to 96 million years. Although a general trend is discernible from younger sediments at ridge crests to older sediments away from them, this is in fact to be expected, because the crest is the highest and most active part of the ridge; older sediments are likely to be buried beneath younger volcanic rocks. The basalt layer in the ocean crust suggests that magma flooding was once oceanwide, but volcanism was subsequently restricted to an increasingly narrow zone centered on the ridge crests. Such magma floods were accompanied by progressive crustal subsidence in large sectors of the present oceans, beginning in the Jurassic (Beloussov, 1980; Keith, 1993).

The numerous finds in the Atlantic, Pacific, and Indian Oceans of rocks far older than 200 million years, many of them continental in nature, provide strong evidence against the alleged youth of the underlying crust. In the Atlantic, rock and sediment age should range from Cretaceous (120 million years) adjacent to the continents to very recent at the ridge crest. During legs 37 and 43 of the DSDP, Paleozoic and Proterozoic igneous rocks were recovered in cores on the Mid-Atlantic Ridge and the Bermuda Rise, yet not one of these occurrences of ancient rocks was mentioned in the Cruise Site Reports or Cruise Synthesis Reports (Meyerhoff et al., 1996a). Aumento and Loncarevic (1969) reported that 75% of 84 rock samples dredged from the Bald Mountain region just west of the Mid-Atlantic Ridge crest at 45°N consisted of continental-type rocks and commented that this was a "remarkable phenomenon"; so remarkable, in fact, that they decided to classify these rocks as "glacial erratics" and to give them no further consideration. Another way of dealing with "anomalous" rock finds is to dismiss them as ship ballast. However, the Bald Mountain locality has an estimated volume of 80 km3, so it is hardly likely to have been rafted out to sea on an iceberg or dumped by a ship! It consists of granitic and silicic metamorphic rocks ranging in age from 1,690 to 1,550 million years and is intruded by 785-million-year-old mafic rocks (Wanless et al., 1968). Ozima et al. (1976) found basalts of Middle Jurassic age (169 million years) at the junction of the rift valley of the Mid-Atlantic Ridge and the Atlantis fracture zone (30°N), an area where basalt should theoretically be extremely young, and stated that they were unlikely to be ice-rafted rocks. Van Hinte and Ruffman (1995) concluded that Paleozoic limestones dredged from Orphan Knoll in the northwest Atlantic were in situ and not ice rafted.

In another attempt to explain away anomalously old rocks and anomalously shallow or emergent crust in certain parts of the ridges, some plate tectonicists have argued that "nonspreading blocks" can be left behind during rifting and that the spreading axis and related transform faults can jump from place to place (Bonatti and Honnorez, 1971).

Fig. 5. A plot of rock age versus distance from the crest of the Mid-Atlantic Ridge. The figure shows (to scale) rocks of all ages, whether from drill holes or dredge hauls. (Reprinted with permission from Meyerhoff et al., 1996a, fig. 2.35. Copyright by Kluwer Academic Publishers.)

This hypothesis was invoked by Pilot et al. (1998) to explain the presence of zircons with ages of 330 and 1,600 million years in gabbros beneath the Mid-Atlantic Ridge near the Kane fracture zone. Yet another way of dealing with anomalous rock ages is to reject them as unreliable. For instance, Reynolds and Clay (1977), reporting on a Proterozoic date (635 million years) near the crest of the Mid-Atlantic Ridge, wrote that the age must be wrong because the theoretical age of the site was only about 10 million years.

Paleozoic trilobites and graptolites have been dredged from the King's Trough area, on the opposite side of the Mid-Atlantic Ridge to Bald Mountain, and at several localities near the Azores (Furon, 1949; Smoot and Meyerhoff, 1995). Detailed surveys of the equatorial segment of the Mid-Atlantic Ridge have provided a wide variety of data contradicting the seafloor-spreading model, including numerous shallow-water and continental rocks, with ages of up to 3.74 billion years (Timofeyev et al., 1992; Udintsev, 1996; Udintsev et al., 1993). Melson, Hart, and Thompson (1972), studying St. Peter and Paul's Rocks at the crest of the Mid-Atlantic Ridge just north of the equator, found an 835-million-year rock associated with other rocks giving 350-, 450-, and 2,000-million-year ages, whereas according to the seafloor-spreading model, the rock should have been 35 million years old. Numerous igneous and metamorphic rocks giving late Precambrian and Paleozoic radiometric ages have been dredged from the crests of the southern Mid-Atlantic, Mid-Indian, and Carlsberg ridges (Afanas'yev, 1967).

Precambrian and Paleozoic granites have been found in several "oceanic"
plateaus and islands with anomalously thick crusts, including Rockall Plateau, Agulhas Plateau, the Seychelles, the Obruchev Rise, Papua New Guinea, and the Paracel Islands (Ben-Avraham et al., 1981; Sanchez Cela, 1999). In many cases, structural and petrological continuity exists between continents and anomalous "oceanic" crusts, a fact incompatible with seafloor spreading; this applies, for example, in the North Atlantic, where there is a continuous sialic basement, partly of Precambrian age, from North America to Europe. Major Precambrian lineaments in Australia and South America continue into the ocean floors, implying that the "oceanic" crust is at least partly composed of Precambrian rocks, and this has been confirmed by deep-sea dredging, drilling, and seismic data, and by evidence for submerged continental crust (ancient paleolands) in the present southeast and northwest Pacific (Choi, 1997, 1998; see below).

Marine Magnetic Anomalies

Powerful support for seafloor spreading is said to be provided by marine magnetic anomalies, approximately parallel stripes of alternating high and low magnetic intensity that characterize much of the world's midocean ridges. According to the Morley-Vine-Matthews hypothesis, first proposed in 1963, as the fluid basalt welling up along the midocean ridges spreads horizontally and cools, it is magnetized by the earth's magnetic field. Bands of high intensity are believed to have formed during periods of normal magnetic polarity, and bands of low intensity during periods of reversed polarity. They are therefore regarded as time lines or isochrons. As plate tectonics became accepted, attempts to test this hypothesis or to find alternative hypotheses ceased.

Correlations have been made between linear magnetic anomalies on either side of a ridge, in different parts of the oceans, and with radiometrically dated magnetic events on land. The results have been used to produce maps showing how the age of the ocean floor increases steadily with increasing distance from the ridge axis (McGeary and Plummer, 1998, fig. 4.19). As shown above, this simple picture can be sustained only by dismissing the possibility of older sediments beneath the basalt "basement" and by ignoring numerous "anomalously" old rock ages. The claimed correlations have been largely qualitative and subjective and are therefore highly suspect; virtually no effort has been made to test them quantitatively by transforming them to the pole (i.e., recalculating each magnetic profile to a common latitude). In one instance where transformation to the pole was carried out, the plate-tectonic interpretation of the magnetic anomalies in the Bay of Biscay was seriously undermined (Storetvedt, 1997). Agocs, Meyerhoff, and Kis (1992) applied the same technique in their detailed, quantitative study of the magnetic anomalies of the Reykjanes Ridge near Iceland and found that the correlations were very poor; the correlation coefficient along strike averaged 0.31 and that across the ridge 0.17, with lim-

(Figure 6 labels: positive and negative magnetic anomalies; rift crest at ridge; asterisks mark earthquake epicenters. Note that the linear anomalies cross the midocean ridge at an oblique angle north of Iceland.)
Fig. 6. Two views of marine magnetic anomalies. Top: A textbook cartoon. (Reprinted with permission from McGeary and Plummer, 1998. Copyright by The McGraw-Hill Companies.) Bottom: Magnetic anomaly patterns of the North Atlantic. (Reprinted with permission from Meyerhoff and Meyerhoff, 1972. Copyright by the American Geophysical Union.)

Linear anomalies are known from only 70% of the seismically active midocean ridges. Moreover, the diagrams of symmetrical, parallel, linear bands of anomalies displayed in many plate-tectonics publications bear little resemblance to reality (Beloussov, 1970; Meyerhoff and Meyerhoff, 1974b) (Figure 6). The anomalies are symmetrical to the ridge axis in less than 50% of the ridge system where they are present, and in about 21% of it, they are oblique to the trend of the ridge. In some areas, linear anomalies are present where a ridge system is completely absent. Magnetic measurements by instruments towed near the sea bottom have indicated that magnetic bands actually consist of many isolated ovals that may be joined together in different ways.

The initial, highly simplistic seafloor-spreading model for the origin of magnetic anomalies has been disproven by ocean drilling (Hall and Robinson, 1979; Pratsch, 1986). First, the hypothesis that the anomalies are produced in the upper 500 m of oceanic crust has had to be abandoned. Magnetic intensities, general polarization directions, and often the existence of different
polarity zones at different depths suggest that the source for oceanic magnetic anomalies lies in deeper levels of oceanic crust not yet drilled (or dated). Second, the vertically alternating layers of opposing magnetic polarization directions disprove the theory that the oceanic crust was magnetized entirely as it spread laterally from the magmatic center and strongly indicate that oceanic crustal sequences represent longer geologic times than is now believed.

A more likely explanation of marine magnetic anomalies is that they are caused by fault-related bands of rock of different magnetic properties and have nothing to do with seafloor spreading (Choi, Vasil'yev, and Tuezov, 1990; Grant, 1980; Morris, 1990; Pratsch, 1986). The fact that not all the charted magnetic anomalies are formed of oceanic crustal materials further undermines the plate-tectonic explanation. In the Labrador Sea, some anomalies occur in an area of continental crust that had previously been defined as oceanic (Grant, 1980). In the northwestern Pacific, some magnetic anomalies are likewise located within an area of continental crust, a submerged paleoland (Choi, Vasil'yev, and Tuezov, 1990; Choi, Vasil'yev, and Bhat, 1992). Magnetic anomaly bands strike into the continents in at least 15 places and "dive" beneath Proterozoic or younger rocks. Furthermore, they are approximately concentric with respect to Archean continental shields (Meyerhoff and Meyerhoff, 1972, 1974b). These facts imply that instead of being a "taped record" of seafloor spreading and geomagnetic field reversals during the past 200 million years, most oceanic magnetic anomalies are the sites of ancient fractures, which partly formed during the Proterozoic and have been rejuvenated since. The evidence also suggests that Archean continental nuclei have held approximately the same positions with respect to one another since their formation, which is utterly at variance with continental drift.
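The kind of quantitative test mentioned above in connection with Agocs, Meyerhoff, and Kis (1992) amounts to correlating anomaly profiles after they have been reduced to a common reference and resampled onto a common distance grid. The sketch below only illustrates that general idea; it is not the cited study's actual procedure, and the function and parameter names are invented for the example.

```python
import numpy as np

def profile_correlation(dist_a_km, anom_a_nt, dist_b_km, anom_b_nt, n=512):
    """Pearson correlation between two magnetic anomaly profiles.

    Each profile is given as distances along track (monotonically
    increasing, in km) and anomaly values (in nT). Both are linearly
    interpolated onto a common grid spanning their overlap before the
    correlation coefficient is computed. A close lineation-to-lineation
    match would give values near 1; values around 0.2-0.3, as reported
    above for the Reykjanes Ridge profiles, indicate a poor match.
    """
    lo = max(dist_a_km[0], dist_b_km[0])
    hi = min(dist_a_km[-1], dist_b_km[-1])
    grid = np.linspace(lo, hi, n)
    a = np.interp(grid, dist_a_km, anom_a_nt)
    b = np.interp(grid, dist_b_km, anom_b_nt)
    return float(np.corrcoef(a, b)[0, 1])
```
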
Subduction

Benioff zones are distinct earthquake zones that begin at an ocean trench and slope landward and downward into the earth. In plate tectonics, these deep-rooted fault zones are interpreted as "subduction zones" where plates descend into the mantle. They are generally depicted as 100-km-thick slabs descending into the earth either at a constant angle, or at a shallow angle near the earth's surface and gradually curving around to an angle of between 60° and 75°. Neither representation is correct. Benioff zones often consist of two separate sections: an upper zone with an average dip of 33° extending to a depth of 70-400 km, and a lower zone with an average dip of 60° extending to a depth of up to 700 km (Benioff, 1954; Isacks and Barazangi, 1977). The upper and lower segments are sometimes offset by 100-200 km, and in one case by 350 km (Benioff, 1954; Smoot, 1997a). Furthermore, deep earthquakes are disconnected from shallow ones; very few intermediate earthquakes exist (Smoot, 1997a). Many studies have found transverse as well as vertical discontinuities and segmentation in Benioff zones (e.g., Carr, 1976;

Fig. 7. Cross-sections across the Peru-Chile trench (left) and Bonin-Honshu arc (right), showing hypocenters. (Reprinted with permission from Benioff, 1954. Copyright by the Geological Society of America.)

Carr, Stoiber, and Drake, 1973; Ranneft, 1979; Spence, 1977; Swift and Carr, 1974; Teisseyre et al., 1974). The evidence therefore does not favor the notion of a continuous, downgoing slab (Figure 7).

Plate tectonicists insist that the volume of crust generated at midocean ridges is equaled by the volume subducted. But whereas 80,000 km of midocean ridges are supposedly producing new crust, only 30,500 km of trenches exist. Even if we add the 9,000 km of "collision zones," the figure is still only half that of the "spreading centers" (Smoot, 1997a). With two minor exceptions (the Scotia and Lesser Antilles trench/arc systems), Benioff zones are absent from the margins of the Atlantic, Indian, Arctic, and Southern Oceans. Many geological facts demonstrate that subduction is not taking place in the Lesser Antilles arc; if it were, the continental Barbados Ridge should now be 200-400 km beneath the Lesser Antilles (Meyerhoff and Meyerhoff, 1974a). Kiskyras (1990) presented geological, volcanological, petrochemical, and seismological data contradicting the belief that the African plate is being subducted under the Aegean Sea. Africa is allegedly being converged on by plates spreading from the east, south, and west, yet it exhibits no evidence whatsoever for the existence of subduction zones or orogenic belts. Antarctica, too, is almost entirely surrounded by alleged "spreading" ridges without any corresponding subduction zones but fails to show any signs of being crushed. It has been suggested that Africa and Antarctica may remain stationary while the surrounding ridge system migrates away from them, but this would require the ridge marking the "plate boundary" between Africa and Antarctica to move in opposite directions simultaneously (Storetvedt, 1997)!

If up to 13,000 km of lithosphere had really been subducted in circum-Pacific deep-sea trenches, vast amounts of oceanic sediments should have been scraped off the ocean floor and piled up against the landward margin of the trenches. However, sediments in the trenches are generally not present in the volumes required, nor do they display the expected degree of deformation
(Choi, 1999b; Gnibidenko, Krasny, and Popov, 1978; Storetvedt, 1997; Suzuki et al., 1997). Scholl and Marlow (1974), who support plate tectonics, admitted to being "genuinely perplexed as to why evidence for subduction or offscraping of trench deposits is not glaringly apparent" (p. 268). Plate tectonicists have had to resort to the highly dubious notion that unconsolidated deep-ocean sediments can slide smoothly into a Benioff zone without leaving any significant trace. Moreover, fore-arc sediments, where they have been analyzed, have generally been found to be derived from the volcanic arc and the adjacent continental block, not from the oceanic region (Pratsch, 1990; Wezel, 1986).

The very low level of seismicity, the lack of a megathrust, and the existence of flat-lying sediments at the base of oceanic trenches contradict the alleged presence of a downgoing slab (Dickins and Choi, 1998). Attempts by Murdock (1997), who accepts many elements of plate tectonics, to publicize the lack of a megathrust in the Aleutian trench (i.e., a million or more meters of displacement of the Pacific plate as it supposedly underthrusts the North American plate) have met with vigorous resistance and suppression by the plate-tectonics establishment. Subduction along Pacific trenches is also refuted by the fact that the Benioff zone often lies 80 to 150 km landward from the trench, by the evidence that Precambrian continental structures continue into the ocean floor, and by the evidence for submerged continental crust under the northwestern and southeastern Pacific, where there are now deep abyssal plains and trenches (Choi, 1987, 1998, 1999c; Smoot, 1998b; Tuezov, 1998). If the "Pacific plate" is colliding with and diving under the "North American plate," there should be a stress buildup along the San Andreas Fault. The deep Cajon Pass drill hole was intended to confirm this but showed instead that no such stress is present (C. W. Hunt, 1992).

In the active island-arc complexes of southeast Asia, the arcs bend back on themselves, forming hairpin-like shapes that sometimes involve full 180° changes in direction. This also applies to the postulated subduction zone around India. How plate collisions could produce such a geometry remains a mystery (Meyerhoff, 1995; H. A. Meyerhoff and Meyerhoff, 1977). Rather than being continuous curves, trenches tend to consist of a row of straight segments, which sometimes differ in depth by more than 4 km. Aseismic buoyant features (e.g., seamounts), which are frequently found at the juncture of these segments, are connected with increased deep-earthquake and volcanic activity on the landward side of the trench, whereas theoretically their "arrival" at a subduction zone should reduce or halt such activity (Smoot, 1997a). Plate tectonicists admit that it is hard to see how the subduction of a cold slab could result in the high heat flow or arc volcanism in back-arc regions or how plate convergence could give rise to back-arc spreading (Uyeda, 1986). Evidence suggests that oceanic, continental, and back-arc rifts are actually tensional structures developed to relieve stress in a strong compressional stress system, and therefore have nothing to do with seafloor spreading (Dickins, 1997).
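As a quick check of the length budget quoted earlier in this section (an arithmetic note added here, not part of the original text), the figures cited give

```latex
\frac{30{,}500\,\text{km (trenches)} + 9{,}000\,\text{km (collision zones)}}{80{,}000\,\text{km (ridges)}} \approx 0.49,
```

so the combined length of trenches and collision zones is indeed only about half the length of the ridge system said to be generating new lithosphere.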


An alternative view of Benioff zones is that they are very ancient contraction fractures produced by the cooling of the earth (Meyerhoff et al., 1992b, 1996a). The fact that the upper part of the Benioff zones usually dips at less than 45° and the lower part at more than 45° suggests that the lithosphere is under compression and the lower mantle under tension. Furthermore, because a contracting sphere fractures along great circles (Bucher, 1956), this would account for the fact that both the circum-Pacific seismotectonic belt and the Alpine-Himalayan (Tethyan) belt lie on approximate circles. Finally, instead of oceanic crust being absorbed beneath the continents along ocean trenches, continents may actually be overriding adjacent oceanic areas to a limited extent, as is indicated by the historical geology of China, Indonesia, and the western Americas (Krebs, 1975; Pratsch, 1986; Storetvedt, 1997).

Uplift and Subsidence
Vertical Tectonics

Classical plate tectonics seeks to explain all geologic structures primarily in terms of simple lateral movements of lithospheric plates: their rifting, extension, collision, and subduction. But random plate interactions are unable to explain the periodic character of geological processes, i.e., the geotectonic cycle, which sometimes operates on a global scale (Wezel, 1992). Nor can they explain the large-scale uplifts and subsidences that have characterized the evolution of the earth's crust, particularly those occurring far from "plate boundaries," such as in continental interiors, and vertical oscillatory motions involving vast regions (Beloussov, 1980, 1990; Chekunov, Gordienko, and Guterman, 1990; Genshaft and Saltykowski, 1990; Ilich, 1972).

The presence of marine strata thousands of meters above sea level (e.g., near the summit of Mount Everest) and the great thicknesses of shallow-water sediment in some old basins indicate that vertical crustal movements of at least 9 km above sea level and 10-15 km below sea level have taken place (Spencer, 1977). Major vertical movements have also taken place along continental margins. For example, the Atlantic continental margin of North America has subsided by up to 12 km since the Jurassic (Sheridan, 1974). In Barbados, Tertiary coals representing a shallow-water, tropical environment occur beneath deep-sea oozes, indicating that during the last 12 million years, the crust sank to over 4-5 km depth for the deposition of the ooze and was then raised again. A similar situation occurs in Indonesia, where deep-sea oozes occur above sea level, sandwiched between shallow-water Tertiary sediments (James, 1994).

The primary mountain-building mechanism in plate tectonics is lateral compression caused by collisions of continents, island arcs, oceanic plateaus, seamounts, and ridges. In this model, subduction proceeds without mountain building until collision occurs, whereas in the noncollision model subduction alone is supposed to cause mountain building. As well as being open to other objections, both models face difficulties that even some supporters of
plate tectonics have pointed out (e.g., Cebull and Shurbet, 1990, 1992; Van Andel, 1998). The noncollision model fails to explain how continuous subduction can give rise to discontinuous orogeny, while the collision model is challenged by occurrences of mountain building where no continental collision can be assumed, and it fails to explain contemporary mountain-building activity along such chains as the Andes and around much of the rest of the Pacific rim. Asia supposedly collided with Europe in the late Paleozoic, producing the Ural mountains, but abundant geological field data demonstrate that the Siberian and East European (Russian) platforms have formed a single continent since Precambrian times (Meyerhoff and Meyerhoff, 1974a). McGeary and Plummer (1998) state that the plate tectonic reconstruction of the formation of the Appalachians in terms of three successive collisions of North America seems "too implausible even for a science fiction plot" (p. 114) but add that an understanding of plate tectonics makes the theory more palatable. Ollier (1990), on the other hand, states that fanciful plate-tectonic explanations ignore all the geomorphology and much of the known geological history of the Appalachians. He also says that of all the possible mechanisms that might account for the Alps, the collision of the African and European plates is the most naive.

The Himalayas and the Tibetan Plateau were supposedly uplifted by the collision of the Indian plate with the Asian plate. However, this fails to explain why the beds on either side of the supposed collision zone remain comparatively undisturbed and low-dipping, whereas the Himalayas have been uplifted, supposedly as a consequence, some 100 km away, along with the Kunlun Mountains to the north of the Tibetan Plateau. River terraces in various parts of the Himalayas are almost perfectly horizontal and untilted, suggesting that the Himalayas were uplifted vertically, rather than as the result of horizontal compression (Ahmad, 1990). Collision models generally assume that the uplift of the Tibetan Plateau began during or after the early Eocene (post-50 million years), but paleontological, paleoclimatological, paleoecological, and sedimentological data conclusively show that major uplift could not have occurred before earliest Pliocene time (5 million years ago) (Meyerhoff, 1995).

There is ample evidence that mantle heat flow and material transport can cause significant changes in crustal thickness, composition, and density, resulting in substantial uplifts and subsidences. This is emphasized in many of the alternative hypotheses to plate tectonics (for an overview, see Yano and Suzuki, 1999), such as the model of endogenous regimes (Beloussov, 1980, 1981, 1990, 1992; Pavlenkova, 1995, 1998). Plate tectonicists, too, increasingly invoke mantle diapirism as a mechanism for generating or promoting tectogenesis; there is now abundant evidence that shallow magma chambers are ubiquitous beneath active tectonic belts. The popular hypothesis that crustal stretching was the main cause of the
mation of deep sedimentary basins on continental crust has been contradicted by numerous studies; mantle upwelling processes and lithospheric density increases are increasingly being recognized as an alternative mechanism (Anfiloff, 1992; Artyushkov, 1992; Artyushkov and Baer, 1983; Pavlenkova, 1998; Zorin and Lepina, 1989). This may involve gabbro-eclogite phase transformations in the lower crust (Artyushkov, 1992; Haxby, Turcotte, and Bird, 1976; Joyner, 1967), a process that has also been proposed as a possible explanation for the continuing subsidence of the North Sea Basin, where there is likewise no evidence of large-scale stretching (Collette, 1968). Plate tectonics predicts simple heat-flow patterns around the earth. There should be a broad band of high heat flow beneath the full length of the midocean rift system and parallel bands of high and low heat flow along the Benioff zones. Intraplate regions are predicted to have low heat flow. The pattern actually observed is quite different. There are criss-crossing bands of high heat flow covering the entire surface of the earth (Meyerhoff et al., 1996a). Intraplate volcanism is usually attributed to "mantle plumesw-upwellings of hot material from deep in the mantle, presumably the core-mantle boundary. The movement of plates over the plumes is said to give rise to hotspot trails (chains of volcanic islands and seamounts). Such trails should therefore show an age progression from one end to the other, but a large majority show little or no age progression (Baksi, 1999; Keith, 1993). On the basis of geological, geochemical, and geophysical evidence, Sheth (1999) argued that the plume hypothesis is ill-founded, artificial, and invalid, and has led earth scientists up a blind alley. Active tectonic belts are located in bands of high heat flow, which are also characterized by several other phenomena that do not readily fit in with the plate-tectonics hypothesis. These include bands of microearthquakes (including "diffuse plate boundaries") that do not coincide with plate-tectonic-predicted locations; segmented belts of linear faults, fractures, and fissures; segmented belts of mantle upwellings and diapirs; vortical geological structures; linear lenses of anomalous (low-velocity) upper mantle that are commonly overlain by shallower, smaller low-velocity zones; the existence of bisymmetrical deformation in all foldbelts, with coexisting states of compression and tension; strike-slip zones and similar tectonic lines ranging from simple rifts to Verschluckungszonen ("engulfment zones"); eastward-shifting tectonicmagmatic belts; and geothermal zones. Investigation of these phenomena has led to the development of a major new hypothesis of geodynamics, known as surge tectonics, which rejects both seafloor spreading and continental drift (Meyerhoff, 1995; Meyerhoff et al., 1992b, 1996a). Surge tectonics postulates that all the major features of the earth's surface, including rifts, foldbelts, metamorphic belts, and strike-slip zones, are underlain by shallow (less than 80 km) magma chambers and channels (known as "surge channels"). Seismotomographic data suggest that surge channels form an interconnected worldwide network, which has been dubbed "the earth's
cardiovascular system." Surge channels coincide with the lenses of anomalous mantle and associated low-velocity zones referred to above, and active channels are also characterized by high heat flow and microseismicity. Magma from the asthenosphere flows slowly through active channels at the rate of a few centimeters a year. Horizontal flow is demonstrated by two major surface features: linear, belt-parallel faults, fractures, and fissures; and the division of tectonic belts into fairly uniform segments. The same features characterize all lava flows and tunnels and have also been observed on Mars, Venus, and several moons of the outer planets.

Surge tectonics postulates that the main cause of geodynamics is lithosphere compression, generated by the cooling and contraction of the earth. As compression increases during a geotectonic cycle, it causes the magma to move through a channel in pulsed surges and eventually to rupture it, so that the contents of the channel surge bilaterally upward and outward to initiate tectogenesis. The asthenosphere (in regions where it is present) alternately contracts during periods of tectonic activity and expands during periods of tectonic quiescence. The earth's rotation, combined with differential lag between the more rigid lithosphere above and the more fluid asthenosphere below, causes the fluid or semifluid materials to move predominantly eastward. This explains the eastward migration through time of many magmatic or volcanic arcs, batholiths, rifts, depocenters, and foldbelts.

The Continents

It is a striking fact that nearly all the sedimentary rocks composing the continents were laid down under the sea. The continents have suffered repeated marine inundations, but because sediments were mostly deposited in shallow water (less than 250 m), the seas are described as "epicontinental." Marine transgressions and regressions are usually attributed mainly to eustatic changes of sea level caused by alterations in the volume of midocean ridges. Van Andel (1994) points out that this explanation cannot account for the 100 or so briefer cycles of sea-level changes, particularly because transgressions and regressions are not always simultaneous all over the globe. He proposes that large regions or whole continents must undergo slow vertical, epeirogenic movements, which he attributes to an uneven distribution of temperature and density in the mantle, combined with convective flow. Some workers have linked marine inundations and withdrawals to a global thermal cycle, bringing about continental uplift and subsidence (Rutland, 1982; Sloss and Speed, 1974). Van Andel (1994) admits that epeirogenic movements "fit poorly into plate tectonics" (p. 170) and are therefore largely ignored (Figures 8 and 9).

Van Andel (1994) asserts that "plates" rise or fall by no more than a few hundred meters, this being the maximum depth of most "epicontinental" seas. However, this overlooks an elementary fact: huge thicknesses of sediment were often deposited during marine incursions, requiring vertical crustal movements of many kilometers.

Fig. 8. Maximum degree of marine inundation for each Phanerozoic geological period for the former USSR and North America. The older the geological period, the greater the probability of the degree of inundation being underestimated due to the sediments having been eroded or deeply buried beneath younger sediments. (Reprinted with permission from Harrison et al., 1983. Copyright by the American Geophysical Union.)

Sediments accumulate in regions of subsidence, and their thickness is usually close to the degree of downwarping. In the unstable, mobile belts bordering stable continental platforms, many geosynclinal troughs and circular depressions have accumulated sedimentary thicknesses of 10-14 km, and in some cases of 20 km. Although the sedimentary cover on the platforms themselves is often less than 1.5 km thick, basins with sedimentary thicknesses of 10 km and even 20 km are not unknown (Beloussov, 1981; Dillon, 1974; C. B. Hunt, 1992; Pavlenkova, 1998).

Subsidence cannot be attributed solely to the weight of the accumulating sediments, because the density of sedimentary rocks is much lower than that of the subcrustal material; e.g., the deposition of 1 km of marine sediment will cause only half a kilometer or so of subsidence (Holmes, 1965; Jeffreys, 1976). Moreover, sedimentary basins require not only continual depression of the base of the basin to accommodate more sediments, but also continuous uplift of adjacent land to provide a source for the sediments. In geosynclines, subsidence has commonly been followed by uplift and folding to produce mountain ranges, and this obviously cannot be accounted for by changes in surface loading. The complex history of the oscillating uplift and subsidence of the crust appears to require deep-seated changes in lithospheric composition and density, as well as vertical and horizontal movements of mantle material. That density is not the only factor involved is shown by the fact that in regions of tectonic activity vertical movements often intensify gravity anomalies rather than acting to restore isostatic equilibrium. For example, the Greater Caucasus is overloaded, yet it is rising rather than subsiding (Beloussov, 1980; Jeffreys, 1976). In regions where all the sediments were laid down in shallow water, subsidence must somehow have kept pace with sedimentation.
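The half-kilometer figure quoted above can be reproduced with a minimal Airy-type isostatic estimate. The densities used below (roughly 2.3 g/cm3 for water-saturated sediment, 1.0 g/cm3 for seawater, and 3.3 g/cm3 for mantle rock) are representative values assumed here for illustration; they are not figures given by Holmes or Jeffreys. If a sediment layer of thickness t is deposited in a water-filled basin, the load-driven subsidence s is approximately

\[
  s \;\approx\; t\,\frac{\rho_{\mathrm{sed}}-\rho_{w}}{\rho_{\mathrm{m}}-\rho_{w}}
    \;\approx\; 1\ \mathrm{km}\times\frac{2.3-1.0}{3.3-1.0}
    \;\approx\; 0.6\ \mathrm{km}.
\]

On this estimate, the load of each kilometer of marine sediment accounts for only a little over half a kilometer of downwarping, so the remainder of the subsidence recorded by thick shallow-water sequences must be supplied by some other, tectonic process, which is the point being argued here.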


In eugeosynclines, on the other hand, subsidence proceeded faster than sedimentation, resulting in a marine basin several kilometers deep. Examples of eugeosynclines prior to the uplift stage are the Sayans in the early Paleozoic, the eastern slope of the Urals in the early and middle Paleozoic, the Alps in the Jurassic and early Cretaceous, and the Sierra Nevada in the Triassic (Beloussov, 1980). Plate tectonicists often claim that geosynclines are formed solely at plate margins, at the boundaries between continents and oceans. However, there are many examples of geosynclines having formed in intracontinental settings (Holmes, 1965), and the belief that the ophiolites found in certain geosynclinal areas are invariably remnants of oceanic crust is contradicted by a large volume of evidence (Beloussov, 1981; Bhat, 1987; Luts, 1990; Sheth, 1997).
The Oceans

In the past, sialic clastic material has been transported to today's continents from the direction of the present-day oceans, where there must have been considerable areas of land that underwent erosion (Beloussov, 1962; Dickins, Choi, and Yeates, 1992). For instance, the Paleozoic geosyncline along the seaboard of eastern North America, an area now occupied by the Appalachian mountains, was fed by sialic clasts from a borderland ("Appalachia") in the adjacent Atlantic. Other submerged borderlands include the North Atlantic Continent or Scandia (west of Spitsbergen and Scotland), Cascadia (west of the Sierra Nevada), and Melanesia (southeast of Asia and east of Australia) (Gilluly, 1955; Holmes, 1965; Umbgrove, 1947). A million cubic kilometers of Devonian micaceous sediments from Bolivia to Argentina imply an extensive continental source to the west where there is now the deep Pacific Ocean (Carey, 1994). During Paleozoic-Mesozoic-Paleogene times, the Japanese
geosyncline was supplied with sediments from land areas in the Pacific (Choi, 1984, 1987). When trying to explain sediment sources, plate tectonicists sometimes argue that sediments were derived from the existing continents during periods when they were supposedly closer together (Bahlburg, 1993; Dickins, 1994a; Holmes, 1965). Where necessary, they postulate small former land areas (microcontinents or island arcs), which have since been either subducted or accreted against continental margins as "exotic terranes" (Choi, 1984; Kumon et al., 1988; Nur and Ben-Avraham, 1982). However, mounting evidence is being uncovered that favors the foundering of sizable continental landmasses, whose remnants are still present under the ocean floor (see below).

Oceanic crust is regarded as much thinner and denser than continental crust: the crust beneath oceans is said to average about 7 km thick and to be composed largely of basalt and gabbro, whereas continental crust averages about 35 km thick and consists chiefly of granitic rock capped by sedimentary rocks. However, ancient continental rocks and crustal types intermediate between standard "continental" and "oceanic" crust are increasingly being discovered in the oceans (Sanchez Cela, 1999), and this is a serious embarrassment for plate tectonics. The traditional picture of the crust beneath oceans being universally thin and graniteless may well be further undermined in the future, as oceanic drilling and seismic research continue. One difficulty is to distinguish the boundary between the lower oceanic crust and upper mantle in areas where high- and low-velocity layers alternate (Choi, Vasil'yev, and Bhat, 1992; Orlenok, 1986). For example, the crust under the Kuril deep-sea basin is 8 km thick if the 7.9 km/s velocity layer is taken as the crust-mantle boundary (Moho), but 20-30 km thick if the 8.2 or 8.4 km/s layer is taken as the Moho (Tuezov, 1998).

Small ocean basins cover an area equal to about 5% of that of the continents and are characterized by transitional types of crust (Menard, 1967). This applies to the Caribbean Sea, the Gulf of Mexico, the Japan Sea, the Okhotsk Sea, the Black Sea, the Caspian Sea, the Mediterranean, the Labrador Sea and Baffin Bay, and the marginal (back-arc) basins along the western side of the Pacific (Beloussov and Ruditch, 1961; Choi, 1984; Grant, 1992; Ross, 1974; Sheridan, 1974). In plate tectonics, the origin of marginal basins, with their complex crustal structure, has remained an enigma, and there is no basis for the assumption that some kind of seafloor spreading must be involved; rather, they appear to have originated by vertical tectonics (Storetvedt, 1997; Wezel, 1986). Some plate tectonicists have tried to explain the transitional crust of the Caribbean in terms of the continentalization of a former deep ocean area, thereby ignoring the stratigraphic evidence that the Caribbean was a land area in the Early Mesozoic (Van Bemmelen, 1972).

There are over 100 submarine plateaus and aseismic ridges scattered throughout the oceans, many of which were once subaerially exposed (Dickins, Choi, and Yeates, 1992; Nur and Ben-Avraham, 1982; Storetvedt, 1997) (Figure 10).

Fig. 10. Worldwide distribution of oceanic plateaus (black). (Reprinted with permission from Storetvedt, 1997. Copyright by Fagbokforlaget and K. M. Storetvedt.)

They make up about 10% of the ocean floor. Many appear to be composed of modified continental crust 20-40 km thick, far thicker than "normal" oceanic crust. They often have an upper 10-15-km crust with compressional-wave velocities typical of granitic rocks in continental crust. They have remained obstacles to predrift continental fits and have therefore been interpreted as extinct spreading ridges, anomalously thickened oceanic crust, or subsided continental fragments carried along by the "migrating" seafloor. If seafloor spreading is rejected, they cease to be anomalous and can be interpreted as submerged, in situ continental fragments that have not been completely "oceanized."

Shallow-water deposits ranging in age from mid-Jurassic to Miocene, as well as igneous rocks showing evidence of subaerial weathering, were found in 149 of the first 493 boreholes drilled in the Atlantic, Indian, and Pacific Oceans. These shallow-water deposits are now found at depths of 1-7 km, demonstrating that many parts of the present ocean floor were once shallow seas, shallow marshes, or land areas (Orlenok, 1986; Timofeyev and Kholodov, 1984). From a study of 402 oceanic boreholes in which shallow-water or relatively shallow-water sediments were found, Ruditch (1990) concluded that there is no systematic correlation between the age of shallow-water accumulations and their distance from the axes of the midoceanic ridges, thereby disproving the seafloor-spreading model. Some areas of the oceans appear to have undergone continuous subsidence, whereas others experienced alternating episodes of subsidence and elevation.
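For comparison, the correlation that seafloor spreading predicts is straightforward: crustal age should increase with distance from the ridge axis at the half-spreading rate, so the oldest shallow-water sediment resting directly on that crust should likewise become older away from the axis. With an illustrative half-spreading rate of 2 cm/yr (a value chosen here purely for illustration, not taken from Ruditch's study), crust 1,000 km from the axis should be about

\[
  t_{\mathrm{crust}} \;\approx\; \frac{d}{v_{1/2}}
    \;=\; \frac{1000\ \mathrm{km}}{2\ \mathrm{cm/yr}}
    \;=\; 5\times 10^{7}\ \mathrm{yr} \;=\; 50\ \mathrm{Myr}
\]

old, and any shallow-water deposits lying directly on it should be no older than that. It is the reported absence of any such systematic age-distance trend in the 402 boreholes that Ruditch takes as evidence against spreading.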

The Pacific Ocean appears to have formed mainly from the Late Jurassic to the Miocene, the Atlantic Ocean from the Late Cretaceous to the end of the Eocene, and the Indian Ocean during the Paleocene and Eocene. In the North Atlantic and Arctic Oceans, modified continental crust (mostly 10-20 km thick) underlies not only ridges and plateaus, but most of the ocean floor; only in deep-water depressions is typical oceanic crust found. Because deep-sea drilling has shown that large areas of the North Atlantic were previously covered with shallow seas, it is possible that much of the North Atlantic was continental crust before its rapid subsidence (Pavlenkova, 1995, 1998; Sanchez Cela, 1999).

Lower Paleozoic continental rocks with trilobite fossils have been dredged from seamounts scattered over a large area northeast of the Azores. Furon (1949) concluded that the continental cobbles had not been carried there by icebergs and that the area concerned was a submerged continental zone. Bald Mountain, from which a variety of ancient continental material has been dredged, could certainly be a foundered continental fragment. In the equatorial Atlantic, shallow-water and continental rocks are ubiquitous (Timofeyev et al., 1992; Udintsev, 1996). There is evidence that the midocean ridge system was shallow or partially emergent in Cretaceous to Early Tertiary time. For instance, in the Atlantic, subaerial deposits have been found on the North Brazilian Ridge (Bader et al., 1971), near the Romanche and Vema fracture zones adjacent to equatorial sectors of the Mid-Atlantic Ridge (Bonatti and Chermak, 1981; Bonatti and Honnorez, 1971), on the crest of the Reykjanes Ridge, and in the Faeroe-Shetland region (Keith, 1993) (Figure 11).

Oceanographic and geological data suggest that a large part of the Indian Ocean, particularly the eastern part, was land ("Lemuria") from the Jurassic until the Miocene. The evidence includes seismic and palynological data and subaerial weathering, which suggest that the Broken and Ninety East Ridges were part of an extensive, now sunken landmass; extensive drilling, seismic, magnetic, and gravity data pointing to the existence of an Alpine-Himalayan foldbelt in the northwestern Indian Ocean, associated with a foundered continental basement; data showing that continental basement underlies the Scott, Exmouth, and Naturaliste plateaus west of Australia; and thick Triassic and Jurassic sedimentation on the western and northwestern shelves of the Australian continent, which shows progradation and current directions indicating a western source (Dickins, 1994a; Udintsev, Illarionov, and Kalinin, 1990; Udintsev and Koreneva, 1982; Wezel, 1988).

Geological, geophysical, and dredging data provide strong evidence for the presence of Precambrian and younger continental crust under the deep abyssal plains of the present northwest Pacific (Choi, Vasil'yev, and Bhat, 1992; Choi, Vasil'yev, and Tuezov, 1990). Most of this region was either subaerially exposed or very shallow sea during the Paleozoic to Early Mesozoic and first became deep sea about the end of the Jurassic. Paleolands apparently existed on both sides of the Japanese islands. They were largely emergent during the Paleozoic-Mesozoic-Paleogene but were totally submerged during Paleogene to Miocene times.

Fig. 11. Areas in the Atlantic Ocean for which past subsidence has been established. Subsided areas are shaded. (Reprinted with permission from Dillon, 1974. Copyright by the AAPG, whose permission is required for further use.)

Those on the Pacific side included the great Oyashio paleoland and the Kuroshio paleoland. The latter, which was as large as the present Japanese islands and occupied the present Nankai Trough area, subsided in the Miocene, at the same time as the upheaval of the Shimanto geosyncline, to which it had supplied vast amounts of sediments (Choi, 1984, 1987; Harata et al., 1978; Kumon et al., 1988). There is also evidence of paleolands in the southwest Pacific around Australia (Choi, 1997) and in the southeast Pacific during the Paleozoic and Mesozoic (Bahlburg, 1993; Choi, 1998; Isaacson, 1975; Isaacson and Martinez, 1995) (Figure 12). After surveying the extensive evidence for former continental land areas in the present oceans, Dickins, Choi, and Yeates (1992) concluded,
We are surprised and concerned for the objectivity and honesty of science that such data can be overlooked or ignored. There is a vast need for future Ocean Drilling Program initiatives to drill below the base of the basaltic ocean floor crust to confirm the real composition of what is currently designated oceanic crust. (p. 198)

Conclusion
Plate tectonics, the reigning paradigm in the earth sciences, faces some very severe and apparently fatal problems.

Fig. 12. Former land areas in the present Pacific and Indian Oceans. Only those areas for which substantial evidence already exists are shown. Their exact outlines and full extent are as yet unknown. G1, Seychelles area; G2, Great Oyashio Paleoland; G3, Obruchev Rise; G4, Lemuria; S1, area of Ontong-Java Plateau, Magellan Sea Mounts, and Mid-Pacific Mountains; S2, Northeast Pacific; S3, Southeast Pacific including Chatham Rise and Campbell Plateau; S4, Southwest Pacific; S5, area including South Tasman Rise; S6, East Tasman Rise and Lord Howe Rise; S7, Northeast Indian Ocean; S8, Northwest Indian Ocean. (Reprinted with permission from Dickins, 1994a, 1994b. Copyright by J. M. Dickins.)

Far from being a simple, elegant, all-embracing global theory, it is confronted with a multitude of observational anomalies and has had to be patched up with a complex variety of ad hoc modifications and auxiliary hypotheses. The existence of deep continental roots and the absence of a continuous, global asthenosphere to "lubricate" plate motions have rendered the classical model of plate movements untenable. There is no consensus on the thickness of the "plates" and no certainty as to the forces responsible for their supposed movement. The hypotheses of large-scale continental movements, seafloor spreading, and subduction, as well as the relative youth of the oceanic crust, are contradicted by a substantial volume of data. Evidence for significant amounts of submerged continental crust in the present-day oceans provides another major challenge to plate tectonics. The fundamental principles of plate tectonics therefore require critical reexamination, revision, or rejection.

Acknowledgments
I would like to thank Ismail Bhat, Dong Choi, Mac Dickins, Hetu Sheth, and Chris Smoot for helpful comments and discussions.


References
Afanas'yev, G. D. (1967). New data on relationship between earth's crust and upper mantle. International Geology Review, 9, 1,513-1,536. Agocs, W. B., Meyerhoff, A. A., & Kis, K. (1992). Reykjanes Ridge: Quantitative determinations from magnetic anomalies. In Chatterjee, S., & Hotton, N., III (Eds.), New concepts in global tectonics (pp. 221-238). Lubbock, TX: Texas Tech University Press. Ahmad, F. (1990). The bearing of paleontological evidence on the origin of the Himalayas. In Barto-Kyriakidis, A. (Ed.), Critical aspects of the plate tectonics theory (Vol. 1, pp. 129-142). Athens, Greece: Theophrastus Publications, S. A. Anderson, D. L., Tanimoto, T., & Zhang, Y. (1992). Plate tectonics and hotspots: The third dimension. Science, 256, 1,645-1,651. Anderson, R. N., Honnorez, J., Becker, K., Adamson, A. C., Alt, J. C., Emmermann, R., Kempton, P. D., Kinoshita, H., Laverne, C., Mottl, M. J., & Newmark, R. L. (1982). DSDP hole 504B, the first reference section over 1 km through Layer 2 of the oceanic crust. Nature, 300, 589-594. Anfiloff, V. (1992). The tectonic framework of Australia. In Chatterjee, S., & Hotton, N., III (Eds.), New concepts in global tectonics (pp. 75-109). Lubbock, TX: Texas Tech University Press. Antipov, M. P., Zharkov, S. M., Kozhenov, V. Ya., & Pospelov, I. I. (1990). Structure of the Mid-Atlantic Ridge and adjacent parts of the abyssal plain at lat. 13°N. International Geology Review, 32, 468-478. Artyushkov, E. V. (1992). Role of crustal stretching on subsidence of the continental crust. Tectonophysics, 215, 187-207. Artyushkov, E. V., & Baer, M. A. (1983). Mechanism of continental crust subsidence in fold belts: The Urals, Appalachians and Scandinavian Caledonides. Tectonophysics, 100, 5-42. Aumento, F., & Loncarevic, B. D. (1969). The Mid-Atlantic Ridge near 45°N., III. Bald Mountain. Canadian Journal of Earth Sciences, 6, 11-23. Bader, R. G., Gerard, R. D., Hay, W. W., Benson, W. E., Bolli, H. M., Rothwell, W. T., Ruef, M. H., Riedel, W. R., & Sayles, F. L. (1971). Leg 4 of the Deep Sea Drilling Project. Science, 172, 1,197-1,205. Bahlburg, H. (1993). Hypothetical southeast Pacific continent revisited: New evidence from the middle Paleozoic basins of northern Chile. Geology, 21, 909-912. Baksi, A. K. (1999). Reevaluation of plate motion models based on hotspot tracks in the Atlantic and Indian Oceans. Journal of Geology, 107, 13-26. Barron, E. J., Harrison, C. G. A., & Hay, W. W. (1978). A revised reconstruction of the southern continents. American Geophysical Union Transactions, 59, 436-439. Beloussov, V. V. (1962). Basic problems in geotectonics. New York: McGraw-Hill. Beloussov, V. V. (1970). Against the hypothesis of ocean-floor spreading. Tectonophysics, 9, 489-511. Beloussov, V. V. (1980). Geotectonics. Moscow: Mir. Beloussov, V. V. (1981). Continental endogenous regimes. Moscow: Mir. Beloussov, V. V. (1990). Certain trends in present-day geosciences. In Barto-Kyriakidis, A. (Ed.), Critical aspects of the plate tectonics theory (Vol. 1, pp. 3-15). Athens, Greece: Theophrastus Publications, S. A. Beloussov, V. V. (1992). Endogenic regimes and the evolution of the tectonosphere. In Chatterjee, S., & Hotton, N., III (Eds.), New concepts in global tectonics (pp. 411-420). Lubbock, TX: Texas Tech University Press. Beloussov, V. V., & Ruditch, E. M. (1961). Island arcs in the development of the earth's structure (especially in the region of Japan and the Sea of Okhotsk). Journal of Geology, 69, 647-658. Ben-Avraham, Z., Nur, A., Jones, D., & Cox, A. (1981). Continental accretion: From oceanic plateaus to allochthonous terranes. Science, 213, 47-54. Benioff, H. (1954). Orogenesis and deep crustal structure-Additional evidence from seismology. Geological Society of America Bulletin, 65, 385-400. Bhat, M. I. (1987). Spasmodic rift reactivation and its role in pre-orogenic evolution of the Himalayan region. Tectonophysics, 134, 103-127. Bonatti, E. (1990). Subcontinental mantle exposed in the Atlantic Ocean on St Peter-Paul islets. Nature, 345, 800-802.

Bonatti, E., & Chermak, A. (1981). Formerly emerging crustal blocks in the Equatorial Atlantic. Tectonophysics, 72, 165-180. Bonatti, E., & Crane, K. (1982). Oscillatory spreading explanation of anomalously old uplifted crust near oceanic transforms. Nature, 300, 343-345. Bonatti, E., & Honnorez, J. (1971). Nonspreading crustal blocks at the Mid-Atlantic Ridge. Science, 174, 1,329-1,331. Briggs, J. C. (1987). Biogeography and plate tectonics. Amsterdam: Elsevier. Brooks, C. E. P. (1949). Climate through the ages. London: Ernest Benn. Bucher, W. H. (1933). The deformation of the earth's crust. Princeton, NJ: Princeton University Press. Bucher, W. H. (1956). Role of gravity in orogenesis. Geological Society of America Bulletin, 67, 1,295-1,318. Bullard, E. C., Everett, J. E., & Smith, A. G. (1965). The fit of the continents around the Atlantic. In A symposium on continental drift (Series A, 258, pp. 41-51). London: Royal Society of London Philosophical Transactions. Butler, R. F., Gehrels, G. E., McClelland, W. C., May, S. R., & Klepacki, D. (1989). Discordant paleomagnetic data from the Canadian Coast Plutonic Complex: Regional tilt rather than large-scale displacement? Geology, 17, 691-694. Carey, S. W. (1994). Creeds of physics. In Barone, M., & Selleri, F. (Eds.), Frontiers of fundamental physics (pp. 241-255). New York: Plenum. Carr, M. J. (1976). Underthrusting and Quaternary faulting in northern Central America. Geological Society of America Bulletin, 87, 825-829. Carr, M. J., Stoiber, R. E., & Drake, C. L. (1973). Discontinuities in the deep seismic zones under the Japanese arcs. Geological Society of America Bulletin, 84, 2,917-2,930. Cebull, S. E., & Shurbet, D. H. (1990). Fundamental problems with the plate-tectonic explanation of orogeny. In Barto-Kyriakidis, A. (Ed.), Critical aspects of the plate tectonics theory (Vol. 2, pp. 435-444). Athens, Greece: Theophrastus Publications, S. A. Cebull, S. E., & Shurbet, D. H. (1992). Conventional plate tectonics and orogenic models. In Chatterjee, S., & Hotton, N., III (Eds.), New concepts in global tectonics (pp. 111-117). Lubbock, TX: Texas Tech University Press. Chatterjee, S., & Hotton, N., III (1986). The paleoposition of India. Journal of Southeast Asian Earth Sciences, 1, 145-189. Chekunov, A. V., Gordienko, V. V., & Guterman, V. G. (1990). Difficulties of plate tectonics and possible alternative mechanisms. In Barto-Kyriakidis, A. (Ed.), Critical aspects of the plate tectonics theory (Vol. 1, pp. 397-433). Athens, Greece: Theophrastus Publications, S. A. Choi, D. R. (1984). Late Permian-early Triassic paleogeography of northern Japan: Did microplates accrete to Japan? Geology, 12, 728-731. Choi, D. R. (1987). Continental crust under the NW Pacific Ocean. Journal of Petroleum Geology, 10, 425-440. Choi, D. R. (1997). Geology of the oceans around Australia, parts I-III. New Concepts in Global Tectonics Newsletter, 3, 8-13. Choi, D. R. (1998). Geology of the southeast Pacific, parts 1-3. New Concepts in Global Tectonics Newsletter, 7, 11-15; 8, 8-13; 9, 12-14. Choi, D. R. (1999a). Oceanic lineaments and major structures in Central America. New Concepts in Global Tectonics Newsletter, 11, 21-22. Choi, D. R. (1999b). Geology of East Pacific: Middle America trench. New Concepts in Global Tectonics Newsletter, 12, 10-16. Choi, D. R. (1999c). Precambrian structures in South America: Their connection to the Pacific and Atlantic Oceans. New Concepts in Global Tectonics Newsletter, 13, 5-7. Choi, D. R., Vasil'yev, B. I., & Bhat, M. I. (1992). Paleoland, crustal structure, and composition under the northwestern Pacific Ocean. In Chatterjee, S., & Hotton, N., III (Eds.), New concepts in global tectonics (pp. 179-191). Lubbock, TX: Texas Tech University Press. Choi, D. R., Vasil'yev, B. I., & Tuezov, I. K. (1990). The great Oyashio paleoland: A Paleozoic-Mesozoic landmass in the northwestern Pacific. In Barto-Kyriakidis, A. (Ed.), Critical aspects of the plate tectonics theory (Vol. 1, pp. 197-213). Athens, Greece: Theophrastus Publications, S. A. Collette, B. J. (1968). On the subsidence of the North Sea area. In Donovan, D. T. (Ed.), Geology of shelf seas (pp. 15-30). Edinburgh, Scotland: Oliver & Boyd. Dickins, J. M. (1987). Tethys-A geosyncline formed on continental crust? In McKenzie, K. G.
(Ed.), Shallow Tethys 2: International Symposium, Wagga Wagga, 1986 (pp. 149-158). Rotterdam, The Netherlands: A. A. Balkema. Dickins, J. M. (1994a). What is Pangaea? In Embry, A. F., Beauchamp, B., & Glass, D. G. (Eds.), Pangaea: Global environments and resources (Memoir 17, pp. 67-80). Calgary, Alberta: Canadian Society of Petroleum Geologists. Dickins, J. M. (1994b). The nature of the oceans or Gondwanaland, fact and fiction. In Gondwana nine, Ninth International Gondwana Symposium, Hyderabad, India, 1994 (pp. 387-396). Rotterdam, The Netherlands: A. A. Balkema. Dickins, J. M. (1994c). The southern margin of Tethys. In Gondwana nine, Ninth International Gondwana Symposium, Hyderabad, India, 1994 (pp. 1,125-1,134). Rotterdam, The Netherlands: A. A. Balkema. Dickins, J. M. (1997). Rift, rifting. New Concepts in Global Tectonics Newsletter, 4, 3-4. Dickins, J. M., & Choi, D. R. (1997). Editorial. New Concepts in Global Tectonics Newsletter, 5, 1-2.

Dickins, J. M., & Choi, D. R. (1998). Fatal flaw-Who are the culprits? New Concepts in Global Tectonics Newsletter, 8, 1-2. Dickins, J. M., Choi, D. R., & Yeates, A. N. (1992). Past distribution of oceans and continents. In Chatterjee, S., & Hotton, N., I11 (Eds.), New concepts in global tectonics (pp. 193-199). Lubbock, TX: Texas Tech University Press. Dickinson, W. R., & Butler, R. F. (1998). Coastal and Baja California paleomagnetism reconsidered. Geological Society of America Bulletin, 110, 1,268-1,280. Dietz, R. S., & Holden, J. C. (1970). The breakup of Pangaea. Scientific American, 223, 3 0 4 1 . Dillon, L. S. (1974). Neovolcanism: A proposed replacement for the concepts of plate tectonics and continental drift. In Kahle, C. F. (Ed.), Plate tectonics-Assessments and reassessments (Memoir 23, pp. 167-239). Tulsa, OK: American Association of Petroleum Geologists. Dott, R. H., Jr., & Batten, R. L. (1981). Evolution of the earth (3rd ed.). New York: McGraw-Hill. Dziewonski, A. M., & Anderson, D. L. (1984). Seismic tomography of the earth's interior. American Scientist, 72,483-494. Dziewonski, A. M., & Woodhouse, J. H. (1987). Global images of the earth's interior. Science, 236,37-48. Eyles, N., & Eyles, C. H. (1993). Glacial geologic confirmation of an intraplate boundary in the Parana basin of Brazil. Geology, 21, 459-462. Fallon, F. W., & Dillinger, W. H. (1992). Crustal velocities from geodetic very long baseline interferometry. Journal of Geophysical Research, 97, 7,129-7,136. Forte, A. M., Dziewonski, A. M., & O'Connell, R. J. (1995). Continent-ocean chemical heterogeneity in the mantle based on seismic tomography. Science, 268, 386-388. Furon, R. (1949). Sur les trilobites draguks a 4225 m de profondeur par le Talisman (1 883). Paris, AcadCmie des Sciences, Comptes Rendus, 228, 1,509-1,5 10. (For translation, see Schneck, 1974.) Gay, S. Parker, Jr. (1973). Pervasive orthogonal fracturing in earth k continental crust. Salt Lake City, UT: American Stereo Map Co. Genshaft, Yu. S., & Saltykowsky, A. Ya. (1990). Continental volcanism, xenoliths, and "plate tectonics." In Barto-Kyriakidis, A. (Ed.), Critical aspects of the plate tectonics theory (Vol. 2, pp. 267-280). Athens, Greece: Theophrastus Publications, S. A. Gilluly, J. (1955). Geologic contrasts between continents and ocean basins. In Crust of the Earth (Special paper 62, pp. 7-1 8). Boulder, CO: Geological Society of America. Gnibidenko, H. S., Krasny, M. L., & Popov, A. A. (1978). Tectonics of the Kuril-Kamchatka deep-sea trench. Eos, 59, 1,184. Gordon, R. G., & Stein, S. (1992). Global tectonics and space geodesy. Science, 256, 333-342. Gossler, J., & Kind, R. (1996). Seismic evidence for very deep roots of continents. Earth and Planetary Science Letters, 138, 1-13. Grad, H. (1971). Magnetic properties of a contained plasma. New York Academy of Science Annals, 182(17), 636-650. Grand, S. P. (1987). Tomographic inversion for shear velocity beneath the North American plate. Journal of Geophysical Research, 92, 14,065-14,090. Grant, A. C. (1980). Problems with plate tectonics: The Labrador Sea. Bulletin of Canadian Petroleum Geology, 28, 252-278. Grant, A. C. (1992). Intracratonic tectonism: Key to the mechanism of diastrophism. In Chatter-
jee, S., & Hotton, N., 111 (Eds.), New concepts in global tectonics (pp. 65-73). Lubbock, TX: Texas Tech University Press. Gregory, J. W. (1899). The plan of the earth and its causes. The Geographical Journal, 13, 225-250. Gregory, J. W. (1 90 1). The plan of the earth and its causes. American Geologist, 27, 100-119, 137-147. Gregory, J. W. (1929). The geological history of the Atlantic Ocean. Quarterly Journal of Geological Society, 85. 68-122. Hall, J. M., & Robinson, P. T. (1979). Deep crustal drilling in the North Atlantic Ocean. Science, 204, 573-586. Hallam, A. (1 976). How closely did the continents fit together? Nature, 262, 94-95. Hallam, A. (1977). Secular changes in marine inundation of USSR and North America through the Phanerozoic. Nature, 269, 769-772. Hallam, A. (1979). A decade of plate tectonics. Nature, 279, 478. Harata, T., Hisatomi, K., Kumon, F., Nakazawa, K., Tateishi, M., Suzuki, H., & Tokuoka, T. (1978). Shimanto geosyncline and Kuroshio paleoland. Journal of Physics of Earth, 26(S~ppl.), 357-366. Harrison, C. G. A., Miskell, K. J., Brass, G. W., Saltzman, E. S., & Sloan, J. L. (1983). Continental hypsography. Tectonics, 2, 357-377. Haxby, W. F., Turcotte, D. L., & Bird, J. M. (1976). Thermal and mechanical evolution of the Michigan Basin. Tectonophysics, 36, 57-75. Hodych, J. P., & Bijaksana, S. (1993). Can remanence anisotropy detect paleomagnetic inclination shallowing due to compaction? A case study using Cretaceous deep-sea limestones. Journal of Geophysical Research, 98, 22,429-22,44 1. Holmes, A. (1965). Principles ofphysical geology (2nd ed.). London: Thomas Nelson and Sons. Hunt, C. B. (1992). Geochronology. Encyclopaedia Britannica (1 5th ed., Vol. 19, pp. 824-826). Hunt, C. W. (Ed.). (1992). Expandinggeospheres. Calgary, Alberta: Polar Publishing. Ilich, M. (1972). New global tectonics: Pros and cons. American Association of Petroleum Geologists Bulletin, 56, 360-363. Irving, E., & Archibald, D. A. (1990). Bathozonal tilt corrections to paleomagnetic data from mid-Cretaceous plutonic rocks: Examples from the Omineca belt, British Columbia. Journal of Geophysical Research, 95, 4,579-4,585. Isaacson, P. E. (1975). Evidence for a western extracontinental land source during the Devonian period in the Central Andes. Geological Society of America Bulletin, 86, 39-46. Isaacson, P. E., & Martinez, E. D. (1995). Evidence for a middle-late Paleozoic foreland basin and significant paleolatitudinal shift, central Andes. In Tankard, A. J., Soruco, R. S., & Welsink, H. J. (Eds.), Petroleum basins ofSouth America (Memoir 62, pp. 231-249). Tulsa, OK: American Association of Petroleum Geologists. Isacks, B. L., & Barazangi, M. (1977). Geometry of Benioff zones: Lateral segmentation and downwards bending of the subducted lithosphere. In Talwani, M., & Pitman, W. C., I11 (Eds.), Island arcs, deep sea trenches and back-arc basins (Maurice Ewing Series 1, pp. 99-1 14). Washington, DC: American Geophysical Union. James, P. (1994). The tectonics of Geoid changes. Calgary, Alberta: Polar Publishing. Jeffreys, H. (1974). Theoretical aspects of continental drift. In Kahle, C. F. (Ed.), Plate tectonics-Assessments and reassessments (Memoir 23, pp. 395-405). Tulsa, OK: American Association of Petroleum Geologists. Jeffreys, H. (1976). The earth: its origin, history andphysical constitution (6th ed.). Cambridge, UK: Cambridge University Press. Jordan, T. H. (1975). The continental tectosphere. Reviews of Geophysics and Space Physics, 13, 1-12. Jordan, T. H. (1978). 
Composition and development of the continental tectosphere. Nature, 274, 544-548. Jordan, T. H. (1 979). The deep structure of the continents. Scientific American, 240, 70-82. Joyner, W. B. (1967). Basalt-eclogite transition as a cause for subsidence and uplift. Journal of Geophysical Research, 72, 4,9774,998. Kamen-Kaye, M. (1970). Age of the basins. Geotimes, 115, 6-8. Kashfi, M. S. (1992). Geological evidence for a simple horizontal compression of the crust in the Zagros Crush Zone. In Chatterjee, S., & Hotton, N., I11 (Eds.), New concepts in global tectonics (pp. 119-130). Lubbock, TX: Texas Tech University Press.
Katterfeld, G. H., & Charushin, G. V. (1973). General grid systems of planets. Modern Geology, 4,243-287. Keith, M. L. (1993). Geodynamics and mantle flow: An alternative earth model. Earth-Science Reviews, 33, 153-337. Kent, D. V., & Smethurst, M. A. (1998). Shallow bias of paleomagnetic inclinations in the Paleozoic and Precambrian. Earth and Planetary Science Letters, 160, 391-402. Khudoley, K. M. (1974). Circum-Pacific Mesozoic ammonoid distribution: Relation to hypotheses of continental drift, polar wandering, and earth expansion. In Kahle, C. F. (Ed.), Plate tectonics-Assessments and reassessments (Memoir 23, pp. 295-330). Tulsa, OK: American Association of Petroleum Geologists. Kiskyras, D. A. (1990). Some remarks on the plate tectonics concept with respect to geological and geophysical problems in the Greek area. In Barto-Kyriakidis, A. (Ed.), Critical aspects of the plate tectonics theory (Vol. 1, pp. 2 15-244). Athens, Greece: Theophrastus Publications, S. A. Krebs, W. (1 975). Formation of southwest Pacific island arc-trench and mountain systems: Plate or global-vertical tectonics? American Association of Petroleum Geologists Bulletin, 59, 1,639-1,666. Kumon, F., Suzuki, H., Nakazawa, K., Tokuoka, T., Harata, T., Kimura, K., Nakaya, S., Ishigami, T., & Nakamura, K. (1988). Shimanto belt in the Kii peninsula, southwestern Japan. Modern Geology, 12, 71-96. Le Grand, H. E. (1988). Drifting continents and shifting theories. Cambridge, UK: Cambridge University Press. Lerner-Lam, A. L. (1988). Seismological studies of the lithosphere. Lamont-Doherty Geological Observatory, Yearbook, pp. 50-55. Lowman, P. D., Jr. (1985). Plate tectonics with fixed continents: A testable hypothesis-I. Journal of Petroleum Geology, 8, 373-388. Lowman, P. D., Jr. (1986). Plate tectonics with fixed continents: A testable hypothesis-11. Journal ofPetroleum Geology, 9, 71-87. Lowman, P. D., Jr. (1992a). Plate tectonics and continental drift in geologic education. In Chatterjee, s . , & Hotton, N., 111 (Eds.), New concepts in global tectonics (pp. 3-9). Lubbock, TX: Texas Tech University Press. Lowman, P. D., Jr. (1992b). Geophysics from orbit: The unexpected surprise. Endeavour, 16, 50-58. Luts, B. G. (1990). Types of ophiolitic formations (are they remnants of oceanic crust?). In BartoKyriakidis, A. (Ed.), Critical aspects of the plate tectonics theory (Vol. 2, pp. 281-305). Athens, Greece: Theophrastus Publications, S. A. Lyttleton, R. A., & Bondi, H. (1992). How plate tectonics may appear to a physicist. Journal of the British Astronomical Association, 102, 194-195. Lyustikh, E. N. (1967). Criticism of hypotheses of convection and continental drift. Geophysical Journal of Royal Astronomical Society, 14, 347-352. MacDonald, G. J. F. (1963). The deep structure of continents. Reviews of Geophysics, 1, 587-665. Macdougall, D. (1971). Deep sea drilling: Age and composition of an Atlantic basaltic intrusion. Science, 171, 1,244-1,245. Manabe, S., & Broccoli, A. J. (1990). Mountains and arid climates of middle latitudes. Science, 247, 192-195. Mantura, A. J. (1972). New global tectonics and "the new geometry." American Association of Petroleum Geologists Bulletin, 56, 2,451-2,455. Maxwell, J. C. (1974). The new global tectonics-An assessment. In Kahle, C. F. (Ed.), Plate tectonics-Assessments and reassessments (Memoir 23, pp. 24-42). Tulsa, OK: American Association of Petroleum Geologists. McGeary, D., & Plummer, C. C. (1998). Physicalgeology: Earth revealed (3rd ed.). Boston, MA: WCB, McGraw-Hill. McKenzie, K. 
G. (1987). Tethys and her progeny. In McKenzie, K. G. (Ed.), Shallow Tethys 2, International Symposium, Wagga Wagga, 1986 (pp. 501-523). Rotterdam, The Netherlands: A. A. Balkema. McLeish, A. (1992). Geological science. Walton-on-Thames, UK: Thomas Nelson and Sons. Melson, W. G., Hart, S. R., & Thompson, G. (1972). St. Paul's Rocks, equatorial Atlantic: Petrogenesis, radiometric ages, and implications on sea-floor spreading. In Shagam, R., Hargraves, R. B., Morgan, W. J., Van Houten, F. B., Burk, C. A., Holland, H. D., & Hollister, L. C. (Eds.),
Studies in earth and space sciences (Memoir 132, pp. 241-272). Boulder, CO: Geological Society of America. Menard, H. W. (1967). Transitional types of crust under small ocean basins. Journal of Geophysical Research, 72, 3,061-3,073. Merrill, R. T., McElhinny, M. W., & McFadden, P. L. (1996). The magnetic$eld of the earth. San Diego, CA: Academic Press. Meyerhoff, A. A. (1 970a). Continental drift: Implications of paleomagnetic studies, meteorology, physical oceanography, and climatology. Journal of Geology, 78, 1-5 1. Meyerhoff, A. A. (1970b). Continental drift, 11: High latitude evaporite deposits and geologic history of Arctic and North Atlantic oceans. Journal of Geology, 78,406-444. Meyerhoff, A. A. (1974). Crustal structure of northern North Atlantic Ocean-A review. In Kahle, C. F. (Ed.), Plate tectonics-Assessments and reassessments (Memoir 23, pp. 423-433). Tulsa, OK: American Association of Petroleum Geologists. Meyerhoff, A. A. (1995). Surge-tectonic evolution of southeastern Asia: A geohydrodynamics approach. Journal of Southeast Asian Earth Sciences, 12, 143-247. Meyerhoff, A. A., & Hatten, C. W. (1 974). Bahamas salient of North America. In Burk, C. A., & Drake, C. L. (Eds.), The geology of continental margins (pp. 429-446). Berlin, Germany: Springer-Verlag. Meyerhoff, A. A., & Meyerhoff, H. A. (1972). The new global tectonics: Age of linear magnetic anomalies of ocean basins. American Association of Petroleum Geologists Bulletin, 56, 337-359. Meyerhoff, A. A., & Meyerhoff, H. A. (1974a). Tests of plate tectonics. In Kahle, C. F. (Ed.), Plate tectonics-Assessments and reassessments (Memoir 23, pp. 43-145). Tulsa, OK: American Association of Petroleum Geologists. Meyerhoff, A. A., & Meyerhoff, H. A. (1974b). Ocean magnetic anomalies and their relations to continents. In Kahle, C. F. (Ed.), Plate tectonics-Assessments and reassessments (Memoir 23, pp. 41 1-422). Tulsa, OK: American Association of Petroleum Geologists. Meyerhoff, A. A., & Teichert, C. (1971). Continental drift, 111: Late Paleozoic glacial centers and Devonian-Eocene coal distribution. Journal of Geology, 79, 285-321. Meyerhoff, A. A., Kamen-Kaye, M., Chen, C., & Taner, 1. (1991). China-Stratigraphy, paleogeography and tectonics. Dordrecht: Kluwer. Meyerhoff, A. A., Agocs, W. B., Taner, I., Morris, A. E. L., & Martin, B. D. (1992a). Origin of midocean ridges. In Chatterjee, S., & Hotton, N., 111(Eds.), New concepts in global tectonics (pp. 151-178). Lubbock, TX: Texas Tech University Press. Meyerhoff, A. A., Taner, I., Morris, A. E. L., Martin, B. D., Agocs, W. B., & Meyerhoff, H. A. (1992b). Surge tectonics: A new hypothesis of earth dynamics. In Chatterjee, S., & Hotton, N., 111(Eds.), New concepts in global tectonics (pp. 309-409). Lubbock, TX: Texas Tech University Press. Meyerhoff, A. A., Taner, I., Morris, A. E. L., Agocs, W. B., Kaymen-Kaye, M., Bhat, M. I., Smoot, N. C., & Choi, D. R. (1996a). Surge tectonics: A new hypothesis of global geodynamics (Meyerhoff Hull, D., Ed.). Dordrecht: Kluwer. Meyerhoff, A. A., Boucot, A. J., Meyerhoff Hull, D., & Dickins, J. M. (1996b). Phanerozoic faunal &floral realms of the earth: The intercalary relations of the Malvinokaffric and Gondwana faunal realms with the Tethyan faunal realm (Memoir 189). Geological Society of America. Meyerhoff, H. A., & Meyerhoff, A. A. (1977). Genesis of island arcs. In Internationalsymposium on geodynamics in Southwest Pacific Noumea, Novelle Cal Edonie (pp. 357-370). Paris: ~ d i tions Technip. Morris, A. E. L., Taner, I., Meyerhoff, H. 
A., & Meyerhoff, A. A. (1990). Tectonic evolution of the Caribbean region: Alternative hypothesis. In Dengo, G., & Case, J. E. (Eds.), The Caribbean Region (pp. 433-457). Boulder, CO: Geological Society of America. Munk, W. H., & MacDonald, G. J. F. (1975). The rotation of the earth. Cambridge, UK: Cambridge University Press. Murdock, J. N. (1997). Overview of the history of one man's challenges to strict plate tectonics. New Concepts in Global Tectonics Newsletter, 4, 23-25. Nafe, J. E., & Drake, C. L. (1969). Floor of the North Atlantic-Summary of geophysical data. In Kay, M. (Ed.), North Atlantic-Geology and continental drift (Memoir 12, pp. 59-87). Tulsa, OK: American Association of Petroleum Geologists.
Nitecki, M. H., Lemke, J. L., Pullman, H. W., & Johnson, M. E. (1978). Acceptance of plate tectonic theory by geologists. Geology, 6, 661-664. Nur, A., & Ben-Avraham, Z. (1982). Displaced terranes and mountain building. In Hsii, K. J. (Ed.), Mountain buildingprocesses (pp. 73-84). London: Academic Press. O'Driscoll, E. S. T. (1 980). The double helix in global tectonics. Tectonophysics, 63, 3 9 7 4 17. Ollier, C. D. (1990). Mountains. In Barto-Kyriakidis, A. (Ed.), Critical aspects of the plate tectonics theory (Vol. 2, pp. 211-236). Athens, Greece: Theophrastus Publications, S. A. Orlenok, V. V. (1986). The evolution of ocean basins during Cenozoic time. Journal of Petroleum Geology, 9, 207-216. Ozima, M., Saito, K., Matsuda, J., Zashu, S., Aramaki, S., & Shido, F. (1976). Additional evidence of existence of ancient rocks in the Mid-Atlantic Ridge and the age of the opening of the Atlantic. Tectonophysics, 31, 59-71. Pavlenkova, N. 1. (1 990). Crustal and upper mantle structure and plate tectonics. In Barto-Kyriakidis, A. (Ed.), Critical aspects of the plate tectonics theory (Vol. 1, pp. 73-86). Athens, Greece: Theophrastus Publications, S. A. Pavlenkova, N. I. (1995). Structural regularities in the lithosphere of continents and plate tectonics. Tectonophysics, 243, 223-229. Pavlenkova, N. I. (1 996). General features of the uppermost mantle stratification from long-range seismic profiles. Tectonophysics, 264, 261-278. Pavlenkova, N. I. (1998). Endogenous regimes and plate tectonics in northern Eurasia. Physics and Chemistry of the Earth, 23, 799-810. Pilot, J., Werner, C.-D., Haubrich, F., & Baumann, D. (1 998). Paleozoic and Proterozoic zircons from the Mid-Atlantic Ridge. Nature, 393, 676-679. Polet, J., & Anderson, D. L. (1995). Depth extent of cratons as inferred from tomographic studies. Geology, 23,205-208. Pollack, H. N., & Chapman, D. S. (1977). On the regional variation of heat flow, geotherms, and lithospheric thickness. Tectonophysics, 38, 279-296. Pratsch, J. C. (1986, July 14). Petroleum geologist's view of oceanic crust age. Oil and Gas Journal, 112-116. Pratsch, J. C. (1990). Relative motions in geology: Some philosophical differences. Journal of Petroleum Geology, 13, 229-234. Ranneft, T. S. M. (1979). Segmentation of island arcs and application to petroleum geology. Journal of Petroleum Geology, 1, 35-53. ~r Reynolds, P. H., & Clay, W. (1977). Leg 37 basalts and gabbro: K-Ar and 4 0 ~ r - 3 9 dating. In Aumento, F., Melson, W. G., et al. (Eds), Initial reports of the Deep Sea Drilling Project (Vol. 37, pp. 629-630). Washington, DC: U.S. Government Printing Office. Ross, D. A. (1974). The Black Sea. In Burk, C. A., & Drake, C. L. (Eds.), The geology of continental margins (pp. 669-682). Berlin, Germany: Springer-Verlag. Ruddiman, W. F., & Kutzbach, J. E. (1989). Effects of plateau uplift on late Cenozoic climate. Eos, 70, 294. Ruditch, E. M. (1990). The world ocean without spreading. In Barto-Kyriakidis, A. (Ed.), Critical aspects of theplate tectonics theory (Vol. 2, pp. 343-395). Athens, Greece: Theophrastus Publications, S. A. Rutland, R. W. R. (1982). On the growth and evolution of continental crust: A comparative tectonic approach. Journal and Proceedings of the Royal Society uf New South Wales, 115, 33-60. Sanchez Cela, V. (1999). Formation of Ma6c-ultramafic rocks in the crust: need for a new upper mantle. Zaragoza: Universidad de Zaragoza. Saull, V. A. (1986). Wanted: Alternatives to plate tectonics. Geology, 14, 536. Saxena, M. N., & Gupta, V. J. (1990). 
Role of foredeep tectonics, geological and palaeontological data, gravity tectonics in the orogeny and uplift of the Himalayas, vis-a-vis continental drift and plate tectonics concepts. In Barto-Kyriakidis, A. (Ed.), Critical aspects of the plate tectonics theory (Vol. 1, pp. 105-128). Athens, Greece: Theophrastus Publications, S. A. Schneck, M. C. (1974). Mid-Atlantic trilobites. Geotimes, 19, 16. Scholl, D. W., & Marlow, M. S. (1974). Global tectonics and the sediments of modern and ancient trenches: some different interpretations. In Kahle, C. F. (Ed.), Plate tectonics-Assessments and reassessments (Memoir 23, pp. 255-272). American Association of Petroleum Geologists.
Scotese, C. R., Gahagan, L. M., & Larson, R. L. (1988). Plate tectonic reconstructions of the Cretaceous and Cenozoic ocean basins. Tectonophysics, 155, 27-48. Seyfert, C. K. (1998). The earth: Its properties, composition, and structure. Encyclopaedia Brit a n n i c ~CD-ROM, 1994-1998. , Shapiro, M. N. (1990). Is the opening of the Atlantic manifested in the structure of the northern framing of the Pacific Ocean? In Barto-Kyriakidis, A. (Ed.), Critical aspects of the plate tectonics theory (Vol. 1, pp. 179-196). Athens, Greece: S. A. Theophrastus Publications. Sheridan, R. E. (1 974). Atlantic continental margin of North America. In Burk, C. A., & Drake, C. L. (Eds.), The geology of continental margins (pp. 39 1-407). Berlin, Germany: Springer-Verlag. Sheth, H. C. (1997). Ophiolites: Another paradox. New Concepts in Global Tectonics Newsletter, 5, 5-6. Sheth, H. C. (1999). Flood basalts and large igneous provinces from deep mantle plumes: Fact, fiction, and fallacy. Tectonophysics, 311, 1-29. Shirley, K. (1998a). Ocean floor mapped from space. AAPG Explorer, October, 20-23. Shirley, K. (1998b). New maps show continent ties. AAPG Explorer, November, 20-2 1. Skinner, B. J., & Porter, S. C. (1995). The dynamic earth: An introduction to physical geology (3rd ed.). New York: John Wiley & Sons. Sloss, L. L., & Speed, R. C. (1974). Relationships of cratonic and continental-margin tectonic episodes. In Dickinson, W. R. (Ed.), Tectonics and sedimentation (Special Publication 22, pp. 98-119). Tulsa, OK: Society of Economic Paleontologists and Mineralogists. Smiley, C. J. (1 974). Analysis of crustal relative stability from some late Paleozoic and Mesozoic floral records. In Kahle, C. F. (Ed.), Plate tectonics-Assessments and reassessments (Memoir 23, pp. 33 1-360). Tulsa, OK: American Association of Petroleum Geologists. Smiley, C. J. (1976). Pre-Tertiary phytogeography and continental drift-Some apparent discrepancies. In Gray, J., & Boucot, A. J. (Eds.), Historical biogeography, plate tectonics, and the changing environment (pp. 3 1 1 319). Corvallis, OR: Oregon State University Press. Smiley, C. J. (1992). Paleofloras, faunas, and continental drift: Some problem areas. In Chatterjee, s., & Hotton, N., 111 (Eds.), New concepts in global tectonics (pp. 241-257). Lubbock, TX: Texas Tech University Press. Smith, A. G., & Hallam, A. (1970). The fit of the southern continents. Nature, 225, 139-144. Smith, A. G., Hurley, A. M., & Briden, J. C. (198 1). Phanerozoic paleocontinental world maps. Cambridge, UK: Cambridge University Press. Smith, 1). E., Kolenkiewicz, R., Nerem, R. S., Dunn, P. J., Torrence, M. H., Robbins, J. W., Klosko, S. M., Williamson, R. G., & Pavlis, E. C. (1994). Contemporary global horizontal crustal motion. Geophysics Journal International, 11 9, 5 11-520. Smoot, N. C. (1997a). Aligned buoyant highs, across-trench deformation, clustered volcanoes, and deep earthquakes are not aligned with plate-tectonic theory. Geomorphology, 18, 199-222. Smoot, N. C. (1997b). Magma floods, microplates, and orthogonal intersections. New Concepts in Global Tectonics Newsletter, 5, 8-1 3. Smoot, N. C. (1998a). The trans-Pacific Chinook Trough megatrend. Geomorphology, 24, 333-351. Smoot, N. C. (1998b). WNW-ESE Pacific lineations. New Concepts in Global Tectonics Newsletter, 9, 7-11. Smoot, N. C. (1999). Orthogonal intersections of megatrends in the Western Pacific ocean basin: A case study of the Mid-Pacific mountains. Geomorphology, 30, 323-356. Smoot, N. C., & Meyerhoff, A. A. (1995). 
Tectonic fabric of the Atlantic Ocean floor: Speculation vs. reality. Journal of Petroleum Geology, 18, 207-222. Spence, W. (1977). The Aleutian arc: Tectonic blocks, episodic subduction, strain diffusion, and magmatic generation. Journal of Geophysical Research, 82, 2 13-230. Spencer, E. W. (1 977). Introduction to the structure of the earth (2nd ed.). New York: McGrawHill. Stanley, S. M. (1989). Earth and life through time (2nd ed.). New York: W.H. Freeman and Company. Steers, J. A. (1950). The unstable earth (5th ed.). London: Methuen. Stocklin, J. (1989). Tethys evolution in the Afghanistan-Pamir-Pakistan region. In Sengor, A. M. C. (Ed.), Tectonic evolution of the Tethyan Region (pp. 241-264). Dordrecht: Kluwer. Storetvedt, K. M. (1992). Rotating plates: New concept of global tectonics. In Chatterjee, S., &
Hotton, N., I11 (Eds.), New concepts in global tectonics (pp. 203-220). Lubbock, TX: Texas Tech University Press. Storetvedt, K. M. (1997). Our evolving planet: Earth history in new perspective. Bergen, Norway: Alma Mater. Suzuki, Y., Harada, I., Iikawa, K., Kobayashi, K., Nomura, N., Oda, K., Ogawa, Y., Watanabe, F., & Yamazaki, K. (1997). Geological structure of northeast Honshu, Japan in contradiction to the plate tectonics. New Concepts in Global Tectonics Newsletter, 5, 17-1 9. Swift, S. A., & Carr, M. J. (1974). The segmented nature of the Chilean seismic zone. Physics of the Earth and Planetary Interiors, 9, 183-19 1. Tarling, D. H. (1971). Gondwanaland, palaeomagnetism and continental drift. Nature, 229, 17-2 1. Tarling, D. H. (1982). Land bridges and plate tectonics. Geobios, MCmoire SpCcial, 6, 361-374. Teichert, C. (1974). Marine sedimentary environments and their faunas in Gondwana area. In Kahle, C. F. (Ed.), Plate tectonics-Assessments and reassessments (Memoir 23, pp. 361-394). Tulsa, OK: American Association of Petroleum Geologists. Teichert, C., & Meyerhoff, A. A. (1972). Continental drift and the marine environment. Montreal: 24th International Geology Conference, pp. 339-349. Teisseyre, R., Wojtczak-Gadomska, B., Vesanen, E., & Maki, M-L. (1974). Focus distribution in South American deep-earthquake regions and their relation to geodynamic development. Physics of the Earth and Planetary Interiors, 9, 290-305. Timofeyev, P. P., & Kholodov, V. N. (1984). The problem of existence of oceans in geologic history. Doklady Akademii Nauk. SSSR, Earth Science Sections, 276, 61-63. Timofeyev, P. P., Udintsev, G. B., Agapova, G. V., Antipov, M. P., Boyko, N. I., Yeremeyev, V. V., Yefimov, V. N., Kurentsova, N. A., & Lyubimov, V. V. (1992). Equatorial segment of the MidAtlantic Ridge as a possible structural barrier between the North and South Atlantic. USSR Academy of Sciences Transactions (Doklady), Earth Science Sections, 312, 133-135. Tuezov, I. K. (1 998). Tectonics, structure, geodynamics and geological nature of the west Pacific active margin, parts I & 2. New Concepts in Global Tectonics Newslettev, 7, 16-23; 8, 20-24. Udintsev, G. B. (Ed.). (1996). Equatorial segment of the Mid-Atlantic Ridge. IOC Technical Series No. 46, UNESCO. Udintsev, G. B., & Koreneva, E. V. (1982). The origin of aseismic ridges of the eastern Indian Ocean. In Scrutton, R. A., & Talwani, M. (Eds.), The oceanfloor (pp. 204-209). Chichester: John Wiley & Sons. Udintsev, G. B., Illarionov, V. K., & Kalinin, A. V. (1990). The West Australian ridge. In BartoKyriakidis, A. (Ed.), Critical aspects of the plate tectonics theory (Vol. 2, pp. 307-341). Athens, Greece: Theophrastus Publications, S. A. Udintsev, G. B., Kurentsova, N. A., Pronina, N. V., Smirnova, S. B., & Ushakova, M. G . (1993). Finds of continental rocks and sediments of anomalous age in the equatorial segment of the Mid-Atlantic Ridge. USSR Academy of Sciences Transactions (Doklady), Earth Science Sections, 312, 111-114. Umbgrove, J. H. F. (1 947). Thepulse of the earth (2nd ed.). The Hague: Martinus Nijhoff. Uyeda, S. (1986). Facts, ideas and open problems on trench-arc-backarc systems. In Wezel, F. C. (Ed.), The origin of arcs (pp. 435-460). Amsterdam: Elsevier. Van Andel, T. H. (1984). Plate tectonics at the threshold of middle age. Geologie en Mijnbouw, 63, 337-341. Van Andel, T. H. (1994). New views on an oldplanet: A history ofglobal change (2nd ed.). Cambridge, UK: Cambridge University Press. Van Andel, T. H. (1998). Plate Tectonics. 
Encyclopaedia Britannica, CD-ROM, 1994-1998. Van Bemmelen, R. W. (1972). Geodynamic models: An evaluation and a synthesis. Amsterdam: Elsevier. Van der Linden, W. J. M. (1977). How much continent under the ocean? Marine Geophysical Research, 3,209-224. Van der Voo, R. (1998). A complex field. Science, 281, 791-792. Van Hinte, J. E., & Ruffman, A. (1995). Palaeozoic microfossils from Orphan Knoll, NW Atlantic Ocean. Scripta Geologica, 109, 1-63. Voisey, A. H. (1958). Some comments on the hypothesis of continental drift. In Continental drif-A symposium (pp. 162-17 1). Hobart: University of Tasmania. Wanless, R. K., Stevens, R. D., Lachance, G. R., & Edmonds, C. M. (1968). Age determinations

'

352

David Pratt

and geological studies. K-Ar isotopic ages, report 8. Geological Survey of Canada, Paper 672, pt. A, pp. 140-141. Wezel, F.-C. (1986). The Pacific island arcs: Produced by post-orogenic vertical tectonics? In Wezel, F.-C. (Ed.), The origin of arcs (pp. 529-567). Amsterdam: Elsevier. Wezel, F.-C. (1988). A young Jura-type fold belt within the central Indian Ocean. Bollettino di Oceanologia Teorica ed Applicata, 6, 75-90. Wezel, F.-C. (1992). Global change: shear-dominated geotectonics modulated by rhythmic earth pulsations. In Chatterjee, S., & Hotton, N., I11 (Eds.), New concepts in global tectonics (pp. 421-439). Lubbock, TX: Texas Tech University Press. Wicander, R., & Monroe, J. S. (1999). Essentials of geology (2nd ed.). Belmont, CA: Wadsworth Publishing. Wyllie, P. J. (1 976). The way the earth works. New York: John Wiley & Sons. Yano, T., & Suzuki, Y. (1999). Current hypotheses on global tectonics. New Concepts in Global Tectonics Newsletter, 10, 4. Zoback, M. L., Zoback, M. D., Adams, J., Assumpq50, M., Bell, S., Bergman, E. A., Bliimling, P., Brereton, N. R., Denham, D., Ding, J., Fuchs, K., Gay, N., Gregersen, S., Gupta, H. K., Gvishiani, A., Jacob, K., Klein, R., Knoll, P., Magee, M., Mercier, J. L., Miiller, B. C., Paquin, C., Rajendran, K., Stephansson, O., Suarez, G., Suter, M., Udias, A., Xu, Z. H., & Zhizhin, M. (1989). Global patterns of tectonic stress. Nature, 341, 291-298. Zorin, Yu. A., & Lepina, S. V. (1989). On the formation mechanism of postrift intracontinental sedimentary basins and the thermal conditions of oil and gas generation. Journal of Geodynamics, 11, 13 1-142.

Journal of Scientific Exploration, Vol. 14, No. 3, pp. 353-364, 2000

0892-3310/00
© 2000 Society for Scientific Exploration

The Effect of the "Laying On of Hands" on Transplanted Breast Cancer in Mice
WILLIAM F. BENGSTON
St. Joseph's College, Patchogue, NY 11772; e-mail: wbengston@sjcny.edu

DAVID KRINSLEY
2216 NE Douglas Street, Newport, Oregon 97365

Abstract-After witnessing numerous cases of cancer remission associated with a healer who used "laying on of hands" in New York, one of us (W.B.) "apprenticed" in techniques alleged to reproduce the healing effect. We obtained five experimental mice with mammary adenocarcinoma (code: H2712; host strain: C3H/HeJ; strain of origin: C3H/HeHu), which had a predicted 100% fatality between 14 and 27 days subsequent to injection. Bengston treated these mice for 1 hour per day for 1 month. The tumors developed a "blackened area," then they ulcerated, imploded, and closed, and the mice lived their normal life spans. Control mice sent to another city died within the predicted time frame. Three replications using skeptical volunteers (including D.K.) and laboratories at Queens College and St. Joseph's College produced an overall cure rate of 87.9% in 33 experimental mice. An additional informal test by Krinsley at Arizona State resulted in the same patterns. Histological studies indicated viable cancer cells through all stages of remission. Reinjections of cancer into the mice in remission in Arizona and New York did not take, suggesting a stimulated immunological response to the treatment. Our tentative conclusions: Belief in laying on of hands is not necessary in order to produce the effect; there is a stimulated immune response to treatment, which is reproducible and predictable; and the mice retain an immunity to the same cancer after remission. Future work should involve testing on various diseases and conventional immunological studies of treatment effects on experimental animals.
Keywords: mammary cancer remission - cancer remission - healing - laying on of hands - healer - alternative medicine

Introduction
Researchers who have studied psi phenomena in general and healing in particular have been plagued by the apparent unreliability of the phenomena. Added to this problem are unresolved questions about the role and necessity of belief, and about whether subjects can be taught to produce significant effects. Our research on healing addresses these issues and appears to demonstrate a reliable and potentially efficacious healing effect taught to, and produced by, nonbelieving subjects.


Previous Research
There is a growing body of research into what has been variously termed "anomalous" or "paranormal healing," "healing with intent," "spiritual healing," "Therapeutic Touch," and "laying on of hands," to name but a few. There are by now so many terms used interchangeably that some researchers do not even distinguish among them (Bunnell, 1999). Compilations of controlled studies (Benor, 1992; Murphy, 1992) often cluster previous work by the subject of the intended healing. Benor (1992), for example, discusses healing action on enzymes, cells in the laboratory, fungi/yeasts, bacteria, plants, single-cell organisms, animals, electrodermal activity, and human physical problems among the areas that have been submitted to controlled scientific study. And although some studies in each of these areas have produced significant results, they are nevertheless dogged by unresolved questions of reliability. Benor (1992) reports that psi phenomena in general tend to be demonstrated only in the first series of experiments but not in attempted replications. In healing research, for example, Snel (1980) reported significantly inhibited growth of mouse leukemia cells in tissue culture, but not on attempted replication. Even the thoroughly studied and practiced healer Oskar Estebany showed an inability to reproduce significant effects on trypsin in vitro when he was personally not at ease (Smith, 1972). Healers are reported to be unable to produce sufficiently consistent results, frustrating some researchers into claiming that it is virtually impossible to establish a repeatable experiment in which healing occurs in the same combination more than once (Benor, 1990). And because healing effects do not recur in even a statistically regular fashion, skeptics can argue that claims for healing efficacy probably reflect chance variations rather than responses to healing treatment (Benor, 1990).
The question of the role of belief in producing psi phenomena has also been debated. The well-known sheep-goat effect (Schmeidler, 1945; Schmeidler and Murphy, 1946; Palmer, 1971) does not seem to carry over consistently into healing. Many presume that some sort of faith is a requisite for positive results in healing, and some even argue that skeptical observers may inhibit the effectiveness of the treatment (Benor, 1990). But there is no agreement on the issue. Grad (1961) found that skeptical medical students treating cages of mice produced slower healing than that of the untreated control group. On the other hand, Krieger's study of Therapeutic Touch (1979) found that belief in the effectiveness of healing does not affect its success.
There is also disagreement about whether healing can be taught or whether it is entirely an innate ability. Reviewing some biographies of well-known healers, Benor (1992) notes that the Russian healer Yefim Shubentsov believes it is a physiological process that can be taught to anyone. Dolores Krieger's Therapeutic Touch method claims that to become a healer, a person must have clear intentions, motivation to help, and an ability to understand personal motivation for wanting to heal. On the other hand, both Oskar Estebany and Olga Worrell felt that healing could not be developed by study. Experimentally, though, Nash (1982, 1984) found that subjects not known to be healers could be taught to significantly affect the growth of bacteria in cultures.
Our research addresses these issues of reliability, belief, and "teachability" in healing experimental mice, and by extension, it raises the question of the efficacy of healing. Certainly all healing work in experimental animals must acknowledge the pioneering work of Grad (1961, 1965, 1976), who set the stage and the standards for work in this area. In the first controlled experiments, Grad studied the Hungarian healer Oskar Estebany's ability to accelerate the healing rate of mice with one-half by one-inch wounds. Estebany held the cages of mice twice daily for 15 minutes. The treated group healed significantly more rapidly than the untreated group (Grad, 1961). Grad also induced goiters in mice by feeding them an iodine-deficient diet (Grad, 1976). The thyroid glands of mice treated by a healer twice daily for 15 minutes grew significantly more slowly than those of the control mice. This effect was also obtained when Estebany did not treat the mice directly with his hands, but instead held in his hands cotton cuttings, which were placed in contact with the mice in the cages.
There has been little work done in the area of cancer in live animals. Onetto and Elguin (1966) experimented with inhibiting tumor growth in mice that had been injected subcutaneously with a tumoral suspension. They found the area, weight, and volume of tumor growth in one group of 30 tumorigenic mice was significantly less than that of 30 untreated control mice. Interestingly, a second group of 30 mice was treated in an attempt to increase tumor growth, but these mice did not differ from the control mice. Null (1981) gave 50 healers two mice each to screen for their ability to prolong the lives of mice injected with cancer cells. Only one healer produced total tumor regression in one of his mice, and the other survived longer than predicted. The one successful healer was then asked to twice replicate the healing on 10 mice. The healer was able to extend the average survival of the treated mice to a statistically significant number of days beyond that of the control group (Grad, 1976).

The Present Research
Our research grew out of an attempt to empirically test a New York-based healer. This individual claimed that without the benefit of study or training, he was naturally able to perform psychometry, or token object reading, as well as healings by laying on of hands. Over the course of several years, Bengston watched hundreds of people being treated for conditions covering a wide range of afflictions. Some conditions such as long-term diabetes seemed to respond slowly while others such as cancer appeared to respond almost immediately. Among the most interesting observations was that the entire process did not involve belief of any sort. The person being healed was not asked to believe in anything, and the healer himself did not espouse belief. Truly, the healings could be considered "faithless" on the part of all concerned. Anecdotally, it
appeared that nonbelievers responded more dramatically than believers. Conventional medical tests determined the success or failure of treatment. Over the course of months of questioning by Bengston about the process by which the healer was able to treat others, techniques were developed wherein the healer claimed that complete skeptics could be trained to reproduce healing effects. The techniques did not involve belief of any sort, nor did they include meditation, focused visualization, spiritual discipline, or lifestyle changes. The initial techniques involved a series of routine mental tasks that were not directly intended to produce healing. Subsequent to mastery, these would be followed by laying on of hands. The mental techniques required several weeks of practice to achieve sufficient mastery to move to the laying on of hands techniques. We wanted to test the authenticity of the healings in a rigorously controlled setting that would allow for completely unambiguous results. For obvious reasons, "clinical" work with humans involves less than ideal conditions, so we asked the healer if he would come into a laboratory and attempt to treat laboratory animals. He initially agreed, but just before the start of our experiment, he refused to participate. At that point, Bengston, who had apprenticed the longest and spent the most time learning the techniques, reluctantly became the first experimental healer.

Methods and Data
The First Experiment
Krinsley was a professor at Queens College of the City University of New York. He had arranged for a disinterested professor of biology who was doing conventional cancer research to prepare experimental animals. Her area of expertise was mammary cancer, so she was familiar with mammary adenocarcinoma and obtained from The Jackson Laboratory a "standard" mammary adenocarcinoma (code H2712; host strain C3H/HeJ; strain of origin C3H/HeHu). The normal progression after the mouse is injected is the development of a nonmetastatic, palpable, and visible tumor that grows so large that it crushes the internal organs of the host. The conventional literature reports 100% fatality between 14 and 27 days after injection.
The experimental procedure was planned as follows: Bengston was to place his hands around the outside of a standard laboratory plastic cage containing six mice for 1 hour per day while applying the healing technique, beginning 3 days after injection. At no time were the mice to be directly touched. Six control mice were kept in a separate laboratory in the same building. One experimental mouse died of natural causes before treatment began, so only five mice were actually treated. Our initial hope was that we might get a significant difference in survival between the experimental animals and their controls. Remission was not seriously considered.
Our results were totally beyond expectation. About 10 days into the procedure, the experimental mice began to develop a "blackened area" on their tumors (Figures 1 and 2). At this point, Bengston presumed that the experiment was failing and wanted to call it off. Krinsley convinced him to continue, reasoning that there was nothing to lose. Approximately 1 week later, the blackened areas "ulcerated" as if they had been split open (Figures 3 and 4). In some cases, the ulceration grew extremely large (Figure 5), then appeared to implode (not shown), and the wound closed. The mice then lived their normal life span of approximately 2 years. In the figures, the index card notation "A-3" identifies the mouse, and the day number indicates elapsed time since injection. In Figure 1 (Day 14), the tumor is visible on the left posterior dorsal aspect of the mouse. On Day 22 (Figure 2), the tumor is clearly larger but has developed an encrusted area on its surface (most posterior aspect of the tumor). This is the earliest indication of tumor regression. Days 28, 35, and 38 (Figures 3 through 5) illustrate the next significant stage. The tumor appears to be resorbed internally and remains clear of infection. From this stage on, the tumor regresses completely (not shown), and the mouse lives its normal life span.

Fig. 1. Typical mouse 14 days after injection.
Fig. 2. Twenty-two days after injection.
Fig. 3. Twenty-eight days after injection.
Fig. 4. Thirty-five days after injection.

The control mice presented us with some unique challenges. In the initial stages of developing the experimental procedure, the healer warned that he could not be near or see the control mice, or they, too, would go into remission. Although skeptical, we agreed to keep the control mice in another laboratory. When Bengston became the substitute healer, we relaxed this protocol. After two control mice had died "on schedule" - that is, between 14 and 17 days after injection - Bengston went to see the remaining four. They exhibited normal tumor progression patterns and were obviously in the last stages of the disease. However, after Bengston observed the four control mice in their cage, several days later, they too developed the blackened area, the tumor ulcerated, and the mice went into full remission, although they lagged behind the regularly treated experimental mice in remission rate.

The Second Experiment
The results of our first experiment clearly amazed and confounded us, and we immediately set out to replicate the procedure. Krinsley offered to try the technique, and he solicited a skeptical faculty volunteer from Queens College who had neither belief nor experience with any sort of paranormal phenomena. Bengston approached a half dozen students at St. Joseph's College to act as volunteers and selected the two most skeptical students to serve as healers. The two students also had no previous experience with anomalous healing phenomena, did not believe in the legitimacy of healing, and reported afterward that they believed Bengston was actually conducting a study on student gullibility. Bengston trained the four volunteers for several hours once a week for 6 weeks. Between training sessions, the volunteers were assigned practice mental tasks. Each volunteer was then given one cage with two mice. One experimental mouse died of natural causes 2 days after treatment began, reducing the actual number of experimental mice to seven. There were six control mice in an adjacent laboratory in the same building. All seven experimental mice developed the remission pattern and lived their normal life span. Without our knowledge, and despite warnings to not do so, after two control mice had died, the faculty volunteer at Queens College began daily observations of the remaining four. All four of the remaining control mice then went into remission.

The Third Experiment
This experiment produced the most puzzling results. We moved the study to the Brooklyn campus of St. Joseph's College, where we convinced Carol Hayes, the extremely skeptical chairperson of the biology department, to perform the procedure. She agreed as long as she could pick some volunteers. She selected three undergraduate biology majors; Bengston selected two volunteer healers: one undergraduate sociology major and one child study major. As in the second experiment, none of the participants had any previous paranormal experiences, and all were nonbelievers in the legitimacy of laying on of hands. They were trained by Bengston in the same manner as in the second experiment. In this run, we attempted to solve the problem of control remissions and to find out if every volunteer could individually produce remissions. Thus, each volunteer was given one mouse to treat in the laboratory and one mouse to treat at home. We reasoned that because exposure to a trained individual appeared to produce remissions in previous experiments, each volunteer had to remit his or her "private" mouse at home in order to test whether that individual was effective. It followed from the previous results that if any student could produce remissions, then all five experimental laboratory mice should also remit. There were two control groups. The first was a cage with six mice in an adjacent laboratory in the same building. The second control group was a cage with four untreated mice sent to a laboratory in another city known only to the experimental biologist. The results of this run have frustrated attempts to discern a pattern. All five experimental mice taken home by the students remitted. But in the laboratory, all three of the experimental mice treated by the biology majors died within the expected time frame. Only the sociology and child study majors were able to remit their mouse in the laboratory. Even as the biology majors' experimental mice died, the biology students began to look in on the control mice in the adjacent lab after three had died, and the remaining three control mice remitted. All four of the control mice sent to a distant city died well within the expected 27-day maximum.

The Fourth Experiment
The fourth experiment took place entirely in a laboratory at the Brooklyn campus of St. Joseph's College. As in the third experiment, the mice were prepared and monitored by Carol Hayes, the chairperson of the biology department. In the previous experiments, any mouse that developed a "blackened area" followed our now "classic pattern" to full remission. None of the mice that died developed this blackened area. As such, we were now confident that about 3 weeks after injection, we could spot with certainty which mice would be completely cured and which would die. Thus, we decided to repeat the experiment, this time sacrificing all the mice 38 days after injection. We selected 38 days because at this point, some of the mice would still have large ulcerations and some would have already closed wounds and be fully remitted. Six student volunteers were given a cage with two mice each. One of the students was a biology major who had failed to remit his mouse in the laboratory in the third experiment and wanted to try again; two more students had been volunteers in a previous run and wanted to do it again out of sheer disbelief at the results they had obtained. Bengston chose three new and skeptical volunteers who were not biology majors. Unknown to us, the experimental biologist elected not to inject one mouse, to observe any behavioral changes (there were none); one mouse was given two separate injections. In all, there were 11 viable experimental mice with cancer. Eight mice on site served as the first control group, and four mice were sent to a laboratory in another city to serve as the second control group. Ten of the eleven experimental mice were in various stages of the remission process when they were sacrificed on Day 38 after injection. One experimental mouse never developed the blackened area and died on Day 30 after injection. Although 30 days is technically beyond the predicted 100% fatality within 27 days, we considered this mouse to have not responded to treatment. Seven of eight on-site control mice, which were regularly looked in on by the student volunteers, remitted. All four of the control mice sent to a laboratory in another city died within the expected time frame. After the mice were sacrificed, we sent tissue samples to an independent laboratory for histological analysis. Viable mammary adenocarcinoma cells were present at all stages of remission. Only those mice whose ulcerations were completely closed were free of cancer. A summary of the four experiments is presented in the Table.

TABLE
Summary of Remission Patterns

Experiment                       N    No. of remissions    Remissions (%)
Experiment 1
  Experimental mice              5            5                100.0
  Control mice on site           6            4                 66.7
Experiment 2
  Experimental mice              7            7                100.0
  Control mice on site           6            4                 66.7
Experiment 3
  Experimental mice             10            7                 70.0
  Control mice on site           6            3                 50.0
  Control mice off site          4            0                  0.0
Experiment 4
  Experimental mice             11           10                 90.9
  Control mice on site           8            7                 87.5
  Control mice off site          4            0                  0.0
Overall results
  Experimental mice             33           29                 87.9
  Control mice on site          26           18                 69.2
  Control mice off site          8            0                  0.0
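As an arithmetic check, the pooled rates in the Table can be recomputed directly from the counts reported in the narrative above. The short Python sketch below is an editorial illustration only, not part of the authors' analysis; the per-experiment counts are those stated in the text, while the variable names and grouping are ours.

    # Per-experiment (number of mice, number of remissions), as reported in the text
    experimental     = [(5, 5), (7, 7), (10, 7), (11, 10)]   # experiments 1-4
    controls_onsite  = [(6, 4), (6, 4), (6, 3), (8, 7)]      # experiments 1-4
    controls_offsite = [(4, 0), (4, 0)]                      # experiments 3 and 4 only

    def pooled_rate(groups):
        # Pool the (n, remissions) pairs and return total n, total remissions, and percent remitted.
        n = sum(g[0] for g in groups)
        r = sum(g[1] for g in groups)
        return n, r, round(100.0 * r / n, 1)

    print(pooled_rate(experimental))      # (33, 29, 87.9) - matches the 87.9% quoted in the abstract
    print(pooled_rate(controls_onsite))   # (26, 18, 69.2)
    print(pooled_rate(controls_offsite))  # (8, 0, 0.0)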

Discussion and Conclusions
We offer several preliminary conclusions. First, the treatment was successful in curing mammary adenocarcinoma. Second, it is apparent that stated belief in anomalous phenomena in general, or healing through laying on of hands in particular, is not necessary to produce healing of mammary adenocarcinoma in laboratory mice. None of the experimental healers were believers, though as the experiments progressed, they clearly hoped that their mice would live. Despite the attachment they felt toward their mice, most could be considered at least fairly strong skeptics. We cannot generalize that anyone can produce these remissions, since our sample of volunteers was clearly a nonrepresentative convenience sample. Actually, we have never tested whether believers or people who have allegedly experienced or produced previous anomalous phenomena are also able to produce remissions of this type of tumor. Third, given the peculiar situation that biology majors were unable to produce remissions in the laboratory but were able to do so at home (the third experiment), it is also possible that systematic intellectual activity (these students kept scientific logs) is antagonistic to the production of healing effects. Fourth, it seems likely that there is a stimulated immune response to treatment. There are several reasons to draw this conclusion. The discovery in the fourth experiment that there were viable mammary adenocarcinoma cells during tumor remission is consistent with an immune response (though cutting off of the blood supply to the tumor might produce similar effects).

Effect of the "Laying on of Hands"

Fig. 6. Example of large ulceration.

Some of the ulcerations grew extremely large (Figure 6), yet no mouse ever developed any gross signs of infection, even when the size of the ulceration alone should have been enough to kill it. Finally, and perhaps most persuasively, we were unaware that the experimental biologist at St. Joseph's College had reinjected several remitted mice months after the experiments were over. Without further treatment, these mice were immune to the mammary adenocarcinoma. Finally, we may conclude that we are apparently able to cure mammary adenocarcinoma in experimental mice on demand. The reliability of the procedure has been established (we have also produced remissions at Arizona State University, though the results are not reported here).
Future Research

We find ourselves in a somewhat unique position, because the bane of previous research into anomalous phenomena has often been the lack of predictability of results. Several interesting possibilities emerge. In terms of mammary adenocarcinoma, conventional immunological studies of mice in future experiments might yield data suggestive of the process by which remissions occur. Are there any new antibodies, or an increased production of antibodies, occurring during tumor regression? If so, long-term goals might include attempting to stimulate the identified immunological reaction without the laying on of hands. In short, the results of these healing techniques might provide useful information about how to reproduce the remissions using more conventional therapies. Other types of cancer need to be studied to see whether they also respond to
the healing techniques used here. To date, experimentally we have only treated mammary adenocarcinoma. Finally, because we have established such a degree of reliability, the healing techniques can be used to try to discern what is happening between the healer and the animal. Experiments can be designed to shed light on the mechanisms by which anomalous healing actually occurs. For example, in producing remissions in other species of mammals, we have anecdotally observed that the speed of remission is a function of the size of the animal. Larger animals remit more slowly than smaller ones. Are the different response rates due to metabolism rate, the mass of the animal, or the spreading of a healing energy over a wider area? Would 50 mice simultaneously treated remit at a slower rate than 25? The possibilities for research are almost endless.

Acknowledgments
We gratefully acknowledge the input of Bernard Grad, Eugene Carpenter, Ted and Diane Mann, Joann LaScala, and Daniel J. Benor.

References
Benor, D. (1990). A psychiatrist examines fears of healing. Journal of the Society for Psychical Research, 56, 287-299.
Benor, D. (1992). Healing research - Research in healing (Vol. 1). Munich, Germany: Helix Editions.
Bunnell, T. (1999). The effect of "healing with intent" on pepsin enzyme activity. Journal of Scientific Exploration, 13(2), 139-148.
Grad, B., Cadoret, R. J., & Paul, G. I. (1961). The influence of an unorthodox method of treatment on wound healing in mice. International Journal of Parapsychology, 3, 5-24.
Grad, B. (1965). Some biological effects of laying on of hands: A review of experiments with animals and plants. Journal of the American Society for Psychical Research, 59, 95-127.
Grad, B. (1976). The biological effects of "laying on of hands" on animals and plants: Implications for biology. In Schmeidler, Gertrude (Ed.), Parapsychology: Its relation to physics, biology, psychology and psychiatry (pp. 76-89). Metuchen, NJ: Scarecrow Press.
Krieger, D. (1979). The therapeutic touch: How to use your hands to help or heal. Englewood Cliffs, NJ: Prentice-Hall.
Murphy, M. (1992). The future of the body: Explorations into the further evolution of human nature (Chapter 13). Los Angeles: Jeremy Tarcher.
Nash, C. (1982). Psychokinetic control of bacterial growth. Journal of the Society for Psychical Research, 51, 217-221.
Nash, C. (1984). Test of psychokinetic control of bacterial mutation. Journal of the American Society for Psychical Research, 78, 145-152.
Null, G. (1981). Healers or hustlers? Part IV. Self-Help Update, Spring.
Onetto, B., & Elguin, G. (1966). Psychokinesis in experimental tumorigenesis. Journal of Parapsychology, 30, 220.
Palmer, J. (1971). Scoring in ESP tests as a function of belief: The sheep-goat effect. Journal of the American Society for Psychical Research, 65, 373-408.
Schmeidler, G. (1945). Separating the sheep from the goats. Journal of the American Society for Psychical Research, 39, 47-50.
Schmeidler, G., & Murphy, G. (1946). The influence of belief and disbelief in ESP upon ESP scoring level. Journal of Experimental Psychology, 36, 271-276.
Smith, J. (1972). Paranormal effects on enzyme activity. Human Dimensions, 1, 15-19.
Snel, F. (1980). PK influences on malignant cell growth. Research Letter No. 10, Parapsychology Laboratory, University of Utrecht.

Journal of Scientific Exploration, Vol. 14, No. 3, pp. 365-382, 2000

0892-3310/00
© 2000 Society for Scientific Exploration

The Stability of Assessments of Paranormal Connections in Reincarnation-Type Cases

IAN STEVENSON
Division of Personality Studies, Department of Psychiatric Medicine, University of Virginia, Charlottesville, VA 22908; e-mail: ips6r@virginia.edu

JÜRGEN KEIL
Department of Psychology, University of Tasmania, G.P.O. Box 252-30, Hobart, Tasmania, Australia

Abstract-The phrase "case of the reincarnation type" refers to any child who has information and/or other characteristics that suggest a paranormal connection between himself and a particular person who has died before he was born. By a "paranormal connection," we mean the communication of information without the recognized sensory channels. Initially unintentionally, several cases previously studied by one of us (I.S.) were investigated by the other (J.K.) about 20 years after I.S.'s first investigation. Additional cases were then intentionally reinvestigated to test the stability of the paranormality assessments of 15 cases. All but one of the cases investigated 20 years later received the same or a lower paranormality rating. These results support the view that such case studies, even if carried out 2 decades after the relevant events occurred, do not generate inflated paranormality assessments.
Keywords: paranormal phenomena - reincarnation - memories

Introduction
For the last 40 years, investigators have been examining cases in which a child is said to remember a previous life. For most of the cases that were investigated by one of the authors (I.S.) (Stevenson, 1974) and later by Mills (1989), Pasricha (1990), Haraldsson (1991), and the other author (J.K.) (Keil, 1991), interviews were conducted several years after the most important events had taken place. The "most important events" are nearly always ones that suggest the transmission of information without the usual sensory channels, i.e., paranormal processes. Some cases that seem important examples suggesting paranormal processes did not come to the attention of the above investigators until more than a decade had passed. The question must be asked whether in spite of such long intervals, relevant information remains sufficiently stable, or whether these case studies become significantly distorted, and in particular, whether informants, with the passage of time, tend to emphasize more heavily aspects of a case suggesting paranormal processes.


In an attempt to answer this question, 4 cases in Thailand, 10 in Turkey, and 1 case in Myanmar were investigated after (on average) more than 20 years. The study developed initially from the accidental reinvestigation by J.K. of some cases that had already been studied by I.S. about 2 decades earlier. J.K. only became aware of this after he had more or less completed his investigations. He found that the informants did not seem to be influenced by the earlier investigations, and some of them mentioned only toward the end of the interviews that an American professor had conducted a similar study years earlier. We then decided to reinvestigate some further cases to compare the results of 15 cases.
It could be argued that we should be primarily concerned in this paper with the study of memory over time. However, many aspects of memory, although relevant to the comparison of paranormality assessments, cannot be readily evaluated. A full appraisal of the memories of informants for these cases would require a larger series of cases than we report here. Nevertheless, we will cite, in a later section of this paper, some of the relevant literature on long-term memory.
For cases of the reincarnation type, the paranormality component in the connection between a person who has previously died (the previous personality) and a child (the subject) is the most important aspect. Although for most case studies the strength of this component is not directly assessed, it is almost always assumed that such a component may be present. Otherwise, the case report would not really be relevant to the reincarnation hypothesis. As will be outlined in the Discussion section, an objective rating scale for paranormality in these cases should be developed, but we have not done this yet. The rating scores reported here are based on our more subjective evaluations. By presenting in some detail half of the cases that were compared, we hope that readers can make their own assessments.

Memory Aspects
Although the main feature of this study is the stability of judgments about paranormal components of a case, we cannot isolate this question from the aspects of memory that are inevitably involved when informants report events that happened many years earlier. It must be acknowledged that memories generally diminish over time. Bartlett (1932) also showed that informants may add new material without being aware of these changes. On the other hand, Prince (1918, 1919) reported two instances in which accounts of experiences written 15 and 20 years apart differed in only one detail. Similarly, Parsons (1962) found that a description of a building written 6 years after it was seen was correct in 18 out of 21 details. Two studies of reports of experiences near death are directly pertinent to the question of the embellishment of reports with the passage of time. Greyson (1983) developed a scale for such experiences that indicates by increasing

Paranormality in Reincarnation Cases

367

scores the complexity-i.e., the amount of detail-in a report. He found no significant correlation between the score on the scale, indicating the complexity of the report, and the elapsed time between the experience and its report. Using data from a different series of near-death experiences, Alvarado and Zingrone (1 997198) also found no significant correlation between complexity of the report of an experience and the elapsed time between the experience and its report. Vehicular accidents are often regarded as examples that show that the memories of several witnesses may differ substantially. However, vehicular accidents contain sequences of events that happen quickly and then end. Although our case studies may also include some dramatic sequences, such as when someone died in an accident or was killed intentionally, the events that were reported to us were already relatively settled and fixed in the minds of our informants before the first investigations were carried out. The memory aspects of our case studies are similar to what are termed "autobiographical memories." Baddeley (1998) reviewed the evidence for the accuracy of autobiographical memories and concluded that normal people can recall earlier events in their lives reasonably accurately over a long interval. Distortions and omissions can occur due to emotional factors and the perceived unpleasantness of the events. Disagreements about the recollections of accidentsapart from distortions due to personal involvement-are probably often due to the relatively large number of relevant details, which are perceived during a short period. Cases of the reincarnation type are not always regarded by the families involved as definite manifestations of rebirths of previous personalities. In this sense, they differ from the happenings associated with such definite events as marriages and funerals. Nevertheless, events associated with the assumed rebirth of a child are, in many cases, discussed by family members, friends, and more distant relatives quite frequently. An investigation of events that took place, say, 15 years ago does not usually involve obtaining information from people who have not talked about these events since they happened. Discussions of the events among family members will tend to enhance retention and accessibility of the informants' memories for them.

Method
For the first cases to be compared, J.K. had not seen I.S.'s notes before he conducted his own investigation. In the second phase of the project, I.S. furnished J.K. with the names and addresses of subjects whose cases he had investigated many years earlier. J.K. then went to the informants for these cases and, conducting new interviews, made records of what the informants then remembered about the cases. J.K. did not read I.S.'s notes before he made his investigations. He had perhaps read reports of some of I.S.'s cases that had been published, but he had retained in his conscious memory no details of these that influenced his investigations. When J.K. had completed his inquiries, we compared the two accounts for the following differences: loss of essential details (in the J.K. reports compared to the I.S. ones), distorted accounts to J.K. of what were obviously the same events described earlier to I.S., and mention to J.K. of important details not told to I.S. We also noted reports of events to J.K. that differed in no essential detail from reports of the same events to I.S.
Because we were primarily interested in the stability of information relevant to the assessment of paranormal processes, J.K. concentrated on interviews. He did not extend his inquiries, as I.S. often did, to the study of relevant documents, such as hospital records and postmortem reports. As J.K. had learned in connection with other cases in Turkey, such records are nearly always discarded within a decade. In Thailand, termites frequently produce the same outcome.
Because of the relatively small number of cases and relatively large number of uncontrolled variables, we did not attempt to compare specific memory aspects. We tried to keep them in mind, however, when for each investigation we rated the strength of the paranormality component on a scale of 1 to 10. For these ratings, we assigned the descriptive values listed below to coordinate I.S.'s and J.K.'s assessments and to provide the reader with an illustration of how we judged the strength of each case. (However, comparisons between the earlier and later investigations could also be carried out with a different set of descriptive values.)
A rating of 1 means no suggestion of paranormal processes.
A rating of 2 to 3 means there are indications that a paranormal component may be involved, but the probability is equally high that the apparently paranormal features could be due to chance or other normal processes.
A rating of 4 to 7 indicates that paranormal processes increasingly outweigh alternative explanations without reaching the equivalent of the .05 significance level.
A rating of 8 means the presence of a paranormal component is suggested as being equivalent to the .05 significance level.
A rating of 9 is similar to a rating of 8, but at the .01 significance level.
A rating of 10 is similar to 9, but at the .001 significance level.
In mentioning figures of probability, we do not mean to minimize the subjectivity of our judgments about the element of paranormality in the cases. Moreover, it is not possible to list, in a kind of hierarchical order, conditions that would help to exclude normal communications. Nevertheless, the following conditions (taken from Keil, 1991) are likely to strengthen a claim for the presence of paranormal aspects. In this list and elsewhere, the letter S stands for the subject, PP for the previous personality, and PL for the (claimed) previous life.
1. The S and the S's family do not know and have no contact with the PP's family until the S has made definite and potentially verifiable verbal statements.


2. The S makes numerous unambiguous statements, which are relatively independent of each other, about a PP and a PL that can be verified.
3. The S provides information about something not known to anybody - except, in the past, to the PP - that can be verified, such as some item hidden by the PP and recovered by the S.
4. Statements made by the S - preferably before the PP's family is involved - are noted by more than one member of the S's family and preferably by other people not belonging to the family.
5. Without any opportunity to learn or imitate, the S is able to do something that corresponds to some activity the PP was able to perform. For example, the S is able to speak a language or dialect that is not spoken by people with whom the S had contact.
6. Similar to (5), but the S has some opportunity to learn and imitate. Nevertheless, the S seems far more proficient than would normally be expected.
7. The S has birthmarks and/or malformations that correspond to injuries or other peculiarities of the PP. (Several birthmarks on the S that correspond to specific injuries sustained by the PP are more impressive than a single birthmark that corresponds imprecisely to a large injury on the PP.)
8. Unusual behavior, such as phobias or preferences of the S manifested at an early age, that does not make sense in terms of the S's experiences in his or her life, but that corresponds to some important event in the PP's life or feature of the PP's character.
9. Although more difficult to evaluate, the intensity and spontaneity with which statements are made and emotions are expressed also have a bearing on the assessment.
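The rating bands defined earlier in this section can be restated compactly. The following Python sketch is an editorial illustration only, not part of the authors' procedure; the band descriptions paraphrase the definitions given above, and the function name is ours.

    def paranormality_band(rating):
        # Map a 1-10 paranormality rating to the descriptive band used in this paper.
        if rating == 1:
            return "no suggestion of paranormal processes"
        if 2 <= rating <= 3:
            return "paranormal component possible; chance or normal processes equally likely"
        if 4 <= rating <= 7:
            return "paranormal processes increasingly outweigh alternatives, short of the .05 level"
        if rating == 8:
            return "paranormal component suggested, roughly equivalent to the .05 significance level"
        if rating == 9:
            return "as for 8, but at the .01 significance level"
        if rating == 10:
            return "as for 9, but at the .001 significance level"
        raise ValueError("rating must be an integer from 1 to 10")

    # Example: the Nürsel Karaali case was rated 8 (I.S.) and 7 (J.K.)
    print(paranormality_band(8))
    print(paranormality_band(7))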

This list could be extended. We can also integrate its items, to some extent, if we remember that there are two main criteria in operation on the basis of which we can assess paranormality. First, we need to appraise the number and the complexity of relatively independent statements and other characteristics that correspond to verifiable statements, features, and facts associated with the PP; a small number of statements based on fantasies could, by chance, agree with some aspects of the PP's life. In addition, we need to evaluate the barriers-geographical and social-that may have made it impossible or at least unlikely that seemingly paranormal connections between the PP and the S occurred by normal means. For the relatively subjective evaluation of the reports, we had to take account of the following difficulties:

1. For several cases, J.K. could not interview the same informants who had provided information to I.S. Some of I.S.'s informants had died, some had moved and could not be contacted, and some could no longer remember anything related to the relevant events.


2. Additional information recorded by J.K. (compared with statements recorded by I.S.) may be due to (a) embellishments; (b) differences in the time available for an interview (the informants had the information but did not refer to it); (c) different informants; and (d) the fact that I.S. investigated the case when some children were still quite young, so that a child may have made additional statements after I.S.'s visit, and these may have included paranormal information.
A comparison of the two versions of the normal factual aspects of a case may provide some reassurance when similar sets of data were obtained. However, minor discrepancies and/or omissions among these data are generally relatively unimportant unless they have a bearing on the paranormality assessment. For instance, if during J.K.'s interview, the informants could not remember the names of some relatives that were recorded by I.S., we did not regard this as a serious sign of instability unless the names were relevant to statements or events that suggested some paranormal process. For most cases, we agreed on the paranormality ratings; if we disagreed - usually by 1 point on the scale - we discussed the reports until we reached agreement.

Principal Features of 15 Cases Selected for This Comparison
Of the 15 cases we compared, 10 were from Turkey, 4 from Thailand, and 1 from Myanmar (formerly Burma). Five of the subjects were female and 10 were male. There were no cases of the sex-change type in the group, i.e., cases in which the subject claimed to remember the life of a person of the opposite sex. The I.S. cases were first investigated between 1966 and 1984 (median: 1971). The J.K. cases were first investigated between 1990 and 1996 (median: 1995). In both groups, the "first investigation" was typically followed by further interviews in later years when we wished to interview additional informants or talk again with an earlier informant. The mean interval between the first investigations of I.S. and J.K. was 22 years, and the median interval was 23 years. Particularly with I.S. and sometimes with J.K., the investigations continued for several years or even longer after the initial inquiries began. For example, I.S. began investigating the case of Bongkuch Promsin in 1966, but he had further interviews in the 1960s and 1970s, and he last met Bongkuch and his family in 1980. J.K. began his investigation of this case in 1995, only 15 years after I.S.'s last contact with the family. There was thus a much shorter interval for many cases between I.S.'s last contact with the case and J.K.'s first one. It remains true, however, that in most cases, I.S. obtained the substantial information about the case when he first met the families concerned.

Paranormality in Reincarnation Cases
TABLE
Paranormality Scale Ratings

Case                                   Rating of I.S. investigation    Rating of J.K. investigation
Nürsel Karaali (Turkey)
Cengiz Elma (Turkey)
Zeynep Emel Celik (Turkey)
Dellül Beyaz (Turkey)
Semihe Atasoy (Turkey)
Necati Caylak (Turkey)
Nasir Toksoz (Turkey)
Cemal Kurt (Turkey)
Faris Yuyucuer (Turkey)
Cemil Fahrici (Turkey)
Ratana Wongsombat (Thailand)
Bongkuch Promsin (Thailand)
Anurak Sithipan (Thailand)
Chanai Choomalaiwong (Thailand)
Myint Myint Zaw (Myanmar)
Mean:

Note: I.S. = Ian Stevenson; J.K. = Jürgen Keil.


Results
Summary
The Table lists the 15 cases we compared and gives the ratings on the paranormality scale for the data of the two investigations. The mean ratings (rounded) on the scale were 6.5 for the data of the I.S. investigation and 5.5 for those of the J.K. investigation. In only one case did we assign a higher paranormality rating for the data of the J.K. investigation than for those of the I.S. investigation. This was the case of Anurak Sithipan. The informants mentioned to J.K. an incident in which Anurak showed paranormal knowledge of an object that had belonged to the PP in the case, his older deceased brother. They had not mentioned this item to I.S.
Eight Case Reports¹

¹ Ian Stevenson has published detailed reports of his investigations of most of the cases reviewed here. For the benefit of readers wishing details of what he learned about these cases, we have given the references to his reports.

The Nürsel Karaali Case (Turkey). The subject, Nürsel, was regarded as the rebirth of the previous personality, Vesile Görür. In 1970, I.S. obtained some preliminary information about this case from the PP's younger brother Cemil Görür. In 1973, one of I.S.'s collaborators interviewed the PP's father, Cabir Görür, and in 1975, the S's father, Süleyman Karaali. In 1975, I.S. interviewed a musician, Hasan Eyup Demirel, as well as the S's mother, Vesile Karaali, and Nürsel herself. In 1977, I.S. interviewed the PP's father, Cabir Görür. The PP's mother was also present during this interview and volunteered some information.
The PP died at the age of 16 in 1962. The S was born in a different village in 1963. The two villages were 18 kilometers apart. The two families were not related and the parents of the S and the PP had no contacts. The S's grandfather, however, did know the PP's father from their military service. When the S became able to speak, she told her father that he was not her father and that her father was in another village, which she named. She stated some names of other members of this family and threatened to run away if she were not taken to them.
In 1990, J.K. interviewed the S, her husband, and her mother, as well as two siblings of the PP. I.S. and his Turkish collaborator obtained more details than J.K., but major events were presented in a very similar way in the 1970s and in 1990. Once contact was established between the families from the information the S provided, she continued to visit the PP's relatives. In 1990, she remembered that as a young child, she knew the names of the PP's relatives. (This had been included in I.S.'s notes.) In the meantime, the S had seen the PP's relatives so often that she knew their names because of these contacts. Both I.S. and J.K. were told that at birth, the S's fingers were colored in agreement with the henna color that was applied to the PP's hands after the PP had died. I.S. was told that in about 1967 (at the age of 4½), the S recognized (in her village) some musicians who had played at the PP's funeral. This was not mentioned during the later study of J.K. In 1973 as well as 1990, I.S. and J.K. learned that the S had recognized the PP's old house. In 1977, I.S. was told that the S could correctly sort out the PP's clothes when mixed with other clothes. J.K. learned of this without substantial change in 1990. In 1990, J.K. was told that the S distinguished six of the PP's clothes from 40 other clothes not belonging to the PP but mixed with those that did. I.S. in 1977 had not been given figures such as these but was told only that the "S could distinguish perfectly the clothes of the PP from those of the other members of the family." Was the addition of numbers to the report an embellishment? We cannot say, but the addition, if such it is, does not alter the rating on the paranormality scale.
Nürsel's correct statements as a young child before contact was established with the PP's relatives cannot be easily explained as normal information and fairly strongly suggest a paranormal connection between her and the PP. The assessments in this respect are similar for the earlier and later investigations. The earlier investigations managed to find more people and details that confirmed the suggested paranormal connections. In 1990, some details were
added; these may include some embellishments but may also be factual additions. One detail - the S's recognitions of the musicians from the PP's village - was lost. Generally, these differences did not have a bearing on the assessment of the paranormality component. We rated I.S.'s report as 8 and J.K.'s report as 7.

The Dellül Beyaz Case (Turkey). The subject, Dellül, was regarded as the rebirth of the previous personality, Zehide Kose (Stevenson, 1997). The S was born in 1970, within 1 month of the PP's death. She had a birthmark on the top of her head. When the S was born, there was a suggestion, based on a dream, that she might be a rebirth case, but there were no specific indications of this. The S's mother told a collaborator of I.S. in 1975 that she had a dream connecting the S with the PP's village. However, during a second interview later in the same year, the S's mother denied that this dream referred to the PP's village. There was no other information in the dream that could link the S to the PP.
In 1975, I.S. interviewed the S, the S's mother, the S's grandmother, as well as the PP's husband, the PP's oldest daughter and her husband, and a nephew of the PP's husband. In 1983, a collaborator of I.S. returned once more to the S with some questions prepared by I.S. In 1991 and 1992, J.K. interviewed the S and the S's mother. The PP's brother Mithat was also present in 1991. During a further visit in 1994, J.K. interviewed the PP's daughter Nuriye Sen. Most of the information was collected in 1975 by I.S. and in 1991 by J.K.
I.S. did not receive any information that the S's relatives had any connections with the PP's relatives. J.K. heard later that the S's relatives had not met the PP, but they did know some of the PP's relatives. There was no contact between the two immediate families when the S was born. Both I.S. and J.K. were told that the S had a birthmark on her head. They also both learned that the S had told her parents how she (the PP) died by falling through an opening from a flat roof while hanging out some washing. (The PP had died of "head injury," a fact relevant to the S's birthmark.) Similarly, informants told both of them that the S had correctly mentioned the name of the PP and the names of two or three of the PP's relatives and the name of the PP's village. The S's father probably knew the PP's village, but the S's mother claimed that she did not know the name of this village, which is probably correct.
The S conveyed most of the information related to the PP when she was less than 2 years old. She had started to talk about a previous life as soon as she had learned to talk. The S's statements referring to the PP were correct, and a nephew of the PP, who was visiting a neighbor of the S's family, heard about her statements. This nephew, I.S. was told, arranged the first meeting between the two families when the S was about 2 years old. J.K. did not hear these details, but he was told that the PP's relatives heard that the S was talking about a previous life in
a way that suggested that she was referring to the PP. Both I.S. and J.K. were told that the S recognized various people and objects when she visited the PP's home for the first time. The details vary slightly, but J.K. could not interview the same relatives of the PP who had met I.S. There were no signs of embellishment in the more recent accounts of this case. I.S. obtained medical records, which confirmed the PP's death as informants had described it to us, but the informants did not clearly link the S's birthmark with the PP's fatal fall. There was, however, agreement that the PP died from head injuries. Although some connections existed prior to the S's birth, it seems unlikely that the information conveyed by the S prior to the first meeting of the two families could have come about by normal means. The various recognitions at a later stage carry some weight but are more doubtful indicators of paranormal processes. We rated the paranormality component as 8 for I.S.'s accounts and 7 for J.K.'s.

The Semihe Atasoy Case (Turkey). The subject, Semihe, who was born in 1963, was regarded as the rebirth of the PP, Nesime Dogruel, who died in 1960 at the age of 35 (Stevenson, 1997). The two families are not related, but they were acquainted. They did not meet on a regular basis but occasionally exchanged visits.

The PP's husband had shot and killed her. I.S. was told that the S's parents did not know this and that they only became aware of the circumstances of the PP's death when the S started to talk, at the age of 2. This means that for a period of approximately 5 years, the S's parents apparently were not aware that the PP had been killed by the PP's husband. Even if the S's parents met some other members of the PP's family only once or twice a year, it is surprising that they were not aware of the circumstances of the PP's death.

Both I.S. and J.K. heard that the S had two birthmarks, which agree with gunshot wounds of entry and exit on the PP. In fact, the PP was shot four times, and a further birthmark on the S's elbow (mentioned only to I.S.) may correspond to the wound caused by a bullet. With the help of an autopsy report (which only became available in 1977), I.S. carefully reconstructed the entry and exit marks on the PP's body. The entry positions of two bullets and the exit position of one bullet seem to be in fairly good agreement with the S's birthmarks, but because the PP was hit by four bullets, the agreement may be due to chance. If one shot fired at the PP is regarded as a link that accounts for the S's birthmarks, the question must also be asked why only one bullet, and not all four, resulted in corresponding birthmarks. Based on similar inconsistencies in connection with other cases, I.S. (1997) suggested that events prior to death, such as loss of consciousness by the PP after the first wounding, could explain why only some of the PP's wounds correspond to birthmarks on the S. Nevertheless, the limitations in the correspondence between the PP's wounds and the S's birthmarks must be regarded as a negative
feature when the overall probability of a paranormal component for this case is estimated.

I.S. visited this case for the first time in 1977. He interviewed the S and the S's parents. He also interviewed the PP's brother Salih. In 1994, J.K. interviewed the S and her father. In 1996, he interviewed the S's mother, who had not been present in 1994. The PP's brother Salih had died in 1993. The S still had contact with one of the PP's daughters, but this relationship by then seemed fairly independent of the S's connection with a previous life. Nevertheless, the S told J.K. that she still had direct memories of being the PP. J.K. could not find any relatives or friends of the PP who might have been able to provide relevant information about the PP's life.

Both I.S. and J.K. heard that the PP was shot when the PP's husband returned home drunk. Only J.K. was told that the PP had questioned the PP's husband about his late arrival home. I.S. was told that the PP was shot twice; only one shot was mentioned to J.K. Both I.S. and J.K. heard that the S had two birthmarks in agreement with the way the PP was shot. When I.S. visited the S in 1977, the birthmarks were still faintly visible. They were no longer visible in 1994. However, there was no doubt in 1994 (in the minds of the informants) that the S had had two birthmarks (corresponding to the entry and exit wounds of a bullet) when the S was young. I.S. heard of another birthmark on the S's arm, which was not mentioned to J.K.

Both I.S. and J.K. were told that the S did not visit any of the PP's relatives until after the S had started to talk about a previous life. I.S. was told that the S was 10 when she met the PP's relatives for the first time. The account given to J.K. in 1994 was less definite and indirectly suggested that the S might have met the PP's relatives at a younger age. In 1996, the S's mother also said that the S was 10 when this first meeting took place. (J.K. simply asked whether she could remember how old the S was when the S met the PP's relatives for the first time. Ten as a possible age was never mentioned by J.K.) Both I.S. and J.K. were told that the two families are unrelated, that they met only occasionally, and that the connection between the PP and the S was discovered only when the S started to talk about the PP. The reports, mainly obtained 17 years apart, are in good agreement.

For this case, it is difficult to estimate a paranormality rating on our scale. The birthmarks do not correspond exactly to only one bullet entry and exit. Although the S's parents did not regard the S as a rebirth case until the S started to talk about a previous life, the two families had had some contacts. The news of the PP's murder may have reached the S even if the S's parents were not aware of it. On the other hand, the S had referred to these events at a very young age. We estimate that a rating of 5 is appropriate for both reports.

The Necati Caylak Case (Turkey). The subject, Necati, was regarded as the rebirth of the PP, Abdülkerim Hadduroglu (Stevenson, 1980). The two families, who lived 8 kilometers apart in different villages, are not related and did
not know each other until the S started to talk about a previous life. The PP died in 1963 in a car accident. The S was born about 1 month later, according to the information that I.S. obtained. In 1995, J.K. was told that the S was born in 1965. In 1995, when J.K. studied the case, the S was away, working in Saudi Arabia. His relatives (J.K.'s informants) probably only estimated his year of birth. In Turkey, many families do not attach much importance to birthdays, and a person's age is not regarded as important information. I.S. met the S for the first time in 1967, and this meeting confirmed that the S was born around 1963, not in 1965.

I.S. visited relevant informants in 1967, 1971, and twice in 1973. He interviewed the S, the S's parents, and two of the S's older brothers, as well as the PP's wife, the PP's father, the PP's stepmother, two of the PP's sons, five friends and neighbors of the PP (some of them distantly related to the PP), and the driver of the car in which the PP died. In 1995, J.K. interviewed the S's mother and two brothers of the S, Ekrem and Mehmet. Ekrem, who provided most of the information, had also been interviewed by I.S. As mentioned, the S himself was working in Saudi Arabia. In addition, J.K. interviewed the PP's youngest son (also an informant during I.S.'s visits) and the PP's sister Kadife.

Both I.S. and J.K. were told that the PP was killed in a car accident at a bridge and that the S talked about this accident, giving correct details, when he was less than 3 years old. Both heard that the S was afraid of the bridge and refused to go near it. When the S came to this bridge for the first time, he also mentioned a number of names, including the PP's first name. I.S. heard more details than J.K., which generally strengthened the paranormality hypothesis of this case. However, I.S. also became aware of several inconsistencies that could only be partly resolved through further investigations. Apparently, the S mentioned a wrong family name for the PP. The wrong name may have had some relevance but could not be directly associated with the PP. J.K. was told only that the S mentioned the names of the PP's father, mother, and wife.

The two accounts agree with each other, but I.S. obtained many more details supporting a paranormality component. The 1995 report contained no embellishments. The informants failed to mention to J.K. that the S had got the family name of the PP wrong, but most Turks would be satisfied with the S's correct statement of the names of members of the PP's family. In 1995, J.K. recorded a different name for the driver of the car in which the PP died. It is not clear whether J.K.'s informants referred to an alternative name (a distinct possibility in Turkey), whether the same person was remembered as the driver but referred to by a wrong name, or whether, in 1995, a different person was regarded as the driver. This difference between the two reports has no bearing on the assessment of the paranormality component.

The evidence for this component is fairly strong in I.S.'s report because of the many details and because the two families did not know each other. It is likely, though, that some of the details related to the accident were generally
known in both villages. Nevertheless, a paranormality rating of 7.5 for I.S.'s report and 5 for J.K.'s report seems appropriate.

The Cemil Fahrici Case (Turkey). The subject, Cemil Fahrici, was regarded as the rebirth of the PP, Cemil Hayik (Stevenson, 1997). The PP was a well-known outlaw in the Samandağ, Hatay, region of Turkey. The PP died in 1935. The S was born about 2 or 3 days after the PP's death. The S's verbal account of the PP's life and death is interesting and may contain paranormal elements, but because these events were publicly known, the S may have heard about the PP's life by normal means. The late investigation of this case makes a more detailed assessment of the verbal statements with respect to the paranormality hypothesis virtually impossible. When I.S. saw the S for the first time, in 1966, the S was already more than 30 years old. J.K. saw the S nearly three decades later, in 1994.

Nevertheless, the case has some noteworthy features that suggest some degree of paranormality. The S has two birthmarks on his head that correspond to the entry and exit wounds of a bullet. These marks appear to be in agreement with what is known about the death of the PP. (The PP shot himself by firing a bullet through his head when he could not escape from the police.) As a child, the S rejected his own name and indicated that he wanted to be called Cemil; the S's parents eventually agreed. The S was afraid of soldiers and policemen and was hostile toward them. As a young child, he pretended to shoot them with a stick. This behavior could also have been encouraged by some general resentment toward soldiers and policemen, which was not uncommon at that time in the Hatay region of Turkey.

Of the 10 informants interviewed by I.S., J.K. could contact only three. Most, and perhaps all, of the other informants had died in the meantime. One of these three, the S's sister, had recently lost her son and was too disturbed by this to answer questions. J.K. was able to interview the S and the PP's younger sister. (I.S. had also met this sister of the PP.) She mentioned a few details to J.K. that I.S. had not recorded. Generally, J.K. did not hear as many details as I.S., but J.K. had only two informants. None of the additional details recorded by I.S. or J.K. had any bearing on the paranormality hypothesis. The S's birthmarks, the S's desire as a very young child to be called "Cemil," and the S's reaction to policemen and soldiers were still remembered when J.K. visited the S and the PP's sister in 1994. It seems fair to conclude that the basis for the paranormality assessment did not change after 17 years.

The S's birthmarks had faded to some extent, which frequently happens, and were less clearly visible when J.K. investigated the case. For this reason, it could be argued that J.K.'s rating should be marginally lower. We agree that if any change is contemplated, it should be in the direction of a slightly lower rating for the more recent report. On the paranormality scale, we rated this case, mainly on the basis of the S's birthmarks, as 7 for the records of I.S. and 6 for those of J.K.
The Ratana Wongsombat Case (Thailand). The subject, Ratana, was regarded as the reincarnation of the PP, Kim Lan (Stevenson, 1983). Kim Lan died in September 1962, at the age of 68. The S was born in May 1964. She started to talk at a very early age, and her apparently accurate memories of some aspects of the PP's life were reported in a Bangkok newspaper in 1967, when the S was 3 years old. I.S. mainly studied this case in 1969, but he saw Ratana again during six further visits to Thailand, the last of them in 1980. J.K. studied this case in 1995.

The S was raised (from soon after her birth) by her grandmother and step-grandfather. The latter was particularly interested in Ratana's statements about a previous life. The S's family did not know the PP at all, but the S's step-grandfather knew a few people who had previously had some contact with the PP when the PP visited a particular Wat (Buddhist temple) in Bangkok. However, the S's step-grandfather was initially not aware of this connection.

I.S. interviewed 13 people, including the S, the PP's daughter, Anan, and a nun from the Wat in Bangkok. J.K. also met another nun and a monk who had had some limited contact with the PP but who had not been interviewed by I.S. The S's grandmother and the S's step-grandfather had died by 1995. It is likely that some of the other informants interviewed by I.S. had also died by then. Apart from the S, J.K.'s four other informants were between 76 and 84 years old.

The information provided by the S when she was a young child supports the paranormality hypothesis, mainly because her family did not know the PP or the PP's relatives and because the S gave many correct details, including the PP's name, when the S was only 2 years old. In 1995, the S herself had no direct memories of the life of the PP but remembered various details she had later heard from others, including some details that she had mentioned at a very young age. This is not at all unusual in these cases.

There are no significant disagreements between the data collected by I.S. and J.K., but between one third and one half of the details were lost when J.K. studied this case. J.K.'s notes do not suggest any additional details that could be regarded as embellishments. Although a reduction in the reported details might be expected because of the advanced age of four of J.K.'s five informants, the paranormality assessment based on the more recent interviews must be rated somewhat lower. On the scale, we rated I.S.'s account as 8 and J.K.'s account as 7.

The Anurak Sithipan Case (Thailand). In some areas of Thailand, as well as in Myanmar and parts of India, bodies are sometimes marked, usually shortly after death but occasionally also before a person is pronounced dead, in order to enable the relatives to recognize a child with a similar birthmark as being the deceased reborn. Although this custom is generally known in areas where
such "experimental birthmarks" (Stevenson, 1997) agree by chance with the original marks. The present case provides an example of this practice. The subject, Anurak, was regarded as the reincarnation of his older brother Chachewan, who had drowned about 3 years before Anurak was born (Stevenson, 1997). Chachewan's body had been marked on the right elbow with charcoal. One informant said the mark was made with ink. There is good agreement between the notes obtained by I.S. in 1977 and those obtained by J.K. in 1994. The two reports agree that the PP's arm was marked near his elbow and that the S was born with a birthmark on the same elbow close to the site where the PP had been marked. On both occasions, the S's relatives claimed that other members of the S's family had no birthmarks like that of the S. According to I.S.'s examination, this is not quite correct, and we therefore have an example of increased emphasis on the birthmarks of the S and a tendency to ignore smaller birthmarks on other siblings. However, these errors were similar on both occasions. The informants for I.S. spontaneously mentioned that the S's birthmark was not exactly in the same position as the mark made on the PP. J.K. was not told of this discrepancy and this could be seen as an unconscious tendency to support the interpretation of reincarnation. On the other hand, after 17 years, the S's parents may have forgotten the small difference in location or may have thought that it was of no further importance. The two reports agree that the S spontaneously searched for the PP's Boy Scout uniform. The more recent statement about this detail provides a somewhat more elaborate framework but does not really add any information that would increase the strength of the paranormality component. There is also agreement about the S's familiarity with the PP's friend who came for a date with the S's sister. The earlier report included more details that could easily have been forgotten after 17 years. The more recent report includes one more incident reported by the S's parents: The S found a special spoon, which the PP had kept on a high shelf in a generally inaccessible place. This incident supports the paranormality component, and it could be argued that in some way, this incident was later invented to provide more support for the reincarnation hypothesis. However, if this incident is considered, together with other statements made by our informants, it seems more likely that the S really behaved in the way his parents reported, and that for some reason, this incident was forgotten or not mentioned when I.S. visited 17 years earlier. Although there are some differences in the records with respect to those events that have a bearing on the paranormality component, we do not regard them as significant. The strength of the paranormality component is broadly similar for the 1977 and 1994 investigations. We agreed that for this case, the paranormality rating for the more recent report should be at least as high as that for the earlier one and possibly higher. We agreed on a rating of 5 for I.S.'s report and a rating of 5.5 for J.K.'s report.

The Bongkuch Promsin Case (Thailand). Bongkuch, the subject, was regarded as the rebirth of the PP, Chamrat, who died in 1954 at the age of 18 (Stevenson, 1983). The PP had been attacked from behind, stabbed, and killed. The two families were not related and did not know each other. The PP's relatives had never visited the S's village. The S's father had traveled to the small town where the PP lived. He knew some people there, but not the PP or the PP's relatives, and after the PP was killed, the S's father did not hear anything about the murder. Contact between the two families was established only after the S had started to talk and had made a number of statements about the PP. The PP's relatives then learned about these statements.

I.S. carried out a very detailed investigation (mainly in 1966) and interviewed 17 people, including the S, the S's parents, two sisters of the S, the PP's father, the PP's sister, the PP's stepbrother, and the PP's girlfriend, as well as three police officers associated with the investigation of the PP's murder. When J.K. investigated this case in 1995, the PP's parents and the S's father had died. J.K. interviewed the PP's half-brother Muan, the younger brother of the PP's mother, the S, the S's mother, and the S's sister Chorthip.

I.S. had obtained a much more detailed record of the events, but both he and J.K. found that the first contact phase between the two families suggested a paranormal component. The S had made a number of statements. He mentioned the PP's name, possessions of the PP such as a bicycle and a gold chain, and how the PP had been killed. The PP's relatives confirmed these statements during their first meeting with the S. They conducted a "test" by presenting the S with two bicycles (according to I.S.'s notes) or three bicycles (according to J.K.'s notes), from which the S correctly selected the PP's bicycle. The increase from two bicycles to three is most likely an embellishment and not a simple error. Both I.S. and J.K. heard that the S used Laotian words not used in the S's family and preferred sticky rice (a distinctly different rice preparation), in agreement with the PP's background, which was Laotian rather than Thai.

I.S. recorded additional details relevant to the paranormality assessment of this case. For example, his informants told him that the S remembered the names of eight people from the previous life and that he had referred to the PP's watch, gold ring, and knife, and to the way the PP was dressed when he was killed. There was some disagreement among the three police officers about whether the S's statement that the PP wore shorts was correct. There is also less certainty about whether such statements as the names of the eight people were made before the S could have heard some of them. Nevertheless, some of the information obtained only by I.S. strengthens the paranormality assessment.

J.K. was told that the S had a birthmark on his neck, apparently in agreement with the way the PP was stabbed. It is unlikely that this is a later invention without any factual basis. On the other hand, I.S. had inquired about birthmarks and had not received any information that the S had any. It is possible that the S's birthmark had faded (assuming that the S indeed had a birthmark) and that this was conveyed to I.S. in such a way that I.S. assumed
that the S never had one. Nevertheless, the later statement about the birthmark strengthens the paranormality assessment of J.K.'s report. However, based on the larger number of details recorded by I.S. that support the paranormality component, we rated the I.S. report as 8 and the J.K. report as 7.5.

Discussion
The results show no evidence that information obtained about these cases produces higher paranormality assessments over time. Some new details were added, but generally, more details were lost. Embellishments may be recognized if the paranormal components of a particular event are more