Concerning a technology that could help avoid another Columbia disaster
Paul A. LaViolette, Ph.D.
The Starburst Foundation
starcode@aol.com
March 2003
One technology that could prevent a Columbia-type hazard from happening in the future
would be to apply a high voltage charge to the Space Shuttle hull during reentry, in particular
to the leading edge of its wing. The ion sheath so formed would create a buffer zone around
the craft, ionizing, repelling and deflecting oncoming air molecules and thereby preventing
them from directly impacting and heating the hull. The technology is not new; it was
researched decades ago, as described below. This electrification technology would provide
a second line of defense to the heating problem, supplementing the refractory tile layer that is
currently used.
Prior history of development of the air frame electrification technology:
Airframe electrification was first suggested by Thomas Townsend Brown who did
extensive research on it during the 1940's and 1950's. Referring to Brown's work of placing a
high voltage charge on the leading edge of the air frame, Dr. Mason Rose wrote in 1952 that
the positive field which is traveling in front of the craft "acts as a buffer wing which starts
moving the air out of the way... acts as an entering wedge which softens the supersonic
barrier, thus allowing the material leading edge of the craft to enter into a softened pressure
area."
Brown's work was spotlighted in a 1956 air intelligence report entitled "Electrogravitics
Systems," issued by Aviation Studies International, a UK-based intelligence think tank. I obtained a
copy of this report from the Wright-Patterson Air Force Base Technical Library in 1985. The
report, along with excerpts about this technology that appeared in various past issues of the
Aviation Studies newsletter, has been reprinted in a book of the same name,
Electrogravitics Systems.1 The report listed the names of many aerospace corporations that
were actively researching this air frame electrification technology in the early 1950's.
Northrop Corporation was one of these companies. At an aerospace sciences meeting
held in New York in January of 1968, scientists from Northrop's Norair Division reported that
they were beginning wind tunnel studies on the aerodynamic effects of applying high-voltage
charges to the leading edges of high-speed aircraft bodies.2,3 Echoing what Mason Rose had
described over a decade earlier, they said they expected that the applied electrical potential
would produce a corona glow that would propagate forward from the craft's leading edges to
ionize and repel air molecules upwind of the aircraft. The resulting repulsive electrical forces
would condition the air stream so as to lower drag, reduce heating, and soften or eliminate the
supersonic boom. According to author William Moore, the results were significant in that
when high voltage DC was applied to a wing-shaped structure subjected to a supersonic flow,
seemingly new "electro-aerodynamic" qualities appeared which resulted in significant air drag
reduction on the structure and the virtual elimination of friction-caused aerodynamic heating.
It is claimed that the B-2 bomber electrifies its wing-leading edge as a means of assisting
its propulsion.4 This appears to have been an extension of research that Northrop had carried
out in the 1960s, Northrop being the prime contractor for the design and construction of the
B-2's airframe. I discussed the application of this technology to the B-2 in a conference paper I
presented in 1993, and which was later reprinted in Electrogravitics Systems.
Airframe electrification used in the B-2 bomber to reduce air drag and hull friction. (Photo
courtesy of Bobbi Garcia).
Power generation:
The electrostatic charge applied to the wing leading edge may be supplied either by a
flame jet generator powered by the shuttle's own main engine or by flow through wind pods
mounted on the wing surface. The latter alternative would have the advantage that it would
require no fuel consumption. That is, the energy would be supplied by the reentry plasma
wind striking the shuttle. The "wind jet" generator would operate much like a Van de Graaff
generator, where the air stream is analogous to the charge-carrying belt in the Van de Graaff.
The air stream would enter the pod on the upwind end and leave the pod on the downwind
end. At the pod's entrance negative ions would be injected into the air stream and as this
flowed toward the rear end of the pod, it would carry these ions to a progressively higher
potential difference where a portion of the ions would be collected by a conductive grid, the
remainder being allowed to flow away toward the rear of the craft. The grid would recycle
this current, stepping an initial 50,000-volt starter potential up to a multi-megavolt output. Since
the negative ions are forcefully carried away from the craft by the reentry wind, the shuttle
would acquire a very high voltage positive charge in excess of 10 million volts, reaching a
maximum at the wing's leading edge which would be connected to the generator's positive
terminal.
This plasma jet generator would operate much like the flame jet generator that Townsend Brown
described in his 1962 electrokinetic generator patent (No. 3,022,430), where the hot combustion
gases are here replaced by the reentry airflow captured by the wind pods. In fact, in his
patent Brown stated that any kind of flowing nonconductive gas would serve as a substitute
for combustion gases.
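As a rough feel for the magnitudes involved, the hull voltage produced by a given net ion current can be estimated from the craft's self-capacitance via V = Q/C. The sketch below models the hull as an isolated conducting sphere; the radius, current, and charging time are illustrative assumptions of mine, not figures from the text or from Brown's patent.

```python
import math

# Crude illustrative estimate (assumed numbers, not from the text):
# treat the hull as an isolated conducting sphere and ask what voltage
# a steady net ion current would charge it to, via V = Q / C.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def sphere_capacitance(radius_m: float) -> float:
    """Self-capacitance of an isolated conducting sphere."""
    return 4.0 * math.pi * EPS0 * radius_m

def charging_voltage(current_a: float, seconds: float, radius_m: float) -> float:
    """Voltage reached if a net ion current charges the hull unopposed."""
    return current_a * seconds / sphere_capacitance(radius_m)

C = sphere_capacitance(10.0)            # assumed 10 m effective hull radius
V = charging_voltage(1e-3, 0.01, 10.0)  # assumed 1 mA net current for 10 ms
print(f"C = {C:.3e} F, V = {V:.3e} V")
```

On these assumed numbers the unopposed charging rate would pass the multi-megavolt mark within seconds, which suggests that in such a scheme the attainable potential would be limited by leakage back through the surrounding plasma rather than by the charging rate itself.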
Reentry effects:
One effect of hull electrification would be to accelerate the speed of the craft. That is,
hull electrification would not only reduce hull heating but also air friction drag against the
craft. Consequently, the craft would take longer to decelerate as it entered the atmosphere.
This could be accommodated by arranging for the craft to have a longer air flight path, e.g.,
passing through the atmosphere at a slightly lower angle. Alternatively, if more braking is
desired at any given time during reentry, the voltage electrifying the wing may be reduced or
altogether shut off, thereby engaging once again air friction surface heating. By alternately
turning the electrification on and off, hull surface temperatures may be kept minimal while
frictional deceleration is employed. This ease of controlling reentry deceleration by the flick
of a switch may be found to be superior to controlling the forward pitch of the craft, as is
currently done.
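The on/off braking scheme just described can be illustrated with a toy point-mass model. Drag, and the frictional heat it deposits, is scaled down by an assumed factor whenever electrification is switched on; the 70% drag-reduction figure, the drag constant, and the 10-second switching cycle are all invented for illustration and are not taken from the Northrop results.

```python
# Toy point-mass reentry model (all coefficients invented for illustration):
# dv/dt = -c(t) * k * v^2, where c(t) drops to (1 - drag_reduction) during
# the electrified fraction of each 10-second cycle. Accumulated "heat" is
# taken to be proportional to drag work, F * v.

def reentry_toy(duty_on, drag_reduction=0.7, v0=7500.0, k=1e-8,
                dt=0.1, t_end=600.0):
    v, heat, t = v0, 0.0, 0.0
    while t < t_end:
        phase = (t % 10.0) / 10.0
        c = (1.0 - drag_reduction) if phase < duty_on else 1.0
        a = c * k * v * v        # deceleration from (possibly reduced) drag
        heat += a * v * dt       # frictional heating ~ drag force * speed
        v -= a * dt
        t += dt
    return v, heat

v_off, q_off = reentry_toy(duty_on=0.0)    # electrification always off
v_half, q_half = reentry_toy(duty_on=0.5)  # on for half of each cycle
print(v_off, q_off, v_half, q_half)
```

With electrification on half the time the craft exits the pass faster (less braking) but with markedly less accumulated heat, which is the trade-off described above; varying duty_on is the software analogue of the on/off switch.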
Prior contact with NASA suggesting this technology:
In 1990 I participated in NASA's Space Exploration Outreach Project and
submitted an idea entitled "Electrogravitics: An energy-efficient means of spacecraft
propulsion" (submission category: Space transportation, launch vehicles, and propulsion).
My paper informed NASA about T. T. Brown's work and about the 1956 Aviation Studies
Intl. report mentioned above. I suggested that NASA aggressively pursue electrogravitics for
propulsion. My submission, however, was not summarized in the final report submitted to
NASA; it had unfortunately been omitted by Rand Corp. contract employees who found it,
in their opinion, irrelevant to NASA's objectives. One other participant had also submitted a
suggestion that NASA look into applying Brown's electrogravitic technology. But that too
was omitted from the main report. Attempts were made to obtain the computer records from
this project giving the reasons why this technology was deemed unsuitable, but these tapes
were reportedly "missing."
In 1992, and again in 1993, I contacted Mr. Charles Morris, Jr., who was at that time
heading NASA's National Aero-Space Plane (NASP) program, and encouraged him to
have NASA look into electrogravitics. I sent him a lot of material about Townsend Brown's
research, including the Aviation Studies International report. In our telephone conversations
we had discussed the issue of reentry heating of the hull, which was apparently a problem that
NASP was grappling with. In a letter I sent him on September 27, 1993, I specifically
pointed out that "electrostatic charging of the plane's leading edge would have the added
benefit of reducing air friction heating of the hull surface." But nothing came of this. Mr.
Morris later informed me that he was unable to generate any interest at NASA to look further
into this.
NASA had a 13-year advance warning. If their research programs had pursued this
technology and had applied it to the Space Shuttle, the lives of an entire crew could have been
saved along with the hundreds of millions of dollars that are now going into the wreckage
recovery and the cost of putting the space program on hold.
Nevertheless, looking to the future, it is my hope that NASA will now undertake the
challenge and seriously research the use of this technology. Jonathan Campbell of NASA
Marshall Space Flight Center would be a good contact point for beginning such a project. He
has spent many years researching the application of high voltage charge to aerospace
propulsion. Although his requests for NASA internal funding of this line of research have in
the past been turned down, hopefully NASA will now give a higher priority to this work in the
wake of the Columbia disaster. As a start, wind tunnel experiments should be carried out
similar to those Northrop conducted 35 years ago and efforts should be made at developing a
high-voltage wind jet generator for application to the Space Shuttle. I would be glad to assist
NASA in this effort.
References
1. T. Valone (editor) Electrogravitics Systems: Reports on a New Propulsion Methodology.
Washington, D.C.: Integrity Research Institute, 1994.
2. "Northrop studying sonic boom remedy," Aviation Week & Space Technology, Jan. 22,
1968, p. 21.
3. "Sonic boom experiments," Product Engineering, 39, March 11, 1968, pp. 35-36.
4. W. B. Scott, "Black world engineers, scientists encourage using highly classified
technology for civil applications," Aviation Week & Space Technology, March 9, 1992,
pp. 66-67.
 

Space, Propulsion & Energy Sciences International Forum - 2012,
Physics Procedia 38 (2012): 326–349.
The Cosmic Ether: Introduction to Subquantum Kinetics
Paul A. LaViolette
© February 6, 2011
The Starburst Foundation, 1176 Hedgewood Lane, Niskayuna, NY 12309
Electronic address: starburstfound@aol.com
Abstract: A general overview of the physics and cosmology of subquantum kinetics is presented,
together with its more recently developed explanation for quantum entanglement. Subquantum
kinetics is shown to be able to account for the superluminal control of the parallel-antiparallel
orientation of particle spin mediated through electric potential soliton beam links established
between remotely positioned, mutually entangled particles.
The transmuting ether concept
Subquantum kinetics is a unified field theory whose description of microphysical phenomena
has a general systems theoretic foundation (LaViolette 1985a,b,c, 1994, 2010). It conceives
subatomic particles to be Turing wave patterns that self-organize within a subquantum medium
that functions as an open reaction-diffusion system. This medium, termed the transmuting ether,
like the Akashic field, forms the substrate from which all physical forms in our universe emerge.
This ether, which requires more than three dimensions for its description, differs from 19th
century mechanical ethers in that it is continually active, its multifarious components transmuting,
reacting among themselves, and diffusing through space, these interweaving processes binding
the ether into an organic unity.
Subquantum kinetics presents a substantially different paradigm from that of standard physics
which views particles as closed systems. Whether these be subatomic particles bound together
by force fields, or quarks bound together by gluons, physics has traditionally conceived nature at
its most basic level to be composed of immutable structures. Unlike living systems which require
a continuous flux of energy and matter with their environment to sustain their forms,
conventional physics has viewed particles as self-sufficient entities, that require no interaction
with their environment in order to continue their existence. Thus classical field theory leads to a
conception of space which Alfred North Whitehead has criticized as being one of mere simple
location, where objects simply have position without incorporating any reference to other regions
of space and other durations of time.
Whitehead instead advocated a conception of space manifesting prehensive unification, where
separate objects can be "together in space and together in time even if they be not
contemporaneous." The ether (or Akasha) of subquantum kinetics fulfills Whitehead's conception.
As shown below, it is precisely because of its nonlinear, reactive, and interactive aspect
that the transmuting ether of subquantum kinetics is able to spawn subatomic particles and
photons, manifested either as stationary or inherently propagating ether concentration patterns.
In the context of subquantum kinetics, the very existence of the physical world we see around us
is evidence of the dynamic organic unity that operates in the universal substratum below,
imperceptible to us and out of reach of direct detection by the most sophisticated instruments.
The notion of an ether, or of an absolute reference frame in space, necessarily conflicts with
the postulate of special relativity that all frames should be relative and that the velocity of light
should be a universal constant. However, experiments by Sagnac (1913), Graneau (1983),
Silvertooth (1987, 1989), Pappas and Vaughan (1990), Lafforgue (1991), and Cornille (1998), to
name just a few, have established that the idea of relative frames is untenable and should be
replaced with the notion of an absolute ether frame. Also a moderately simple experiment
performed by Alexis Guy Obolensky has clocked speeds as high as 5 c for Coulomb shocks
traveling across his laboratory (LaViolette, 2008a). Furthermore Podkletnov and Modanese
(2011) report having measured a speed of 64 c for a collimated gravity impulse wave produced
by a high voltage discharge emitted from a superconducting anode. These experiments not only
soundly refute the special theory of relativity, but also indicate that information can be
communicated at superluminal speeds.
However, subquantum kinetics does not negate the existence of "special relativistic effects"
such as velocity dependent clock retardation and rod contraction. Nor, in offering an alternative
to the space-time warping concept of general relativity, does it negate the reality of orbital
precession, the bending of starlight, gravitational time dilation, and gravitational redshifting.
These effects emerge as corollaries of its reaction-diffusion ether model (LaViolette 1985b, 1994,
2004, 2010). It should be added that subquantum kinetics has made a number of testable
predictions, twelve of which have been subsequently verified (LaViolette 1986, 1992, 1996,
2005, 2010); see Table 1.
Table 1
Twelve A Priori Predictions of Subquantum Kinetics that were Subsequently Verified
1) The prediction that the electric field in the core of a nucleon should be configured as a radially periodic
Turing wave pattern of progressively declining amplitude, and that a charged nucleon should have a Turing
wave pattern whose core electric potential is biased relative to the background electric potential (LaV,
1985b, 2008b).
2) The prediction that the universe is cosmologically stationary and that photons passing through intergalactic
regions of space should progressively decrease their energy, that is, that photons should continually
undergo a tired-light redshift effect (LaV, 1985c, 1986).
3) The prediction that photons traveling within galaxies should progressively increase their energy, that is,
blueshift their wavelengths, and consequently that the luminosity of planets and red dwarf stars should be
due to energy being spontaneously generated in their interiors (LaV, 1985c, 1992).
4) The prediction that the luminosity of brown dwarf stars should be due to the photon blueshifting effect
described in (3) (LaV, 1985c, 1996, 2010).
5) The anticipation of the Pioneer effect; the prediction that a spacecraft maser signal transponded through
interplanetary space should be observed to blueshift its wavelength at a rate of about one part in 10^18 per
second (LaV, 1985c, 2005).
6) The prediction that blue supergiant stars rather than red giant stars should be the precursors of supernova
explosions (LaV, 1985c, 1995).
7) The prediction that galactic core emissions should come from uncollapsed matter-creating stellar masses
(Mother stars), rather than from matter-accreting black holes (LaV, 1985c).
8) The prediction that stars in the vicinity of the Galactic center should be massive blue supergiant stars as
opposed to low mass red dwarf stars (LaV, 1985c).
9) The prediction that galaxies should progressively grow in size with the passage of time proceeding from
compact types such as dwarf ellipticals and compact spirals to mature spirals and giant ellipticals (LaV,
1985c, 1994).
10) The prediction that a monopolar electron discharge should produce a longitudinal electric potential wave
accompanied by a matter repelling gravity potential component (LaV, 1985b, 1994).
11) The prediction that the speed of the superluminal gravity wave component of a monopolar electron
discharge should depend on the potential gradient of the discharge (LaV, 2003, 2010).
12) The prediction that inertial mass should decrease with a rise of G potential or with an increase in negative
potential, and should increase with the reverse polarity (LaV, 1985b).
The systems dynamics of subquantum kinetics
Subquantum kinetics was inspired from research done on open chemical reaction systems such
as the Belousov-Zhabotinskii (B-Z) reaction (Zaikin and Zhabotinskii 1970, Winfree 1974) and
the Brusselator (Lefever 1968, Glansdorff and Prigogine 1971, Prigogine, Nicolis, and Babloyantz
1972, Nicolis and Prigogine 1977). Under the right conditions, the concentrations of the variable
reactants of the Brusselator reaction system can spontaneously self-organize into a stationary
reaction-diffusion wave pattern such as that shown in figure 1. These have been called Turing
patterns in recognition of Alan Turing who in 1952 was the first to point out their importance for
biological morphogenesis. Alternatively, Prigogine et al. (1972) have referred to them as
dissipative structures because the initial growth and subsequent maintenance of these patterns is
due to the activity of the underlying energy-dissipating reaction processes. In addition, the B-Z
reaction is found to exhibit propagating chemical concentration fronts, or chemical waves which
may be easily reproduced in a school chemistry laboratory; see figure 2.
The Brusselator, the simpler of the two reaction systems, is defined by the following four
kinetic equations:
A ——❿
k1 X, a)
B + X ——❿
k2 Y + Z, b)
2X + Y ——❿
k3 3X, c)
(1)
X ——❿
k4 Ω. d)
The capital letters specify the concentrations of the various reaction species, and the ki denote
the kinetic constants for each reaction. Each reaction produces its products on the right at a rate
equal to the product of the reactant concentrations on the left times its kinetic constant. Reaction
species X and Y are allowed to vary in space and time, while A, B, Z and Ω are held constant.
Figure 1. One-dimensional computer simulation of the concentrations of the
Brusselator's X and Y variables (diagram after R. Lefever 1968).
Figure 2. Chemical waves formed by the Belousov-Zhabotinskii reaction (photo
courtesy of A. Winfree).
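As a minimal numerical sketch (mine, not the paper's), the well-mixed Brusselator of equation system (1) can be integrated directly, with diffusion omitted and all kinetic constants set to 1, standard textbook parameter choices. For feed concentrations beyond the instability threshold (B > 1 + A² in these units) the X and Y concentrations settle into sustained reciprocal oscillations instead of a steady state:

```python
# Well-mixed Brusselator, equation system (1) with diffusion omitted and all
# kinetic constants set to 1 (textbook choices, not values from the paper).
# For B > 1 + A^2 the steady state (X, Y) = (A, B/A) is unstable and the
# system relaxes onto a limit cycle: sustained reciprocal oscillations.

def brusselator(A=1.0, B=3.0, dt=0.001, steps=200_000):
    k1 = k2 = k3 = k4 = 1.0
    X, Y = 1.0, 1.0              # arbitrary initial condition
    xs = []
    for _ in range(steps):
        dX = k1*A - k2*B*X + k3*X*X*Y - k4*X   # gains from (a),(c); losses to (b),(d)
        dY = k2*B*X - k3*X*X*Y                 # gain from (b); loss to (c)
        X += dX * dt
        Y += dY * dt
        xs.append(X)
    return xs

xs = brusselator()
tail = xs[len(xs) // 2:]         # discard the initial transient
print(min(tail), max(tail))      # X keeps swinging between low and high values
```

The persistent gap between the minimum and maximum of the tail shows the oscillation has not died out; adding diffusion and spatial extent to the same kinetics is what yields the stationary Turing patterns of figure 1.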
This system defines two global reaction pathways which cross-couple to produce an X-Y
reaction loop. One of the cross-coupling reactions, (1-c), is autocatalytic and prone to produce a
nonlinear increase of X, which is kept in check by its complementary coupling reaction (1-b).
Computer simulations of this system have shown that, when the reaction system operates in its
supercritical mode, an initially homogeneous distribution of X and Y can self-organize into a
wave pattern of well-defined wavelength in which X and Y vary reciprocally with respect to one
another as shown in figure 1. In other words, these systems allow order to spontaneously
emerge (entropy to decrease) by virtue of the fact that they function as open systems; the
Second Law of Thermodynamics applies only to closed systems.
The Model G ether reaction system
Subquantum kinetics postulates a nonlinear reaction system similar to the Brusselator that
involves the following five kinetic equations termed Model G (LaViolette, 1985b):
A ——❿
k1
k-1
➛—— G, a)
G ——❿
k2
k-2
➛—— X, b)
B + X ——❿
k3
k-3
➛—— Y + Z, c) (2)
2X + Y ——❿
k4
k-4
➛—— 3X, d)
X ——❿
k5
k-5
➛—— Ω. e)
The kinetic constants ki denote the relative propensity for the reaction to proceed forward, and
k-i denote the relative propensity for the corresponding reaction to proceed in the reverse
direction. The forward reactions are mapped out in figure 3. Since the forward kinetic
constants have values much greater than the reverse kinetic constants, the reactions have the
overall tendency to proceed irreversibly to the right. Nevertheless, the reverse reactions, in
particular that associated with reaction (2-b), play an important role. Not only does this one
allow Model G to establish electro-gravitic field coupling, but as described below, it also allows
the spontaneous formation of material particles in an initially subcritical ether.
Whereas the Brusselator and B-Z reaction conceive of a chemical medium consisting of various
reacting and diffusing molecular species, subquantum kinetics conceives of a space-filling etheric
medium consisting of various reacting and diffusing etheric species termed etherons. Being
present as various etheron types (or states) labeled A, B, X, and so on, these diffuse through
space and react with one another in the manner specified by Model G. Model G is in effect the
recipe, or software, that generates the physical universe. Etherons should not be confused with
quarks. Whereas quark theory proposes that quarks exist only within the nucleon, with just
three residing within each such particle, subquantum kinetics presumes that etherons are far more
ubiquitous, residing not only within the nucleon, but also filling all space with a number density
of over 10^25 per cubic fermi, where they serve as the substrate for all particles and fields.
The self-closing character of the X-Y reaction loop, which is readily evident in figure 3, is what
allows Model G and the Brusselator to generate ordered wave patterns. Model G is similar to
the Brusselator with the exception that a third intermediary variable, G, is added with the result
that steps (2-a) and (2-b) now replace step (1-a) of the Brusselator, all other steps remaining the
same. The third variable was introduced in order to give the system the ability to nucleate
self-stabilizing localized Turing patterns within a prevailing subcritical environment. This
autogenic particle formation ability is what allows Model G to become a promising candidate
system for the generation of physically realistic subatomic structures.
Figure 3. The Model G ether reaction system investigated by subquantum kinetics.
Based on reaction equation system (2) we may write the following set of partial differential
equations to depict how all three reaction intermediates G, X and Y vary as a function of space
and time in three dimensions:

  ∂G(x, y, z, t)/∂t = k1 A − k2 G + Dg ∇²G
  ∂X(x, y, z, t)/∂t = k2 G + k4 X²Y − k3 B X − k5 X + Dx ∇²X        (3)
  ∂Y(x, y, z, t)/∂t = k3 B X − k4 X²Y + Dy ∇²Y
where the Dg, Dx and Dy values represent the diffusion coefficients of the respective reactant
variables.
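Equation system (3) can be integrated with a straightforward explicit finite-difference scheme. The sketch below uses a 1-D periodic grid and arbitrary unit values of my own choosing for every constant, a deliberately subcritical regime, so a small seeded X fluctuation simply relaxes back to the homogeneous steady state Go = k1A/k2, Xo = k1A/k5, Yo = k3B/(k4Xo); the paper's particle-nucleating simulations use carefully tuned parameter values instead.

```python
import numpy as np

# Explicit Euler / central-difference integration of equation system (3) on a
# 1-D periodic grid. All constants are arbitrary unit values (an assumption,
# not the paper's parameters) that leave the homogeneous state stable, i.e.
# subcritical, so a seeded X fluctuation decays rather than forming a pattern.
k1 = k2 = k3 = k4 = k5 = 1.0
A, B = 1.0, 1.0
Dg = Dx = Dy = 1.0
n, dx, dt = 100, 1.0, 0.1        # dt < dx^2 / (2 D) keeps diffusion stable

def lap(u):
    """Periodic 1-D Laplacian via central differences."""
    return (np.roll(u, 1) - 2.0*u + np.roll(u, -1)) / dx**2

G0, X0 = k1*A/k2, k1*A/k5        # homogeneous steady state of (3)
Y0 = k3*B/(k4*X0)
G = np.full(n, G0); X = np.full(n, X0); Y = np.full(n, Y0)
X[n // 2] -= 0.1                 # seed a small local X fluctuation

for _ in range(2000):
    dG = k1*A - k2*G + Dg*lap(G)
    dX = k2*G + k4*X**2*Y - k3*B*X - k5*X + Dx*lap(X)
    dY = k3*B*X - k4*X**2*Y + Dy*lap(Y)
    G += dt*dG; X += dt*dX; Y += dt*dY

dev = float(np.max(np.abs(X - X0)))
print(dev)                       # the perturbation has relaxed away
```

This subcritical run is only meant to show the numerical form of (3); amplifying a seed fluctuation into a stationary Turing pattern requires supercritical parameter tuning of the kind described in the sections that follow.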
A homogeneous distribution of the G, X, and Y reaction intermediates would correspond to a
spatial vacuum devoid of matter and energy. Variations in the concentrations of these three
variables would correspond to the formation of observable electric and gravitational potential
fields, and wave patterns formed by these fields, in turn, would constitute observable material
particles and energy waves. The etherons themselves would remain unobservable. Subquantum
kinetics identifies the G concentration with gravitational potential, where G concentrations
greater than the prevailing homogeneous steady-state concentration value, Go, would constitute
positive gravity potentials and G concentrations less than Go would constitute negative gravity
potentials. A negative G potential well, that is, a G ether concentration well, would correspond to a
matter-attracting negative gravity potential field, whereas a positive G potential hill would
correspond to a matter-repelling positive gravity potential field. The X and Y concentrations,
which are mutually interrelated in reciprocal fashion, are together identified with electric potential
fields. A configuration in which the Y concentration is greater than Yo and the X concentration is
less than Xo would correspond to a positive electric potential and the opposite polarity, low-
Y/high-X would correspond to a negative electric potential. Relative motion of an electric
potential field, or of an X-Y concentration gradient, would generate a magnetic (or
electrodynamic) force (LaViolette 1994, 2010). As Feynman, Leighton, and Sands (1964) have
shown, in standard physics magnetic force can be mathematically expressed solely in terms of the
effect that a moving electric potential field produces on a charged particle obviating the need for
magnetic potential field terms. Also relative motion of a gravity potential field, or of a G
concentration gradient, is predicted to generate a gravitodynamic force, the gravitational
equivalent of a magnetic force.
Figure 4. An expansion of the Model-G ether reaction scheme as it would appear disposed
along dimension T. G, X, and Y mark the domain of the physical universe.
The subquantum kinetics ether functions as an open system, where etherons transform
irreversibly through a series of "upstream" states, including states A and B, eventually occupying
states G, X, and Y, and subsequently transforming into the Z and Ω states and from there
through a sequence of "downstream" states; see figure 4. This irreversible sequential
transformation is conceived as defining a vectorial dimension line termed the transformation
dimension. Our observable physical universe would be entirely encompassed by the G, X, and Y
ether states, which would reside at a nexus along this transformation dimension, the continual
etheron transformation process serving as the Prime Mover of our universe. According to
subquantum kinetics, the arrow of time, as physically observed in all temporal events, may be
attributed to the continuation of this subquantum transformative process. Subquantum kinetics
allows the possibility of parallel universes forming either "upstream" or "downstream" of our
own universe wherever the ether reaction stream intersects itself to form a reaction loop similar
to Model G. However, while there is a finite chance of such a material universe being spawned,
the possibility that it would actually form is vanishingly small since the ether reaction
parameters would need to adopt the proper precise values in order to spawn the necessary
nucleon building blocks.
Since etherons both enter and leave the etheron states that compose material bodies and energy
waves in our physical universe, our observable universe may be said to be open to the throughput
of etherons. That is, our universe would function as an open system. In such a system,
ordered field patterns may spontaneously emerge from initially homogeneous field distributions
or they may progressively dissolve back to the homogeneous state depending on the criticality of
the reaction system. In Model G, the system's criticality is determined by the value of the G
variable. Sufficiently negative G potentials create supercritical conditions that allow matter
formation and photon blueshifting while positive G potential values create subcritical conditions
that cause tired-light photon redshifting and in extreme instances particle dematerialization.
The transmuting ether of subquantum kinetics bears some resemblance to the ether concept of
Nikola Tesla. He proposed a gas-like ether that is acted on by a "life-giving creative force" which
when thrown into infinitesimal whorls gives rise to ponderable matter and that when this force
subsides and the motion ceases, matter disappears leaving only the ether. In subquantum kinetics,
this creative force or Prime Mover is termed etheric Force while the resulting transmutative
or reactive transformation of etherons from one state to another is termed etheric flux.
The transmuting ether also closely parallels the descriptions of Besant and Leadbeater (1919)
who as early as 1895 said "the ether is not homogeneous but consists of particles of numerous
kinds, differing in the aggregations of the minute bodies composing them." About the subatomic
particle, which they refer to as the "ultimate physical atom", they state: "It is formed by the flow
of the life-force and vanishes with its ebb. When this force arises in 'space'... [u.p.a] atoms
appear; if this be artificially stopped for a single atom, the atom disappears; there is nothing left.
Presumably, were that flow checked but for an instant, the whole physical world would vanish,
as a cloud melts away in the empyrean. It is only the persistence of that flow which maintains
the physical basis of the universe." Similarly, subquantum kinetics views our observable
physical universe as an epiphenomenal watermark generated by the activity of a higher
dimensional ether that functions as an open system.
Parthenogenesis: The creation of matter from zero-point fluctuations
According to subquantum kinetics, material particles nucleate from electric and gravity
potential fluctuations that spontaneously arise from the ether vacuum state. Since etherons react
and transform in a stochastic Markovian fashion, the etheron concentrations of all etheron
species will vary stochastically above and below their steady-state values, the magnitudes of
these fluctuations conforming to a Poisson distribution. It is known that such fluctuations are
present in the chemical species of reaction-diffusion systems such as the B-Z reaction and their
presence is also postulated in the theoretical Brusselator system. So the same would be true in
the Model G reactive ether. Hence subquantum kinetics predicts that stochastic electric and
gravity potential fluctuations should spontaneously arise throughout all of space, in regions both
where field gradients are present and where they are absent. This is in some ways analogous to
the concept of the zero-point energy (ZPE) background.
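The Poisson statistics invoked above have a simple quantitative consequence: for a mean count N per sampling volume, the standard deviation is √N, so the relative fluctuation falls off as 1/√N. The short sketch below merely illustrates that scaling with ordinary Poisson sampling; the mean counts are arbitrary illustration values.

```python
import numpy as np

# Poisson counting statistics: the relative fluctuation (std / mean) of a
# Poisson-distributed count with mean N falls off as 1 / sqrt(N).
# The mean counts below are arbitrary illustration values.
rng = np.random.default_rng(0)

rels = {}
for mean in (100, 10_000, 1_000_000):
    samples = rng.poisson(mean, size=200_000)
    rels[mean] = samples.std() / samples.mean()
    print(mean, rels[mean], mean ** -0.5)   # measured ratio vs. 1/sqrt(N)
```

Since large excursions are exponentially rare under a Poisson law, this scaling bears on why nucleation is said to require the ether to operate close to its critical threshold.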
Provided that the kinetic constants and diffusion coefficients of the ether reactions are
properly specified to render the system subcritical but close to the critical threshold, a
sufficiently large spontaneously arising positive zero-point electric potential fluctuation (i.e., a
critical fluctuation consisting of a low X concentration or a high Y concentration) which, as it
grows through a further reduction of X and increase in Y, is able to break the symmetry of the initial
vacuum state to produce what is called a Turing bifurcation. That is, it is able to change the
initially uniform electric and gravity potential background field that defines the vacuum state into
a localized, steady-state periodic structure. In subquantum kinetics this emergent wave pattern
would form the central electric and gravity field structure of a nascent subatomic particle.
One advantage of Model G is that a positive electric potential fluctuation characterized by a
negative X potential also generates a corresponding negative G potential fluctuation by virtue of
the reverse reaction X → G (rate constant k₋₂), and this in turn produces a local supercritical region allowing
the seed fluctuation to persist and grow in size. Consequently, if the ether reaction system is
initially in the subcritical vacuum state, provided that it operates sufficiently close to the critical
threshold, eventually a fluctuation will arise that is sufficiently large to form a supercritical region
and nucleate a subatomic particle (e.g., a neutron). Thus spontaneous matter and energy creation
is allowed in subquantum kinetics.
This parthenogenic, order-through-fluctuation process is shown in figure 5 which presents
successive frames from a 3D computer simulation of equation system (3) (Pulver and LaViolette,
2011). Spherical symmetry was imposed as an arbitrary assumption to reduce the computing
time necessary to carry out the simulation. The duration of the simulation consists of 100
arbitrary time units and the reaction volume measures 100 arbitrary spatial units, from -50 to
+50, with one fifth of the volume being displayed in the graph. Vacuum boundary conditions are
assumed. These space and time units are dimensionless, meaning that the units of measure are
not specified. To initiate the particle's nucleation, a negative subquantum X ether fluctuation
-ϕx(r) was introduced at spatial coordinate r = 0. The fluctuation rises and falls in
magnitude, reaching its maximum value of -1 after 10 time units, or 10% of the way through the
simulation, and diminishing back to zero magnitude (flat line) by 20 units of time, or 20% of the
way through the simulation. The reaction system quickly generates a complementary positive Y
potential fluctuation +ϕy(r), which together comprise a positive electric potential fluctuation, and
also generates a negative G fluctuation -ϕg(r), which comprises a gravity potential well. This is
apparent in the second frame at t = 15 units. This central G-well generates a region that is
Figure 5. Sequential frames from a three-dimensional computer simulation of Model G
showing the emergence of an autonomous dissipative structure particle: t = 0 the initial
steady state; t = 15 growth of the positively charged core as the X seed fluctuation fades; t
= 18 deployment of the periodic electric field Turing wave pattern; and t = 35 the mature
dissipative structure particle maintaining its own supercritical core G-well. Simulation by M.
Pulver.
sufficiently supercritical to allow the fluctuation to rapidly grow in size and eventually develop
into an autonomous particulate dissipative structure which is seen fully developed in the last
frame at t = 35 units. Movies of this and other Model G simulations are posted at
www.starburstfound.org/simulations/archive.html.
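Since equation system (3) is not reproduced in this section, the mechanism can be illustrated with the two-variable Brusselator that the text repeatedly compares Model G against. The 1D sketch below (all parameter values are illustrative assumptions) starts the system at its homogeneous steady state in a supercritical regime, injects a localized seed fluctuation, and integrates forward until a stationary periodic Turing pattern emerges:

```python
import numpy as np

# 1D Brusselator reaction-diffusion system, the classic two-variable analogue
# discussed in the text (Model G itself adds a third variable, G).  Parameters
# are illustrative, chosen past the Turing threshold B_T = (1 + A*sqrt(Dx/Dy))^2
# so that a seed fluctuation grows into a stationary periodic pattern.
A, B = 2.0, 4.0          # B_T = 2.25 here, so B = 4 is supercritical
Dx, Dy = 1.0, 16.0       # diffusion coefficients
N, dx, dt = 200, 0.5, 0.005

X = np.full(N, A)        # homogeneous steady state X = A
Y = np.full(N, B / A)    # homogeneous steady state Y = B/A
X[N // 2] += 0.5         # localized seed fluctuation

def lap(u):              # periodic 1D Laplacian
    return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2

for _ in range(20000):   # explicit Euler integration to t = 100
    fX = A - (B + 1) * X + X**2 * Y
    fY = B * X - X**2 * Y
    X = X + dt * (fX + Dx * lap(X))
    Y = Y + dt * (fY + Dy * lap(Y))

print("pattern amplitude:", X.max() - X.min())   # order 1: pattern has formed
```

Unlike Model G, the supercritical Brusselator lets the pattern spread through the entire reaction volume rather than remaining localized, which is exactly the limitation of two-variable systems noted later in the text.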
The particle shown here would represent a neutron. Its electric field consists of a Gaussian
central core of high-Y/low-X polarity surrounded by a pattern of concentric spherical shells
where X and Y alternate between high and low extrema of progressively declining amplitude.
Being a reaction-diffusion wave pattern, we may appropriately name this periodicity the
particle's Turing wave (LaViolette, 2008b). The antineutron would have the opposite polarity,
high-X/low-Y centered on a G potential hill.
The positive Y potential field (negative X potential field) in the neutron's core corresponds to
the existence of a positive electric charge density, and the surrounding shell pattern which
alternates between low and high Y potentials constitutes shells of alternating negative and
positive charge density. On the average, however, these charge densities cancel out to zero in the
case of the neutron, which is why the Turing wave for the simulated neutron shown in figure 5
has no positive or negative bias with respect to the ambient zero potential.
The appearance of these positive and negative charge densities necessitates the simultaneous
appearance of the particle's inertial rest mass. The shorter the wavelength of the Turing wave,
and the greater its amplitude (i.e., the greater its etheron concentration wave amplitude), the greater will be
the inertial mass of the associated particle (LaViolette 1985b). Since acceleration requires a
structural shift and recreation of the particle's Turing-wave dissipative space structure, the
particle's resistance to acceleration, its inertia, should be proportional to the magnitude of its
Turing-wave charge densities; that is, proportional to the amount of negentropy that must be
restructured (LaViolette 2010).
Subquantum kinetics further requires that for Model G to be physically realistic, the values of
its kinetic constants, diffusion coefficients, and reactant concentrations should be chosen such
that the emergent Turing wave has a wavelength equal to the Compton wavelength, λ₀, of the
particle it represents, this being related to the particle rest mass energy E₀, or to its rest mass m₀, by
the formula:
λ₀ = hc/E₀ = h/m₀c,  (4)
where h is Planck's constant and c is the velocity of light. The Compton wavelength for the
nucleon calculates to be 1.32 fermis (λ₀ = 1.32 × 10⁻¹³ cm). This prediction that a particle's core
electric field should have a Compton wavelength periodicity has since been confirmed by particle
scattering experiments; see below. Moreover, unlike the Schroedinger linear wave packet
representation of the particle, which has the unfortunate tendency to progressively dissipate over
time, the localized dissipative structures predicted by Model G maintain their coherence because the
underlying ether reaction-diffusion processes continuously combat the increase of entropy. Thus
the Schroedinger wave equation used in quantum mechanics offers a rather naive linear
approximation of microphysical phenomena, the quantum level being better
described by a nonlinear equation system such as Model G.
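Equation (4) above is easy to verify numerically. The sketch below evaluates λ₀ = h/m₀c for the nucleon in CGS units, using standard values of the physical constants, and reproduces the 1.32 fermi figure quoted in the text:

```python
# lambda_0 = h / (m_0 * c), equation (4), evaluated in CGS units.
h = 6.62607015e-27     # Planck constant, erg s
c = 2.99792458e10      # speed of light, cm/s
m_n = 1.67492750e-24   # neutron rest mass, g

lam = h / (m_n * c)
print(f"nucleon Compton wavelength: {lam:.3g} cm")  # ~1.32e-13 cm = 1.32 fermis
```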
Since this Turing wave particle representation incorporates both particle and wave aspects, we
are able to dispense with the need to adopt a wave-particle dualism view of quantum interactions.
Moreover the Turing wave subatomic particle has been shown to quantitatively account for the
results of particle diffraction experiments, thereby eliminating paradoxes that arise in standard
theories that rely on deBroglie's pilot wave interpretation or Schroedinger's wave packet concept.
It also correctly yields Bohr's orbital quantization formula for the hydrogen atom while at the
same time predicting a particle wavelength for the ground state orbital electron that is ~1400
times smaller than Schroedinger's wave packet prediction (LaViolette, 1985, 2008b, 2010). This
more compact representation of the electron allows the existence of smaller diameter sub-ground
state orbits having fractional quantum numbers. Several researchers, such as John Eccles and
Randall Mills, claim to have developed methods of inducing electron transitions to such sub-ground
orbits and to thereby extract enormous quantities of energy from plain water (LaViolette,
2008b). So reformulating quantum mechanics on the basis of the subquantum kinetics Turing
wave concept opens the door to understanding and developing new environmentally safe
technologies that could power our world.
In the course of dispensing with the Schroedinger wave packet and its associated probability
function describing the indeterminate position of a mass point, it is advisable to also throw out
the Copenhagen interpretation with its mysterious "collapse of the wave function" theorized to
take place when the quantum "entity" through measurement becomes determined to be either a
wave or a particle. In particular, Dewdney et al. (1985) have shown experimentally that the
position of the particle is defined in a real sense prior to its deBroglie scattering event and from
this conclude that in this particular case the wave-packet-collapse concept is flawed. More than
likely, we should be able to avoid invoking this collapse concept also in experiments observing
the spin orientation of entangled particles or the polarization of entangled photons. There appears to
be a growing realization that its widespread use is mainly a convenient mechanism to cover up
the fact that we currently have a poor understanding of the workings of the subquantum realm.
When a neutron spontaneously acquires positive charge and transforms into a proton, its X-Y
wave pattern acquires a positive bias similar to that shown in figure 6 (shaded region in the
left-hand plot). Such a biasing phenomenon, which is seen in analysis of the Brusselator, is also
present in Model G when an existing ordered state undergoes a secondary bifurcation. The
transition of the neutron to the positively biased proton state is best understood by reference to
a bifurcation diagram similar to those used to represent the appearance of ordered states in
nonequilibrium chemical reaction systems, see figure 7. The emergence of the neutron from the
vacuum state is represented as a transition to the upper primary bifurcation branch which
branches past critical threshold βc. Past threshold β′ this primary branch undergoes a secondary
bifurcation with the emergence of the proton solution branch. This transition is observed in the
phenomenon of beta decay, which also involves the production of electron and antineutrino
particles not charted here; i.e., n → p + e⁻ + ν̄ + γ.
The neutron's transition to the charged proton state involves an excess production rate of Y per
unit volume in its core coupled with a corresponding excess consumption rate of X per unit
volume. This causes a positive bias in its central Y concentration and a negative bias in its central
X concentration, which in turn extends radially outward to bias the particle's entire Turing wave
pattern. This extended field bias constitutes the particle's long-range electric field. Analysis
shows that this potential bias declines as the inverse of radial distance just as classical theory
predicts. In fact, subquantum kinetics has been shown to reproduce all the classical laws of
electrostatics, as well as all the classical laws of gravitation (LaViolette, 1985b, 1994, 2010).
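The 1/r falloff claimed here is at least consistent with ordinary electrostatics, in which any localized, spherically symmetric charge density produces a potential approaching Q/r outside the source. The sketch below checks this numerically for an assumed Gaussian charge blob (a stand-in for the biased core, not the actual Model G field), using the shell-theorem decomposition of the potential:

```python
import numpy as np

sigma, Q = 1.0, 1.0                    # assumed core width and total charge
r = np.linspace(0.05, 20.0, 4000)      # radial grid (arbitrary units)
dr = r[1] - r[0]
rho = Q / ((2 * np.pi) ** 1.5 * sigma**3) * np.exp(-r**2 / (2 * sigma**2))

# Shell theorem: phi(r) = q_enc(r)/r + integral_r^inf 4*pi*r'*rho(r') dr'
q_enc = np.cumsum(4 * np.pi * r**2 * rho) * dr          # charge enclosed within r
outer = np.cumsum((4 * np.pi * r * rho)[::-1])[::-1] * dr
phi = q_enc / r + outer

far = r > 8 * sigma                    # well outside the Gaussian source
print("max |phi*r - Q| far from source:", np.abs(phi[far] * r[far] - Q).max())
```

Far from the source, phi·r stays pinned at the total charge Q, i.e. the potential declines as 1/r regardless of the detailed shape of the localized density.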
It should be kept in mind that the charge densities forming the proton's Turing wave pattern,
which are associated with its inertial mass, are distinct from and additional to the charge
density that centrally biases its Turing pattern and produces the particle's long-range electric
Figure 6. Radial electrostatic potential profiles for a proton and antiproton, positively
charged matter state (left) and negatively charged antimatter state (right). The characteristic
wavelength equals the particle's Compton wavelength.
Figure 7. A hypothetical bifurcation diagram for the formation of nuclear particles. The
secondary bifurcation past bifurcation point β′ creates electrostatic charge.
field. The former periodic densities emerge as a result of the particle's primary bifurcation from
the homogeneous steady-state solution, while the latter aperiodic bias emerges as a result of its
secondary bifurcation from an existing steady-state Turing solution.
Based on the results of the Sherwin-Rawcliffe experiment, we may infer that the creation and
later displacement of the particle's Turing wave field would be communicated outward essentially
instantaneously or at an exceedingly high superluminal velocity. The same would hold for the
outward moving event boundaries of a subatomic particle's long-range electric and gravity
potential fields. For their experiment, Sherwin and Rawcliffe (1960) performed mass spectrometry
measurements of a football-shaped Lu-175 nucleus to check for the presence of line splitting
and came up with a null result. This indicated that the mass of the lutetium nucleus
behaved as a scalar instead of a tensor which implied that its Coulomb field moved rigidly with
its nucleus and was thereby able to create instant action-at-a-distance (Phipps, 2009). Accordingly,
the conventional practice of retarding force actions at speed c would be inappropriate.
Subquantum kinetics leads to a novel understanding of force, acceleration, and motion. In
subquantum kinetics the energy potential field (ether concentration gradient) is regarded as the
real existent and the prime cause of motion, "force" being regarded as a derived manifestation.
That is, force is interpreted as the stress effect which the potential gradient produces on the
material particle due to the distortion it manifests on the field pattern space structure that
composes the particle. The particle relieves this stress by homeostatic adjustment which results
in a jump acceleration and relative motion.
Particle scattering confirmation
The Turing wave configuration of the nucleon's electric potential field predicted by subquantum
kinetics has been confirmed by particle scattering experiments that employ the recoil-polarization
technique. Kelly (2002) has obtained a good fit to particle scattering form factor
data by representing the radial variation of charge and magnetization density with a relativistic
Laguerre-Gaussian expansion; see figures 8-a and 9-a. The periodic character of this fit is more
apparent when surface charge density (r2ρ) is plotted as a function of radial distance as shown in
figures 8-b and 9-b. Kelly's charge density model predicts that the proton and neutron both have
a Gaussian-shaped positive charge density core surrounded by a periodic electric field having a
wavelength approximating the Compton wavelength. Moreover, he has noted that unless this
surrounding periodicity is included, his nucleon charge and magnetization density models do not
make as good a fit to form factor data.
Thus here we have a stunning confirmation of a central feature of the subquantum kinetics
Figure 8. a) Charge density profile for the neutron predicted by Kelly's preferred
Laguerre-Gaussian expansion models and b) the corresponding surface charge density
profile (after Kelly 2002, Fig. 5 - 7, 18).
Figure 9. a) Charge density profile for the proton predicted by Kelly's preferred
Laguerre-Gaussian expansion models and b) the corresponding surface charge density
profile (after Kelly 2002, Fig. 5 - 7, 18).
physics methodology whose prediction was first made in the mid-1970s, at a time when it was still
conventional to regard the field in the core of the nucleon as rising sharply to a central cusp. Note
also that Kelly's model confirms the positive biasing of the proton's central field, the bias
increasing as the center of the particle is approached; compare the enhanced view shown in figure
9-b with figure 6. Furthermore, as in the subquantum kinetics model, Kelly's model shows the
amplitude of the nucleon's peripheral periodicity declining with increasing radial distance.
Simulations performed on Model G show that the amplitude of the Turing wave pattern
declines with increasing radial distance as 1/r⁴ at small radii (r < 2λ₀), which approximates the
radial decline observed in the charge density maxima for Kelly's model. The Model G particle
Turing wave pattern declines more steeply at greater radial distances, declining as 1/r⁷ at r ≈ 4λ₀
and 1/r¹⁰ for r ≈ 6λ₀, which may be compared to standard theory, which proposes that the
nuclear force declines as Fn ∝ 1/r⁷. This localized particle wave pattern is possible only because
the extra G variable has been introduced into the Model G reaction system. It allows a particle to
self-nucleate in an initially subcritical environment while leaving distant regions of space in their
subcritical vacuum state. Thus if we quantify the amount of order or negentropy created by a
single seed fluctuation, i.e., integrate the total amount of field potential |ϕx| or |ϕy| forming the
particle wave pattern, we should find that it converges to a finite value, comparable to the idea of
a quantum of action. The two-variable Brusselator, on the other hand, fails to generate localized
structures. Simulations show that a seed fluctuation in the Brusselator produces order only if the
system initially operates in the supercritical state, which in turn causes its entire reaction volume
to become filled with a Turing wave pattern of maximum amplitude. Thus in the Brusselator a
single seed fluctuation potentially produces an infinite amount of negentropy or structure.
The confirmation of the Model G ether reaction model which has been forthcoming from
particle scattering experiment data leads us to conceive of the subatomic particle as an organized
entity, or system, whose form is created through the active interplay of a plurality of particulate
structures existing at a lower hierarchic level. Thus we find that the very structure of matter, its
observationally confirmed Turing wave character, stands as proof of an underlying Whiteheadian
dynamic and interactive stratum, one that ancient cultures variously named the Aether, Akasha,
Tao, or Cosmic Ocean. The physics of subquantum kinetics indeed has very ancient origins
(LaViolette, 2004).
Contemporary quark models fail to anticipate the periodic character of the nucleon's electric
field. No quark model devised after the fact can reasonably account for this feature.
Quarks themselves, or the "gluons" theorized to bind them together, have no script to tell them
they should dance around in the complex manner that would be required in order to generate such
an extended periodic field pattern. Subquantum kinetics, the viable replacement for quark theory,
differs in several respects, one being the manner in which it handles the origin of mass, charge and
spin. Quark theory does not attempt to explain how inertial mass, electric charge or spin arise.
It merely assumes them to be physical attributes present in quarks in fractional form and which
in triplicate summation appear as corresponding properties detectable in the nucleon. By
comparison, etheron reactants of subquantum kinetics have no mass, charge, or spin. These are
properties which are predicted to arise only at the quantum level, and which amazingly emerge as
corollaries of the Model G reactions. Mass and spin, as properties of the subatomic particle,
emerge at the time the particle first comes into being, and charge, as noted earlier, emerges as a
secondary bifurcation of the primary Turing bifurcation.
The cosmology of continuous creation
As mentioned earlier, the energy potential seed fluctuations of subquantum kinetics, which
play a key role in nucleating particles throughout space, bear a strong similarity to the
conventional idea of zero-point energy fluctuations. But there are some exceptions. In conventional
physics, ZPE fluctuations are theorized to have energies comparable to the rest mass energy of a
subatomic particle and to emerge as particle-antiparticle pairs which rapidly annihilate one
another. As a result, it is fashionable to quote unimaginably high values of the order of 10³⁶ to
10¹¹³ ergs/cm³ for the zero-point energy density of space. But because of their polarity pairing,
such fluctuations are unable to nucleate matter. By comparison, subquantum kinetics stands in strong
opposition to the idea frequently circulated that the spatial vacuum is "seething with particles
and antiparticles." It theorizes far lower ZPE densities, on the order of less than 1 erg/cm³, or
less than the radiation energy density at 2000 K. Nevertheless, because its fluctuations are unpaired, they
are potentially able to spawn material particles. But this only occurs when a fluctuation of
sufficiently large magnitude arises, the vast majority being far too small to attain the required
subquantum energy threshold.
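The comparison made here is easy to check: the blackbody radiation energy density is u = aT⁴, which at 2000 K comes out to roughly 0.12 erg/cm³, i.e. under the 1 erg/cm³ figure quoted:

```python
# Blackbody radiation energy density u = a * T^4, in CGS units.
a = 7.5657e-15   # radiation constant, erg cm^-3 K^-4
T = 2000.0       # temperature, K

u = a * T**4
print(f"u(2000 K) = {u:.3g} erg/cm^3")  # ~0.12 erg/cm^3
```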
We may assume that a large fraction of the ZPE fluctuations arising in the transmuting ether
are of sufficient magnitude to qualify as the causal basis for quantum indeterminacy. Bohm
and Vigier (1954) have shown that random fluctuations in the motions of a subquantum fluid are
able to generate a field probability density |ψ|2 that provides an adequate causal interpretation of
quantum theory. Similar reasoning could be applied to subquantum kinetics, except that in
subquantum kinetics fluctuations arise as random concentration pulses (energy potential
fluctuations) rather than as random mechanical impulses. As described above, the ZPE
background arises as a direct result of the ether's regenerative flux and hence is conceived to be an
indication of the ether's open system character. At the same time, these emerging ZPE
fluctuations constitute the ether's incipient ability to create order.
Subquantum kinetics is incompatible with the idea of a big bang since a ZPE fluctuation large
enough to create all the matter and energy in the universe in a single event would be a virtual
impossibility. But one of the theory's advantages is that it does not need to resort to postulating
an ad hoc big bang to account for physical creation, since the Model G reaction-kinetic ether
allows primordial neutrons to continuously emerge throughout all space (LaViolette 1985c,
2010). Upon undergoing beta decay, these neutrons form proton-electron pairs, hence hydrogen
atoms. Since fluctuations have a greater probability of nucleating particles in the vicinity of
existing particles whose gravity well produces a fertile supercritical region, hydrogen will tend to
beget more hydrogen, and sometimes nuclei of higher atomic mass will form. Unlike in the big bang
theory, primordial space is cold, and so the gas in each locale eventually condenses into a
primordial planet. Since creation proceeds more rapidly within celestial masses, each planet
evolves into a Mother Star which produces daughter planets and stars, which with growth evolve
into a primordial star cluster and eventually into a dwarf elliptical galaxy. With the onset of core
explosions in the very old and massive Mother Star, expulsive activity causes spiral arms to form,
changing the galaxy into a spiral and in late stages into a giant elliptical. Observations with the
Hubble telescope support this evolutionary scenario.
A mathematical analysis of the Model G reaction system shows that it has an inherent matter-antimatter
bias in that only positive electric potential fluctuations are able to successfully
nucleate matter, antineutrons being unable to nucleate spontaneously from the vacuum state.
Thus subquantum kinetics explains why we live in a universe consisting primarily of matter as
opposed to antimatter. In conventional cosmology, this asymmetry problem is solved only with
great difficulty.
The cosmological redshift effect, which big bang theorists cite as their primary evidence for
cosmological expansion, is shown to be more properly interpreted as a tired-light, energy-loss
effect. That is, the tired-light interpretation has
been shown to make a better fit to cosmological data than the Doppler shift interpretation
(LaViolette 1986, 1995, 2010). This outcome is a benefit to subquantum kinetics since if the
transmuting ether were to expand with space its reactant concentrations would rapidly decrease
over time and its state of criticality would become drastically altered. Subquantum kinetics must be
conservative in this regard. It must assume that the ether is cosmologically stationary and that
galaxies, excepting their peculiar motions, are at rest relative to their local ether frame.
One success of subquantum kinetics is that there is no need to introduce an ad hoc assumption
for the aforementioned photon energy loss used to account for the cosmological redshift. Rather,
the effect emerges as a prediction of subquantum kinetics (Table 1, prediction 2). That is, Model
G predicts that intergalactic space should be predominantly subcritical and hence that photons
emitted from distant galaxies should progressively lose energy as they travel. But unlike the big
bang theory, which requires that the entropy of the physical universe as a whole should
everywhere be progressively increasing, subquantum kinetics predicts that at least in the vicinity
of galaxies, where space is for the most part supercritical, entropy should be progressively
decreasing. Furthermore, whereas big bang theory places an unreasonably short age limit on the
universe, one that conflicts with astronomical observation, subquantum kinetics allows the
material universe to be immeasurably large and to have an age of many trillions of years, offering
ample time for galaxies to evolve. Subquantum kinetics also allows repeating aeonic cycles
of matter creation and matter dissolution, but with the ether itself being virtually immortal. This
very much resembles the idea of the cosmic cycles related in the ancient Hindu story of Vishnu.
For the same reason that subquantum kinetics allows entropy to progressively decrease in
supercritical islands of creation scattered throughout space, so too photons may either lose or
gain energy depending upon whether they are traveling through subcritical or supercritical
regions. But such phenomena only appear as violations of the First Law of Thermodynamics in
standard physics, which views the universe as a closed system. Even so, the photon redshifting
or blueshifting rates that subquantum kinetics predicts are so small as to be undetectable in the
laboratory, of the order of 10⁻¹⁸ s⁻¹. Nevertheless, the photon blueshifting phenomenon has been
measured locally in maser signals transponded through interplanetary space, a phenomenon
which has become popularly known as the Pioneer effect; see prediction 5 of Table 1 (LaViolette
1985c, 2005, 2010). Although extremely small, this photon blueshifting effect has enormous
consequences for stellar astrophysics; see predictions 3, 4, and 6 of Table 1. The genic energy
excess from photon blueshifting is able to account variously for the energy powering planets,
brown dwarfs, and red dwarf stars, as well as that powering nova, supernova, and galactic core
explosions.
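The quoted rate of order 10⁻¹⁸ per second can be put in context with an order-of-magnitude sketch: a tired-light loss rate of that size is comparable to the Hubble constant expressed in inverse seconds (H₀ = 70 km/s/Mpc is an assumed value, not taken from the text), and over cosmological travel times it accumulates into redshifts of a few percent:

```python
import math

# Assumed Hubble constant, converted to 1/s for comparison.
H0_kms_per_Mpc = 70.0
Mpc_cm = 3.0857e24                        # cm per megaparsec
H0_per_s = H0_kms_per_Mpc * 1e5 / Mpc_cm  # ~2.3e-18 1/s

beta = 1e-18                              # energy-loss rate quoted in the text, 1/s
print(f"H0 = {H0_per_s:.2e} 1/s, same order as beta = {beta:.0e} 1/s")

# Redshift a tired-light photon would accumulate over 1 billion years of
# travel time: z = exp(beta * t) - 1.
t = 1e9 * 3.156e7                         # seconds in 10^9 years
z = math.exp(beta * t) - 1
print(f"tired-light redshift after 1 Gyr of travel: z = {z:.3f}")
```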
Spin formation, nuclear bonding, and proximal entanglement
Experiment shows that when two electrons are brought into close proximity so that their spins
adopt compatible parallel and antiparallel orientations, the particles retain their spin orientation
link even when separated by great distance. Thus if one particle becomes magnetically
forced into a particular spin orientation, its partner will correspondingly be found to have its spin
always oriented in the opposite direction. This phenomenon of particle linkage across great
distances of separation has come to be called quantum entanglement, and is a property that is
believed to characterize the Akashic field (Laszlo, 2004). In the case of entangled photons, the
polarization orientation of one photon has been observed to be linked to that of its partner even
when the pair is separated by 18 kilometers, and the orientation of one has been determined to
have been conveyed to the other at a speed in excess of 100,000 c (Brumfiel, 2008).
Subquantum kinetics provides an understanding of what spin is and why particle spins couple
with one another. It identifies particle spin as being a vortical movement of etherons that takes
place in the core of a subatomic particle. Because a subatomic particle's electric and gravity field
consists of a periodic steady-state concentration pattern extending outward from its core in a
symmetric fashion, etherons will continually diffuse radially inward and outward between the
core and its surrounding inner shell, and between each successive pair of adjacent spherical shells.
Of these etheron fluxes, those having the greatest magnitude will be those flowing to and from the
particle core. For example, in the case of a nucleon (proton or neutron), whose core maintains
a positive charge density (an X production rate deficit and a Y production rate surplus), X
continually flows into the core from the adjacent high-X shell and Y continually flows out of the
core to this same shell, where Y maintains a low concentration. In macroscopic systems, an
inward diffusive flux can develop into a free vortex. In a similar fashion, an inward
diffusive X etheron flux could stimulate an X etheron vortex and this could also stimulate a
rotational wave pattern to propagate circumferentially around the core. Such waves would
appear as rotating modulations of the particle's X-Y concentration pattern, or rotating electric
fields, which would give rise to magnetic effects (LaViolette 1985b, 1994, 2010). Since the
particle's electric potential Turing wave is periodic, its spin magnetization is also expected to be
periodic, which is in agreement with Kelly's findings.
A spin ether vortex in the core of a nucleon would increase the rate of etheron transport
between the core and surrounding shell. This would have an effect similar to locally increasing
the X etheron diffusion coefficient, which in turn broadens the nucleon core (since the Turing
pattern wavelength increases with increasing radial diffusive flux). Simulations that increase the
etheron diffusion rate in the central core bear this out (LaViolette, 2010).
Figure 10 shows how two such spin-broadened nucleon structures might look when separated
by one Compton wavelength, λ₀, the separation distance observed between the proton and
neutron in a deuteron. As shown here, the two particles are so close that their Gaussian cores
overlap one another, which in turn implies that the outer portion of their spin vortices should
substantially intermingle. This close overlapping of the spin vortices automatically leads to a
restriction on the alignment of the spin direction of each particle with respect to the other. That
is, in order for the ether vortices to be mutually compatible, their etheric flows must be going in
the same direction in their zone of intersection.
One possibility is that the two vortices align with one another tail to nose. This is shown in
Figure 11-a, which is a hypothetical representation of how the proton (P) and neutron (N) spin
vortices align in the deuteron when forming the spin triplet state. Due to their axial alignment,
the spins of the two nucleons additively reinforce, accounting for why the deuteron is observed
to have twice the spin magnitude of either a solitary proton or a solitary neutron, i.e., a spin of 1,
rather than 1/2.
Figure 10. Two Model G Turing wave patterns separated by one Turing pattern Compton
wavelength.
Figure 11. Intermingling of the ether diffusive flux vortices of individual nucleons in: a) the
triplet state deuteron (s = 1), b) the singlet state deuteron (s = 0), and c) the singlet state
helium-4 nucleus (s = 0); N: neutron and P: proton.
Alternatively, a proton and neutron can align so that their spin axes are parallel but their spins oppositely directed. In
this case the proton and neutron would position themselves adjacent to one another similar to the
configuration shown in figure 11-b, their spin vortices flowing in the same direction in their zone
of intersection. A deuteron with this configuration is said to be in a singlet state and to have zero
spin, its spin magnetic field directions canceling one another. This singlet state, however, is
found to be transitory, the particles quickly adopting the axially aligned triplet state.
When two deuterons combine to form a helium atom (figure 11-c), the vortices of adjacent
nucleons must orient antiparallel to one another in order for the flux streams in their zone of
intersection to be going in the same direction and be mutually compatible. As a result, the spin of
one deuteron subset in the helium nucleus cancels that of its partner, resulting in a spin of zero.
This singlet zero-spin state is in fact observed for helium-4.
This model can also explain the stable spin state of lithium-6, which consists of three protons
and three neutrons. Two neutrons and two protons adopt an arrangement similar to helium-4 and
the remaining proton-neutron pair orients in the head-to-tail triplet configuration, attaching to the
end of either a proton or neutron, e.g. to form an N-P-N-P sequence. The result is to produce a
spin of s = 1 similar to that of the deuteron, and this is in fact observed. Adding one more
neutron to this axial chain to form lithium-7 would yield a spin of 3/2, and this, too, matches the
observed spin value. Proceeding in this manner, one is able to explain the spin values of all
nuclear isotopes.
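The additive bookkeeping used in these paragraphs can be written as a toy tally: each nucleon contributes s = 1/2 with a sign for its vortex orientation along the chain axis. This is only an illustration of the counting rule, not of the Model G dynamics; the particular sign assignments below are assumptions read off figure 11 and the N-P-N-P description above.

```python
# Toy spin tally: each nucleon contributes s = 1/2, signed by whether its
# vortex reinforces (+1) or opposes (-1) the chain axis. Illustrative only.
from fractions import Fraction

HALF = Fraction(1, 2)

def net_spin(alignments):
    """Net nuclear spin magnitude from a list of +1/-1 vortex orientations."""
    return abs(sum(a * HALF for a in alignments))

deuteron_triplet = net_spin([+1, +1])              # P, N axially aligned -> s = 1
helium4 = net_spin([+1, -1, +1, -1])               # two antiparallel deuterons -> s = 0
lithium6 = net_spin([+1, -1, +1, -1, +1, +1])      # He-4 core + triplet P-N pair -> s = 1
lithium7 = net_spin([+1, -1, +1, -1, +1, +1, +1])  # one more axial neutron -> s = 3/2
```

Each result matches the observed spin value quoted in the text.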
These spin diagrams suggest that the intermingling of the nucleon spin vortices is what creates
the nuclear bonding force. Thus the nuclear bond may be magnetic in nature. When proximal
particles entangle, they develop a spin linkage that produces a nuclear bond between the
nucleons. When the particles separate from one another but remain entangled, it is as if the spin
bond that developed between them has not yet been dissolved; their etheric space structure
vortices still orchestrate their flows in a coherent fashion.
The T-matrix and distant entanglement
In the context of subquantum kinetics, superluminal speeds of entanglement are not an
anomaly. The speed restriction that applies to the propagation of transverse electromagnetic
wave quanta through the ether does not apply at the subquantum level. Bell's theorem tells us
that no theory explaining quantum phenomena in terms of local hidden variables can account for
distant entanglement. But subquantum kinetics does not have the problems of a local hidden
variable theory: it allows phenomena at remote locations to communicate effects to a target
location at superluminal speeds—etherons can diffuse or convect through space at speeds greater
than c. The same is true of the speed whereby spin vortices communicate their orientations to
distant partner spin vortices.
Up to this point we have reviewed how subatomic particles are created, the nature of their
inertial mass and their extended Turing wave patterns, how they generate gravitational mass and
electrical charge, how these in turn form extended gravity and electric fields, how transversely
moving electric fields would produce magnetic forces, how particle spin would arise, and how
spin vortices align to produce nuclear bonds. There is one other aspect of subatomic matter that
we haven't yet discussed that is also predicted by subquantum kinetics, but whose existence has
not yet become recognized by standard physics. That is, not only does an ether vortex form in
the core of a subatomic particle, generating the property of spin, but also the particle's core
should pulsate radially, producing longitudinal movements in the Turing wave pattern that
propagate outward at superluminal speeds.
The idea that subatomic particles might pulsate radially was first suggested in 1895 by Annie
Besant and Charles Leadbeater in a published report of clairvoyant observations they conducted
of the ether in which they viewed a subatomic whorl-like entity that they termed the "ultimate
physical atom," u.p.a. (also later named the "anu"). They described ether whorls forming
currents both emerging from the u.p.a. core as well as entering it, which accords with what
subquantum kinetics predicts should take place in the particle's core. The individual whorl
structures diagrammed in figure 11 to represent protons and neutrons are copied from a diagram
of the u.p.a. published in 1919 in their book Occult Chemistry. Besant’s and Leadbeater’s
description of the u.p.a. preceded by two years Thomson's 1897 discovery of the electron, by
24 years Rutherford's 1919 discovery of the proton, and by 37 years Chadwick's 1932 discovery of
the neutron. Their illustration of axial bonding of ultimate physical atoms, which was somewhat
similar to that illustrated in figure 11-a, presciently anticipated the theory of nuclear bonding
proposed 40 years later by Yukawa. Moreover they noted that the u.p.a. whorl pattern as a
whole precessed about an axis much like the precession of a spinning top. Thus they not only
predicted the property of subatomic particle spin 30 years prior to Goudsmit's and Uhlenbeck's
proposal of electron spin in 1925, but also anticipated nuclear spin precession, a type of Larmor
precession, as much as 50 years before its discovery.*
Besant and Leadbeater also reported observing the u.p.a. pulsating radially, something that
has not been observed by physicists in the more than one hundred years that have elapsed since.
This, however, is not surprising since radial pulsation of the particle's core electric and gravity
fields (X-Y and G potential fields) with wavelength ~λ0 are inherently difficult to detect in the
laboratory.
In 1995 I had proposed that this core pulsation phenomenon follows necessarily for
subquantum kinetics (LaViolette, 1995, 2004). Consider a nucleon in which a vortical X-on flux
enters the nucleon's core, flowing from the adjacent high-X shell into the core where X maintains
a minimum concentration. If a stochastic fluctuation were to cause a slight increase in this inward
flux, this would in turn cause a slight expansion of the nucleon's core diameter since increased
etheron transport always broadens a Turing wave dissipative structure. This would reduce the
* Whereas their spin and precession predictions anticipated properties later observed in subatomic particles, the
application of their discourse to current physics becomes unclear at the atomic level. For example, instead of
proposing that hydrogen is composed of one such u.p.a. and oxygen of sixteen, Besant and Leadbeater instead
envisioned hydrogen as being composed of 18 spinning u.p.a.'s and oxygen of 290. It is not certain whether
they were referring here to monatomic hydrogen and oxygen or to molecular aggregations of these atoms.
X potential gradient between the shell and core, and this in turn would reduce the rate at which
X-ons spiral into the core. The reduced X-on influx would cause the core Turing wave to
contract, and this in turn would steepen the shell-to-core X potential gradient. This again would
increase the inward vortical flow of X, causing the cycle to repeat. The Y and G field potentials
would also pulse radially in step with X. In general, spin vortices in subatomic particles are
expected always to be accompanied by radial pulsations. Future computer simulations
performed on Model G for a three dimensional reaction volume are expected to demonstrate both
the phenomena of spin and radial pulsation. The source of energy driving these pulsations, like
that driving the vortical ether flux, is ultimately traceable to the nonequilibrium reaction-diffusion
processes that continuously animate the ether.
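The feedback cycle just described (flux rises → core expands → gradient flattens → flux falls → core contracts → gradient steepens → repeat) has the structure of a classic oscillator. The two-variable sketch below is a deliberately crude linearization of that loop, not Model G itself; the coefficients k and m and the starting values are invented purely for illustration.

```python
# Toy linearized feedback loop between core radius r and inward X-on flux F.
# Assumed dynamics (illustrative only): surplus flux expands the core, while
# an expanded core flattens the shell-to-core gradient and throttles the flux.
r0, F0 = 1.0, 1.0        # equilibrium radius and flux (arbitrary units)
k, m, dt = 1.0, 1.0, 0.01
r, F = 1.1, 1.0          # start with the core slightly expanded
radii = []
for _ in range(2000):
    F += -m * (r - r0) * dt  # expanded core -> shallower gradient -> less influx
    r += k * (F - F0) * dt   # surplus influx -> expansion; deficit -> contraction
    radii.append(r)
# r now swings above and below r0 repeatedly: a sustained radial pulsation.
```

Updating F before r (semi-implicit integration) keeps the oscillation from artificially growing or decaying over the run.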
The radial pulsation of a particle's core would cyclically displace the particle's entire
electrogravitic Turing wave pattern inward and outward from the core. Based on the results of
the experiment by Sherwin and Rawcliffe (1960) discussed earlier, we may infer that this
displacement would be communicated outward essentially instantaneously or at an exceedingly
high superluminal velocity, causing the particle's Turing wave pattern to oscillate radially
throughout its entire extent in coherent fashion.
Computer simulations of the theory proposed here suggest that the particle’s Turing wave
should maintain its wavelength unchanged throughout its whole extent, although its amplitude
should decrease progressively. We may conclude that the radial oscillation produced by each
particle space structure should persist to very great distances. In light of the results of Sherwin
and Rawcliffe, it should be communicated to remote locations almost instantaneously. The
relativistic speed-of-light restriction that applies to transverse electromagnetic waves would not
apply to the movement of the Turing wave pattern. To an observer stationary with respect to
the pulsation, the moving Turing wave pattern would act in a way similar to longitudinal
Coulomb waves that Tesla was generating from the dome of his magnifying transmitter tower.
But unlike macroscopic Tesla waves, these waves are subquantum, involving the conveyance of
less than a quantum of action in any given direction. This could be a further reason why they are
not easily directly detected in laboratory experiments.
Tesla waves are known to produce resonant energy beams that link the transmitter dome with
nearby objects that develop a sympathetic oscillation. The high voltage waves that radiate from
such a monopole antenna are absorbed and phase conjugated by any ionized region they might
encounter, resulting in a phase conjugate wave traveling back to the antenna dome. Since the
phase conjugate wave is the time reverse of the outgoing longitudinal wave, the two match
perfectly in phase and amplitude to resonantly form a nondispersing soliton beam extending
between the transmitter antenna dome and the remote ionized region. When such a resonant link
develops, the antenna and remote target are said to have become "locked on" to one another.
Although Tesla himself did not use the term phase conjugation to describe the resonance
phenomenon he was observing, he nevertheless was aware of the phenomenon.
Applying the Hermetic maxim "as above so below," we may theorize that similar phase
conjugate resonances develop amongst subatomic particles as a result of the radial oscillation of
their intersecting Turing wave patterns; see figure 12. The ongoing radial oscillation of subatomic
particle cores could seed the formation of interparticle soliton beams. These scalar wave beams
would extend between particle cores, interlinking them into a vast matrix.
Soliton beams formed in the laboratory are observed to resonantly augment their field
amplitudes to levels far in excess of the amplitudes of the original phase conjugated waves. This
self-amplification process has variously been termed field-induced soliton phenomenon
(LaViolette, 2008a), or alternatively, force amplification by stimulated energy resonance, the faser
phenomenon (Obolensky, 1988).
Figure 12. a) Proximal entanglement of two subatomic particles through phase-locking
of their Turing wave field patterns. b) Maintenance of distal field entanglement
through an interparticle soliton beam.
Unlike the particle Turing wave pattern, whose wave amplitude would rapidly
diminish with increasing radial distance from the core, the wave
amplitude within these self-amplified soliton beams would be relatively uniform along their
length, maintaining an amplitude at least as high as exists immediately adjacent to the particle
core. Thus in the case of two particles that had become mutually locked on to one another, or
"entangled," each would experience the other's presence as if it were positioned immediately
adjacent, even if they were separated by tens or hundreds of kilometers.
In the entangled state, particle cores would pulse in phase with one another to orchestrate a
coherent oscillation. The spin alignment of two entangled particles would restrict one another
just as if the particles were immediately adjacent to one another. Consequently, if the spin
vortex of one particle were to be made to change direction through the action of a local magnetic
field, this would change the spin polarization direction communicated through its soliton beam
tunnel, which in turn would change the soliton beam's magnetization in the vicinity of the partner
particle and cause the spin of the partner to adopt a compatible alignment. Thus a particle would
exert an influence on its entangled partner particle even when separated by great distances.
To understand how spin alignment would be communicated through these soliton corridors,
consider the following. The X-Y-G ether vortex that exists in a particle's core would induce flux
vectors in its surrounding Turing wave pattern. That is, the radial etheric fluxes transpiring
radially between successive Turing pattern shells would adopt transverse etheric flux
components aligned in the direction of the particle's spin vortex. In other words, these oscillating
subquantum fluxes would not be purely radial, but would incorporate a transverse "polarization."
This explains why particle scattering experiments show a periodic stationary-wave,
magnetization pattern around every nucleon that matches its periodic stationary-wave charge
density pattern. Normally the intensity of this spin magnetization pattern decreases rapidly
with increasing radial distance. But within a soliton beam, it would become resonantly amplified
to a strength at least as great as that found in the immediate vicinity of the particle core.
One further point that should be made is that these soliton beams could serve as force transfer
conduits, conveying force at superluminal speeds, perhaps as high as 10^15 to 10^17 c.* Events
occurring at one point in this interconnected particle matrix would be rapidly communicated
through these soliton "nerve pathways" to affect the entire entangled particle network. It may
even be possible that intelligible information could be communicated through these entanglement
conduits, or be sensed through these conduits at faster-than-light speeds. This is consistent with
Ervin Laszlo's (2011) hypothesis of a hologram-like network formed of phase-conjugating, scalar,
standing-wave fields that can instantaneously convey and store information exchanged to and
from matter-energy systems. What name should be given to this phenomenal subquantum,
soliton beam network? Perhaps it should be named the T-matrix, where "T" might stand either
for Tesla, or Turing, or both.
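Since the text takes shock speed to be proportional to the field gradient, the 10^15 to 10^17 c range can be reproduced by simple linear extrapolation from the two laboratory data points cited in the footnote (about 5 c near a gradient of 10^6 V/m for Obolensky; 64 c at roughly 2 × 10^5 V/m for Podkletnov and Modanese) up to the estimated nucleon-core gradient of 2.5 × 10^20 V/m. A quick sanity check, with all figures taken from the text:

```python
# Linear extrapolation of longitudinal-wave speed with field gradient,
# per the proportionality claim in the text. Data points from the footnote:
#   Obolensky (2004):            ~5 c at ~1e6 V/m near the electrode
#   Podkletnov & Modanese (2011): 64 c at ~2e5 V/m
obolensky = (1e6, 5.0)       # (gradient in V/m, speed in units of c)
podkletnov = (2e5, 64.0)
nucleon_gradient = 2.5e20    # estimated nucleon-core field gradient, V/m

def extrapolate(gradient, speed, target_gradient):
    """Speed scales linearly with gradient: v = k * E, with k fit to one point."""
    return speed * target_gradient / gradient

v_low = extrapolate(*obolensky, nucleon_gradient)    # ~1.25e15 c
v_high = extrapolate(*podkletnov, nucleon_gradient)  # ~8.0e16 c
```

Both endpoints land inside the 10^15 to 10^17 c range the text quotes.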
Finally, to consider some broader-reaching questions: do particles maintain their entanglements
throughout their existence? Subquantum kinetics predicts that most of the matter in our
galaxy was created in the Galactic core, Sgr A*. So, if those entanglements were to persist, might
we have some sort of superluminal connection to the massive Mother Star that resides there?
Another question is how many entanglements might a given particle be expected to form at any
one time? Furthermore, does the soliton T-matrix itself have inertial mass? Subquantum kinetics
identifies a particle's inertial mass with the electric potentials forming its Turing wave
periodicity. So, would the electrogravitic stationary wave that makes up the subquantum
interparticle soliton beam also have some amount of mass? If so, how much? Also, since force
and information both appear to be conveyed through these beam links, couldn't humans'
unconscious or conscious use of such a network explain phenomena such as telepathy,
telekinesis, materialization, and information retrieval from a shared Akashic record? Could all
minds and consciousnesses be interlinked through such superluminal entanglements? These are
questions that standard physics theories cannot ask, or even begin to reasonably approach,
because they restrict their "universe" to the quantum level. By comparison, subquantum
kinetics, which describes quantum phenomena by postulating activity on the subquantum level,
appears to offer a promising framework for understanding nonlocal connectivity. With future
* To figure the speed at which events might be expected to propagate through these soliton links, we turn to
laboratory experiments on longitudinal wave propagation. In 2004 Guy Obolensky performed an experiment that
measured the propagation speed of high voltage Coulomb shocks (electrogravitic potential waves) emitted from a
dome-shaped electrode. The experiment not only demonstrated that these Tesla wave shocks propagate at
superluminal speeds, but also confirmed a prediction of subquantum kinetics that the speed of the shock should be
proportional to the shock front's potential gradient (LaViolette, 2008a); also see Table 1, prediction 11. The reason
is that the electric potential shock wave rides forward on the ether wind generated at the shock forefront, the wind's
forward velocity being proportional to the electric and gravity field gradient at the shock's leading edge. When the
shock was close to its emitting electrode, where the field gradient approached 10^6 volts per meter, the Coulomb
waves were observed to have a speed of about 5 c. In another experiment, performed by Eugene Podkletnov and G.
Modanese (2011), Coulomb waves emitted from the surface of a superconducting disc at a field potential gradient of
>2 × 10^5 volts per meter were observed to generate a longitudinal gravitic impulse that traveled at a speed of 64 c.
Because Podkletnov's gravity impulses are confined to a 10 cm diameter beam, they maintain a constant field
gradient and hence a constant superluminal velocity as they travel forward. By comparison, the electric field gradient
in the core of the nucleon at a distance of 6 × 10^-13 cm (0.6 fermis) from its center is estimated to be around 2.5 ×
10^20 volts per meter, or 10^14 to 10^15 fold greater than the gradients produced in the above laboratory experiments.
Consequently, extrapolating from these laboratory results, Tesla wave-like field pulsations propagated along an
interparticle soliton beam would be expected to travel along the beam at a speed of 10^15 to 10^17 c. Just as in
Podkletnov's collimated gravity impulse beam, this speed along every linear soliton connection should not
diminish with distance. Theoretically, two entangled particles located on opposite sides of the Milky Way galaxy
should be able to orchestrate their spin positions with a time delay of less than a millisecond.
development it could lead to a better understanding of phenomena that currently leave
conventional physics in a quandary.
References
AUCHMUTY, J. F. G. and NICOLIS, G., 1975, Bifurcation analysis of nonlinear reaction diffusion
equations--1. Evolution equations and the steady state solutions. Bulletin Mathematical Biology.
37 pp. 323-365.
BESANT, A. and LEADBEATER, C. W., 1919, Occult Chemistry: Clairvoyant Observations on the
Chemical Elements. London: Theosophical Publishing House.
BOHM, D. and VIGIER, J. P., 1954, Model of the causal interpretation of quantum theory in terms
of a fluid with irregular fluctuations. Physical Review 96 pp. 208-216.
BRUMFIEL, G., 2008, Physicists spooked by faster-than-light information transfer. Nature: 1038.
Naturenews: https://www.nature.com/news/2008/080813/full/news.2008.1038.html.
CORNILLE, P., 1998, Making a Trouton-Noble experiment succeed. Gal. Electrod. 9 (1998): 33.
DEWDNEY, C. et al., 1985, Foundations of Physics 15, pp. 1031-1042.
FEYNMAN, R. P., LEIGHTON, R. B., and SANDS, M. 1964, The Feynman Lectures on Physics,
Vol. II, Reading MA: Addison-Wesley.
GLANSDORFF, P. and PRIGOGINE., I., 1971, Thermodynamic Theory of Structure, Stability, and
Fluctuation. New York: Wiley.
GRANEAU, N., 1983, First indication of Ampere tension in solid electric conductors. Phys. Lett.
97A: 253-255.
KELLY, J., 2002, Nucleon charge and magnetization densities from Sachs form factors. Physical
Review C 66 (6), id: 065203. Eprint: https://arXiv.org/abs/hep-ph/0204239.
LAFFORGUE, J.-C., 1991, Isolated systems self-propelled by electrostatic forces. French patent No.
2651388.
LASZLO, E., 2004, Science and the Akashic Field. Rochester, VT: Inner Traditions.
LASZLO, E., 2011, The Cosmic Internet. New York: Harper Collins.
LAVIOLETTE, P. A., 1985a, An introduction to subquantum kinetics. I. An overview of the
methodology. Intern. J. General Systems 11, pp. 281 - 293.
LAVIOLETTE, P. A., 1985b, An introduction to subquantum kinetics: II. An open systems
description of particles and fields. Intern. J. General Systems 11, pp. 305-328.
LAVIOLETTE, P. A., 1985c, An introduction to subquantum kinetics: III. The cosmology of
subquantum kinetics. Intern. J. General Systems 11, pp. 329-345.
LAVIOLETTE, P. A., 1986, "Is the universe really expanding?" The Astrophysical Journal 301, pp.
544-553.
LAVIOLETTE, P. A., 1992, "The planetary-stellar mass-luminosity relation: Possible evidence of
energy nonconservation?" Physics Essays 5(4), pp. 536-544.
LAVIOLETTE, P. A., 1994, Subquantum Kinetics: The Alchemy of Creation. Alexandria, VA:
Starlane Publications, first edition (out of print).
LAVIOLETTE, P. A., 1996, "Brown dwarf discovery confirms theory of spontaneous energy
generation." Infinite Energy 1(5 & 6): 31; reprinted in New Energy News 3(9) (1996): 17-18.
LAVIOLETTE, P. A., 2004, Genesis of the Cosmos (Rochester, VT, Bear & Co.).
LAVIOLETTE, P. A., 2005, "The Pioneer maser signal anomaly: Possible confirmation of
spontaneous photon blueshifting." Physics Essays 18(2), pp. 150-163.
LAVIOLETTE, P. A., 2008a, Secrets of Antigravity Propulsion. Rochester, VT: Bear & Co.
LAVIOLETTE, P. A., 2008b, International Journal of General Systems 37(6), pp. 649-676.
LAVIOLETTE, P. A., 2010, Subquantum Kinetics: A Systems Approach to Physics and Astronomy.
Niskayuna, NY: Starlane Publications, third edition.
LEFEVER, R., 1968, Dissipative structures in chemical systems. J. Chemical Physics 49, pp. 4977-
4978.
NICOLIS, G. and PRIGOGINE, I., 1977, Self-organization in Nonequilibrium Systems. New York:
Wiley-Interscience.
OBOLENSKY, A. G., 1988, The mechanics of time. In Proceedings of the 1988 International Tesla
Symposium, edited by S. Elswick, Colorado Springs: International Tesla Society, pp. 4.25-4.40.
PAPPAS, P. T. and VAUGHAN, T., 1990, Forces on a stigma antenna. Physics Essays 3: 211-216.
PHIPPS, Jr., T. E., 2009, The Sherwin-Rawcliffe experiment – Evidence for instant action-at-a-distance.
Apeiron 16: 503-515.
PODKLETNOV, E. and MODANESE, G., 2011, Study of light interaction with gravity impulses and
measurements of the speed of gravity impulses. In: Gravity-Superconductors Interactions: Theory
and Experiment, edited by G. Modanese and R. Robertson. Bussum, Netherlands: Bentham Science
Publishers.
PRIGOGINE, I., NICOLIS, G. and BABLOYANTZ, A., 1972, Thermodynamics of Evolution.
Physics Today 25(11), pp. 23- 28; 25 (12), pp. 38-44.
PULVER, M. and LAVIOLETTE, P. A., 2011, Stationary particle formation in a three-variable
reaction-diffusion system. Paper in preparation.
SAGNAC, G., 1913, The luminiferous ether demonstrated by the effect of the relative motion of the
ether in an interferometer in uniform rotation. Comptes Rendus de l'Academie des Sciences
(Paris) 157: 708-710.
SHERWIN, C. W. and RAWCLIFFE, R. D., 1960, Report I-92 of March 14, 1960 of the Consolidated
Science Laboratory (University of Illinois, Urbana); Dept. of Commerce Scientific and Technical
Information Service, document #625706.
SILVERTOOTH, E. W., 1987, Experimental detection of the ether. Speculations in Science and
Technology 10: 3-7.
SILVERTOOTH, E. W., 1989, Motion through the ether. Electronics and Wireless World: 437-438.
WINFREE, A. T., 1974 , Rotating chemical reactions. Scientific American 230, pp. 82-95.
ZAIKIN, A. and ZHABOTINSKII, A., 1970, Concentration wave propagation in two-dimensional
liquid-phase self-oscillating system. Nature, 225, pp. 535-537.
 

Cosmogenesis: The Alpha and the Omega

01.05.2014 22:47
 
Cosmogenesis: The Alpha and the Omega
Paul A. LaViolette
October 1973 - January 1974
This unpublished paper presents a glimpse into the early formulation of the
subquantum kinetics methodology less than one year after its initial inception and
11 years prior to its first journal publication. It presents philosophical
underpinnings of the theory showing the rationale for an open system view of the
microphysical realm and the need for more than three dimensions of space. It
also discusses how this novel approach allows a departure from the conventional
indeterministic physics paradigm in agreement with the views of de Broglie and
Einstein and offers a natural solution for the field-source dualism that plagues
standard physics. In this earlier version LaViolette did not refer to ether
substrates or etherons, but used the less controversial term "media" to describe the
entities engaged in the postulated subquantum reaction and diffusion processes.
Comments to the text are marked as updates.
Warehouse Myopia
What could be less interesting than seeing the inside of a furniture warehouse? A huge dusty
room hundreds of feet long, furniture neatly piled along its center aisle; the appliances and lamps
to the left, the rocking chairs to the right. Further down to the right you can see the buffets and
shelves stored on their roller carts, ready to be pulled out of their respective places at a moment's
notice. These groupings of furniture, seen as a whole, seem to compose an exquisitely ordered
structure, an inanimate spatial classification of wares, static and immortal.
But to stop our description here would be misleading, for all around there is much activity.
Workers are unloading furniture from incoming company vans and carting the pieces to their
respective places on the warehouse floor. If they have unloaded a sofa, it will be wheeled to the
spot where the sofas are stored. If it's a rug, it will be loaded into the rug bin, and so on. At the
same time other workers are removing pieces of furniture from their respective locations and
carting them to the outgoing delivery trucks.
After many weeks of observation we would notice that these groupings of furniture had not
changed appreciably in size or in relative placement. Yet, the furniture or components which
compose the groupings might be entirely different from week to week, a particular piece of
furniture having an average of seven days' residence time in the warehouse. In such a situation, it
is said that the inflow and outflow of furniture pieces maintains a state of dynamic equilibrium
(or steady-state equilibrium), where the process of building up of furniture groupings (anabolism)
is balanced by the process of their destruction (catabolism). The two processes taken together
are referred to as metabolism. The structures thus formed, the furniture groupings, are said to
metabolize. If either of the metabolic subprocesses ceases, the state of dynamic equilibrium will
be upset and the metabolic structure will disappear. For example, if the delivery trucks went on
strike, catabolism would cease, and the furniture would begin to build up in the warehouse. The
groupings would become choked and eventually completely disordered. On the other hand, if the
company vans went on strike, anabolism would cease and the furniture in the warehouse would
begin to dwindle. The groupings would become atrophied and would eventually disappear
altogether. While observing, we would also notice that the metabolic process is dissipative, the
workers being seen to expend energy to move the furniture. Consequently, we may refer to the
furniture groupings as being "dissipative space structures".
From our observations of a typical furniture warehouse we have learned two things about
"warehouse structure": 1) warehouse structure is formed only in the presence of a component
flow accompanied by a dissipation of energy, and 2) warehouse structure is metabolic; it persists
as long as the import and export flows are in a state of dynamic equilibrium.
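The steady state described here obeys what queueing theory calls Little's law: average stock = arrival rate × mean residence time. A small simulation illustrates the dynamic equilibrium; only the seven-day mean residence comes from the text, while the daily inflow of 10 pieces and the geometric (ship-any-day) residence model are invented for the sketch.

```python
# Toy warehouse in dynamic equilibrium: constant inflow, random outflow.
import random

random.seed(0)
INFLOW_PER_DAY = 10              # assumed arrival rate (pieces/day)
MEAN_RESIDENCE = 7               # mean residence time from the text (days)
LEAVE_PROB = 1 / MEAN_RESIDENCE  # geometric residence: any piece may ship any day

stock = []                       # ages of pieces currently on the floor
daily_counts = []
for day in range(1000):
    # catabolism: each piece is carted to a delivery truck with prob 1/7 per day
    stock = [age + 1 for age in stock if random.random() > LEAVE_PROB]
    # anabolism: today's incoming vans
    stock += [0] * INFLOW_PER_DAY
    if day >= 100:               # skip the initial build-up transient
        daily_counts.append(len(stock))

average_stock = sum(daily_counts) / len(daily_counts)
little = INFLOW_PER_DAY * MEAN_RESIDENCE  # Little's law prediction: 70 pieces
```

The simulated floor count hovers around the Little's law figure of 70 pieces, even though no individual piece stays longer than chance allows: the structure persists while its components turn over.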
Nevertheless, a naive myopic observer may come to a different set of conclusions. Not being
able to distinguish the individual pieces of furniture nor the workers busily carting the wares to
and fro, he might see only the overall groupings of classified furniture. The overall ordered
structure might seem as if composed of a collection of static entities. To him, these imposing
piles of goods would seem to cast an air of tomb-like serenity within the cavernous warehouse.
Seeking to explain their origin in the warehouse he might conclude that some time in the past
these mounds were brought in and left on the floor where they have stood ever since. He might
be led to postulate a sort of "primordial creation".
Now, imagine that one day there is an earthquake and all the piles of furniture are thrown
about the room in a state of disorder. Yet, the workers go about their business as always,
building up new groupings of order with the incoming furniture and carting away the disordered
furniture to the delivery trucks. After a week or two our piles are all back in their original places
with no trace of disorder. The naive myopic observer, however, is led to conclude that there
must be some "force of attraction" which is responsible for this amazing regeneration of
structure. He postulates that after the earthquake, these massive mounds must have gravitated
back to their original positions as might be observed when boulders roll down into a valley.
The truth is, we all suffer from warehouse myopia. It is a characteristic of perception that in
viewing dynamic forms our mind tends to grasp underlying patterns of a more static nature. For
example, when viewing a color wheel at slow speeds, we are able to distinguish the separate
colors as they whirl around. But when the speed increases, the individual colors blur, and we
instead see a circular disk of blended color. Because of the mind's inability to grasp the rapid
motion of the separate parts of the wheel, perception shifts and focuses on a "time-stable
system", the whole disk, a system constituted by a nonvarying pattern or repetitive event
sequence. To the naive observer the wheel indeed appears static.
Open Systems vs. Closed Systems
When we see a tornado in the distance we see a slowly moving dark funnel shaped cloud
whose internal structure appears to be static. Yet, we know its form is dynamic and owes its
existence to a rapid whirling flow of air, which we can see if we have the daring to come close
enough. Like the warehouse, the tornado imports and exports matter, in this case air of differing
densities. A given packet of air perhaps remains within the boundaries of a tornado for less than
20 seconds, yet the tornado may have a lifetime of more than half an hour. If this massive flow
of air were to suddenly cease, the tornado would disappear; its structure persisting only in the
presence of flow.
Metabolic structures such as tornadoes and furniture groupings in a warehouse are commonly
referred to as open systems, meaning that the physical boundaries of the particular structure or
system are open to the flow of components such as matter and energy. That is, the term implies
that there is an importing and exporting of components between the system and its environment.
On the other hand, systems lacking this characteristic of exchange are termed closed systems, i.e.
they are closed with respect to their environment. Processes taking place inside a closed system
must therefore be attributed solely to phenomena occurring within the system's boundaries.
But, as we have seen, it is easy for a myopic observer to mistake an open system for a closed
system, especially when the dynamic elements of the open system remain hidden from view.
That is, he thinks he has taken into account everything related to what he sees when in reality he
is not seeing everything. Seeing the interacting structures of the system as being static entities,
the observer may likely choose the closed system approach due to its simplicity. Our naive
myopic was guilty of this when he postulated his theory of attraction among warehouse furniture
groupings. He was attributing their behavior solely to agents inherent in the groupings. To
postulate open system behavior would necessitate introducing a new dimension to the system,
the environment. But since the environment remains invisible to the myopic, such a course of
action would be seen as being unnecessarily complicating.
Today it is rather commonly agreed that the closed system view has no place in the life
sciences as an explanation of structure and process. In the fields of biology, psychology,
sociology, business, economics, and information science, open system theory is widely accepted
as an approach to understanding the origin of structure and system behavior. However, it is
interesting to note that each of these sciences at one time clung tightly to a closed system view.
In the "classical era" some 60 or so years ago, business organizations were regarded as self-deterministic
closed systems functionally independent of their environment. Classical economics
took a closed system approach with quite undesirable consequences. Before the advent of Floyd
H. Allport's event-structure theory,(1) psychology had numerous closed system theories such as
Mill's "building block" theory and the field theory of the Gestalt school of thought. Even
biologists at one time took a closed system view of life theorizing that a sperm contained a fully
developed human being in miniature which eventually grew in size.
The physical sciences, however, still adhere to the closed system view. For example,
classical thermodynamics expressly declares that its laws only apply to closed systems.
However, recently (since the mid 1940's) a new branch of thermodynamics has emerged called
nonequilibrium thermodynamics, which is concerned with the study of open systems. It has
found application in the fields of hydrodynamics and organic chemistry describing the dynamics
of matter in open systems. For example, the open system approach has proved to be useful in
describing phenomena such as the candle flame, ball lightning, the hurricane, the tornado,
turbulent flow, thermal convection currents, and nonequilibrium coupled chemical reactions.
Nevertheless, one branch of physics in particular has remained steadfastly rooted in the
closed system view, namely microphysics. Why should this be the case? Of all the sciences,
why should microphysics be the last to free itself from the closed system approach?
Observational myopia may lie at the root of the problem. Microphysics deals with phenomena
on a scale that is observationally far removed from the human scale.
To see open systems at work in a warehouse we need only open our eyes. To view the
metabolic behavior of a microorganism, we need only look through a microscope or study its
chemical composition. But, the observation of microphysical structures such as subatomic
particles is limited by their extremely small size. The well known Heisenberg uncertainty
principle, (ΔX)(ΔP) ≥ h, states that the product of the uncertainty in a particle's position ΔX,
multiplied by the uncertainty in a particle's momentum ΔP can never be less than the constant h
(also known as the "quantum of action"). In other words, assuming that the world is
fundamentally probabilistic, it states that the more accurately we come to know a particle's
position the less accurately we know its momentum or state of motion. This is because to
determine a particle's position more accurately we must bombard it with exploratory radiation
having an increasingly shorter wavelength. But photons of shorter wavelength (higher frequency)
have greater energy and thus are capable of transferring more momentum to the particle being
studied. Thus, it is impossible to have precise and simultaneous knowledge of a particle's
position and velocity (momentum). Of course, Planck's constant, h, equals 6.63 × 10⁻²⁷
erg-sec, which is extremely small with respect to more typical units of measure, making the
quantum uncertainties negligible for physical phenomena of ordinary human scale. But, in the
submicroscopic study of physical phenomena, the relative magnitude of the uncertainty is
considerable and places microphysics in the midst of a very dense observational fog. In effect,
the Heisenberg uncertainty principle declares that no matter how hard we try, we will always be
myopic in our observations of microphysical phenomena.
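The scale dependence of this observational fog can be illustrated with a quick calculation. The sketch below uses the (ΔX)(ΔP) ≥ h form quoted above, in SI units rather than CGS; the particular values of ΔX are hypothetical choices for illustration:

```python
h = 6.63e-34  # Planck's constant in J*s (equivalent to the 6.63e-27 erg-sec quoted above)

def min_momentum_uncertainty(dx):
    """Smallest Delta-P (kg*m/s) allowed by (dX)(dP) >= h for a position uncertainty dx (m)."""
    return h / dx

# An electron localized to atomic dimensions (~1e-10 m):
m_e = 9.11e-31                          # electron mass, kg
dp_e = min_momentum_uncertainty(1e-10)
dv_e = dp_e / m_e                       # velocity uncertainty: millions of m/s

# A 1-gram bead located to within a micron:
dp_bead = min_momentum_uncertainty(1e-6)
dv_bead = dp_bead / 1e-3                # velocity uncertainty: immeasurably small
```

The contrast between the two results is the point: for the electron the irreducible velocity uncertainty is of the same order as the speeds being measured, while for the bead it is some twenty-five orders of magnitude below anything observable.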
In light of this, how should we view material reality? Consider for the moment an electron.
Should we regard it as physicists presently do, as a closed system, as an isolated particle whose
apparently static structure has no way of being fully understood? Or, should we regard it as an
open system, a dynamic metabolic structure requiring a sustaining flow of components and that
has a determinable existence? Observations cannot tell us which view is correct because our
observations will always be myopic. So, in postulating a model for the electron, physicists have
chosen the simplest conceptual description, the closed system description. Just as our
warehouse myopic, why should they theorize about component flows which they can never
hope to see? Would this not smack of quackery? The closed system view is conceptually quite
logical so why not use it?
If indeed the electron were an open system, would not proof be required of its inherently
invisible component flows, of these "little workers" running around with their carts continually
building up and breaking down its structure? The biologist need only throw back the shutters on
his windows to see the energy source that drives the hierarchy of life. With what instrument
does the physicist "see" that vast hypothetical animating gradient that continuously maintains
the structure of all matter in the universe? Fortunately, a rational choice can be made to
determine whether the open system approach offers a better description of microphysical reality
than does the closed system approach. The superior approach will be the one which is best able
to unify all known experimental data into a simple, coherent, and understandable theory.* It is
hoped that the open system approach to be presented in this paper will eventually fulfill this
objective.
The Chemical Reaction Model of Cosmogenesis
The open system approach to microphysics which is to be developed shortly, is more easily
understood by reference to a conceptual model. Our present objective, therefore, is to become
familiar with such a model and to do this we turn to the field of chemistry. It might be noticed
that a certain class of chemical reactions exhibit structural and kinetic ordering phenomena much
like those observed at the micro-physical level. Such chemical systems are of the nonlinear
coupled variety and will be examined in the chemically open mode, far from thermodynamic
_____________________
* [update] Or, in the words of Einstein: "We are seeking the simplest possible scheme of thought that
can tie together the observed facts."
equilibrium where, it will be seen, they exhibit temporal and spatial ordering of their reactants.
To become familiar with open systems such as this, we will first examine one of the simpler
reaction systems whose behavior is portrayed by the Lotka-Volterra model.
The Lotka-Volterra model was originally introduced in the field of population biology to
describe the predator prey interaction among species. However, it has found other applications
such as in modeling macroscopic stock market behavior,(2) and in representing certain biochemical
reaction systems characteristic of neural networks. A nonlinear, open chemical reaction system
of the Lotka-Volterra variety is represented below.

(1)   A + X ⇌ 2X    (forward rate constant k1, reverse k-1; k1 > k-1)
      X + Y ⇌ 2Y    (forward k2, reverse k-2; k2 > k-2)
      Y ⇌ Ω         (forward k3, reverse k-3; k3 > k-3)
Here we have assumed that the reactions are reversible. However, when this model is used in
population biology, only the forward reactions are written.
The X here represents the concentration, or quantity, of the "prey" chemical or species, and
the Y represents the concentration of the "predator". Flow enters the system in the form of A
which is the food supply or energy supply of the prey, and flow leaves the system in the form
of Ω representing the dissolution or death of the predator. The global reaction appears as the
following transmutation: A → Ω. The nonlinear nature of this system arises from the
autocatalytic action of the first and second equations, the first positive feedback equation
exhibiting autocatalysis with respect to X and the second with respect to Y. Given an open
system, i.e., a supply of species A continually entering the system, the first equation taken by
itself, would produce an exponentially increasing concentration of X, or in other words, a
nonlinear increase in X. But due to the coupling of the first equation with the second equation, X
is continually removed, and so it never builds up indefinitely. Similarly, Y is removed in the third
equation.
At equilibrium, the species concentrations are determined by their kinetic constants ki and by
the concentration of A in the following manner:(3)
(2)   (A/Ω)eq = (k-1 k-2 k-3)/(k1 k2 k3)

      Xeq = (k1/k-1) A        Yeq = (k1 k2/k-1 k-2) A
If the ratio A/Ω is only slightly different from its equilibrium value shown in (2), reactions (1)
proceed steadily to the right in a linear manner, the reactants tending towards their equilibrium
values. In this near equilibrium regime the system behaves according to the laws of classical
thermodynamics which predicts that any arbitrary fluctuation in the concentration of any
chemical species tends to become damped by other spontaneous fluctuations and the resulting
species concentration tends to regress in an aperiodic manner to its steady state value.
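Relations (2) can be checked numerically: at equilibrium, detailed balance requires that the forward and reverse rates of each step in scheme (1) cancel separately. A quick sketch with arbitrary illustrative constants (any values with each ki > k-i will do):

```python
# Arbitrary illustrative kinetic constants satisfying k_i > k_-i:
k1, km1 = 2.0, 0.5
k2, km2 = 3.0, 1.0
k3, km3 = 1.0, 0.25
A = 4.0

# Equilibrium concentrations from relations (2):
Xeq = (k1 / km1) * A                              # = 16.0
Yeq = (k1 * k2 / (km1 * km2)) * A                 # = 48.0
Omega = A * (k1 * k2 * k3) / (km1 * km2 * km3)    # from (A/Omega)eq in (2)

# Detailed balance: each reversible step of scheme (1) balances on its own.
assert abs(k1 * A * Xeq - km1 * Xeq**2) < 1e-9    # A + X <-> 2X
assert abs(k2 * Xeq * Yeq - km2 * Yeq**2) < 1e-9  # X + Y <-> 2Y
assert abs(k3 * Yeq - km3 * Omega) < 1e-9         # Y <-> Omega
```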
However, suppose the ratio A/Ω deviates significantly from its equilibrium value, either by
making the reverse reactions negligible (i.e., k-1, k-2, k-3 → 0), or by causing the flow into the
system of the energy releasing chemicals A to increase, i.e. A/Ω → ∞. Then, the gradient or
affinity of the reaction system to go toward Ω tends toward infinity, and the reactions become
irreversible.
As the affinity of the reaction is increased, a certain critical threshold will be reached. Below
this threshold, the reaction will operate in the near equilibrium regime, its reaction kinetics
proceeding randomly at the molecular level. The system can be described macroscopically by
classical thermodynamics. Its chemical concentrations will show no time dependence. Hence the
system is said to maintain a steady state.
But, beyond this threshold this steady state becomes unstable. A nonlinear regime is entered
in which the system behaves according to a new set of principles which predict the creation of
temporal ordering in the concentrations of its chemical species.* The behavior of the system,
here, is best analyzed with the use of nonequilibrium thermodynamics. The nonlinear behavior of
the first and second autocatalytic reactions (see (1)) tends to override the disrupting effect of
spontaneous fluctuations. An arbitrary fluctuation becomes sustained rather than damped and
becomes manifest as a periodic oscillation in the concentrations of X and Y. The concentrations
of X and Y, now being time dependent variables, may be described by the following kinetic
equations, A and Ω being maintained time independent:(4)
(3)   dX/dt = k1AX - k2XY

      dY/dt = k2XY - k3Y
The solutions to these equations for various (X, Y) values corresponding to various
magnitudes of fluctuations from the steady state are shown in figure 1.(5) Each orbit plotted in
the (X, Y) phase plane denotes the periodic behavior of the system over a complete cycle. As is
seen, an infinite number of orbits around the steady state S is possible, each corresponding to a
different set of initial (X, Y) values.

Figure 1. Oscillations of variable species X and Y about the steady state in a
Lotka-Volterra (predator-prey) system (after Glansdorff and Prigogine, 1971).

_____________
* This event may be viewed as a set-superset transition, a simple example of how hierarchy may
arise naturally in nonliving open systems.
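The family of orbits in figure 1 is easy to reproduce numerically. Equations (3) conserve the quantity V = k2(X + Y) - k3 ln X - k1A ln Y, and each orbit is labeled by its own value of V. The sketch below (illustrative unit constants, a simple fixed-step Runge-Kutta integrator) verifies this for two different fluctuations from the steady state:

```python
import math

k1, k2, k3, A = 1.0, 1.0, 1.0, 1.0   # illustrative constants; steady state at X = k3/k2, Y = k1*A/k2

def rates(X, Y):
    """Kinetic equations (3)."""
    return k1*A*X - k2*X*Y, k2*X*Y - k3*Y

def rk4_step(X, Y, dt):
    ax, ay = rates(X, Y)
    bx, by = rates(X + 0.5*dt*ax, Y + 0.5*dt*ay)
    cx, cy = rates(X + 0.5*dt*bx, Y + 0.5*dt*by)
    dx, dy = rates(X + dt*cx, Y + dt*cy)
    return X + dt*(ax + 2*bx + 2*cx + dx)/6, Y + dt*(ay + 2*by + 2*cy + dy)/6

def invariant(X, Y):
    """Conserved along every orbit; its value identifies the orbit."""
    return k2*(X + Y) - k3*math.log(X) - k1*A*math.log(Y)

def drift(X0, Y0, dt=0.001, steps=20000):
    """Change in the invariant over several full cycles starting from (X0, Y0)."""
    X, Y, V0 = X0, Y0, invariant(X0, Y0)
    for _ in range(steps):
        X, Y = rk4_step(X, Y, dt)
    return abs(invariant(X, Y) - V0)

# Each fluctuation from the steady state defines its own closed orbit:
assert drift(1.5, 1.0) < 1e-4
assert drift(3.0, 1.0) < 1e-4
# ...and the two orbits are distinct (different invariant values):
assert abs(invariant(1.5, 1.0) - invariant(3.0, 1.0)) > 0.5
```

This marginal stability is exactly what the text describes: the integrator keeps each trajectory on its starting orbit, but any perturbation that changes V simply moves the system to a neighboring orbit with a different amplitude and frequency.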
Each orbit appears as a state of marginal stability, where even a minor fluctuation is sufficient
to change the oscillation of the system to a new orbit and consequently to a new frequency.
These sustained oscillations provide an example of dissipative temporal ordering. The use of the
word "dissipative" implies that chemical energy is being expended or dissipated and as a result of
this time ordered patterns emerge in the chemical concentrations.
There are other nonlinear, open reaction systems which not only exhibit temporal ordering
but also spatial ordering of their chemical species. The thermodynamics of such systems were
pioneered by Ilya Prigogine and his coworkers.(6) Here we will review some of their work. One
such reaction scheme, known as the Brusselator system, is shown below. It is not realistic from
a chemical standpoint because tri-molecular reactions such as (4-b) are very uncommon.*
Nevertheless it is studied due to its simplicity.
(4)   A → X            (a)
      2X + Y → 3X      (b)
      B + X → Y + D    (c)
      X → Ω            (d)
The initial products, A and B, and final products, D and Ω, are maintained space and time
independent throughout the system while X and Y are free to vary as dependent variables. The
inverse reaction rates are neglected and the forward kinetic constants are set equal to unity
placing the system at an infinite distance from thermodynamic equilibrium. Under these
conditions two overall irreversible reactions (A → Ω and B → D) will take place in a steady state
manner homogeneously throughout the chemical medium provided that the concentration of B is
below a critical threshold Bc where Bc = A2 + 1.(7) Thus, matter which was originally structured
as chemicals A and B ends up composing chemicals D and Ω and in the process of this
transformation composes the intermediate chemicals X and Y. In this steady state, X and Y will
have values X0 = A and Y0 = B/A.(8)
As the concentration of B is increased past threshold Bc, the concentration of Y becomes
significantly elevated and affects the dynamics of equation (4-b) which is autocatalytic with
respect to X. The steady state now becomes unstable and the system enters a nonlinear regime
where temporal fluctuations in mixture composition become amplified rather than damped. A
new stable state is reached characterized by the appearance of sustained periodic oscillations in
the concentrations of X and Y. This periodic process, called a "limit cycle", is seen in figure 2 for
a hypothetical example where A = 1 unit and B = 3 units.(9) Values of X and Y for different points
in time are here plotted against each other as was done in figure 1. However, it is seen that unlike
the Lotka-Volterra model, the limit cycle here is uniquely defined as a single, irreversible orbit.
Its frequency and amplitude are uniquely determined by the kinetic constants as well as by the
concentrations of the initial and final products in the reaction. Also, as is seen in figure 2, the
system's approach to the limit cycle is equifinal and independent of the initial values of X and Y.
________________
* Nevertheless, Lefever, et al. (1988) have shown that trimolecular reaction (4-b) can be expanded
into two coupled bi-molecular reactions. [Lefever, R., Nicolis, G., and Borckmans, P. "The
Brusselator: It does oscillate all the same." J. Chem. Soc. Faraday Trans. 1 84 (1988): 1013-1023.]
Figure 2. Computer simulation of the Brusselator system showing oscillations
of variable species X and Y in an equifinal approach to a limit cycle oscillation
(after Glansdorff and Prigogine, 1971).
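The limit-cycle behavior shown in figure 2, and its equifinality, can be confirmed with a short simulation. With unit kinetic constants, scheme (4) yields the rate equations dX/dt = A + X²Y - (B + 1)X and dY/dt = BX - X²Y; the sketch below uses A = 1, B = 3 (so B > Bc = A² + 1 = 2) and an illustrative fixed-step integrator:

```python
A, B = 1.0, 3.0   # B exceeds the critical threshold Bc = A**2 + 1 = 2

def rates(X, Y):
    """Brusselator rate equations from scheme (4) with unit kinetic constants."""
    return A + X*X*Y - (B + 1.0)*X, B*X - X*X*Y

def rk4_step(X, Y, dt):
    ax, ay = rates(X, Y)
    bx, by = rates(X + 0.5*dt*ax, Y + 0.5*dt*ay)
    cx, cy = rates(X + 0.5*dt*bx, Y + 0.5*dt*by)
    dx, dy = rates(X + dt*cx, Y + dt*cy)
    return X + dt*(ax + 2*bx + 2*cx + dx)/6, Y + dt*(ay + 2*by + 2*cy + dy)/6

def attractor_extremes(X0, Y0, dt=0.001, t_total=60.0, t_skip=40.0):
    """Integrate from (X0, Y0), discard the transient, return (min, max) of X."""
    X, Y, xs = X0, Y0, []
    for i in range(int(t_total / dt)):
        X, Y = rk4_step(X, Y, dt)
        if i * dt > t_skip:
            xs.append(X)
    return min(xs), max(xs)

lo1, hi1 = attractor_extremes(1.0, 1.0)
lo2, hi2 = attractor_extremes(0.2, 4.0)

# Sustained oscillation around the now-unstable steady state X0 = A = 1:
assert hi1 > 1.5 and lo1 < 0.7
# Equifinality: both starting points settle onto the same limit cycle:
assert abs(hi1 - hi2) < 0.1 and abs(lo1 - lo2) < 0.1
```

Unlike the Lotka-Volterra orbits, the oscillation amplitude here does not depend on the starting point: both trajectories forget their initial conditions and converge on the single orbit fixed by A, B, and the kinetic constants.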
So far we have discussed only temporal ordering in which the homogeneous chemical
transmutation of matter, which is a time-dependent process, becomes unstable and achieves a
new stable state where this transmutation is inhomogeneous with respect to time. Now, let us
consider also the phenomenon of spatial ordering. In this case, a new dimension of activity must
be added to the chemical kinetic process, this being a molecular transport; i.e., chemical diffusion.
Within this more general framework of both time and space ordering, pure temporal ordering
becomes a special case in which the diffusion coefficients of all reacting species are assumed to be
infinite or where the reactants are assumed to be homogeneously mixed.
However, if the diffusion coefficients of the oscillating species, Dx and Dy , are comparatively
low and if the reaction medium is left mechanically undisturbed, it should also be possible to
observe spatial ordering of species X and Y, that is, provided that the concentration of B is
greater than the critical threshold Bc' where Bc' = [1 + A(Dx/Dy)½]².(10) Figure 3 shows the onset
with the passage of time of such spatial variations in the concentrations of X and Y where the
subscripts denote two adjacent boxes in space.(11)
A small spatial fluctuation in the homogeneous state of Y, ΔY = Y2 -Y1, initiated at time 0
becomes amplified by the autocatalytic reaction (4-b), wherein this spatial fluctuation in Y
induces an amplified spatial fluctuation in X, which in turn feeds back through equation (4-c) to
further augment the spatial fluctuation in Y. Providing that B is sufficiently large, this autocatalytic
amplification process will override the dampening effects introduced by the random diffusion
of the chemicals. Thus, a single fluctuation may become amplified, driving the system to a new final
state of order characterized by spatially alternating concentrations of X and Y; see figure 4.(12)
Just as in the case of the limit cycle, this new state is reached equifinally regardless of the
initial concentrations of X and Y. The wavelength and amplitude of the pattern are uniquely
determined by the concentrations of A and B, the kinetic constants, and the diffusion coefficients
Dx and Dy. Diffusion coefficients of the initial and final reactants, Da, Db, Dd, and Dω, are
assumed to be infinite, i.e., these species are assumed to be homogeneously distributed.
Figure 3. Computer simulation of the Brusselator showing the onset with the
passage of time of an inhomogeneous steady state distribution in X and Y in a
simplified two box reaction volume (after Glansdorff and Prigogine, 1971).
Figure 4. Computer simulation of the Brusselator showing the final periodic
steady-state distribution attained by the concentrations of the reaction variables
X and Y (after Glansdorff and Prigogine, 1971).
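The two-box computation behind figures 3 and 4 can be sketched as follows: two identical well-stirred Brusselator boxes exchange X and Y by simple Fickian transfer, and a small initial asymmetry in Y grows into a static inhomogeneous state. The parameter values below are hypothetical illustrations, chosen so that the homogeneous state is stable against uniform fluctuations but unstable against the box-to-box mode:

```python
A, B = 2.0, 4.0       # B below the homogeneous oscillation threshold 1 + A**2 = 5
Dx, Dy = 1.0, 10.0    # slow exchange of X, fast exchange of Y

def rates(x1, y1, x2, y2):
    """Two Brusselator boxes coupled by exchange terms D*(neighbor - self)."""
    f = lambda x, y: A + x*x*y - (B + 1.0)*x
    g = lambda x, y: B*x - x*x*y
    return (f(x1, y1) + Dx*(x2 - x1), g(y1, y1) if False else g(x1, y1) + Dy*(y2 - y1),
            f(x2, y2) + Dx*(x1 - x2), g(x2, y2) + Dy*(y1 - y2))

def rk4_step(s, dt):
    a = rates(*s)
    b = rates(*[v + 0.5*dt*dv for v, dv in zip(s, a)])
    c = rates(*[v + 0.5*dt*dv for v, dv in zip(s, b)])
    d = rates(*[v + dt*dv for v, dv in zip(s, c)])
    return [v + dt*(p + 2*q + 2*r + w)/6 for v, p, q, r, w in zip(s, a, b, c, d)]

# Start at the homogeneous steady state (X0 = A, Y0 = B/A) with a tiny asymmetry in Y:
s = [A, B/A + 0.01, A, B/A - 0.01]
dt = 0.005
for _ in range(int(200.0 / dt)):
    s = rk4_step(s, dt)

x1, y1, x2, y2 = s
# The fluctuation has been amplified into a sustained box-to-box inhomogeneity...
assert abs(x1 - x2) > 0.02
# ...and the final state is static (all rates essentially zero):
assert all(abs(r) < 1e-3 for r in rates(*s))
```

The essential ingredient is the disparity Dx < Dy: the fast-diffusing inhibitor Y equalizes between the boxes faster than the autocatalytic X, so a local excess of X can entrench itself in one box, which is the two-box caricature of the spatial patterns in figure 4.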
An important point which should be emphasized is that small fluctuations, for example of
thermal origin, can no longer reverse the system configuration back to the homogeneous state.
Destruction of this "super-set" pattern will occur only if the perturbations of the steady state
concentrations are of the same order of magnitude as the difference in concentration between two
adjacent locations.
An interesting case emerges when Da is chosen to be finite but large with respect to Dx and
Dy, and when A is nonuniformly distributed.(13) Under these circumstances, the space ordered
structure described above may be localized inside the reaction volume, its boundaries being
uniquely determined by certain concentration and diffusion parameters. Its organization would
now be maintained by a flux in two dimensions; one flux, A → X, occurring as before in the
"reactant dimension", and now, an additional flux of A in the spatial dimension crossing the
spatial boundary of the structure.
Figure 5(14) shows a "nonequilibrium phase diagram" depicting the various states of time and
space ordering as they depend on the concentration of B and on the diffusion coefficient Dy,
parameters A and Dx being held constant for simplicity. In domain I, the homogeneous steady
state is stable with respect to fluctuations in mixture composition. In domain II, fluctuations
increase monotonically driving the system to a new inhomogeneous steady state corresponding to
a regular static spatial distribution of X and Y, a "dissipative space structure". Domain III marks
the appearance of a dissipative structure ordered in both space and time. In this regime the
medium remains spatially inhomogeneous while the concentrations of X and Y at each point
undergo periodic oscillations creating the appearance of a propagating spatial pattern. If Dx and
Dy are taken very large, then in domain III all space dependencies disappear and the reaction
volume oscillates everywhere with the same phase. However, in domain II the system would
remain in an aperiodic steady state.
Taking a moment to step back, it is interesting to note the similarity between the chemical
system just described and our crude warehouse model. The various transmuting chemical species
represent the component flow, i.e. the furniture being moved around. Both systems are
dissipative, in one case, heat is being released from the reacting chemicals and in the other case the
workers are consuming their reserve of energy, getting tired, and giving off heat. Just as does the
warehouse, the chemical system has an input flow, A → X, and an output flow, X → Ω.
Finally, in both systems there emerges a hierarchic ordering or patterning of components. On the
one hand, this is manifest as inhomogeneities in chemical composition, and on the other hand, this
is evidenced by furniture groupings.

Figure 5. Nonequilibrium phase diagram for the Brusselator system
(after Glansdorff and Prigogine, 1971).
The Belousov-Zhabotinskii Reaction
The Belousov-Zhabotinskii reaction is a fascinating example of an open chemical reaction
system that is capable of exhibiting both temporal and spatial patterning. Field, Körös and
Noyes have succeeded in determining the mechanism by which this reaction produces temporal
oscillations in a stirred homogeneous system. Their reaction scheme is shown below.(15)
HOBr + Br- + H+ ⇌ Br2 + H2O
HBrO2 + Br- + H+ → 2HOBr
BrO3- + Br- + 2H+ → HBrO2 + HOBr
2HBrO2 → BrO3- + HOBr + H+
BrO3- + HBrO2 + H+ ⇌ 2BrO2 + H2O
BrO2 + Ce3+ + H+ ⇌ HBrO2 + Ce4+
BrO2 + Ce4+ + H2O → BrO3- + Ce3+ + 2H+
Br2 + CH2(COOH)2 → BrCH(COOH)2 + Br- + H+
6Ce4+ + CH2(COOH)2 + 2H2O → 6Ce3+ + HCOOH + 2CO2 + 6H+
4Ce4+ + BrCH(COOH)2 + 2H2O → Br- + 4Ce3+ + HCOOH + 2CO2 + 5H+
They explain its general functioning in the following way:(16)
"In a stirred sulfuric acid solution containing initially potassium bromate, cerium sulfate, and
malonic acid, the concentrations of bromide ion and of cerium (IV) undergo repeated
oscillations of major proportions. The concentrations of these species have been followed
potentiometrically, and the detailed mechanism of the reaction has been elucidated. When the
solution contains sufficient bromide ion, BrO3- is reduced to Br2 by successive oxygen
transfers (two-equivalent redox processes), and the malonic acid is brominated by an
enolization mechanism. When the concentration of bromide ion becomes too small to remove
HBrO2 sufficiently rapidly, the latter reacts with BrO3- to form BrO2 radicals which oxidize
cerium (III) by one-equivalent processes. As a result, HBrO2 is produced autocatalytically in
the net reaction BrO3- + HBrO2 + 2Ce3+ + 3H+ → 2HBrO2 + 2Ce4+ + H2O. Indefinite
buildup of HBrO2 concentration is prevented by the second order disproportionation of this
species. The cerium(IV) oxidizes bromomalonic acid with liberation of bromide ion which
ultimately terminates the autocatalytic production of HBrO2 and initiates a repeat of the
cycle."
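For numerical work this mechanism is usually condensed into the "Oregonator" model of Field and Noyes. In Tyson's scaled two-variable form, where x tracks HBrO2 and z tracks Ce4+, the relaxation oscillations behind the periodic color changes are easy to reproduce (a sketch with standard illustrative parameter values):

```python
eps, q, f = 0.04, 8.0e-4, 1.0   # standard illustrative values for the scaled Oregonator

def rates(x, z):
    """Two-variable Oregonator: eps*dx/dt = x(1-x) - f*z*(x-q)/(x+q); dz/dt = x - z."""
    return (x*(1.0 - x) - f*z*(x - q)/(x + q)) / eps, x - z

def rk4_step(x, z, dt):
    ax, az = rates(x, z)
    bx, bz = rates(x + 0.5*dt*ax, z + 0.5*dt*az)
    cx, cz = rates(x + 0.5*dt*bx, z + 0.5*dt*bz)
    dx, dz = rates(x + dt*cx, z + dt*cz)
    return x + dt*(ax + 2*bx + 2*cx + dx)/6, z + dt*(az + 2*bz + 2*cz + dz)/6

x, z, dt = 0.1, 0.1, 1e-4       # small step: the fast HBrO2 dynamics make the system stiff
xs = []
for i in range(int(20.0 / dt)):
    x, z = rk4_step(x, z, dt)
    if i * dt > 8.0:             # discard the initial transient
        xs.append(x)

# Relaxation oscillation: HBrO2 repeatedly bursts autocatalytically and then
# crashes back to near-zero levels once bromide control is restored.
assert max(xs) > 0.2
assert min(xs) < 0.01
```

The two phases of the cycle correspond directly to the red and blue states of the indicator: the slow stretches of low x are the bromide-controlled (Ce3+, red) phase, and the brief x spikes mark the autocatalytic oxidation to Ce4+ (blue).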
The species Ce3+ and Ce4+ are seen to oscillate with respect to each other in the manner of a
limit cycle, much like the oscillations in X and Y predicted by the previous model. By adding
Ferroine redox indicator to the solution the valence of Ce may be visibly followed by a color
change, the indicator appearing either red or blue depending on the ionic state. Glansdorff
and Prigogine describe the following experiment performed in a one dimensional medium, a
vertical test tube, in which they observe the appearance of a dissipative space structure; see
figure 6.(17)
"Equal volumes of Ce2(SO4)3, (4 × 10-3 M/l); KBrO3, (3.5 × 10-1 M/l); CH2(COOH)2, (1.2
M/l); H2SO4, (1.5 M/l); as well as a few drops of Ferroine (redox indicator) were stirred with
a magnetic agitator for 30 minutes at room temperature.
Two milliliters of this homogeneous mixture were then put into a test tube kept at the
constant temperature of 21 C by a thermostat and stirring discontinued.
Temporal oscillations immediately appeared; the solution in the test tube changed color
periodically from red, indicating an excess of Ce3+, to blue, indicating an excess of Ce4+, the
period depending on the initial concentrations and temperature. For the above conditions, the
period was about four minutes. The oscillations did not occur simultaneously throughout the
solution but started at one point and propagated in all directions at various speeds. After a
variable number of oscillations, a small concentration inhomogeneity then appeared, from
which alternate red and blue layers were formed one by one. ... During the formation of these
layers, time oscillations continued to be observed in the part of the solution where the
structure had not been established."
Arthur Winfree performed the reaction in a petri dish containing a solution layer of up to
2 mm deep, hence forming a two dimensional medium; see figure 7.(18) He made the following
observations:(19)
"Pseudo waves (phase gradients in bulk oscillation) sweep across the reagent at variable
speed. In addition, blue waves propagate in concentric rings at fixed velocity from isolated
points (pacemaker centers) with a period shorter than the period of the bulk oscillation.
Unlike pseudo waves, these waves are blocked by impermeable barriers. They are not
reflected. They are annihilated in head-on collisions with one another. The outermost wave
surrounding a pacemaker is eliminated each time the outside fluid undergoes its spontaneous
red-blue-red transition during the bulk oscillation. Because of uniform propagation velocity
and mutual annihilation of colliding waves, faster pacemakers control domains which expand
at the expense of slower ones: each slow pacemaker is eventually dominated by the regular
arrival of waves at intervals shorter than its spontaneous period."

Figure 6. Dissipative space structure seen in the
Belousov-Zhabotinskii reaction. (Courtesy of Glansdorff and Prigogine, 1971)

Figure 7. Chemical wave fronts propagating in a dish containing the Belousov-
Zhabotinskii reaction. Courtesy of A. Winfree.
The pseudo waves or bulk oscillations spoken of here refer to spatially homogeneous limit
cycle oscillations of the medium. On the other hand, the propagating concentric rings originating
from pacemaker centers are an example of chemical ordering which is both space and time
dependent, similar to the condition predicted to exist in region III of the phase diagram in figure 5.
Winfree observed that pacemaker centers seem to arise at discontinuities such as at nuclei on
the air-liquid interfaces. He also observed pacemaker periods ranging from 15 seconds to several
minutes and found that at 25 C their waves propagate about 6 mm per minute in a 1 mm deep
medium. In another paper by DeSimone, Bell, and Scriven, wave diffraction phenomena were
reported where wave fronts propagating in a two dimensional medium were observed to diffract
around a barrier placed obliquely to their frontal boundary.
Although the mechanics of temporal ordering in the Belousov-Zhabotinskii reaction are fairly
well understood, the processes involved in the production of spatial ordering have thus far been a
subject of controversy. At present two explanations have been offered for these spatial patterns.
One view is similar to that discussed earlier with reference to the model represented by equations
(5). This view is of the opinion that diffusion plays an important role. However, some have
noted that diffusion alone cannot explain the rapidity with which these rings propagate. It has
been suggested that perhaps a "reaction enhanced diffusion" may be involved where the diffusion
of HBrO2 and the autocatalytic reaction, BrO3- + HBrO2 + 2Ce3+ + 3H+ → 2HBrO2 + 2Ce4+ +
H2O, work together to speed up the propagation of the wave front. Field and Noyes(20) have
developed a detailed explanation along these lines. They have concluded that each band as it
propagates through the medium leaves in its wake a region unfavorable for the propagation of
another band. From this they are able to explain why a trailing band will never overtake a leading
one, why a band will not be reflected by a physical obstruction, and why two colliding bands will
annihilate each other.
Another view, held by N. Kopell and L. N. Howard, considers diffusion to be a relatively
unimportant factor, at least in explaining the band patterns observed in the one dimensional,
vertical tube experiments. They feel that the oscillations in the reaction medium are spatially
uncoupled, in other words, that a valence transition in one unit volume of reaction medium is not
responsible for initiating a similar transition in an adjacent volume. They believe that the bands
appear when one of the chemical species, such as H2SO4 becomes inhomogeneously distributed.
Since the frequency of oscillation of the reaction is dependent upon the concentration of this
species, a frequency gradient would be established corresponding to this species concentration
gradient. The spatial patterns could thus be due to spatial phase variations in this temporal
ordering phenomenon, giving only the appearance of spatial ordering. So, by this second view,
only temporal ordering occurs, not true spatial ordering as suggested by the theory of Prigogine. Glansdorff and
Prigogine themselves admit that the static band pattern they observed "always appeared after an
oscillatory state"(22) and that they have not yet observed a range of concentrations, as predicted
by their theory, where a spatial structure has become established without oscillation. Dieter
Thoenes(23) has extended the phased oscillator view to explain the ring patterns observed in the
horizontal, two dimensional experiments. But here, there is a conflict with the diffusion
explanations offered by Field and Noyes, and others. Perhaps further experimentation will be
necessary to resolve these differing views.
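The phased-oscillator interpretation is easy to demonstrate numerically: a row of completely uncoupled oscillators whose frequency varies smoothly with position produces what looks like propagating, multiplying bands, even though no spatial interaction whatever is present. A sketch, with a hypothetical linear frequency gradient standing in for the concentration gradient of the rate-setting species:

```python
import math

n = 200
# Frequency varies linearly along the "tube", mimicking a concentration gradient
# of a species (e.g. H2SO4) that sets the local oscillation period:
omega = [1.0 + 0.5 * i / (n - 1) for i in range(n)]

def color_pattern(t):
    """+1/-1 ('red'/'blue') phase of each uncoupled oscillator at time t."""
    return [1 if math.sin(w * t) >= 0 else -1 for w in omega]

def band_boundaries(p):
    """Count sign changes along the line -- the number of apparent band edges."""
    return sum(1 for a, b in zip(p, p[1:]) if a != b)

# The accumulated phase difference across the line grows linearly with time,
# so the apparent bands multiply and tighten even though nothing diffuses:
early = band_boundaries(color_pattern(10.0))
late = band_boundaries(color_pattern(100.0))
assert late > early
```

This is the crux of the controversy: a snapshot of such a system is indistinguishable from genuine spatial ordering, yet the pattern is pure temporal ordering with a position-dependent phase.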
Having gained a familiarity with nonlinear chemical kinetics we are now in a fairly good
position to tackle an open system approach to microphysics. But, before proceeding let us
survey the current state of the art in theoretical microphysics, as it presently stands based upon
closed system concepts.
Contemporary Microphysics: A Divided Science
We have seen earlier that observation of microphysical phenomena is quite difficult, that all
of us are natural born myopics. Then, in reviewing the various scientific theories on this elusive
subject what should we expect to find, mutual agreement or discord? For a preview, we may find
it helpful to study the parable of the three blind men and the elephant, with which many are
undoubtedly familiar.
Three "myopic" blind men, on a walk one day, came upon an elephant. The first, feeling its
trunk, exclaimed that they must have come upon a rope. The second, feeling the elephant's side,
disagreed saying that it seemed to him more like a wall. The third, feeling the foot of the
elephant, disagreed with both of them insisting that they had come upon a tree. All the while, the
elephant was curiously amused.
Similarly, we find today that twentieth century microphysics is involved in the same sort of
myopic dilemma. The currently accepted field theory approach has evolved three main
theoretical approaches: quantum mechanics, wave mechanics, and relativity theory. Each of
these represents a separate, coherent body of knowledge, but each provides an understanding of
only a part of the totality of observed physical phenomena. Attempts toward a unification have
so far been unsuccessful. Unified field theories have instead focused on unifying electromagnetism,
gravitation, and the strong force. In an attempt to reconcile the divergent views of quantum
mechanics and wave mechanics, physicists have adopted the dualistic view that each be taken as
equally valid interpretations of the microphysical world, like viewing two sides of the same coin.
However, the coin itself has not been holistically comprehended. It is as if the blind men in our
parable have agreed that they had found an object which was both a rope and a wall, without
comprehending that they had found an elephant.
One of the major challenges for microphysical theory has been this dilemma that particles of
matter and quanta of radiant energy possess dual characteristics, in some situations seeming to
behave as corpuscles and at other times seeming to exhibit wave properties. Niels Bohr took an
indeterministic approach to the problem saying that because of the complex nature of
microscopic reality it was not possible to devise a single mental picture of a corpuscle. In other
words, not only are we unable to directly observe corpuscles with our instruments, as is
stated in the uncertainty relations, but the human mind is too simplistic and would be
incapable of conceiving this reality with a single picture. He suggested that in order to describe
this complexity it might be necessary to use successively two (or several) idealizations for a
single entity, like visualizing a cone in two dimensions as being either a circle or a triangle. He
pointed out that the particle and wave pictures do not come into direct conflict thanks to the
uncertainty relations, that the more precise it is desired to make one picture through observations,
the hazier the other becomes. Thus although one continually expects a battle between the
wave and corpuscle, it never occurs because there is never but one adversary present. This view
held by both Bohr and Heisenberg has become known as the probabilistic approach. According
to this, the corpuscle is viewed as being spatially and dynamically indeterminate having a range of
possible locations and momenta, or similarly a range of possible frequencies or energies. Louis de
Broglie, however, expressed doubts about this view of reality:(24)
"All physicists are aware that for the past 25 years wave mechanics has been interpreted on
the basis of pure probability. In this interpretation the wave associated with the particle is a
probability function which varies with the respective state of our knowledge and is thus
subject to sudden fluctuations, while the particle is said to lack a permanent localization in
space and thus to be unable to describe a well-defined trajectory. This way of looking at the
wave-particle dualism goes by the name of 'complementarity,' a very vague notion which some
have tried to extrapolate from physics to other disciplines, often with dangerous
consequences."
This subjectivist approach, although successful at keeping peace between quantum and wave
mechanics, has consequently underrated man's ability to grasp microphysical reality and tended
to negate the possibility that a hidden underlying reality may exist. The uncertainty principle
which was originally intended as a statement of the limitations of observation has been extended
by the probabilistic interpretation as a law of nature. De Broglie expresses the following feelings
regarding indeterminism in modern physics:(25)
"There has been a great deal of discussion in the last years about this question of
indeterminism in the new mechanics. A certain number of physicists still manifest the
greatest repugnance to consider as final the abandonment of a rigorous determinism, as
present day quantum physics must do. They have gone to the length of saying that a
non-deterministic science is inconceivable. This opinion seems exaggerated to us, since quantum
physics does exist and it is indeterministic. But it seems to us perfectly permissible to think
that, some day or other, physics will return to the paths of determinism and that then the
present stage of this science will seem to us to have been a momentary detour during which
the insufficiency of our conceptions had forced us to abandon provisionally our following
exactly the determinism of phenomena on the atomic scale. It is possible that our present
inability to follow the thread of causality in the microscopic world is due to our using
concepts such as those of corpuscles, space, time, etc.: these concepts that we have
constructed by starting with the data of our current macroscopic experience, these we have
carried over into the microscopic description and nothing assures us, but rather to the
contrary, that they are adapted to representing reality in this field."
In an attempt to achieve a more concrete picture of the wave-corpuscle duality, de Broglie
proposed his pilot wave theory. According to this, the corpuscle is considered as a kind of
singularity in the midst of an extended wave phenomenon. These pilot waves, as they are called,
are seen as being separate from the particle but closely associated with it such that the particle's
motion is controlled or piloted by them, much like a cork that is carried along by a current. The
amplitude of this wave group must be modulated in such a way that its value is nonzero only
over a finite region of space in the vicinity of the particle. Properties such as mass, charge, and
spin are viewed as being characteristic of the particle or singularity only. However, de Broglie
met with difficulty in trying to formulate a mathematical description of his model particularly
with regards to the structure of the singularity and its synergism with the pilot waves. Also, he
could not find a reasonable explanation as to why the wave mechanics of Erwin Schrödinger had
proven itself to be so successful by considering only continuous solutions to wave equations, the
so called ψ waves, and why it could ignore singularities.
Erwin Schrödinger, critically opposed to the probabilistic interpretation, favored a description
which denied the existence of the wave-particle dualism. He believed that waves alone have
physical significance and that the propagation of waves could occasionally give rise to corpuscular
appearances, but that these would be appearances only. At first, Schrödinger wanted to compare
the corpuscle to a small train of waves, but this interpretation could not be upheld, because a
wave train, in the manner he had defined it, would always have a tendency to expand rapidly and
continually into space and consequently could not properly represent particles of lasting
stability.
Albert Einstein tended to side with Schrödinger in criticizing probability theory. He raised
the following objection. Let a particle and its associated plane, monochromatic wave fall
normally on a screen pierced by a circular hole. The wave will be diffracted in passing through it
and will form a divergent spherical wave behind the screen. If a hemispherical photographic film
is placed behind the screen, the particle will reveal its presence at a particular point on this film
by making a photographic impression. But, in doing so, the probability of its passing through
any other point of the film becomes zero. Thus, it seems impossible to explain how a
photographic effect at a point P could prevent a simultaneous event at a point Q unless the
particle is actually localized in space.
However, according to the probabilistic interpretation, before the photographic impression is
made, the corpuscle is potentially present in all points of the region behind the screen with a
probability equal to the square of the amplitude of the ψ wave. The moment that the
photographic impression is produced at a particular point the probability of its presence at any
other point instantaneously vanishes. But, according to Einstein such an explanation would be
contradictory with all our ideas on space and time and with the restrictions that physical actions
are propagated through space at a finite velocity.
This turmoil over the probabilistic interpretation of the wave-particle dualism was one of the
major causes of the schism that occurred within the early twentieth century microphysics
community. There are many, however, who would tend to place the blame for this at the
foundation of microphysics, namely on the field theory approach, which has demonstrated an
inadequacy to properly integrate observable phenomena. As will be seen, the field theory
approach itself has been constructed upon a dualism, the field-source dualism.
The field theory approach of modern microphysics may best be visualized as a skeleton in a
closet, the "remains" of the nineteenth century ether theory. At the time of Maxwell, it was
believed that the universe was filled with one or several inert ethers of infinite extent having
mechanical properties such as elasticity and compressibility. It was within this theoretical
framework that Maxwell conceived his equations of electromagnetism. It was postulated that
space contained a luminiferous ether, a continuous medium that acted as a mechanical carrier of
light and electromagnetic radiation; much the same way that a body of water acts as a carrier of
surface waves. The ether had also been conceived as a carrier of gravitation. According to this, if
a celestial mass were suddenly brought into existence, it would create a distortion in the ether
which would propagate outward in all directions. Upon reaching a neighboring celestial sphere
this ether distortion, or warp, would act upon this body forcing it towards the source sphere.
The gradual downfall of the mechanistic ether theories was brought about by the results of
the Michelson-Morley experiment, which were interpreted by many as an indication that the
velocity of light remained invariant with respect to any frame of reference. The theory of
relativity which emerged was incompatible with the concept of an ether with an absolute frame of
reference. Hence, the concept of an ether filled space became gradually abandoned and with it
went Maxwell's conceptual model of electromagnetic propagation. All that remained was a
truncated version of his original equations which, after Maxwell's death, became reformulated into
their present version in the 1880's by the mathematician/physicist Oliver Heaviside. What are
today called "Maxwell's equations" are but a mathematical skeleton of what formerly had been
Maxwell's theory. The original equations, which had been intended to describe the electromagnetic
behavior of the ether, were now made to describe physical vector and tensor magnitudes existing
in an empty space without reference to any underlying medium. These interrelated magnitudes
were seen to compose a continuous, non-mechanical field that mathematically portrayed the
electric and magnetic state of each point in space. Thus, the field theory approach to
electromagnetism emerged as a "court-martialed" version of the ether theory, a mathematical
theory devoid of its conceptual model.
The field theory later became infected with the idea of particles existing as singularities. This
began with the idea introduced in the 1890's by Hendrik Lorentz of conceiving charged material
corpuscles or subatomic particles such as electrons and protons as being the sources of electric
fields. He envisioned these as being immersed in a luminiferous ether, yet distinct from that
ether. This idea seemed plausible, being closely linked to one's everyday experience of seeing
solid objects surrounded by gas or liquid media, yet distinct from those media. But, with Lorentz
this familiar concept became extrapolated to the microphysical level, where it introduced an
ether-particle dualism into microphysics. With the abandonment of the ether theory and the adoption
of the force field concept, this ether-particle dualism was transformed into a field-source dualism,
the source particle necessarily constituting a distinct charge singularity in the field; see figure 8.
Figure 8. An illustration of the source-field dualism.
Once it had become conventionalized, this field-source dualism became implanted into
modern microphysics where it has since led to much objection. One of the major critics of the
idea was Albert Einstein who noted the incompatibility of this concept with his theory of
relativity. In his article, "On the generalized theory of gravitation," he expressed the following
views as to the coexistence of fields and singularities:(26)
"The introduction of the field as an elementary concept gave rise to an inconsistency of the
theory as a whole. Maxwell's theory, although adequately describing the behavior of
electrically charged particles in their interaction with one another, does not explain the
behavior of electrical densities, i.e., it does not provide a theory of the particles themselves.
They must therefore be treated as mass points on the basis of the old theory. The
combination of the idea of a continuous field with that of material points discontinuous in
space appears inconsistent. A consistent field theory requires continuity of all elements of
the theory, not only in time but also in space, and in all points of space. Hence the material
particle has no place as a fundamental concept in a field theory. Thus, even apart from the
fact that gravitation is not included, Maxwell's electrodynamics cannot be considered a
complete theory."
Curiously enough, Floyd Allport expressed similar discomfort with the field theory approach
as it applied to psychology. His reference to the "inside-outside problem" dealt with this same
difficulty of representing singularities.(27)
"The inside-outside problem has been a stumbling block for physical field-theory as well as
for psychological. Maxwell dealt with it, in the same way Lewin did, by taking a small, but
still real, area within the field as a locus for determining the magnitude and direction of the
field-vectors. But what about the region within that small portion that was taken? This
region is not a part of the field itself; it represents only something that is acted upon by the
surrounding field forces. Does it have an inside field all its own? If it had, we should not
know what to do with it or how to integrate it with the field outside. Hence its status is quite
anomalous."
It is interesting to note that the event-structure theory Allport proposed, which takes
an open systems approach, succeeded in circumventing this inside-outside problem and many
other related problems inherent in field theory. In a similar fashion, a reformulation of
microphysics along the lines of the open system, reaction-diffusion ether model proposed in the
present paper would resolve its current field-source dualism as well. Einstein, who objected to
the field-source dualism, in fact offered a solution very compatible with this metabolic ether
approach. He felt that it was incorrect to regard fields as being the externally generated
phenomena of material singularities. He believed to the contrary that matter and energy were
formed from fields themselves either as static or translationally dynamic field densities as the
case may be. He felt that fields in nature although continuous must always contain very small
regions in which the field values are extremely high. These he referred to as "bunched fields" and
would correspond to the conventional notion of particles:(28)
"Since the theory of general relativity implies the representation of physical reality by a
continuous field, the concept of particles or material points cannot play a fundamental part,
nor can the concept of motion. The particle can only appear as a limited region in space in
which the field strength or the energy density are particularly high."
Also de Broglie quotes Einstein as saying:(29)
"A stone's throw is, from this point of view, a varying field in which states of maximum field
intensity are displaced through space with the velocity of the stone. The new physics will
not have to consider fields and matter; its only reality will be field."
But, in stating his generalized theory of gravitation, Einstein, like Lorentz, was forced to couple
his field equations with extraneous terms representing the field sources. For example, in his
equation of gravitation,
R_ik = ½ g_ik R − κ T_ik ,
the first term on the right is expressed using field components, namely the metric tensor, g_ik,
and the curvature scalar R (a contraction of the Riemann-Christoffel tensor). However, the second
term contains T_ik, the energy-momentum tensor, which is needed to represent the
gravitational field sources, i.e. the
distribution of matter and energy that produces the curvature of space. He hoped that a unified
"field" theory based on a field of more complex nature would resolve this dualism or field-source
problem:(30)
"These differential equations completely replace the Newtonian theory of the motion of
celestial bodies provided the masses are represented as singularities of the field. In other
words, they contain the law of force as well as the law of motion while eliminating 'inertial
systems'.
The fact that the masses appear as singularities indicates that these masses themselves
cannot be explained by symmetrical gik fields, or 'gravitational fields'. Not even the fact that
only positive gravitating masses exist can be deduced from this theory. Evidently a complete
relativistic field theory must be based on a field of more complex nature, that is, a
generalization of the symmetrical tensor field."
The notion of regarding material particles as inhomogeneities of an underlying continuous
substance was not first proposed by Einstein. In the mid 1800's, Lord Kelvin proposed his
hydrodynamic vortex atom theory of matter, where a corpuscle was seen to consist of a
hydrodynamic vortex in the ether. This idea was suggested to him by Helmholtz's discovery of
the great stability of vortex motions such as smoke rings. The atom vortices were considered to
be non-dissipative, the ether being assumed frictionless. Hence Kelvin's theory of matter could
not be considered an open system theory, although his model, the smoke ring vortex, in fact, is an
example of a hydrodynamical open system.
Another attempt to describe matter as an inhomogeneity in an underlying continuum was
made by Abraham after the inception of the field theory approach. In an attempt to resolve
Lorentz's field-source dualism, he attempted a pure field theory description of matter in which he
assumed the electron to be a rigid structure whose mass was fundamentally an electromagnetic
manifestation. He showed that the entire mass of an electron in motion could be built up from
field magnitudes, although he could not account for its rest mass. Others objected to his theory
on the grounds that the electric charges composing the electron, being of the same sign, should
repel one another, causing the electron to explode. They argued that to account for the stability
of an electron, it would be necessary to postulate some extraneous force of non-electromagnetic
origin to prevent its charges from escaping.
The Need for Higher Dimensions
But, basically, the nineteenth century ether theories and the contemporary field theories have
failed to unify microphysics because they have all been closed system theories. They all contain
the underlying assumption that all observable physical phenomena are produced by agents such
as fields and corpuscles all residing within a four-dimensional, space-time universe. One's
immediate reaction to this is: why not? It sounds like a reasonable assumption. Besides, how can
phenomena within the universe be produced by agents outside the universe? The universe is
infinite; so how can there be anything "outside" the universe? Perhaps we had better discuss this
question because it is central to the concept of an open system theory.
The open systems approach recognizes that all observable phenomena cannot be entirely
attributed to agents within the universe, but that there must be an outside environment which
plays an active role, for example, see figure 9. The line of infinite length pictured here is a
graphical representation of the fifth dimension, and the point, U, on this line represents our
observable, four dimensional, space-time universe.
Now it becomes easy to see that any points on our fifth dimension line not coincident with
point U would necessarily lie "outside of the universe", in the universe's higher dimensional
environment. To avoid confusion, suppose we call the entire fifth dimension line the "Cosmos",
with the understanding that the Cosmos encompasses all five dimensions. We distinguish this
from the physical "universe," whose extent is defined by only four dimensions (space-time) and
which appears as a point on our fifth dimension line. So, the universe is seen to be contained
within the Cosmos.
Figure 9. Our perceived physical universe depicted as a point
residing along a higher fifth dimension continuum.
Figure 10. Relationship of the observable universe U
to the higher dimensional Cosmos C.
This may be represented also with the aid of set theory, see the Venn diagram in figure 10.
Set U, a four-dimensional space, represents the universe, the set of all physically observable
phenomena. Set C, a five-dimensional space, represents the Cosmos. This includes all
physically observable phenomena of set U and innumerable other non overlapping sets, U', U",...
Set U is considered a subspace of C, and set C is considered the hyperspace of U.
A fifth dimension is a necessary requirement for an open system microphysical theory, but
not a sufficient requirement. Indeed, several theoreticians such as Einstein and Kaluza have
employed a fifth dimension in their calculations. Einstein, a believer in rigorous causality, hoped
to devise a unified field theory in accordance with the principles of relativity that would
encompass the description of both electromagnetic and gravitational phenomena. He believed
that such a synthesis could not be brought about in four-dimensional space-time using
combinations of the known fields, but that a field of a more complex nature, of higher
dimensionality, would need to be postulated. As did Kaluza, Einstein found it necessary to
introduce field magnitudes that were thought to be possible only in a five-dimensional world. He
felt that the rationality of postulating a fifth dimension would necessarily be judged on
theoretical grounds, rather than experimental; i.e., on the resulting degree of theoretical coherence
and simplicity:(31)
"It must be conceded that a theory has an important advantage if its basic concepts and
fundamental hypotheses are 'close to experience', and greater confidence in such a theory is
certainly justified. There is less danger of going completely astray, particularly since it takes
so much less time and effort to disprove such theories by experience. Yet more and more, as
the depth of our knowledge increases, we must give up this advantage in our quest for logical
simplicity and uniformity in the foundations of physical theory."
Although Einstein always referred to "fields" in his nomenclature, by a stretch of the
imagination we see that he was proposing what might be described as five-dimensional
mathematical media, continuous in both space and time, whose existence could not be attributed
to material singularities, but on the contrary, whose interactions produced all observable physical
phenomena. While Einstein's attempt at unification did not offer an open systems approach,
some of its precepts seem strikingly similar to those which will be developed shortly.
The Primal Flow
The open system description of microphysics which we will now examine is fundamentally
based upon the dynamics of a five dimensional medium. Our physical universe may be depicted
as lying within this medium as was seen in figure 9. Let us freeze time for a moment, our fourth
dimension, and try to visualize how this hyperdimensional medium might appear to us 3D
creatures. First imagine "space" as a void of infinite extent. Now, imagine space to be filled with an
infinite number of invisible media, each medium being continuous and present throughout all of
space. Here we fragment our 5-dimensional medium into an infinite number of 4-dimensional
media since we as physical beings are able to best conceive only one infinite medium at a time.
Now, let us take a "trip" with our mind and imagine that we perceive one of these media after the
other in succession. Suppose that each successive medium which we perceive has a higher
valence or ranking than the previous one. Proceeding in this manner through a succession of
conceptualizations, we have journeyed into the fifth dimension or into the "medium dimension".
What we have just accomplished can be more easily seen when we consider how 2D planar
creatures might visualize a cube. By mentally placing their planar world such that it intersects the
cube at right angles, they may examine thoroughly this planar section of the cube and discover
that it appears to them as a square. Now by taking a trip with their minds and imagining their
planar world to move in successive increments through the cube, they may examine its entirety
finding that the square intersection vanishes at the boundaries of the cube. This is similar to the
exercise which we have attempted above in our visualization of the fifth dimension.
Let us now leave our accustomed 4-dimensional frame of reference and view this succession
of media graphically. Figure 11 depicts the medium dimension as a partitioned dimension line
where each increment represents a particular 4-dimensional medium. Each increment or medium
has an associated reactive potential, or "energy level," and these are arranged in monotonically
decreasing value along the dimension line. Going from left to right, media of higher reaction
potential transmute successively into media of lower reaction potential. Taken together, these
indicate the medium flux, or "primal flow".
This process may be clearly illustrated by the use of a hydrodynamic analogy, see figure 12.
Here we have a series of tanks where water drains from one to another in succession. Each tank
represents a particular medium, and has a nozzle opening k_i proportional to the medium's
transmutation constant. The tanks are shaped such that the velocity of flow from each tank is
proportional to the amount of water it contains. At equilibrium where the flow velocities are all
equal, it will be seen that the tanks with the smallest nozzles will contain the greatest quantity of
water, or analogously, the greatest media concentration.
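The equilibrium claim can be checked with a few lines of arithmetic. In this sketch the flux value and the transmutation constants are invented for illustration; at equilibrium each tank's outflow k_i·c_i equals the common driving flux J, so each tank holds c_i = J/k_i:

```python
# Hypothetical sketch of the tank analogy in figure 12: each tank drains at a
# rate k_i * c_i. At equilibrium every outflow equals the driving flux J, so
# each tank holds c_i = J / k_i: the smallest nozzle retains the most water.
J = 1.0                      # assumed driving flux (arbitrary units)
k = [0.5, 2.0, 0.25, 1.0]    # assumed transmutation constants, one per medium

c = [J / ki for ki in k]     # equilibrium content of each tank/medium
print(c)                     # -> [2.0, 0.5, 4.0, 1.0]
```

As expected, the tank with the smallest nozzle constant (0.25) holds the greatest concentration.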
The medium dimension may be assumed to be infinite in extent and its unidirectional primal
flow may be considered to be continuous and unending. If point U in figure 11 represents our
physical universe, it is seen that the primal flow enters and leaves the universe as it flows in the
fifth dimension. The open systems approach to microphysics considers that all physical
phenomena within the universe, such as radiant energy and matter, are maintained by the primal
flow. If the flow were to cease somewhere "upstream" on the medium dimension, this event
would eventually reach our universe. With no driving flow to maintain it, our universe would
dissolve and only a timeless void would remain.
Figure 11. Illustration of the primal flow along the medium dimension.
Figure 12. Hydrodynamic analogy of the medium transmutation
taking place along the medium dimension.
Matter and energy may be viewed as metabolic structures whose forms (inhomogeneities in
the hyperdimensional substance) are continuously regenerated by the medium flow. Radiant
energy is an example of a space-time dependent structure whereas matter demonstrates only
space dependence. These dissipative structures exist only in the presence of an energic flux. We
may say that the real essence of radiant energy, matter, and even the essence of motion and
evolution in our universe is basically derived from the primal flow. Physical phenomena are like
eddies in a river, inhomogeneities in this underlying dynamic reality, a reality which remains
invisible to us. The warehouse model is useful for picturing this metabolic concept. The pieces
of furniture being moved about may represent the hyperdimensional component flow or primal
flow, and the furniture groupings which are formed may represent the physically perceived
patternings such as radiant energy and matter.
These processes can also be understood with the help of the chemical reaction model. The
Brusselator reaction scheme represented by equations (4), it may be recalled, was viewed as
having a "reactant dimension". As is seen in figure 13, the reactants may be ranked in order of
their molecular energy level.
Figure 13. Mapping out the Brusselator reactants
along a hypothetical reactant dimension.
Each chemical species or coordinate composing the reactant
dimension has three spatial dimensions and a time dimension, as in our cosmic medium model,
and the reactant dimension represents the fifth dimension. The more energic chemicals A and B
interact in a parallel manner to form chemicals X and Y and finally reduce to form chemicals D
and Ω. The dependent variables X and Y, which form the time and space ordered structures of
the system, can be taken together as constituting a set U, or "universe", of the reaction system.
In this example the reactant dimension has a finite length of three species.
The Belousov-Zhabotinskii reaction system, represented by equations (5), would have a
somewhat longer reactant dimension and would be somewhat richer in terms of parallel reactions.
Its universe might involve four or more principal time-dependent species; i.e. Ce³⁺, Ce⁴⁺,
HBrO₂, and Br⁻. The number of parallel reactions may be viewed as a sixth dimension, but it is
not necessary to make this complication. However, it is apparent that for the reaction system
variables to be able to form regular spatial or temporal patterns, the reaction system must have at
least two parallel reaction paths along the reaction dimension that intersect and interact with one
another.
Building on the basis of the chemical reaction model as represented in figure 13, we may
hypothesize a medium interaction scheme as shown in figure 14.
The physical universe is depicted here as being composed from a set of three interacting
media: G, X, and Y, the dependent variables of the universe. The source media, A and B, and the
sink media D and Ω remain homogeneous in time and space. Also, as before, we might picture an
infinite ranking of media extending to the left and right on this dimension. The set of kinetic
equations presented in reaction system (6) describes this medium interaction.* Note the
similarity with reaction scheme (4).
Figure 14. Media reaction sequence proposed as a generator of a physical universe U
whose states are shown mapped along a hypothetical reaction dimension.
_________________
* This reaction scheme is undoubtedly oversimplified as a kinematic representation of the media
dynamics involved, but it will serve its purpose well as an organizing vehicle of the thoughts to be
presented here.
[Update comment: This reaction scheme constitutes the earliest presentation of what later I termed
"Model G". This version involved the same set of reaction species as those adopted later, but with
the exception that all reactions were depicted as forward reactions. It was not until 1978 that I
realized the importance of the reverse reaction X ← G which allowed Model G to generate an
autonomous, localized, dissipative soliton having particle-like properties.]
          A → G              (a)
          G → X              (b)
(6)       2X + Y → 3X        (c)
          B + X → Y + D      (d)
          X → Ω              (e)
The overall irreversible reaction proceeds as: A, B → G, X, Y → D, Ω. The G, X, and Y
media are here cast in the roles normally reserved for the gravitational, electric and magnetic fields
of contemporary field theory. One important difference, however, is that the media proposed
here always have positive values or positive concentrations even during periodic oscillation,
unlike conventional field magnitudes which may take negative values as well as positive.*
Figure 15 shows the interaction of the G, X, and Y media in greater detail. The reaction
proceeds as follows: Medium A converts to G, which converts to X and finally to Ω.
Simultaneously, medium B combines with X to form Y and D (reaction d). In turn, Y combines
with X to autocatalytically produce more X (reaction c).
Figure 15. A medium reaction scheme proposed
as a generator of the physical universe.
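The relaxation of scheme (6) toward a homogeneous steady state can be integrated numerically. The sketch below is not from the original text: every rate constant is assumed equal to 1 and the source concentrations A and B are held fixed, in which case the kinetics reduce to dG/dt = A − G, dX/dt = G + X²Y − (B+1)X, dY/dt = BX − X²Y, with steady state G = A, X = A, Y = B/A:

```python
# Forward-Euler integration of the rate equations implied by scheme (6),
# with every rate constant set to 1 (an assumption made for illustration).
A, B = 2.0, 3.0              # source-medium concentrations, held constant
G, X, Y = 0.1, 0.1, 0.1      # arbitrary initial values
dt = 0.001

for _ in range(200_000):     # integrate out to t = 200
    dG = A - G                        # reactions (a) and (b)
    dX = G + X**2 * Y - B * X - X     # reactions (b), (c), (d), (e)
    dY = B * X - X**2 * Y             # reactions (d) and (c)
    G, X, Y = G + dt * dG, X + dt * dX, Y + dt * dY

print(round(G, 3), round(X, 3), round(Y, 3))  # -> 2.0 2.0 1.5, i.e. G=A, X=A, Y=B/A
```

With these parameter values the system is sub-critical, so fluctuations damp out and the media settle into the homogeneous state, matching the pre-creation "steady state" described in the next section.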
The Created Universe
When the primal flow is below a critical threshold, the reaction system is in the steady state.
The media kinetics proceed in a linear manner and all media concentrations remain time invariant
and homogeneous throughout space. This must have been the status quo before the creation of
our universe, before the appearance of physical structure. But, suppose that some time in the
past the G medium developed an inhomogeneity and that in a local region of space its
concentration became low enough that the critical reaction threshold was exceeded. As a
consequence, the concentration of X would have decreased to the point where the autocatalytic
X-Y loop would have become nonlinear. Temporal fluctuations in the steady state
concentrations of these media would have become amplified rather than damped and a new stable
inhomogeneous state would have been reached in the concentrations of X and Y. A universe was
born.** Two types of structures were eventually formed, radiant energy and matter. Let us
analyze these structures and their interactions using the chemical kinetics analogy. We will begin
with the photon.
__________________
* [Update comment] In the original manuscript I had used the symbols E and B, instead of X and Y
and had identified these with electric and magnetic field intensities. Later I realized that both
variables depicted the electric field potential and labeled them instead as X and Y. At that time I
realized that the magnetic field was not itself a real entity but only a manifestation of a moving
electric field; i.e., of vortical movement of the X and Y media.
Diffraction experiments indicate that photons do not have significant lateral interaction with
their environment past a distance of 1λ. We may analyze this as follows. When a given spatial
location is crossed by a photon, a disturbance in the G, X, and Y media will be present for a time
1/ν. During this time these media inhomogeneities would propagate a lateral distance 1λ, after
which the photon would have passed by, and the surrounding media would then revert back to
the homogeneous steady state.
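The 1λ figure follows from simple arithmetic: a disturbance that persists at a point for one period 1/ν, spreading laterally at the propagation speed c, covers a distance c/ν = λ. A quick sketch (the 500 nm wavelength is an arbitrary example, not from the text):

```python
c = 2.99792458e8        # speed of light, m/s
lam = 500e-9            # example wavelength: 500 nm (green light)
nu = c / lam            # corresponding frequency, Hz
duration = 1.0 / nu     # time the media disturbance persists at a point
lateral = c * duration  # distance it can spread laterally in that time
assert abs(lateral - lam) < 1e-20  # lateral reach equals one wavelength
```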
It has been observed that the light from distant stars is bent when passing near the Sun.
Einstein explained this by saying that space was warped in the vicinity of massive bodies. Here
we offer a novel approach to this phenomenon. Suppose that, due to the large mass of the Sun,
there is in its vicinity a strong gradient in the gravitational medium. Imagine that the photon,
which has a spatial extent of 1λ, passes through this gradient. The frequency of its X, Y cycle
would be increased preferentially on the side toward the Sun where the G medium concentration
is greater. Thus, the structure of the photon is subjected to a stress, its oscillating X, Y media
being subject to a frequency gradient.
It could be said that the photon behaves in a homeostatic manner. According to Le Chatelier's
Principle "if by any action (e.g., change in concentration) a shift in the equilibrium state is
produced, the nature of this shift is such that the initial action is reduced in magnitude". So, to
maintain the spatial integrity of its quantized medium structure, the direction of catalysis of the
photon is continuously altered and its path of propagation turns into the gradient (in the
__________________
** [Update comment] In the original version of this manuscript, I had proposed that this emergent
inhomogeneous state was periodic in both space and time producing a limit cycle oscillation. I had
suggested the possibility that this had emerged as an outward expanding spherical wave that may have
nucleated at a particular point in space as in the fireball theory of creation, or that this disturbance
may have been randomly precipitated throughout space. But in later years seeing that subquantum
kinetics predicted a tired-light effect obviating spatial expansion, I came to reject the standard big
bang cosmology. Also I adopted the position that media substrate fluctuations instead led to the
formation of inhomogeneous steady-state dissipative structures, i.e., to the nucleation of material
particles.
In the original version of this manuscript I had conceived the photon as a space-time dependent
structure, in some ways resembling the chemical structure classified in region III of the phase diagram
presented in figure 5. I had imagined the concentrations of its two coupled species, X and Y (or E
and B), as mutually oscillating 180° out of phase in a limit cycle fashion, its frequency, ν, amplitude
a, and wavelength λ all remaining time invariant. However, I realized that such a representation had
several shortcomings, one being that the chemical model did not predict the corpuscular nature of the
photon, the fact that the photon travels in a linear manner. For example, I noted that the
propagating ring pattern of the Zhabotinskii reaction is observed to spread out as an expanding wave.
Since the media concentrations of the wave remain undiminished as the wave expands, the quantity
of temporal structuring would increase with time. On the other hand, a photon is observed to
propagate linearly, not concentrically. Its medium structure remains spatially localized as it travels,
a quantized packet of unit disturbance. I theorized the possibility of a G medium inhomogeneity
packet of given magnitude for some reason diffusing or propagating in a linear manner with a
velocity c uniquely determined by the reaction kinetics and diffusion coefficients. Then I imagined
that this propagating G medium inhomogeneity was somehow accompanied by a periodic limit cycle
disturbance in the X and Y media, the magnitude of this disturbance and consequently the frequency
of its oscillation being somehow dependent upon the magnitude of the G medium inhomogeneity. In
this manner, I imagined that higher frequency photons, or higher energy photons, would be
associated with greater gravitational media concentrations. All of this was rather vague speculation
and one that I abandoned in later versions of subquantum kinetics.
direction of lower G concentration). This phenomenon may also be viewed in a manner similar to
that suggested by Kopell and Howard for viewing the spatial band patterns of the Belousov-
Zhabotinskii reaction. By assuming that the gravitational medium gradient laterally decouples the
X, Y oscillation, then the turning of the photon may be viewed as a migration of the disturbance
pattern as it maintains phase coherence.
Now let us examine the time-independent space order patterns of the interacting G, X, and Y
media; i.e. subatomic particles. The transition from the time-dependent state, radiant energy, to
the time-independent state, matter, is observed in the phenomenon of pair production. This may
occur when a photon having an energy of greater than 1.02 Mev passes in the neighborhood of a
massive nucleus. Two subatomic particles of opposite charge and equal mass are produced, the
electron and the positron. If these particles collide with one another, they will become
annihilated; the matter state then converting back to the radiant energy state.
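The 1.02 MeV threshold is simply twice the electron rest energy, since the photon must supply the mass of both created particles. A quick check (the rest energies are standard values, not taken from the text):

```python
m_e_c2 = 0.51099895          # electron rest energy, MeV (standard value)
threshold = 2 * m_e_c2       # photon must create both electron and positron
print(round(threshold, 3))   # → 1.022 (MeV)

# The same argument scales with mass for a proton-antiproton pair:
m_p_c2 = 938.272             # proton rest energy, MeV
print(round(m_p_c2 / m_e_c2))  # → 1836, the proton-to-electron mass ratio
```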
Let us analyze pair production from the chemical kinetic viewpoint. Suppose that when the
photon is in the vicinity of a nucleus it encounters a strong G medium gradient. This induces the
formation of a time-independent, spatially ordered structure having alternate shells of X and Y
media concentrations similar to the vertical band patterns observed in the Belousov-Zhabotinskii
reaction except that the spatial patterns produced here have a spherical geometry.
Figure 16 depicts the layered structure of the electron and positron, the light areas
representing a predominant concentration of the X medium and the dark areas representing a
predominant concentration of the Y medium. Note that the pattern sequence of the one is
reversed in the other, hence they are complements of one another. Let us hypothesize that the
Figure 16. Depiction of the electron and positron as localized dissipative space structures.
electron has a core of Y medium surrounded by a shell of X medium, and that its antiparticle has
the opposite configuration with an X medium core and a Y medium shell.
Each shell, let us suppose, maintains the medium inhomogeneities in the shells lying
immediately within it and without it. For example, if a shell has a high X medium concentration
(and therefore a low Y medium concentration), it will tend to catalyze more Y medium in the
shells above and below it as its reaction diffuses outward and inward. So, taken together, with
each shell maintaining its neighboring shells in a steady state manner, the whole pattern remains
time invariant.
At the instant that these particles are formed, their spatial patterns would propagate outward
at the speed of light. It is as if the original photon which had formed them were still traveling at
the frontiers of the space structure patterns. Since the photon is quantized, the amount of
medium disturbance it creates in traveling outward remains constant. Hence, each subsequent
shell that is formed has medium concentrations that are lower than the previous. Therefore, the
medium concentrations fall off as the inverse square with the distance from the center of the
structure. The G medium attenuates continuously with distance, whereas, the X and Y media
attenuate with a superimposed periodicity, the shell pattern.
[Update comment] Actually, my suggestion here was incorrect. With an assumed inverse square
decrease, the amount of "action" within a given shell (the shell's summed deviation from the
steady state) would have remained constant in shells situated at successively greater distances
from the particle's center, hence the total action for the particle as a whole would have tended
toward infinity as radius increased. So action in a given shell must necessarily decline rapidly to
avoid this infinity problem. Thus the idea of a photon traveling radially outward was misguided.
It is the particle's nuclear electric field pattern that propagates radially outward as a spherical
wave. In later papers on subquantum kinetics, I had predicted that the X-Y pattern at the core of
a subatomic particle should fall off much faster than inverse square, hence allowing the shell
pattern's integrated action, its sum total deviation from the steady state concentrations, to
converge to a finite number at large radial distances. This predicted rapid power law decline was
later confirmed in computer simulations performed on Model G.
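The convergence argument in this update comment is easy to check numerically. Treat shell n (radius n·λ) as contributing action proportional to (shell area) × (field amplitude) ≈ n² · n⁻ᵖ. The exponents below are illustrative choices: p = 2 is the inverse-square case rejected above, while p = 4 stands in for the "much faster" decline; any p > 3 makes the sum converge.

```python
import math

def total_action(p, n_shells):
    """Sum shell contributions (area ~ n**2) x (amplitude ~ n**-p)."""
    return sum(n ** (2 - p) for n in range(1, n_shells + 1))

# Inverse-square amplitude (p = 2): every shell contributes equally, so
# the total grows without bound -- the infinity problem noted above.
print(total_action(2, 1000), total_action(2, 2000))  # → 1000 2000

# A steeper decline (p = 4, an illustrative choice) converges; here the
# partial sum approaches the known limit pi**2 / 6.
print(abs(total_action(4, 10000) - math.pi**2 / 6) < 1e-3)  # → True
```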
Each set of X, Y shells composing the electron's structure has a wavelength or spacing of λ.
Thus, the electron space structure should be expected to diffract from a grating in the same
manner as a photon. Its wave characteristics, therefore, are not a result of an associated wave as
suggested by de Broglie, but are due to the periodicity of the medium densities composing its
structure.
In an inertial frame of reference in which the electron is at rest, its space structure would have
a spherical geometry and its shell spacing would be uniquely determined as λc. However, if the
electron were traveling at a constant velocity, v, with respect to an observer's frame of reference,
it would be expected to exhibit a shorter wavelength when approaching the observer and a longer
wavelength when leaving the observer, a sort of Doppler effect. Also, its space order structure
would appear to be compressed together in front and expanded in back relative to its direction of
travel.
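Taking λc to be the Compton wavelength h/(mₑc), and assuming, purely as an illustration (the manuscript gives no formula), that an ordinary relativistic Doppler factor governs the compression, the fore and aft shell spacings of a moving electron would look like this:

```python
h   = 6.62607015e-34     # Planck constant, J s (standard value)
m_e = 9.1093837015e-31   # electron mass, kg (standard value)
c   = 2.99792458e8       # speed of light, m/s

lam_c = h / (m_e * c)    # rest-frame shell spacing: Compton wavelength
# lam_c ≈ 2.426e-12 m

beta = 0.5               # example speed v = 0.5 c (arbitrary choice)
doppler = ((1 - beta) / (1 + beta)) ** 0.5  # relativistic Doppler factor
front = lam_c * doppler  # compressed spacing on the approaching side
back  = lam_c / doppler  # stretched spacing on the receding side
assert front < lam_c < back
```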
Let us now investigate the accelerated motion of a particle where its velocity changes with
time. This being a complex phenomenon, it will be necessary to examine it in an incremental
fashion for ease of comprehension, much like the method employed in the elementary treatment
of mathematical integration. So, although acceleration may indeed be continuous, we will imagine
that it occurs in jumps or quantum transitions.
Space ordered structures such as the electron are spatially uniquely determined independent
of time, and therefore, may be supposed to exist only in inertial frames of reference, i.e., ones
that are time invariant or not accelerated. By this criterion, an electron may travel through space
at any arbitrary velocity, but it may never change its direction or speed without destroying its
structure, for its structure is a time invariant phenomenon. To undergo accelerated motion the
electron must jump from one inertial frame to another, each time increasing its velocity, each time
creating a new space structure and allowing its former structure to dissolve.
When the former space order structure dissolves, it returns to the time-dependent state and
radiates out as a photon. This may be seen to occur in the following way. Suppose an electron
is at rest at location x0 in reference frame α0 and that at time t0 it jumps to reference frame α1
having a relative velocity, v, with respect to α0. In making this jump, it has not changed its
spatial location. However at time t1, it will have been displaced a distance Δx = x1 - x0 = v(t1 -
t0) from its position at t0. Also, at time t1 its α1 structure would have radiated out a distance of
d = c(t1 - t0) and its former α0 structure would have receded radially by an equal amount. Both
wave fronts at this time would be out of register in the direction of v by an amount Δx = v(t1 -
t0). Suppose that these two space structures must be out of register by a critical distance, λ/2
before the α0 structure converts to the time-dependent state and that this spacing is reached at
time t1. At this time the spatial dislocation would have reached a shell in the α0 structure at a
distance d from its center. Whereupon, this shell would convert into the radiant energy state
forming a photon whose direction of propagation would be perpendicular to the axis of
dislocation. If the acceleration is great, i.e., if the quantum jumps are great, the critical dislocation
will be reached sooner in the inner lying shells, and the radiation will necessarily be short wave.
With slower quantum jumps the more outlying shells will become unstable resulting in long wave
radiation. The phenomenon which we have just described, whereby charged particles undergoing
acceleration radiate photons, is known as bremsstrahlung radiation.
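The geometry of this argument reduces to two lines of algebra: the dislocation Δx = v·t reaches the critical λ/2 at t₁ = λ/(2v), by which time the instability has spread to a radius d = c·t₁ = cλ/(2v). So a larger jump velocity destabilizes a shell closer to the core, as the paragraph states. A numeric sketch (the v values are arbitrary; λ is taken to be the Compton wavelength, an assumption):

```python
c = 2.99792458e8           # speed of light, m/s
lam = 2.426e-12            # shell spacing ~ Compton wavelength, m (assumed)

def unstable_radius(v):
    """Radius of the shell that first reaches the lambda/2 dislocation."""
    t1 = lam / (2 * v)     # time for the offset v*t to reach lambda/2
    return c * t1          # how far the alpha-0 pattern has spread by then

slow, fast = unstable_radius(1e5), unstable_radius(1e7)
# A 100x larger jump velocity reaches criticality 100x closer to the core,
# i.e. in the inner shells, giving shorter-wavelength radiation.
assert fast < slow and abs(slow / fast - 100) < 1e-9
```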
[Update comment] Clearly, the understanding of particle acceleration given in the above
paragraph needs further development. In later versions of the subquantum kinetics theory (and
the version that was published in 1985), I identify particle charge with either a positive or
negative potential biasing of the X, Y space structure pattern relative to the ambient
homogeneous steady state X and Y concentrations. So the above discussion of an accelerated
space structure producing bremsstrahlung radiation as a result of dissolution of its previous space
structure configuration should apply only to charged space structures, hence those having X, Y
space structure patterns biased away from the homogeneous steady state. To be realistic, this
approach must show that neutral particles, such as the neutron, whose space structure has no such
positive or negative potential bias, would produce no photon radiation upon acceleration.
Another phenomenon which may be discussed here is inertia, or the tendency for massive
bodies to resist acceleration. We have said that when a particle accelerates, it must recreate its
structure with each quantum jump. Thus, particles of greater mass, i.e., ones having greater G, X,
and Y media concentrations, must organize greater quantities of media when making quantum
jumps. Therefore, compared to less massive particles, such particles require a greater amount of
time to carry out the same action.
Let us now examine how a particle's acceleration is related to medium gradients in its
environment. Imagine that an electron, particle P, is subject to a negative G medium gradient due
to a nearby space structure, particle Q. Viewing this from Q's frame of reference, we would see P
approaching Q at a velocity, v. At a given instant t0, we observe that Q's G medium gradient
induces an X concentration gradient across particle P, causing P to have lower concentrations of
both X and Y media, and consequently a lower X-Y cycle flux, on the side toward Q where the G
medium is less concentrated. Consequently P's periodic space structure would have a shorter
wavelength on the side toward Q and a longer wavelength away from Q. But, this is exactly the
same as the "relativistic" rod contraction distortion we would expect to find when viewing P
traveling at a velocity v with respect to Q's frame. Let us take other observations of P at times
t1, t2 , etc., at closer proximity to Q and consequently in steeper G medium gradients. We would
find that at each such time particle P would adopt a velocity toward Q such that the resulting rod
contraction distortion of P's space structure would match the distortion that would arise due to
Q's gravity gradient causing P's space structure to be more compressed on the side facing Q.
Thus, particle P's tendency to adapt to the changing G medium gradient may explain its
accelerated motion.
On the other hand, if P were in a circular orbit around Q, its accelerated motion, being
perpendicular to its direction of travel, would only serve to change P's direction of travel, and
hence, the direction of its gravitational spatial distortion with respect to Q. P's relative velocity
with respect to Q would not change since the G concentration would remain constant along P's
orbital circumference.
Where X and Y medium gradients are involved as in electrostatic attraction and repulsion,
there are two possibilities for accelerated motion. Choosing two complementary particles, such
as the electron and positron, we would observe an attractive acceleration. On the other hand, two
identical particles, such as two electrons, would undergo a repulsive acceleration. These two
aspects of electrostatic motion can be attributed to the structural sequencing of the X and Y
media in the particles under consideration. We would find that, on the average, locations in the
vicinity of an electron have a higher average Y medium concentration and lower average X
medium concentration. This is because the Y medium is present in a more concentrated state at
the electron's core, whereas the X medium in the surrounding shell has a much lower
concentration. This situation is just the reverse for the positron. So, the electron has a positive
net Y medium radial gradient in its space structure and the positron has a positive net X medium
radial gradient in its space structure.
Suppose an electron were subject to an X medium gradient, for example, a gradient originating
from a positron. Thus, there would be an elevated X medium concentration on the side of the
electron nearest the positron. The electron's space structure has a "need" for higher X
concentrations since it has an overconsumption of X in its core which causes a net low X
concentration there. So its appetite for X would be more satisfied by moving closer to the
positron and into its region of higher X concentration. This shift will be accompanied by a
transition in inertial frames with the result that the electron's space order structure is
foreshortened on the side toward the positron and elongated on the side away. The movement of
the electron may be visualized as the migration of a sand dune on a windy beach. The particles of
sand composing the dune on the windward side are eroded away while on the leeward side they
are accumulated. Thus, the dune migrates by a process of metabolization.
The same reasoning can be employed when considering the repulsion of two electrons. In
this case we may view one electron as being perturbed by the other's X medium gradient. That
is, the first electron will be subjected to a lower X concentration on the side facing the other
electron. Since this lower concentration reduces its ability to satisfy its overconsumption of X, it
will want to move away from its partner electron into a more favorable environment. A similar
but complementary argument could be made for the Y medium components of their space
structures.
The elastic collision of two electrons may be viewed as a situation in which the two particles
undergo spatial metabolization as their space order structures mutually adjust to each other's
presence. Let us view this collision from an inertial frame located at the center of mass of the
system. The two electron space structures will be assumed to approach each other at time t0
with a relative velocity, v. Both space structures are distorted such that they are compressed in
front and elongated behind with respect to their direction of travel. As they approach closer,
they will enter steeper X and Y medium gradients. Adapting to these environmental
disturbances, each electron will shift inertial frames, reducing its velocity with respect to the
other particle. Accompanying this deceleration will be the phenomenon of inertia and
bremsstrahlung radiation. At their closest approach both electrons will be in the observer's frame
of reference. The space structure of each electron will now be spherical with unique wavelength,
λ0. At later observations the electrons will be seen accelerating away from one another, this
acceleration diminishing in magnitude as the electrons become more separated. Their space order
structures again will appear distorted with respect to their direction of travel. When the electrons
have reached the distance of separation which they had at time t0, they will be observed to be
moving apart with a relative velocity less than v. This is due to the fact that a portion of their
media concentrations had become lost as bremsstrahlung radiation, and consequently, their
transition to new inertial frames was "nonconservative".
By visualizing the collision in the above example, we may come to understand why
matter can give the impression of being solid even though it is composed of diaphanous
substances. From this, we may see more clearly what Einstein meant when he spoke of a stone's
throw as "a varying field in which states of maximum field intensity are displaced through space
with the velocity of the stone."
There are many other subatomic particles of which we have not yet spoken, most of them
more massive than the electron or positron. Particles such as the proton, the neutron, and the
meson, are examples of other stable states existing as time independent space structures. Being
more massive, they involve media disturbances of greater concentration. For example, proton-antiproton
pairs may be created by the pair production process in the same manner as electron-positron
pairs; however, the incident photon must be roughly 1836 times more energetic. The fact that
we observe only certain types of stable particles in nature means that only certain wavelengths
are allowed to exist in the time independent state. Upon investigation, these quantized media
states should serve as valuable clues to the nature of the media kinetics which have created our
universe.
[Update comment] In this paper I had proposed that a neutron may be a dissipative structure
whose core polarity is in oscillation, alternating between a high X concentration and a high Y
concentration as in the propagating ring pattern seen in the Belousov-Zhabotinskii reaction. I
had proposed that this would result in electrical neutrality since the neutron's X and Y media
gradients would be the same on the average. I later abandoned this oscillatory particle model of
the neutron when I realized that, as in the Brusselator, these space structures would undergo a
secondary bifurcation in which a charged state would appear in which their X and Y space
structure patterns were biased relative to the homogeneous steady-state (X positively and Y
negatively biased, or X negatively and Y positively biased). A particle in the neutral state, such
as a neutron, would simply be the particle before having undergone this secondary charge
bifurcation.
It is interesting to note that a particle's media concentration, or "mass" is independent of its
space order structure sequencing, or "charge." Hence, we observe particles of different masses
having charges of equal magnitude.
Conclusion
The open systems approach which has been sketched out here appears to fulfill the basic
requirements of the "unified field theory" which Einstein had envisioned. From an open systems
model of interacting media we were able to predict the existence of matter and radiant energy
states and the mechanism of their interchangeability. By analyzing the structure of the matter
state as a dissipative space structure, we were able to offer explanations for the existence of
charge, mass, and wavelength. By studying the dynamics of how space structures adapt to
medium gradients, we were able to predict gravitational attraction, electrostatic attraction and
repulsion, and understand their connection with the concepts of inertia, relativistic rod
contraction, and radiating charges. By this approach, the field-source problem and the wave-particle
dualism of field theory are eliminated and a framework is established in which relativity
theory may be made compatible with quantum mechanics. Also the open systems approach
offers an opportunity for physics to return to the classical determinism it once knew.
Even more significant, this approach revives microphysics from its inanimate, closed system
past, and brings it under the framework of general system theory as a "life science". Formerly,
efforts to unify physics and biology under a common theory had proven futile, like trying to mix
oil and water. One dilemma which was particularly puzzling was that in observing physical
systems one would conclude that entropy increases, whereas in biological systems entropy
decreases. However, the advent of open systems microphysics undermines the tenet that positive
entropy is the law of the universe, and this age-old dilemma becomes resolved.
For example, when gas molecules displaced to one end of a volume are left to expand and fill
the whole volume, classical thermodynamics tells us that they go toward a state of disorder, i.e.
entropy increases. Yet, when a plot of land is cleared in a jungle and left untended, it becomes
overgrown with vegetation. With the open systems analogy, we may view these two situations
as being essentially the same. The gas molecules, like the jungle plants, are negentropic, open
systems which behave in a homeostatic manner. In both cases, however, it appears that an
ordered placement has tended toward a disordered placement. We then realize that perhaps
positive entropy is merely a manifestation of the behavior of open systems, negentropic systems
seeking mutual equilibrium. Perhaps positive and negative entropy are just two sides of the same
coin.
Closed system concepts, such as the atom of Democritus and the subatomic building block
particles of modern theories will have to be dispensed with. The new physics will be describable
on the basis of the warehouse concept and open system principles. Physical reality will become
regarded in a new light. It will become realized that the cosmos is dynamic; that existence is
dynamic. The primary principle or law of nature which operates is the evolution of ordered
systems, where dynamic events repeated in the same manner with great frequency give rise to the
appearance of structure. Structure will be viewed holistically rather than indeterministically.
Structure will be understood only within the context of its dynamic, sustaining environment. The
formation of the universe will become regarded not as a past event, but an ongoing process.
References
1) Allport, F. H. Theories of Perception and the Concept of Structure (New York: Wiley, 1955).
2) LaViolette, P. A. "The predator-prey relationship and its appearance in stock market trend
fluctuations." General Systems 19 (1974): 181-194.
3) Glansdorff, P., and Prigogine, I. Thermodynamic Theory of Structure, Stability and
Fluctuations (New York, 1971), p. 224.
4) Glansdorff, P., p. 225.
5) Glansdorff, P., p. 230.
6) Glansdorff, P., p. 233.
7) Glansdorff, P., p. 236.
8) Glansdorff, P., p. 236.
9) Glansdorff, P., p. 241.
10) Glansdorff, P., p. 249.
11) Glansdorff, P., p. 258.
12) Glansdorff, P., p. 260.
13) Prigogine, I. "Thermodynamics of Evolution," Physics Today, Nov. 1972, p. 26.
14) Prigogine, I., p. 26.
15) Field, R. J. "Oscillations in chemical systems. II." Journal of the American Chemical
Society, 94 (Dec. 13, 1972), p. 8657.
16) Field, R. J., p. 8649.
17) Glansdorff, P., p. 262.
18) Winfree, A. "Scroll-shaped waves of chemical activity in three dimensions," Science, 181
(Sept. 1973), p. 937.
19) Winfree, A. "Spiral waves of chemical activity," Science, 175 (Feb. 1972), p. 634.
20) Field, R. J., and Noyes, R. M. "Explanation of spatial band propagation in the Belousov
reaction," Nature, 237 (June 16, 1972), p. 391.
21) Kopell, N., and Howard, L. N. "Horizontal bands in the Belousov reaction," Science, 180
(June 1973), p. 1171.
22) Glansdorff, P., p. 263.
23) Thoenes, D. "'Spatial oscillations' in the Zhabotinskii reaction," Nature Physical Science,
243 (May 14, 1973), p. 18.
24) De Broglie, L. New Perspectives in Physics (New York, 1962), p. 108.
25) De Broglie, L. The Revolution in Physics (New York, 1953), p. 216.
26) Einstein, A. "On the generalized theory of gravitation," Scientific American, 182 (April
1950), p. 14.
27) Allport, F., 1955, p. 159.
28) Einstein, A., p. 15.
29) De Broglie, L., 1962, p. 144.
30) Einstein, A., p. 16.
31) Einstein, A., p. 15.
 