
N and Ω

a semi-classical unified theory of elementary particles based on the qualities of space-time

salvatore gerard micheal

1
About the book:
this book is intended for a general audience interested in
science, physics, or unification physics. It is intended to be
readable and enjoyable. The author frequently makes fun of
science, physics, and himself – so please read with an open
mind!

About the author:
sg micheal was formally educated in statistics, psychology, and
systems science at Michigan State University. He also has
graduate level education in nuclear, mechanical, and electrical
engineering from various other institutions. He has written
several books on physics and systems.

About the theory:
this theory has been developing for about twenty-five years in fits-and-starts. It has matured to the point where it needs to reach a broader audience. It is not sophisticated mathematically but based on an engineering perspective – with elements rooted in established and accepted engineering principles.

About the title:
N is for newton, the SI unit of force; Ω is for ohm, the SI unit of resistance. Z0, the impedance of space, has units of ohms; Y0, the elasticity of space, has units of newtons. Other than dimensionality, these are the only two qualities of space-time.

Contents:

Chapter One – Background
Chapter Two – Convention’s Approach
Chapter Three – A Decisive Test
Chapter Four – The Core Equations
Chapter Five – An Intuitive Description
Chapter Six – The Systems Approach
Chapter Seven – The Website
Chapter Eight – Uncertainty, Part One
Chapter Nine – Uncertainty, Part Two
Chapter Ten – The Source of Uncertainty
Chapter Eleven – Energy Distribution
Chapter Twelve – Eulogy/Christening

Chapter One – Background

Science should be about the pursuit of truth and understanding
nature – our universe. But it has become more about “paying
dues” and defense of one core principle – more than anything
else. That core principle is reduction. And “paying dues” means
endorsing some conventionally accepted idea, doing years of
research “proving” it, and receiving recognition for it. Physics
has become an agglomeration of disparate, many times
conflicting, ideas that have been generated in this pay-dues-
reduction machine. One great scientific principle, Occam’s
Razor, has survived – but in reality, they only pay lip service to
it. Occam’s Razor states – among proposed ideas in science
(those that try to explain natural phenomena) – the simplest
idea that requires the fewest assumptions tends to be
the correct one. Occam’s Razor cuts away the fat of incorrect
ideas – arriving at the meat of truth. That’s the way it’s
supposed to be..

Instead of a razor, science has become an enormous blender or
meat grinder – where all kinds of bits of meaninglessness go in:
Casimir farce, virtually undetectable particles, the Higgs bozo..
Is what comes out “golden”, or something else? The core
principle that was generated is the idea that very small bits of
matter, elementary particles, are in reality – multi-state random
waves. Random waves of what? Energy? Random waves in
what? Space-time? “Science” does not want to say – because it
would be extending itself, violating Occam’s Razor.

Instead, “science” has created an enormous mathematical
artifice purporting veracity based on complexity and elitism. “I
went to Stanford/Harvard. Doesn’t that make me right?” “I
kissed your ass for twenty-five years. Shouldn’t you pay me
some respect now?” Conventionalists, cons for short, will
dismiss this with a wave of a hand saying “most developments
come from non-Ivy League institutions”. That may be true, but
it’s the same as saying elitist snobbery knows no bounds. Cons
will try to sell you something you don’t need or that violates your
belief system. They do it by tricking you into believing their
math is perfect, every assumption is the best possible and most
reasonable, and that every dollar spent is an investment in the
future. Is there really any difference between a conman and a
conventional physicist – aside from education? Look at Niels
Bohr. He’s a perfect example. He publicly argued with Einstein
because of a differing belief system – not hardcore evidence
against Einstein. Einstein was a man of faith. He saw the
genius of the Divine in his equations. Bohr was a hardcore
atheist and libertine. His favorite saying was “there is no God”.
Of course a bunch of elitist snobs would welcome Bohr over
Einstein – it gave them license to do whatever they wanted.
Besides atheism and liberty, Bohr also pushed the random
wave concept. So “science” became deluded and sidetracked
based on a hidden desire for freedom.

In my searches of the internet, I have found only one group of
physicists who subscribe to a rational deterministic perspective
of elementary particles. Their publications can be found at
commonsensescience.org. I believe their electro-dynamic
model of e.p.s is complementary to my elastic-space model.
But they rejected my proposal to integrate:
Sam,
I had time only for a brief review of your paper. I reject
the proposal that space is an elastic medium. My
approach to describing the physical universe is
incompatible with the assumption that space is a physical
entity.
Dave Bergman

It’s unfortunate. I believe an integration of the two models, an
electro-dynamic-elastic-space model, would be the minimal
sufficient model to explain e.p.s, their interactions, nuclei,
atoms, and molecules – in other words, all matter in the
universe. From my calculations, space is not that elastic;
perhaps their master equations can incorporate elasticity
without much ado. The reason we need elasticity is to explain
the origins of curved space-time, gravitation, and the strong
force between nuclei. From my view, they’re all the same
thing.

It’s also unfortunate the scientists at commonsensescience.org
are so verbose about their faith. “This inconsistency in modern
science is incompatible with a Judeo-Christian world view of
consistency where expediency is rejected and contradictions
are never allowed.” (from their website on contradictions in
science) They return to a religious perspective from time-to-
time – occasionally injecting a religious comment in their text.
Science does not need faith – science stands on its own.

I agree that a holistic/systems perspective is required to
understand elementary particles and fundamental processes.
But we don’t need to say “God” in every article to remind
readers God is in everything; God is simply in everything. God
will reveal Itself in due time. Pushing God only excludes
people.

I’ve tried to be open-minded about every theory I meet. Can
the same be said for the scientists I meet? I doubt it. Dismiss-
exclude dismiss-exclude – that’s the “welcome” I get from
conventional scientists. arXiv.org, a famous internet repository
for scientific articles, won’t let me publish there because you
have to publish in a refereed journal first. Refereed journals
won’t publish me because I’m too “speculative” (even when I
reference every equation). The most “encouraging” comment I
ever got for any of my ideas was the comment “interesting” in
reference to my charge-spin equivalence equation. I know I’m
not Einstein; I will never be Einstein; I can never be Einstein.
(Even Einstein was dismissed in his later years – for trying to
do what I’ve tried to do.) But just because I’m nobody – does
that justify dismissal and auto-rejection of an idea that is
perhaps more important than E = mc²?

The core equation in my theory is Y0lPX = Z0e²ωe. It means: elastic energy in space-time (which is mass) is impeded spinning electrical energy. Implicit in the equation is the fundamental importance of the dual-quality of space-time: elasticity-impedance. That’s why the title of this book is N and Ω.
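
For readers who like to check such claims numerically, here is a minimal sketch in Python. It assumes the definitions developed in Chapter Four (Y0 = ħ/(2lPtP), X = 2tPωm, ωe = 10.905ωm) and CODATA-style constants; it illustrates the bookkeeping, nothing more.

# Check Y0*lP*X = Z0*e^2*w_e for the electron.
hbar = 1.054571817e-34   # J*s
c    = 2.99792458e8      # m/s
Z0   = 376.730313668     # ohm, impedance of space
q    = 1.602176634e-19   # C, elementary charge
lP   = 1.616255e-35      # m, Planck-length
tP   = 5.391247e-44      # s, Planck-time
m_e  = 9.1093837015e-31  # kg, electron mass

Y0  = hbar / (2 * lP * tP)   # elasticity of space, ~6.05e43 N
w_m = m_e * c**2 / hbar      # angular frequency of the mass-energy
X   = 2 * tP * w_m           # extension (linear strain of space)
w_e = 10.905 * w_m           # electric spin rate, line five of the table

print(Y0 * lP * X)      # elastic energy: ~8.187e-14 J
print(Z0 * q**2 * w_e)  # impeded spinning electrical energy: same value

Both sides land on the electron’s rest energy, mc², which is exactly what the equation claims.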

Richard Feynman was a universally respected and admired
physicist. He developed a path-integral formulation of quantum
mechanics. He called it* jokingly QED (for quantum electro-
dynamics). QED is used in math to end a proof (a Latin
abbreviation of the same meaning). He joked that it was a
“theory to end all theories”. But the problem was that he was
taken too seriously. His joke became a core-pillar of physics.
Again, his math is not the problem – it’s simply a distraction
from the erroneous assumption of virtual shielding. He claimed
a cloud of virtual photons shields all electrons causing the
infamous charge deficit. *note: path-integral QM and QED are
not actually the same

I was able to generate three very different possible causes for
charge deficit. They’re all deterministic in nature. I have
tentatively accepted my third proposal in order to write this
book. Otherwise, I would have to employ an “≈” and no one is
ever happy to use one of those (except engineers;).

Must I butt heads wherever I go? Physicists reject me because I
borrow concepts from engineering which are automatically
dismissed as unimportant (Z0 is just a calculation – it has no
physical relevance). Engineers reject me because the ideas I
propose seem to have no relevancy to their domain (a Planck-
sized object? Immeasurable!). But because of my largely
mathematical formal training, I have a tendency to try to create
elegant concepts.

Elegance in science/engineering is something beautiful, simple,
and functional. Einstein discovered an elegant relation between
mass and energy. No wonder he saw the Divine in his
equations. Do I? It’s not the same feeling for me.. When I
discovered spin-charge equivalence, there was a giddy feeling
of discovering something fundamental. But I don’t see the
Divine in my core equation – I simply see an elegant universe:
if we accept my core premise that space-time is elastic, there
are only two things in this universe – space and energy. Matter
in all its forms, including life, is simply an elegant arrangement
of those two things.

Chapter Two – Convention’s Approach

The rational reason conventional physics refuses to
acknowledge my ideas simply is – there’s no point. To propose
and propose and propose then speculate a lot – is how they see
my ideas. It’s as if an amusing chimpanzee wandered into their
front yard. They laugh at his antics but wait for him to leave.
The conventional approach to space-time is very simple – it is
nothing and without qualities. The conventional approach to
gravity is twofold: curved space-time and quantum gravity. The
former was employed by diehard Einsteinians who strove to
vindicate general relativity. The latter is employed by
reductionists who strive to incorporate gravity into the standard
model. The conventional approach to the strong force between
nuclei is similar – it is mediated by gluons.

As mentioned above, the conventional approach to elementary
particles is called the standard model. It is an agglomeration of
ideas, assumptions, an enormous lattice of math, and series of
“vindicating” experiments.

No new physicist wants to work in general relativity. It is
treated as an oddity and orphan by convention. They’d rather
work on string theory which someday will be incorporated into
the standard model. String theory is the conventional approach
to creating a unified model of elementary particles.
Unfortunately, it is multidimensional and does not admit any
qualities of space-time. As with any branch of conventional
physics, it is supported by an enormous lattice of mathematics.

However, as can be seen by examining the research of
commonsense scientists and anyone else who chooses to break
away from convention, a lattice of valid math is not a precursor
for acceptance or even consideration by convention. If you
break with convention, no matter how perfect or complex your
math is – you will be rejected by convention. That’s one reason
why I don’t bother trying to develop a lattice of my own.
Another reason is that I’m not interested. Finally, I’m a
“concept guy” – I like working with ideas and visualizations.
No matter how much formal training in math I receive, I’m not
a mathematician. To me, math is a tool for modeling systems.
The only two proofs that hold interest to me are for Gödel’s
Theorem in logic and the Central Limit Theorem in statistics.
So for me, proofs in math are useful – but not very interesting.
If you want me to prove my core equations with a lattice of
arcane math, you’re asking the wrong guy. The best I can do is
try to understand the elastic-space model of elementary
particles and explain it to others. That’s why I’m writing this
book.

Don’t get me wrong, some math is elegant and fun to study in
its own right – complex variables, linear algebra, and systems
theory (just to name a few). But when I take a course in math,
I’m constantly distracted by two nagging questions: how can I
improve the elastic-space model of e.p.s? And, how can I
improve systems science and its utilization? (What elements, if
any, can I incorporate into the model or employ in systems
theory?) It should be clear what my priorities are:
understanding, validating, promoting the model, and systems
science. If there was ever a transforming force in my life, it
was studying systems science at MSU.

The lattice of math for quantum mechanics is based in linear
algebra. The same can be said for systems theory. So linear
algebra has wide applicability in science and engineering. I
have endeavored to develop a matrix formulation of the model,
but again – what is the point? It will be rejected by convention
regardless.

To me, validating the model is not done by creating a lattice of
mathematics. Validating the model is done by creating and
performing decisive tests – tests that clearly indicate a
preference for convention or the model. So for me, the pressing
obligation over the years was to develop decisive tests. (Please
forgive these digressions in this chapter; they help me write a
better and more interesting book, improve the model, and
hopefully improve systems science.) This book is clearly not
just about the model. It’s also about bringing vigor and
excitement back into science. It’s about the Socratic method.
It’s about the universality of systems principles. Please have
patience as you read.

My love of science and engineering should be clear to you by
now. But I am most emphatically NOT the “eternal student”
some may envision me as. I have this irrepressible urge to DO
something with the knowledge and understanding inside me.
And not just anything – I must lastingly improve the quality of
life for all human beings – in measurable ways. I consider it the
obligation of my existence.

But let’s get this straight right now, I’m not some glory seeker
or egomaniac. My core equation may be more important than E = mc², but I don’t deserve the Nobel Prize. That goes to the guy
or girl who validates the model with a decisive test. That goes
to the person who develops the lattice of math required by
convention. I’m just the idea guy. ;)

Perhaps one of the readers of this book will get physics back on
the reality track (as opposed to the elaborate fantasy now
subscribed to by convention). Perhaps convention will be able
to see their current insanity for what it is and award that person
the Nobel Prize. Then again, maybe physics will be doomed to
delusion for eternity .. Let’s hope not.

Chapter Three – A Decisive Test

In the process of writing, I have changed the chapter ordering
because of the importance of this concept. Science without
tests is fantasy. The following test is not a test of a core
equation, but it tests a corollary premise that e.p.s are mini-
dynamical systems which are disturbable – and that these
disturbances are measurable.

If two particles are identical in: identity (two electrons for
example), velocity, and position – they are identical. (This is
the conventional perspective – ignoring polarization.) They are
indistinguishable. It doesn’t matter how they got there; they
behave the same from there on. Regardless of how they
arrived, if you later measure some attribute, that value should
be the same with the same level of error/uncertainty. Unless..

Unless particles are dynamical systems with a kind of
‘memory’ for past disturbances. Imagine two electrons arriving
at the same place with the exact same momentum (at different
times of course) but just after a huge difference in disturbance.
If one arrived just after a small disturbance and the other
arrived just after a much larger disturbance, there should be a
larger uncertainty associated with the latter – if elementary
particles have ‘memory’. If elementary particles are dynamical
systems, they should exhibit larger uncertainties after larger
past disturbances. This is the essence of the test.

The setting is somewhat like the inside of a TV tube: it’s
evacuated with electron gun at one end and target at the other.
The EG is adjustable in intensity (number of electrons emitted
per unit time). The target, T, is a thin gold foil leaf which bends
easily under electron impact. The following is a baseline setup:
EG----------------------T

The EG is run at various intensities to measure deflection of T.
Perhaps a laser bounced off T could give better resolution. In
any case, we’re attempting to measure uncertainty in electron
momentum – which is the variation in deflection of T.
Theoretically,
∆p = ∆(mv) = 2(m∆v + v∆m) ≈ 2m∆v (1)
since ∆m should be negligible. Once calculated, this can be
compared to the measured uncertainty.
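
As a rough numerical sketch of the baseline calculation, in Python (the gun voltage and velocity spread below are illustrative assumptions, not prescribed values):

# Baseline expected momentum uncertainty, eq. (1): dp ~ 2*m*dv.
m_e = 9.1093837015e-31   # kg, electron mass
q   = 1.602176634e-19    # C, elementary charge

V_gun = 1.0e3                      # V, assumed accelerating voltage
v  = (2 * q * V_gun / m_e) ** 0.5  # non-relativistic beam speed, ~1.9e7 m/s
dv = 1e-4 * v                      # assumed fractional velocity spread

dp = 2 * m_e * dv   # eq. (1), with the dm term neglected
print(v, dp)        # compare dp to the spread measured off the foil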

The next setup is called “small disturbance” and introduces
three magnetic deflectors which disturb the beam by pure
reflection: a small magnetic force from MD1 (magnetic
deflector 1) deflects the beam off-target, MD2 over-corrects,
and MD3 re-places the beam axially:
MD2
EG-----MD1 MD3-T

The final setup is called “large disturbance” and introduces a
larger deflection by using stronger magnets (or more powerful
electro-magnets):
MD2
/\
/ \
EG-----MD1 MD3-T

The entire path length – from EG to T – is the same in setups two
and three. This is to minimize the ‘number of changed
variables’ between the two. That means the relative sizes of the
diagrams above are deceptive: the physical separation between
MD1 and MD3 is actually larger in setup two.

Applying Newton’s second law and the relationship between
speed and acceleration (speed is the integral of acceleration),
we find uncertainty in momentum is directly related to
uncertainty in force:
∆p ≈ 2∆Ft (2)
where F is the force imparted from MD3, t is the ‘interaction
time’ of an electron with MD3, and uncertainty in time is
negligible. Note that the force here induces an angular
acceleration (a turn) – not a linear acceleration – axial with the
beam. The only confounding factor is t, interaction time with
MD3: in the “small disturbance” setup – that time should be
smaller than in the “large disturbance” setup because there is
less magnetic flux over the same volume (the path of the
electron crosses less magnetic flux). So that factor will have to
be accounted for in (2).
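
A sketch of that comparison, with purely illustrative numbers for the deflector forces and interaction times:

# Eq. (2): dp ~ 2*dF*t, the spread inherited from the last deflector MD3.
F_small, t_small = 1e-15, 2e-9   # N, s: "small disturbance" setup (assumed)
F_large, t_large = 5e-15, 3e-9   # N, s: "large disturbance" setup (assumed)
rel_dF = 0.01                    # assumed 1% uncertainty in deflector force

dp_small = 2 * (rel_dF * F_small) * t_small
dp_large = 2 * (rel_dF * F_large) * t_large
print(dp_small, dp_large)

The test then asks: does the measured spread at T grow only as this deterministic estimate predicts, or does it grow with the size of the past disturbance itself?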

We are trying to calculate an expected uncertainty in deflection
of T as compared to the baseline. Those following convention
are free to employ the path-integral formulation devised by
Feynman and compare with above. Whatever you do, examine
your assumptions: if path-integral requires you to account for
uncertainty in forces and interaction times for all three
magnets, then Feynman is assuming elementary particles are
dynamical systems with random state variables. If that’s true,
then convention and determinism differ by only one
fundamental assumption: random state variables vs internal
oscillation.

There are benefits that ‘go with’ determinism which convention
conveniently ignores: the qualities of space-time constrain
elementary particles – these are natural and ‘flow’ from the
properties of space-time – as compared to convention’s attempt
with 11 dimensions and string theory (their dogged adherence
to reduction and probability becomes ludicrous and laughable).
The other benefit of determinism is that it makes sense. Why
appeal to probability when we have the systems approach?
Why automatically assign the label “random wave” to
elementary particles – based on appearance, ego, and historical
revulsion toward determinism? It boggles my mind – the
intransigence of convention. I’ve realized “a marriage” is not
the proper analogy of convention and probability-reduction.
The proper analogy is a baby clinging to their mother’s breast –
desperate for milk. The conventional adherence to probability-
reduction is infantile.

Chapter Four – The Core Equations

In this chapter, we take a deeper dive into the following table –
in order to deepen our understanding of space-time and energy:

m/(μ0ε0) = ħωm
hν ≡ h/(Tγ²)
((ħ/2)/tP)X ≡ (h/tP)C
Y0lPX = Z0e²ωe

ωe ≡ 10.905ωm
X ≡ Δl/l = m/(lPY0μ0ε0) = 2tPωm

The purpose of this book is to show they are not just equations
– that they have deep meaning about basic structures in our
universe. Anyone can scribble down a list of equations, but it
takes years of contemplation to truly understand the fabric of
space-time from scratch. What was my inspiration? In junior
high, a gym teacher mentioned to me that they thought
elementary particles were confined photons. They said they
could not prove it, but they were sure it was true. This planted a
seed in my mind – itching to explain and understand. After
years of paper research (at that time – no internet), I found one
man, published in Physics Essays, who seemed able to prove
auto-confinement. Of course, he is dismissed and ignored by
convention. Since then, I have given up trying to prove
elementary particles are trapped photons. But over the years, in
the process of trying to prove and understand, I have
discovered deeper and more fundamental concepts/relations.
Those are listed above.

The first line is Einstein’s discovery written differently. Some years ago, it was discovered that the speed of light squared is equal to the inverse of space-permeability times space-permittivity. And separately, that energy is equal to h-bar times omega, angular frequency. Everyone agrees that h-bar is the
fundamental unit of angular momentum. But the physical
meaning of omega – convention refuses to say. It’s simply the
“amount of h-bars”, a coefficient of h-bar, in particles –
according to convention. So line one is basically a rewrite of
mc² = E. What does it show? It shows that the energy in mass is
directly related to space-permeability and space-permittivity.
Those two – are components of Z0, the impedance of space.
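
The identity is easy to confirm with a few lines of Python, using CODATA-style constants:

# Line one: m/(u0*e0) = hbar*w_m is E = m*c^2 with c^2 = 1/(u0*e0).
import math
mu0  = 4 * math.pi * 1e-7   # H/m, permeability of space
eps0 = 8.8541878128e-12     # F/m, permittivity of space
c    = 2.99792458e8         # m/s

print(1 / (mu0 * eps0))  # ~8.98755e16 m^2/s^2
print(c**2)              # the same number, so m/(u0*e0) = m*c^2 = hbar*w_m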

Line two relates to Einstein’s special theory of relativity. It is a
required definition to keep things consistent in that respect. If
we divide both sides by h, Planck’s constant, we get frequency
is identically equal to the inverse of period times gamma
squared. Gamma comes from special relativity and is equal to
the square root of one minus (speed over light-speed) squared, i.e. γ = √(1-(v/c)²). It
is a dimensionless fraction which typically amplifies rest
values when we divide those rest values by it. Nu, frequency, is
a relativistic quantity – which means it is amplified by speed.
Period, T, is also a relativistic quantity. In fact, mass and
angular frequency, from line one, are relativistic quantities. We
normally write m = m0/γ, for instance, which means relativistic
mass is rest mass divided by gamma. We omit the term
‘relativistic’ to avoid confusion, but it is strictly required to be
precise in our statements.

Frequency is angular frequency over 2π, but convention
ascribes little or no meaning to frequency and period in this
context. Normally, period is the inverse of frequency – and this
is true for many many systems. But because time slows down
for speedy crafts/particles, and because time slows down near
strong gravity sources, we must rationally explain this
somehow. The causal deterministic perspective asserts they are
the same thing. In my theory, I explain them both as curved
space-time. Convention assigns no deep meaning to special
relativity. Convention typically explains time dilation with a
particle bouncing between plates: at rest, it has a fixed distance
of travel, frequency, and period; at high speed, it has a longer
travel path, lower frequency, and longer period. (The direction
of travel is parallel to the plates.) But this conventional
perspective sheds no light on the causal mechanism of time
dilation.

Convention avoids this ‘messy situation’ (having to define the
relationship between frequency and period above) by not
ascribing any physical meaning to omega, nu, and T. h and h-
bar (h-bar is h/2π) are most certainly not relativistic quantities
(they don’t change with speed). So if omega, nu, and T have
any physical meaning, the ‘only room to move’ (the only
quantities above that can be relativistic quantities) is in them.
We know for a fact that mass increases, energy increases, and
time slows down for speedy particles. We know for a fact that
h, h-bar, and charge don’t change for any speed. So again, if
omega, nu, and T have any physical meaning, they must be
relativistic quantities.

I propose omega-m (m for mass) is the angular spin rate of the
core of elementary particles. This is proposed to be a Planck-
sphere with diameter of Planck-length. The following is a list
of Planck dimensions:
Planck-length = lP = 1.61608*10⁻³⁵ m
Planck-time = tP = 5.39067*10⁻⁴⁴ s
Planck-speed = c = lP/tP = 2.99792*10⁸ m/s
They are conventionally considered to be absolute in the sense:
nothing can be shorter than Planck-length, no time can be
shorter than Planck-time, and no speed can be greater than c,
the speed of light. Anything shorter or faster is physically
meaningless.
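
The Planck dimensions are not free parameters; they follow from ħ, G, and c. The book’s slightly different digits reflect older values of the constants:

# Planck-length and Planck-time from hbar, G, and c.
import math
hbar = 1.054571817e-34   # J*s
G    = 6.67430e-11       # m^3/(kg*s^2), gravitational constant
c    = 2.99792458e8      # m/s

lP = math.sqrt(hbar * G / c**3)   # ~1.616e-35 m
tP = lP / c                       # ~5.391e-44 s
print(lP, tP, lP / tP)            # lP/tP gives back c, by construction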

h-bar/2 is the conventionally accepted value for spin of
elementary particles. That’s why it appears in line three above.
(Actually, once you define X and C, the rest become easy to
derive.) I arrived at line three by listing and simplifying eight
different ways to describe energy. The identity symbol is really
between X and C. Once you divide both sides by h and
multiply by tP, you find X ≡ 4πC. 4π is conventionally known
as ‘a solid angle’. Temporal curvature, C, through a solid angle
is linear extension of space. This is a definition.

Linear extension is defined in line two of the lower box. It
follows from the standard definitions of stress and strain in
engineering. Y is the standard symbol for Young’s modulus of
elasticity. It is required for an elastic model of space. In
practice, once Y is determined, X can be calculated (and vice
versa). The ‘per unit length’ must be decided and once it is, the
rest pretty much ‘flow’ from that decision; the choice of Y0 and
‘per unit length’ determine X here.

Y is normally given in newtons. So, in deriving/defining Y0, I
used that as a guide .. I feel like I’ve pretty much lost all of you
except the engineers at this point. Let me insert my original
derivation here.

Due to expansion of the Universe, space is under tension.
When a particle mutually annihilates with its anti-counterpart,
it's as if an ideal stretched string has been plucked – two
photons / e-m waves are emitted in opposite directions. Of
course, space has more qualities than just being under tension.
It has permeability and permittivity.
c² = τ0/λ0 (1) p3
wave propagation rate squared is tension reduced by mass per
unit length
c² = 1/(μ0ε0) (2) p250
the speed of light squared is the inverse of permeability times
permittivity
=> λ0 = τ0μ0ε0 (3)
So, a mass is an element of space (per unit length) under
tension (or internal pressure) subject to permeability and
permittivity. Perceptive readers should notice (3) is a clever
rewrite of E = mc². But it's more than that – it shows that
masses are a product of the three and only three qualities of
space – elasticity, permeability, and permittivity:
τ0 = Y(Δl/l) (4) p72
tension is linearly related to extension through Young's
modulus under the elastic limit
=> λ0 = Y0μ0ε0(Δl/l) (5)
(Page references are from Physics of Waves, Elmore and Heald, 1969, Dover.) .. Until now, we have not made the 'per unit length' explicit. Let's do that and assign the Planck-length; the mass per unit length becomes λ0 = m0/lP:
m0/lP = Y0μ0ε0(Δl/l) (6)
This is a place to start and we'll follow a similar convention when the need arises. Lambda has been replaced with the standard mass notation; now let's move lP to the other side:
m0 = (Y0lP)μ0ε0(Δl/l) (7)
Multiply by unity (where tP is the Planck-time):
m0 = (Y0lPtP)μ0ε0(Δl/ltP) (8)
Now, the first factor on the RHS is 'where we want it' (units are
in joule-seconds). And, the fact we had to 'contort' the
extension by dividing it by the Planck-time should not prove
insurmountable to deal with later. Finally, let's assume the first
factor is equal to the magnitude of spin of electrons and
protons, ħ/2:
m0 = (ħ/2)μ0ε0(Δl/ltP) (9)
By our last assumption, Y0 = ħ/(2lPtP) ≈ 6.0526*10⁴³ N. To
simplify and isolate the extension:
m0 = (ħ/(2c²))(Δl/l)(1/tP) (10)
=> (Δl/l) = (2c²tP/ħ)m0 = 2(tP/ħ)E0 (11)
So, the linear strain of space due to internal stress is directly
related to rest-energy through a Planck-measure. Later, if space
allows (pun intended), we will show that (11) reduces to an
even simpler form involving only two factors. If our
assumptions hold, the numerical values for (11), for electrons
and protons respectively, are approximately:
8.3700*10⁻²³ and 1.5368*10⁻¹⁹.
The values are dimensionless – per the definition of linear
strain. The meaning is: 'locally', space is expanded (linearly) by
the fractions above (assumed in each dimension). What exactly
locally means – will have to be addressed later. The numerical
value of Y0 is extremely high as expected. All this says is:
space is extremely inelastic. The numerical values for ∆l/l will
have to be investigated – perhaps as suggested in the previous
paper .. That concludes my original derivation of Y0 and X. It
may help to read it through several times noting the
assumptions.
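
A short Python rendering of the derivation’s endpoints may help; it reproduces Y0 and the eq. (11) strains (tiny differences from the printed digits come from updated constants):

# Y0 = hbar/(2*lP*tP) and eq. (11): dl/l = 2*(tP/hbar)*E0.
import math
hbar = 1.054571817e-34    # J*s
c    = 2.99792458e8       # m/s
lP   = 1.616255e-35       # m
tP   = 5.391247e-44       # s
m_e  = 9.1093837015e-31   # kg, electron
m_p  = 1.67262192369e-27  # kg, proton

Y0 = hbar / (2 * lP * tP)
print(Y0)                       # ~6.05e43 N: space is extremely inelastic

for m0 in (m_e, m_p):
    E0 = m0 * c**2              # rest energy
    X  = 2 * (tP / hbar) * E0   # eq. (11); also equals 2*tP*w_m
    C  = X / (4 * math.pi)      # temporal curvature, since X = 4*pi*C
    print(X, C)                 # X: ~8.37e-23 (electron), ~1.54e-19 (proton)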

I could ride the fence like convention and say there is nothing
inside Planck-spheres except energy, but if we think about it
very carefully – we are somewhat forced into a position of
proposing/accepting that there is structure inside. This is what I
have been avoiding for twenty-five years! :(

Twenty-five years ago, it was suggested to me to employ
Planck dimensions. And I believe I understood the danger at
that time. That’s why I doggedly pursued a model with
Compton dimensions. But it simply doesn’t work with an
elastic model of space. Dimensions cannot be so large; force
cannot be distributed over large areas because there is not
enough energy in e.p.s to balance that. The only model that
seems to work is a dual-sized model – Planck-size for mass and
Compton size for charge.

At this very moment of writing, it occurred to me – the
possibility of tori within tori. Bergman and his staff at commonsensescience.org have developed a quasi-torus model of
electro-dynamic flux. In the process of deriving the Planck-
sphere model of mass, a step in that process was proposing an
ultra-thin torus. But that torus has to be thinner than the
Planck-length for that model to work. That’s why I rejected it.

I must concede that it is possible e.p.s may be tori within tori.
But there are two reasons why I don’t subscribe to that
perspective right now: mass as ultra-thin tori requires extra
assumptions about geometry – assumptions we cannot prove
now. And, do you see Bergman’s staff willing to work with
me? No. So what’s the point of me trying to integrate models
when the other party refuses to collaborate? They are also
currently dismissed by convention.

I believe that model would require a similar onion-like
structure within the inner torus. (Part of my model of the core
is the proposal it is an onion-like spherical standing wave of
temporal curvature.) Except that it would have to be torii
within torii. Until this geometry can be proven to me and
Bergman’s staff becomes willing to work with me, I will defer
accepting this model. The simpler model is sphere within torus.

I suppose we are ready to study the fourth line in the core
equation table. Originally, there was not an equal sign. The
approximation comes from my discovery ħ ≈ Z0e². For about
fifteen years, I have stared at that ‘≈’ – trying to understand it.
The factor that makes equality is 10.905 on the right side. But
every attempt to explain/understand it required
additional assumptions. In order to write and publish this book,
I’m required to ‘take a stand’. To me, it’s better to take a stand
and be wrong than ride the fence for eternity. At least you have
a chance for progress. Riding the fence does not.
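
The stand is easier to motivate once you see where 10.905 comes from; a two-line check:

# hbar vs Z0*e^2: both have units of J*s (action).
hbar = 1.054571817e-34   # J*s
Z0   = 376.730313668     # ohm, impedance of space
q    = 1.602176634e-19   # C, elementary charge

print(Z0 * q**2)           # ~9.671e-36 J*s
print(hbar / (Z0 * q**2))  # ~10.905, the factor in line five of the table

Incidentally, ħ/(Z0e²) is exactly 1/(4πα), where α is the fine-structure constant; that is why the ratio is a pure number near 10.905.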

Can 10.905 be absorbed into any term on left or right? If on the
left, we must modify definitions of elasticity or extension (and
justify it). On the right, we basically only have two choices:
Z0e2 and omega. If we choose the former, we are implicitly
choosing some geometry. If we choose omega, we must
understand the consequences and any associated assumptions.

If electric flux is a spinning ring with outer dimension of
Compton diameter (h = mcλC, where lambda-C is Compton
wavelength), and if ωe = 10.905ωm, then tangential speed is
10.905c which is impossible – or is it? There are only two
choices at this point: allow flux speeds greater than c or change
dimensions. Since allowing speeds greater than c tends to
throw a ‘monkey wrench’ into things, we’ll go with the latter.
Let’s tentatively change the outer dimension of the flux ring to
λC/10.905. That way, the tangential speed is exactly light-speed
which agrees with the Bergman model.
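
Here is that resizing checked numerically for the electron, reading the ‘outer dimension’ λC/10.905 as the ring circumference (my reading of the geometry, not an established result):

# Tangential speed of the resized flux ring (electron).
import math
hbar = 1.054571817e-34   # J*s
h    = 2 * math.pi * hbar
c    = 2.99792458e8      # m/s
m_e  = 9.1093837015e-31  # kg

lam_C = h / (m_e * c)                 # Compton wavelength, from h = m*c*lam_C
w_m   = m_e * c**2 / hbar             # core angular rate
w_e   = 10.905 * w_m                  # flux spin rate
r = (lam_C / 10.905) / (2 * math.pi)  # radius, if lam_C/10.905 is circumference
print(w_e * r, c)                     # the tangential speed comes out at ~c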

Why do I bother to conform my model to Bergman’s? Again,
it’s because in all my searches, I have found only one complete
deterministic model of elementary particles which seems to
make any sense .. A member of the Faraday Group, of which I
am the founder, worked on this model independently a decade
or so earlier than Bergman’s seminal paper. But I won’t give
his name here to help readers discover this for themselves.

The following are websites for Faraday Group:
unc.edu/~gravity/
msu.edu/~micheals/
http://groups.yahoo.com/group/faraday_group/
Please join and contribute if you are so inclined. The group is
“an association of physicists and those interested in physics”.
They are most definitely NOT working on unifying my model
with anything else; everybody’s working on their own thing.
For current updates of this theory, please visit:
https://www.msu.edu/~micheal/physics/

I organized the table above the way I did for the following reason: everything in the top box is actually the same thing; they are all equal to energy – they are eight different ways of looking at energy! Mass has spin; it has frequency; it has period; it is curvature; it is extension; it is spinning flux.

The fact we can look at energy (at least) eight different ways is
not a testament to human ingenuity and insight – it’s a
statement about the elegance of our universe. Our universe is a
beautiful and wonder-full place. Just look up on a clear night.

Chapter Five – An Intuitive Description

True understanding does not come from regurgitation of facts;
it comes from internalizing concepts. It took me years to
understand the electromagnetic wave, the photon. And I still
cannot bridge the gap between photons and e.p.s if they are
indeed the same thing. My original proposal was that radiation
propagates through space by changing form: from
electromagnetic to gravitational and back.. In gravitational
form, the wave is much like a 3D soliton. The e-m part is well
understood by engineers. In my searches on the net, I could
find only one other who developed a similar model of photons.
But as I mentioned before, focusing on a trapped-photon model
of e.p.s is a ‘dead end’; physicists will auto-reject that idea
faster than you can say “reject”. It’s better to focus on a model
with the minimum number of assumptions. That way, there’s at
least a small chance for consideration.

I visualize the core with layers – much like an onion. In a way,
we must ascribe some structure to the inside – or there is no
way to differentiate between protons and electrons. Electrically,
there is no difference between a positron and proton; there is no
electrical difference between an electron and antiproton. The
difference is about mass. If we can accept that masses are
spherical standing waves of temporal curvature, then the
difference between masses is simply a difference in wave
number inside the sphere. The real (next) question becomes:
why are there only two stable (forms of) elementary particles?
(Why are there only two stable wave numbers inside the
sphere?) If I could answer that, to the satisfaction of
convention, I would have the Nobel Prize.

For me, a more important question is about the physical link
between core and ring of flux. At this point, I can only
speculate. If the core was distributed as a torus within a torus,
the physical connection between core and ring of flux would be
easier to visualize: one would be part of the other. Unless the
core generates the ring of flux (or vice versa), I see no other
way to comprehend it (if the true situation is sphere within
torus). The differing spin rates are somewhat alarming. It would
seem to make the physical connection somewhat tenuous. I
would expect the outer rate to be less than inner – if outer
‘dragged’ inner .. As you can see, even I – the theory’s
discoverer, have trouble comprehending it.

From the core equations, spinning flux is an equal expression
for energy of elementary particles. It is just as important as
core energy. For e.p.s, they are inseparable. Whether the core is
a torus or sphere, its spin rate is less than that of the flux ring.
It must ‘drag’ the flux ring in a way. Or else spin rates would
be the same. So imagine an elementary particle as a new
couple: the flux ring is the vibrant and energetic new bride; the
core is her dull and boring new husband. He drags his feet; he
slouches (boy, does this sound familiar;). He acts as if space
impedes his way ;). His bride zips around – she moves at the
speed of light. All he can do to ‘keep up with her’ is spin
around himself – watching her. But he can’t; space impedes his
very spin.

Of course, I don’t imagine e.p.s as ice skating newlyweds
(maybe an old married couple – hobbling around;). The
problem with trying to visualize the system is that we don’t
have good macroscopic analogies for the electromagnetic field.
We don’t have good macroscopic analogies for charge flux. It’s
difficult to connect to the model viscerally when we don’t have
everyday experiences to connect to it.

If e.p.s are tori within tori, imagine them as donuts within
donuts. The inner donut is very very thin and resides in the
center – inside the flux-outer donut. Inside the very very thin
inner donut – it has layers and layers. Now imagine them
spinning. But the spin rates are different. The inner donut lags
behind the outer donut. Its spin is impeded somehow.

Just today in a dream, an elderly black man asked me a kind of
‘trick question’: “A building is falling off a cliff. What holds it
up?” I replied “Gimme a minute; I need to think about this.”
Then he said “You’re supposed to answer these on the fly.” I
heard him talking to another guy about more questions –
something about complex numbers. (If you and I have the same
amount of imaginary numbers, what do we have? Answer: the
same complex number.) And then I realized what he was
looking for: “Oh I know what it is!” (He raised an eyebrow
toward me.) “Inertia! Inertia holds the building up!”

What keeps e.p.s spinning? Inertia. What keeps the disk drive
inside your computer spinning? (other than the motor to
overcome friction and accelerate the disk initially) Inertia.
Inertia is the quality of matter that resists acceleration (whether
it be linear or angular). The deep question that ‘no one’ has
been able to answer: what causes inertia? No one is in quotes
because many have tried to answer that question – just no one
has succeeded to satisfy convention with their answer. Some
time ago, I explained inertia as the smeared extension. But if
we think about mass as confined temporal curvature, inertia is
simply the lack of energy to add or take away from the core.
Accelerating a mass adds relativistic energy to the core;
decelerating a mass takes away. A particle at rest has a fixed
minimum amount of energy in the core.

What could be more elegant than that? Convention’s resistance
to positive change is like the inertia inside a baby – refusing to
grow up .. One of my theories of personality is about
‘emotional inertia’. When something makes us angry, really
angry, it takes time to cool down. When we love, truly love, it’s
usually for a long time. Our emotions have a kind of inertia. Of
course, I’ve watched my baby change from crying to laughing
in a blink of an eye, but adults rarely do this. I believe the
concept of inertia is important not just to physics and
engineering .. It could be said that the field of physics is all the
teachers, students, and researchers that care about physics.
Their collective belief system is important to the field. Their
resistance to change, their ‘philosophical inertia’, is important:
if a new idea is wrong, take time to confirm it – and reject it; if
a new idea is right, take time to confirm it – and accept it. The
central problem with accepting my ideas is not the lack of
math-lattice supporting them; it’s the fundamental
disagreement in approach. Convention has accepted the
random-wave model of matter. It uses reduction to break a
problem into parts – then tries to solve them separately.
Because of my training in systems, I have a holistic approach
to solving problems. Sometimes, problems are so complex, you
need the systems approach to solve them. In my book on
systems, I define complexity to be “the property of a system
with the following features: a generous frequency of distinct
types of components, a non-trivial arrangement of those
components – in order for the system to function nominally,
and some quantitative evidence of a system-wide synergy.”
Now strictly speaking, e.p.s are not complex structures, but
their behavior inside atoms and molecules suggests we need
the systems approach to understand them.

Convention cannot accept my ideas because it cannot integrate
them into the current framework – ideas clash. I’m not asking
them to discard reduction – just amplify it with the systems
approach. But I am asking them to take a hard long look at the
random-wave concept, compare it to the elegance of temporal
curvature, and decide. If they decide to keep random-wave,
that’s their business – their problem. They will find more and
more compelling evidence against it (such as exact atomic
control – we can do it now). Uncertainty in physics is
becoming a relic of the past (the uncertainty relations used to
hold prime importance in physics).

When I was in university, it was my conviction that problem
solving is a matter of perspective: achieve the right perspective,
the problem ‘solves itself’. What this means in practice is:
reformulate the problem in a clever way and the solution
usually becomes obvious. The book called Heuristics confirms
this. It’s an excellent resource for problem solving. I haven’t
finished reading it; it’s very ‘heavy’ mathematically. The first
two or three chapters can be digested by science students; try it.

After years of conventional problem solving, I’m convinced
the systems approach is absolutely required for some types of
problems: space systems engineering (in order to avoid the
Shuttle type disasters), human systems engineering (on a global
scale such as suggested by my book Humanity Thrive!), and
‘microscopic’ systems analysis. In the first two cases, we are
designing systems. In the last case, we are trying to understand
it. Microscopic is in quotes because the systems we are trying
to understand are much smaller than what’s viewable with a
microscope. That’s part of the problem. We cannot view them
directly. We can only infer properties from various kinds of
experiments. The only technique that has any chance of
viewing them directly is electron interferometry. And that
technique is currently in dispute .. So, a chapter on the systems
approach is advisable here.

Chapter Six – The Systems Approach

(This chapter was taken from Humanity Thrive! and applies to
the global human system. The principle can be applied to any
system.)

Boundary

What's inside the system, what's outside the system, and
what're the major components of the system? In answering
these questions, we address the system notion of boundary.
Let’s examine the human system. What's inside the human
system? Human beings, social organization (formal and
informal), and our infrastructure – are major sub-systems.
What are inputs? Those are energy, resources, ecologies that
impact our lives, and natural (non-living) systems that impact
our lives. What things “flow” between major sub-systems?
Those things are: resources, energy, information, feelings (can
be thought of as commodities that are exchanged), “control
signals”, and disinformation. What are outputs of the human
system? Those are wastes, heat, information, culture (both
constructive and destructive aspects), and things that affect
non-living systems and ecologies.

Aside: what's war in systems terms? War is the allocation of
resources, energy, information, feelings (such as aggression),
control signals, and disinformation – all directed at one goal:
domination. The “rational” idea behind war (as hoped by
governments waging war) is that long-term gains should
outweigh any short-term malady. Please refer to the chapter
below entitled: The Ends Cannot Justify the Means.

So, the system notion of boundary is the view that identifies the
system concerned: what is inside and out, what are major
components, what flows between, and what flows in and out.

Scope

There are three major aspects of the system notion of scope:
feasibility, customer requirements, and design responsibilities.
Tied together in question form: can you design a workable
system that satisfies customer and design requirements within
budget? As applied to the human system: can we re-design a
workable human system (as defined above) that satisfies
humanity and our design constraints within our allocated
budget (assume for the moment we have a design budget and
authority to re-allocate system resources to satisfy design
requirements)? This is an extremely difficult question when
dealing with complex systems. Frequently, the entire process of
“system design”: identify boundary, scope, maintenance
concerns, and reliability – must be repeated several times –
“filling out” details of sub-systems and flows, inputs and
outputs, re-answering the question associated with scope (with
every major change in system design, there is an associated
change in the question of scope), and the concerns below.

Maintenance

Expect to pay at least the same amount for maintenance – as
for “the original system”. In this case, the “end users” are
human beings themselves. If we can design and implement a
human system that satisfies (I would substitute the word fulfills
here) the vast majority of human beings, if we can maximize
quality-of-life while minimizing suffering, and at the same time
– not create a welfare state, we would have accomplished
something truly fundamental. Maintenance is the “upkeep” for
the designed system – to satisfy end-user requirements.
Frequently, the designed system does not take into account
many of those (it’s too expensive and difficult to satisfy every
end-user need) – and – it's difficult and sometimes impossible
to anticipate changes in end-user requirements. So, it’s a trade-
off: the more we spend on creating a “maintenance-free”
product, the less we are likely to spend on maintenance –
provided we have the foresight to anticipate the true needs of
end-users. There's risk involved – which brings up the next
topic.

Reliability

What is the risk/probability of failure of a major sub-system?
What is the cost of that particular failure? Multiply the two and
you get a simplistic projection of the relative cost. Let’s
consider a “simple” example: a telecommunication switch (the
device used to route local calls). The risk of total failure (where
the switch “goes down” – it cannot route any new calls and all
calls-in-progress are dropped) – is quite low: perhaps once in
ten years. The cost of that failure can be quite high – depending
on the local customer base and duration. Even considering
averages, the cost can rise into the millions. So, let’s say the
switch is down for three hours and costs the local telephone
company two million in lost revenue and bad publicity. Just
three hours in ten years. If you divide down-time by up-time
(over ten years) then multiply by two million, you get around
$70 which equates to about three hours of technician-time. So,
we're justified if we allocate three technician-hours for switch
maintenance (over ten years) to specifically avoid this kind of
problem. Actually, telephone companies allocate much more
than this to avoid total switch failure.
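
The arithmetic, as a sketch:

# The switch example as a simplistic expected-cost calculation.
hours_per_10yr = 10 * 365 * 24   # ~87,600 hours in ten years
down_hours     = 3               # one three-hour total failure in ten years
failure_cost   = 2_000_000       # $ in lost revenue and bad publicity

expected_cost = down_hours / hours_per_10yr * failure_cost
print(expected_cost)             # ~$68: roughly three technician-hours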

Let’s move the discussion toward the human system.
Catastrophic failure would be where every single human being
would die. Admittedly, the probability of that is extremely low.
Extremely low but non-zero. Some would say the cost of that
event would be “infinity”. A number (no matter how low) times
infinity is still infinity. So, the relative cost is still “just too
high”. So, anything we spend on preventing that event – is
money well spent.

A dynamical system is one in which past inputs affect present
outputs or system state. Reliability usually refers to the domain
of systems concerns – which reflect upon system stability.
Stability refers to the behavior of system state over time. Is it
restricted? Or does it vary madly – threatening to destroy the
system itself? (Reliability also refers to dependability or
consistency of good system performance. If a car does not start,
has repeated mechanical breakdowns, or exhibits
uncontrollable vibrations while driving – we say it is
unreliable.)

..In systems theory, much emphasis is put on controllability and
observability – which are pretty much – exactly what they
“say”: a system is controllable if there are finite inputs which
“drive” (or push) system state to desired specifications – and –
a system is observable if there is a set of measurable outputs
which represent the state of the system. State variables are
those which represent system structure. When we are designing
a system “from scratch”, these are all known and explicit.
When we are trying to understand a natural system “from the
outside”, we have to make reasonable guesses about state,
inputs, outputs, and attempt to determine if the system is
observable and controllable..

In systems analysis, there are stable systems and there are
unstable systems. Many of the public will recall a famous image of wind shear causing increasing oscillations – in this case, twists. The flexible bridge here is “the system” and the
constant wind shear – the input. The system under the force of
gravity (only) is stable. The system under gravity and wind
shear – unstable.

There are many analogous stresses/inputs on the human
system. Hunger can be thought of – as a kind of stress.
Overpopulation causes hunger which is a stress on the human
system. Disease vectors cause stress on the human system.
Changing weather patterns cause stress on the human system.
Disruption of food supply chains causes stress on the human
system. Lowering the quality of education causes stress on the
human system.

The point of this chapter is to introduce systems concepts,
apply them cursorily to the human system, and provide a
launching point for other ideas below.

(Again, this chapter was taken from Humanity Thrive! The last
sentence above applies to that book.)

Chapter Seven – The Website

What came first: the chicken or the egg? ;) The ‘inspiration’ for
this book was the following website. After making it, I realized
I would have to write this book. Before the website, I wrote
another book called Gravitation and Elementary Particles. Parts of that book are used in this one, but it’s largely mathematical and would probably confuse most readers. Many details from the website can be found above, but it summarizes the theory well – hence its inclusion.

Temporal Curvature and Elementary Particles
Sam Micheal, Faraday Group, 30/OCT/2008

This theory is based on the assumption space is an elastic medium which can be distorted under extreme force. We define a new quantity Y0 ≡ ħ/(2lPtP) ≈ 10⁴⁴ N which we call the elasticity of space. Another new quantity is the linear strain of space which we call the extension: X ≡ m/(lPY0μ0ε0) = 2tPω. A related quantity is temporal curvature: C ≡ X/4π = tPν. With these new definitions, it can be shown all significant attributes of elementary particles are interrelated: energy in mass is energy in extension which is the same energy in temporal curvature which is spinning charge. The two qualities of space, elasticity and impedance, relate the significant attributes of elementary particles.

Time dilation aboard a speedy craft is an accepted fact. Time
dilation near strong gravity sources is also an accepted fact. For
the moment, let’s ignore spatial curvature near those. Let’s
focus only on temporal curvature. Time slows down the most at
the maximum of curvature. This could be the center of a planet,
star, or neutron star. In a circular orbit, temporal curvature is
constant. In a plunging orbit, temporal curvature goes from
some fixed level to maximum then back to the fixed level
(depending on starting position). Analysis in gravitation is
about trajectories or geometry. The two trajectories listed above
are orthogonal in that any trajectory can be made from a linear
combination of both. This is essentially a proof that gravitation
can be analyzed exclusively in the context of temporal
curvature.

In much the same way, the mass component of elementary
particles can be treated as a manifestation of temporal
curvature. Energy in mass can be viewed as energy in temporal
curvature. This is especially convenient when we consider
relativistic effects: relativistic energy is simply an enhancement
of rest energy (in temporal curvature).

Elementary particles have three components of energy: two
that are non-relativistic and one, mentioned above, which is a
relativistic quantity. The non-relativistic components are spin
and electric flux. (The facts that two of three are non-
relativistic quantities, their measured levels, and the ten stable
elementary particles – are not debated here. I believe a full
understanding of temporal curvature and appreciation of
impedance will illuminate all these facts.)

Some years ago, I discovered a relationship between charge
and spin that has been ignored and dismissed:
ħ ≈ Z0e² where Z0 is the impedance of space.
Spin is impeded charge (moment). There is a kind of
equivalence between spin and charge (moment). If we ignore
the numerical approximation, total energy can be expressed as:
ET = E0/γ + E0/2π + E0/4π where γ = √(1-(v/c)²)
and where the first term is energy in temporal curvature,
second – energy in electric flux, and third – energy in spin.

[Figure: Energy Distribution at Various Speeds – pie charts of the energy in temporal curvature, electric flux, and spin at v = 0, 0.25c, 0.5c, 0.75c, and 0.99c]
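
The pie charts are easy to regenerate. A sketch, using the book’s γ = √(1-(v/c)²) so that each component scales as written above:

# Fractions of total energy at the speeds shown in the figure.
import math
for v in (0.0, 0.25, 0.5, 0.75, 0.99):   # speed as a fraction of c
    gamma = math.sqrt(1 - v**2)
    parts = [1 / gamma,           # temporal curvature, E0/gamma
             1 / (2 * math.pi),   # electric flux, E0/2pi
             1 / (4 * math.pi)]   # spin, E0/4pi
    total = sum(parts)            # E_T in units of E0
    print(v, [round(p / total, 3) for p in parts])
# at rest: ~(0.807, 0.128, 0.064); at 0.99c temporal curvature holds ~97%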

The next serious question is about the confinement mechanism
– what keeps these “bubbles in space-time” from simply
dissipating? What holds them together? I propose a balancing
of forces: the extreme inelasticity of space with an incredible
internal temporal pressure wave. The elasticity of space can be
calculated with a couple assumptions: Y0 = ħ/(2lPtP) ≈
6.0526×10^43 N. If elementary particles are Planck-sized objects, they
must have internal pressure that balances that extreme force. I
propose a spherical standing wave of temporal curvature –
much like an onion in terms of structure. The rest energy of
elementary particles is small, but pack that energy into a very
small space and you have a good candidate for the confinement
mechanism. Again, the issue here is not the why of ten
elementary particles. I believe that why can be answered when
we fully understand temporal curvature and appreciate the
impedance of space.

The only extended component of elementary particle energy is
electric flux. The other components are confined to the Planck-
sphere.* This could explain the double-slit phenomenon of
self-interference. The electric flux of elementary particles is not
unlike a soliton – a solitary standing wave of electric energy. It
is not unreasonable to propose this is the mechanism of self-
interference. This idea could be tested in simulation and
verified with real particle beams of various configurations. *Of
course, there must be “residual” extensions of spin and
gravitational energy – otherwise, spin and gravitational
interactions (between elementary particles) would not be
present. (As I understand it, spin is manifested via magnetic
moment which is a result of spinning charge. Gravitation must
be an extension of temporal curvature beyond the Planck-
sphere. The proportion of extended energy must be dependent
on number and amplitude of waves inside.) ..An idea I
discarded around twelve years ago was the following.
Temporal curvature acts as an energy reservoir for oscillating
flux and spin. This idea was developed to account for tunneling
behavior. Preliminary calculations were not encouraging
(energy in electric flux must be increased to compensate for
“sinusoidal deficit” – in order to maintain Bohr dimensions.)
Perhaps tunneling can be explained in another semi-classical
way or perhaps there is indeed some oscillation of electric flux
and spin. Further work is required.

This theory has been developing for about twenty-five years –
very slowly at first for three reasons: difficulty in visualization,
ironing out seeming inconsistencies, and my reluctance to
employ Planck-size. Visualizing standing waves of temporal
curvature is not easy. There were apparent inconsistencies in
the relativistic domain at first, but these disappear with proper
definitions (ν ≡ 1/Tγ2). Around twenty-five years ago, it was
suggested to me to employ Planck-size but the fact theory
becomes unverifiable when you do that – impelled me to
pursue other avenues at first (Compton dimensions). The
theory “took off” when I took a course in electromagnetism
around fifteen years ago. This is when I discovered the
relationship between spin and charge. And only very recently
did I give up on Compton dimensions in preference for Planck-
size. It took over twenty years to precisely define elasticity – in
part – because of my reluctance to employ Planck dimensions.

Once we arrive at a suitable model of elementary particles –
one with appropriate arrangement of spin and flux – creating
nuclei, atoms, and molecules (as in simulations) will become
child’s play.

The purpose of this perspective is to present a plausible and
elegant picture of elementary particles – that they are stable
vibrations in space-time. From this perspective, it can be
shown the origin of uncertainty is not a probability density
function – but the vibratory nature of elementary particles
themselves. Energy-uncertainty can be shown to be bounded
by a linear function of position-uncertainty – alone. This
contrasts the conventional perspective which asserts energy and
time uncertainty are complementary and interdependent
random variables – decreasing one increases the other and vice
versa.

No theory is any good – unless it is testable – and a decisive
test is proposed – comparing convention against this more
elegant perspective. It is proposed elementary particles are
“mini dynamical systems” that are disturbable – and that those
disturbances are measurable.

For a more thorough discussion and development of these ideas
– please download a copy of my latest book: Gravitation and
Elementary Particles<link>.

Addendum 1:
“The Universe in Fourteen Lines” ;)

E ≡ Y0lPX ≡ Y0ctP4πC ≡ mc² ≡ hν ≡ h/(Tγ²) ≡ hC/tP ≡ ħω ≈ Z0e²ω

ΔEΔt ≥ ħ/2; ΔpΔx ≥ ħ/2; ΔXΔt ≥ tP; ΔE > -c1Δx + c2; ΔX > -c3Δx + c4

I was told years ago that “it’s useless to stare at equations for
hours at a time” – but insights can be garnered by constructing
lists of identities such as the above – “proving” things that perhaps
were only suspected before. Reading above in English: energy
is (the force in) the elasticity of space through Planck-length
causing an extension – which is – that same force through
Planck-time causing temporal curvature – which is – mass
times the speed of light squared – which is – Planck's constant
times frequency – which is – Planck's constant divided by period
– which is – Planck's constant times temporal curvature divided
by Planck-time – which is – the fundamental unit of angular
momentum times angular frequency – which is approximately
equal to the impedance of space times charge-moment times
angular frequency. c in the third expression is a scaling factor to
keep units correct (c is the speed of light). Gamma in the sixth
expression is a
relativistic scaling factor. E, X, C, m, ν, T, and ω are all
relativistic quantities. Three fundamental identities were
garnered in the process of constructing the above – insights
that I suspected but could not easily prove:
mass is energy stored in temporal curvature – Y0(4πtP/c)C ≡ m,
energy through time is energy in curvature – EtP ≡ hC,
energy through time is spin causing extension – EtP ≡ (ħ/2)X,
and there is a kind of equivalence between the elasticity of
space and the impedance of space (a relation I’ve been looking
for – a long time) – Y0lPX ≈ Z0e²ω.
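(A numeric spot-check of the chain – my sketch – for the
electron; the first two comparisons are exact by construction,
the last carries the ≈:)

import math

hbar, h = 1.054571817e-34, 6.62607015e-34
c, Z0, e = 2.99792458e8, 376.730313668, 1.602176634e-19
l_P, t_P = 1.616255e-35, 5.391247e-44
m = 9.1093837015e-31           # electron rest mass, kg

E = m * c**2
omega = E / hbar
X = 2 * t_P * omega            # extension
C = X / (4 * math.pi)          # temporal curvature
Y0 = hbar / (2 * l_P * t_P)    # elasticity

print(E, Y0 * l_P * X)         # equal: elasticity through lP causing X
print(E, h * C / t_P)          # equal: hC/tP
print(E / (Z0 * e**2 * omega)) # ~10.9 -- the approximate spinning-charge link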

Strictly speaking, force through time causes temporal curvature
– which is mass. Energy through time is energy in curvature.
Energy through time is also spin-moment causing spatial
extension. The final relation deserves special explanation. It
shows there’s a correspondence between three sets of
analogous quantities. Elasticity is to length as impedance is to
charge-moment; length is to extension as charge-moment is to
angular frequency; elasticity is to extension as impedance is to
angular frequency. Extended space is spinning charge. The
relation shows how equally important elasticity and impedance
are. ..Some years ago, I abandoned an oscillatory model of
elementary particles – where energy in charge-spin oscillated
with energy in spatial-temporal curvature – I could not prove it
(editors objected: mere speculation). So I attempted to cut my
assumptions to minimum – cutting away parts of the model that
were not absolutely essential. The current model is plausible
and feasible. The more I investigate it, the more it seems to
make sense. We just need to work on modeling flux and spin
(such as proposed by Bergman).

Let’s rewrite the above – just keeping the absolute essentials:

E ≡ m/(μ0ε0) ≡ Y0lPX ≡ (h/tP)C ≡ ((ħ/2)/tP)X ≈ Z0e²ω

where μ0 is the permeability of space, ε0 is the permittivity of
space, and Z0 ≡ √(μ0/ε0) ≈ 377 Ω.

m ≡ (h/(tPc²))C

Y0lPX ≈ Z0e²ω

Energy in mass;
is: elastic force through distance causing extension;
is: energy over time causing temporal curvature;
is: spin energy over time causing extension;
is: spinning charge.
Curved space-time is mass is spinning charge; it’s all the
same energy – just different manifestations of it. Line two:
mass is energy over time causing temporal curvature; mass is
temporal curvature. Line three: there is a kind of equivalence
between the elasticity and impedance of space.

Addendum 2:
A Note About Approximation

Many will dismiss this theory for the simple reason I use an
approximation above between spin and charge energy. ..After
some contemplation, we could think of the difference (ratio)
between charge and mass energy (.091701) as a lag in phase
(a phase difference) between them. If we represent energy in
mass as cos²θ, the phase lag for charge energy is -1.26314 rad.
Since mass is a standing wave of temporal curvature, we
cannot detect this phase lag directly – we can only calculate it.
This seems better than summoning a cloud of virtual particles
to explain charge deficit. Of course, the why of charge energy
phase lag still needs to be explained. ..Yet another way of
looking at charge deficit is with vectors (we assume a specific
geometry with this perspective): two electric vectors with equal
magnitude √Z0·e lie in the x-y plane. Their cross product is a
vector in the z-direction with magnitude Z0e²sinθ where θ is the
angle between the electric vectors. Since sinθ = .091701, θ =
.09183 rad = 5.26149° (the angle is not unique: π-.09183 also
works). Again, if we adopt this approach, we need to explain
why. Finally, a third approach to explaining the factor 10.905 is
to propose a different spin rate for electric flux: if we let ωe =
10.905ωm, then ħωm = Z0e²ωe. As with the others, if we adopt this
approach, we must explain why it’s preferable. I prefer the
simplest approach which requires the least number of
assumptions – one that jibes with reality. For example, if the
final approach does not agree with measured magnetic
moment, we must throw it out.
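(The three reconciliations above can be checked numerically –
my sketch, writing r for the ratio Z0e²/ħ:)

import math

hbar, Z0, e = 1.054571817e-34, 376.730313668, 1.602176634e-19
r = Z0 * e**2 / hbar               # ~0.091701, the 'charge deficit' ratio

print(math.acos(math.sqrt(r)))     # phase-lag reading: ~1.263 rad
print(math.asin(r))                # vector reading: ~0.0918 rad
print(math.degrees(math.asin(r)))  # ~5.26 degrees
print(1 / r)                       # spin-rate reading: omega_e/omega_m ~10.905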

Addendum 3:
A Tentative Complete Model

Based on the third assumption above and its qualifications, let’s
tentatively assume it’s correct and complete the model:

m/(μ0ε0) = ħωm = ((ħ/2)/tP)X ≡ Y0lPX = Z0e²ωe ≡ (h/tP)C
where ωe ≡ 10.905ωm
X ≡ Δl/l = m/(lPY0μ0ε0) = 2tPωm
Elementary particles are dual-sized structures with
corresponding dual-spin. Space-time curvature is largely
confined to a Planck-sphere whereas electric flux resides
largely within Compton dimensions. Inner spin is ħ/2 with rate
ωm; outer spin is Z0e² with rate ωe. The link between them is the
elasticity/impedance of space (Y0/Z0 = 1.60661×10^41 A·C/m).
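(A quick check of the dual-spin bookkeeping – my sketch; ωm is
taken as E0/ħ for the electron, ωe as 10.905·ωm per the model
above:)

hbar, c = 1.054571817e-34, 2.99792458e8
l_P, t_P = 1.616255e-35, 5.391247e-44
Z0, m_e = 376.730313668, 9.1093837015e-31

Y0 = hbar / (2 * l_P * t_P)
omega_m = m_e * c**2 / hbar  # inner (curvature) spin rate, ~7.76e20 rad/s
omega_e = 10.905 * omega_m   # outer (flux) spin rate, ~8.47e21 rad/s

print(Y0 / Z0)               # ~1.606e41 A*C/m -- the link quoted above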

Sam Micheal, 30/OCT/2008


micheal at msu dot edu
Chapter Eight – Uncertainty, Part One
A New Uncertainty Relation for Conventional Physics
Salvatore G. Micheal, Faraday Group, micheals@msu.edu,
11/19/2007

A new uncertainty relation is derived with the following
parameters: extension of space (linear strain), time, and
Planck-time. An argument on its fundamental nature
and meaning is presented. Two related aether theories
are discussed.

For those unable to divorce themselves from probability (or
those unable to tolerate even a trial separation), the following
train of thought was doggedly pursued to its 'brilliant
conclusion' .. Near the end of the previous paper on temporal
curvature, a relation between the extension of space (a crude
measure of spatial curvature due to the presence of mass) and a
measure of temporal curvature was developed:
X = 4π(tP/T) (1)
where subscripts are omitted for clarity; extension is the ratio
of Planck-time over period through a solid angle
One expression of conventional uncertainty is:
∆ω∆t ≥ ½ (2)
uncertainty in angular-frequency times uncertainty in time is
greater than or equal to one-half
With a little algebraic manipulation, this can be rewritten:
4π(∆t/∆T) ≥ 1 (3)
Notice the form of (3) is almost the same as (1)! Now, let's
examine things from a conventional perspective. Since
extension is directly related to energy, there's some uncertainty
associated with it:
∆X = ∆[4π(tP/T)] (4)
= 4π(tP/∆T) (5)
=> ∆X∆t/tP = 4π(∆t/∆T) (6)
=> ∆X∆t/tP ≥ 1 (7)
=> ∆X∆t ≥ tP (8)
uncertainty in spatial extension due to presence of mass times
uncertainty in time is greater than or equal to Planck-time
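(A minimal sketch – mine – taking the substitutions ∆ω = 2π/∆T
and ∆X = 4πtP/∆T at face value; at the bound ∆ω∆t = ½, the
product ∆X∆t lands exactly on Planck-time for any ∆T:)

import math

t_P = 5.391247e-44                # Planck-time, s

for dT in (1e-20, 1e-15, 1e-10):  # arbitrary period-uncertainties, s
    domega = 2 * math.pi / dT     # reading of delta-omega used above
    dt = 0.5 / domega             # smallest dt allowed by (2)
    dX = 4 * math.pi * t_P / dT   # relation (5)
    print(dX * dt / t_P)          # always exactly 1.0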

Planck-time is the lower-bound for uncertainties in space-strain
and time. The purpose of this paper is not to 'bend to
convention' – but to present things in a way that is acceptable
to convention so that the previous papers (and any subsequent)
are not rejected out of hand.

The author prefers deterministic and non-reduction (holistic)
views of quantum behavior. I say this not out of ego but out of a
sentiment similar to that of Einstein and de Broglie: our lack of full
understanding forces us to employ statistical/probability
analysis. Then we further justify that by unequivocally stating
measurable entities have some inherent uncertainties associated
with them. Of course there are errors associated with every
measurement; of course there are always limits on our
precision. The author does not argue against fundamental limits
on time and space. It is the source of those limits that I
question; it is the source of those 'inherent uncertainties' that I
need to understand.

I have a natural tendency to view things in terms of electric and
magnetic flux because those can easily be visualized. Even if a
time-varying 3D vector field is required, again, that can easily
be visualized. In physical systems – energy form, location, and
flow – are critical to understanding them. I have a natural
tendency to attempt to visualize that also. But when there are
gaps in our understanding, there are gaps in the visualizations
which automatically beg to be filled.

Gravity can be visualized in the approach above. Even
exchange of virtual particles and space-time foam can be
visualized. But that does not validate them. It should be clear
why quantum electrodynamics / quantum field theory is
distasteful to me. You cannot question the math, but you can
question the assumptions and techniques. In the first place, it's
not a holistic approach. It wasn't invented to explain gravity or
unify forces. The over-dependency on virtual particles is the
second major issue. Take that away and what are you left with?
A lattice of arcane math with questionable applicability.

What is the source of uncertainty in (8)? Is it space-time foam
or some inherent uncertainty? Is that uncertainty based on
some probability density function (which is truly random – the
conventional approach) or on some internal oscillation? Let's
examine relation (1) again:
X = 4π(tP/T) (1)
Let's rewrite it in terms of Planck-time:
XT/4π = tP (9)
∆X∆T/4π = tP from (5)
∆X∆t ≥ tP (8)
Convention would reject the second line as meaningless
without a ≥ symbol. They might accept uncertainty in
extension being inversely proportional to uncertainty in period,
but they would see the statement as incomplete without the
conventional relation (we are 'born, bred, and raised' to
acknowledge a lower bound on uncertainty). Convention might
find the first line interesting but not ascribe any deep meaning
to it. I doubt they would see the relationship between temporal
and spatial curvatures – even if a conventionalist had derived
and presented the equation. They would focus on the
assumption of internal oscillation and reject any conclusions
based on that. After all, we did not precisely define uncertainty
in energy: 'Amplitude is associated with the variation in rest-
mass/energy.' Even if we did precisely define it (we might
make an attempt later), there is the issue of validation. In any
case, the physics 'atmosphere' is extremely hostile toward
determinism and any aether-like associated proposals (a few
will be discussed below). The third line is important to
convention – if they want to unify gravity with
electromagnetism (with or without quantum field theory and
virtual particles). I'm certain that it can be derived within the
conventional framework. I'm certain that it holds fundamental
importance.

A dear associate of mine, Mayeul Arminjon, has developed a
model of space as a 'super-fluid ether'. It's intriguing, but space
behaves more like a highly elastic solid with 'strain bubbles' as
'matter waves' (G S Sandhu). But even he misses the mark in a
way: he defines elasticity to be 1/ε0 (with corresponding
inertial constant μ0). This allows him to derive Maxwell's
equations by correspondence of form (correspondence to stress
equations). That's a bit contrived to me. If he had started with a
mechanical definition of elasticity (such as in the previous
paper) and derived Maxwell from that, I'd find him more
believable. He also 'disproves' the primary postulates of special
and general relativity thereby rejecting both theories – only
later to state 'at higher velocities and corresponding high
energy interactions, adequate study and analysis of the
associated phenomenon can only be made by using the
techniques of special theory of relativity and Wave Mechanics.'
(p25, Elastic Continuum Theory of Electromagnetic Field &
Strain Bubbles), so he's a little inconsistent and tautological.
Perhaps some of his ideas can be salvaged and incorporated
into an integrated model of space-time and elementary particles
– without tautology and inconsistency.

Relation (8) will be dismissed because it was derived with
unconventional assumptions. But the associated insights are
profound and far reaching. If there's an equivalence between
spatial and temporal curvatures, gravity can be analyzed
exclusively as a distributed temporal distortion, energy can be
stored there, and this opens the door to a fully unified and
integrated model of space-time and elementary particles.

Chapter Nine – Uncertainty, Part Two

The Nature of Uncertainty
Salvatore G. Micheal, Faraday Group, micheals@msu.edu,
11/22/2007

Position-momentum uncertainty is analyzed to be
dependent only on two uncertainties: position and
energy. That relation is found to be additive and a
fourth uncertainty relation is discovered. The nature of
uncertainty is discussed.

As of this moment, there are three fundamental uncertainty
relations: energy-time, extension-time, and momentum-
position:
∆E∆t ≥ ħ/2 (1)
∆X∆t ≥ tP (2)
∆p∆x ≥ ħ/2 (3)
Let’s examine the last in detail. First, we need some basic
relations:
p ≡ mv, p = mv, and ∆(ab) = 2(b∆a + a∆b) (4)
Where the first is the standard definition of momentum (a
vector identity), the second is the scalar version of that, and the
third can be verified by the reader (make an assumption about
symmetry).
So, ∆p = ∆(mv) = 2(v∆m + m∆v) (5)
=> (v∆m + m∆v)∆x ≥ ħ/4 (6)
For simplicity, let initial time and position equal zero:
(v∆m + m(2((1/t)∆x + x/∆t)))∆x ≥ ħ/4 (7)
Since mass is directly related to energy
and ∆E/(ħ/2) ≥ 1/∆t (1),
(v∆E/c² + 2m((1/t)∆x + x∆E/(ħ/2)))∆x ≥ ħ/4 (8)
which is of the form:
(b∆E + c∆x)∆x ≥ ħ/4 (9)
where b and c are functions of v, m, t, and x (c here is not the
speed of light).
=> b∆E∆x + c(∆x)² ≥ ħ/4 (10)
Now, adding something positive on the left does not change the
direction of the relation (but we do lose some information –
with proper choice of a, we’re ‘completing the square’):
a(∆E)² + b∆E∆x + c(∆x)² > ħ/4 (11)
=> (a1∆E + a2∆x)² > ħ/4 (12)
=> |a1∆E + a2∆x| > √ħ/2 (13)
=> a1∆E + a2∆x > √ħ/2 (14)
since a1 and a2 are positive functions of v, m, t, and x (with
proper choice of coordinates). (a2 = √c, a1 = b/a2, a = a1² = b²/c,
b = (v/c² + 4mx/ħ), and c = 2m/t.) (15)

This implies that the ‘momentum-position’ uncertainty relation
is actually an energy-position uncertainty relation that is
linear-additive – not multiplicative!

And since energy is directly related to extension,
a1∆X(ħ/(2tP)) + a2∆x > √ħ/2 (16)

This gives us four fundamental uncertainty relations: two that
are multiplicative and two that are additive; one that is bounded
below by ‘Planck-energy’, another that is bounded below by
Planck-time, and two that are bounded below by linear
functions of position-uncertainty:
∆E∆t ≥ ħ/2 (1)
∆X∆t ≥ tP (2)
∆E > √ħ/(2a1) – (a2/a1)∆x (17)
∆X > tP/(√ħa1) – (2tPa2/(ħa1))∆x (18)
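(To put numbers on (17): a sketch of mine with hypothetical
sample values – electron mass, with v, t, and x chosen
arbitrarily for illustration – using the definitions in (15):)

import math

hbar, c = 1.054571817e-34, 2.99792458e8
m = 9.1093837015e-31           # electron mass, kg
v, t, x = 1e6, 1e-16, 1e-10    # hypothetical speed, time, position (SI)

b = v / c**2 + 4 * m * x / hbar   # from (15)
cc = 2 * m / t                    # the 'c' of (15) -- not the speed of light
a2 = math.sqrt(cc)
a1 = b / a2

print(math.sqrt(hbar) / (2 * a1))  # intercept: bound at dx = 0, ~2e-19 J
print(a2 / a1)                     # slope of the linear bound, J per metre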

If we rewrite (1) and (2) to isolate energy and extension and
think in terms of distortions in space-time, uncertainty in
energy/extension is bounded below by linear functions of
uncertainty in 1/t. This highlights two things: uncertainty in
time directly ‘forces’ a lower bound on energy/extension – and
– the reciprocal nature between space and time. ‘Random
distortions in space’ provide a lower bound on uncertainty;
concurrently, ‘random distortions in time’ provide a lower
bound on uncertainty. No wonder the ‘space-time foamers’ feel
justified in their approach. (Actually, energy-time uncertainty is
not bounded by linear functions – UEt is bounded below by
hyperbolic functions in time. We could have applied the same
approach above to UEt (completing the square), but those linear
functions would not be unique (because we are free to choose
an infinite variety of a-s and c-s, the squared term coefficients).
So the nature of energy-position uncertainty is fundamentally
different than energy-time uncertainty. One is bounded by
linear functions; the other is bounded by hyperbolic functions.
That ‘right there’ is evidence against space-time foam (because
uncertainty is not symmetric between space and time). (Or, that
is evidence of a weak-link between space and time.) The other
‘juicy’ piece of information we get from above is that energy-
position uncertainty suggests negative energy states are
bounded above by symmetric linear functions of position
uncertainty. (Mirror the linear functions, shaped like a delta,
through the position axis.) Negative energy is suggested by a2 =
-√c being a valid solution in (15) above.)

It appears uncertainty has one of three sources: internal
oscillation, space-time foam, or inherent randomness. We have
not made explicit – exactly how internal oscillation could
exhibit itself in terms of the fundamental relations above. That
is our next task.

If the source of UEt is exclusively internal oscillation, the
simplest natural model is sinusoidal:
∆E = ħ/(2t(sin²(ωt–ω0t0)+1)) (19)
where ω0 represents unknowable internal phase at t0 – our
‘measurement time’ and ω is relativistic angular frequency
(E/ħ). The reader can verify the upper bound for this function is
ħ/2t. There’s no reason to use ≥ in the relation above since
we’re defining uncertainty here to be solely based on internal
oscillation. Any measurement uncertainty is separate. In a
sense, we’re defining ∆t = t(sin²(ωt–ω0t0)+1) with the
stipulation above. But that’s distracting at this point so we’ll
focus on energy:
Et = E ± ∆E = ħω ± ħ/(2t(sin²(ωt–ω0t0)+1)) (20)
energy of a particle, at a certain measurement time t0, is
relativistic energy with uncertainty defined above

In practice, we can replace t by ∆t, our uncertainty in time, but
we’d still have to deal with internal phase, so let’s focus on the
form of (20). We can analyze in terms of frequency:
Et = ħ(ω ± 1/(2t(sin²(ωt–ω0t0)+1))) (21)
So what we’re really saying is:
∆ω = 1/(2t(sin²(ωt–ω0t0)+1)) (22)
energy-time uncertainty is dependent on angular-frequency
uncertainty which is a decaying periodic function of time-
uncertainty and initial phase

Here, time-uncertainty is not assumed to be caused by ‘random
distortions in space-time’ but rather simply – caused by
measurement uncertainty. So we’ve arrived at a completely
deterministic model of uncertainty caused essentially by
unknown internal phase.
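(A sketch – mine – sampling the unknowable phase in (19);
every draw lands between ħ/4t and ħ/2t, the envelope claimed
above:)

import math, random

hbar = 1.054571817e-34
omega = 7.76e20     # electron rest angular frequency, rad/s
t = 1e-15           # a measurement time, s

for _ in range(5):
    phase = random.uniform(0, 2 * math.pi)   # unknowable internal phase
    dE = hbar / (2 * t * (math.sin(omega * t - phase)**2 + 1))
    print(hbar / (4 * t) <= dE <= hbar / (2 * t), dE)  # always True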

If we could create an electron beam, all with the same internal
phase, we could verify the above as distinct from inherent
randomness. The point of the discussion above is not simply to
discuss the possible causes of uncertainty – but to present
internal oscillation as a viable alternative. It is hoped the reader
now has a deeper understanding and appreciation of
uncertainty and its forms.

Chapter Ten – The Source of Uncertainty

Convention says the source of uncertainty is inherent
randomness. Last chapter, we proposed the source of energy-
time uncertainty is uncertainty in angular-frequency which has
its root in unknown internal phase. Admittedly, the function
describing ∆ω was somewhat arbitrarily assigned/constructed
to conform to the lower bound of energy uncertainty. But the
fact it could be constructed at all indicates the possibility of
veracity. Now we need to show some theoretical evidence that
the function has potential basis in physical reality (in other
words – justify it) – or – investigate other candidate sources of
uncertainty.

First, let’s look at (3) from chapter 4. A conventionalist would
never express energy-time uncertainty that way because it
implies internal oscillation. But let’s rewrite and ponder it:
4π∆t ≥ ∆T (1)
uncertainty in time is bounded below by uncertainty in period
The more I consider that relation, the more I think it has no
great importance. All it really says is: uncertainty in measured
time is bounded below by the uncertainty in some quality of
the particle under examination. It’s a fundamental statement
about measurement-error – not a fundamental relation about
the nature of elementary particles. When we write it like this:
∆E∆t ≥ ħ/2 (2)
it is fundamental because we know:
E = hν = ħω (3)
which provides some insights into elementary particles and the
nature of uncertainty.

When we watch a pendulum swinging, it’s beautiful because of
its elegance. It’s also beautiful because it illustrates gravity and
the conversion/conservation of energy. Energy oscillates in
form – between potential and kinetic. Energy is never lost.

Energy, as expressed above, has two parts: angular-momentum
and frequency. But this ignores energy in electric flux and
energy in extension. Some years ago, I investigated oscillatory
electric field – to simulate the hydrogen atom under that
assumption. It’s a good idea to explain tunneling, but the size
of the resulting atom/orbit doesn’t agree with Bohr. So we are
left with only three possible sources of uncertainty based on
oscillation: ħ, ω, or X.

As stated, the function describing ∆ω is arbitrary and doesn’t
satisfy a required physical connection (yet). If ω oscillates –
like an FM radio signal – then there’d be some physical basis
for ∆ω. But before we pursue that angle (pun intended), let’s
consider the other candidates.

We haven’t seriously considered an oscillatory ħ – but it’s
possible – and would explain its presence and dependence in
energy-time uncertainty nicely. If the simple pendulum is an
analog of ħ, then perhaps energy oscillates in form between
twist and extension: perhaps the twist in space oscillates out-
of-phase with the extension such that extension energy
maximum corresponds to twist energy minimum. Some clock
pendulums are made to twist this way – a spring stores the
angular momentum and vice versa.

Previously, I proposed energy oscillates between the e-m field
and extension because of the Poynting vector – which indicates
power flow. But there was the issue of Bohr disagreement
(which was ignored at the time). So at this point, it’s down to
the three aforementioned candidates. I believe the reason most
would dismiss X oscillating is that they assume particles would
radiate gravitational energy in that scenario. And there’s a
serious problem with ħ oscillating (through zero energy):
overall energy would appear to disappear periodically. So if ħ
oscillates, there must be restrictions on that. Those restrictions
must be ‘built in’ the structure of space-time and theoretically
explainable (that goes for any of the three candidates).

So perhaps the best candidate at present is ω. If ω oscillates,
then perhaps a useful analogy is the ‘radio on a rotating
satellite’ or ‘horn on the end of a spinning string’. In the first
(if the satellite’s moving fast enough), you get a Doppler shift
on your receiver. In the second, you get a Doppler shift in your
ears (you hear the sound oscillate – up and down). This is not a
justification of the idea – just a couple illustrations. Perhaps the
oscillation is caused by a disturbance. When a sensitive
dynamical system is disturbed (disturbed from some
equilibrium), it typically oscillates around some ‘attractor’
(stable region). So from a systems point of view, it’s definitely
possible for a disturbed electron/proton to oscillate around
some stable frequency (assuming those particles are something
like sensitive dynamical systems).

Imagine particles as water droplets in zero gravity. Initially,
they are spherical due to cohesion and surface tension. If you
disturb one by trying to move it, it flattens where you touch it –
then moves away – oscillating in shape (the shape oscillates in
various forms of an ellipsoid). If there was a characteristic
frequency associated with the original spherical drop, I’m sure
the frequency would be disturbed because frequency is tied to
wavelength and wavelength is associated with size/shape.
Every simple object has a characteristic frequency associated
with it (basically – de Broglie’s hypothesis). This is the ‘ring of
the bell’ when you strike one with a hammer (impulse input). If
you imagine particles as little ringing bells that can change
shape (under input), it’s easy to imagine their characteristic
frequencies changing under input. It’s basic systems theory that
a transfer function can be determined by impulse input. The
transfer function of a system represents system structure. So,
structure can be discovered with impulse input. The only
‘problem’ with that is – an impulse can only be approximated
in practice (nothing can impart infinite force/power
instantaneously). That would destroy a system anyways. But it
turns out physics and systems are not totally disjoint ;).

A dynamical system is one where past inputs affect present
output or state. If we imagine elementary particles as tiny
simple dynamical systems that are inherently stable (due to
qualities of space-time), physical inputs (such as photon
absorption or flux interaction) will clearly disturb those
systems. If there are some characteristics that are fixed (spin,
charge, and rest energy), then there are some that are flexible
(relativistic energy, extension, and omega). Those flexible
characteristics could oscillate (dependent on constraints listed
above) or there could be just one that oscillates. Clearly, from a
systems vantage, elementary particles are ‘disturbable’ with at
least one oscillatory characteristic. It’s not a stretch to
tentatively assign that to ω.

So right now, there are two competing primary assumptions
about elementary particles: inherent stability vs inherent
randomness. Let’s use Occam’s razor to cut away the fat:
Associated Assumptions

Inherent Stability:
  e.p.s are stable simple dynamical systems
  random behavior is due to unknown internal phase
  state variables are explicitly deterministic
  uncertainty is due to physical bounds and measurement
  uncertainty
  e.p.s are ‘distinguishable’ by internal phase and any
  consequences of past disturbances

Inherent Randomness:
  e.p.s are probability waves
  random behavior is due to implicit probability density
  functions
  state variables are interdependent random variables
  uncertainty is due to bounds on frequency analysis
  e.p.s are ‘distinguishable’ only in their flexible characteristics

According to Occam’s razor – the primary assumption with the
larger number of associated assumptions (given all else is
equal) – should be thrown out. There are five high-level
assumptions associated with each primary. There are low-level
assumptions for each high-level assumption:

e.p.s are stable simple dynamical systems
  ‘stable simple’ is defined by constraints on space-time
  past inputs affect present state

e.p.s are probability waves
  waves are constrained by setting
  past inputs don’t affect present state

random behavior is due to unknown internal phase
  internal phase is based on relativistic frequency
  frequency is based on internal oscillation

random behavior is due to implicit probability density fcns
  density functions are based on setting

state variables are explicitly deterministic
  relationships are defined by qualities of space-time

state variables are interdependent random variables
  relationships are bounded by frequency analysis

uncertainty is due to physical bounds and measurement unc.
  physical bounds exist at some extremely low resolution

uncertainty is due to bounds on frequency analysis
  frequency analysis applies to elementary particles

e.p.s are ‘distinguishable’ by internal phase and any
consequences of past disturbances
  internal phase is unobservable or currently misinterpreted
  e.p.s are dynamical systems

e.p.s are ‘distinguishable’ only in their flexible characteristics
  e.p.s are not dynamical systems
  e.p.s are indistinguishable in inflexible characteristics

Let’s regroup and delete the repetitions:

Inherent Stability:

e.p.s are stable simple dynamical systems
  ‘stable simple’ is defined by constraints on space-time

random behavior is based on internal oscillation
  internal oscillation is directly unobservable or currently
  misinterpreted

uncertainty is due to physical bounds and measurement unc.
  physical bounds exist at some extremely low resolution

Inherent Randomness:

e.p.s are probability waves
  waves are constrained by setting
  past inputs don’t affect present state

state variables are interdependent random variables
  relationships are bounded by frequency analysis

I’ve tried my best to regroup both sets of assumptions, deleting
repetitions and implicit assumptions. At the same time, I’ve
deleted intermediate assumptions. The tally ‘at the end of it all’
is six vs five. It’s clear why convention prefers the latter set
though it ‘wins’ by only one assumption. Examining the
historical evolution of physics, it was more than Occam’s razor
that decided the preferred set. It was the rejection of the aether,
determinism, and the bent toward reduction which impelled
physics toward probability. I’ve devised a test which should
give some evidence one way or the other. It’s possible that
conventionalists could ‘pervert’ the test by defining everything
in terms of angles and probabilities, but that’s up to them. They
have three choices: dismiss the test as meaningless, explain the
test in terms of probability (which is likely ;), or accept the
results as confirmation of inherent stability. (Let’s do the
experiment and see what happens!)

Chapter Eleven – Energy Distribution

The previous chapter [on systems] would have made a good
finish for the short book, but as things go – good ideas tend to
take on a life of their own. I’ve always been concerned with
total energy and energy distribution within elementary particles
– that was the basis for my first attempt at this theory, but that
first attempt was too ambitious, ill conceived, and lacked
appropriate insights. I proposed an inner structure for e.p.s –
depending on luck (a lucky guess about inner structure) and the
few insights I possessed at the time.. I won’t say that I was
incorrect – just too ambitious in attempting to explain too many
things.. So this chapter is not written in iridium – I won’t stake
my meager reputation on the veracity of it, but it seems to
make sense in the bigger picture of inherent stability; it’s
consistent with the idea of internal oscillation.

We’ve previously proposed total energy is distributed in three
components:
ET = EX + Es + Ee (1)
= (ħ/(2tP))X + (ħ/2)/T0 + Z0e²/T0 (2)
= E0/γ + E0/4π + E0/2π (3)
= (1/γ + 3/(4π))E0 (4)
= ((4π/γ)/(4π/γ + 3) + 3/(4π/γ + 3))ET (5)

These relations assume a couple things: that these energies are
distinct (there’s no oscillation or sharing between them), they
hold for all e.p.s, and under annihilation – the first term
dominates (the others vanish). Line (1) describes energy in
extension/temporal-curvature, spin, and electric-flux. Line (2)
is more explicit and is based on earlier derivations. Line (3) is a
simplification based on known relations. Lines (4) and (5) are
algebraic simplifications. Line (5) is interesting in that it
illustrates the relationship between relativistic and non-
relativistic forms of energy in e.p.s. It does not highlight the
split of energy between spin and electric-flux because both of
those are static quantities. But it illustrates the limiting nature
of the static fraction – energy can never wholly reside in
temporal-curvature because there will always be a small
fraction in spin-flux – regardless of kinetic energy.. This is an
alternative view compared to the ‘limiting nature of c’.
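(The step from line (4) to line (5) can be machine-checked –
my sketch, using sympy:)

import sympy as sp

g, E0 = sp.symbols('gamma E0', positive=True)

ET = (1 / g + 3 / (4 * sp.pi)) * E0    # line (4)
rel_share = (E0 / g) / ET              # relativistic fraction of ET
print(sp.simplify(rel_share - (4 * sp.pi / g) / (4 * sp.pi / g + 3)))  # 0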

The nagging question in my mind has been the ‘confinement
mechanism’: how do e.p.s ‘stay together’ – why don’t they
simply dissipate? Perhaps the answer is in the extreme
inelasticity of space. The numerical value of Y0 is extremely
high which indicates space has a natural tendency to ‘crush the
living daylights’ out of e.p.s. Considering this, it’s amazing
they exist at all. So, these ‘strain bubbles’ must exert an
internal pressure balancing the crushing force of space. (The
question then becomes – where is the balancing point and why?
This is equivalent to asking e.p. radius or volume. It’s also
equivalent to asking energy density or ‘shape’ of e.p.s in terms
of energy. It should be obvious I think the answer is in the
qualities of space-time. The answer to this question must be
‘phrased’ in a non-tautological way. The answer is ‘there’
waiting to be discovered. I’m waiting for inspiration.)

If disturbances affect ω and therefore E, there must be a
mechanism to restore equilibrium. Since X0 = 4πC0 ≡ 4πtP/T0
and X = 4πC where C is relativistic temporal-curvature, the
dissipation must be in the form of minute gravity waves (C =
(tP/h)E so C is a relativistic quantity like X and we’ve
established gravity can be treated exclusively as distributed
temporal-curvature). These must be released such that
uncertainty in ω conforms to 1/(2t(sin²(ωt-ω0t0)+1)) or a similar
function. (This proposed phenomenon makes sense, but so does
retention of disturbance energy – if there’s no mechanism for
release. The experiment proposed above could be extended to
include various distances between MD3 and T so that the idea
can be tested. If disturbance energy is dissipated over time, and
that time is significantly larger than the Planck-time, then we
should be able to measure restoration to equilibrium.)

The nature of ET above indicates photons cannot possess
electric-flux, have zero intrinsic spin (modern e.p. physics
asserts this), and are ‘pure’ waves of temporal-curvature.
Perhaps these ‘travelling strain bubbles’ oscillate out-of-phase
with e-m field vectors. ET does not explain neutrinos unless
they are travelling strain bubbles with no oscillation. ET does
not explain why there are two or three E0 – a thorough analysis
still needs to be performed on the ‘three properties’ table (of C,
μ, and Q):
            C                μ     Q
electron    6.6606×10^-24    μe    -e
proton      1.2229×10^-20    μp    e

Before string theory, multiple dimensions, and exotic
geometries, I proposed e.p.s have structure based on the
structure of space-time. Looking at C alone in the table above
hints at this. The coefficients are almost 60/9 (≈6.667) and
11/9 (≈1.222). Is there some deep meaning in these numbers?
Perhaps. Perhaps not. 9 is 3² where 3 is the number of spatial
dimensions we perceive.
But because there are only two stable e.p.s, we don’t have
enough information to say any more.. My original idea was that
space provides a rectangular box where standing waves can
reside – in one direction or the other. But the ‘box idea’ is
equivalent to extra dimensions – is it not? If the ‘box’ resides in
‘time’, then time needs extra dimensions to accommodate it
(which is an extremely ‘cool idea’ – but I must resist
temptation). Let’s try to explain the table above within the four
dimensions of space-time – before we appeal to multiple
dimensions.

Before we discuss the conventional approach to that, let’s talk a
bit about ω and internal oscillation. ω could represent the
angular-frequency of a spherical standing wave within e.p.s. A
standing wave of what? The ‘only thing that makes sense’ is a
wave of temporal-curvature. A standing wave of spin or
electric-flux makes no sense. So perhaps e.p.s are:
spherical standing waves of temporal-curvature
bounded by the extreme inelasticity of space
possessing discrete twist and electric-flux.

This year, a very important (conventional) paper was published
and arXived. It’s entitled: Statistical Understanding of Quark
and Lepton Masses in Gaussian Landscapes. The authors are:
Hall, Salem, and Watari. Partial funding for the research was
supplied by the National Science Foundation and the US
Department of Energy. Any project that can acquire both NSF
and DOE funding is obviously important (to convention). After
skimming the small book, I tend to agree with them – within
the framework of convention. If the Standard Model is correct,
if the approach of string theorists is correct, if reduction is a
basic premise of the multiverse, if multiple dimensions are
compacted in our and other universes, if the multiverse exists,..
Maybe you get my point. That’s a lot of “ifs”. And they’re not
just any “ifs” – they’re big-fundamental “ifs” about the nature
of our universe and all others. As I was reading the paper, I got
the distinct impression that “this is a paper on high-energy
physics and cosmology”. When you try to explain all particles,
no matter how short lived, there is little choice but to employ a
framework such as the one convention has. The paper is
beautiful in its consistency and scope. But it’s a monster in
implementation. If you can absorb the concepts without getting
bogged down by the math, it’s actually not that complicated.
Try to read/skim it. The arXiv number is: 0707.3446v2.

Nuclei behave as extended objects (objects with size), but
protons and electrons behave as point-masses. The fact nuclei
exhibit size is not a huge mystery to me: protons cannot exist
near each other because of electrostatic repulsion. The ‘spacers’
in nuclei are neutrons. They also act as ‘glue’. (So, of course,
do protons.) The problem with convention is to automatically
assign a particle to that: gluon. Anyways, nuclei are extended
basically because of proton repulsion. They have geometry,
excitation modes, energy release modes, and of course – the
fascinating quality of stability/instability. If we examine the
alpha-particle (helium nucleus), this highlights the differences
between convention and determinism. Convention says that
particle has a finite (non-zero) probability of changing identity
or decay. Determinism says: that particle will never decay
unless it is unstable or disturbed. In my opinion, they’re stable
and – no matter how long you wait, an alpha will remain an
alpha will remain an alpha. Some nuclei are unstable because
of geometry or vibrational/spin modes, some nuclei are
unstable because of (relative) lack of ‘glue’, and some nuclei
are unstable because they’re simply too big. Nuclei are
fascinating systems, but they’re not elementary particles – just
as short-lived particles, no matter how fundamental they may
seem, are not e.p.s. A good example is the neutron. A free
neutron is unstable: it decays with a mean lifetime of about
fifteen minutes. A bound
neutron is stable – if the particular nucleus binding it is stable.
A free neutron always decays into the products: proton,
electron, and antineutrino. They’re obviously composite
particles; they’re obviously not elementary .. An interesting
challenge is to model the interior of a neutron deterministically,
but more important presently is the issue of elementary particle
size.

The Compton-wavelength, identified by h = m0cλ0, has been
dismissed by convention as meaningless because if e.p.s are
point-masses, λ0 ‘obviously’ means nothing in terms of radius
or anything geometric. In the process of looking for e.p. size,
I’ve found interesting features; I’ve found that the assumption:
e.p.s behave as point-masses implies they are point-masses – is
basically incorrect. E.p.s appear to be point-masses because
they’re so small.

The question of size arises from the consideration of balancing
forces: the crushing inelastic force of space – balanced with the
internal pressure of e.p.s. If e.p.s are spherical standing waves
of temporal-curvature (with twist and charge), they must have
boundary. The natural reference to use is Compton-wavelength.
Y0 is already in units of force – we don’t have to modify it in
any way. (We’re examining the equation: Fext/A = Fint/A and
trying to determine the nature of A.) If e.p.s have size, the best
first guess is based on Compton-wavelength:
Y0 = E0/(aλ0) (6)
where a is a dimensionless scaling constant (which we assume
the equation must possess in order to ‘work’). Now, E = hν =
hc/λ which implies:
Y0 = hc/(aλ0²) (7)
where a can be solved for the electron/proton and works out to
be about 10^-45/10^-39. So, if e.p.s are Compton-spheres, they
are only fractions thereof because of the extremely small
scaling factors.
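(Numerically – my sketch – equation (6) indeed gives scaling
constants of the quoted orders:)

h, c = 6.62607015e-34, 2.99792458e8
hbar, l_P, t_P = 1.054571817e-34, 1.616255e-35, 5.391247e-44
Y0 = hbar / (2 * l_P * t_P)

for name, mass in (("electron", 9.1093837015e-31),
                   ("proton", 1.67262192369e-27)):
    E0 = mass * c**2
    lam0 = h / (mass * c)          # Compton-wavelength
    print(name, E0 / (Y0 * lam0))  # a from (6): ~5.6e-46 and ~1.9e-39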

What about torii? The surface area of a torus can be controlled
by adjusting relative radii, so we may be able to use that model
for e.p. shape. The surface area of a very thin torus can be
approximated by: 4π²rprm where rp is the primary (larger) radius
and rm is the smaller/minor radius. Since λ0 >> lP, we can assign
the following: 2π²λ0lP (here we’re assuming λ0 is the primary
diameter – just for simplification). Now, in order to get that
form in (7), we must divide E0 by lP (assuming rest energy is
somehow packed into a Planck-length – giving e.p.s a ‘fighting
chance’ to balance the crushing force of space), but there’s still
a scaling factor we must assume is there to ‘make things work’
(size wise):
Y0 = hc/(2π²aλ0lP) (8)
Note that λ0 appears below because of E0 and lP appears below
because of our assumption above. When we solve for a, we get
a = 2lP/(πλ0) = X0/(2π²) which implies:
Y0 = hc/(2π²λ0(X0/(2π²))lP) (9)
Which implies – if e.p.s are torii, they are ultra-thin torii – with
minor radii much smaller than the Planck-length. The
‘interesting’ feature of this scenario is that when we plug in the
first value of a into (8), we get:
Y0 = hc/(4πlP²) (10)
where the denominator is the surface area of a Planck-sphere!
So even if we assume e.p.s are shaped like ultra-thin torii, the
shape we’re forced to accept is the sphere! It seems we cannot
escape it. Of course, when we start with that assumption, (10)
is easy to derive.
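(Equation (10) is in fact exact given the definitions – since
lP = ctP and h = 2πħ, hc/(4πlP²) reduces algebraically to
ħ/(2lPtP). A sketch of mine confirming it numerically:)

import math

h, hbar, c = 6.62607015e-34, 1.054571817e-34, 2.99792458e8
l_P, t_P = 1.616255e-35, 5.391247e-44

print(hbar / (2 * l_P * t_P))          # Y0 as defined earlier, ~6.05e43 N
print(h * c / (4 * math.pi * l_P**2))  # equation (10) -- same value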

The Planck-sphere seems inescapable.. If indeed e.p.s are
energy ‘packed into’ Planck-spheres, that’s why they appear to
be point-masses – and why λ0 seems to have nothing to do with
particle radius. Is λ0 anywhere in (10)? No. (10) proposes all
non-composite particles with spin and charge have Planck-
radius. What makes them different? We’re left with very few
choices: wave-number inside the boundary and perhaps the
orientation of μ with respect to ħ. (10) explains why protons
and electrons have the same charge magnitude (treated as a
surface charge over the same area). And again, why λ0 is
merely a relational factor with no geometric meaning (λ0 has
meaning when we equate it with c/ν0). Of course, explaining
the relative magnitudes of magnetic moments is a chore
determinism cannot deny – and – defining a suitable κ, or
wave-number, that fits the framework above – is required.
(Many would point out that we defined Y0 in terms of Planck-
measures and it’s very easy to derive (10) from the definition.
But quite honestly, I had forgotten the relation lP = ctP until
very recently. The pattern of development above is actually
how I derived the Planck-sphere. I was avoiding it as best I
could precisely because – many would balk at the conception. I
had started the derivation based on balancing forces, but
noticed that area on the RHS seemed to fall quite easily into the
denominator. I realize that the equation is incomplete in that
area is not expressed on both sides as intended. But the form on
the RHS is what’s interesting and in order to ‘make the units
work’, we must choose some length-measure (to reduce energy
into force). I preferred the Compton-wavelength because I had
previously used it to define relativistic-measures. I think one
main reason most theoreticians have discarded determinism is
because they ‘get stuck’ on some particular point – like the
geometric meaninglessness of λ0 – and basically give up on all
associated concepts. But the point of this book and those
previous – is to ‘run’ with the idea as long as possible – until it
is proven inconsistent or invalid. I’ve had some limited
successes in explaining decay patterns in nuclear physics. And
the concepts presented in this book (albeit presented in a non-
sophisticated way) are remarkably consistent, intuitive, and
seem to have physical justification. I’ve personally seen a very
strict and conservative nuclear engineer teach E = hν (about
particle energy) but ignore the oscillatory implications (I
proposed some internal oscillation but he balked and moved
on). So evidently, conventioners treat E = hν somewhat like I
treat λ0 – useful but not meaningful. Many would say I’ve
created a ‘house of cards’ – a lattice of assumptions which is
easily destroyed by a single removal. But I’ve tried to be very
careful, restrictive, and explicit in employing any assumption.
The fact we can explain physical things deterministically at all
alludes to the possibility of veracity (as we stated in chapter
six). I was fairly confident that I could not derive a function for
uncertainty in omega – that makes sense. I’m personally very
skeptical of both determinism and probability (for different
reasons). “If it could be done, it would have already been
done.” – is how part of me and many feel about determinism.
But.. But perhaps most missed some crucial insight required to
‘put it all together’ (like gravity = distributed temporal-
curvature). If an average nim-rod like me can stumble around
and discover something fundamental, just imagine what a
brilliant guy/gal could do – if they carried the ideas long/far
enough. ;)

κ should be dimensionless and larger for heavier particles. The
inverse of C or X does not work because that’s larger for
lighter particles. If we try νtP, that actually equals C (getting
lost in the numbers here ;). So, if we use C, we need a scaling
factor that also ‘integerizes’ it. In order to accommodate our
significant digits in e.p. masses, let’s use a 10^n integer for κe
and derive κp based on that. If we choose 1000000 for κe and
our scaling factor is πα where α = 58.6875025189, then κp =
1836081243 (give or take a few waves ;). Until we derive a
more intuitive and physically-related κ, this will have to do for
now. It illustrates the flexibility/arbitrariness of this factor. All
κ ‘needs to do’ is be an integer (for both electron and proton)
and display the ratio of masses exactly (to known precision) of
mp/me. (This is not equivalent to multiple dimensions
compacted to undetectability. Sure, we’re saying we have a
wave smaller than anything we can ever measure. But it’s
qualitatively different than proposing some extra dimensions
compacted to ‘nothingness’. A scaling factor is like a
renormalization factor – and we’ve tried to avoid that. But in
the process, we’ve arrived at a Planck-sized object with
internal structure. In theory, we can never verify that. But
theory’s been known to be wrong. ;)

A note about hc. Is there some deep meaning about the
product? It’s basically spin-energy times the speed of light. If
we look at it in (10), we see that it’s bounded by the Planck-
sphere. So, spin-energy times the limit is bounded by the
Planck-sphere. There’s nothing remarkable about it – it simply
begs the question: what’s the purpose of c in the equation?
Does it mean spin is revolving on a second axis at the speed of
light? Perhaps; perhaps not. The fact we were able to derive a
size for e.p.s is ample justification for the form of (10). If we
have time and space (pun intended), we’ll consider any ‘deep
meaning’ of hc again – later.

So let’s summarize our findings. If our assumptions hold,
elementary particles are:
spherical standing waves of temporal-curvature,
bounded by a sphere of Planck-radius (defined by Y0),
containing an integral number of standing waves,
possessing discrete twist analogous to spin,
possessing discrete electric-flux,
and possibly possessing an alignment
or anti-alignment of spin and magnetic moment.

(The final statement is introduced to account for the notion of
positive and negative charge.) It may seem like a ‘monster’ to
some (especially probability-reductioners), but it’s preferable
to multiple dimensions and random character. The ‘only’
problem that comes to mind is the double-slit phenomenon. I
need to think about it. ;)

(Enough? ;) Well.. even with an infinitesimal ‘core’, the flux is
extended. It’s possible the electron ‘detects’ both slits
simultaneously via its electric field. This could explain double-
slit phenomena – as long as the physical extension of electric-
flux is large enough to accommodate all double slit
experiments. So it’s possible..

..The only two ways to successfully attack probability are:
create a sophisticated and accessible formalism such as the
arXived paper mentioned above, but bent toward determinism
– or – attack uncertainty and provide a viable alternative. I
don’t have the formal training to provide the former; the best I
can do is attempt the latter. I’ve tried to do that within a
consistent framework. I’ve tried to refresh the ‘tired old ideas’
of determinism and loosely – the aether. I’ve asked Mayeul
Arminjon to mentor me because I felt I needed his formal
training to give some conventional credibility to those ideas
(assuming I could acquire some of it from him). But he’s too
busy with his own pursuits. And no matter what area of science
you focus on – you have a necessity to ‘pay your dues’ in order
to pursue your own interests (typically, you must follow a
research path that is not really to your tastes or interests – only
later allowing you to focus on those). Being on the ‘outside’
has advantages and disadvantages. I’m free to focus on
something until I ‘drop dead’. But I lack mentoring, guidance,
and funding.. I’ve read many-a-crackpot and feel unfairly
lumped with those brave souls. I’ve gleaned some precious
nuggets from my meager middle-class public education (such
as systems and error analysis). I’ve tried my best to pursue this
track from a scientific/test/disprove/invalidate perspective.
Admittedly, I’ve proposed a couple untestable ideas, but most
seem to ‘jump and dance’ of their own accord (acquire a life of
their own).. I have this insatiable curiosity; I’m naturally a
researcher, but.. I didn’t have the discipline or ‘smarts’ to get
all ‘As’ in university (I think it was the latter). Once I realized
that, I sort of ‘gave up’ (for a time). By the time I came to the
point of applying to graduate schools, I couldn’t get accepted
into any program that inspired me. Systems wouldn’t have me,
physics was clearly out,.. What were my options? Work as a
technician and pursue physics in my ‘free time’? That’s what I
did for several years – only to be dismissed and ignored by
convention – and – dismissed and ignored by those that were
not. It’s funny – but not .. When I was young, it was my dream
to leave a positive lasting significant contribution to humanity.
In my book on systems – I feel I’ve done that. But it’s been my
secret desire to help physics ‘see the light’ as well.. My brother
insists I’m too ambitious in these regards. Perhaps so.

My new (and only) baby boy has just come into the world.
New life is always amazing.. I don’t know if I can be a good
father – all I can do is try my best .. Sometimes I feel like such
a complete and utter failure in life – such a loser ;) ..I had
‘friends’ who criticized and inspired many points in this book.
I’m not looking for sympathy or pity .. I would like to be
understood. I would like to be appreciated (a little bit). I would
like these ideas to be treated without ego or arrogance. They
deserve it; I don’t own them.

As I wrote Humanity Thrive! for the innocent of the world – I
write this book for the open-minded .. I feel we’re on the verge
of a deterministic renaissance. For near a century, we’ve
doggedly pursued probability-reduction. We’ve tried to justify
it with every result and observation. But isn’t it high time we
gave chance (pun intended) to determinism? Research the
indicators – they’re there.

Bless your patience if you’ve made it this far. We’ve got a long
way to go baby – a long way to go..
Chapter Twelve – Eulogy/Christening

The previous four chapters were written about a year ago just
before Arthur was born. They were intended for Gravitation
and Elementary Particles, but that book was never published in
paper form. My wife insists I ignored her and “treated her like
garbage” at that time. My justification to her was/is the
following (it was difficult to explain in English and broken
Thai – to someone who has a middle-school education): who
else is working on this theory? As far as I can tell, no one. Who
will publish this book? No one but me. Why is it important? If
the central premise is correct, physics has been ‘wasting time’
for about a century and needs correction. It’s not just for theory
– that I’ve been working on it: it’s for all the erroneous
textbooks and misguided students.

Many of the concepts ‘fall out of the woodwork’ (or percolate
if you prefer) after you decide on Y0 and X. I didn’t invent
them – they appeared before my eyes after deciding on Y0 and
X. Good examples of this are temporal curvature and inertia.
The more I studied the theory (and its implications), the more I
came to realize the centrality of temporal curvature and how
simply things can be explained from that perspective. The new
definition of inertia above simply popped into my mind – from
the new perspective. So please don’t let my “treating her like
garbage” be for nothing. Dear reader, please
consider the theory with an open mind. I know, I slam
convention and insult them with every possible taunt. But
consider how I have been treated over the years: ignored,
dismissed, and ill-mentored. Simply, no one cared. Should I
give them any slack? To me, they earned every insult.

Should this ‘baby’ be aborted before it’s born? That’s up to you
and history to decide. Read thoughtfully. Consider.
