The Science of Existence – Toolkit

Toolkit

“A person with a new idea is a crank until the idea succeeds.” ~ American author and humorist Mark Twain

Math is the tool of the trade for theoretical physics, equations the universal language.

“Models are a means of extrapolating from what is known to create proposals for more comprehensive theories with greater explanatory power.” ~ American theoretical physicist Lisa Randall

A physical model is a mathematical model, typically geometric or algebraic, providing a symbolic description of the embodied phenomena. The quality of a model lies in how well it agrees with empirical observations and in its predictive power. Newton’s motion laws came from a physical model.

“All great discoveries in experimental physics have been due to the intuition of men who made free use of models, which were for them not products of the imagination but representatives of real things.” ~ Max Born

A physical theory describes relationships between various measurable phenomena, often considered as cause and effect. A physical theory may include a model of physical events.

In the 6th century BCE, Greek philosopher Pythagoras explained the relation between the length of a vibrating string and the musical note it produced. In the 3rd century BCE, Greek polymath Archimedes understood that a boat floats by displacing the water that would otherwise be there.

“What is especially striking and remarkable is that in fundamental physics a beautiful or elegant theory is more likely to be right than a theory that is inelegant.” ~ American particle physicist Murray Gell-Mann

Physical models bias physics. Physicists are understandably fond of mathematical simplicity, termed elegance, which comes via reducing independent variables. Symmetry is also essential to simplicity.

Otherwise, models become unwieldy if not unsolvable, however much better they may reflect Nature and thereby offer predictability. Complexity is considered a nemesis: a hindrance to workability, and an encumbrance to comprehending what are taken to be fundamental operating principles.

The result has been a strong inclination toward simplifying reduction that is often amended with exceptions when a model is found wanting, as most are. Putting a patch on a model lessens its elegance. Applying multiple patches can bring a model to its knees, as predictive power wobbles on an increasing number of variables and/or contingent conditions.

“Just because the results happen to be in agreement with observation does not prove that one’s theory is correct.” ~ Paul Dirac

Mathematically, a system with 3 or more independent variables may defy exact prediction, as the three-body problem illustrates. Patches to improve predictability leave room to ponder whether something else essential is being left out. Many theories, and the models on which they rely, founder on these shoals. Such has been the case with the standard cosmological model (ΛCDM) and quantum physics’ standard model.

“It is impossible to trap modern physics into predicting anything with perfect determinism because it deals with probabilities from the outset.” ~ English physicist Arthur Eddington

Beyond description and prediction lies explanation. The most powerful theories go beyond mere mechanics, yielding insight into the nature of phenomenal relations. Ultimately, this is what physics, and every branch of inquiry, strives for: knowledge.

“Although we live in a world of constant motion, physicists have focused largely on systems in or near equilibrium.” ~ American physicist Michael Kolodrubetz

A theory is a statement of how a relationship is presumed to behave, based upon some evidence. In contrast, a law is a conclusion of a universal natural tendency.

Whereas a theory is confined to specific relations, a law applies to everything. Laws invariably underpin theories.

While a theory may not necessarily ratify an implied aspect not central to it, it at least suggests that the theory’s implications are as valid as its central tenet. After all, if a theory appears to describe its intended target well, its ancillary implications intrigue.

Sometimes the implications of a theory turn out to be more important than the target phenomena described, in the doors they open: questions raised of issues previously unconsidered. Maxwell’s electromagnetism implying a fixed speed of light, which led Einstein to his relativity theories, is exemplary.

The open-ended nature of mathematics has engendered the belief that the fundamental constructs of physical existence can be formed into formulas.

“We exist in a universe described by mathematics. But which math?” ~ American theoretical physicist Antony Garrett Lisi

While math has paved a remarkable path, ultimate understanding of existence via science remains as easily reached as the end of a rainbow. Behind every model and theory is insight which only opens more doors. The atomistic unraveling of reality is endless.

“The hypotheses we accept ought to explain phenomena which we have observed. But they ought to do more than this: our hypotheses ought to foretell phenomena which have not yet been observed.” ~ English polymath William Whewell

The Inscrutability of Infinity

“To infinity and beyond!” ~ action figure Buzz Lightyear in the movie Toy Story (1995)

In the early 20th century, modeling subatomic particles in classical electrodynamics hit a stumbling block: infinity. The mass-energy of a charged particle’s field veered to the infinite as the particle’s radius approached zero.

2 preeminent but divergent models emerged in modern physics: relativity and quantum mechanics. Attempts to bridge the relativistic cosmological realm with the infinitesimals of quantum theory foundered on the ethereal rocks of infinity. At the quantum level, gravity became an infinite quantity. So too in relativity, where gravity culminated in black holes.

The quantum Standard Model, the supposed success story of quantum theory, was long beset with pathological infinities. For instance, quantum electrodynamics – the quantum theory of the electromagnetic force – initially yielded infinite values for the mass and charge of the electron.

Infinity won’t do for theoretical glue, so a workaround was developed, along with a euphemism for ejecting infinity from equations: renormalization – pitch the infinities with any workaround that works.

Once introduced, renormalization became the norm. If a relative measure could be established in relation to an experimentally known quantity, such as the electron’s mass and charge, the infinities popping out of the equations could be ignored. The residual finite results from renormalization serve as a workable approximation.

“The horrible ‘infinity’ could be subsumed, hidden from view as if it didn’t exist, leaving an apparently pristine theory on display.” ~ English particle physicist Frank Close

Ubiquitous in their cameo appearances, virtual particles are a phenomenon that smacks of infinity. The model for virtual particles is anomalous in multiple ways. Photons are massless, but virtual photons have mass. Virtual particles also have an energy-momentum that is not allowed by any relativistic equation for a particle of that mass.

Virtual particles are accounted for using integrals. These integrals are often divergent: their solution yields infinity. So virtual particles are mathematically ignored, while their physical effects are factored in.
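Schematically, accounting for a virtual particle means integrating its contribution over all possible momenta k. A typical loop integral (constants suppressed) grows without bound at large momenta:

$$\int \frac{d^4k}{(k^2 - m^2)^2} \;\rightarrow\; \infty \quad \text{(divergent at large } k\text{)}$$

The divergence here is logarithmic: each doubling of the momentum cutoff adds the same increment, without limit.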

A crucial particle in the quantum Standard Model – the Higgs boson – supposedly gives mass to all matter particles (fermions) via the Higgs mechanism, where a universal field imparts just the right amount of juice. Naively calculated, the Standard Model yields an infinite mass for the Higgs boson. This unacceptable result is washed out by mathematical trickery.

“Infinities, when considered absolutely without any restriction or limitation, are neither equal nor unequal, nor have any certain proportion one to another, and therefore, the principle that all infinities are equal is a precarious one.” ~ Isaac Newton

While all infinities are not necessarily created equal, the models that turn up infinity are all making the same statement: that their accounting is incomplete, that something essential is missing.

Fields

Many acts of physics transpire as fields. A field is a physical quantity represented by a scalar, a vector, or a tensor.

A physical quantity is a mathematical representation of a property in a physical system, which is an arbitrary geometric region under examination. The adjective physical refers to a study of something which may be observed, as contrasted to utterly imaginary concepts.

A scalar is a quantity which is representable as a point on a scale. Scalars are expressed as real numbers.

A vector is a geometric quantity with both magnitude and direction. Vectors are typically represented by arrows. Although a vector has a magnitude and an orientation, it lacks a position. In physics, a vector is a movement of energy at a certain strength.

A tensor is a geometric object describing linear relations between other geometric entities (scalars, vectors, tensors). Tensors are typically entangled with other tensors, forming a tensor network.
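To make the 3 objects concrete, here is a minimal sketch in Python with numpy (an assumed dependency); the quantities and values are merely illustrative:

```python
import numpy as np

# Scalar: a rank-0 quantity -- a single point on a scale.
temperature = 298.15  # kelvin

# Vector: a rank-1 quantity with magnitude and direction (but no position).
velocity = np.array([3.0, 4.0, 0.0])   # meters/second
magnitude = np.linalg.norm(velocity)   # 5.0 -- the vector's strength
direction = velocity / magnitude       # unit vector -- the orientation

# Tensor: a rank-2 quantity describing linear relations between vectors.
# Example: a stress tensor maps a surface direction to a force per unit area.
stress = np.array([[10.0, 2.0, 0.0],
                   [ 2.0, 5.0, 0.0],
                   [ 0.0, 0.0, 1.0]])   # pascals (illustrative)
surface_normal = np.array([1.0, 0.0, 0.0])
traction = stress @ surface_normal      # force per area on that surface
```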

In physics, a field is considered a region of space with integral energy. More pointedly, a field is an energy associated with a spacetime point.

Like energy, fields do not exist. Physics fields are just a mathematical modeling technique: a geometrical way of characterizing phenomenal transformations of matter.

Newton’s law of universal gravitation was an expression of force fields, though Newton lacked the idea of fields. English physicist Michael Faraday coined the term field in 1849, referring to electromagnetism.

Gravity, which is an entropic distortion of spacetime, acts as if it were an attractive force exerted by a body because of its mass. Mathematically, gravity is a monopolar field.

An electrical field has 2 opposite point charges, so is dipolar. The 2 point charges of an electrical field are negative and positive. Electrons carry a negative charge. Protons carry a positive charge. By definition, an electric current flows away from a positive charge and toward the negative charge.

All matter is loaded with electric charges. We are usually unaware of this because the opposing charges within matter – between protons and electrons – neutralize one another.

An electric charge creates a field which exerts an outward-radiating force, called the Coulomb force. The lines of force of an electric field flow between the opposite charges of a dipole.

French physicist Charles-Augustin de Coulomb published his speculations on electricity and magnetism in 1785. The Coulomb force came from characterizing static electricity.

The strength of an electric field decreases as the square of the distance from its source: moving twice the distance from a point charge cuts the felt field strength to a quarter. This inverse-square dynamic is termed Coulomb’s law, though German physicist Franz Aepinus suspected as much in 1759, before Coulomb published his law.
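In symbols, with k the Coulomb constant and q the source charge, the field strength at distance r, and at double that distance:

$$E(r) = \frac{kq}{r^2}, \qquad E(2r) = \frac{kq}{(2r)^2} = \frac{E(r)}{4}$$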

A moving electric charge – an electric current – creates a magnetic field. As everything is always moving, electric and magnetic fields are coincident.

The poles of a magnetic field are analogous to the charges of an electric field, with the south pole of a magnet akin to a negative electric charge, and the north pole to a positive one.

The 2 charges in a hydrogen atom, with its single proton (+) and sole electron (–), may be pulled apart into distinct electric monopoles. That cannot be done with magnets. Magnets are always dipolar. Theorists hypothesize that magnetic monopoles may have existed in the early universe, but none have ever been observed.

The Imaginary Complexity of Reality

“Standard quantum theory is based on a complex Hilbert space.” ~ William Wootters et al

For over 2,000 years, there was only 1 geometry, devised by Greek mathematician Euclid of Alexandria around 300 BCE. What belatedly became termed Euclidean geometry was described in the most influential mathematics book of all time: Elements, the primary textbook for math, especially geometry, into the early 20th century. The 3-dimensional spatial world has long been described as Euclidean space.

With general relativity, Einstein introduced an extra dimension, entwining time with space. Quantum theory and its poster child, the Standard Model, upped the ante with even more dimensions.

These models required geometric description that exceeded Euclid’s conception: a non-Euclidean geometry.

Work by German mathematician David Hilbert and others in the 1st decade of the 20th century provided the mathematical means; so arose Hilbert space.

In supporting any number of dimensions, Hilbert space generalizes Euclidean space. To do so, complex numbers are employed.

In construing geometric points, complex numbers are inherently 2-dimensional, with real and imaginary parts. While the real number is real enough, quantitatively speaking, the imaginary part (i) is unworldly in satisfying the equation i² = −1.

Despite their surreality, complex numbers have been a mathematical convenience since the 16th century. Waves of all sorts are most easily expressed using complex numbers. The rotations and oscillations of quantum mechanics are neatly described via complex numbers. In contrast, real numbers alone cannot describe the wave/particle duality that is the fuzzy foundation of quantum theory.

There is some saving grace in that the imaginary i washes out when measuring a quantum phenomenon. The uncertain complex plane collapses to a real measured point.
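A minimal sketch in Python of that washing-out, using a hypothetical single-qubit state: the amplitudes are complex, yet the measurement probabilities – the squared magnitudes – are real.

```python
import numpy as np

# A qubit state with complex amplitudes (normalized): |psi> = a|0> + b|1>
a = (1 + 1j) / 2        # amplitude for outcome 0 -- has an imaginary part
b = 1 / np.sqrt(2)      # amplitude for outcome 1 -- purely real

# Measurement probabilities are squared magnitudes: the imaginary washes out.
p0 = abs(a) ** 2        # 0.5 -- a real number
p1 = abs(b) ** 2        # 0.5
assert np.isclose(p0 + p1, 1.0)   # probabilities sum to 1
```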

If one takes the quantum model to be a map of reality, which physicists most certainly do, the issue is what the imaginary part means. It must somehow represent information that is required for the system as a whole, but not in the perceived instant. In other words, conceptually taking complex numbers as representing something real, the imaginary portion must provide a necessary context that is not apparent when viewing moments of spacetime.

Perhaps the best way to see what the imaginary brings to reality is to try to set it aside. American theoretical physicist William Wootters did so.

Wootters and his colleagues constructed a real-number quantum theory. An extra bit was needed to fill the void of the imaginary; a supposed physical entity that Wootters dubbed the ubit (for “universal quantum bit”).

The ubit turned out to be a master of entanglement: an information conduit interacting with all the other ubits in the system describing the universe. In a word, the ubit signified entanglement.

With the ubit, the modeled world is all real. The same could be said for the imaginary i in complex Hilbert space, which takes the same backseat driver role that the ubit has; a difference with scant distinction. Except, in building the real-only model, the ubit came in as a necessary accouterment, rather than being built-in as part of the complex mathematical fabric.

With his real-number model, Wootters was able to shine a spotlight on an essential element that drops out of view in ever-emergent actuality: the meaning of i in Hilbert space. Wootters’ work showed that our world has a complexity which contains a bit of the imaginary, all entangled.

“People always thought of complex numbers just as a tool, but increasingly we are seeing that there is something more to them.” ~ English mathematician Dorje Brody

Transmission Optimality

Around 60 CE, Greek mathematician Hero of Alexandria noted that reflected light takes the shortest path.

During the 160s, Ptolemy characterized perceived properties of light, including reflection, refraction, and color. Though he constructed refraction tables, Ptolemy failed to discover the exact math relating the angles of incidence and refraction (Snell’s law).

Persian mathematician Ibn Sahl discovered the law of refraction in 984. Sahl’s derivation was unknown to later Europeans who rediscovered the law multiple times. The mathematical rule of refraction is now attributed in name to Dutch astronomer Willebrord Snell, who derived the law in 1621 but never published it in his lifetime.
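The law that Sahl derived and others later rediscovered, with n₁ and n₂ the refractive indices of the two media, and θ₁ and θ₂ the angles of incidence and refraction:

$$n_1 \sin\theta_1 = n_2 \sin\theta_2$$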

In 1658 French mathematician Pierre de Fermat proposed an optics principle of least time: light always travels most efficiently, getting from one point to another in the least time, whether reflected or refracted. Fermat’s principle (as it is commonly called) was broadened to encompass all wavefront behavior by Dutch physicist Christiaan Huygens in 1678. (Huygens’ principle applies only with 1 time dimension and an odd number of space dimensions. The principle fails with an even number of spatial dimensions.) French physicist Augustin Fresnel supplemented Huygens’ principle in 1818 with a mathematical treatment of interference (diffraction).

In the figure, a ray of light going from a to b would travel the least distance via the hypothetical straight line. Instead, light actually traverses a longer distance that takes less time, as light moves slower through water than air – the straight-line path would incur longer, sluggish passage in water.
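A minimal numerical sketch of that least-time path in Python, with merely illustrative geometry and refractive indices: minimize the total travel time over candidate crossing points along the air/water boundary, then confirm that the winning path obeys Snell’s law.

```python
import numpy as np

n_air, n_water = 1.0, 1.33        # illustrative refractive indices
c = 299_792_458.0                 # light speed in vacuum, m/s
h_a, h_b, d = 1.0, 1.0, 2.0       # source height, target depth, horizontal span (m)

def travel_time(x):
    """Total time from a (in air) to b (in water) via boundary point x."""
    t_air = np.hypot(h_a, x) / (c / n_air)          # speed in a medium is c/n
    t_water = np.hypot(h_b, d - x) / (c / n_water)
    return t_air + t_water

# Brute-force minimization over finely spaced candidate crossing points.
xs = np.linspace(0.0, d, 100_001)
x_best = xs[np.argmin(travel_time(xs))]

# Check Snell's law at the least-time crossing: n1*sin(theta1) = n2*sin(theta2)
sin1 = x_best / np.hypot(h_a, x_best)
sin2 = (d - x_best) / np.hypot(h_b, d - x_best)
print(n_air * sin1, n_water * sin2)   # nearly equal
```

Because light is slower in water, the least-time crossing point shifts to shorten the underwater leg – the bending that Snell’s law quantifies.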

The law of wave transmission optimality (the Huygens–Fresnel principle) was generalized for all dynamics in any physical system by Irish physicist and mathematician William Hamilton in 1834. This principle of least action is based on a single function: the Lagrangian. Hamilton’s principle was a rehash of the same discovery independently made by Gottfried Leibniz, Leonhard Euler, and Pierre Maupertuis in the first half of the 18th century.

Lagrangian mechanics was a 1788 reformulation of classical mechanics by Italian-French mathematician and astronomer Joseph-Louis Lagrange. The Lagrangian is widely used in physics. Lagrange’s equations let the motion of a system be calculated from a single function that incorporates all the information about the system’s dynamics.
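In modern notation, with the Lagrangian L = T − V (kinetic minus potential energy), the action S is stationary along the path actually taken, which yields the equation of motion:

$$S = \int_{t_1}^{t_2} L(q, \dot{q}, t)\,dt, \qquad \delta S = 0 \;\Rightarrow\; \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0$$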

Although originally formulated for classical mechanics, Hamilton’s principle also applies to all physics theories; notably playing a key role in quantum mechanics.

As has long been known, light travels optimally, as do all propagating energy waves. The mathematics behind this shows that such matchless motion necessitates omniscience: knowing all the instant information in the universe.

This profundity is no casual conclusion. It is a statement of fact. For light, or any energy wave, to behave as it does, all information about actuality must be instantaneously incorporated. Every physics theory accepts this axiomatically.

Optimal propagation clearly indicates a unified, coherent intelligence from the quantum level on up, and strongly suggests teleology: that the game afoot which we call Nature has intention.

○○○

The history of physics has been an unwinding, from describing observed Nature to formulating Nature as an illusion, from equations that made sense of what the senses sensed to formulas that make foolery of what is perceived.

“Even if there is only one possible unified theory, it is just a set of rules and equations. What is it that breathes fire into the equations and makes a universe for them to describe? The usual approach of science of constructing a mathematical model cannot answer the questions of why there should be a universe for the model to describe. Why does the universe go to all the bother of existing?” ~ Stephen Hawking

Albert Einstein

“I am no Einstein.” ~ Albert Einstein

In 1895 Albert Einstein failed the entrance exam at the Polytechnic University in Zurich, Switzerland, so his parents sent him to a secondary cantonal school in northern Switzerland. After a year there he made his way into the Polytechnic.

After graduating from university, Einstein got a temporary teaching position at a school in Schaffhausen. His 2-year search for a permanent teaching post proved fruitless.

A former classmate’s father used his influence to get Einstein a job at the Swiss patent office in 1902, where Einstein became a 3rd-class examiner for patent applications related to electromechanical devices. Although his job became permanent, he was passed over for promotion to 2nd-class until he “fully mastered machine technology.”

In the meantime, the lackluster but ambitious Einstein was writing papers. His 1901 paper on capillary action in straws was published in a prestigious physics journal.

1905 was Einstein’s jackpot year. In what has been called his annus mirabilis, or “miracle year,” Einstein completed his thesis – on molecular dimensions – and got his PhD. He also published 4 papers on various topics: the photoelectric effect; Brownian motion; a terribly simple formula equating matter and energy (E = mc²); and special relativity, a mathematical statement striking at the heart of Newtonian absolutist physics.

Mass-Energy Equivalence

“The mass of a body is a measure of its energy content.” ~ Albert Einstein in 1905

Classical physicists multiplied an object’s mass by the square of its velocity (mv²) for a handy indicator of its kinetic energy (now reckoned as ½mv²). In his relativistic equation E = mc², Einstein simply substituted the speed of light (c) for the classical notion of velocity (v).
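The kinship shows upon expanding the full relativistic energy for velocities well below light speed: the rest energy mc² leads, with the classical kinetic term as the first correction.

$$E = \frac{mc^2}{\sqrt{1 - v^2/c^2}} = mc^2 + \tfrac{1}{2}mv^2 + \cdots$$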

“Mass and energy are both but different manifestations of the same thing.” ~ Albert Einstein

Max Planck quickly followed Einstein in expressing mass as a form of energy. Other physicists contemporaneously converged on the same equation.

The relationship between matter and energy is asymmetric. Whereas disbanding matter releases energy, as atomic bombs illustrate, energy does not so readily congeal into matter.

The formula is astonishing in its implications. The speed of light squared is a huge number. m = E/c² means that even the smallest amount of matter locks up an unimaginable amount of energy. In contrast, chemistry, which works profound transformations by tweaking chemical bonds, involves the slightest flutterings.
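A worked example of the scale: a single gram of matter, fully converted.

$$E = mc^2 = (10^{-3}\,\mathrm{kg}) \times (2.998 \times 10^{8}\,\mathrm{m/s})^2 \approx 9 \times 10^{13}\,\mathrm{J}$$

That is roughly 21 kilotons of TNT equivalent – on the order of the Nagasaki bomb – from a paperclip’s worth of mass.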

○○○

For his efforts in mastering machine technology, Einstein was promoted to technical expert “second class” at the Swiss patent office in 1906. Einstein left the patent office in 1909, headed to the type of university position he had sought a decade before.

Light Speed

“If you are in a spaceship that is traveling at the speed of light, and you turn the headlights on, does anything happen?” ~ American comedian Steven Wright

In the observable dimensions, light seems to travel as fast as anything can. Nonlocality has shown that light speed as an apex is an assumption with a limited domain.

That light even has a speed was something long unconsidered: light just was, pervasive from its source. Studying the movement of Jupiter’s moon Io in 1676, Danish astronomer Ole Rømer first demonstrated that light traveled at a finite speed, not instantaneously.

In 1865, with a paper on electromagnetism, James Clerk Maxwell started using V – for velocity – as the symbol for light speed. That was the notation adopted by Einstein in his 1905 papers on relativity.

By the end of the 19th century, c commonly denoted the speed of electromagnetic waves. This convention began in 1846, with a paper by German physicist Wilhelm Weber that aimed to unify electrostatics with electrodynamics. As light chauffeurs electromagnetism, you can tell where this story is going.

Writing about electrons in an influential paper published in 1904, German physicist Max Abraham used c rather than V as the speed of light. In 1907, Albert Einstein started using c to signify the speed of light, editing his seminal relativity papers to use c instead of V.

Increasingly precise measurements of light in transit culminated in 1975 with the speed now used: 299,792,458 meters per second (in a vacuum).

The Horizon Problem

Cosmic microwave background (CMB) radiation left a fossilized imprint of the early universe, from some 380,000 years after its onset. By that time the universe was already quite spread out, to put it mildly.

The CMB indicates that the temperature and other fundamental physical properties of the universe were largely uniform then. Such consistency should not be possible. There is no mechanism to explain how the universe had a consistent temperature long before heat-carrying photons had time to scour the cosmos and deliver uniformity.

American physicist Charles Misner considered this conundrum in the late 1960s and termed it the horizon problem. Alan Guth dreamed up cosmic inflation to address this inscrutable homogeneity. But there is another possible explanation: an inconstant light speed; or, as it is commonly called, a varying speed of light (VSL).

Einstein first mentioned VSL in 1907 and seriously considered it until propounding general relativity in 1915. His conclusion was that light speed was subject to gravity (by warping spacetime).

French astrophysicist Jean-Pierre Petit first proposed VSL in 1988 to solve the horizon problem. Others have since variously modeled how VSL might work.

The CMB reflects the speed at which light and gravity propagate as the temperature of the universe changes. Following the ΛCDM model, some astrophysicists propose VSL in the feverish early universe, with light outracing gravity by exceeding the blazing speed it now travels. Though merely speculative, VSL is not contradicted by evidence, as cosmic inflation is.

Varying light speed would invalidate Einstein’s relativity theories, which anyway were inapplicable in the infant universe: before matter took form, mass had little meaning, and gravity would have been nebulous. The physics of the early universe is not understood.