The Echoes of the Mind – Mathematics


“Mathematics is the music of reason.” ~ English mathematician James Joseph Sylvester

“If you want to get people to believe something really, really stupid, just stick a number on it.” ~ American author Charles Seife

To those who appreciate it, mathematics is an art form. But mathematics is too abstract an art for many. As with other organisms, the human mind has certain mathematical aptitudes, and certain natural limitations. For most people, beyond basic arithmetic, mathematics hurts the head.

“Do not worry about your difficulties in mathematics. I can assure you mine are still greater.” ~ Albert Einstein, who was not an especially adroit mathematician

Faculty in math is innate in all organisms. Even microbes must be able to tell when food supplies are plentiful or running short, in order to take appropriate action: which they in fact do, based upon estimated quantity.

Orb-weaving spiders keep a tally of how many silk-wrapped prey are in the larders on their webs. Many fish live in schools, which statistically gives them better odds of escaping predation. Fish can distinguish small differences in group size at a glance. A threespine stickleback can tell a group of 18 from a group of 21 in an instant: a speedy comparative power humans cannot match.

Inborn numerosity and mathematical proficiency are not the same. We hear that the price of something rose by 50% and then fell by 50%, and we mistakenly think: “oh, back to where it started.” (A $100 price raised 50% becomes $150; cut 50%, it falls to $75.) Our minds have a natural number sense that readily goes astray with fractions.

In humans, mathematics ability is related to intelligence in the broadest sense. Hence, mathematics and language are the 2 facets of intelligence tests.

◊ ◊ ◊

“For a physicist, mathematics is not just a tool by means of which phenomena can be calculated, it is the main source of concepts and principles by means of which new theories can be created.” ~ English-born American theoretical physicist and mathematician Freeman Dyson

Mathematics reigns over many disciplines of modern life. Physics and economics are essentially mathematical studies. Hypotheses become theories by empirical buttressing which takes mathematical form in models. Existing theories are crushed by the weight of numerics.

Capitalism is an empire of numbers. Its copious failures owe to its not numerically encapsulating factors critical to the well-being of the societies which employ the market system.

“Without mathematics, there’s nothing you can do. Everything around you is mathematics. Everything around you is numbers.” ~ Indian writer and mental calculator Shakuntala Devi


“A set is a Many that allows itself to be thought of as a One.” ~ Georg Cantor

Ultimately, mathematics is about sets. A set is an enumerated or rule-specified collection of entities. A portion of a set is a subset.

Though sets may be of any entity, many sets have numbers as their members. Mathematical operations are the manipulation of sets, typically with 1 or more members (elements). This characterization is conventional but has been controversial.

German mathematician Georg Cantor developed set theory 1874–1884. In his work, Cantor found within sets an “infinity of infinities.”

Integers are a subset of real numbers, even as both sets have an infinite number of members. Cantor understood that there are more real numbers than integers, and so realized infinity can be relative.

“The essence of mathematics is its freedom.” ~ Georg Cantor

Some of Cantor’s findings relating to infinity were so counterintuitive as to shock contemporaries. Cantor posited transfinite numbers: numbers larger than finite numbers, but shy of being absolutely infinite.

German mathematician Leopold Kronecker, a contemporary of Cantor who worked in number theory and algebra, was a stern critic of Cantor, calling him a “scientific charlatan” and a “corrupter of youth.” (Socrates was similarly accused of corrupting the young with foul ideas.)

“God made the integers, all else is the work of man.” ~ Leopold Kronecker

Some Christian theologians considered Cantor’s work blasphemous: challenging the absolute infinity of God. Cantor, a deeply religious man, rejected the charge.

In the late 19th century, French polymath Henri Poincaré called set theory a “grave disease” infecting mathematics. Criticism was mixed with accolades. Cantor received the highest honor from The Royal Society of London in 1904 for his work on set theory. Yet this did not stop the carping. Austrian-British philosopher Ludwig Wittgenstein lamented that mathematics had been “ridden through and through with the pernicious idioms of set theory,” which were “utter nonsense.”

The internal contradictions in set theory prompted Dutch mathematician and philosopher L.E.J. Brouwer to tautologically remark:

“A false theory which is not stopped by a contradiction is nonetheless false.”


Sets are notated within braces. The set of seasons is stated thusly:

Seasons = {winter, spring, summer, fall}

A set may be empty (∅ or {}). Note that ∅ is not the same as {∅}. Whereas ∅ (or {}) is an empty set, like an empty bag, {∅} is a bag containing an empty bag.

Members of a set are designated by ∈, while nonmembers are ∉.

1 ∈ {1, 2, 3}

4 ∉ {1, 2, 3}

Sets can be compared, or otherwise operated on in various ways. Sets may, for example, be equal (the same) or share some elements. A few simple examples illustrate, using sets A & B below.

A = {1, 2, 3}

B = {2, 3, 4}

A union (∪) of 2 sets is a set that has all elements of both.

A ∪ B = {1, 2, 3, 4}

The intersection (∩) of 2 sets is a set that has only their shared elements.

A ∩ B = {2, 3}

To simplify conceptualization, sets are often visualized using Venn diagrams, introduced by English logician and philosopher John Venn in 1880. (If the sets A and B had no shared members, their representative Venn diagram circles would not overlap.)

A set difference (\) comprises those elements of one set not in another.

A \ B = {1}

B \ A = {4}
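These set operations can be tried directly in Python, whose built-in set type supports union, intersection, and difference (a minimal sketch, not from the source text):

```python
# Sets A and B from the text, using Python's built-in set type.
A = {1, 2, 3}
B = {2, 3, 4}

print(A | B)       # union A ∪ B -> {1, 2, 3, 4}
print(A & B)       # intersection A ∩ B -> {2, 3}
print(A - B)       # difference A \ B -> {1}
print(B - A)       # difference B \ A -> {4}
print(1 in A)      # membership (∈) -> True
print(4 not in A)  # nonmembership (∉) -> True
```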

The most widely used sets, which are of numbers, are all infinite, even as the integers and rational numbers are eminently countable. That there are sets with an infinite number of elements cannot be proven from first principles, and so is accepted as axiomatically true.

The foregoing is just a warm-up to the wondrous world of sets, whose scope is so vast as to envelop both mathematics and logic.

Set theory has its paradoxes. Not surprisingly, almost all revolve around the ever-troublesome notion of infinity (∞). A few other paradoxes arise in the unaccountability of relations between sets.

“Most astonishing is that world of rigorous fantasy we call mathematics.” ~ English anthropologist Gregory Bateson


“Approximate number systems are universally displayed across animal species.” ~ American cognitive scientist Jessica Cantlon et al

Organisms have an innate sense of quantity. Most can instantly tell few from many, can sense a quantity change from what had previously been experienced, and can count to some degree.

Newborn chicks understand both relative and absolute quantities. Their mental number line – running left-to-right small-to-large – is identical to innate human conception.

This is just the beginning of math by organisms. Complex algebraic functions are performed by plants to maximize growth potential.

Even cells internally allocate and organize limited resources to obtain optimal productivity: a problem of quantities so complicated that humans have no idea how to program computers for such solutions.

The evolution of numeration notation illustrates that our own sense of mathematics is only innate for tallying, as with other animals. All else owes to symbolic manipulation, which is closely related to language aptitude.

Consumer research has shown that round numbers appeal to the emotions, whereas non-rounded numbers invoke logic processing. Consumers are more likely to buy a product that is roundly priced when a purchase is primarily driven by feelings, such as a camera for family vacations. In contrast, a non-rounded price is better for utilitarian products, where reason comes into play.


Arithmetic was the first and most basic branch of math to emerge in the earliest societies. The more primitive the society, the more closely its number representations hewed to sets of straight lines. The earliest writings of the Mesopotamians and Egyptians, from 3400 BCE, showed vertical straight lines to signify quantities.

In the 3rd century BCE the Hindus made a most important advance in numeral designation. A straight line represented the number 1, but distinct symbols were used for greater quantities.

Meanwhile, the Etruscan civilization was still using tally sticks. These evolved into Roman numerals, with a notch for 5 (V) and a crosscut for 10 (X).

Roman numerals were similar to the Babylonian numeral system, which first appeared ~3100 BCE. The Babylonians had the first known positional numeral system, in which the value of a particular digit depended upon the digit itself and its position within a number.

The Babylonians understood the notion of nothingness, but it was seen as a lack of a number, not a number unto itself. The Babylonians used a space to mark the nonexistence of a digit at a certain place.

In calculating Jupiter’s orbit, Babylonian astronomers came close to discovering calculus. Their mathematical techniques ~350 BCE were long thought by historians to have developed only in the 14th century.

“The Babylonians developed abstract mathematical, geometrical ideas about the connection between motion, position and time that are so common to any modern physicist or mathematician.” ~ German astroarchaeologist Mathieu Ossendrijver

The Arabs learned of the Hindu numbering system a millennium after its invention, in the 8th century CE. It first appeared in European arithmetic in 976, using the 9 digits: 1, 2, 3, 4, 5, 6, 7, 8, 9.

Though a tremendous step forward, the lack of a digit for zero precluded important arithmetic operations. Subtraction was problematic if not impossible.


“The point about zero is that we do not need to use it in the operations of daily life. No one goes out to buy zero fish.”  ~ English mathematician and philosopher Alfred North Whitehead

Zero was slow in its incorporation into mathematics, partly from the primal fear of the void it represented.

Most ancient peoples believed only nothingness and chaos were extant before the universe came to be. The Hebrew creation myths said that Earth was a chaotic emptiness before God showered it with light and formed its features.

The Greeks believed that Darkness once reigned as the mother of all, and that Darkness begat Chaos. Together, Darkness and Chaos spawned creation.

The Greeks and Romans hated zero so much that they clung to their own Egyptian-like notation rather than convert to the Babylonian system, which was easier to use.

That did not mean that zero was utterly unemployed. Ancient Greek astronomers used placeholder zero in their calculation tables: an import from Babylonian practice.

Once the calculations were done, Greek astronomers cast zero aside, writing the results in clunky Grecian numerals sans zero. The Greeks understood the usefulness of zero but wanted nothing to do with it.

The fear of zero went beyond its signification of nothingness. Zero’s mathematical properties were spooky.

The axiom of Archimedes, though conceived in terms of geometric magnitudes, amounts to the linearly progressive ordering of the counting numbers {1, 2, 3…}: any quantity, added to itself enough times, surpasses any other. The axiom is named after 3rd-century-BCE Greek mathematician Archimedes.

Zero alone mystically defies the axiom. It adds nothing to anything; takes nothing as well.

“Nothing can be created from nothing.” ~ Roman philosopher and poet Lucretius

Zero is more than mysterious: it is malevolent in undermining the arithmetic operations of multiplication and division.

Less-superstitious ancients cottoned to numeric nothing. The Mayans began using zero as a number in the 1st century.

500 years later the Hindus adopted zero, having used it as a placeholder symbol for a couple of centuries. In the 5th century, Indian mathematicians changed their number system from the Greek to the Babylonian style. Unlike the nonplussed Greeks, Indians were nonchalant about nil.

“In the earliest age of the gods, existence was born from nonexistence.” ~ Rig Veda

Zero slowly spread from India across the Middle East before reaching Europe after the Dark Ages. Italian mathematician Fibonacci popularized the Hindu-Arabic numeral system in the 13th century.

As math evolved, zero became increasingly important. It became clear that zero was essential to arithmetic, which would otherwise be incomplete.

Zero is unique in 4 properties which other digits lack.

1) Adding or subtracting zero leaves a number unchanged.

2) Multiplying any number by zero always results in zero.

3) Raising any number to the power zero always yields unity (one).

4) Dividing any number by zero is undefined, and therefore forbidden: the quotient would have to be infinity (∞), which is not a number.

Beyond its purely digital properties, zero serves as a starting point of a graph, or in a coordinate system for analytic geometry.

Zero completes the base-10 (decimal) number system, acting as the essential ballast to algebra. Calculus could not be developed without zero.

“In the history of culture, the discovery of zero will always stand out as one of the greatest single achievements of the human race.” ~ Latvian mathematician Tobias Dantzig

Unlike superstitious humans, other animals have no problem conceptualizing nothing. Honeybees understand the concept of zero quite well, as do parrots and primates.


“4 times 5 is 12, and 4 times 6 is 13, and 4 times 7 is – oh dear! I shall never get to 20 at that rate.” ~ Alice in Alice’s Adventures in Wonderland (1865) by English novelist Lewis Carroll

Our hands have 10 fingers, and so a base-10 number system seems natural for tallying, which is where arithmetic got its start. The ancient Chinese and Egyptians used base-10.

Some early societies employed other bases. The ancient Sumerians and Babylonians had sexagesimal (base-60) systems, which simplified handling fractions in calculations.

The Mayans (1500 BCE – 1700 CE) had a vigesimal (base-20) system. Mayan priests used a mixed numeral system of base-20 and base-360. The Mayan calendar had 360 days in a year.

The modern decimal (base-10) system eventuated in Europe by the 15th century with the incorporation of zero into the Hindu-Arabic number system.

Owing to memory storage being of bits either on or off, electronic computers are built upon a binary (base-2) system. For human readability, bits are grouped in fours into the digits of a hexadecimal (base-16) system.
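The grouping of bits into hexadecimal digits can be seen in any language with binary and hex literals; a small Python illustration (the specific values are ours):

```python
# Each hexadecimal digit stands for exactly 4 bits,
# so binary values group naturally into hex notation.
n = 0b1011_0101        # 8 bits
print(bin(n))          # 0b10110101
print(hex(n))          # 0xb5: 1011 -> b, 0101 -> 5
print(n)               # the same value in decimal: 181
```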


“It is the civilized world’s facility with manipulating the representations of reality provided by numbers that has led to such awesome material progress in the past few centuries.” ~ English statistician David Hand

There are numerous sets of numbers. The numbers used for tallying are natural numbers {1, 2, 3,…}. Adjoining zero to the natural numbers gives the set of whole numbers.

Extending whole numbers into negative territory yields integers {…, -3, -2, -1, 0, 1, 2, 3,…}.

It took mathematicians quite a while to accept negative numbers. The Greeks, so interested in geometry, had no need for negatives.

By the 7th century Indian bookkeepers were using negative numbers to represent debt. Italian polymath Gerolamo Cardano used negative numbers in his 1545 algebra textbook Ars Magna.

Among the natural numbers, a number greater than 1 which has only itself and 1 as factors is a prime number {2, 3, 5, 7, 11, 13, 17, 19, …}. A natural number that is not a prime is called a composite. Euclid demonstrated ~300 BCE that there are an infinite number of primes.

Prime numbers have long been a fascination for mathematicians. Many questions about them remain open. One of the oldest and best-known issues is Goldbach’s conjecture: that every even number greater than 2 can be expressed as the sum of 2 primes. (Goldbach’s conjecture arose from an aside that German mathematician Christian Goldbach wrote in a letter to Leonhard Euler on 7 June 1742.) Generally assumed to be true, and shown so far to 4 × 10¹⁸, the conjecture remains unproven despite strenuous effort.
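Goldbach’s conjecture is easy to spot-check by brute force for small even numbers (an illustrative sketch; the helper names are ours):

```python
# Check Goldbach's conjecture for small even numbers: every even n > 2
# should be expressible as a sum of two primes.
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n):
    # Return a pair of primes summing to n, or None if none exists.
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

for n in range(4, 101, 2):
    assert goldbach_pair(n) is not None   # holds for every even n checked

print(goldbach_pair(28))  # (5, 23)
```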

The set of real numbers includes integers, rational, irrational, algebraic, and transcendental numbers.

Any number that can be expressed as a fraction of 2 integers is a rational number (e.g. 3/4). A rational number may have an infinitely repeating decimal fraction (e.g. 82 / 111 = 0.738738…).
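The repeating decimal of 82/111 comes from long division cycling through the same remainders; a short Python sketch (the function name is ours):

```python
from fractions import Fraction

def decimal_digits(num, den, k):
    # First k decimal digits of num/den, produced by long division.
    # A rational's digits must eventually cycle: only finitely many
    # remainders exist, so one must recur.
    digits, r = [], num % den
    for _ in range(k):
        r *= 10
        digits.append(r // den)
        r %= den
    return digits

print(decimal_digits(82, 111, 9))  # [7, 3, 8, 7, 3, 8, 7, 3, 8]
print(Fraction(82, 111))           # exact rational: 82/111
```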

A real number that is not rational is irrational (e.g. √2, which equals 1.41421356…). An irrational number has decimal representations which are neither terminating nor repeating.

An algebraic number is one that can be expressed as the root of a non-zero polynomial with 1 variable and rational coefficients. (It is easier to understand that algebraic numbers are real numbers that are not transcendental.)

A polynomial is an algebraic expression consisting of variables and coefficients, where the expression (aka function) is limited to addition, subtraction, multiplication, and non-negative integer powers (aka exponents). An exemplary algebraic expression is:

3y² − 5x + 7

In the above expression, x and y are variables (aka indeterminates): the letters represent numbers which may vary in value. A variable is a placeholder in a mathematical formula. A number before a variable is a coefficient, which stands ready to multiply whatever value a variable takes.

There are specific naming conventions for variables in the different branches of mathematics. For instance, the axes of 3d coordinate space are always x, y, and z.

An exponent signifies multiplication to a certain power (number of times). For example:

y³ = y × y × y

Real or complex numbers that are not algebraic are transcendental. Transcendentals are numbers with decimal expansions that go on and on without repeating.

Though only a few classes of transcendental numbers are known, they are not rare. Indeed, most real and complex numbers are transcendental. But it can be extremely difficult to prove that a given number is transcendental.

π and e are the best-known transcendental numbers.

π (3.14159…) is the ratio of a circle’s circumference to its diameter. But π shows up in many situations unrelated to circles. For instance, the series 1/1² + 1/2² + 1/3² + 1/4² + 1/5²… (= 1 + 1/4 + 1/9 + 1/16 + 1/25…) gets closer to the value π²/6 (= 1.645…) as more terms are added. π is also intimately (and mysteriously) involved with how prime numbers are distributed.

e (2.71828…) is the base of the natural logarithm. In other words, the natural log is the inverse of the function eˣ. Practically, e is about time under continuous growth.

“The miraculous powers of modern calculation are due to three inventions: the Arabic notation, decimal fractions, and logarithms.” ~ American mathematics historian Florian Cajori

A logarithm is the exponent of the power to which a base number must be raised to equal a given number. The 2 equations below are equivalent.

y = bˣ  ⇔  x = log_b(y)

For example, the logarithm of 1,000 to base 10 (the decimal system) is 3.

1000 = 10³  ⇔  3 = log₁₀(1000)

e can be defined in many ways. For one, e is the limit of (1 + 1/n)ⁿ as n approaches infinity.
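This limit is easy to watch numerically (a quick check, not from the source):

```python
import math

# (1 + 1/n)^n creeps toward e = 2.71828... as n grows.
for n in (1, 10, 100, 10000):
    print(n, (1 + 1/n) ** n)

print(math.e)  # the target value
```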

The natural log – written loge or ln (from the Latin logarithmus naturalis) – gives the time needed to reach a certain level of growth, given constant growth.

For example, for 10 times growth of something that grows 100% annually, the wait time is 2.302 years (ln(10)).

With g as growth (at a 100% compound growth rate) and d as duration:

g = eᵈ  ⇔  d = ln(g)

10 (times growth) = e^2.302  ⇔  2.302 (years) = ln(10)

eˣ is a scaling factor: the amount of continuous growth after a certain (x) duration. The inverse – ln(x) – is the duration needed to reach a certain level of growth.

e^x = e^(rate × time); with rate = 100% → e^(1.0 × time) = e^time

If the growth rate is only 5% rather than 100%, then the duration is 20 times longer (1.0 / 0.05).

Suppose we want to know how long a wait for a 30-times growth at a growth rate of 5% per period.

ln(30) = 3.4; i.e., rate × time (at 100%) = 3.4

for rate = 0.05:

time = 3.4 / 0.05 = 68 (periods)
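This waiting-time arithmetic checks out numerically (a sketch):

```python
import math

t_full = math.log(30)   # time for 30x growth at a 100% continuous rate: ~3.4
rate = 0.05
t = t_full / rate       # at 5%, the wait is 20 times longer: ~68 periods

print(round(t_full, 1))            # 3.4
print(round(t, 1))                 # 68.0
print(round(math.exp(rate * t)))   # e^(rate x time) recovers the 30x growth
```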

◊ ◊ ◊

The sets that comprise the all-encompassing real numbers form a nested hierarchy, with a bifurcation at the top between rational and irrational numbers. One oddity is the set of algebraic numbers, which includes all rational numbers but also selectively some irrational ones.


Materialism drove mathematics. The earliest and essential need for math was accounting of property: both quantities of goods and tracts of land. Thus arose arithmetic and geometry.


“The hardest arithmetic to master is that which enables us to count our blessings.” ~ American moral and social philosopher Eric Hoffer

The ancient Sumerians, Egyptians, and Babylonians took some of the tedium out of their calculations with the abacus, which was invented in Sumer ~2700 BCE. The Babylonians advanced computation from Egyptian times partly by their sexagesimal system.

5th-century-BCE Greek historian Herodotus noted that the ancient Egyptians had calculated with pebbles. Such counters continued into Roman times. The word calculate derives from the Latin calculus, which means pebble.

Abacus addition and subtraction is accomplished by regrouping and exchange. Multiplication is done by repeated additions, division by repeated subtractions.

Division sometimes creates remainders, which begat fractions. Fractions are a natural outgrowth of practical arithmetic in treating portions of a whole.

Over the centuries, mathematicians came to eschew this pragmatic view of fractions. Fractions became abstract symbols, governed by definite rules which bound them to whole numbers. Nonetheless, fractions ushered in a whole new level of complexity: by emphasizing the ordinal nature of numbers as contrasted to their cardinal or quantitative property. As points on a line, ordinal numbers, and particularly fractions, provided a bridge between arithmetic and geometry by presenting an infinite set of points.


“This knowledge at which geometry aims is of the eternal. Geometry will draw the soul toward truth.” ~ Plato

Geometry arose in addressing the practical problems of measuring land areas and granary volumes. Whereas the Babylonians had an algebraic bent, the Egyptians were stronger with geometry. Both the Babylonians and ancient Indians anticipated the geometric work of ancient Greek mathematician and philosopher Pythagoras. The Pythagorean theorem was known in Mesopotamia long before Pythagoras.

Mathematicians in ancient China were also familiar with basic geometry but did not know all the correct modern formulas.

What Pythagoras had that those before him lacked was a cult of true believers: the Pythagoreans. Pythagoreanism was a brew of esoteric and mystical beliefs influenced by mathematics, astronomy, and music. Their lifestyle discipline included purification rites and vegetarianism: all designed to empower their souls.

The Pythagoreans considered the universe ordered by numbers. A person who fully understood the harmony in numerical ratios would become divine and immortal.

“Numbers rule the universe.” ~ Pythagoras

The Pythagoreans’ interest was not in manipulating numbers – arithmetic – but in understanding the properties of numbers. Geometry was a prominent means.

The famous Pythagorean theorem states that the square of the hypotenuse (c) of a right triangle is equal to the sum of the squares of its 2 sides (a, b).

c² = a² + b²

In the instance of a right triangle with sides each equal to 1, the square of the hypotenuse is 2.

2 = 1² + 1²

√2 is not a rational number, as it cannot be expressed as a ratio of 2 counting numbers.
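The irrationality of √2 follows from a classic proof by contradiction, sketched here in outline:

```latex
% Sketch: \sqrt{2} is irrational.
\text{Suppose } \sqrt{2} = a/b \text{ with integers } a, b \text{ in lowest terms.} \\
\text{Squaring: } 2b^2 = a^2 \text{, so } a^2 \text{ is even, hence } a = 2k. \\
\text{Then } 2b^2 = 4k^2 \Rightarrow b^2 = 2k^2 \text{, so } b \text{ is even as well,} \\
\text{contradicting lowest terms. Hence no such ratio } a/b \text{ exists.}
```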

At the time, the Greeks believed that all numbers were rational. The shocking discovery of irrational numbers by 5th-century-BCE Pythagorean Hippasus was something that the secret society wanted to keep secret. Hippasus drowned at sea shortly after divulging this revelation; one of history’s lesser unsolved mysteries.

Not surprisingly given its history, the square root of a number (√x) is called a radical. The symbol √ is the radical sign, and x is the radicand.

Euclid is considered the father of geometry. His book Elements (~300 BCE) developed all known mathematics into an integrated whole. Elements served as the primary textbook for teaching math, especially geometry, for over 2,000 years: from its original publication into the early 20th century.

In solving problems and providing proofs geometry is intertwined with formal logic, particularly deduction. Euclid developed the basic concepts which continue to be used: axioms, postulates, and theorems. An axiom is a self-evident truth requiring no proof. A postulate is an assumption. A theorem is a proposition proved via axioms and postulates.

Euclid’s 5th postulate – the parallel postulate – posits that parallel lines never meet, however far extended. As an assumption, its comparative complexity long troubled mathematicians.

“I have had my results for a long time: but I do not yet know how I am to arrive at them.” ~ Carl Friedrich Gauss

Relaxing the assumption underlying the parallel postulate resulted in non-Euclidean geometry, by way of hyperbolic and elliptic geometries. This breakthrough was developed in the early 19th century.

Though others did earlier work that they kept to themselves, including Gauss (~1818), the first-published essays on hyperbolic geometry were ~1830 by Hungarian mathematician János Bolyai and Russian mathematician Nikolai Lobachevsky.

In a famous 1854 lecture, German mathematician Bernhard Riemann presented higher-dimensional manifolds: a non-Euclidean geometry which in its simplest form is elliptic geometry. Riemannian geometry enabled Einstein’s general relativity theory. Bizarrely, Einstein refused to countenance extra-dimensionality.


“Elementary arithmetic teaches how to add, multiply, subtract, and divide. This is practical mathematics sufficient for workaday uses. Elementary algebra, serving subtler purposes, asks about general relations among the 4 arithmetic operations.” ~ American mathematician Avron Douglis

Algebra introduces variables into arithmetic: a conceptually simple extension, but one that takes mathematical operations to a level of abstract sophistication.

In the late 16th century French mathematician François Viète introduced the use of letters as symbols for variables in equations. He also coined the term coefficient for a stated quantity multiplicatively applied to another value, which was typically a variable.

Algebra is a unifying thread throughout mathematics, as it is the study of symbols and the rules for manipulating them.

“Algebra deals with operations upon symbolic forms.” ~ Tobias Dantzig

Algebra affords the creation of models which mathematically abstract phenomena. These models are packaged as equations, which define relations: typically, of equality, though equations may also be of some other relationship.

“The leading characteristic of algebra is that of operations on relations. This also is the leading characteristic of thought.” ~ William James

A common equation is a function, which expresses how one quantity varies with another. In the function y = ƒ(x), x is the independent variable (input) and y is the dependent variable (output). The value of y depends upon x.

To solve or model complex problems, related equations are grouped into a system.

“Algebra is the metaphysics of arithmetic.” ~ English naturalist John Ray

Imaginary Numbers

“That imaginary numbers have hitherto been surrounded by mysterious obscurity is to be attributed largely to an ill-adapted notation. If, for instance, +1, -1, √-1 had been called direct, inverse, and lateral units, instead of positive, negative, and imaginary (or even impossible), such an obscurity would have been out of the question.” ~ Carl Friedrich Gauss

One radicand stands apart from all others: √–1. The number repeatedly cropped up, and was shunted aside as misunderstood, for a millennium.

The ancient Egyptians knew how to roughly calculate the volume of a frustum (truncated pyramid): quite an accomplishment for a people who had no knowledge of integral calculus. In figuring a frustum, the ancient Egyptians encountered √–1. They ignored the result and its implications. So did 1st-century-CE Greek-Roman mathematician and engineer Hero of Alexandria in dealing with the same problem. As did 3rd-century-CE Alexandrian Greek mathematician Diophantus, who made advances in algebra. And too Hindu mathematician Mahavira in the 9th century, who (wrongly) affirmed the suspicions of Hero and Diophantus: that a negative number cannot have a square root.

“As in the nature of things, a negative number is not a square, it has therefore no square root.” ~ Mahavira

Negative attitudes about negative numbers were long-standing. The general line of logic was that there is something seriously wrong about a tally that not only does not exist but is an inherent deficit.

Negative numbers were long thought of as an inexplicable pit of quantity. As the ancients had a phobia about voids, banish the thought.

As we shall shortly see, negative numbers can make strange bedfellows for their positive companions even in arithmetic calculations.

The pox was not just on negative numbers. Positive square roots that don’t square were insufferable. As 20th-century Russian theoretical physicist and cosmologist George Gamow wrote:

There was a young fellow from Trinity
who took √∞.
But the number of digits
gave him the fidgets;
he dropped Math and took up Divinity.

Cubic equations when graphed render an S-shaped curve. Figuring the solutions to cubic equations (e.g., x³ + a₁x² + a₂x + a₃ = 0) was long a source of consternation to mathematicians. Even depressed cubics, which lack the x² term, caused conniptions.

The struggle with cubics reached Gerolamo Cardano in the mid-16th century; he managed the feat. In doing so he encountered √–1 and embraced it without fear. Though there were hiccups, others followed, furthering the sense that √–1 was a workable concept.

In the late 17th century, English mathematician John Wallis was the first to try to attach some physical significance to √–1. He fumbled. Wallis figured that since a ÷ 0 is positive infinity (∞) when a > 0, and since a ÷ b is negative when b < 0, that negative quotient must be greater than positive infinity: for its denominator is even less than the zero which yields infinity. This left Wallis with the astounding conclusion that a negative number is simultaneously both less than zero and greater than infinity. No wonder negative numbers long received an unwelcome reception from mathematicians.

René Descartes was the culprit who, in 1637, pinned the term imaginary on the square roots of negative numbers, which were formerly known as sophisticated or subtle. His frustrated failure to make geometric sense of these numbers was no doubt instrumental in his derision, which unfortunately stuck for this sensibly subtle set of numbers.


“Fractals are dimensionally discordant.” ~ Benoît Mandelbrot

A fractal is a set of scale-invariant, self-similar, iterative patterns. Fractals are abundant in Nature and can be produced via formulas of complex numbers.

Looking down on a sea shoreline from 100 meters displays a series of arcs where the water has receded from the sand. At a closer scale, such as 10 meters, one’s field of vision is smaller, but the pattern of arcs is self-similar. At 1 meter, and closer still, selfsame iterative patterns recur. A shoreline is fractal.

Fluid turbulence forms fractal patterns – similar, though not quite identical – at numerous scales of observation.

The patterns that tree leaves and branches form are fractal. Romanesco broccoli (pictured) is floridly fractal.

Fractals got their mathematical start in the 17th century, with Leibniz pondering recursive self-similarity, though his thinking was limited to straight lines. 2 centuries later, German mathematician Karl Weierstrass presented the first function and graph that had fractal properties.

Numerous mathematicians became fascinated with the multidimensional properties of fractals and elaborated mathematical constructs which produced them in a wide variety of ways.

Polish-born French American mathematician Benoît Mandelbrot popularized fractals. He was first to use computers to generate fractal images. His Mandelbrot set (pictured) is well-known. It is viewed on a computer as an infinite series of planes with ceaseless sets of self-repeating patterns which vary by location within the set and at different depths (by zooming in). Exploring the Mandelbrot set is like looking through a microscope at an endless universe of patterns that seem simultaneously both natural and otherworldly. (The covers of Spokes books are Mandelbrot-set fractals.)
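Membership in the Mandelbrot set can be sketched in a few lines: a complex point c belongs to the set if repeatedly applying z → z² + c never escapes to infinity. The escape radius of 2 and iteration cap below are the standard conventions; the function is a minimal illustration, not a rendering program.

```python
# Sketch: test whether a complex point c belongs to the Mandelbrot set
# by iterating z -> z**2 + c and checking for escape (|z| > 2).
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:        # once |z| exceeds 2, the orbit escapes to infinity
            return False
    return True               # did not escape within max_iter: treated as a member

# The origin is in the set; c = 1 escapes quickly.
print(in_mandelbrot(0))       # True
print(in_mandelbrot(1))       # False
```

Plotting which points escape, and how quickly, at ever-finer grids of c produces the familiar zoomable images.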

Complex Numbers

“I have not the smallest confidence in any result which is essentially obtained by the use of imaginary symbols.” ~ 19th century English mathematician George Airy

Early on, imaginary numbers were the unexpected product of taking the square root of a negative real number. Cardano was first to combine imaginary and real numbers, creating complex numbers, which he expressed in the form a + bi, where a and b are real, and i is the imaginary unit (√–1).

Complex numbers form a plane. On one axis are real numbers, the other imaginary.

The man who tamed complex numbers geometrically was not a mathematician, but a surveyor: Caspar Wessel, a Norwegian. His brilliant paper on the subject was published in Danish in 1799. It made no impact in the mathematical world until rediscovered nearly a century later, in 1895. By that time others had trod the same path.

Wessel’s insight, which looks obvious when understood (hindsight bias), was to imagine complex number points within the Cartesian coordinate system, with real and imaginary number axes, as shown above.

Next, consider a point (a + bi) as being at some length (l) from the origin (0,0), and at some angle (φ) from the real number axis. This puts a complex number point in polar form.

Using the polar coordinate system, each point can be considered as a distance (l) from a fixed point (typically the origin (0,0)) at an angle (φ) from a fixed direction (e.g., the horizontal (real) axis). Hence, the point a + bi can be stated as (l, φ).

Using this scheme, mathematical operations on complex numbers are greatly simplified. Multiplying by √–1 is, geometrically, simply a 90° rotation counterclockwise.
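Wessel’s scheme can be checked directly with Python’s built-in complex numbers: a point is converted to polar form (l, φ), and multiplying by i rotates it 90° counterclockwise. The point 3 + 4i here is an arbitrary example.

```python
import cmath

# A complex point a + bi in rectangular form...
p = 3 + 4j

# ...converted to polar form (l, phi): distance from the origin and
# angle from the real axis, as in Wessel's scheme.
l, phi = cmath.polar(p)
print(l)                      # 5.0  (the hypotenuse of a 3-4-5 triangle)

# Multiplying by i = sqrt(-1) rotates the point 90 degrees counterclockwise:
q = p * 1j
print(q)                      # (-4+3j)
```

Note that the rotated point (–4, 3) is the original point (3, 4) turned a quarter-circle about the origin, with its distance l unchanged.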

Nowadays, complex numbers have essential employment in a variety of scientific and engineering fields, including physics, fluid dynamics, electromagnetism, and signal processing. They are also used to model Nature: complex numbers employed to fathom the ultimate complexity.


“All finite things reveal infinitude.” ~ American poet Theodore Roethke

Infinity is an abstraction of something without limit. In mathematics, infinity is often incorrectly treated as if it were a number rather than an open possibility. In physics, infinity is abhorred: treated as a malady that rots any model which produces it. (Modern physicists’ attitude about infinity is aptly analogous to that of the ancient Greeks toward zero: stubborn ignorance.) With nothing else to go on, physicists patch their ersatz models when infinities arise; the Standard Model of quantum physics is exemplary.

Infinity rears up numerically when series run amok, going on and on. In geometry, infinity presents a paradox.

The earliest record of infinity was in the mid-6th century BCE by Greek philosopher Anaximander. Its first mathematical conception came a century later from Greek philosopher Zeno of Elea.

The ~3rd-century-BCE Indian mathematical text Süryaprajñapti classified all numbers into 3 types: enumerable, innumerable, and infinite. Innumerable numbers were countless, while infinite numbers were endless.

John Wallis first used the symbol ∞ for infinity in 1655. He was particularly interested in the infinitesimal, 1/∞, with which calculus – through the concept of limits – is concerned. The foundations of calculus were laid in the 17th century.


“If chance is defined as an event produced by random motion without any causal nexus, I would say there is no such thing as chance.” ~ Roman philosopher Boethius

The term probability is used in 2 distinct ways: 1) as an intrinsic property of a system; and 2) as a gauge of belief. Mathematically, probability is a measure of how likely a certain event is to happen.

Though biases readily enter into assessing odds, primates have an innate sense of probability. 6-month-old babies can estimate probabilities. This faculty provides an intuitive basis for belief.

“Probability exerts a peculiar fascination even over persons who care nothing for mathematics. It is rich in philosophical interest and of the highest scientific importance. But it is also baffling.” ~ American mathematician James Newman

Man’s urge toward games of chance raised an interest in probability millennia ago, but the fundamental issues were long obscured by superstitions.

In the 16th century Cardano demonstrated the efficacy of defining odds as a ratio of favorable to unfavorable outcomes. But it was not until 1657 that Dutch mathematician and scientist Christiaan Huygens penned the first scientific treatment of the topic.


Dice were invented innumerable times tens of thousands of years ago. They often turn up in archeological digs.

Roman dice dating from 2,000 years ago were typically asymmetrical – some 90% of them – and would not roll randomly. The Romans knew how to make symmetrical dice but chose not to. Perhaps they thought other forces were at work in determining the outcome of a roll – dice as tumbling Ouija boards.

Only from the mid-15th century were dice typically symmetrical, as gamblers began demanding steady odds.

The arrangement of numbers on dice evolved as well. There are a variety of ways to arrange the numbers 1 to 6 on a die. Several were seen in Roman-era dice.

Between 1250 and 1450, a single arrangement was dominant: 1 opposite 2, 3 opposite 4, and 5 opposite 6. This configuration is called “primes” because opposite faces sum to prime numbers (3, 7, and 11).

Then die denominations suddenly changed. Primes were replaced by the modern “sevens” configuration: 1 opposite 6, 2 opposite 5, and 3 opposite 4, where opposite faces sum to 7.

A standard arrangement makes it easy to check that a die is authentic. Primes and sevens are readily verified.

Primes might have become unpopular because the configuration was perceived as unbalanced, whereas sevens has a symmetry in that the opposite faces all add to 7; symmetry as a proxy for fairness.
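The two historical configurations are easy to verify arithmetically: the opposite-face sums of the “primes” layout are 3, 7, and 11 (all prime), while those of the “sevens” layout all equal 7.

```python
# Check the two historical die configurations: "primes" (opposite faces
# sum to primes 3, 7, 11) and the modern "sevens" (all pairs sum to 7).
def is_prime(n: int) -> bool:
    return n > 1 and all(n % d for d in range(2, n))

primes_config = [(1, 2), (3, 4), (5, 6)]
sevens_config = [(1, 6), (2, 5), (3, 4)]

print([a + b for a, b in primes_config])               # [3, 7, 11]
print(all(is_prime(a + b) for a, b in primes_config))  # True
print(all(a + b == 7 for a, b in sevens_config))       # True
```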

“As dice design became more ‘regular’, successful players were better able to see patterns in play that might have led to early probabilistic thinking.” ~ American mathematician Edward Packel


Business inherently creates a risky environment, so probability finds full employment among capitalists, especially in the financial sector, where randomness regularly reigns. Reliability theory, an offshoot of probability, is used in product design to regulate quality.

“Many things happen between the cup and the lip.” ~ English scholar Robert Burton

In a stochastic process a system moves from one state to another with a goodly degree of randomness. Many systems in the natural world seem stochastic. Stochastic processes are typically memoryless: the next state emerges from the current one owing no influence to what had happened before.

Russian mathematician Andrey Markov developed Markov chains in 1906, which provide techniques for modeling stochastic systems, and so for estimating the probabilities of certain events occurring or recurring, and the reliability of such systems. Stochastic systems contrast with deterministic ones, where predictability is relatively high, as history acts as a constraint on future actions. In mathematics, a deterministic system is one that is entirely predictable: the outcome depends utterly upon the initial condition.
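A minimal sketch of a Markov chain: a toy weather model with invented transition probabilities (the 0.8/0.2 and 0.4/0.6 figures are assumptions for illustration). The next state depends only on the current one, so the process is memoryless; simulating many steps estimates how often each state recurs.

```python
import random

# Toy memoryless (Markov) weather model with assumed transition
# probabilities: the next state depends only on the current state.
transitions = {
    "sunny": [("sunny", 0.8), ("rainy", 0.2)],
    "rainy": [("sunny", 0.4), ("rainy", 0.6)],
}

def step(state: str) -> str:
    states, weights = zip(*transitions[state])
    return random.choices(states, weights=weights)[0]

# Simulate a long run to estimate the long-term share of sunny days.
random.seed(1)
state, sunny_days = "sunny", 0
n = 100_000
for _ in range(n):
    state = step(state)
    sunny_days += state == "sunny"
print(round(sunny_days / n, 2))   # near the stationary probability of ~0.67
```

The simulated share settles near 2/3, the chain’s stationary probability of sun, regardless of the starting state – the signature of memorylessness.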

“It’s hard to believe in coincidence, but it’s even harder to believe in anything else.” ~ American author John Green


“There are 3 kinds of lies: lies, damned lies, and statistics.” ~ American author and humorist Mark Twain

Death inspired the science of statistics. King Henry VIII’s fear of the Black Plague probably prompted his publishing reports on deaths, beginning in 1532. Mortuary tables started ~1662 with the work of English demographer John Graunt, who saw patterns in the statistics. This was followed by English polymath Edmond Halley, who published an article on life annuities in 1693, thus providing the mathematical grounding for insurance via actuarial science.

Probability tells the likelihood of an event. A graph may be made by collating sample outcomes. This results in a probability curve. Carl Friedrich Gauss derived an equation for the probability curve and analyzed its properties.

In 1749, German philosopher Gottfried Achenwall quantitatively characterized government statistical data and coined the term statistic. It had been called political arithmetic in England.

A bell-curve figure shows an idealized symmetrical probability curve with a normal distribution: a continuous probability spread. The mean – which in a normal distribution coincides with the median and mode – sits at its apex: the peak number of occurrences, and therefore the most likely point (expected value).

Events to the left and right of the apex are less likely; they represent, respectively, less and more of whatever is being measured than the median value.

“A vast body of statistical theory and methods presupposes a normal distribution.” ~ American statistician Leroy Folks

The central limit theorem states that the sum (or average) of a large number of independent random events tends toward a normal distribution. Logically it is a flagrant fallacy to blithely apply to specific instances in the present what has been generally found in the past. Nevertheless, assumed continuity is a central tenet of statistics.
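The theorem can be watched in action with a simulation: averages of many independent die rolls cluster around the expected value 3.5, with a spread that shrinks as the sample grows. The sample sizes below are arbitrary choices for illustration.

```python
import random
import statistics

# Central limit theorem sketch: averages of 50 independent die rolls
# cluster around the expected value 3.5, roughly normally.
random.seed(0)
sample_means = [
    statistics.mean(random.randint(1, 6) for _ in range(50))
    for _ in range(10_000)
]
print(round(statistics.mean(sample_means), 1))   # about 3.5
# The spread of the averages is about sigma / sqrt(n): 1.71 / sqrt(50) ~ 0.24.
print(round(statistics.stdev(sample_means), 2))  # near 0.24
```

A histogram of `sample_means` would trace out a bell curve, even though a single die roll is anything but normally distributed.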

In charting statistics, such as a mortuary table, the shape of a probability curve may be skewed. In the exemplary curve, illustrating age of death, demise is spread over a greater range before reaching the midpoint, after which folks start dropping like flies.

“The standard deviation is a reliable measure of dispersion.” ~ American mathematician Thomas Pirnot

The standard deviation is the amount of variation from the mean. Another view, from a predictive context, is that the standard deviation represents a measure of errors from the expected norm. A low standard deviation betokens data points clustered close to the central tendency (mean). A high standard deviation indicates a greater dispersion.
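Computed directly, the standard deviation is the square root of the average squared deviation from the mean (the population form, dividing by n, is used here; the data are an arbitrary example).

```python
import math

# Standard deviation as dispersion about the mean, computed directly
# (population form: divide by n).
data = [2, 4, 4, 4, 5, 5, 7, 9]
mean = sum(data) / len(data)                      # 5.0
variance = sum((x - mean) ** 2 for x in data) / len(data)
std_dev = math.sqrt(variance)
print(std_dev)                                    # 2.0
```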

Descriptive statistics is the discipline of quantifying a sample, which is a set of collated data. In contrast, inferential statistics draws predictive conclusions from statistical information about the population that is sampled.

Inferential statistics is used to test hypotheses, and to forecast via sample data. As such, statistical inference from a random sample carries a confidence interval: a range of values indicating the uncertainty surrounding an estimate of a population characteristic, whether concerning data not sampled or data yet to come. Confidence intervals represent how “good” an estimate is.

Confidence intervals are frequently misunderstood, even by scientists who should know better. A confidence interval expresses a statistical sense of reliability in the estimation procedure, not the likelihood that the sought-after population parameter lies within the particular interval obtained (that is, the probability that the interval covers the population parameter).

Confidence intervals are given in the form of estimate ± margin of error, where estimate is the measure (the center of the interval) of the unknown population parameter being surveyed for, and margin of error is the potentiality of the estimate being erroneous.

Attached to every confidence interval is a confidence level, which is the probability (%) indicating how much certainty should be attributed to a confidence interval. In general, the higher the confidence level, the wider the confidence interval.

The confidence level is not a statement about the population or sampling procedure. The confidence level is instead an indication of the success in constructing the confidence interval. For example, confidence intervals with a confidence level of 80%, will, over the long run (repetitiously), miss the true population parameter 1 out of every 5 times.
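That long-run reading can be simulated: construct an 80% confidence interval for a mean over and over, and count how often it misses the true value. The sample sizes, the standard-normal population, and the z-score of 1.28 are illustrative assumptions.

```python
import random
import statistics

# Sketch: build an 80% confidence interval for a mean many times over,
# and count how often it misses the true population mean (~1 in 5).
random.seed(2)
true_mean, z80 = 0.0, 1.28        # z-score for a two-sided 80% interval
misses, trials, n = 0, 2000, 100
for _ in range(trials):
    sample = [random.gauss(true_mean, 1) for _ in range(n)]
    est = statistics.mean(sample)
    margin = z80 * statistics.stdev(sample) / n ** 0.5
    if not (est - margin <= true_mean <= est + margin):
        misses += 1
print(round(misses / trials, 2))  # close to 0.20
```

The miss rate hovers near 20%, as the confidence level promises over repeated sampling; no single interval carries that probability by itself.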

Confidence intervals were introduced into statistics by Polish mathematician and statistician Jerzy Neyman in 1937.

“Although the field of statistics is rooted in mathematics, and mathematics is exact, the use of statistics to describe complex phenomena is not exact.” ~ American economist Charles Wheelan

◊ ◊ ◊

“Children are taught the mathematics of certainty: algebra, trigonometry, geometry and the like. That’s beautiful but often useless. We should be taught uncertainty.” ~ German psychologist Gerd Gigerenzer

Statistics is difficult for many because it is an abstruse abstraction. The mind is naturally inclined to think in terms of outcomes having causes.

Probability is relatively easy because it is in the realm of causality: how likely an event is. Even babies detect patterns that form the base of probability.

“Correlation doesn’t tell you anything about causation, but it’s a mistake even researchers make.” ~ American statistician and psychologist Rand Wilcox

Statistics is essentially acausal: a paradigmatic shift that has no everyday application. The very concept of statistics is a significant step away from how the mind appreciates the world.

“There’s order in the form of correlations.” ~ English physicist David Jennings

The closest statistics comes to everyday application is averaging. The mind can rather readily suss the average of a tangible set of something (e.g., how big the average orange is of 4 sitting there). But how representative a sample is of a larger, unseen population, seriously stretches the imagination; and how confident one should be about the randomness or representativeness of such a sample is a mathematical chimera.

The difficulty is with distributions, which are crucial to understanding statistical reasoning. The idea that individual phenomena can be independently distributed is comprehensible, but that the “distribution” of a large collection of random events can be mathematically characterized with regularity is at best a mystery wrapped in an enigma.

The notion of independence within distributions is at the root of the problem. People expect samples, even small ones, to be representative. Consequently, reconciling independence with an abstract distribution is a mental challenge.

A common misunderstanding of independence and distribution is exemplified by a statement about coin tossing: “if 10 heads have been thrown in a row, the next few tosses have to be tails for the results to represent the distribution.” (There is no population distribution of coin tosses to be represented, as each toss is independent. The confusion is with the probability of a single toss, which for a fair coin is 0.5 regardless of what came before.)
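Independence is easy to demonstrate by simulation: collect only those runs in which a streak of heads occurred, and check the very next toss. Past tosses exert no influence. (A streak of 5 is used rather than 10 simply to keep the simulation quick.)

```python
import random

# Independence sketch: after 5 simulated heads in a row, the next toss
# is still heads about half the time - past tosses exert no influence.
random.seed(3)
follow_ups = []
while len(follow_ups) < 2000:
    tosses = [random.random() < 0.5 for _ in range(6)]
    if all(tosses[:5]):                   # found 5 heads in a row
        follow_ups.append(tosses[5])      # record the toss that follows
print(round(sum(follow_ups) / len(follow_ups), 2))   # near 0.5
```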

Data Quality

“Statistics begins with data.” ~ David Hand

Statistics aims at a specific, quantified characterization of a population. The data going into developing a statistical analysis is almost always a sampling of a population rather than the entire population. Hence, statistics are almost always a rough sketch, not a complete picture, and so inherently are of uncertain quality, though they often serve as good approximations, which is the best a fact can ever be.

“Garbage in, garbage out.” ~ first used in a newspaper article about the US Internal Revenue Service computerizing their data (1 April 1963)

The quality of a sample determines the quality of the statistics associated with it. For statistics to be decent, the sample upon which they are based must be representative of the population being examined.

A population is the entire set of objects about which information is wanted. A parameter is a quantitative characteristic of a population; it is a fixed and mysterious number. The goal of a statistical exercise is gaining insight into one or more parameters.

In statistics, characteristics are commonly called variables, with each object having a value of a variable.

◊ ◊ ◊

“Raw data, like raw potatoes, usually require cleaning before use.” ~ American statistician Ronald Thisted

Data provides a window to the world. The problem is getting a clear view. That requires good data and unbiased examination.

A sample is data about a subset of the target population. A statistic is a numeric characteristic of a sample.

There are 2 basic types of statistical studies: observational and experimental. In observational situations, data is captured without interference in the process. In contrast, experimental studies consist of manipulating the objects measured. The quality of experimental data is directly related to the design of the experiment.

A sampling frame is the source material or method by which a sample is selected. Sampling frames must be designed to collect representative data, and, once amassed, cleaned as necessary to reflect that goal. Sample size is a critical aspect of data quality.

The law of large numbers is a theorem relating to sample quality. The theorem states that the average result should come closer to the expected value with larger sample size, or greater number of repetitions in experimental results.
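The law of large numbers can be seen directly by averaging fair-die rolls at increasing sample sizes (the sizes below are arbitrary): the average drifts toward the expected value of 3.5 as the sample grows.

```python
import random

# Law of large numbers sketch: the average of fair-die rolls
# approaches the expected value 3.5 as the sample size grows.
random.seed(4)
for n in (10, 1_000, 100_000):
    avg = sum(random.randint(1, 6) for _ in range(n)) / n
    print(n, round(avg, 3))   # the average homes in on 3.5
```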

The term random sample is used to describe the technique of randomly picking sample objects for examination. The happy thought and fond hope is that random selection will result in population representativeness. Many times, sampling, though intended as random, is no such thing. This is because certain members of a population are more accessible than others, and so more likely to be chosen.

Market research long used landline phones to survey consumers. The problems of obtaining a representative sample, once mainly limited to the demographics of geography and income/wealth, have been compounded in recent decades by the facts that many people now exclusively use cell phones, and that phone books are no longer the population compendium they once were.

Data is evidence. In scientific experiments phenomena are characterized via data. Data quality is a problem in every sort of analysis.

The larger the data set, the more hands involved in its compilation, and the more processing stages involved, the more likely errors creep in. The law of large numbers may be a mirage.

“Too many cooks spoil the broth.” ~ proverb

 Cosmic Inflation

In 2013, a sizable group of astrophysicists aiming to prove the false hypothesis of cosmic inflation collated a crock of inputs, ran the prodigious set of numbers they had amassed, then proudly proclaimed that they had found a particular pattern of gravitational waves in the early universe that buttressed the conjecture they wanted to believe in. More than a few well-respected academics ate the bait.

“These results are a smoking gun for inflation.” ~ Israeli American astrophysicist Avi Loeb, chair of the Harvard astronomy department

An ensuing review of what had been done revealed shoddy work, beginning with dirty data, and ending with the fact that, even if the data had been any good, the result would have been unsupportive of the hypothesis that cosmic inflation occurred.

The team had used preliminary, uncleansed data from another project. Their own data were marred by not accounting for interstellar dust.

The people involved did not have the dignity to retract their work, even as it was thoroughly discredited. The gross degree of slipshod statistics is somewhat atypical, and unusual in being accompanied by such stellar publicity. But the astrophysicists involved were well-respected, and the evidence sorely wanted, as the standard cosmological model is blatantly bogus if cosmic inflation is nonsense. (Cosmic inflation has been thoroughly discredited, as has the standard cosmological model; but both are still believed in by the ruling cult of religious astrophysicists.)


Data may be incomplete or incorrect. Selection bias often occurs, and can be a killer, literally.

 NASA’s Challenger

The US space shuttle Challenger spectacularly blew up 73 seconds after launch in 1986, killing everyone on board. The cause was the failure of the O-rings which sealed segments of the launch rockets; a result of cold weather.

The reason that O-rings even existed on the rockets was that the boosters were made thousands of miles away, in Utah, and assembled on site – an abysmal engineering practice. The reason they were not built near the launch site was entirely political: a decision to spread NASA money around to different congressional districts.

The night before the launch, a meeting was held to discuss whether to proceed, since the forecast temperature was exceptionally low for central Florida.

Data were produced showing no apparent relationship between air temperature and booster rocket segment seals. But the sample set was incomplete. It failed to include all launches involving no damage – launches which were mostly made at higher temperatures.

This unfortunate omission was the statistical root of the wrong decision, as a plot of all data showed a clear correlation between low temperature and O-ring damage.


Incorrect data is common for a variety of reasons, beginning with reading instruments or recording values wrongly. Data errors sometimes arise over units of measurement.

 NASA’s Climate Orbiter

In 1999 the NASA Climate Orbiter Mars probe failed to enter the planet’s atmosphere at the correct angle, resulting in its disintegration in the upper atmosphere. There was confusion about the unit used for impulse measurement: whether in pound-seconds or newton-seconds. Different software systems were calibrated to different units.

The discrepancy between the actual and calculated positions caused an error in orbit insertion altitude. 2 navigators had previously pointed the problem out, but their concerns were dismissed; so, $655 million and hundreds of man-years of effort were wasted.
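The magnitude of such a unit mismatch is easy to see: a figure emitted in pound(-force)-seconds but consumed as newton-seconds is off by a factor of about 4.45. The sketch below uses a made-up impulse value purely for illustration.

```python
# Sketch of a Climate-Orbiter-style unit mismatch: one system reports
# impulse in pound(-force)-seconds; another consumes it as newton-seconds.
LBF_TO_NEWTON = 4.448222            # 1 pound-force in newtons

impulse_reported = 100.0            # lbf-s, emitted by one software system
impulse_assumed = impulse_reported  # read as N-s: no conversion applied!

correct = impulse_reported * LBF_TO_NEWTON
error_factor = correct / impulse_assumed
print(round(error_factor, 2))       # 4.45 - every thrust figure off ~4.4x
```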

“People sometimes make errors.” ~ NASA space administrator Edward Weiler on the Climate Orbiter catastrophe


“Data allow us to steer our way through a complex world – to make decisions about the best actions to take. We take our measurements, count our totals, and we use statistical methods to extract information from these data to describe how the world is behaving and what we should do to make it behave how we want.” ~ David Hand

Statistics in Science

“There are large numbers of experts – not just laypeople – who have no training in statistical thinking.” ~ Gerd Gigerenzer

The bane of empirical science is uncertainty. For a modern scientist, a pattern of anecdotes may provide fodder for a hypothesis but is far too flimsy a foundation to float a theory. So scientists invariably rely upon statistics to muster support. Therein damning problems lurk.

“Statistics per se is acausal.” ~ Canadian econometrician James Ramsey

The overriding issue in employing statistics is confusing correlation with causality. The best any statistical result can show is a coincidence between 2 factors. Statistics can never prove one event causes another.

“The shift from disciplines with an all-pervading causal interpretation to one that is inherently acausal represents a major fundamental shift in viewpoint, and one that cannot merely be dismissed as an alternative ‘explanation’.” ~ James Ramsey

For statistics to bolster any claim the sample size must be large and the result unambiguous. Excepting physics and chemistry – where experimental reproducibility is relatively easily had – both criteria are rarely met. Failure to compile unassailable data of sufficient sample size is particularly true in the life sciences, notably the medical field.

Then there is the scale of potential effect. Whereas large effects may rather readily be determined, and therefore the use of statistics rather superfluous, small effects are tough to suss.

That smoking cigarettes hurts health – damaging lungs, causing cardiovascular disease and cancer – is practically a no-brainer. Statistically, it helps greatly that there are a lot of smokers about. But the health effects of eating something considered food are so difficult to discern that the statistics of all such studies are worthless.

A common technique is meta-analysis: aggregate a large number of studies, none of which individually may be considered conclusive, or even worth a damn, then conclude that altogether a causal conclusion may be drawn. However appealing the rationale, the technique is bogus from a statistical standpoint; yet it remains a popular ruse.

“Very few areas of science are uncontaminated by the pseudo-certainty of statistical conclusions describing individually uncertain events.” ~ American biomechanist Steven Vogel

 Eating Meat Causes Cancer

In October 2015, 22 scientists from 10 countries convened under the auspices of the World Health Organization (WHO) “to evaluate the carcinogenicity of the consumption of red meat and processed meat.”

The group assessed over 800 epidemiological studies on the topic, weighing the value of each study based on qualitative criteria that bore no relation to statistical validity. For example, “the studies judged to be most informative were those that considered red meat and processed meat separately.”

Conclusions were as follows (emphases added).

The mechanistic evidence for carcinogenicity was assessed as strong for red meat and moderate for processed meat. Mechanistic evidence is mainly available for the digestive tract.

The so-called “mechanistic evidence” involved chemical analysis of greater or fewer specific molecular byproducts in urine after consumption. Such results bear no statistical relation to cancer. Moreover, “strong” and “moderate” are qualitative interpretations of causality from an acausal technique. In other words, there were no statistics supporting the stated conclusions.

Then, in apparent self-contradiction, the group decided that the overall evidence was more damning for processed meat than red meat.

Overall, the group classified consumption of processed meat as “carcinogenic to humans” on the basis of sufficient evidence for colorectal cancer. Additionally, a positive association with the consumption of processed meat was found for stomach cancer.

The group concluded that there is limited evidence in human beings for the carcinogenicity of the consumption of red meat. Chance, bias, and confounding could not be ruled out with the same degree of confidence for the data on red meat consumption, since no clear association was seen in several of the high-quality studies and residual confounding from other diet and lifestyle risk is difficult to exclude. The group classified consumption of red meat as “probably carcinogenic to humans.”

Again, with statistical evidence being cast as “sufficient” or “limited,” associations being “positive,” and a conclusion of “probably,” where (causal) probability plays no part in (acausal) statistical correlation.

As to causing cancer from eating red meat and processed meat: well, the scientists just couldn’t say.

There is inadequate evidence for the carcinogenicity of consumption of red meat and of processed meat.

Because the public supposedly possesses even less understanding than scientists who themselves misuse statistics, the WHO scientists felt compelled to interpret their unfounded findings in meritless terms that nevertheless imply credibility. Statistically speaking, there is sufficient evidence to conclude that the blind are misleading the blind.

WHO’s evaluation of meat-eating causing cancer is exemplary of the common misapplication of statistics to draw unfounded conclusions. You can read such ilk regularly in health magazines and on the health pages of newspapers, as well as in science journals.

“Bacon, ham and sausages rank alongside cigarettes as a major cause of cancer, the World Health Organisation has said, placing cured and processed meats in the same category as asbestos, alcohol, arsenic and tobacco.” ~ English health writer Sarah Boseley in The Guardian newspaper

◊ ◊ ◊

There should be no doubt that eating animal flesh is detrimental to health. To prove it to yourself, eat nothing but fresh fruit, vegetables, and small portions of nuts and grains (seeds), for 3 weeks, and see how you feel afterwards. Then eat nothing but red meat, processed or not, for 3 weeks and compare with how you felt before. Damn statistics – there’s proof.


The more flexible a scientific field is in its definitions, experimental designs, analytic modes, and outcomes, the less likely that research conclusions are reliable. Biology, psychology, sociology, and economics are exemplary fields where empirical problems loom large.

Getting good data is the 1st hurdle, and is where many studies falter, often without the acknowledgement of those involved. The 2013 study aimed at cosmic inflation is an egregious example – caught only because the study was heavily scrutinized (which is itself quite unusual), and the flaws in data aggregation method so obvious.

Once amassed, data is then subject to statistical interpretation. If a study has not already been invalidated for lack of decent data, it is in this step that results readily go awry.

Even when performed properly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless scientific conclusions that make the news are erroneous.

“A lot of scientists don’t understand statistics. And they don’t understand statistics because the statistics don’t make sense.” ~ American epidemiologist Steven Goodman

 Probability Value

In the 1920s and 1930s, English statistician and biologist Ronald Fisher mathematically combined Mendelian genetics with Darwin’s hypothesis of natural selection, creating what became known as the modern evolutionary synthesis, thereby establishing evolution as biology’s primary paradigm. Fisher’s work revolutionized experimental design and the use of statistical inference.

In his approach, Fisher expressly wanted to avoid the subjectivity involved in Bayesian inference, which became popular in the 1980s and remains so. Bayes’ theorem is now badly abused in science, medicine, and law to conclude causation when this shaggy approach at best suggests conditional plausibility when critical data is missing.

Fisher statistically assessed significance using a probability value (p-value): the probability of obtaining results at least as extreme as those observed, assuming the hypothesis under test is true.

“The problem is that the p-value by itself is not of particular interest. What scientists want is a measure of the credibility of their conclusions, based on observed data. The p-value neither measures that nor is it part of the formula that provides it.” ~ Steven Goodman

Scientists now use p-value as a backhanded way of determining whether their data and attendant conclusions are valid. This is a fundamental misconception.

“This pernicious error creates the illusion that the p-value alone measures the credibility of a conclusion, which opens the door to the mistaken notion that the dividing line between scientifically justified and unjustified claims is set by whether the p-value has crossed a “bright line” of significance, to the exclusion of external considerations like prior evidence, understanding of mechanism, or experimental design and conduct.” ~ Steven Goodman

“Random variation alone can easily lead to large disparities in p-values.” ~ Swiss zoologist Valentin Amrhein et al
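This is easy to demonstrate by simulation. The sketch below, with hypothetical numbers, replicates the identical experiment 20 times and watches the p-values scatter:

```python
import random
from math import comb

random.seed(42)  # fixed seed for reproducibility

def binom_p(k, n):
    # One-sided p-value: P(X >= k) under the null of a fair coin
    return sum(comb(n, i) * 0.5**n for i in range(k, n + 1))

# Replicate the identical experiment 20 times:
# 100 tosses of a coin with a genuine 58% bias toward heads
p_values = []
for _ in range(20):
    heads = sum(random.random() < 0.58 for _ in range(100))
    p_values.append(binom_p(heads, 100))

print(f"p-values range from {min(p_values):.4f} to {max(p_values):.4f}")
```

Though every replication samples the very same biased coin, the p-values typically spread across an order of magnitude or more, with some falling below the 0.05 line and others well above it, purely from sampling noise.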

Fisher used “significance” only to suggest whether an observation was worth following up on.

“This is in stark contrast to the modern practice of making claims based on a single documentation of statistical significance.” ~ Steven Goodman

“p-values used in the conventional, dichotomous way decide whether a result refutes or supports a scientific hypothesis. Bucketing results into ‘statistically significant’ and ‘statistically non-significant’ makes people think that the items assigned in that way are categorically different. The false belief that crossing the threshold of statistical significance is enough to show that a result is ‘real’ has led scientists and journal editors to privilege such results, thereby distorting the literature. Statistically significant estimates are biased upwards, whereas statistically non-significant estimates are biased downwards. Consequently, any discussion that focuses on estimates chosen for their significance will be biased. On top of this, the rigid focus on statistical significance encourages researchers to choose data and methods that yield statistical significance for some desired (or simply publishable) result, or that yield statistical non-significance for an undesired result, such as potential side effects of drugs – thereby invalidating conclusions.” ~ Valentin Amrhein et al
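The upward bias of significant estimates can be seen in a toy Monte Carlo simulation, a sketch with hypothetical numbers:

```python
import random

random.seed(1)  # fixed seed for reproducibility

TRUE_EFFECT = 0.3   # hypothetical true effect size, in standard deviations
N = 20              # small sample per study
STUDIES = 2000

significant = []
for _ in range(STUDIES):
    # Each study estimates the effect from N noisy observations
    estimate = sum(random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)) / N
    z = estimate * N ** 0.5   # z-statistic for the mean of N unit-variance draws
    if z > 1.96:              # crosses the conventional significance line
        significant.append(estimate)

avg_sig = sum(significant) / len(significant)
print(f"true effect {TRUE_EFFECT}, average 'significant' estimate {avg_sig:.2f}")
```

Only estimates that happen to overshoot the threshold get labeled significant, so averaging them necessarily overstates the true effect.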


“Claimed research findings may often be simply accurate measures of the prevailing bias.” ~ American epidemiologist John Ioannidis

As with all endeavors involving pecuniary interest: to find the fraud, follow the money. Corporate-funded scientific research is inherently untrustworthy for this reason. People are paid to find a desired result.

Further, the hotter a scientific subject is, the less likely that research findings are reliable: more teams are involved, and de facto under implicit competitive pressure to produce results. But the converse also presents the same problem.

“The smaller the number of studies conducted in a scientific field, the less likely the research findings are to be true.” ~ John Ioannidis

Researchers in a noncompetitive field need to produce noteworthy results to have any hope of continuing their work. The temptation for a little fiddling to sustain one’s livelihood is strong.

◊ ◊ ◊

Statistical inference is a nuanced mathematical art. It establishes correlation, which is widely misused as a yardstick of causality. Further, misunderstanding statistical concepts has meant falsely ascribing or denying significance: conclusions based on experiment design and measurement without proper accounting of actuality. As ubiquitously exercised, the employment of statistics in science is mostly delusion.

“Most claimed research findings are false.” ~ John Ioannidis


“Nothing takes place in the world whose meaning is not that of some maximum or minimum.” ~ Leonhard Euler

Calculus is the mathematical study of change. It originated with the concept of infinity: the treatment of limits, which leads to infinitely small quantities.

The fields of calculus are usually described as bifurcating into differential calculus and integral calculus. More generally, calculus refers to any calculation system that symbolically manipulates expressions. An expression is any finite set of symbols obeying contextual rules.

Differential calculus addresses rates of change. Integral calculus concerns accumulation of quantities. Whereas integrals pile quantities up, differentials examine how quickly they change. Differentiation and integration are inverse processes: a relation known as the fundamental theorem of calculus.

Both differential and integral calculus are concerned with infinitesimals: something so small as to be at the edge of measurability – an infinity of the tiny.

The sine qua non of differential calculus is the derivative, which measures how sensitively a dependent variable responds to change in an independent variable. The 1st derivative measures the instantaneous rate of change, while the 2nd derivative measures the rate at which the rate of change (the 1st derivative) is itself changing. The 2nd derivative commonly measures acceleration, as in physical motion, or convexity, as in financial markets.
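Both derivatives can be approximated numerically by finite differences, a sketch of the idea that the infinitesimal is the limit of ever-smaller steps:

```python
def f(x):
    return x ** 3

h = 1e-4  # a small step standing in for the infinitesimal

def d1(g, x):
    # central difference: instantaneous rate of change (1st derivative)
    return (g(x + h) - g(x - h)) / (2 * h)

def d2(g, x):
    # 2nd derivative: how quickly the rate of change itself changes
    return (g(x + h) - 2 * g(x) + g(x - h)) / h ** 2

print(d1(f, 2.0))  # ≈ 12, since f'(x) = 3x² and 3·4 = 12
print(d2(f, 2.0))  # ≈ 12, since f''(x) = 6x and 6·2 = 12
```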

Integral calculus is useful for computing distance traveled from velocity, the volumes of oddly shaped objects, and anything else involving accumulation.
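Accumulation can likewise be approximated by summing thin slices, as in this trapezoidal sketch recovering distance from velocity:

```python
def trapezoid(g, a, b, steps=1000):
    # Accumulate the area under g from a to b in thin trapezoidal slices
    h = (b - a) / steps
    total = (g(a) + g(b)) / 2
    for i in range(1, steps):
        total += g(a + i * h)
    return total * h

# Velocity v(t) = 3t²; distance over t in [0, 2] is the integral t³ = 8
distance = trapezoid(lambda t: 3 * t ** 2, 0.0, 2.0)
print(distance)  # ≈ 8.0
```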

Calculus is used in physics, biology, engineering, economics, and medicine – anywhere that measuring change is important.