The Fruits of Civilization – Computers

Computers

We are looking at a society increasingly dependent on machines, yet decreasingly capable of making or even using them effectively. ~ American technologist Douglas Rushkoff

Computing consists of inputting, processing, storing, and outputting data (information). A computer is a machine for computing.

Abetted by the miniaturization of components, the generality of data processing shows why computers have become so significant in modern societies. Mental life is, after all, a ceaseless process of consuming information.

Computers today process everything that can be digitized: from numbers to sounds to images to data of every imaginable kind. Any wave of energy that can be captured by a sensing device is subject to computing.

Computers represent the technological fruition of human abstraction. Today, no society is considered “advanced” without ubiquitous computer usage. Addictive handheld devices that plug one into the networked world have become the norm.

Computers are useless. They can only give you answers. ~ Spanish painter Pablo Picasso

Computer Components

Before delving into the history of computers, it helps to be familiar with the components of modern ones.

A computer consists of hardware and software. Hardware is the part of the computer that can be kicked. Software is the part of the computer that lies behind the screen that deserves the kicking it can never get.

Software is code that does data processing of every sort. Code comprises programmed instructions that a computer’s processor understands. Firmware is software that interfaces to hardware devices.

The overarching software that provides the platform for application programs is an operating system (OS). An OS manages all basic computer operations, including peripheral-device firmware and file storage. Modern operating systems also provide common services for application programs, which are the software widgets that users fiddle with.

The hardware heart of every computer is a central processing unit (CPU). The CPU is ensconced on a single semiconductor chip called a microprocessor.

The CPU chip is clamped into a socket on a motherboard, which is a printed circuit board (PCB) that is thoroughly etched with connections among various components.

A CPU employs random-access memory (RAM) for storing programs and data. RAM chips are soldered onto PCB sticks which are stuck in slots that connect the memory to the motherboard.

RAM remembers only when power is on. The contents of this memory are lost when the power is switched off.

Memory is measured in bytes. A byte is 8 binary bits, and so can hold any of 256 values (0–255).

A kilobyte (properly kB; typically, kb or KB, or simply k) is 2¹⁰ (1,024) bytes; nowadays used loosely for 1,000. A megabyte (MB) is 2²⁰ (1,048,576) bytes (= 1,024²).

A gigabyte (GB) is 2³⁰ (1,073,741,824) bytes (= 1,024³). Storage drive manufacturers cheat and call a GB a billion bytes (1,000,000,000).
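
For anyone inclined to check the arithmetic, here is a minimal sketch in C that prints the binary units and the shortfall of a drive maker's decimal gigabyte; the variable names are illustrative only.

```c
#include <stdio.h>

int main(void) {
    unsigned long long kB = 1ULL << 10;   /* 2^10 = 1,024 bytes          */
    unsigned long long MB = 1ULL << 20;   /* 2^20 = 1,024 * 1,024 bytes  */
    unsigned long long GB = 1ULL << 30;   /* 2^30 = 1,024^3 bytes        */

    printf("kB = %llu bytes\n", kB);
    printf("MB = %llu bytes\n", MB);
    printf("GB = %llu bytes\n", GB);

    /* A drive maker's "billion-byte" gigabyte comes up short: */
    printf("shortfall = %llu bytes\n", GB - 1000000000ULL);
    return 0;
}
```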

A computer has lasting storage in the form of drives, the most common being disc drives, which are spinning platters in a case. The platters store data magnetically.

There are also solid-state drives (SSD), which have no moving parts, as an SSD comprises a memory IC (integrated circuit) chip. The most common form of SSD is the thumb drive (aka flash drive or USB drive): a small stick of retentive memory that easily fits into a pocket and crawls into hidden places.

Data skitter around a motherboard on electrical pathways that comprise a communication system called a bus. The bus connects the CPU to RAM and all other components that connect to the motherboard, including external devices, such as a keyboard or USB drives.

USB is the acronym for Universal Serial Bus. USB is a specific protocol for transferring and translating bits of data to and from external devices serially so that they can be understood by the CPU.

The protocol for buses may involve transferring data serially (sequentially, bit by bit) or in parallel (multiple bits: that is, word by word). Serial may seem slower, but serial links can be clocked much faster than parallel ones, and serial transfer is inherently more reliable, because the circuitry for handling serial transmission is simpler.
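
A minimal sketch in C of the serial idea, moving one byte a bit at a time; the single "line" variable stands in for a wire, not any real bus protocol.

```c
#include <stdio.h>

int main(void) {
    unsigned char word = 0x5A;     /* the byte to move */
    unsigned char received = 0;

    /* Serial transfer: one bit per clock tick over a single line,
       least significant bit first. A parallel bus would instead place
       all 8 bits on 8 separate lines in one tick. */
    for (int tick = 0; tick < 8; tick++) {
        int line = (word >> tick) & 1;              /* sender drives the wire   */
        received |= (unsigned char)(line << tick);  /* receiver latches the bit */
    }
    printf("sent 0x%02X, received 0x%02X\n", word, received);
    return 0;
}
```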

Unlike disc drives and SSDs, which can interactively store new data, optical discs, such as CDs (compact discs) and DVDs (digital video discs), typically hold data written just once.

Writing to an optical disc is called burning. Copying from an optical disc is called ripping. The term ripping is especially used when the material on the disc is copyrighted, because it feels that way.

Computer input devices include keyboards and hardware which indicates a location on a bitmapped display, such as mice and trackballs. A finger becomes an input device on touch-sensitive screens.

A computer display is also called a screen or monitor. Bitmapped means that each dot (pixel) on a display is independently accessible. On the early black-and-white displays, a pixel could be mapped to a single bit in memory: 0 for black, 1 for light; hence the term bitmap. On grayscale or color screens multiple bits are needed for each dot. But the term bitmapped is still used.
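
A minimal sketch in C of a 1-bit-per-pixel framebuffer makes the mapping concrete; the dimensions and function names are illustrative, not any particular display's interface.

```c
#include <stdio.h>

#define WIDTH  16
#define HEIGHT 8

/* 8 pixels per byte: each pixel maps to one bit, as on early monochrome displays. */
static unsigned char framebuffer[WIDTH * HEIGHT / 8];

void set_pixel(int x, int y, int lit) {
    int index = y * WIDTH + x;                  /* pixel number, row by row   */
    unsigned char mask = 1u << (index % 8);     /* which bit within the byte  */
    if (lit) framebuffer[index / 8] |=  mask;   /* 1 = light                  */
    else     framebuffer[index / 8] &= ~mask;   /* 0 = black                  */
}

int get_pixel(int x, int y) {
    int index = y * WIDTH + x;
    return (framebuffer[index / 8] >> (index % 8)) & 1;
}

int main(void) {
    set_pixel(3, 2, 1);
    printf("pixel (3,2) = %d, pixel (0,0) = %d\n", get_pixel(3, 2), get_pixel(0, 0));
    return 0;
}
```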

A display is an output device. Printers are another common output device. A scanner is the input analogue of a printer, reading in a page of data at a time.

○○○

A sophisticated conception of computing was realized by 100 BCE with an analog computer designed to predict astronomical positions and eclipses. The Antikythera mechanism was a complex clockwork device comprising at least 30 meshing bronze gears, named after the Greek island where it was found in an ancient shipwreck. Such technology was lost at some point in antiquity, with its equivalent not appearing again until the advent of mechanical astronomical clocks in 14th-century Europe.

Getting to the point of being able to process data in a generalized fashion, as opposed to performing some specific calculation, took most of the history of hardware: nearly 5 millennia. Only in the 1950s, with the advent of stored-program electronic machines, did computers evolve into general data-processing machines. Since then, the advance of computer hardware has largely been a process of miniaturization.

We begin our survey of computing hardware at the time when fingers were the input device, as they have become again on the handheld electronic tablets which are now so ubiquitous.

Computer Hardware

The Sumerians invented the abacus ~2700 BCE. This finger-fueled calculator was the progenitor of an endless array of devices for intellectual work.

In 1614, Scottish mathematician John Napier published his discovery of logarithms, which were intended to simplify calculations: specifically, to reduce multiplication and division – the 2 most difficult arithmetic operations – to addition and subtraction. Though the principle was straightforward, calculating logs with ease required lookup tables.

Logarithms were a lasting contribution to mathematics. But, to his contemporaries, Napier was more celebrated for his bones. Napier’s bones were a set of 4-sided rods that afforded multiplication and division by physical manipulation. They came to be known as bones because the most expensive models were made of ivory. Square roots and cube roots could also be sussed with Napier’s bones.

Napier’s bones were warmly welcomed by the mathematically challenged throughout Europe, which was practically everyone. At the time, even the lower rungs of the multiplication table taxed the well-educated.

English mathematician and Anglican minister William Oughtred invented the slide rule in the mid-17th century, which was used primarily for multiplication and division, though it also functioned for logs, roots, and trigonometry. Welsh clergyman and mathematician Edmund Gunter had developed the logarithmic scales upon which the slide rule was based.

The predecessor to the slide rule was Gunter’s rule: a large engraved plane scale that helped answer navigational and trigonometry questions, aided by a pair of compasses. Gunter also devised a device for calculating logarithmic tangents, something a slide rule could not easily do.

Gunter’s interest in geometry led him to develop a method for surveying using triangulation. To aid in that task, he invented an instrument – Gunter’s quadrant – which let one figure many common problems associated with spheres, such as taking the altitude of an object in degrees and figuring the hour of the day.

I have constructed a machine consisting of 11 complete and 6 incomplete (actually “mutilated”) sprocket wheels which can calculate. You would burst out laughing if you were present to see how it carries by itself from one column of tens to the next or borrows from them during subtraction. ~ Wilhelm Schickard in a 1623 letter to his friend, German mathematician and astronomer Johannes Kepler

German polymath Wilhelm Schickard invented the mechanical calculator in 1623. French mathematician, physicist, engineer, and Christian philosopher Blaise Pascal, who is generally credited with this invention, created his 1st calculator in 1642.

A prodigy, Pascal was a piece of work. His 1st essay on conic sections was so precocious that French mathematician and philosopher René Descartes was dismayed with disbelief that such mathematical matters “would occur to a 16-year-old child.”

A high-strung perfectionist, Pascal labored on his calculator prototype for 3 years, experimenting with different designs, materials, and components. The resultant Pascaline was conceptually more ambitious than Schickard’s contraption, but the German’s mechanism worked perfectly, whereas Pascal’s Pascaline was problematic.

Besides producing several seminal mathematical treatises in his 20s, Pascal demonstrated the existence of atmospheric pressure and vacuums. In his 30s he invented the syringe and hydraulic press, and enunciated the basic principle of hydraulics, now known as Pascal’s principle.

Along with French mathematician Pierre de Fermat, Pascal laid the foundations of probability. Pascal’s project began as a favor for a card-playing nobleman who wanted to know the odds of a draw.

Pascal was also interested in worldly affairs. Shortly before he died at 39, in great pain from ulcers and stomach cancer, Pascal and other farsighted Parisians established one of the earliest public transport systems in Europe: a bus line in central Paris.

At 32, Pascal entered a Jansenist convent outside Paris. Jansenism was a mostly French Catholic theological movement that emphasized original sin, human depravity, predestination, and the necessity of divine grace.

Pascal’s extreme religiosity was fueled by his repressed homosexuality and agonizingly poor health. He flagellated himself for more than his share of sins.

At the behest of his Jansenist order, Pascal generally abstained from scientific pursuits, devoting himself to castigating atheists and Jesuits in religious tracts that were considered masterpieces of prose. Thus, Pascal positioned himself as one of science’s greatest might-have-beens.

The 3rd 17th-century calculator was concocted by German mathematician and philosopher Gottfried von Leibniz, who was one of the first Western mathematicians to study the binary system of enumeration. It was this shift in perspective that enabled Leibniz to progress where others had stumbled.

Binary has only 2 digits: 0 and 1. Binary would find eternal fame as the basis of electronic computers, whose memory was based upon the presence or absence of electric charge. The term bit is a contraction of “binary digit.”

To Leibniz, the significance of binary math was more religious than practical. He regarded binary as a natural proof of the existence of God, arguing that the all-knowing One had created existence out of naught.

Leibniz’s mechanical reckoner spawned a swarm of imitators. Almost every mechanical calculator built in the next 150 years was based upon Leibniz’s device.

The 1st mass-produced calculator appeared in 1820. The Arithmometer was invented by French inventor and entrepreneur Charles Xavier Thomas de Colmar. It was dependable enough to be used in government and commercial enterprises. The Arithmometer was only one of hundreds of mechanical inventions ushered in by industrialization, by which the commercial world moved at a faster and harsher pace.

Since the advent of logs, the tools of the trade of those who worked with figures were mathematical tables. These were indispensable for finance, science, navigation, engineering, surveying, and other fields. There were tables for interest rates, square and cube roots, hyperbolic and exponential functions, and mathematical constants, like Bernoulli numbers and the per-pound price of meat at the butcher. Many mathematicians devoted the greater part of their careers to tabular calculation.

The need for accurate tables became a matter of state concern. France had a national project for it that took 2 years, resulting in 2 handwritten copies of 17 volumes of tables. Fearing misprint, the tables were never published. Instead, they were stored in a library in Paris, where anyone could consult them.

Despite the cost and effort that went into making mathematical tables, they were invariably erroneous. Britain’s navigational bible was sprinkled with mistakes grievous enough to have ships run aground and get lost at sea.

Mathematicians were stymied for a remedy. Then a young English polymath, Charles Babbage, lit upon a solution: a calculating gadget he called the Difference Engine.

Powered by a steam engine, the Difference Engine was designed to calculate tables by the method of constant differences, and record the results on metal plates, from which results could be printed; thus, the gremlins of typographical errors could be eliminated.

Babbage managed a proof-of-concept prototype that demonstrated the feasibility of his conceptual contraption. A full-fledged version would have required thousands of precision gears, axles, and other parts that were far beyond his budget.

Babbage did manage to secure funding from the British government: an unprecedented move in support of private enterprise at the time. But through his own lack of diligence, along with a petty, corrupt partner, Babbage blew his chance. In 1842, 19 years later, the venture was officially canceled.

If the Difference Engine had been built, it would have stood over 3 meters high and wide, 1.5 meters deep, and weighed 1.8 tonnes, filled with an intricate array of machinery connected to 7 main vertical axles. Its calculations would have been slow and cumbersome, but still preferable to pen reckoning.

In 1834, Babbage had a vision of a machine that could solve any mathematical problem. He produced the first workable design of his Analytical Engine by mid-1836, then tinkered with the idea off-and-on for the rest of his life; producing 6,500 pages of notes, 300 engineering drawings and 650 charts. It was a dream Babbage never realized.

In its supposed heyday, the Difference Engine received profuse publicity. It was only a matter of time before such a machine was manufactured.

Inspired by Babbage, Swedish lawyer and inventor Pehr Georg Scheutz and his son Edvard managed to make their own mechanical calculator, which they called the Tabulating Machine. Several difference engines were constructed in Sweden, Austria, the US, and England. The British Registrar General, responsible for vital statistics, had a copy of Scheutz’s tabulator built in the late 1850s. While the machine made the work somewhat easier, it often malfunctioned and required constant maintenance.

In 1884, American inventor Herman Hollerith filed patents for an electromechanical system that stored and tallied the perforations on punched cards. The cards were punched to represent particular statistics of any sort, whether census or inventory data.

Hollerith powered his device with batteries. His equipment was smaller, faster, simpler, and more reliable than the mechanical machines that preceded it.

The US Census Bureau was impressed with Hollerith’s work, but it first demanded a test against 2 other manual systems designed by Bureau employees. Hollerith’s system emerged triumphant, completing the job 8 to 10 times faster. The agency ordered 56 tabulators and sorters for the 1890 census, paying $750,000 in rental fees (the machines were rented, not purchased).

The 1880 census took 9 years to tally, at a cost of $5.8 million. The 1890 census took less than 7 years, but the tab was $11.5 million.

The benefits of automation seemed uncertain. But the comparison between censuses was somewhat apples-and-oranges.

The 1890 census was far more comprehensive. Indeed, the Census Bureau estimated that it had actually saved $5 million in labor costs.

Hollerith’s equipment did have hidden costs. With the machines used to the hilt in the 1890 census, the price of providing electric power was considerable.

Nonetheless, Hollerith’s system was welcomed all over the world. By the early 1900s, the company couldn’t keep up with demand. In 1911, Hollerith’s firm merged with 3 others to become the Computing-Tabulating-Recording Company, which evolved into International Business Machines (IBM).

To facilitate the tally of the 1890 census, Hollerith devised a typewriter-like counter. This was one of many mechanical aids to calculating and writing to be constructed.

In 1647, English economist William Petty patented a copying machine in the form of 2 pens for double writing. What followed in the next 2 centuries were innumerable clumsy attempts in a similar direction.

The typewriter was first patented by English inventor Henry Mill in 1714. Others soon followed in a variety of configurations.

The 1st semi-practical typewriter was conceived in 1866 by American inventor Christopher Sholes. Technical difficulties delayed its manufacture until 1871.

The typewriter’s initial principal defect was that the type-bars had no return spring, making for a tediously slow mechanism. Sholes fixed this with springs.

Initially, Sholes’ typewriter keys were arranged alphabetically. But then Sholes came upon the idea of arranging the keys by combinations of the most frequently used letters. After some study, the familiar qwerty keyboard was born.

The Sholes typewriter wrote only capital letters. Because the letters were struck on top of the platen, they could not be read while typing.

The Remington company – manufacturer of guns, sewing machines, and agricultural equipment – took an interest in Sholes’ typewriter, despite skepticism by one of its directors who could see no reason for machines to replace people for work which they already did well.

Starting in 1873, the Remington Model I was the 1st typewriter marketed in the United States. It followed on the heels of a writing ball-style typewriter by Danish inventor Rasmus Malling-Hansen in Denmark, which was sold in Europe beginning in 1870.

The 1878 Remington model offered lowercase letters. Finally, in 1887, the type-bars got mounted so that the text could be read while it was being typed. Hence the modern typewriter emerged.

American inventor David Parmalee constructed the 1st arithmetic calculator with a keyboard in 1849, receiving a patent for it the next year. It was an adding machine limited to single-digit numbers. Results had to be written down, and larger numbers added separately.

Parmalee’s invention deserved the skepticism reserved for machines that did work people could do better without them; so too did the machines that followed it. Numerous inadequate improvements appeared in succeeding decades. The mechanical adders continued to require much preliminary manipulation and conscientious attention on the part of the operator.

What was worse was that the machines offered no numerical capacity of consequence and were tediously slow. Further, they were fragile in use, and so gave wrong answers if not treated with delicacy and dexterity.

The earliest usable adding machine was the Comptometer, by American inventor and industrialist Dorr Felt. Short on funds, Felt constructed the prototype in a macaroni box in 1885.

The Comptometer was manufactured without interruption from 1887 to the mid-1970s, with continual improvements in speed and reliability. It went electromechanical in the 1930s. In 1961, the Comptometer became the 1st mechanical calculator to have an all-electric calculating engine, thereby being the evolutionary link between mechanical and electronic calculators.

Of course, Felt was not the only one with an adding machine on the market. As a bank clerk, William Burroughs knew too well the inadequacies of those then-available. After working in his father’s model-making and casting shop, where he met many inventors, Burroughs began tinkering.

In 1884, Burroughs developed an adding machine with a keyboard, driven by a handle. With backing from local merchants, he rushed his machine into production in 1887. Burroughs’ haste proved an expensive mistake: the machines did not stand up to everyday use. Furious with himself, he walked into the stockroom one day and pitched them out the window, 1 by 1.

In 1892, Burroughs patented another calculator. This one had a built-in printer. It proved a winner, far outselling all others on the market. But Burroughs did not live to enjoy his success. Suffering from poor health, Burroughs died at age 41 (1898).

Early calculators found homes in banks, company accounting departments, and universities. By the 1920s, electric ones were available: just push the buttons and results were printed out on neat rolls of paper.

While decent at basic arithmetic, these calculators were no good at more complicated math. To up the mathematical ante, many Rube Goldberg contraptions were devised by engineers, but they were cumbersome and expensive.

Differential equations were widely used in scientific and engineering circles. Dealing as they do with derivatives, these equations were well beyond the ken of the tallying machines to date.

Differential equations can be attacked numerically, via difference equations, or graphically, with waves on paper substituting for numbers. Beginning in 1814, all manner of clever analog gadgets were devised to work these equations, each targeted to a specific application. Scales and rules are exemplary analog instruments, as contrasted with the digital devices exemplified by Napier’s bones, mechanical calculators, and electronic computers.
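
To give a flavor of the numerical attack, here is a minimal sketch in C of Euler's method, the simplest stepwise scheme, applied to the differential equation dy/dt = -y; the equation and step size are chosen purely for illustration.

```c
#include <stdio.h>

int main(void) {
    double y = 1.0;     /* initial condition: y(0) = 1 */
    double h = 0.1;     /* step size */

    /* Replace the derivative with small finite steps: 10 steps of 0.1
       carry the solution of dy/dt = -y from t = 0 to t = 1. */
    for (int step = 0; step < 10; step++)
        y += h * (-y);

    printf("y(1) is roughly %f (exact value: 0.367879)\n", y);
    return 0;
}
```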

Lord Kelvin realized that these special-purpose analog devices were the conceptual seeds of much more powerful machines. He outlined a generalized “differential analyzer” in an 1876 paper. But the technology of the day was not up to the job. It was not until 1930 that a differential analyzer was built, and then by a man, American engineer and MIT professor Vannevar Bush, who claimed not to have been familiar with Kelvin’s paper.

Kelvin himself was not all theory. In 1873 he built a hand-cranked mechanical tide predictor that could calculate the time and height of ebb and flood tides for any day of the year, printing its predictions on rolls of paper; an especially handy device for an island nation like Britain.

Bush’s differential analyzer was inelegant but it worked: generating solutions off by no more than 2%, which is about the best that could be expected from an analog calculator. The machine became quite influential. Copies were made in several countries. Bush went on to build a larger, faster electromechanical version using vacuum tubes in the early 1940s.

But Bush and his MIT colleagues were barking up the wrong tree: analog devices were inherently ill-suited to accurate, versatile computing. Although special-purpose analog calculators continued to be built, the future was digital.

Spurred by the success of the differential analyzer, by the mid-1930s a handful of engineers and scientists in the US, England, and Germany gave serious thought to the mathematical potential of machines. Although they occasionally wrote articles or spoke at conferences, these men worked in relative isolation.

Babbage’s flexible Analytical Engine was all but forgotten except in Britain, and even there its underlying principles had to be rediscovered. The first to do so was young German civil engineer and inventor Konrad Zuse.

Zuse hated the mathematical drudgery of his chosen profession and did not relish the prospect of a career hunched over a desk twiddling equations. He also had the good sense to realize that another mechanical calculator with endless gears wasn’t the answer.

After carefully considering the problems with mechanical calculation, Zuse made 3 conceptual decisions that put him on the right track. 1st, he decided that the only effective solution was a universal calculator, which meant the same flexibility that Babbage had envisioned.

2nd, Zuse decided to use binary notation: an unobvious inspiration that ensured his success. The irreducible economy of binary meant that the calculator’s components could be as simple as on/off switches.

Larger-base number systems, such as decimal, could be had by ganging binary bits together to form words. An 8-bit word can represent 256 values (0–255).
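
A minimal sketch in C of the ganging: 8 on/off bits strung together span 2⁸ (256) values.

```c
#include <stdio.h>

int main(void) {
    int bits[8] = { 1, 0, 1, 0, 1, 0, 1, 0 };   /* most significant bit first */
    int value = 0;

    for (int i = 0; i < 8; i++)
        value = value * 2 + bits[i];   /* each additional bit doubles the range */

    printf("10101010 in binary = %d in decimal\n", value);   /* 170 */
    printf("8 bits span %d distinct values\n", 1 << 8);      /* 256 */
    return 0;
}
```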

Decimal had long been regarded as a God-given sine qua non until Zuse and a few other contemporaries independently questioned the axiom. Even Babbage had left the decimal assumption unquestioned; but then, the gears of the Analytical Engine were ideally suited to the decimal system.

3rd, Zuse devised a simple set of operating rules to govern his hypothetical machine’s internal operations. Although he did not realize it at the time, Zuse’s rule set was a restatement of Boolean algebra, named after English mathematician George Boole, who developed Boolean logic in papers published in 1847 and 1854.

Before Boole’s publications, formal logic was a sleepy discipline, with little to show for thousands of years of cogitation. Its most powerful analytical tool was the deductive syllogism, which Aristotle had sussed.

Most logicians criticized or simply ignored Boole’s work; but mathematicians were interested. Babbage called Boole’s 1854 paper “the work of a real thinker.” In 1910, English logicians Alfred North Whitehead and Bertrand Russell extended Boolean algebra into the formidable intellectual system of symbolic logic.

Binary logic is relatively simple, and easily realized in electrical circuits. Bits may be combined via and and or, or inverted via not: the most basic of many Boolean algebra operations.

The tables below show how these Boolean operators work, along with the contemporary symbols for the operations: ∧ (and), ∨ (or), and ¬ (not).

a  b   a ∧ b   a ∨ b        a   ¬a
0  0     0       0          0    1
0  1     0       1          1    0
1  0     0       1
1  1     1       1

Zuse graduated with a civil engineering degree in 1935. That same year he began working on the Z1, his 1st computer. Zuse finished the Z1 in 1938. It had ~30,000 metal parts, and never worked well owing to poor mechanical precision.

Zuse was given the resources by the German military to build the Z2, which he completed in 1940. The Z3 followed in December 1941. It was the first fully operational electromechanical computer.

Confident that the war would be won within 2–3 years, the military had refused to fund the Z3. Instead, an aerodynamics research institute indirectly funded the project.

No one was interested in a general-purpose computer. But the institute was interested in Zuse solving a problem regarding airplane wing flutter; thus Zuse got the resources he needed.

The Z3 could perform mathematics rather quickly, including finding square roots; but it could not execute conditional jumps, which are fundamental to logic processing. None of Zuse’s machines could jump, as the idea never occurred to him.

Zuse built a faster and more powerful Z4 toward the end of the war. It was discovered by the Allies when they invaded Germany, but Zuse and his computer were not deemed a security risk, so he was allowed to go his way.

In 1950, the Z4 was installed in a Zurich technical institute: the only mechanical calculator of consequence in continental Europe for many years.

The core component of Zuse’s calculators had been telephone relays. A relay is an on/off electromechanical switch, consisting of an electromagnet that closes an electric circuit when power is applied.

Zuse knew of vacuum tubes, which could switch on and off thousands of times a second; but they were hard to come by in Germany, and expensive, so Zuse stuck with relays.

A vacuum tube (aka electron tube or just tube) is a device that controls electric current between electrodes in an evacuated (airless) chamber. British electrical engineer and physicist John Ambrose Fleming invented the vacuum tube in 1904. American inventor Lee de Forest extended Fleming’s idea in 1906 to create a tube called a triode: 3 electrodes inside a tube. The triode’s ability to act as an amplification device was not discovered until 1912.

The triode revolutionized electrical technology, creating the field of electronics. It allowed transmission of sound via amplitude modulation (AM), replacing weak crystal radios, which had to be listened to with earphones.

Tubes made transcontinental telephone service possible (1915). Radio broadcasting began in 1920 in the wake of triode technology. Triodes were the key to public address systems, electric phonographs, talking motion pictures, and television.

Putting 2 vacuum tubes together, British physicists William Eccles and F.W. Jordan invented the flip-flop circuit in 1918. Coupled with a clock, flip-flops were the means to implement Boolean logic in a synchronous fashion. This was the essence of the computer hardware that evolved with tubes.
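
A minimal software sketch in C of the idea, assuming a simple clocked flip-flop that latches its input on each tick and holds it in between; the names are illustrative, not a circuit model.

```c
#include <stdio.h>

/* One stored bit; in hardware, a pair of cross-coupled tubes
   (or, today, transistors) holds it. */
struct flip_flop { int q; };

/* On a clock tick, the flip-flop latches its data input. */
void clock_tick(struct flip_flop *ff, int d) { ff->q = d & 1; }

int main(void) {
    struct flip_flop ff = { 0 };
    int data[] = { 1, 1, 0, 1 };            /* inputs seen at successive ticks */
    for (int tick = 0; tick < 4; tick++) {
        clock_tick(&ff, data[tick]);        /* synchronous update */
        printf("tick %d: Q = %d\n", tick, ff.q);  /* output holds until next tick */
    }
    return 0;
}
```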

Contemporaneous to Zuse were 3 American computing projects. The first started in 1937, when American mathematical physicist George Stibitz, a Bell Labs researcher, tinkered with telephone relays and came up with an electromechanical adder in his kitchen.

In 1938, Claude Shannon, a student at MIT, published a thesis on implementing symbolic logic via relay circuits. Upon graduation, Shannon went to work at Bell Labs.

Bell was a telephone company, and its management saw no future in computation machines. Experiments in calculators by its researchers went nowhere.

American physicist Howard Aiken wrote a proposal for an electromechanical calculator in 1937. His motivations were similar to Zuse’s, but Aiken did not have as much savvy, notably in comprehending the advantage of using a binary system.

After being turned down by one calculating machine company, Aiken got the ear of IBM president Thomas Watson Sr. through a personal reference by a Harvard professor. Aiken taught math at Harvard at the time.

Watson was interested enough to take Aiken’s ideas and run with them, though he doubted there was much of a future for scientific computing equipment, which is how he viewed the project.

I think there is a world market for maybe 5 computers. ~ Thomas Watson Sr. in 1943

The result – the Mark I – was an electromechanical calculator working in decimal. A technological dead end, the Mark I was born a one-off dinosaur at the enormous cost of $500,000.

Once the 2nd World War started, the US military wanted ballistic firing tables, so that gunners might properly aim their weapons. The effort started with human calculators: mostly women college graduates.

In 1942, American physicist John Mauchly, a teacher at the University of Pennsylvania’s Moore School of Electrical Engineering, wrote a paper about using vacuum tubes to build an “electronic computer.” The paper was poorly organized, conceptually unsophisticated, and badly written. It mostly speculated wildly about the hypothetical calculation speed of such a device.

Mauchly’s paper was sent to the US ordnance department as a proposal, where it was ignored. But then the trajectory calculation backlog became so bad in 1943 that the military decided to gamble on Moore to build an electronic calculator.

200% over budget and too late to help with the war effort – 3 months after the Japanese had surrendered, ending WW2 – the ENIAC was operational. The beast was nearly 3 meters high, 24 meters long, and weighed 27 tonnes.

Operating ENIAC was tedious in the extreme: setting up a problem took many hours of setting thousands of switches and cables. ENIAC could not store programs or remember more than 20 10-digit numbers, so problems had to be solved in stages. Further, the machine required constant maintenance, as vacuum tubes frequently blew.

Having experimented with nuclear death during WW2, at the cost of 2 Japanese cities, the US military became obsessed with atomic bombs as the cold war began heating up. Getting a gauge on how to destroy cities by crushing atoms required serious number-crunching.

The military called on Moore for the next generation of calculator. This go-round the team had the wits to go with binary rather than decimal, thanks to a new consultant: esteemed Hungarian American mathematician John von Neumann. The Moore School would never have gotten the contract had it not been for von Neumann, as the ENIAC had been a fiasco. Von Neumann lent a badly needed air of legitimacy.

As it was, a contract to build a new computer was inked in April 1946 at an initial budget of $100,000. The machine – EDVAC – was not completed until 1952, at a cost just under $500,000.

EDVAC weighed 7.85 tonnes and covered 45.5 m² of floor space. Staffed by 30 people at a time, the computer worked reliably.

Computers in the future may weigh no more than one-and-a-half tonnes. ~ Popular Mechanics in 1949

Though others contributed, von Neumann was the chief architect of how the machine was to be structured. Besides using binary, the design included a central processor operating serially using random-access memory. This came to be known as von Neumann architecture, and would dominate how computers were built from then on.

ENIAC had operated on all bits in a word in parallel, which is faster but more difficult to build; hence, von Neumann suggested serial processing. In contrast, RAM allowed data and programs to be stored and retrieved from memory directly (rather than serially). This considerably enhanced performance.
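
A minimal sketch in C of the resulting fetch-decode-execute cycle, with program and data sharing one memory and the processor stepping through instructions serially; the 3 opcodes are invented for illustration.

```c
#include <stdio.h>

enum { HALT = 0, LOAD = 1, ADD = 2 };   /* opcodes invented for illustration */

int main(void) {
    /* One memory holds both the program (addresses 0-5) and the data (8-9). */
    int memory[16] = {
        LOAD, 8,        /* acc = memory[8]   */
        ADD,  9,        /* acc += memory[9]  */
        HALT, 0,
        0, 0,
        40, 2           /* the data operated upon */
    };
    int pc = 0;         /* program counter */
    int acc = 0;        /* accumulator register */

    for (;;) {                          /* the fetch-decode-execute cycle */
        int opcode  = memory[pc++];     /* fetch the instruction...       */
        int operand = memory[pc++];     /* ...and its operand             */
        switch (opcode) {               /* decode, then execute           */
        case LOAD: acc = memory[operand];  break;
        case ADD:  acc += memory[operand]; break;
        case HALT: printf("result: %d\n", acc); return 0;
        }
    }
}
```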

The Brits were not idle. English scientists were invited to see ENIAC as the war drew to a close.

The British government bungled its first attempt to build a computer, thanks to bureaucratic indecision and miscomprehension.

Victoria University of Manchester began work on a computer in August 1948 and had its first working version by April 1949. The Manchester Mark I was the 1st fully electronic computer that could store a program in memory.

Computers proliferated in the 1950s. Most projects ran over budget and came in late.

Unsurprisingly, given their cost, a handful of business machine corporations held the market for the behemoth computers at the time. One came to dominate them all: IBM. But it was the phone company that came up with the innovation that would revolutionize computer technology.

Besides being expensive, the tubes that comprised early computers were fragile, cumbersome, gluttons for electricity, and gave off a lot of heat that required dissipation. They were, in short, a severe constraint on computing capacity.

The rise of quantum mechanics in the 1920s gave scientists a theoretical tool for the tremendously tiny. Semiconductors were given a look, but the research did not get very far.

Semiconductors are highly susceptible to contamination, which alters their electrical characteristics. It was not until the early 1930s that semiconductor substrates – pure silicon and germanium – were available. Even then, scientists remained baffled by the behavior of semiconductors, especially their ability to convert light into electrical power.

In July 1945, Bell Labs decided to get serious about researching the potential for solid-state physics. The phone company was a huge consumer of tubes and mechanical relays, which were not entirely reliable and a maintenance headache.

Bell’s research team was headed by American physicist William Shockley Jr. The team made good progress. In 1947, they invented the transistor, which became the keystone in semiconductor-based computing.

The 1st commercial transistorized computer appeared a decade later, in 1957. Compared to tubed predecessors, the new generation was superior in every way: smaller, faster, more reliable, powerful, and economical.

The next step was miniaturization. English electronics engineer Geoffrey Dummer was first to conceive of integrated circuits.

With the advent of the transistor and the work on semiconductors generally, it now seems possible to envisage electronic equipment in a solid block with no connecting wires. The block may consist of layers of insulating, conducting, rectifying, and amplifying materials, the electronic functions being connected directly by cutting out areas of the various layers. ~ Geoffrey Dummer in May 1952

Dummer could not drum up support for his vision from the British government; but the US Defense Department was very interested. It had spent gobs of money in that direction but came up empty-handed.

Working for private corporations, American electrical engineer Jack Kilby and American physicist Robert Noyce created the first integrated circuits (ICs) in 1958. These early ICs contained only a few transistors.

Yeah, microchips,… but what are they good for? ~ IBM senior engineer in 1968

From there the race was on to pack more circuits in and get the cost down. It was not until the mid-1960s that ICs became reasonably priced; only then did they begin to appear in commercial computers. By 2007 an IC might contain over a billion transistors.

IBM faced a dilemma about the same time as the IC was making its debut. The company had a burgeoning product line, filled with incompatible machines: a program written on one IBM computer would not work on another. Nor were hardware peripherals, such as printers, compatible.

After extensive internal debate, the decision was to construct a comprehensive family of compatible computers. The plan was sound from a strategic perspective, but risky in every way. It rendered IBM’s extant product line obsolete. It involved betting on ICs at a time when they were unproven. It meant an enormous effort at tremendous expense. And it would let other companies introduce compatible equipment and software, thus cutting into IBM’s sales.

It was roughly as though General Motors had decided to scrap its existing makes and models and offer in their place one new line of cars, covering the entire spectrum of demand, with a radically redesigned engine and exotic fuel. ~ Fortune magazine in September 1966

4 years and over $5 billion later, IBM introduced the System/360. The venture was an enormous success, propelling IBM to even greater dominance. So much so that the US Department of Justice (DOJ) sued IBM in 1969 for being an illegal monopoly. The suit dragged on for 13 years before the DOJ dropped it, concluding that its accusation was “without merit.”

From the 1950s, innumerable computer companies went in and out of business. A quarter century later, the identical dynamic happened to software companies.

 Digital Equipment Corporation

In the mid-1950s, 2 MIT computer engineers – Ken Olsen and Harlan Anderson – noticed that students preferred a stripped-down computer at an MIT lab to a much faster IBM machine there. The difference was interactivity. The IBM machine required punching cards and waiting for a printout.

In 1957, Olsen and Anderson managed to secure enough funding to sell computer boards suitable for laboratory work. From that humble beginning, Digital Equipment Corporation (DEC) grew to be the 2nd-largest computer corporation in the world, behind IBM. DEC did so by focusing on relatively inexpensive, interactive minicomputers, which, later on, could rather readily be networked together.

The line of computers that earned DEC’s keep was the PDP, an acronym for Programmed Data Processor. New PDP models sprang up as hardware capabilities progressed.

DEC’s primary customer base comprised the scientific and engineering communities. Only from the mid-1970s did the company begin to attract corporate computing customers.

A DEC research group produced a prototype microcomputer in 1974, before the debut of the MITS Altair, which became the 1st successful hobbyist computer. Olsen, the head of DEC, was unimpressed.

There is no reason anyone would want a computer in their home. ~ Ken Olsen in 1977

Only after IBM had successfully launched its PC did DEC take an interest in personal computers. Even then, its efforts were half-hearted and even insensible. DEC’s disk drives were incompatible with other computers, and its proprietary media too expensive.

The personal computer will fall flat on its face in business. ~ Ken Olsen in 1985

But it was DEC that was falling flat on its face. The company went defunct in 1998. DEC failed to adjust to the era of personal computers.

Olsen’s observational power and keen ambition in youth succumbed to the smug arrogance of an aging corporate titan. The irony was that Olsen failed to recognize in PCs the same market opportunity for lower-cost computing that was the raison d’être of DEC’s origination.

Personal Computers

Microprocessors cut their teeth in programmable electronic calculators; then came the Intel 8080 microprocessor, released in April 1974. The 8080 was powerful enough to run a computer: so thought American electronics engineer Edward Roberts, who designed the Altair 8800 computer around the 8080 for his hobby-kit company: Micro Instrumentation and Telemetry Systems (MITS).

The Altair got off to a roaring start, thanks to fanfare from the editors of Popular Electronics magazine, which proclaimed in its January 1975 issue:

For many years, we’ve been reading and hearing about how computers will one day be a household item. Therefore, we’re especially proud to present in this issue the first commercial type of minicomputer project ever published that’s priced within reach of many households – the Altair 8800, with an under-$400 complete kit cost, including cabinet.

The Altair was the 1st microcomputer popular with hobbyists. Many others had been marketed before. Below is a brief survey of a few.

The 1st hobbyist calculator was Simon: a $600 kit for building a “mechanical brain” that could perform simple arithmetic. ($600 in 1950 = $5,910 in 2015 dollars.) Simon was first described by Edmund Berkeley in his 1949 book Giant Brains, or Machines That Think. Instructions on how to build Simon were published in a series of articles in Radio-Electronics in 1950–1951.

Simon is of no great practical use. It knows only the numbers 0, 1, 2, and 3. ~ Edmund Berkeley

Italian typewriter and electronics manufacturer Olivetti introduced the Programma 101 at the 1964 New York World’s Fair. It was a desktop programmable calculator, with a keyboard, printer, and magnetic card reader.

Over 44,000 Programma 101 units were sold worldwide, 90% of them in the US. 10 went to NASA, where they were used to plan the Apollo 11 landing on the Moon. The initial US price was $3,200 ($25,000 in 2015 dollars).

If she can only cook as well as Honeywell can compute. ~ Neiman Marcus advertisement for the Kitchen Computer

The first time a computer was offered as a consumer product, none sold. In 1966, Honeywell introduced the Kitchen Computer, which included a cutting board. It was promoted as an extravagant gift at high-end department store Neiman Marcus for $10,600 ($77,650 in 2015 dollars).

Weighing over 45 kg, the Kitchen Computer was advertised as useful for storing recipes. But that would have been quite a chore for the average housewife. Learning the user interface, which was a gaggle of toggle switches, required taking a 2-week course, which was included as part of the price.

 Xerox PARC

The photocopier company Xerox established its 2nd research center in 1969 at Palo Alto, California. Within a very few years the Palo Alto Research Center (PARC) did more to advance computing technology than any entity at the time: inventing the laser printer; Ethernet, which became the standard in networking hardware; and, most significantly, the graphical user interface (GUI) that became the universal way of interacting with computers. Commercially, these were philanthropic works.

PARC developed the Xerox Alto microcomputer in 2 years (1972–1973). It was the 1st computer with a graphical user interface. To point at an object on the screen, a user moved a mouse, another PARC invention.

The Alto was not a commercial product. Xerox did not take advantage of PARC’s computing inventiveness until 1981, when it released the Star workstation.

The Xerox Star was meant as an office system, not a standalone computer. A basic system cost $75,000 ($196,000 in 2015 dollars), and $16,000 ($41,775 in 2015 dollars) for each additional workstation.

The Star price tag was too hefty, and Xerox did not know how to market computers. More damningly, the Star was not reliable, and had a slow file system. Saving a sizable document could take minutes. Only 25,000 units were ever sold.

○○○

Hobbyists were having a heyday with microcomputer systems in the mid-1970s. There were clubs devoted to them, where enthusiasts shared knowledge and traded tips on the latest advances.

 Apple Computer

Design is not just what it looks like and feels like. Design is how it works. ~ Steve Jobs

American electronics engineer and programmer Steve Wozniak (known as Woz) met Steve Jobs through a mutual friend in 1971. They quickly became friends, sharing an interest in pranks and electronic devices.

Woz designed a blue box: an electronic device that emitted tones which let a user make free telephone calls. Jobs briefly put the two in business selling the illegal boxes to friends and acquaintances.

In 1975, Woz joined the local Homebrew Computer Club. His enthusiasm drove him to design a simplified version of the Altair.

Woz demonstrated his board to an appreciative computer club audience. Woz then tried to interest Hewlett-Packard (HP), where he worked as an engineer, in making an inexpensive microcomputer. HP, which was doing well in the electronic calculator business, demurred.

Once again, the entrepreneurial Jobs cajoled Woz into going into business. Woz’s board became the Apple I, which sold for $666 in 1976 ($2,778 in 2015 dollars).

The Apple II followed in 1977, with the low-end model priced at $1,298 ($5,085 in 2015 dollars). That it had color capability spurred its popularity among enthusiasts.

The Apple II began to penetrate the business market from 1979 thanks to VisiCalc, a spreadsheet program.

Apple followed its model II with the Apple III in 1980. It was a bust.

The initial manufacture of the Apple III had loose connectors that made the machine flaky. Apple issued a recall and fixed the problem, but by then the damage to the III’s reputation had already rendered the computer a dinosaur.

In 1979, Jobs visited PARC and saw the Xerox Alto. In an echo of young Ken Olsen’s appreciation of the value of interactivity, Jobs was immediately convinced that the future of computers lay in graphical user interfaces (GUIs).

Apple began development of the Lisa computer, named after Steve Jobs’ daughter, in 1978. After Jobs’ epiphany at PARC, the Lisa took on a GUI.

Infighting forced Jobs out of the Lisa development group in September 1980, so he went to work on the separate, low-end Macintosh project.

Lisa came out in 1983. It had a stiff price tag: $9,995 ($23,817 in 2015 dollars). But the real problem was inside the box. Though it had advanced features, the Lisa OS sucked the power out of the computer’s microprocessor, leaving it sluggish. It did not help that Jobs announced that Apple would soon be releasing a superior system that would not be compatible with the Lisa.

The Macintosh was launched on 24 January 1984 with a TV commercial during the Super Bowl football game that proclaimed that “1984 won’t be like 1984.” The reference linked the dominance in the personal computer market of the IBM PC with George Orwell’s 1949 novel of totalitarian dystopia titled Nineteen Eighty-Four.

The Macintosh was originally priced at $2,495 ($5,699 in 2015 dollars). It was an expensive personal computer compared to the competition, but the only affordable PC at the time with a snazzy graphical user interface.

Macintosh’s GUI was imitated by Microsoft a year later with its ersatz Windows OS. But then, the Lisa and Mac had both been spawned in the spirit of the Xerox Alto.

Apple tried to ensure usability in the programs developed for the Mac by publishing Inside Macintosh, an extensive multi-volume set of guidelines on application programming and user presentation.

Macintosh ushered in the era of desktop publishing and computer graphics for everyman. Unsurprisingly, the Mac failed to displace the IBM PC as microcomputer of choice for business.

 IBM PC

“Is IBM just another stodgy company?” ~ BusinessWeek in 1979

The spurious antitrust suit against IBM took its toll on the company. With its management defensive, IBM’s share of the overall computer market declined from 60% in 1970 to 32% in 1980.

IBM missed the minicomputer market during the 1970s; the niche that DEC created and came to dominate.

By 1980, the personal computer industry was vibrant, with the likes of the Apple II, Commodore PET, Atari, Radio Shack’s TRS-80, and various computers running the CP/M operating system (OS).

In developing its PC, IBM made some radical departures from its typical way of doing business. Most fatefully, it used commercial off-the-shelf parts. This meant that other companies could create clones.

IBM chose the 8088 processor over the superior 8086 because Intel offered a better price and could provide greater volume.

IBM also outsourced the operating system. Under the contract between IBM and Microsoft, the OS provider, this guaranteed that there would be clones, as Microsoft could license its operating system to others. From the get-go, Bill Gates was determined to make MS-DOS an industry standard.

It took a skunkworks division in Boca Raton, Florida only 1 year to develop the IBM PC, which was publicly announced 12 August 1981. The low-end configuration was offered at $1,565 ($4,806 in 2015 dollars). The pricing made it clear that IBM was aiming at the Apple II: and it hit the bullseye. The IBM PC was immediately successful: welcomed into businesses everywhere, as it had the IBM imprimatur – so much so that by 1984, Apple could allude to Orwell and everyone knew whom the reference was to.

The success of the IBM PC hatched innumerable clones. This made the PC an industry standard: something which delivered dominance to Microsoft while eventually denying it to IBM.

In 2004, IBM sold its PC business to Lenovo, a Chinese company. By 2015 IBM had divested itself of computer hardware manufacture, having positioned its diminished self as a software services company; though, thanks to its historic brand name, more than just one in the crowd.

Software

“Complexity kills. It sucks the life out of developers, it makes products difficult to plan, build, and test, it introduces security challenges, and it causes end-user and administrator frustration.” ~ American software entrepreneur Ray Ozzie

Computer hardware is to software what the brain is to the mind: a substrate for processing data. Unlike all other technologies, in its ethereality software is entirely a product of the mind. Thus, software typifies human intellectual acumen like no other technology.

Program designers tend to think of the users as idiots who need to be controlled. ~ American computer scientist John McCarthy in 1983

Anyone who has extensively used a desktop computer – regardless of operating system – has noticed that application programs regularly sport somewhat different user interfaces. This owes to modern operating systems, which do offer common services, such as windowing, but do not make it easy for programmers to create consistent user interfaces.

The application programming interface (API) to the operating system renders creating programs an onerous and problematic process. Thus, application programmers find bypassing the OS to create their own feature set advantageous from both development and maintenance standpoints.

A more telling aspect of modern software is a randomness in robustness. Beyond usability issues lie glitches which bedevil users with crashes: programs abruptly terminating.

If debugging is the process of removing bugs, then programming must be the process of putting them in. ~ Dutch computer scientist Edsger Dijkstra

Finally, the greatest evil that may infect a computer is malware: malicious software that steals data, deletes files, takes control, or otherwise compromises peace of mind in computing.

A modest but continually annoying aspect of this malevolence is junk email. Over 90% of the email sent is spam (a common slang term for junk email).

Crashes, inconsistent GUIs, malware, and spam are all failures of operating systems to adroitly provide for application programmers and computer users.

Microsoft has never been able to deliver a decent Windows OS to its customers. The microprocessor you have may be a supercomputer on a chip, but Microsoft Word can’t keep up with your typing. Configuring home networking is ridiculously vexatious; in Windows 10 it is practically impossible. Bugs, crashes, inadequacies, and inconveniences remain rife. Windows 10, the latest OS, frequently crashes and must restart itself. Windows 10 is so fundamentally unstable that it often commits suicide within months, refusing to load itself and run. Microsoft updates to its OS often cause the death.

Since the release of Windows 8 in 2012, Windows has had something of a schizophrenic interface at both the user and programmer levels, with 2 incompatible application types. The newer “apps” are rinky-dink compared to traditional applications, which are much more difficult to develop because of Windows’ arduous API. Microsoft has never been adept at software design.

In the 21st century, Apple has done better by dint of laying its imitative GUI over a Unix core, after the company failed to create a dependable foundation on its own after over a decade of effort. The Macintosh OS was solidified by using a well-worn kernel that was already a quarter century old when adopted by Apple. Like Microsoft, Apple codes its OS in C++.

The cause of the software malaise rampant throughout the industry can be traced to a single source: the language in which computers are programmed.

These problems appear disparate and systemically unsolvable only because of the failure to comprehend the importance of language as the foundation for a holistic computing solution. The history of programming languages illustrates the dilemma and shows software developers as dolts failing to see the forest for the trees.

“Give a man a program, frustrate him for a day. Teach a man to program, frustrate him for a lifetime.” ~ Pakistani programmer Waseem Latif

History

That language is an instrument of human reason, and not merely a medium for the expression of thought, is a truth generally admitted. ~ George Boole

All the early computers were maddeningly difficult to program. They were instructed using machine code: binary words comprising an inhuman language that only the computer’s processor could understand.

The first computer language was assembly, developed by English computer scientist Alan Turing in 1948. Comprising numbers, letters, symbols, and short instruction words, such as “add,” assembly language was one small step, and yet one giant leap, away from machine code. An assembler translated an assembly-language program into machine code, which then had to be entered into the computer by an operator.
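
A minimal sketch in C of what an assembler does, looking each mnemonic up in a table and emitting its numeric opcode; the mnemonics and opcodes are invented for illustration, not any historical instruction set.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Opcode table: mnemonics and the numeric codes they stand for. */
struct op { const char *mnemonic; unsigned opcode; };

static const struct op table[] = {
    { "LOAD",  0x01 },
    { "ADD",   0x02 },
    { "STORE", 0x03 },
    { "HALT",  0xFF },
};

int main(void) {
    /* A tiny assembly-language program: mnemonic plus one operand each. */
    const char *source[][2] = {
        { "LOAD",  "10" },   /* load the value at address 10   */
        { "ADD",   "11" },   /* add the value at address 11    */
        { "STORE", "12" },   /* store the result at address 12 */
        { "HALT",  "0"  },
    };

    for (size_t i = 0; i < sizeof source / sizeof source[0]; i++)
        for (size_t j = 0; j < sizeof table / sizeof table[0]; j++)
            if (strcmp(source[i][0], table[j].mnemonic) == 0)
                printf("%-5s %-3s ->  %02X %02X\n", source[i][0], source[i][1],
                       table[j].opcode, (unsigned)atoi(source[i][1]));
    return 0;
}
```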

Most early computers were employed for scientific and engineering calculations, which involved large numbers. To manage these numbers, programmers used a floating-point system. Computers could not automatically perform floating-point operations, so programmers had to spend considerable time writing subroutines (program segments) that instructed machines on these operations.

In 1951, American computer programmer Grace Hopper conceived of a compiler which could translate human-readable program instructions into machine-readable binary. Hopper was one of the very few women pioneers in computing.

Fortran was developed by IBM in the mid-1950s. The language aimed at easy translation of mathematical formulas into code. It was the first widely used compiled language. Fortran was immediately popular, and computer manufacturers added their own customizations to it, resulting in a plethora of variants. This typically resulted in programs tied to a specific manufacturer’s machine.

It was not until 1966 that Fortran was standardized. Even then, variants continued to spring up like weeds. New standards were set in 1977, 1990, 1995, 2003, and 2008, with yet another revision in the works.

Hopper advocated machine-independent programming languages, and was instrumental in the development of Cobol, beginning in 1959. Cobol was one of the first high-level programming languages.

Cobol was something of a reaction to Fortran, which was oriented toward scientific and engineering work. Cobol was created as a portable, English-like language for data processing at the behest of the US Defense Department, which then insisted that computer manufacturers provide it. Whence Cobol’s widespread adoption by businesses, despite it being verbose and poorly suited to structured programming. The common result was monolithic, incomprehensible programs that were exceedingly difficult to maintain.

The use of Cobol cripples the mind; its teaching should therefore be regarded as a criminal offense. ~ Edsger Dijkstra

Early languages such as Fortran and Cobol were ineptly designed. As programs written in these languages grew in complexity, it became increasingly difficult to debug them, improve or add features, or keep them working when the hardware changed.

There were no theoretical constructs surrounding computer programming in the 1950s: just languages that arose ad hoc from concepts of getting a certain type of job done.

Cobol is exemplary. The driving idea behind Cobol was to make programs easier to read. Despite the fond hope that readability would mystically engender maintainability, that was not the case.

Fortran and Cobol both had a goto statement that allowed one-way transfer of control to a distant line of code. The jump capability of goto was a convenient shortcut for programmers trying to get their programs working as quickly as possible, but it wrecked program longevity by making code unmaintainable.

“The quality of programmers is a decreasing function of the density of go to statements in the programs they produce. The use of the go to statement has disastrous effects. The go to statement should be abolished from all “higher level” programming languages (i.e. everything except, perhaps, plain machine code).” ~ Edsger Dijkstra in 1968

Via the widespread use of goto, Fortran became derisively known as a “write-only” language: that is, not readable. Its ability to produce “spaghetti code” in the hands of inept programmers begat structured programming languages.
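
A minimal sketch, written here in C-style C++ purely for illustration, shows how even a trivial computation dissolves into label-chasing when expressed with goto:

    #include <cstdio>

    // The logic is nothing more than summing 1 through 10, but the reader
    // must trace jumps between labels to see that.
    int main() {
        int i = 1, sum = 0;
    loop:
        if (i > 10) goto done;
        sum += i;
        i++;
        goto loop;
    done:
        std::printf("sum = %d\n", sum);
        return 0;
    }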

[ Code maintenance drove software language evolution. ]

Procedural integrity was the paradigm behind structured programming. This regime aimed at code clarity. It was a reaction to the damage caused by goto.

The 1960 Algol programming language supported block structures, delimited by begin and end statements. Algol 60 was the first language that offered localized lexical scope for variables and nested function definitions: in short, execution compartmentalization.

Fortran and Cobol eventually acquired structured programming facilities. So did Basic, Fortran’s simple offspring.

In forcing programs to execute in a purely procedural manner – one routine calling another, with a subroutine returning to its caller upon its completion – structured programming centered on control structures.
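
The same computation in structured form, again a C-style C++ sketch for illustration, reads straight down the page: a loop construct, a subroutine that returns to its caller, and a block-scoped counter of the kind Algol introduced.

    #include <cstdio>

    // A subroutine: called, does its work, returns to its caller.
    int sum_to(int n) {
        int sum = 0;
        for (int i = 1; i <= n; i++) {   // i is visible only inside this block
            sum += i;
        }
        return sum;
    }

    int main() {
        std::printf("sum = %d\n", sum_to(10));
        return 0;
    }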

Control structure is merely one simple issue, compared to questions of abstract data structure. ~ American software scientist Donald Knuth

Algol did not address an equally important issue: localizing data to help ensure its integrity. But its offspring did.

The stellar spawn of structured programming was Pascal, designed by Swiss computer scientist Niklaus Wirth in 1968–1969. Intended as a language to teach programming, Pascal was something of a straitjacket in its inflexibility. Pascal had strong typing: data types were confined to their declared usage unless explicitly converted.
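
Pascal’s syntax is not reproduced here, but a rough analogue sketched in C++ (the Month type is invented) conveys what strong typing means in practice: a value of a declared type cannot quietly be treated as something else; any conversion must be stated explicitly.

    #include <cstdio>

    // A scoped enumeration will not silently convert to a plain integer.
    enum class Month { January = 1, February, March };

    int main() {
        Month m = Month::March;
        // int n = m;                     // rejected by the compiler
        int n = static_cast<int>(m);      // allowed: the conversion is explicit
        std::printf("month number %d\n", n);
        return 0;
    }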

Steve Jobs was so impressed with Pascal’s programming etiquette that he made it the interface language of choice for the Macintosh computer. (For efficiency, the Mac OS itself was programmed in assembly language.)

From a historical perspective, Pascal was a dead end the day of its conception. (Wirth did not think Pascal’s paradigm a dead end. He went on to design Modula-2 (1977–1985) and Oberon (1986): Pascal descendants that were instantly celebrated and quickly forgotten.) Instead, the descendants of Algol which bore the most fruit were a throwback toward assembly, and a new concept altogether: objects.

While Wirth was pondering how to impart proper programming practices to pupils, American programmer Dennis Ritchie had more nuts-and-bolts concerns.

 Unix & C

C is quirky, flawed, and an enormous success. ~ Dennis Ritchie

In the 1960s, Dennis Ritchie and other AT&T programmers were working on scaling up Multics, a time-sharing OS, to handle thousands of users. Multics began in 1964 as a cooperative project between MIT, GE, and Bell Labs (AT&T). Bell pulled out in 1969 because the project could not produce an economically useful system.

Ritchie and fellow programmer Ken Thompson, another Multics émigré, cast about for other work. Thompson set to developing a new file system on a little-used DEC PDP–7. Ritchie joined in.

Written in assembly language, their nascent OS was dubbed Unix by co-worker Brian Kernighan as a sarcastic reference to Multics.

When the PDP–11 came out in 1970, Ritchie put Unix on it by tweaking the old PDP–7 assembler for the new machine. Meanwhile, Thompson worked on B, a stripped-down programming language he derived from BCPL, which had been written by English programmer Martin Richards in 1966.

B and BCPL were essentially shortcuts for writing in assembly. B was typeless (without data types, except for a nondescript machine word), and so fell short of being a programming language like Algol.

In 1972, Ritchie added data types to B to better support the PDP–11 architecture. Thus arose the language called C, which was imaginatively named by being the letter in the alphabet following B.

Ritchie then rewrote Unix in C; in the process beefing up C a bit by affording flexibility with data types and structures.

What came out of it was one of the first OS kernels written in a language other than assembly, and a language that hewed close to assembly while offering high-level constructs associated with structured programming. This spelled both efficiency and portability: a sure winner in the programming world. In the decades that followed C became the predominant programming language.

AT&T licensed Unix to outside parties from the late 1970s, leading to numerous variants of the OS.

Finnish American programmer Linus Torvalds wrote Linux for the prevalent Intel x86 chip architecture beginning in 1991. He started the project out of frustration with licensing restrictions on the Unix variant (minix) available to him at the time. Linux became a popular operating system with everyone from hobbyists to the supercomputing community because it was distributed gratis. Linux was also adopted by corporations, including IBM and Hewlett-Packard.

Apple ported its Macintosh operating system to Unix in 2001, its GUI a gloss over a proven bedrock. Apple did this after dissatisfaction with a native OS rewrite from assembly to C++, an object-oriented descendant of C.

The computing world runs on 2 basic operating systems: Windows and Unix. Android, the mobile-device OS developed by Google, is based on the Linux kernel.

 Object Orientation

Object-oriented programming offers a sustainable way to write spaghetti code. It lets you accrete programs as a series of patches. ~ English programmer Paul Graham

In the early and mid-1960s, Norwegian software scientists Ole-Johan Dahl and Kristen Nygaard took Algol and gave it an object orientation. The result was Simula 67. As its name suggests, Simula was designed to run simulations.

 Concepts

Object-oriented programming (OOP) is a paradigm for developing software programs. The keystone of object orientation is encapsulation: making everything modular. In an object-oriented program, data is encapsulated in objects.

Objects are of a particular class. Each class typically defines 1 or more behaviors. An object is an instance of a class.

The term behavior has many synonyms: procedure call, routine, function, method, and message. They all amount to the same thing: code which does something with or to data; in the instance of OOP, data within an object.
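
A minimal C++ sketch, with an invented Account class, puts the three terms together: a class declares data and behaviors, an object is an instance of that class, and the behaviors are the only way to act on the encapsulated data.

    #include <cstdio>

    class Account {
    public:
        void deposit(int amount) { balance += amount; }          // a behavior
        void report() const { std::printf("balance = %d\n", balance); }
    private:
        int balance = 0;   // encapsulated: hidden from code outside the class
    };

    int main() {
        Account a;        // 'a' is an object: an instance of the class Account
        a.deposit(100);   // invoking a behavior on the object
        a.report();
        return 0;
    }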

A principal paradigm in object-oriented programming is inheritance: the ability of one class (a subclass) to employ (inherit) the behaviors of another class (the superclass of the subclass). The advantage of inheritance is its potential for reusability: what the goto statement aimed at, but without creating spaghetti code.

In OOP, base classes establish basic functionality, which subclasses inherit, refine, and specialize. For example, cWindow might provide the ability to display objects within an onscreen window. cWindow would be a superclass to cGrowWindow, a subclass that sports a grow behavior to enable resizing a window via a size box. cZoomWindow would then be a subclass of cGrowWindow, adding a zoom behavior that puts a zoom box icon on the window title bar which, when clicked, zooms the window to fill the screen. The zoom box acts as a toggle: if a window is already full screen, clicking the zoom control resizes the window back to the portion of the display it occupied before being zoomed.
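
The window hierarchy just described might be sketched as follows in C++, with the actual drawing and screen handling reduced to print statements; only the class and behavior names come from the example above, the rest is invented for illustration.

    #include <cstdio>

    class cWindow {
    public:
        virtual ~cWindow() = default;
        virtual void draw() { std::puts("draw the window and its contents"); }
    };

    class cGrowWindow : public cWindow {
    public:
        void grow(int w, int h) { std::printf("resize to %d x %d\n", w, h); }
    };

    class cZoomWindow : public cGrowWindow {
    public:
        void zoom() {   // toggle between full screen and the previous size
            zoomed = !zoomed;
            std::puts(zoomed ? "fill the screen" : "restore previous size");
        }
    private:
        bool zoomed = false;
    };

    int main() {
        cZoomWindow w;
        w.draw();           // inherited from cWindow
        w.grow(640, 480);   // inherited from cGrowWindow
        w.zoom();           // defined by cZoomWindow
        return 0;
    }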

Another important facet to object orientation is polymorphism: that different classes might have behaviors with the same name. A subclass may have a behavior that overrides a method defined in a superclass. (Polymorphism is sometimes inelegantly called function or method overloading.) Polymorphism simplifies the task of writing behaviors and takes advantage of the capability for inheritance. With polymorphism, generic behaviors may work on objects of any type. Polymorphism allows a generality in programming which cannot otherwise be achieved.

Operating systems present an application programming interface (API) that allows application software developers to connect with OS functionality through procedure call (behavior) names. Historically, this might involve many hundreds, if not thousands, of individual functions with different names. Polymorphism offers the potential to eliminate the inexorable clutter of an API presented by a structured programming language like C or Pascal.
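
A short C++ sketch, with invented class names, shows the payoff: because 2 classes share a behavior name, one generic draw routine serves them both, replacing a clutter of per-type API functions.

    #include <cstdio>

    class cShape {
    public:
        virtual ~cShape() = default;
        virtual void draw() const = 0;   // a behavior shared by name
    };

    class cCircle : public cShape {
    public:
        void draw() const override { std::puts("draw a circle"); }
    };

    class cText : public cShape {
    public:
        void draw() const override { std::puts("draw some text"); }
    };

    // One entry point instead of drawCircle(), drawText(), drawPicture(), ...
    void draw(const cShape& s) { s.draw(); }

    int main() {
        cCircle c;
        cText t;
        draw(c);   // the right behavior is chosen at run time
        draw(t);
        return 0;
    }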

○○○

The term object-oriented was introduced in the description of Smalltalk, an OOP language created by developers at PARC, particularly Alan Kay. With its English-like punctuation, Smalltalk was originally intended to be a language for “children of all ages.”

Smalltalk was more an environment than a language per se. As such, it did not make much of a commercial impact, but its concepts influenced many of the OOP languages that followed.

The problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle. ~ American programmer Joe Armstrong

OOP became a paradigmatic virus to programming languages: all became infected. Even Fortran was extended with OOP constructs. Most OOP extensions to languages were afterthoughts: providing some convenience, but not remedying the deficiencies inherent in the language with regard to code maintenance. Some maladapted OOP extensions, such as C++, created problems for programmers rather than offering solutions.

“50 years of programming language research, and we end up with C++?” ~ American computer scientist Richard O’Keefe

 C++

“I made up the term object-oriented, and I can tell you I did not have C++ in mind.” ~ Alan Kay

Danish programmer Bjarne Stroustrup encountered Simula 67 as a graduate student and liked it, but found its execution far too slow for practical use.

Stroustrup went to work for AT&T Bell Labs, where he started tinkering with adding OOP to C. By being there he gained the credibility needed to promote his C++ language. Just as C had been the programmer’s language of choice, C++ inherited its crown to an audience just being introduced to the OOP paradigm.

It was a gross miseducation. C++ injected complications without advantage.

There are only 2 things wrong with C++: the initial concept and the implementation. ~ French software scientist Bertrand Meyer

The potential for reusability that was the raison d’être for OOP became a chimera with C++. The highest compliment that can be paid to C++ is that it was chock full of missed opportunities.

The problem with using C++ is that there’s already a strong tendency in the language to require you to know everything before you can do anything. ~ American programmer Larry Wall

Even Stroustrup, the designer of C++, admits to a design deficit in the language.

Within C++, there is a much smaller and cleaner language struggling to get out. ~ Bjarne Stroustrup

C++ became an industry standard in an industry lacking intelligent discretion about the fundamental concepts of the business at hand. There is no greater statement of core incompetence in the software business than C++ becoming so popular.

“You’ve baked a really lovely cake, but then you’ve used dog shit for frosting.” ~ Steve Jobs

Several savvy C programmers thought of C++ as shambolic frosting on C’s scrumptious cake. This led to a handful of object-oriented C-based language variants. Almost all fell into oblivion, while the C++ hack job lived on. One example illustrates how C could be turned into an elegant OOP language.

 OOPC

Through clever use of macros and function calls, OOPC provides C programmers with a highly flexible, fully dynamic, object-oriented development system. OOPC’s object-oriented features read like a wish list. As a development toolkit, OOPC is a full-bodied powerhouse. ~ American programmer Mark Gerl

In 1992, a tiny company called Electron Mining (1991–1994) released OOPC: a revolutionary product that failed to spark a revolution. OOPC was a cross-platform software development kit that greatly accelerated application development by dint of the language in which the framework was written.

The initial release of OOPC was for the Macintosh. Development for a Windows version began, but the company went under and the Windows version was never released.

OOPC was an object-oriented extension to the C language inspired by CLOS. CLOS – the Common Lisp Object System – was the object-oriented extension to Lisp.

Lisp was specified in 1958 as a practical mathematical notation for computer programs. Among high-level languages, only Fortran is older (1957).

Lisp isn’t a language; it’s a building material. ~ Alan Kay

Lisp quickly became the favored language for artificial intelligence (AI) research, and pioneered many ideas, including tree structures, automatic storage management, and recursion (the ability of a behavior to call itself).

Lisp has jokingly been called “the most intelligent way to misuse a computer.” I think that description is a great compliment because it transmits the full flavor of liberation: it has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts. ~ Edsger Dijkstra

Common Lisp got its start in 1981. A syntax extension to Lisp, CLOS offered great flexibility with data structures and manipulation. Behaviors could be treated as objects.

OOPC was CLOS in C clothing, without the dross of C++. Further, with its inherent simplicity, OOPC offered elasticity without the syntactic complexity of Lisp.

OOPC began in 1987 as a language suited for AI programming. Those inclinations were accommodated by its open-ended nature. The distinction between classes and objects was largely arbitrary, as classes might have their own data, and objects could have their own behaviors.

Further, OOPC offered a dynamic flexibility unheard-of in other C-based OOP languages. The behaviors of classes or objects could be changed as a program ran.

Multiple inheritance is the concept of a class or object inheriting methods from multiple superclasses. Multiple inheritance was accommodated in OOPC by simply listing sequentially the priority order of inheritance. In contrast, C++ made multiple inheritance an invitation to confusion, giving this extremely helpful ability a bad reputation.
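
The sort of confusion C++ invites can be seen in a small sketch with invented class names: when 2 superclasses supply a behavior with the same name, the compiler refuses to choose, and the caller must spell out which parent is meant.

    #include <cstdio>

    class Printer {
    public:
        void describe() const { std::puts("I print"); }
    };

    class Scanner {
    public:
        void describe() const { std::puts("I scan"); }
    };

    // A copier is both a printer and a scanner.
    class Copier : public Printer, public Scanner {};

    int main() {
        Copier c;
        // c.describe();            // error: ambiguous - which describe()?
        c.Printer::describe();      // the caller must name the parent class
        c.Scanner::describe();
        return 0;
    }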

Overriding a superclass behavior in C++ meant that the subclass method lost the functionality that the superclass offered. This badly complicated code reuse.

With OOPC, simply setting a flag let a subclass behavior override, precede, or follow superclass functionality: hence subclasses could judiciously specialize or augment behaviors.

By employing a strict polymorphism, OOPC allowed the full benefit of inheritance without the disadvantages inherent in C++. For example, to display any object within a window, all a programmer had to do was write draw(object); this greatly simplified the API. But even that was unnecessary, as the built-in OOPC library provided a functioning document-based application in only 2 simple lines of code.

That economy freed a program developer to focus solely on the features needed for the specific application. It was easy to create robust programs in OOPC because defenses were built in to preclude bugs (software defects).

A common error in software to this day is reading an errant memory location by indexing an array beyond its bounds. Worse is corrupting memory by writing to such an out-of-bounds location.

C and C++ programs can easily read/write memory out of bounds. Using an OOPC array object (or any object that inherited array functionality), that error was impossible.
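
A brief C++ sketch contrasts the 2 situations: an access past the end of a raw array compiles and silently reads whatever happens to be in memory (undefined behavior), while a bounds-checked container rejects the same request outright, in the spirit of what OOPC’s array objects did automatically.

    #include <cstdio>
    #include <vector>
    #include <stdexcept>

    int main() {
        int raw[4] = {1, 2, 3, 4};
        int i = 7;   // past the end of the 4-element array
        // Compiles and runs, but reads whatever lies beyond the array:
        // undefined behavior, and a classic source of corruption.
        std::printf("raw[%d] = %d\n", i, raw[i]);

        // A bounds-checked container can refuse the same request.
        std::vector<int> checked = {1, 2, 3, 4};
        try {
            std::printf("%d\n", checked.at(i));
        } catch (const std::out_of_range&) {
            std::printf("index %d rejected: out of bounds\n", i);
        }
        return 0;
    }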

Another ubiquitous software problem involves memory leaks: using memory temporarily, but never releasing it so that the memory can be reused. Microsoft Windows and its Office suite of applications have historically been sieves of memory leaks – a problem that has only lessened somewhat, not been eliminated, in Microsoft software.
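
The leak pattern is easy to sketch in C++ (the Report type is invented): memory allocated but never released piles up with every call, whereas an owning smart pointer releases its memory automatically when it goes out of scope; garbage collection achieves the same end by reclaiming unreachable objects.

    #include <cstdio>
    #include <memory>

    struct Report {
        int pages = 0;
    };

    // Allocates a Report but never releases it: every call leaks memory.
    void leaky() {
        Report* r = new Report;
        r->pages = 3;
    }

    // The owning smart pointer frees the Report automatically when it goes
    // out of scope, so nothing is retained across calls.
    void tidy() {
        auto r = std::make_unique<Report>();
        r->pages = 3;
    }

    int main() {
        for (int i = 0; i < 1000; i++) {
            leaky();   // 1,000 stranded Report objects by loop's end
            tidy();
        }
        std::puts("done");
        return 0;
    }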

OOPC had automatic garbage collection that precluded memory leaks. Further, OOPC’s memory management worked many orders of magnitude faster than either the Macintosh or Windows OS ever have, as the programmers at those companies never bothered to develop an efficient algorithm for object-oriented memory management, which invariably involves a multitude of memory allocations in small chunks.

Apple Computer learned of OOPC at its developer trade show in 1992. Rather than express interest in the technology, which was far superior to its own, the company simply banned Electron Mining from having a booth at future Apple developer conferences. Apple offered an ersatz competing product in Pascal called MacApp.

Electron Mining knew it was in trouble when a matter-of-fact article describing OOPC in MacTech, the Macintosh developers’ magazine, was met by readers with disbelief that such programming capability was even possible.

Microsoft was given a presentation of OOPC technology in 1993, to a room of uncomprehending engineers. These muddled men were unable to grasp the elegance of OOPC because they had been brainwashed by C++ constructs, the language which Microsoft had already adopted as its standard. (Even now, most of the problems with Microsoft’s Windows OS can be attributed to it being coded in C++, as well as the skill of the programmers involved.)

In the 1st decade of the 21st century, OOPC’s developer advanced the technology under the name of Silon. Silon solved many of the problems associated with software today, including providing a robust OS, impervious to malware; a consistent GUI among all applications; rapid application development through effortless code reuse; and an end to spam email. These are desired goals which no operating system vendor – Microsoft, Apple, and Google, most notably – has even come close to solving.

Microsoft, Apple, IBM, Google, Amazon, and Sony, among other companies, were offered Silon technology. But none of these arrogant corporations had the good sense to take an interest. Belying their commercial success, the corporate cream of the software crop lack core competency at the technical level.

○○○

The inflexibility and inconsistent quality in software products indicates an industry struggling with technical faculty. As software is entirely a construct of the mind, the implication is obvious.

A programming language has a direct effect on programmer productivity and software quality. The robustness of code and the ease of its reuse are indicators of language quality.

By their nature, object-oriented languages offer the promise of an unparalleled potential for code reuse. That is precisely why OOP overlays were ubiquitously made on older languages.

But OOP does not necessarily facilitate code reuse. C++ is exemplary. Experienced C++ programmers consider code reuse mostly hype. Yet, because of its implementation of polymorphism, OOPC made code reuse trivial if not downright automatic.

Language libraries, which are ultimately dependent upon the chosen language’s capabilities, also hold great potential for rapid development of sturdy software: that is, being able to quickly program without creating bugs. The quirkiness of many software products shows that this remains a problem within the industry.

 Software Swell

640 k ought to be enough for anybody. ~ Bill Gates in 1981

Early microcomputers had a paucity of memory. Bitmapped GUIs became feasible as RAM grew less expensive. The original 1984 Macintosh had 128 kilobytes.

The Macintosh II, introduced in March 1987, was the first Macintosh to support color. Color requires greater display memory capacity. The standard memory complement for the Mac II was 1 MB, expandable to 8 MB.

Since then, RAM capacities have soared as the cost of memory manufacture has plummeted. Low-end personal computers now have at least 4 GB.

The memory use of applications has also ballooned. The simplest program now takes many MB, and everyday office applications can require over 100 MB to run.

Thanks to the plethora of scripts attached to web pages nowadays, web browsers chew through hundreds of megabytes with just a few pages showing.

Further, despite thousand-fold increases in CPU processing power since the late 1980s, personal computer program execution speed is only modestly faster. The hardware giveth and the software taketh away. The reasons are 2-fold, with an ultimate singular cause.

1st is the fact that the OS does not share all that it has, and the application programming interface to do so can be bothersome. As code reuse is limited, applications must duplicate code that is within an operating system. Every application that does significant word processing has its own text engine, as does the OS; likewise for graphics processing.

2nd is the sloppiness with which modern compilers translate programmer-written source code to machine-executable instructions. C produces tight executable code, as did OOPC. C++ does not.

C++ and other languages do nothing to engender code sharing between an OS and applications. In contrast, the Silon technology developed by Electron Mining using OOPC makes an operating system a full-fledged platform for application programs: a programmer need only add the incremental functionality that makes an application unique.

Software owes its bloat to the language used to write it.

○○○

Software is a great combination between artistry and engineering. ~ Bill Gates, who thought Basic was a great language before he had Microsoft adopt C++

Microsoft Windows and Office products, programmed in C++, show a visible accretion of features via inconsistencies in the user interface. Many old bugs remain unfixed. These problems owe considerably to the language employed, which makes code maintenance difficult. As Microsoft proves in spades, object orientation is no panacea if done poorly.

The Internet

A network that’s going to change mankind. ~ American Internet pioneer Steve Crocker

On 4 October 1957, the Soviet Union announced it had launched an artificial satellite into space. This instantly shattered the sense of technical superiority the US had felt until that day.

In response, President Eisenhower announced on 7 January 1958 a new organization – ARPA – tasked with overseeing and advancing the government’s technology work. (ARPA is an acronym for Advanced Research Projects Agency.) The top priority was winning the “space race.”

In 1966, Bob Taylor was in charge of ARPA’s computer projects. Taylor told his boss: “we’ve got a problem. We’re throwing money away. We’re paying different people all over the USA to do exactly the same work.”

Taylor’s solution to the problem was “to build a network of computers.” On 29 October 1969, the first message was sent over Arpanet. The computer that sent the message crashed after transmitting only the first 2 letters of the word login.

Arpanet was declared operational 6 years later (1975). In 1977, an experimental Internet started.

By that time, computer networks of all sorts had proliferated. Their interconnection became possible only by using the same protocol (tcp/ip), developed under government guidance. On 1 January 1983, Arpanet was split in 2: the military milnet and the civilian Internet.

The Internet did nothing but grow, covering the globe in computer interconnectedness. During the 2010s, the Internet grew so economically important as to become a focus of global investment, while monies to build facilities not associated with computing lagged.

As of 2017, nearly half of the world’s population uses the Internet. Unsurprisingly, penetration is lowest in Africa, where only 30% of the people have Internet access.

 World Wide Web

Information on the Internet is subject to the same rules and regulations as conversation at a bar. ~ American pathologist George Lundberg

The most-trafficked part of the Internet is the World Wide Web, an open information space where documents and other resources are accessed.

The Web was conceived by English software scientist Tim Berners-Lee in 1989. He wrote the first web browser in 1990, 9 years after he first proposed a hypertext document system.

In all that time, neither Berners-Lee nor any of the many pioneers aware of his early web work in the early 1990s was savvy enough to figure out that a document-based system should have a decent page description language. What should have been obvious wasn’t. Berners-Lee’s feeble thought process never made it past working “toward a universal linked information system, in which generality and portability are more important than fancy graphics techniques and complex extra facilities.”

So, what was foisted upon the world was html: a specification for network-based links coupled to a simple system of tags for notating paragraphs, headings, and list items – end of story. (html = HyperText Mark-up Language.) It was not until 1995 that html tags for rudimentary font styles were added, and later still that tables were considered.

Berners-Lee had many good examples of how a page-based document system ought to be, but he lacked vision and technical expertise. It wasn’t until December 1992, when American programmer Marc Andreessen insisted upon having an image tag, that browser support for graphics was even considered. Other early Web enthusiasts were not at all keen on the idea of supporting graphics.

◊ ◊ ◊

The Internet? We are not interested in it. ~ Bill Gates in 1993

In 1993, no large computer companies were interested in the Internet. They remained unconvinced that the Internet would be a success; seeing it instead as an academic project. Which it was at the moment. But good ideas, however gimped the implementation, grow legs that take them through time. So it was with the Web.

I see little commercial potential for the Internet for at least 10 years. ~ Bill Gates in October 1994

A few moons later, Gates was singing a different tune.

The Internet is crucial to every part of our business. ~ Bill Gates in May 1995

Consensus on turning html into a page-description language was stymied by the stupidity of the Web’s early developers. It never really happened. Instead, the door was opened for scripts to be run from web pages in 1996. A cacophony of scripting languages sprang up to cover web page design deficiencies that should have been obvious from the get-go. Now it is common for complex web pages not to load correctly because of wonky scripts.

Scripts afford surreptitious surveillance and malware. It was imbecility to allow scurrilous scripting to substitute for a decent description language as the basis for Web pages.

 Adobe Flash

Internet video went to the whims of the network effect: the players most commonly used determined the formats most often employed. Flash won by dint of being first.

Flash was an animation player that Macromedia came out with in late 1996. 9 years later, Macromedia was gobbled up by Adobe.

Adobe had extensive experience with software standards, having tried to have its cake and eat it too by promoting supposedly open standard formats while having them work well only via their own proprietary software. This happened with fonts, then documents: the now universal pdf (portable document format). So too with Adobe Flash.

Adobe’s handling of its Flash player was irresponsible. Adobe never wrung the bugs out, and it let Flash act as a conduit for malware. Apple Computer was so disgusted as to not even allow Flash on its mobile devices.

Flash is closed and proprietary, has major technical drawbacks, and doesn’t support touch-based devices. Adobe has been painfully slow to adopt enhancements. ~ Steve Jobs in April 2010

Jobs has hit the nail on the head when describing the problems with Adobe, but not until after smashing his own thumb. Every criticism he makes of Adobe’s proprietary approach applies equally to Apple. ~ American programmer John Sullivan

In the 2010s, web browser makers gradually phased out support for Flash. Adobe will abandon Flash in 2020.

More generally, Adobe has long been criticized for its price-gouging practices, poor software quality, and even spying on the customers of its products. Even the company’s flagship product, Acrobat, remains riddled with bugs.

Through carelessness, the company suffered a grievous security breach in 2013 that affected 152 million users. Adobe initially admitted to 2.9 million, then later confessed to 38 million users being affected.

 Security

The men who created the Internet thought they were building a classroom, and it turned into a bank. ~ American historian Janet Abbate

It’s not that we didn’t think about security. We knew that there were untrustworthy people out there, and we thought we could exclude them. ~ American Internet pioneer David Clark on the wishful thinking of Internet pioneers

No innate security was envisioned by the designers of the Internet or World Wide Web. This folly allowed the Internet to repeatedly fall to its knees before malicious intent, providing the means for spreading malware worldwide.

We could have done more, and most of what we did was in response to issues as opposed to in anticipation of issues. ~ Steve Crocker

We didn’t focus on how you could wreck this system intentionally. You could argue with hindsight that we should have. ~ American Internet pioneer Vint Cerf

That’s a perfect formula for the dark side. ~ American Internet pioneer Leonard Kleinrock on the disregard of security concerns when the Internet was evolving during the 1970s and 1980s

Software security slowly emerged ad hoc, using clumsy schemes to encrypt email, and later, encrypting messaging traffic on the Internet.

Secure encryption has been technically feasible for decades and can be done without affecting software usability. Instead, it has been all too common for encryption software to have a “back door”: a way to break the supposed security. A back door may be built in by design, the result of programmer error, created by unauthorized tinkering, or some combination of the 3. The presence of back doors severely weakens encryption.

Damn. I thought I had fixed that bug. ~ American programmer at a leading software company on learning of a flaw in software he had written, exploited by the Morris worm of November 1988: the first Internet security breach to gain mainstream media attention

A worm is standalone malware that replicates itself to spread to other computers. The Morris worm was a cagey bit of code stupidly written by American programmer Robert Morris, then a graduate student at Cornell, who released his handiwork from MIT. Morris wrote the worm as a means of discovering the number of computers connected to the Internet. Instead, the worm wreaked havoc on the computers it infected, crashing thousands of machines and causing millions of dollars in damage.

The fundamental problem is that security is always difficult, and people always say, ‘Oh, we can tackle it later,’ or ‘We can add it on later.’ But you can’t add it on later. You can’t add security to something that wasn’t designed to be secure. ~ American software scientist Peter Neumann, who has been chronicling Internet security threats since 1985

Early Internet attacks – and there were many – were met with handwringing, and with the rise of private software security companies that exploited fear while often failing to provide adequate protection.

Apple wants to pretend that everything is magic. They need to admit that their products can be used by bad people to do bad things. ~ American software security specialist Alex Stamos

Operating system companies and governments worldwide did next to nothing to counter the continuing threat. OS companies belatedly provided lackluster security to their customers: offerings bested by 3rd parties.

Meanwhile, government intelligence agencies insistently demanded back-door access to private communications, thus crippling Internet security. The US government, which essentially sponsored the Internet, has been persistently resistant to making decent encryption publicly available.

Much of the business of the Internet is predicated on insecurity. ‘Surveillance capitalism’ – the collection of user data and its sale to advertisers and others – depends on vulnerable Internet practices, as does intelligence collection for national security and law enforcement. ~ American physicist Steven Aftergood

Internet data breaches are a regular event. Robbers are now able to steal from banks without getting near a bank building.

While the bank’s IT staff is scrambling to keep its servers online and running, criminals are transferring money from users’ accounts. ~ Slovenian software security specialist Mitja Kolsek

All told, cybercrime costs the global economy at least $500 billion each year; all because software developers were not smart enough to anticipate an obvious problem, nor to take effective steps to thwart malfeasance. A third of Internet sites worldwide are under attack at any time.

Half of all Americans are backing away from the Internet due to fears regarding security and privacy. ~ American cybersecurity researcher Dan Kaminsky in 2016

Lack of security is an excellent reason to treat Internet access like kissing something diseased. A recent malware trend is crypto-ransomware, which encrypts all the data files on a user’s computer, making them inaccessible. Once a machine is infected, the malware displays a screen demanding a ransom, which typically runs hundreds of dollars. If the victims don’t pay up in time, the files are destroyed. In 2016, crypto-ransoming accounted for nearly 60% of all infections.

Over the last few years, attackers realized that instead of going through these elaborate hacks – phishing for passwords, breaking into accounts, stealing information, and then selling the data on the Internet’s black market for pennies per record – they could simply target individuals and businesses and treat them like an ATM. ~ American cybersecurity researcher Brian Beyer in 2016

From mid-October 2016, web sites around the world experienced outages as hackers harnessed Internet-attached appliances to assault Internet infrastructure. Security researchers had long warned that hooking devices to the Internet – the so-called Internet of Things – would create a serious security threat; but device manufacturers around the world did not bother installing any security precautions.

◊ ◊ ◊

We’ve ended up at this place of security through individual vigilance. It’s kind of like safe sex. It’s sort of “the Internet is this risky activity, and it’s up to each person to protect themselves from what’s out there.” There’s this sense that the Internet provider’s not going to protect you. The government’s not going to protect you. It’s up to you to protect yourself. ~ Janet Abbate

Even governments regularly have sensitive data stolen from them. The Chinese and Russians are quite adept at snitching data from the hapless American government.

“We are living in the dark ages of cyber.” ~ Russian cybersecurity specialist Eugene Kaspersky

○○○

“The web is more a social creation than a technical one. What we believe, endorse, agree with, and depend on is representable and, increasingly, represented on the Web. We all have to ensure that the society we build with the Web is the sort we intend.” ~ Tim Berners-Lee

Despite technical shortcomings, the World Wide Web became the information conduit for the global community. Its development exemplifies how failure to think issues through, incompetent design, and human inability to intelligently cooperate can create a worldwide information mess: a cerebral analogue of the physical dystopia which humans created with a greed-based economic system.

“The thing about Web companies is there’s always something severely fucked-up. You work with these clugey internal tools and patch together work-arounds to compensate for the half-assed, rushed development, and after a while the fucked-upness of the whole enterprise becomes the status quo. The whole Web was built by virtue of developers fixing one mistake after another, constantly forced to compensate for the bugginess of their code.” ~ American writer/ranter Ryan Boudinot

Documentation

In the 1990s, software products came with printed manuals. Many also came with help files accessible through the program. There were also books that extensively described the features of the most popular software.

By 2010, printed software documentation had gone the way of the dinosaur: extinct. That left typically measly help files and online support sites, including forums where finding anything was often an exercise in frustration, as search engines seemed incapable of properly pruning results.

Microsoft has been notably remiss in not providing adequate documentation. Most feature requests for its Microsoft Office suite of programs are for features that already exist: users simply cannot find them.

A dearth of documentation leads to reliance on technical support, which often resembles entering a lower level of Hell.

Technical Support

Technical support for computer equipment and software is notorious for being infuriating. This is by design.

Computer service managers are fully aware that customer dissatisfaction is high. 74% admit that their company procedures prevent satisfactory experiences.

Don’t think companies haven’t studied how far they can take things in providing the minimal level of service. Some organizations have even monetized it by intentionally engineering it so you have to wait an hour at least to speak to someone in support, and while you are on hold, you’re hearing messages like, “If you’d like premium support, call this number and for a fee, we will get to you immediately.” ~ American software support specialist Justin Robbins

Cable and mobile service providers, which are regional monopolies or have little competition, tend to be the most egregious offenders.

Conversely, the large companies rated as having the best support charge more to begin with, so the cost of service is baked in; such is the case with Amazon Prime subscription service. Companies in competitive markets, and hungry upstarts trying to break into a segment with dominant companies, may distinguish themselves with decent service.

Faith in Technology

Since industrialization, technological advances have transformed most aspects of daily life. The footprints of technology are pervasive. Constantly advertised as “progress,” technology is generally viewed by people in a positive light.

“People may not understand how a particular technology works, but they do assume that it will work. People unconsciously associate technology with success.” ~ American business management professor Chris Robert

The more exotic a technology is, the greater its appeal. Less faith is put in familiar technologies, as their limits have already been exposed.

“People have more confidence that unfamiliar technologies will provide solutions to a range of problems. People put new technology in a category of ‘great things that work which I love but don’t understand’.” ~ Chris Robert

Used to the conveniences of technological appliances and inured to living in complex societies, people maintain optimism, with ignorance mounting no obstacle. The mind resigns itself to what it cannot alter.

Faith in technology is false. As this book patiently explains, the arc of technology has wrought conveniences of all sorts at the consequential cost of degrading the natural environment, undermining societal cohesion, and damaging health.

In the heedless way it was accomplished, industrialization has ensured our own demise at an astonishingly accelerated pace, along with that of much other life. Replacing crafts with manufactured goods may have made goods cheaper, but it also cheapened life. Modern processed foods may be convenient, but they are not nourishing. Those now-ubiquitous handheld phones and tablets are a convenience which has drained mental acuity, rendering youth stupider and more infantile than they might otherwise be.

In short, technology has been a bane disguised as a boon. Health, the enjoyment of life, and life expectancy do not correlate to technological state. Success in living is instead an outcome of luck, skill, and discipline that has little to do with technology. Societally, technology has been most ‘successfully’ applied to man’s worst inclinations, thereby amplifying with ease what was otherwise difficult to accomplish: pollution, inequity, and murder on a mass scale.

“Our scientific power has outrun our spiritual power. We have guided missiles and misguided men.” ~ American Baptist minister and civil rights leader Martin Luther King Jr.