Energy was critical to industrializing economies. Waterpower was limited. Wood was never a very efficient fuel, as it gives off comparatively little heat: not enough to melt metals. In comparison, charcoal gives off twice as much heat as wood when burned. That left fossil fuels, which were abundant.
“Humanity is rapidly extracting and burning fossil fuels without full understanding of the consequences.” ~ American climatologist James Hansen
Fossil fuels are ancient organic matter protractedly pressed and heated via subterranean processes into various forms. Whereas coal comprises dead plants pressed into rock resemblance, petroleum originates from archaic algae and zooplankton, oozed into a viscous brew.
Coal came from the demise of mighty trees during the aptly named Carboniferous period (359–299 MYA). It was an oxygen-rich time when insects grew to horror-movie size: foot-long cockroaches, dragonflies with wingspans approaching a meter, and gigantic spiders.
When the trees toppled, the stagnant swamps in which they grew protected them from rotting. Over millions of years the erstwhile trees dried, were compressed, and naturally cooked into coal.
Making crude oil is a more delicate process than crunching trees into coal. The dead bodies of tiny sea creatures had to have been trapped, preserved, and cooked at precisely the right pressures and temperatures. If the process went awry, the oil broke down into methane, also known as natural gas. Crude petroleum consists of several chemical components called fractions, which make for distinct products.
These fossils lay near the surface and within Earth’s crust for hundreds of millions of years; decayed organic remains transformed by pressure and heat into combustible hydrocarbons with rich energy content. Tapping that energy incurs considerable externality in polluting the air with particulates and greenhouse gases.
The hydrogen content of fossil fuels provides most of the energy. Much of the once-trapped carbon is released into the atmosphere, where it acts as a warming agent.
Coal was the calling card of early industrialization: replacing wood, charcoal, wind, and water. At the onset of the 20th century, coal dominated energy supply. As the century wore on, other fossil fuels, particularly petroleum and natural gas, came to the fore.
In 1928, coal still accounted for 75% of world energy production. Petroleum was 17%, while water was 8%.
In 1950, coal provided 50%, with petroleum and natural gas 30%. By 1980, those percentages had more than reversed. Nonetheless, in several countries, including the US and China, coal continues to play a critical role in energy supply, particularly in electric power generation. China is the world’s largest producer and consumer of coal, accounting for 25% of the world’s total.
Coal-burning power plants invariably spew massive quantities of greenhouse gases and other pollutants into the air. The most ambitious effort to develop “clean coal” technology was at Southern Company: the electric-power provider for much of the southern US.
The company began construction on a prototype “clean coal” plant in Kemper county, Mississippi in 2010. The Kemper project was to be completed by May 2014, at a cost of $2.4 billion. By mid-2018, the plant was still not in service, after nearly $8 billion had been sunk into it: the most expensive power plant in history. The Kemper design has what proved to be an insurmountable problem: it requires twice the predicted maintenance downtime (50% versus 25%).
The company hid known problems from regulators for years, deceptively touting progress while mired in intractable difficulties. Having received government subsidies, the company decided upon the stratagem of denial. Despite the revelation of deception, as of mid-2018, the government has not asked for any money back. After all, doing so would be an admission to taxpayers that oversight was lacking – so, denial all around.
In the 2nd decade of the 21st century, cheap oil from fracking sent coal prices plummeting. As US coal companies collapsed into bankruptcy, they left billions of dollars in mandatory cleanup for taxpayers to pay; this after decades of subsidizing the coal industry to massively pollute in the first place.
“These groups collectively are the heart and soul of climate denial.” ~ American environmentalist Kert Davies
Before their decline, America’s largest coal companies spent millions funding dozens of groups that cast doubt on man-made climate change and opposed environmental regulations. This was on top of funding politicians favorable to their cause (mostly Republicans).
As a secondary energy source, electricity became essential to industrialization, as it afforded precise power application.
“Electricity is really just organized lightning.” ~ American comedian George Carlin
Electrical phenomena were appreciated in prehistory, but as late as the 18th century, electricity was nothing but a curiosity. Toward the end of that century, researchers flipped the switch of electricity’s status from parlor trick to scientific pursuit. While there was rapid progress in electrical science in the early 1800s, it was not until the late 19th century that scientific understanding began to have an economic impact.
American inventor Samuel Morse developed the electric telegraph 1832–1844, following on work by English scientist Michael Faraday, who discovered electromagnetic induction and other electrically related phenomena. Faraday’s work formed the foundation for electric motor technology and led to generating electricity on an industrial scale.
Replicating lightning on a small scale entranced many researchers. In 1800, Italian physicist and chemist Alessandro Volta invented the battery and made a wire glow.
British physicist and chemist Joseph Swan invented the incandescent light bulb, which he first demonstrated in 1878. American inventor Thomas Edison claimed credit the next year for inventing a practical light bulb, based upon work by his research team at Menlo Park.
German inventor and industrialist Werner von Siemens had decades earlier improved the telegraph with a pointer design that indicated letters rather than requiring Morse code interpretation, and in the mid-1870s invented the moving-coil transducer, which led to the loudspeaker. In 1879 Siemens demonstrated the first electric railway; in 1880 he invented the electric elevator, and the next year came up with the streetcar.
In 1882, Edison had an electrical generating station working in lower Manhattan. Streetlights sprouted there. But these small power stations, such as Edison’s DC system, could only distribute electricity short distances without severe power loss.
American entrepreneur and engineer George Westinghouse invented the rotary steam engine when he was 19 years old. It was the first of many innovations he made, several in railway technology.
Westinghouse became interested in electrical power distribution. He investigated Edison’s scheme, but decided that it could not be scaled up.
Westinghouse went to work developing a practical AC electric power generation and distribution system. He began by importing the best technology at the time, including a Siemens AC generator, then went about making improvements as needed. In 1888, Tesla invented a practical multi-phase AC induction motor and transformer, which Westinghouse licensed.
Westinghouse built a few test generating systems. The first was in Great Barrington, Massachusetts in 1886. His systems used high-voltage distribution (3,000 volts), stepped down to 100 volts for lights.
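The step-down described above follows the ideal-transformer relation: output voltage scales with the ratio of secondary to primary windings, so 3,000 volts down to 100 volts implies a 30:1 turns ratio. A minimal sketch (the winding counts are hypothetical, chosen only to give that ratio):

```python
def secondary_voltage(v_primary: float, n_primary: int, n_secondary: int) -> float:
    """Ideal (lossless) transformer: Vs = Vp * Ns / Np."""
    return v_primary * n_secondary / n_primary

# Hypothetical winding counts giving the 30:1 step-down of the early systems.
print(secondary_voltage(3000, 3000, 100))  # 100.0 volts for the lights
```

The same relation run in reverse (more secondary turns than primary) is what steps generator output up to transmission voltage.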
Westinghouse’s promotion of AC power led him into bitter confrontation with Edison, who favored DC. The feud became known as the “War of Currents.”
Edison claimed that high-voltage AC systems were inherently dangerous. In 1903, Edison’s film company recorded the electrocution of the elephant Topsy – footage long cited as Edison making his point, though the killing was arranged by the amusement park that owned her – never bothering to note that high-voltage DC could have done the same.
Edison pushed the absurdity of the war to an apex with another execution. Edison, who ostensibly opposed capital punishment, to humans at least, set out to discredit AC power by secretly paying for the first electric chair to be built for the state of New York. Edison told the state board that alternating current was so deadly that it could instantly kill.
30-year-old convicted murderer William Kemmler was the first to be executed by electrocution. Westinghouse had hired a top-drawer lawyer to defend Kemmler. The lawyer condemned electrocution as a “cruel and unusual punishment;” but the powerful tycoon J.P. Morgan, who was backing Edison, had more pull with the court.
In August 1890, Kemmler was shocked for 17 seconds and still survived. Jailers were horrified but turned the juice back on until the convict expired. Edison lied and Kemmler was fried until he died.
The electric chair went on to become a common American execution method for decades, even as its torturous nature was apparent. Despite his success with barbarity, Edison was unable to popularize his pet slogan: “to be Westinghoused.”
Edison – a vicious, unscrupulous, sharp-tongued man – became celebrated as a “wizard.” In contrast, the fair-minded Westinghouse, comparably inventive and of upstanding character, but lacking Edison’s taste for grandstanding, garnered less renown.
By the end of the century Westinghouse had demonstrated a working hydroelectric station at Niagara Falls. The power commission there had rejected Edison’s proposal. General Electric, which absorbed Edison’s electric company, completed a 2nd AC station in 1904.
Though AC won the War of Currents worldwide, DC power continued in some cities, including Stockholm and parts of Boston, Massachusetts, into the 1960s. Consolidated Edison did not eliminate DC power provision in New York City until 2007.
Generation & Distribution
Electricity is typically generated via turbines driven by fluid flow: water, steam, or gas. Commonly, coal or petroleum is combusted to boil water. Pressurized steam spins turbine blades, which generate electricity through electromagnetic induction: producing a potential difference – voltage – across a conductor exposed to a varying magnetic field.
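The induction principle can be made concrete with Faraday’s law: a coil of N turns and area A spinning at angular speed ω in a magnetic field B produces an alternating voltage peaking at N·B·A·ω. A minimal sketch with illustrative (not historical) values:

```python
import math

def peak_emf(turns: int, field_tesla: float, area_m2: float, freq_hz: float) -> float:
    """Peak voltage of an idealized rotating-coil generator: N * B * A * omega."""
    omega = 2 * math.pi * freq_hz  # angular speed in rad/s
    return turns * field_tesla * area_m2 * omega

# Illustrative values: 100-turn coil, 0.5 T field, 0.1 m^2 area, spun at 60 Hz.
print(round(peak_emf(100, 0.5, 0.1, 60)))  # 1885 volts, peak
```

Spinning the coil faster or strengthening the field raises the voltage in direct proportion, which is why turbine speed must be tightly governed to hold grid frequency.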
Wind or falling water can also power turbines, as can pressurized gas from combusting natural gas or other light fossil fuel. Huge mirrors may be used to focus the sun’s heat, thereby generating steam. Via a controlled fission chain reaction, nuclear plants can also generate steam, as well as create waste that is incredibly difficult and expensive to dispose of.
Any engine that can power a rotating wheel can be used to generate electricity. Reciprocating (piston) engines, such as those used in automobiles, may be employed for power generation.
Electricity can also be generated more immediately: by converting sunlight directly into electric current. Solar panels perform this trick.
While sunshine is free, photovoltaic cells are relatively expensive, owing largely to their conversion inefficiency. But photovoltaic technology continues to advance and will increasingly be deployed in the sunnier parts of the globe, especially in developing countries which lack inexpensive access to fossil fuels.
Commercially, the fuel used to generate electricity is determined by load characteristics. Base load is the level of power always needed. As electric utilities always face a large base load, the cheapest fuel is preferred to meet that load. Historically, this has been coal; then came a day favoring nuclear fission.
“Our children will enjoy in their homes electrical energy too cheap to meter.” ~ Lewis Strauss, Chairman of the US Atomic Energy Commission, in 1954
The devastating intensity of nuclear reactions was understood from striving to make horrendous bombs during the 2nd World War. The potent sway of nuclear fission was then applied to boiling water – a humbling comedown from being Hell incarnate but offering the prospect of steady employment.
The Soviet Union connected the first nuclear power plant to an electricity grid, at Obninsk in 1954, followed by the British in 1956, and the Americans in 1957.
As so little raw material was needed for fuel, there was tremendous optimism from the late 1950s to the end of the 1970s that nuclear power would offer salvation from man’s unquenchable thirst for energy, which before could only be slaked by burning prodigious quantities of fossil fuels.
Nuclear power plants bloomed globally in the 1960s and 1970s. In 1973 the US had 42 reactors; by 1990 there were 112. In 2013, 432 nuclear plants were operating worldwide.
France gets 75% of its electricity from water heated by fissioning uranium. France made its decision about electricity generation in the 1970s. The country has quite limited fossil fuel resources and scant uranium. But former French colonial countries in Africa had uranium, and it was readily available on the world market; so the French government ambitiously built reactors.
Both coal and uranium are suitable fuels for base-load generation. The mining of either invariably entails grotesque pollution on a horrendous scale.
Using coal involves burning dangerously dusty lumps which horrifically pollute the air. In contrast, handling radioactive uranium requires extensive safeguards, but nuclear power generation is almost emission-free.
The sanguine scenario of atomic power to the rescue was dashed as nuclear plant construction suffered stunning cost overruns. The promise of nuclear power was exhausted by insufficient technical skill, compounded by the dilemma – which should never have been unexpected – of storing spent fuel.
Nuclear power’s problems begin with lack of foresight and planning. Owing to uranium’s toxicity by proximity, atomic power is a much more complex undertaking than other electrical generation. Compared to dirty coal, nuclear energy is a high-risk venture which pays off only if you disregard sound economics and ignore radiation, both of which are admittedly invisible.
Power generation is the most capital-intensive enterprise on the planet. To make a bad situation worse, a nuclear power plant can easily cost over twice as much to build per kW output as a comparable coal-fired plant.
Ignoring subsidies, which are considerable, constructing a 1,000 MW nuclear plant runs to $8 billion (2010 dollars) if construction goes according to plan, which it never does; and that cost does not capture the complete life-cycle cost of operating and decommissioning a nuclear plant and safely storing the waste.
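As a back-of-envelope check on those figures, $8 billion for 1,000 MW works out to $8,000 per kW of capacity; at over twice coal’s per-kW cost, a comparable coal plant would come in under half that:

```python
# Back-of-envelope construction cost per kW of capacity.
nuclear_cost_usd = 8e9          # $8 billion (2010 dollars), per the text
capacity_kw = 1000 * 1000       # a 1,000 MW plant expressed in kW

cost_per_kw = nuclear_cost_usd / capacity_kw
print(f"nuclear: ${cost_per_kw:,.0f} per kW")  # nuclear: $8,000 per kW

# At over twice coal's per-kW cost, a comparable coal plant
# would come in under half that by the same arithmetic.
coal_per_kw_upper_bound = cost_per_kw / 2
print(f"coal: under ${coal_per_kw_upper_bound:,.0f} per kW")
```

And as the text notes, that arithmetic still omits operating, decommissioning, and waste-storage costs over the plant’s life.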
(Westinghouse, which claimed to be “the world’s leading supplier of nuclear technology,” went bankrupt in March 2017. Westinghouse built the 1st commercial nuclear reactor in 1957. Since then Westinghouse built half of the nuclear power plants in the world, more than any other company; but it never learned to construct plants on schedule or within budget. Cost overruns on plants under construction finally felled the company.)
“History shows that the expense involved in nuclear power will never change. Past construction in the United States exhibited similar cost increases throughout the design, engineering, and construction process. The technology and the safety needs are just too complex and demanding to translate into a facility that is simple to design and build.” ~ American nuclear power regulator Gregory Jaczko
Total direct life-cycle cost of nuclear energy is easily 4 times that of coal-fired electricity, though that figure ignores the indirect but extensive damage of pollution caused by burning coal. Which means that coal and nuclear power are ultimately twin devils to choose between.
“Coal often contains uranium from the mined ore. Operating a coal-fired power plant emits more radioactive material in its exhaust than an operating nuclear power plant.” ~ American physicist Charles Ferguson
The other part of the nuclear energy equation is that spent fuel must be stored so as not to leak deadly radiation for hundreds of thousands of years. This was never realistically planned for, and so its cost never figured in.
A typical nuclear plant generates 20 tonnes of spent fuel each year. As of 2010, the nuclear industry had accumulated some 270,000 tonnes of spent fuel worldwide since the dawn of the atomic power age.
Nuclear waste can be contained indefinitely in certain geological formations. There is no engineering roadblock to doing so. Numerous appropriate sites are known, and several are in use.
In some countries, politics has prevented long-term nuclear waste storage. The US has been notably negligent in letting petty politics prevent decent siting for radioactive waste disposal. This dereliction has increased the cost and risk associated with nuclear energy.
Private generation of nuclear power is subsidized by governments across the globe. The US government pays for over 1/3rd of the costs associated with nuclear power while power companies profit off consumers. From 1947 to 2015 the US government gave its nuclear industry over $3 trillion.
“It’s just like a mule. A mule is a docile, patient beast, and he will give you the power to pull a plow for decades, but he wants to kill you. He waits for years and years for that rare, opportune moment when he can turn your lights out with a simple kick to the head.” ~ Jerry Poole, referring to nuclear power
History has shown that humans are neither methodical nor careful enough to be messing around with radioactive material. Of course, that logic is easily extended to dangerous substances of all sorts, from toxic chemicals to DNA.
Globally, there have been many thousands of radioactive mishaps. As a result, the atmosphere and waterways have been irradiated, and soil irredeemably contaminated. Statistically, Americans and Russians have been especially careless, though the Japanese have also been frequent fumblers in the nuclear realm (which is strange, since the Japanese are the only people to have had massive radiation inflicted upon them, and so should be most acutely aware of its devastation potential).
There have been at least 115 severe nuclear plant accidents 1942–2015. This does not count the several hundred other foul-ups where radioactivity made for a very bad day. That notwithstanding, the pollution from coal-fired power generation has been much more damaging to the environment and human health.
The 1979 accident at Three Mile Island in Pennsylvania, with a partial meltdown of the plant and modest radioactive gas release, galvanized anti-nuclear sentiment in the US. Public panic was abetted by the movie The China Syndrome being in theaters at the time. The movie dramatized a nuclear plant mishap and subsequent cover-up, which the Three Mile Island foul-up made seem all too plausible.
“The safety of the environment and even workers was not exactly a primary concern in the Soviet Union.” ~ American research scientist James Mahaffey
In 1986, a reactor at the Chernobyl nuclear power plant in Ukraine ruptured during an unauthorized test by inept operators, causing the core to catch fire, with extensive release of radioactive steam into the atmosphere.
It did not help that the reactor had a dangerous design: a Soviet-made reactor built to produce both electricity and plutonium for bombs. The plant also lacked the concrete containment structure, standard elsewhere, that might have held in a core meltdown.
The Soviets tried to cover the incident up, which made matters much worse: people were unknowingly exposed to lethal doses of radiation.
The Chernobyl disaster caused lasting environmental and political damage. Chernobyl was instrumental in changing government policies toward more transparency, termed glasnost, which culminated in the disintegration of the Soviet Union in 1991.
Nuclear Power in Japan
Aside from chopping down its forests, Japan lacks fuel resources. Japan has to import fuels to meet ~90% of its energy needs. This deficiency was a prime motivator in the militaristic fever that infected the Japanese government in the 1st half of the 20th century.
With scant fossil fuels, it is unsurprising that Japan was an early and enthusiastic adopter of atomic energy (despite being a victim of 2 atomic holocausts). Slowdown in nuclear plant construction in Japan only happened after several serious accidents which provoked protests and resistance to new plants. Japan has had over a dozen major nuclear incidents, and many more of modest consequence. Revelation of repeated cover-ups involving mishaps did not endear the public to the cause of nuclear power.
The Japanese public had never been in favor of nuclear energy. The industry, abetted by the government, did its best to manipulate public opinion.
Geologically, Japan is ill-suited to host nuclear power plants. Situated in the Pacific Ring of Fire, atop 3 tectonic plate boundaries which are fond of geophysical friction, Japan sports earthquakes like Idaho sprouts potatoes.
In the wake of an earthquake 153 km offshore, a tsunami hit the northeast coast of Japan on 11 March 2011, resulting in 18,500 deaths. A 13-meter wave crested the 10-meter seawall at the Daiichi nuclear power facility, flooding the low-lying rooms housing the diesel generators that powered pumps critical to the cooling system. Core meltdowns ensued in 3 reactors, accompanied by hydrogen-air chemical explosions. Building the hapless facility at Fukushima was considered ill-advised 600 years ago. A few kilometers inland from the nuclear power site stands a set of inscribed stones set in a roughly semicircular pattern, marking the distance that a tsunami had once washed ashore. The inscription on the stones reads: “don’t even think of building anything between here and the ocean.”
The 2011 disaster was merely the last in a series of mishaps at Daiichi. Besides lamentable siting, the plant was never well managed.
“There were design problems that led to the disaster that should have been dealt with long before the earthquake hit. Fukushima Daiichi was a sitting duck waiting to be flooded.” ~ Turkish civil and environmental engineer Costas Synolakis
The radiation leak at Daiichi was massive and went uncontained for years. Evacuation from the area incurred 1,600 deaths. The cancer toll from radiation exposure is only beginning to emerge: by 2014, screening had found thyroid abnormalities in over 40% of the children tested in Fukushima prefecture, though how many of those owe to radiation remains disputed. One contested estimate attributed 10,000 deaths to radiation exposure by 2014.
Investigation into the accident found the disaster “foreseeable” and “man-made.” Plant owners and government regulators bungled their way through the aftermath, making matters worse.
“The accident destroyed people’s trust in the industry, in the government, and experts.” ~ Japanese nuclear scientist Ikuro Anzai
The Japanese government is determined to rebuild Fukushima prefecture. The power company is decommissioning the Daiichi reactors; a process that may take 3 to 4 decades and cost over $15 billion.
The Fukushima disaster provoked a rethink regarding nuclear power around the world. The Germans and Swiss swore off nuclear power, with Germany leaning back on coal in the interim.
Meanwhile, China, South Korea, and India were unfazed. Operation of existing facilities and plans for new reactors were unaffected.
The reaction in Japan was especially stark. Prior to the Fukushima incident, Japan had been getting nearly 30% of its electricity from nuclear reactors and was headed toward 40%. That immediately dropped to zero in the wake of Fukushima, which shook the government’s faith that atomic power was safe.
Practical considerations prevailed a very few years later. In 2014 the government sought to reopen nuclear plants, aiming for “a realistic and balanced energy structure”: pabulum which masked desperation.
As electric power demand varies during the day, peak power is met by running plants that can quickly be brought on-line and then shut off. Oil is suitable. Natural gas is even better.
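The base/peak split amounts to a simple dispatch rule: run the cheap base-load plant flat out, and cover whatever demand exceeds it with quick-starting gas units. A toy sketch with made-up hourly demand figures:

```python
# Toy merit-order dispatch: a base-load plant covers the constant floor;
# fast-starting peakers cover whatever demand exceeds it.
hourly_demand_mw = [620, 600, 610, 700, 850, 950, 900, 760]  # made-up figures

base_load_mw = min(hourly_demand_mw)  # the level of power always needed

dispatch = [(base_load_mw, demand - base_load_mw) for demand in hourly_demand_mw]
for base, peak in dispatch:
    print(f"base {base} MW + peakers {peak} MW")
```

The base plant runs around the clock, so its fuel cost dominates; the peakers run only a few hours a day, so their higher fuel cost matters less than their fast start-up.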
Historically, natural gas was more expensive than petroleum. But that changed at the turn of the 21st century as fracking took off.
In 2009 in the US, natural gas provided 23.4% of electric power generation, while petroleum was only 1%. Hydroelectric power generation, by damming rivers, contributed 7%.
Electric power transmission is the bulk transfer of electrical energy from a generating power station to electrical substations located near demand centers. Electricity is then distributed locally to customers via distribution lines.
Even at high voltage, transmitting electricity over long distances is not especially efficient: 6–10% is lost to resistance, turning electricity back into heat.
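Resistive loss scales with the square of current (P_loss = I²R), which is why long-distance lines run at ever higher voltages: delivering the same power at 10 times the voltage means one-tenth the current and one-hundredth the resistive loss. A sketch with illustrative line values (the 10-ohm line resistance is an assumption, not a measured figure):

```python
def line_loss_fraction(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """Fraction of transmitted power lost to resistance: I^2 * R / P."""
    current = power_w / voltage_v  # I = P / V
    return current ** 2 * resistance_ohm / power_w

# Illustrative: 100 MW sent over a line with 10 ohms of resistance.
for kv in (110, 345, 765):
    loss = line_loss_fraction(100e6, kv * 1000, 10)
    print(f"{kv} kV: {loss:.1%} lost")
```

Raising the line from 110 kV to 765 kV cuts the resistive loss from roughly 8% to a fraction of a percent in this sketch, which is the whole rationale for step-up transformers at the generating station.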
As power lines may be loaded from various plants, transmission lines form an interconnected grid. The transmission grid system embodies a self-organized criticality. Excess load may be overcome by shifting electrical flow, but a large, sudden loss in supply or surge in demand can bring down an entire system, causing a blackout (power outage).
Failure protection is built into grid systems to prevent, or at least localize, blackouts. But the physical components behind electricity grids are surprisingly fragile, and hard to come by.
The US runs on roughly 2,500 large transformers, most of unique design. Only 500 or so can be built per year globally. It typically takes a year or more to receive an ordered transformer. Some transformers exceed 400 tonnes. The average American transformer is 40 years old.
Downing the transformers at just 9 critical substations could shut America’s electricity grid down for months. The easiest way to do so would be shooting them with a high-caliber rifle. Which 9 substations are crucial is secret.
A surge of energy from the Sun took down Quebec’s electricity grid for 9 hours on 13 March 1989. Had such a solar storm hit America, it may have destroyed a quarter of the high-voltage transformers, knocking out power for quite some time.
An electromagnetic pulse, which can be prodigiously produced by a nuclear bomb designed to maximize gamma radiation, could knock out electric power for years.
More mundanely, a country’s electricity grid can be knocked out by cyber-attack. Hackers cut power in Ukraine for a few hours in December 2015. The same could be achieved elsewhere with determined effort, though power outage would likely be measured in hours, not days.
Despite the risks and consequences, startlingly little has been done to protect the electricity grids in most countries, including the United States. The electric power industry expects the government to take the lead in protecting its assets. It took a decade for the North American Electric Reliability Corporation to draw up a vegetation-management plan after an Ohio power line sagged into tree branches and cut power to 50 million northeasterners at a cost of roughly $6 billion.
National security in many countries amounts to bullying foreigners, harassing dissidents, and maintaining a war machine, not looking after economic well-being beyond subsidizing favored corporations and industries.
Petroleum is another energy source that came into prominence in the back half of the 19th century, though the substance had been known since ancient times. Asphalt was used in constructing the walls and towers of Babylon in the 18th century bce. By 347 ce there were bamboo-drilled oil wells in China.
Scottish chemist James Young distilled kerosene from petroleum in 1847. Kerosene was to prove a cheaper alternative to whale oil, to cetacean relief.
Polish pharmacist and petroleum industry pioneer Ignacy Łukasiewicz invented the modern kerosene lamp and street lamp in 1853, then proceeded to construct the first modern oil well (1854) and oil refinery (1856).
One of the first engine-drilled oil wells belonged to American oil driller Edwin Drake, who struck oil in 1859 near Titusville, Pennsylvania.
As with electricity, petroleum distillates were first employed as illuminants. While the lighter fractions went to lighting, thicker fractions served as machine lubricants, and the heavy, residual fractions, once waste, became heating oils. The lightest, most volatile fractions – naphtha and gasoline – were regarded for decades as dangerous nuisances.
Fossil fuels went everywhere pipelines, planes, ships, trains, trucks, and automobiles would take them, and from everywhere they were exploited, with little regard to the continuing viability of the environment to support life.
Everywhere that oil is extracted, transported, or processed, it spills. Everywhere. (There were 5,712 significant leaks or ruptures in US oil and gas pipelines 1998–2017. Offshore oil drilling in US waters resulted in 2,441 spills 1964–2015.) Sooner or later, there are significant spills at every major find where oil is extracted. The economics of oil is the practice of accident.
Affected ecosystems do not fully recover for centuries. Water polluted by petroleum cannot simply be made clean again for plant or animal life.
“There are no fish in the lakes.” ~ Peruvian Galo Vásquez after an oil spill in the remote Amazon
Ships carrying petroleum regularly have spills, most of which are ignored. Only when the vessel does not get away are the polluters even identified.
Most major offshore oil spills involve tankers being hit or running aground. Time and again, carelessness is the cause.
Exploding oil rigs are more spectacular but less frequent. America has had 2 such historic events, though there have been hundreds of others.
On 28 January 1969, a well blowout on an oil rig off the coast of Santa Barbara, California spilled ~14,000 tonnes of oil into the channel and onto the beaches. The marine and coastal ecosystems were devastated.
At the time, the US government had no regulations on offshore drilling. It took 9 years for the federal government to take any interest, to scant effect. The government does not enforce safety or environmental quality regulations on offshore drilling sites.
On 20 April 2010, an oil rig in the Gulf of Mexico – the Deepwater Horizon – exploded and then sank. The rig was 66 km off the coast of Louisiana. The massive leak, the largest in history, spewed for 87 days before the well was finally capped.
The environmental destruction from the spill has been estimated to be over $17 billion. This ridiculous price tag – based on projected loss of revenues from other exploitation of Nature no longer possible – does not account for ongoing damage nor that ecosystems will never recover.
British Petroleum, which operated the well, paid a tax-deductible fine equivalent to ~10% of its annual revenues, and helped with the cleanup. No effective regulations were put into place to prevent a recurrence.
Another long-standing leak of gross proportion in the Gulf of Mexico is from an oil-production platform 19 km off the coast of Louisiana that sank during Hurricane Ivan in 2004. Nothing has been done to cap the many leaking wells which spew 300–700 barrels of oil per day into Gulf waters.
Almost all oil pipelines leak at some time. Even pipelines that do not visibly leak exude toxins which kill fish and other aquatic creatures. Universally and unequivocally, petroleum is a death sentence for life.
American geologist King Hubbert presciently predicted at the 1956 meeting of the American Petroleum Institute that US oil production would peak by 1970. Coupled with the oil shocks that the Western world received from the Arabs in the 1970s, the idea of alternative energy sources became attractive.
Nuclear was the fuel of the future at first, but environmentalists envisioned energy from “renewable” sources, such as the Sun, wind, and water. The 1979 Three Mile Island accident soured a generation of Americans on nuclear power.
The appeal of renewable energy is its supposed endlessness: taking what Nature inexhaustibly provides. The reality is quite different. The resources required, and pollution put out, in producing presumed ‘clean’ energy belies the attribution. All human energy production is environmentally destructive.
“So-called ‘sustainable development’ is meaningless drivel.” ~ English scientist James Lovelock
“Biofuels are associated with lower greenhouse gas emissions but have greater aggregate environmental costs than gasoline.” ~ ecologists Jörn Scharlemann & Bill Laurance
Biomass was man’s main energy source for the many millennia before industrialization. Even after, ethanol found favor for powering vehicles.
American industrialist Henry Ford was an enthusiastic ethanol proponent. His Model T could run on it or gasoline.
Ethanol is ethyl alcohol (C2H5OH) used as engine fuel. Grain alcohol for guzzling and other medicinal applications is made by fermenting sugary crops such as beets, sugarcane, and corn. Industrial ethanol is instead synthesized via hydration of ethylene, a petroleum derivative.
Increased fuel demand during the 1st World War boosted US ethanol production, but the introduction of leaded gasoline and the increasingly inexpensive refining of crude oil took the crops out of the gas tank.
The 1970s oil shocks brought Brazil back to ethanol, but it was not until the 21st century that the US got back into biofuel in a big way, thanks solely to government mandates and subsidies.
Growing crops for vehicular rather than dietary consumption has played a significant role in food price inflation. Further, the environmental dilemma associated with cultivation is compounded by trying to grow both food and fuel.
The energy content of ethanol is only 65% that of gasoline. Low power density and the acreage required to grow feedstock alone make ethanol an ill-advised energy source. But biofuel gets worse when considering its environmental impact beyond growing the biomass to derive it. Ethanol processing produces airborne heavy metals, copious amounts of wastewater, and other pollutants.
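The 65% figure can be checked against representative energy densities. The values below are illustrative assumptions (approximate lower heating values), not figures from the text:

```python
# Rough comparison of ethanol vs. gasoline as motor fuel.
# Energy densities are approximate lower heating values (assumed).
GASOLINE_MJ_PER_L = 34.2
ETHANOL_MJ_PER_L = 22.3

ratio = ETHANOL_MJ_PER_L / GASOLINE_MJ_PER_L
print(f"Ethanol holds {ratio:.0%} of gasoline's energy per liter")

# A tank of E85 (85% ethanol, 15% gasoline) therefore carries less energy,
# which shows up directly as reduced driving range per tank.
e85_mj_per_l = 0.85 * ETHANOL_MJ_PER_L + 0.15 * GASOLINE_MJ_PER_L
print(f"E85 holds {e85_mj_per_l / GASOLINE_MJ_PER_L:.0%} of gasoline's energy per liter")
```

With these assumed densities, the per-liter ratio lands at ~65%, matching the figure above.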
Rivers and waterways in Brazil’s sugarcane regions have been biologically dead since the 1980s owing to biofuel effluents. Ethanol is ~1/3rd of Brazil’s automotive fuel.
Whereas America feeds its biofuel frenzy with corn, Brazil grows sugarcane to make its ethanol. For its energy inputs, sugarcane delivers a whopping 8 times the output that corn does. But sugarcane doesn’t grow well in the US Midwest. So American farmers are subsidized to grow an insensible fuel.
Only 1/3rd of the world’s great rivers remain free flowing. The rest are dammed.
Hydro power involves damming mighty rivers so that their majestic flows turn turbines to transform rotary motion into electricity. Every dam is an ecological disaster that destroys the ecosystem where it is located. The larger the dam, the greater the damage. The horrendous Hoover Dam, near Boulder City, Nevada, has caused earthquakes.
The damn cost of dam construction and maintenance makes hydro power uneconomic on top of being environmentally outrageous. There were 3,185 hydroelectric dams in the world as of 2016, built at a total cost of ~$2.4 trillion.
“The ecological impoverishment of rivers is particularly dramatic when a series of dams prevents interlinking different habitat types.” ~ German biologist Juergen Geist
Reaping the wind has come quite a way from the windmills of yore that provided the power to grind grain. Wind power now consists of spindly propeller-driven turbines that generate electricity from gusts.
Environmental-impact studies of wind farms prior to their planting consistently underestimate the damage they do. Birds and bats succumb to the deadly rotors, including endangered species. Bats are fascinated with wind turbines, to their great peril. Though some birds are savvy enough to realize the threat whirling wind turbines pose, the construction of wind farms causes avian populations to plummet through habitat disruption.
The noise from wind farms is fierce. A planned facility on the coast of Germany was delayed due to the threat posed to porpoises by the din of wind turbines.
“A porpoise is doomed if its hearing is shattered.” ~ German marine biologist Kim Detloff
Though the prospects of wind power are promising, the electricity to be had is limited. Wind farms must be situated in remote spots where winds are strong and consistent.
Further, wind turbines extract kinetic energy and produce a plume of low-velocity air downstream; hence wind farms must be spread out to be economic. This exacerbates the habitat degradation of onshore wind power.
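The kinetic-energy extraction described above follows textbook wind physics: power through a rotor disk scales with the cube of wind speed, and the Betz limit caps the fraction any turbine can capture, which is why slowed air trails downstream. A minimal sketch, using a hypothetical rotor size and wind speed:

```python
import math

# Standard physics: power in wind through a rotor disk, and the Betz
# limit on the extractable fraction. Rotor and wind values are assumed.
AIR_DENSITY = 1.225   # kg/m^3, sea-level air
BETZ_LIMIT = 16 / 27  # ~59.3%, theoretical maximum extractable fraction

def wind_power_kw(rotor_diameter_m: float, wind_speed_ms: float) -> float:
    """Kinetic power (kW) flowing through the rotor disk: 0.5 * rho * A * v^3."""
    area = math.pi * (rotor_diameter_m / 2) ** 2
    return 0.5 * AIR_DENSITY * area * wind_speed_ms ** 3 / 1000

# Hypothetical 100 m rotor in a steady 10 m/s wind:
available = wind_power_kw(100, 10)
extractable = available * BETZ_LIMIT
print(f"available: {available:,.0f} kW, Betz-limited: {extractable:,.0f} kW")
```

The cubic dependence on wind speed is why farms must sit where winds are strong and consistent, and the extracted energy is exactly what the downstream plume is missing.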
In every country where wind energy has a presence, it is only thanks to government subsidies.
“Wind energy doesn’t make sense without the tax credit.” ~ American business magnate Warren Buffett
The US has subsidized this blustery energy sector for over a quarter century. The outlay of taxpayer money in 2014 alone was $12 billion. That year, wind energy supplied a paltry 1.6% of American electricity. Government-sponsored wind power met 5.6% of British electricity consumption in 2014.
As any plant would happily tell you if it could talk, the Sun is the ultimate renewable energy source. The plant may then rightly remind you that its kind is the only truly green energy technology there is.
The feeble attempts by men to turn solar rays into electricity take 2 forms: solar concentrators and photovoltaic panels. Solar concentrators are thermal conversion systems which focus the Sun’s rays to heat a fluid that produces steam, thereby driving a turbine. Photovoltaic cells more directly convert solar energy to electricity, requiring only an electrical inverter to achieve their aim.
There are 3 main solar concentrating thermal conversion systems: parabolic troughs, parabolic dishes, and central receivers.
Parabolic troughs are the simplest of the solar thermal systems. A curved trough covered with reflectors rotates to track the Sun. The mirrors focus sunlight onto a fluid-filled pipe at the center of the trough. The heat transfer fluid, typically a thermal oil, may reach 400 °C. The hot oil is then used to make steam for a standard turbine generator that produces electric power. Trough systems can concentrate solar radiation to 100 times the intensity of normal sunlight.
In 1866, French inventor Auguste Mouchout used a parabolic trough to power a steam engine. American inventor and engineer Frank Shuman built the first parabolic trough system in 1897. He patented the entire system in 1912.
Cheap oil subsequently supplanted all interest in solar energy until the 1970s, when Shuman’s basic design found favor, albeit updated with better materials.
Parabolic dishes are like troughs, except they focus light to a point instead of a line. Such systems can concentrate up to 10,000 times the intensity of sunlight.
Central receiver systems use an array of computer-controlled sun-tracking mirrors (heliostats). The heliostats focus sunlight onto a single central tower, termed a receiver. The receiver is filled with a mixture of molten thermal salts. Hot molten salt is pumped through heat exchangers which produce steam to turn a turbine that generates electricity.
Commercial deployment of concentrating solar power plants commenced in the US in 1984, lasting only to 1990. Then, from 1991 to 2005, no plants were built anywhere in the world.
Since then, plants have been built in Spain, the US, India, and the Middle East. Concentrating solar power offers sunny countries a decently clean energy technology with a promising future. But concentrators have not got nearly the attention of their downright dirty solar cousin: photovoltaics.
“It takes a lot of energy to extract and process solar-grade silicon.” ~ American chemist Seth Darling
French physicist Edmond Becquerel discovered the photovoltaic effect in 1839. The discovery remained a mere curiosity for decades; the first selenium solar cells, built in the 1880s, could convert only 1% of sunlight to electric power.
That changed in 1954, when Bell Labs physicists developed silicon-based solar cells that worked at 6%. Suddenly, photovoltaics fell within the realm of semiconductor technology development.
Solar cell electrical output is directly related to photonic input. Engineers sometimes incorporate mirrors to up the incident sunlight striking the cells.
Solar cells are put together into panels for deployment. An electrical inverter is employed to convert photovoltaic output into usable electricity.
In stark contrast to concentrators, the most atrocious fraud of clean energy is photovoltaics, the fawned-upon star of the solar show. Producing solar panels is an egregious exercise in pollution, and the environmental destruction does not come cheap. The silicon material used in solar cells is the same as for semiconductors, but far more is required for photovoltaics.
Producing silicon wafers is energy intensive and produces mostly waste. 80% of the initial high-grade silicon is lost in the process.
Sawing silicon wafers releases dangerous dust as well as discharging sodium hydroxide and potassium hydroxide. 50% of the remaining material is lost in the air and water used to rinse wafers. All told, 90% of the silicon used to make solar cells is wasted.
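The waste figures chain together arithmetically: losing 80% in wafer production and then half of the remainder in sawing and rinsing leaves ~10% of the original silicon, i.e. ~90% wasted:

```python
# Chaining the chapter's silicon-waste figures:
# ~80% of high-grade silicon is lost producing wafers,
# then ~50% of what remains is lost in sawing and rinsing.
after_wafering = 1.0 - 0.80          # 20% survives wafer production
after_sawing = after_wafering * 0.5  # half of that survives sawing/rinsing
total_wasted = 1.0 - after_sawing

print(f"Silicon reaching finished cells: {after_sawing:.0%}")
print(f"Total silicon wasted: {total_wasted:.0%}")
```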
Silicon solar cell processing involves the use or release of numerous toxins, including arsenic, arsine, hexavalent chromium, lead, phosphine, phosphorous oxychloride, silicon tetrachloride, silicon trioxide, sodium hydroxide (lye), stannic chloride, trichloroethane, and trichloroethylene. Further, caustic and corrosive chemicals, including various acids, are used to remove impurities and clean materials. Perhaps the most dangerous chemical employed is silane, a highly explosive gas involved in accidents on a routine basis. The latest thin-film solar panels also employ numerous toxins, most notably cadmium, a potent carcinogen, and selenium.
Beyond the photovoltaic material, making solar panels produces pollutants such as lead, nitrogen oxide, and sulfur dioxide.
The manufacture of solar cells involves hexafluoroethane (C2F6), nitrogen trifluoride (NF3), and sulfur hexafluoride (SF6). These greenhouse gases make CO2 seem harmless.
C2F6, which is entirely man-made, is 12,000 times more potent than CO2 and survives in the atmosphere for 10,000 years. NF3 is 17,000 times more virulent than CO2, while SF6, the most treacherous greenhouse gas, has 25,000 times the warming effect of CO2.
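These warming potentials translate directly into CO2-equivalents. The sketch below uses the round figures cited above (actual IPCC values differ somewhat):

```python
# Global-warming potentials relative to CO2, as cited in the text.
GWP = {"C2F6": 12_000, "NF3": 17_000, "SF6": 25_000}

def co2_equivalent(gas: str, tonnes: float) -> float:
    """Tonnes of CO2 with the same warming effect as `tonnes` of `gas`."""
    return GWP[gas] * tonnes

# Even a single tonne of these gases warms like many kilotonnes of CO2:
for gas in GWP:
    print(f"1 t {gas} ~ {co2_equivalent(gas, 1):,.0f} t CO2e")
```

This multiplier is why even the small absolute tonnages released in photovoltaic manufacture matter.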
The photovoltaics industry is the leading emitter of these gases. Since photovoltaic production has ramped up in the 21st century, atmospheric concentrations of these gases have been rising at an alarming pace, except that no one seems alarmed.
Solar panel proponents tout photovoltaic efficiencies (~20%) that can only be achieved under ideal laboratory conditions. Those are the only numbers you will ever read. In the field, solar panels produce less than half that percentage on their best days.
The first common cut in efficiency comes with placement: panels not aligned with the Sun suffer greatly reduced performance. This is ubiquitous in rooftop placements.
Atmospheric humidity and haze disperse the Sun’s rays, and so cut photovoltaic productivity. Dust downs efficiencies even more; often by 20% or more.
Even modest blotches, such as from bird droppings or leaves, can cut solar cell outputs dramatically. Due to wiring characteristics, small obstructions disproportionately drop solar panel productivity. Small soiling losses can rob as much as 80% from potential output.
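These field losses compound multiplicatively against the lab rating. In the sketch below, only the 20% dust figure comes from the text; the other loss factors are assumptions for illustration:

```python
# How field losses compound against nameplate photovoltaic efficiency.
NAMEPLATE_EFFICIENCY = 0.20  # lab-rated figure cited in the text

losses = {
    "misalignment (fixed rooftop tilt)": 0.15,  # assumed
    "humidity and haze": 0.10,                  # assumed
    "dust": 0.20,                               # "often 20% or more"
}

effective = NAMEPLATE_EFFICIENCY
for cause, loss in losses.items():
    effective *= (1.0 - loss)
    print(f"after {cause}: {effective:.1%}")

# Even before soiling from droppings or leaves, output already trails
# the lab rating by nearly 40%; soiling pushes it lower still.
```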
Everyone in the solar industry knows that photovoltaic systems are expensive. That is why this solar technology had scant application before governments, misguided into considering it a clean energy technology, stepped in and granted heavy subsidies.
Silicon represents only 20% of the cost of solar panel fabrication. Copper, aluminum, glass, and plastics comprise the bulk of the costs. None of the components are obtained without considerable pollution somehow.
Manufacture and installation are only the beginning of the expense for photovoltaics. Repair and maintenance costs of solar panel systems remain stubbornly high.
At the end of a solar panel’s life (25 years at best), the embedded chemicals can either leach into groundwater, if dumped, or be released into the air, if incinerated. As there is almost no recycling of solar panels, the rare and precious metals that go into them are wasted.
“Companies are surviving on razor-thin margins. They’re not thinking 20, 30 years down the road, where scarcity might enter.” ~ American environmental scientist Dustin Mulvaney