Despite impressive advances in artificial intelligence (AI), software remains in a primitive state. The quality and capability of software are directly dependent upon the language used to create it. Popular programming languages are crude; hence the glitches so often seen in software.
“Language is an instrument of human reason, and not merely a medium for the expression of thought,” noted English mathematician George Boole, who developed Boolean logic in the mid-19th century.
All the early computers were maddeningly difficult to program. They were instructed using machine code: binary words that the computer processor could understand but that humans could barely read or write.
The first computer language beyond machine code was assembly, which emerged in the late 1940s. Comprising numbers, letters, symbols, and short instruction words, such as “add,” assembly language was one small step, and yet one giant leap, away from machine code. An assembler translated an assembly-language program into machine code, which then had to be entered into the computer by an operator.
In 1951, American computer programmer Grace Hopper conceived of a compiler which could translate human-readable program instructions into machine-readable binary. Hopper was one of the very few women pioneers in computing.
Fortran was developed by IBM in the mid-1950s. The language aimed at easy translation of mathematical formulas into code. It was the first compiled language to achieve widespread use.
Fortran was immediately popular, and computer manufacturers added their own customizations to it, resulting in a plethora of variants. Programs thus became tied to a specific manufacturer’s machine.
In reaction to Fortran, Hopper was instrumental in the development of COBOL, beginning in 1959. COBOL was one of the first high-level programming languages.
COBOL was created as a portable, English-like language for data processing at the behest of the US Defense Department, which then insisted that computer manufacturers provide it. Whence COBOL’s widespread adoption by businesses, despite its being verbose and ill-suited to structured programming. The common result was monolithic, incomprehensible programs that were exceedingly difficult to maintain. Dutch computer scientist Edsger Dijkstra declared that “the use of COBOL cripples the mind. Its teaching should therefore be regarded as a criminal offense.”
Early languages such as Fortran and COBOL were ineptly designed. As programs written in these languages grew in complexity, it became increasingly difficult to debug them, improve or add features, or keep them working when the hardware changed.
There were no theoretical constructs surrounding computer programming in the 1950s: just languages that arose ad hoc from concepts of getting a certain type of job done.
COBOL exemplifies the point. The driving idea behind COBOL was to make programs easier to read. Despite the fond hope that readability would mystically engender maintainability, that was not the case.
Fortran and COBOL both had a GOTO statement that allowed one-way transfer of control to a distant line of code. GOTO was a convenient shortcut for programmers trying to get their programs working as quickly as possible, but it destroyed longevity by rendering programs unmaintainable. It was nigh impossible to follow the thread of execution in programs littered with GOTO statements.
Via the widespread use of GOTO, Fortran became derisively known as a “write-only” language: that is, not readable. Its ability to produce “spaghetti code” in the hands of inept programmers begat structured programming languages.
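For illustration, a minimal sketch of the same computation written both ways; C is used here for familiarity (GOTO-era Fortran and COBOL exhibited the same pattern, usually at much greater scale):

```c
#include <stdio.h>

/* Spaghetti control flow: jumps obscure the thread of execution. */
int sum_goto(const int *a, int n) {
    int i = 0, total = 0;
top:
    if (i >= n) goto done;
    total += a[i];
    i++;
    goto top;
done:
    return total;
}

/* Structured equivalent: one entry, one exit, readable at a glance. */
int sum_structured(const int *a, int n) {
    int total = 0;
    for (int i = 0; i < n; i++)
        total += a[i];
    return total;
}

int main(void) {
    int a[] = {1, 2, 3, 4};
    printf("%d %d\n", sum_goto(a, 4), sum_structured(a, 4));
    return 0;
}
```

Two jumps in a ten-line routine are already harder to trace than the loop; programs with hundreds of GOTOs became impenetrable.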
The difficulty of code maintenance drove software language evolution.
Procedural integrity was the paradigm behind structured programming: a regime aimed at code clarity, in reaction to the damage wrought by GOTO.
The 1960 ALGOL programming language supported block structures, delimited by begin and end statements. ALGOL 60 was the first language that offered localized lexical scope for variables and nested function definitions: in short, execution compartmentalization.
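ALGOL’s begin/end blocks survive as braces in its descendants. A minimal C sketch of lexical scoping:

```c
#include <stdio.h>

int main(void) {
    int x = 1;          /* x is visible throughout main */
    {
        int y = 2;      /* y exists only inside this inner block */
        printf("%d\n", x + y);
    }
    /* y is out of scope here; referencing it would not compile */
    printf("%d\n", x);
    return 0;
}
```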
Fortran and COBOL eventually acquired structured programming facilities. So did BASIC, Fortran’s simple offspring.
In forcing programs to execute in a purely procedural manner – one routine calling another, with a subroutine returning to its caller upon its completion – structured programming centered on control structures. As American software legend Donald Knuth observed, “Control structure is merely one simple issue, compared to questions of abstract data structure.”
ALGOL did not address an issue equally critical to modularizing execution: localizing data to help ensure its integrity. But its offspring did.
The stellar spawn of structured programming was Pascal, designed by Swiss computer scientist Niklaus Wirth at the close of the 1960s. Intended as a language to teach programming, Pascal was something of a straitjacket in its inflexibility. Pascal had strong typing: data types were confined to their declared usage unless explicitly converted.
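Pascal code is not shown here; a small C sketch conveys the flavor of strong typing, in which distinct declared types do not silently interchange (the Money and Weight types below are hypothetical):

```c
#include <stdio.h>

/* Distinct declared types: the compiler rejects mixing them silently. */
typedef struct { double dollars; } Money;
typedef struct { double pounds;  } Weight;

/* Conversion must be explicit, via a deliberately written function. */
Money price_per_pound(Weight w, double rate) {
    Money m = { w.pounds * rate };
    return m;
}

int main(void) {
    Weight w = { 2.5 };
    /* Money m = w;  -- would not compile: the types are distinct */
    Money m = price_per_pound(w, 3.0);
    printf("$%.2f\n", m.dollars);
    return 0;
}
```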
From a historical perspective, Pascal was a dead end the day of its conception. Instead, the descendants of ALGOL which bore the most fruit were a throwback toward assembly.
The new twist in languages was fulsome encapsulation: objects. The keystone of object-oriented programming (OOP) is making everything modular. In an object-oriented program, data is encapsulated in objects. Objects are of a particular class. Each class typically defines one or more behaviors. An object is an instance of a class.
The term behavior has many synonyms: procedure call, routine, function, method, and message. They all amount to the same thing: code which does something with or to data; in the instance of OOP, data within an object.
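Even plain C, which has no native object support, can sketch the idea: a class as a data layout plus behaviors bound to it, an object as an instance. A minimal illustration with a hypothetical Counter class:

```c
#include <stdio.h>

/* A class: data layout plus a behavior bound to it. */
typedef struct Counter {
    int count;                                /* encapsulated data */
    void (*increment)(struct Counter *self);  /* behavior */
} Counter;

static void counter_increment(Counter *self) {
    self->count++;
}

/* Constructing an instance (object) of the class. */
Counter counter_new(void) {
    Counter c = { 0, counter_increment };
    return c;
}

int main(void) {
    Counter c = counter_new();   /* c is an object */
    c.increment(&c);             /* sending the object a message */
    c.increment(&c);
    printf("%d\n", c.count);     /* prints 2 */
    return 0;
}
```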
In the early and mid-1960s Norwegian software scientists Ole-Johan Dahl and Kristen Nygaard took ALGOL and gave it an object orientation. The result was Simula 67. As its name suggests, Simula was designed to run simulations.
Object orientation injected valuable new paradigms into programming languages that extended beyond encapsulation. These novel concepts will be covered shortly. First, an interlude: shortly after the invention of software objects, an altogether different, retrograde development swept the world of programming.
In working to port the infant UNIX operating system from one computer architecture to another in 1972, American programmer Dennis Ritchie developed the language simply named C. C was a language that hewed close to assembly while offering high-level constructs associated with structured programming. This spelled both efficiency and portability: a sure winner in the programming world. In the decades that followed, C became the predominant programming language.
The economy of C was appealing. But C was conceptually a step back for software development in terms of maintainability.
Contrastingly, the idea of encapsulated software objects and associated behaviors was compelling. There were other potential benefits to object-oriented programming.
A principal paradigm in object-oriented programming is inheritance: the ability of one class (a subclass) to employ (inherit) the behaviors of another class (its superclass). The advantage of inheritance is its potential for reusability: what the GOTO statement aimed at, but without the spaghetti code. In OOP, base classes establish basic functionality, which subclasses inherit, refine, and specialize.
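Inheritance can be approximated in C by embedding the superclass at the head of the subclass, so superclass behaviors apply to the subclass unchanged; a sketch with hypothetical Shape and Circle types:

```c
#include <stdio.h>

/* Superclass: basic functionality. */
typedef struct {
    double x, y;
} Shape;

void shape_move(Shape *s, double dx, double dy) {
    s->x += dx;
    s->y += dy;
}

/* Subclass: inherits Shape's data and behaviors by embedding it first. */
typedef struct {
    Shape base;     /* inherited part */
    double radius;  /* specialization */
} Circle;

int main(void) {
    Circle c = { { 0.0, 0.0 }, 2.0 };
    shape_move(&c.base, 1.0, 1.0);   /* reusing superclass behavior */
    printf("(%g, %g) r=%g\n", c.base.x, c.base.y, c.radius);
    return 0;
}
```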
Another important facet of object orientation is polymorphism: different classes may have behaviors with the same name. A subclass may have a behavior that overrides a method defined in a superclass. (Polymorphism is sometimes conflated with function or method overloading, which merely reuses one name for different parameter lists.)
Polymorphism simplifies the task of writing behaviors and builds upon inheritance. With polymorphism, generic behaviors may work on objects of any type, allowing a generality in programming that cannot otherwise be achieved.
Operating systems present an application programming interface (API) that allows application software developers to connect with OS functionality through procedure call (behavior) names. Historically, this might involve many hundreds, if not thousands, of individual functions with different names. Polymorphism offers the potential to eliminate the inexorable clutter of an API presented by a structured programming language like C or Pascal.
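A minimal sketch of the idea, again in C with hypothetical types: one polymorphic behavior name, draw, and a single generic entry point replacing a clutter of type-specific function names:

```c
#include <stdio.h>

/* Every class shares one behavior name: draw. */
typedef struct Shape {
    void (*draw)(const struct Shape *self);
} Shape;

typedef struct { Shape base; double radius; } Circle;
typedef struct { Shape base; double w, h;   } Rect;

static void circle_draw(const Shape *s) {
    const Circle *c = (const Circle *)s;   /* base is first member */
    printf("circle r=%g\n", c->radius);
}

static void rect_draw(const Shape *s) {
    const Rect *r = (const Rect *)s;
    printf("rect %gx%g\n", r->w, r->h);
}

/* One generic API call instead of DrawCircle, DrawRect, ... */
void draw_all(const Shape **shapes, int n) {
    for (int i = 0; i < n; i++)
        shapes[i]->draw(shapes[i]);
}

int main(void) {
    Circle c = { { circle_draw }, 1.0 };
    Rect   r = { { rect_draw }, 2.0, 3.0 };
    const Shape *scene[] = { &c.base, &r.base };
    draw_all(scene, 2);
    return 0;
}
```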
The term object-oriented was introduced in the description of Smalltalk, an OOP language created by developers at Xerox PARC, particularly American programmer Alan Kay. With its English-like punctuation, Smalltalk was originally intended to be a language for “children of all ages.”
Smalltalk was more an environment than a language per se. As such, it did not make much of a commercial impact, but its concepts influenced many of the OOP languages that followed.
OOP became a paradigmatic virus to programming languages: all became infected. Even Fortran was extended with OOP constructs.
Most OOP extensions to languages were afterthoughts: providing some convenience, but not remedying the deficiencies inherent in the language with regard to code maintenance. Some maladapted OOP extensions, such as C++, created problems for programmers rather than offering solutions. C++, which became the standard OOP language, exemplifies how the software industry suffers from a fundamental failure of conceptualization.
Danish programmer Bjarne Stroustrup was a student when he started tinkering with adding OOP to C. Stroustrup liked Simula 67 but found its execution far too slow for practical use.
Stroustrup went to work for AT&T Bell Labs, where he gained the credibility needed to promote his C++ language. Just as C had been the programmer’s language of choice, C++ inherited its crown among an audience just being introduced to the OOP paradigm.
It was a gross miseducation. C++ injected complications without advantage. American computer marketer Steve Jobs, co-founder of Apple Computer, remarked about C++ as an addition to C, “You’ve baked a really lovely cake, but then you’ve used dog shit for frosting.”
Several savvy C programmers thought of C++ as shambolic frosting on C’s scrumptious cake. This led to a handful of object-oriented C-based language variants. Almost all fell into oblivion, while the C++ hack job lived on. A shining gem which was lost in the mists of time was OOPC: a revolutionary language and software product that failed to spark a revolution.
A tiny California company, Electron Mining, developed the OOPC language in 1987. In 1991, the company commercially released OOPC as a cross-platform software development kit that greatly accelerated application development by dint of the language in which the framework was written.
A program for document processing that handled text and graphics took only two lines of code. Additional features were easily added to such a program by crafting new modules.
The founder of Electron Mining, Gary Odom, was an avid Go player; he even lived in Japan in the early 1980s to better learn the ancient board game. While in Japan, Odom also learned how to design and build computers.
Merging his enthusiasms in Go and software, Odom decided to program the game in the mid-1980s. His survey of extant languages left him wanting.
Odom was inspired by CLOS (Common Lisp Object System), the object-oriented extension to Lisp. Lisp was specified in 1958 as a practical mathematical notation for computer programs. Among high-level languages, only Fortran (1957) is older.
Lisp quickly became the favored language for artificial intelligence (AI) research, and pioneered many ideas, including tree structures, automatic storage management, and recursion (the ability of a behavior to repeatedly call itself).
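A minimal sketch of recursion, in C rather than Lisp for continuity with the earlier examples:

```c
#include <stdio.h>

/* Recursion: a behavior calling itself on a smaller problem. */
long factorial(int n) {
    if (n <= 1)
        return 1;               /* base case stops the recursion */
    return n * factorial(n - 1);
}

int main(void) {
    printf("%ld\n", factorial(10));   /* prints 3628800 */
    return 0;
}
```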
Tilted toward mathematical notation and linked lists, Lisp is an operose language. On the page, it appears as nomenclature swamped by parentheses. Alan Kay said of Lisp: “Lisp isn’t a language; it’s a building material.” Edsger Dijkstra acknowledged the difficulty of Lisp while admiring its potential: “Lisp has jokingly been called ‘the most intelligent way to misuse a computer.’ I think that description is a great compliment because it transmits the full flavor of liberation: it has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts.” Odom appreciated Dijkstra’s perspective.
Common Lisp got its start in 1981. An extension to Common Lisp, CLOS offered great flexibility with data structures and their manipulation. Behaviors could be treated as objects.
OOPC was CLOS in C clothing, without the dross of C++. Further, with its inherent simplicity, OOPC offered elasticity without the syntactic complexity of Lisp.
With its open-ended flexibility, OOPC was created for AI programming. The distinction between classes and objects was largely arbitrary, as classes might have their own data, and objects could have their own behaviors.
Multiple inheritance is the concept of a class or object inheriting methods from multiple superclasses. C++ made multiple inheritance an invitation to confusion, giving this extremely helpful ability a bad reputation. In OOPC, multiple inheritance was enabled simply by listing superclasses in priority order.
Overriding a superclass behavior in C++ meant that the subclass method lost the functionality that the superclass offered. This badly complicated code reuse. With OOPC, simply setting a flag let a subclass behavior override, precede, or follow superclass functionality: hence subclasses could judiciously specialize or augment behaviors.
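OOPC’s actual syntax is not reproduced here; the following C sketch is a hypothetical illustration of both ideas: superclasses consulted in listed priority order, and a flag that lets a subclass behavior override, precede, or follow its superclass’s.

```c
#include <stdio.h>

/* Hypothetical sketch only; not actual OOPC syntax or mechanism. */

typedef void (*Behavior)(void);
typedef enum { OVERRIDE, PRECEDE, FOLLOW } Combine;

typedef struct Class {
    Behavior behavior;              /* this class's version, may be NULL */
    Combine combine;                /* how it composes with superclasses */
    const struct Class *supers[3];  /* superclasses in priority order */
} Class;

/* Dispatch defers to the highest-priority superclass, composing
 * per the subclass's combine flag. */
void dispatch(const Class *c) {
    const Class *super = c->supers[0];  /* first listed wins priority */
    if (!c->behavior) {                 /* nothing local: defer upward */
        if (super) dispatch(super);
        return;
    }
    switch (c->combine) {
    case OVERRIDE:                      /* replace inherited behavior */
        c->behavior();
        break;
    case PRECEDE:                       /* own code, then superclass */
        c->behavior();
        if (super) dispatch(super);
        break;
    case FOLLOW:                        /* superclass first, then own */
        if (super) dispatch(super);
        c->behavior();
        break;
    }
}

static void base_save(void)  { printf("write record\n"); }
static void audit_save(void) { printf("append audit entry\n"); }

int main(void) {
    Class storable = { base_save, OVERRIDE, { NULL } };
    /* Subclass augments rather than replaces the inherited behavior. */
    Class audited  = { audit_save, FOLLOW, { &storable, NULL } };
    dispatch(&audited);   /* prints: write record, then append audit entry */
    return 0;
}
```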
By employing a strict polymorphism, OOPC allowed the full benefit of inheritance and greatly simplified the application programming interface. Unlike other OOP languages, code reuse in OOPC was effortless.
That economy freed a program developer to focus solely on the features needed for the specific application. It was easy to create robust programs in OOPC because defenses were built in to preclude bugs (software defects).
In many distinctive ways, OOPC offered a dynamic flexibility unheard-of in other C-based OOP languages. The behaviors of classes or objects could be changed as a program ran. OOPC afforded a dynamic, self-modifying software system – perfect for AI learning.
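A minimal sketch of such runtime mutability, in generic C terms rather than OOPC’s: an object’s behavior rebound while the program runs.

```c
#include <stdio.h>

/* An object whose behavior can be rebound at runtime. */
typedef struct Agent {
    void (*act)(struct Agent *self);
} Agent;

static void explore(Agent *self) {
    (void)self;
    printf("exploring\n");
}

static void exploit(Agent *self) {
    (void)self;
    printf("exploiting\n");
}

int main(void) {
    Agent a = { explore };
    a.act(&a);          /* exploring */
    a.act = exploit;    /* behavior changed while running */
    a.act(&a);          /* exploiting */
    return 0;
}
```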
Even now, AI developers often don’t understand how the software they create makes decisions. Microsoft AI researcher Kate Crawford: “Engineers have developed deep learning systems that ‘work’ without necessarily knowing why they work or being able to show the logic behind a system’s decision.”
The problem of black-box AI does not occur with OOPC. The state and data that objects carry, and the behaviors they exhibit, are open to inspection at any moment.
Electron Mining knew it was in trouble when a matter-of-fact article describing OOPC in MacTech, the Macintosh developers’ magazine, was met by readers with disbelief that such programming capability was even possible.
Apple Computer learned of OOPC at one of its developer trade shows in 1992. Rather than express interest in the technology, which was far superior to its own, the company simply banned Electron Mining from having a booth at future Apple developer conferences.
Odom gave a presentation of OOPC technology to a room of uncomprehending Microsoft engineers in 1993. Those muddled men were unable to grasp the elegance of OOPC because they accepted kludgy C++ constructs as the best that object-oriented programming could be. Microsoft had already adopted C++ as its standard development language. Even now, most of the problems with Microsoft’s Windows OS can be attributed to it being coded in C++, along with a lack of craft by Microsoft programmers and management.
OOPC highlights the core incompetency that continues to define the software business. The inflexibility and inconsistent quality in software products indicates an industry struggling with technical faculty. As software is entirely a construct of the mind, the implication is obvious.
The most popular programming languages in 2019 were Java, C, Python, C++, and C#. Java is a portable OOP language lacking the power of C at the low end and the flexibility of OOPC at the high end. Python is a general-purpose language with a kitchen-sink approach: COBOL on steroids. C# is Microsoft’s ersatz attempt to objectify C, after finally becoming fatigued with the pitfalls of C++. None of these popular languages comes anywhere near the flexibility or reusability of OOPC.
It’s easy to marvel at software. Underneath the sheen is a senseless struggle from holistic incomprehension.
In this, software is indistinct from other sciences, from physics to economics to politics. Seldom is the instance where humanity’s mental reach does not exceed its grasp.
After Electron Mining folded, Odom became an inventor of computer-related technologies and practitioner of patent law. Odom prosecuted his own inventions before the patent office.
Odom’s innovative technologies were worth tens of billions of dollars to the companies which adopted them. A key feature of Microsoft’s ubiquitous “ribbon” toolbar, introduced in Office 2007, used an Odom-patented technology. Sophisticated network distributed processing was another Odom invention which found favor in computer modeling which required supercomputing power, including astrophysics and quantum physics. Like OOPC, some of Odom’s patented inventions were unemployed jewels of foresight.
Odom then turned inward, becoming a recluse. He spent a decade immersed in scholarly research and writing, quenching his long-held ambition to chronicle human knowledge in a comprehensible manner. The result was the magnum opus he titled Spokes of the Wheel. The book Clarity: The Path Inside was written in a month after his publisher expressed disappointment that the Spokes sampler book – Unraveling Reality – was not a spiritual guidebook.
Odom attained unity consciousness during this scholarly epoch. In abashed reflection of how long it took him to reach enlightenment, he adopted a wry guru name – the Japanese expression for “good endurance” – Ishi Nobu.
Sources:
Ishi Nobu, The Fruits of Civilization, BookBaby (2019).
Will Knight, “The dark secret at the heart of AI,” MIT Technology Review (11 April 2017).
Mike Ananny & Kate Crawford, “Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability,” New Media & Society (13 December 2016).
Dave Gershgorn, “We don’t understand how AI make most decisions, so now algorithms are explaining themselves,” Quartz (2016).
Ben Putano, “A look at 5 of the most popular programming languages of 2019,” Stackify (30 August 2019).