The Software Problem

The root of all problems in software, including obstacles to stronger AI, stems from the language used. The solution, invented over 30 years ago, has been an inadvertent secret.

Software code that cannot be readily corrected has been the bane of programmers since digital computers arose. Maintenance drove the evolution of software.

To that end, the most crucial element has been modularity: the ability to isolate functionality. Object-oriented programming (OOP) languages arose because they have the potential to be highly modular.

A most helpful feature in software is reusability. The economy of reusing a core routine many times, rather than duplicating it to extend its functionality, saves memory, provides flexibility, and offers the potential of faster execution.

Hence, reusability has been the holy grail of software: the functional aim of modularity. By contrast, bug prevention via modularity is a defensive goal, one that enhances programmer productivity and software robustness.

Every computer now needs extensible word processing and graphics capabilities – which an operating system (OS) should provide built-in for programmers to tap into. None does. This owes to operating systems being programmed in ersatz languages, with scant scope for extensible modularity and thereby limited reusability.

Application interfaces – the look and feel of programs – differ from one program to the next, and from one web site to the next within a browser, owing to the lack of reusability in software.

In 1987 a programmer known as GO invented the OOPC language as an object-oriented extension to C.

At the time, through corporate politics, C++ was fast becoming the industry standard for object-oriented programming. Apple founder Steve Jobs called C++ “dog shit.” Under Jobs, Apple instead standardized on Objective-C, another object-oriented extension to C; only after Jobs died did Apple replace it, with Swift in 2014.

The Java language got its start in 1991. Java was about as limited as C++ in features and flexibility but had better structure. Through the good fortune of corporate promotion, Java found a niche as an extension language for web browsers. As Java was never very good, that niche was slowly eroded by apter scripting languages.

GO’s aim was to use OOPC for artificial intelligence programming. With this goal in mind, OOPC was designed to be simple to code but malleable in every way.

The simplicity of OOPC was achieved through extensive polymorphism: the ability to execute a specific routine using a generic name. OOPC behaviors for any object, regardless of type, were tapped by categorized verbs: draw, print, save, et cetera. Other OOP languages offered polymorphism, but often clumsily: C++, for instance, fostered confusion with inflexibilities that could be worked around only by avoiding polymorphism altogether.
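What a categorized verb means in practice can be sketched in plain C, the language OOPC extended. OOPC’s own syntax is not reproduced here; the Object type, the draw slot, and the circle and square routines below are illustrative assumptions only. The point is that callers invoke the generic verb draw without ever naming the specific routine that runs.

    /* Illustrative sketch in plain C, not OOPC. */
    #include <stdio.h>

    typedef struct Object Object;
    struct Object {
        void (*draw)(const Object *self);  /* slot for the "draw" verb */
    };

    /* Type-specific routines; callers never invoke these directly. */
    static void draw_circle(const Object *self) { (void)self; puts("drawing a circle"); }
    static void draw_square(const Object *self) { (void)self; puts("drawing a square"); }

    /* The generic verb: one name, dispatched through the object itself. */
    static void draw(const Object *obj) { obj->draw(obj); }

    int main(void) {
        Object circle = { draw_circle };
        Object square = { draw_square };
        draw(&circle);  /* prints: drawing a circle */
        draw(&square);  /* prints: drawing a square */
        return 0;
    }

Each object carries its own routine for the verb, so a new kind of object requires no change to any caller.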

The beauty of OOPC was its unsurpassed flexibility. Most object systems allow only the data in objects to change while a program (or OS) is running. In OOPC, object behaviors could be defined and dynamically modified as the software ran. Thus, objects could learn through experience.
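A rough flavor of that dynamism, again as a plain-C sketch with illustrative names rather than actual OOPC: reassigning an object’s behavior slot while the program runs means the same object subsequently responds differently to the same verb.

    /* Illustrative sketch in plain C, not OOPC. */
    #include <stdio.h>

    typedef struct { void (*greet)(void); } Object;

    static void greet_plain(void)   { puts("hello"); }
    static void greet_learned(void) { puts("hello, world"); }

    int main(void) {
        Object o = { greet_plain };
        o.greet();                /* original behavior: hello */
        o.greet = greet_learned;  /* redefine the behavior while running */
        o.greet();                /* same object, new behavior: hello, world */
        return 0;
    }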

Because OOPC code was dynamically reusable, an OS running OOPC could have a consistent user interface and core functionality built in. Programmers would need only to program the additional features they wanted a specific application to have.
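A minimal sketch of that reuse model, with hypothetical names (Window, os_default_window, my_print) standing in for whatever an OOPC-based OS would actually supply: the application starts from the complete behavior table the OS provides and overrides only the one verb it wants to change.

    /* Illustrative sketch in plain C, not OOPC. */
    #include <stdio.h>

    /* The "OS" supplies a complete table of verbs for a window. */
    typedef struct {
        void (*draw)(void);
        void (*print)(void);
        void (*save)(void);
    } Window;

    static void os_draw(void)  { puts("OS draws the window"); }
    static void os_print(void) { puts("OS prints the window"); }
    static void os_save(void)  { puts("OS saves the window"); }

    static const Window os_default_window = { os_draw, os_print, os_save };

    /* The application overrides only the one behavior it adds. */
    static void my_print(void) { puts("app-specific printing"); }

    int main(void) {
        Window w = os_default_window;  /* reuse everything the OS provides */
        w.print = my_print;            /* program only the added feature */
        w.draw();
        w.print();
        w.save();
        return 0;
    }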

Apple and Microsoft both became aware of OOPC in the early 1990s. Software engineers in both companies were simply not smart enough to appreciate what they saw. In 2010, seeing how crude software remained, GO futilely tried again to promote OOPC to software corporations worldwide.

OOPC illustrates how corporate institutionalization precludes innovation. Still far ahead of its time, OOPC retains the sterling promise of what software should be.
