This summer, while reading James R. Gaines’ Evening in the Palace of Reason: Bach Meets Frederick the Great in the Age of Enlightenment, I was struck by his description of the Baroque era’s conception of earthly music and its relation to the Divine: “Some thought the celestial music was abstract, an ephemeral spiritual object, but others insisted it was real, inaudible to us only because it has been sounding constantly in the background from the time of our birth. In either case music was a manifestation of the cosmic order.”
This sentence comes early in the book, on page 49, in a section recounting the story of the “moonlight manuscript,” when a young Johann Sebastian Bach secretly copied his older brother’s prized anthology of musical masterpieces to discover the secret of counterpoint—counterpoint being the “elaborated codes and principles” used by composers to write “note against note (punctus contra punctum)” and braid “relevant vocal lines through one another to form increasingly rich weaves of melody.” These codes and principles were handed down from master composer to apprentice as carefully and discreetly as the secrets of alchemy, and were just as closely guarded. When his brother discovered him, Gaines tells us, Bach was forced to destroy the manuscript, his act of copying considered akin to stealing trade secrets or other proprietary information.
Gaines vividly explains that in Bach’s time, “just as the alchemist’s ambition was to discover God’s laws for ‘perfecting’ iron into gold, the learned composer’s job was to attempt to replicate in earthly music the celestial harmony with which God had joined and imbued the universe, and so in a way to take part in the act of creation itself.” In other words, counterpoint was the mathematical means through which composers translated the sounds of the divine world into a musical score that early listeners could hear. Counterpoint was a language technology—a code!—through which to compose ephemeral spiritual objects.
Gaines traces this idea of music’s relationship to numbers from Plato to Newton, through Pythagoras, Euclid and Boethius, Martin Luther and Kepler:
No less than the seventeenth-century astronomer Johannes Kepler gave Luther’s position the stamp of scientific certainty in his great work, Harmonices Mundi, where he correlates the orbits of the planets to the intervals of the scale and finds them to be ‘nothing other than a continuous, many voiced music (grasped by the understanding, not by the ear).’ This last point was debated: some thought the celestial music was abstract, an ephemeral spiritual object, but others insisted it was real, inaudible to us only because it has been sounding constantly in the background from the time of our birth.
So in the Western tradition, even as far back as the Greeks, we see three phenomena that continue today: 1) music as a mode of inquiry into the nature of the material and ephemeral worlds; 2) composition as a system for the design and creation of ephemeral objects; and 3) technology – first in the form of musical instruments, later in the form of computers and sound systems – as a tool for creative expression.
Since beginning this research project, I have spoken with many people who look at the history of computer programming as an applied philosophical endeavor in which increasingly higher order levels of programming languages are created in parallel with increasingly sophisticated insights into human cognition and the nature of reality itself. The tangible relationship between science, metaphysics and computer programming stretches back as far as Ada Lovelace, daughter of the poet Lord Byron, who is often identified as the first computer programmer. As Dr. Betty Alexandra Toole, author of Ada, the Enchantress of Numbers, has written, “[Lovelace] hoped to be ‘an analyst and a metaphysician’. In her thirties she wrote her mother, ‘If you can’t give me poetry, can’t you give me poetical science?’ Her understanding of mathematics was laced with imagination, and described in metaphors.”
I embarked on this project based on a hunch and just enough knowledge of computer programming, art, and technology to be dangerous. Unsurprisingly, I did not wholly anticipate the staggering complexity of this undertaking. The area of investigation where I felt least familiar was that of computer programming. Given my near-total ignorance of these matters, I was fortunate enough to interview two erudite and insightful individuals who helpfully provided information and explanation.
I first met writer/programmer Paul Ford when he was blogging at Ftrain.com and I was co-organizing the WYSIWYG Talent Show at Performance Space 122. Daniel Segan is a programmer and electronic musician whom I met at a Harvestworks panel on Sound Art in June 2014. I sat down with each of them separately to discuss computer programming, and upon listening back to the interviews I noticed a theme emerge.
Both Segan and Ford got into computers through music. For both of them, the impulse towards creative expression and exploration in music was paired with a determination to learn new technologies. Paul told me, “I was and am still, fascinated by this world where you don’t write a program so that it executes but rather to create some kind of an aesthetic object.”
Ford first started programming in a language called Csound, a sound-synthesis language first created at MIT in the 1980s, while he was working as a web developer in the 1990s. Segan started programming out of an interest in electronic music, telling me, “I got really into the idea that you could plug a synthesizer into a computer and sequence it to make it do music. I thought that was the coolest thing ever!”
Segan continued, “Working in this industry for so long, I’ve discovered that the best software people are almost always the people who studied music. It’s bizarrely consistent. Many people in my industry don’t have CS [computer science] degrees; that’s not their first line of study. When you find a good programmer odds are music was, and still is, a big part of their life.”
One shared characteristic of Ford, Segan, and most of the people I’ve interviewed for this project is an eagerness to solve problems by just trying things out and teaching themselves how to do things. This characteristic is shared by most of the really creative, innovative artists I’ve known, but it is interesting to observe how “trying something out” in a programming environment differs from – and is consistent with – the improvisations and experiments used by artists working in “live” disciplines.
It seems fitting, then, to do a bit of a deep dive into programming languages, at least enough to provide a general audience with some vocabulary and tools to consider how art is made in a digital environment.
The technologies and methods used by artists working in live performance are more or less familiar to a general audience, while the technologies of artists working in digital media are often quite mysterious to the public at large. As artist/composer R. Luke DuBois recently said in ArtSpace, “The history of music is the history of technology. Instruments are technology; notation is technology; architectural acoustics are technology; amplification and phonography are (a bit more recent) technology. So unless you’re improvising (or reciting from memory) vocal music and performing it, a cappella, outdoors, you’re using technology.”
I asked Daniel Segan to help me understand computer languages. He started by explaining computers in the most basic way possible: “when you get down to the bare metal, the only thing a computer can do is add.”
He continued, “all of the operations are just variants of addition. How do you do subtraction by addition? Add a negative number. How do you do multiplication? You just add and add and add and add. That’s really all it can do. And if you ever saw the actual machine instructions computers use, it’s pretty basic.”
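Segan’s point that every operation reduces to addition can be illustrated with a toy sketch in Python. This is a deliberately naive illustration, not how real processors work (they use binary adders and two’s-complement arithmetic), but it captures the spirit of his remark:

```python
# Toy illustration of Segan's point: build other arithmetic out of addition alone.
# (Hypothetical example; real CPUs implement this in hardware very differently.)

def subtract(a, b):
    # Subtraction is just addition of a negated number.
    return a + (-b)

def multiply(a, b):
    # Multiplication is repeated addition.
    result = 0
    for _ in range(abs(b)):
        result = result + a
    return result if b >= 0 else -result

print(subtract(10, 4))   # 6
print(multiply(7, 3))    # 21
```

Everything else a computer appears to do, from rendering a web page to synthesizing a chord, is layered on top of operations this primitive.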
Since all a computer can do is basic arithmetic, coders need a programming language to translate things a human can understand into a set of instructions a computer can understand. The lowest-level programming languages are machine languages, which consist entirely of numbers and are pretty much incomprehensible to humans. The next level up are “assembly languages,” which use the same instructions as a machine language but include words, not just numbers. Finally, we arrive at what most people talk about when they talk about programming languages: high-level languages like BASIC, C++, COBOL, FORTRAN and Pascal.
So a human being writes “source code” (or instructions) in a high-level programming language that is input into a “compiler,” which is a program that translates source code into object code, which in turn is translated into machine language (which, being all numbers, is the only language the computer actually understands).
In computing, as in life, the more you can delegate mundane, repetitive task-based work to someone (or something) else, the more attention you can devote to “higher order” problems and ideas. So when we look at the history of the development of computer programming languages, we see a tendency to become higher and higher level, to constantly seek increasing levels of abstraction, and we can think of some programming languages as distinct languages unto themselves and others as dialects of a parent language.
“It’s kind of like, if you want to do engineering, it’s good to know German,” said Segan, making a comparison to more familiar languages. “If you want to pick up a girl, it’s good to know French; if you want to find work it’s good to know English; if you’re involved in manufacturing it’s good to know Mandarin.”
So among the many paradigms of higher-level programming languages, historically we start with what are known as functional computer languages that, as Segan describes it, consist of “pure input and pure output: you pass something into the computer, it chews on that and outputs a result. The input data is completely stateless. You pass something in and you get a result without relying on any information about the state of the data that isn’t already there.”
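Segan’s description of “pure input and pure output” is what programmers call a pure function: the result depends only on what you pass in, with no hidden state and no side effects. A minimal, hypothetical sketch (the function name and note numbers are my own invention for illustration):

```python
# A pure ("stateless") function: same input always yields the same output,
# and nothing outside the function is read or changed.
def transpose(pitches, interval):
    # Shift every pitch up (or down) by a fixed interval.
    return [p + interval for p in pitches]

c_major_triad = [60, 64, 67]          # MIDI note numbers: C, E, G
print(transpose(c_major_triad, 2))    # [62, 66, 69] -- a whole step up, every time
```

The computer “chews on” the input and returns a result; the input list itself is never modified, and no information outside the call is consulted.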
Then we move on to procedural languages. “In procedural languages, the ways that you express things, your data structures, don’t model the real world [as in Object Oriented Programming],” Daniel Segan told me. “They were developed as a series of instructions that a computer would carry out: ‘Do this, do that, do the other thing.’ It was higher level, it was English-readable in some respects; if you knew a little bit of math and a little bit of English you could read it. It’s sequential, it reads like a narrative. But there’s no high-level abstraction, there’s no wrapping things up in other ideas, it is what it is, ‘do this or do that,’ like a ‘choose your own adventure’ book.”
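A procedural program in the sense Segan describes is simply a sequence of instructions read top to bottom, like a narrative. A hypothetical sketch (the grades are invented for illustration):

```python
# Procedural style: a plain sequence of steps, no objects, no abstraction.
# "Do this, do that, do the other thing," read top to bottom.
total = 0
count = 0
for grade in [88, 92, 75]:
    total = total + grade    # do this
    count = count + 1        # do that
average = total / count      # do the other thing
print(average)               # 85.0
```

There is nothing “wrapped up in other ideas” here: the data is bare numbers, and the program is just the story of what happens to them, in order.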
Eventually, we arrived at Object Oriented Programming, which was a major paradigm shift. The term “Object Oriented Programming” is widely attributed to programmer Alan Kay, who was a team member at Xerox’s Palo Alto Research Center (PARC) where they were developing the Graphical User Interface (GUI). A GUI is what we all use now; we interact with the computer through recognizable, clickable pictures. In order to create a graphical user interface they had to figure out how to model real world objects in a programming environment, which led to the basic paradigm of Object Oriented Programming.
When I interviewed Chris Ashworth from Figure 53 in March, he said, “A properly organized object oriented world has these interfaces between objects that hide the detail of the implementation of the object,” which I found to be a helpful and concise description of a very complex system.
Let’s return to the development of counterpoint in the 18th century as a way of understanding “celestial music.” In much the same way that counterpoint corresponds with a conception of a divinely ordered universe, Object Oriented Programming implies a similarly comprehensive worldview, one that has steadily migrated into other disciplines—Object Oriented Philosophy, Object Oriented Ontology, Object Oriented Design—in ways that are at once tantalizing and tenuous. The conceptual shift to Object Oriented Programming can be seen to be as significant as the widespread adoption of “learned counterpoint” in music.
If Object Orientation can be said to be a worldview, it is, approximately, this: an “object” is an entity that is tangible and legible, with visible properties and associated, predictable behaviors. However, the details of how those properties attach to the object – the causes and mechanisms by which it enacts those behaviors – are hidden from our view.
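That worldview of visible behaviors and hidden mechanisms is what programmers call encapsulation, and it is exactly the interface Ashworth described. A minimal, hypothetical Python sketch (the class and its numbers are invented for illustration):

```python
# An object exposes predictable behavior while hiding its internal mechanism.
class Metronome:
    def __init__(self, bpm):
        self._bpm = bpm                 # leading underscore: an internal detail,
                                        # by convention not for outside use

    def beat_duration(self):
        # Callers see the behavior (seconds per beat),
        # not the arithmetic behind it.
        return 60.0 / self._bpm

m = Metronome(120)
print(m.beat_duration())   # 0.5 -- half a second per beat
```

The caller interacts only with the interface, `beat_duration()`; how the object computes its answer could change entirely without the caller ever knowing.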
When OOP first emerged as a paradigm, there were only a handful of languages and they were mostly developed by vendors, for-profit software companies like Microsoft and Borland, and so the ecosystem was designed by, and very centered on, what vendors wanted. With the rise of Linux, the Open Source and DIY movements, with the ascendancy of Google and Big Data and, more recently, the exponential growth of the mobile space, the infallibility of Object Orientation has come under question as a new generation of programmers redefines the field.
“It’s really interesting because there’s been a huge renaissance in programming languages. After the dotcom bust things were dark for a while but around 2005 unemployment in tech was 0% so a lot of smart kids got into CS and now they’re doing their own things,” Segan said. Like any comprehensive theory, be it in art, science, religion, philosophy or technology, Object Orientation – both as programming language paradigm and worldview – has attracted, as it has migrated across sectors, a coterie of acolytes and critics. Segan told me, “for the longest time the concept of ‘Object Oriented’ was blindly accepted as the way we were doing things moving forward, but some problems don’t necessarily need to be thought about that way.”
This renaissance in programming languages—corresponding, as it does, with increased access to the tools of technology—is not unlike the disruption in musical pedagogy and cultural production during Bach’s lifetime. That disruption was The Enlightenment. Gaines writes:
“Frederick [the Great] and his generation…denigrated counterpoint as the vestige of an outworn aesthetic, extolling instead the ‘natural and delightful’ in music, by which they meant the easier pleasure of song, the harmonic ornamentation of a single line of melody. For Bach this new, so-called galant style, with all its lovely figures and stylish grace, was full of emptiness. Bach’s cosmos was one in which the planets themselves played the ultimate harmony, a tenet that had been unquestioned since the ‘sacred science’ of Pythagoras; composing and performing music was for him and his musical ancestors a deeply spiritual enterprise whose sole purpose, as his works were inscribed, was ‘for the glory of God.’ For Frederick the goal of music was simply to be ‘agreeable’, an entertainment and a diversion, easy work for performer and audience alike. He despised music that, as he put it, ‘smells of the church’ and called Bach’s chorales specifically ‘dumb stuff.’ Cosmic notions like the ‘music of the spheres’ were for him so much dark-age mumbo jumbo.”
Bach’s patrons were primarily civic and ecclesiastical bodies; he was employed by local government to write music for the local church in service to communal, spiritual, religious, and civic needs and aspirations. With the arrival of The Enlightenment, the rise of rationalism and the wane of strict theological constrictions on cultural production, music education moved out of the sole purview of sacred institutions. At the same time, the patrons of composers no longer wanted music that “smells of the church.” “Old Bach” as Frederick referred to him, had become a relic, his work outdated and old-fashioned.
So we see a few things happening. The Enlightenment changes the expectations of what music is supposed to do and how it is supposed to sound; it also changes the understanding of what a composer is, as an artist.
While Bach was undoubtedly a genius, he would never have conceived of himself as an “artist” in the sense that the word is now widely understood. He would not have seen his work as an articulation of an idiosyncratic individual vision that demanded expression; his innovation was in the complexity of his perception of the world around him, his ability to articulate that complexity in music, and to do so within the constraints of his time, while simultaneously pushing the form forward. This new, Enlightenment era understanding of “genius” was embodied in composers like Mozart and Beethoven and became even more individualistic in the Romantic era.
At the same time that the conceptual underpinnings of music are changing, the funding sources of composers are too, with a corresponding shift in where music is being heard – from church to chamber. And as David Byrne explains in his TED talk, “How Architecture Helped Music Evolve”, venue and intended audience affect not only the sound of music but also the production and performance practices of musicians.
With the rise of the Internet, Open Source and DIY culture, the venue and audience for computer programming have changed. Where once both the functional and aesthetic outputs of computers were for a small, knowledgeable audience, that audience has expanded to include a world of novices, hobbyists, and—well, anyone reading this on a computer screen. In an age of ubiquitous connectivity – and interconnectivity – we can posit that all digital space is a performance space, concert hall, theater or museum, where anyone at any time can be a content creator, performer, audience, or all three at once.
So what does that mean in terms of understanding the domain of art in the digital age? In some ways, things haven’t actually, fundamentally changed. Human beings have always created art, everywhere. It is just that with this new technology, we are more aware of “everywhere” than ever before. In fact art is, as we can be, “everywhere at once.” And as we become more conscious of “everyone, everywhere, all at once,” the hierarchical structures we previously used to categorize art flatten out and now seem inadequate to parse a vastly expanded field of cultural production.
Within the rubric of 21st century cultural production—Maker Culture, “everyone a creator”, “content providers,” etc.—there are many, many layers of creative expression and artistry. There are creators of all kinds making things at home, on their computers, or on handheld devices and sharing them on the Internet. There are artists using sophisticated technological tools of the moment, including computer programming, to create works in both physical and mediated environments, sometimes simultaneously. There are computer programmers and software developers who, in their own way, are pioneering a nascent art form whose “products” may exist in any number of forms both known and yet to be defined.
Given how rapidly the personal computer, the Internet, and pervasive digital technology have been introduced into our society, we often forget that we are only at the very beginning of this new world. Because of the pace at which digital technologies have transformed culture, we often forget that human history is an ongoing narrative of evolving technology.
Paul Ford insightfully noted as much:
“Writing is a technology – we just take the alphabet for granted. Somebody came up with that, you know? How to arrange characters in a sequence, include spaces: it’s an established technology that we take for granted.
I have a book at home that’s a dictionary of paper published by the paper craftsmen’s union. It’s really thick. To get that dictionary done and to read through it takes about as much effort as reading a book about how computers work. It’s a real pain in the ass. Learning about paper makes you appreciate the technology. Paper’s heavy, it has different weights, there’s a lot of chemistry involved, ultimately there’s physics involved. It’s just they’ve got it locked down.”
He laughed and said, “If I asked you to make me a book starting with a pile of papyrus, you’d be in big trouble. Not only that – you’d have to make the ink, all that. But these days it’s all there for the taking! And it’s so cheap!”
I was reminded of my earlier conversation with composer/programmer Mark Coniglio of Troika Ranch who told me, “One of the examples I often give is the invention of the piano. So we have the harpsichord, right? And then this new invention comes, the pianoforte, and suddenly, every piece of music is being written to show off what the piano can do: really, really soft! REALLY, REALLY LOUD! Really, really soft! It’s like that. Or, in the history of cinema the Lumière Brothers have a train coming towards the audience and everyone runs screaming from the theater because they don’t actually know that it’s not a real train. The Lumière Brothers are playing games with this new technology. It’s normal that when delving into new technology the first step is ‘What spectacular thing can I do with this?’ because it’s interesting to see what’s spectacular about it. And then it starts to develop into something real.”
So how does the adoption of new technology in the analog era correlate to the adoption of new technology in the digital age?
Piano technology evolved at a rapid clip, from Gottfried Silbermann’s version that finally won Bach’s approval in 1747, through the Mozart era, into the Industrial Revolution, which saw innovations like piano wire (for strings) and precision casting for the iron frames, until the 1860s, when the famous Steinway piano factory in Manhattan reached peak operation, producing 1,800 pianos a year.
As the technology became more widely available, we saw entire new genres of music arise, particularly in America: Scott Joplin popularizes ragtime, jazz is born, honky-tonk and rhythm and blues follow. Eventually we see composers like George Gershwin combine jazz and symphonic music, and the likes of Richard Rodgers and Leonard Bernstein bring popular and symphonic music together in a Golden Age of American Musical Theater.
And through these more popular aesthetic experiments and innovations, the broader public came to have at least a passing familiarity with the technology – sheet music, instruments, perhaps even a smattering of music theory. The more conceptual and theoretical experiments of Europeans like Schoenberg, or Americans like Ives and Cage, would have had even smaller audiences, had it not been for the popular dissemination of music technology and knowledge – even in the form of the player piano – that invited participation and engagement from a wider swath of the public.
We have seen a similar trend in computer programming. As the Graphical User Interface has evolved to include WYSIWYG (“What You See Is What You Get”) programming environments and many programming languages become more widely accessible, the general public is gradually becoming more familiar with programming as a profession, practice, and domain of knowledge.
I asked Paul Ford about this. Since we are familiar with writing as a technology, and we are more or less familiar with pre-digital technologies for writing and performing music, does he expect programming to become as familiar to most people as writing is now?
“Nobody writes,” he laughed. “It’s a really, really small percentage of the universe that actually writes, especially for a living. Writing words is not a common thing, and to write lots of them is really uncommon.” He continued, “I mean, people already know a little bit of programming: they know Excel. Excel is programming half the time, you put an equals sign in or do some addition and it’s basically array programming. I think we’re going to get to the point where everyone’s going to know just enough to get their stuff done – write a memo, update a spreadsheet – but not much more. In the same way that not everyone really knows how to write, I doubt that everyone’s going to learn programming.”
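Ford’s Excel observation can be made concrete: a spreadsheet formula like `=SUM(B1:B3)` is, in effect, a tiny array program applied to a column of cells. Here is a hypothetical sketch in Python (the cell names and values are invented for illustration):

```python
# A spreadsheet, minimally: named cells holding values,
# and a "formula" that computes one cell from others.
cells = {"B1": 19.99, "B2": 5.49, "B3": 12.00}

def formula_sum(sheet, refs):
    # The equivalent of typing =SUM(B1:B3) into a cell.
    return sum(sheet[r] for r in refs)

cells["B4"] = formula_sum(cells, ["B1", "B2", "B3"])
print(cells["B4"])   # 37.48
```

Anyone who has written such a formula has, in Ford’s sense, already done a little array programming; they have named data, applied an operation across it, and stored the result.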
Not too long ago Paul published an essay called, “The Great Works of Software” that had, in part, prompted me to pursue the notion of composers as programmers. I began to speculate on whether it was possible to articulate an aesthetics of code that comprised both the form of the writing and the audience’s experience of the output.
“Elegant code is a huge thing; it’s a big deal,” Segan said. “For programmers or developers who take pride in their work, they’re serious about architecture and elegant design; it’s not just how fast something runs or raw performance, it comes down to how maintainable it is, how extensible it is. People take it seriously and there is a very artisanal mentality with a lot of software developers.”
I would posit that programming starts to resolve as a discrete creative practice when we look at certain tangible skills: choosing the right language to address a specific problem set, or fluency in a parent language and its multiple dialects, for instance. But it is not just what you create; it is how you get there. Along with hard knowledge, there are still the less quantifiable characteristics of creativity, talent and skill – the ability to write elegantly and well in any given language – that might elevate some programmers as more expressive or artful than others.
At the same time, since writing either text or music as an expressive form is a relatively “locked down” technology, we’ve developed ways of evaluating it, of evaluating the skill and talent of a writer: their ability to work within the form as well as their ability to experiment, innovate and even re-invent the form. People write in different languages, in formal prose and in the vernacular, in created dialects and multiple voices and perspectives. How can we – and should we – apply these criteria to programming? Can we even think of computer programming as an expressive medium?
In many ways the current tech world has been so subsumed by venture capital and product marketing, it is nearly impossible to have a higher order conversation on the aesthetic or philosophical implications of programming unto itself. Paul Ford noted, “There are market forces assigned to you [as a programmer] that don’t make sense in the way that people talk about art or literature. Consider that there’s a province of literature, a province of poetry, there’s a province of art and theater. There are sets of forms and rules; there’s an internal logic to each one of those, and there are people who are up and down in their world at any given time. And then tech is just this giant substrate where you can go make a lot of money. It’s like the Roman Empire just going out and hitting these provinces and absorbing them. It’s tricky.”
That image of tech as the Roman Empire rolling implacably over the landscape stuck with me. I thought about the way the Roman Empire subsumed the ideas that preceded it – Greek gods and philosophy – and assimilated the ideas and cultures it conquered. I thought about the erosion of complexity under the dictates of the Empire, with its voracious appetite for the new: new ideas, new conquests, new revenue streams and new fashions.
Certainly today’s wealthy tech sector is calling the piper’s tune, and it shares with Ancient Rome a voracious, and fickle, appetite for the new. These days, as Daniel Segan told me, there is even a graph for this appetite: Gartner Research’s “Hype Cycle.” The Hype Cycle graph begins with a steep spike from zero to a peak. This describes the infatuation moment, when a new product or idea emerges and everyone in the sector turns on a dime to embrace it. It is immediately followed by what Gartner calls the “Trough of Disillusionment,” when the Utopian promise of the new product or idea is discovered to have flaws and everyone shuns it. Finally comes a long, slow period of incremental growth over time as the product, technology or idea matures and is adopted more prudently.
Segan told me, “It’s a bit crazy and it’s all driven by the same underlying cause: the money. Things have kind of changed over the past ten or fifteen years – during my lifetime in the industry at least. People didn’t used to be as driven by the extreme bottom line, it wasn’t such a short-term thing. But these days you could take that graph and apply it to anything.”
Like Hollywood’s blockbuster mentality or the music industry’s market saturation strategies (all Lady Gaga, all the time, until the next one comes along), market ideologies predicated on minimizing risk and maximizing short-term ROI tend to work against the long development cycles associated with sustainable innovation and measured growth. They work against the kind of slow progress that investing in human capital and creative imagination requires. Yet history has demonstrated, time and again, that great works of art and profound breakthroughs in our understanding of the world require time and, over the long term, yield much greater returns.
In this context, one can imagine using the central metaphor of Object Orientation to describe the tech sector as a whole. It is an interface designed to hide the implementation of the program itself, favoring the readily accessible and easily understood; all challenges can be solved through seamless efficiency and elegant simplicity despite the stubborn truth that complex, real world problems usually require similarly complex, and often inelegant, solutions.
I am reminded again of Bach. His music and worldview fell out of favor towards the end of his life and for many years after, as the tastes and ideas of the “payers” changed. Yet not only was he eventually returned to good standing, he was posthumously recognized as one of Western music’s greatest geniuses.
Bach’s profound and complex worldview was subsumed by The Enlightenment, with, as Gaines writes, its “cramped mission” and limited expectations for music to be “lightweight.” But in the face of the devastating destruction wrought by the 1755 Lisbon earthquake, Enlightenment-era European intellectuals were compelled to temper their boundless optimism and unshakeable belief in the infallibility of Reason. The tragedy of the earthquake and the ensuing crisis of faith drew composers like Mozart and Beethoven back to J.S. Bach in search of compositional inspiration, to move from frivolity to profundity.
One can imagine, then, our current Tech Bubble moment, with its characteristic boundless optimism and self-confident assurance that all problems can be solved through technology, as a spiritual cousin to the Age of Enlightenment’s honeymoon period of 1745–1755.
The vagaries and fickleness represented by the Hype Cycle are merely the most recent articulation of fundamental human behavior, behavior that has not changed, nor is likely to ever change. With each technological leap comes the promise of an imagined future that, rarely, if ever, arrives, and almost never as it was originally envisioned. But this cycle of hope and disillusionment is what keeps the wheel of progress turning; we occupy one point on the wheel and are rarely fully aware of the long, slow arc of incremental change over time. Taking the long view, we are likely in an infatuation stage and will almost certainly become disillusioned again, looking back at this moment with as much consternation and disbelief as we now regard the Utopian promises of the Atomic Age.
And yet, there will be a very few wonderful creations that will endure.
As citizens of the 21st century we are almost entirely unfamiliar with the theological, philosophical and aesthetic assumptions that would have been universally accepted by Bach’s original audience. We can hardly imagine the theologically ordered cosmology of Bach’s world, much less the certainty and sense of mission emerging from his faith. But we can still aesthetically appreciate Bach’s works, and that sensual enjoyment of the work itself is only the beginning of understanding the artist’s genius and accomplishment – only the beginning of a never-ending investigation into the limitless complexity of the world around us.
What is exciting about this moment in time is the efflorescence of new ideas, the new horizons opened up by the emergence of new technology and the promises implied in being able to do things differently, to think differently, to increase our understanding and thus change ourselves, and the world we inhabit, for the better.
And so, just as Bach, when encountering the pianoforte for the first time near the end of his life, must have intimated what lay ahead for music, today’s artists, audiences and thinkers must speculate on what contemporary technological innovation augurs for aesthetic appreciation in emerging fields and, just for fun, place our bets on what will endure.