A few weeks ago I went down to Baltimore to talk to Christopher Ashworth and Adam Bachman of Figure 53. Figure 53 is the software company responsible for the sound and video design software QLab, and a new product called Tixato that is rapidly disrupting the ticketing software market.
Ashworth wrote the first version of QLab to help some theater friends manage complicated sound cues, and it has since grown into the industry standard application for designers creating integrated sound and video composition for live performance. I talked to Chris and his colleague Adam Bachman about the evolution of QLab and the art of creating beautiful tools for sculpting in space/time.
Andy Horwitz: So as I mentioned, my project is premised on the idea of mapping the foundational concepts of Object Oriented Programming onto live performance. So I’d like to start with a tangible example. QLab supports 48 separate channels. If you had a full 48 channels running, each one had a separate sound on it, and each sound had – like in the old days – a series of knobs controlling pan, effects, reverb, etc., can we consider each sound as a discrete object and each of those settings as properties associated with that sound object?
Chris Ashworth: Oh yeah, that is what we do. It’s interesting. When you use the term “Object Oriented” with programmers it means something very specific to us; there is nothing that can be undefined in that term. It means very specific things, right? Like encapsulation, abstraction, indirection, inheritance, reuse, modularity…
Adam Bachman: Reuse is a fun one because you can really move from the abstract and apply it to real world situations. For example, a real world object that does not observe good modularity is, like, an iPhone dock that only fits one particular iPhone and clips to one particular place in the car, and if you get a new car or you get a new iPhone you have to trash it. That’s called a tightly coupled system and in software that’s bad because if you make a change in one place, you have to make changes everywhere.
CA: That’s one of the nice things about object orientation. A properly organized object oriented world has these interfaces between objects that hide the detail of the implementation of the object. The whole point is that you can change the way an object behaves under the hood and it doesn’t have to change everything else in the whole system.
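The loose coupling Ashworth describes can be sketched in code. This is a minimal illustration, not QLab’s actual implementation; the `Player` interface and its subclasses are hypothetical names chosen for the example. The point is that the caller depends only on the interface, so the implementation underneath can change without touching anything else.

```python
# Illustrative sketch of loose coupling: callers depend only on the
# interface (play/stop), never on how playback is implemented.
class Player:
    def play(self):
        raise NotImplementedError

    def stop(self):
        raise NotImplementedError


class FilePlayer(Player):
    # One possible implementation: playback from a local file.
    def play(self):
        return "playing from disk"

    def stop(self):
        return "stopped"


class StreamPlayer(Player):
    # A different implementation behind the same interface.
    def play(self):
        return "playing from network stream"

    def stop(self):
        return "stopped"


def run_cue(player: Player):
    # This function never changes, even if the player implementation does.
    return player.play()
```

Swapping `FilePlayer` for `StreamPlayer` requires no change to `run_cue` – that is the “hide the detail of the implementation” idea in miniature.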
AB: And that’s big in designing a program like QLab. You don’t have to see all the details of the implementation, you can just interface with the cue as the fundamental object and then the behaviors of all the different kinds of cues – sound cue, video cue, live mic input, camera input, fades, triggers, midi – cascade from there.
CA: Specifically, every single cue in QLab is literally a “cue object” and they all have common properties. They all have names, they all have timing information, properties associated with them like whether or not they might be triggered by an incoming MIDI signal. Every single cue might share those kinds of things, and specific kinds of cues – like an audio cue or a video cue – inherit from that master class.
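The inheritance relationship Ashworth describes can be sketched as follows. This is an illustrative toy, not QLab’s actual source; the class and property names (`pre_wait`, `midi_trigger`, and so on) are invented for the example.

```python
# Illustrative sketch: every cue type inherits common properties
# (name, timing, MIDI trigger) from a single base class.
class Cue:
    def __init__(self, name, pre_wait=0.0, midi_trigger=None):
        self.name = name                  # every cue has a name
        self.pre_wait = pre_wait          # common timing information
        self.midi_trigger = midi_trigger  # optional incoming-MIDI trigger

    def go(self):
        return f"Running cue: {self.name}"


class AudioCue(Cue):
    def __init__(self, name, file_path, **kwargs):
        super().__init__(name, **kwargs)
        self.file_path = file_path        # audio-specific property


class VideoCue(Cue):
    def __init__(self, name, file_path, surface=0, **kwargs):
        super().__init__(name, **kwargs)
        self.file_path = file_path
        self.surface = surface            # video-specific property
```

An `AudioCue` gets the shared name, timing, and trigger machinery for free and only adds what makes it an audio cue.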
So all of the cues are one class?
And “cue” is a class of object?
CA: Yes. Audio cue is a subclass and video cue is a separate subclass. An interesting idea that relates to Object Oriented is the composition of objects. There’s a concept known as “design patterns” that applies to many fields, but computer science and engineering are really excited and happy about it. It came from an architect, Christopher Alexander, who wrote the book A Pattern Language. He introduced the idea that you could identify patterns that are reused over and over by professionals in a field, pull them out and use them as a language to talk about your industry.
So, for instance, you could consider Anne Bogart’s Viewpoints as a pattern language of her work. There are entire classes devoted to design patterns in computer science and “Object Oriented design” is one design pattern, but there are many design patterns that pop up in the real world.
AB: In Christopher Alexander’s work he went from very low level like the shape of a window, the shape of a doorway, the space in a room, to very high level like the design of a village, how professionals interact with residents, and how those spaces evolve together and by necessity rely on each other. In computer science there are very small patterns like for storing a string, for interacting with strings, etc. And then there are very high-level patterns like the way that all events within a system are communicated throughout the system.
CA: So a design pattern can be small or large. And one of the design patterns that is an interesting combination with the Object Oriented pattern is composition. The way that happens in QLab is that we have a special kind of cue called “The Group” which just contains other cues; it composes multiple building blocks into one unit that can be carried around. So, as it turns out, every QLab document is actually, at the highest level, one cue. All of them! Every show is essentially one cue, and it just so happens that that one cue is broken apart into multiple sub-cues that have been broken apart into multiple sub-cues, and so on.
So it is kind of like in old versions of Flash, where there is one Master Timeline that runs the whole thing?
AB: Yeah, it’s moving fluidly between multiple scenes.
CA: The thing that makes that really powerful is that, if you can abstract all of these things into a compositional relationship, that means that you can treat groups of things in the same way that you would treat individual instances of them.
So I have a cue list – a cue list is a group of every cue in, let’s say, Act 1. I can do things like pause a cue and, for a compositional object if I pause a cue, it pauses everything in the cue list. So conceptually now I have one place to talk to many different cues at the same time because they’re all composed into what looks from the outside like a single cue. So I can do cue-like things to you: I can start you, I can stop you, I can pause you, I can do whatever, and it knows how to translate that to all the pieces inside it.
So object oriented composition is an extraordinarily powerful organizational principle because you can take simple building blocks, put them together, hide them as if they were a slightly more complex building block and get a higher level building block that can in turn be put together in more interesting ways.
So the process becomes: you compose simple things into more complex things and keep going up to the level of “here’s the entire show” which is essentially one cue. In the end the entire show – every single show that has ever been made in QLab, really – is one cue that just happens to be broken down into smaller pieces of cues. So it’s a really powerful concept, though I’m not sure how it applies to your question!
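The composition idea described above – a group is itself a cue, and cue-like operations propagate to everything inside it – is the classic composite pattern. Here is a minimal sketch, again with invented names rather than QLab’s actual code:

```python
# Illustrative composite pattern: a GroupCue is a Cue that just
# contains other cues, so "pause" propagates all the way down.
class Cue:
    def __init__(self, name):
        self.name = name
        self.paused = False

    def pause(self):
        self.paused = True


class GroupCue(Cue):
    def __init__(self, name, children=None):
        super().__init__(name)
        self.children = children or []

    def pause(self):
        super().pause()
        for cue in self.children:   # translate the command to every piece inside
            cue.pause()


# An entire "show" is, at the highest level, one cue composed of sub-cues.
act1 = GroupCue("Act 1", [
    Cue("thunder"),
    GroupCue("Scene 2", [Cue("rain")]),
])
act1.pause()  # pauses the group, the nested group, and every leaf cue
```

Because a `GroupCue` looks like any other `Cue` from the outside, groups nest to any depth – which is exactly why a whole show can be treated as one cue.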
Well, actually, it totally applies. It makes perfect sense to me. I have heard it said, and believe it to be true, that a performance begins when you first hear about it and doesn’t end until you stop thinking about it. And I’ve had that experience, whether attending Grateful Dead concerts or the final performances of the Merce Cunningham Company at the Park Avenue Armory, that the music, the dance, the art is always there, but it is when we’re all in the room together and it is embodied that it becomes manifest. That’s why live shows are so amazing, in my opinion.
CA: Well, you know, we think a lot about that, actually. With our new product, Tixato, we’ve really been thinking about the entire user experience from the moment you decide to attend a show and buy a ticket, and after. I talk about it in terms of increasing the surface area of your show or theater company, or what have you. How do you increase the surface area beyond just the show itself and put as much thought into every interaction along the way as you do into the show?
You are really thinking creatively about this and before you mentioned Anne Bogart and Viewpoints. What is your experience with that and how does it relate to QLab?
CA: I apprenticed at the Actors Theater of Louisville and one of the foundational experiences that led to creating QLab came to me on a show that Will Bond was doing for the Humana Festival. It had aspirations to be a fancy video projection show and there weren’t really tools for it at the time. So I was operating the show, the video part of it, and it was a nightmare. It should have been just “press go”, do the next video, do the next video but it was this custom built software and it was just awful.
After spending a year at Actors Theater of Louisville I wanted to get health insurance and income, so I went to grad school for computers. While I was there some of my friends from the apprentice company went off to form a little theater company in North Carolina that has since disbanded – the Theater of a Thousand Juliets. They wrote to me at the end of one of my semesters and said they needed help with this production they were doing. They had a complicated sound design, a CD player wasn’t going to quite cut it, they had a Mac but they couldn’t find any software on the Mac that would do what they wanted either.
I thought I would just be able to search for some program but I didn’t really find very much. Since I wasn’t enjoying my research that much, and I was looking for something to do that I actually enjoyed doing, this became a side project with some friends. A few weeks later we’d cobbled together this thing that they used to run the show.
It was so intense for that month or two that I kind of burned out and went back to trying to do homework again. Not too long after though, I was like, “That was really fun!” and since I was still not enjoying my grad school experience all that much, I thought I’d go back to working on QLab because I enjoyed it.
I kept poking at it for months and eventually got it to the point where I thought “Maybe I should share this with other people; I don’t know if anyone else would want to use it.” I found a mailing list for sound designers and said, “I’m working on this thing and I would love feedback,” and sort of threw it out into the world, and it started to snowball from there. It was a long process; I never expected that it would necessarily turn into a company by any means. I gave it away for free for a long time because I didn’t think anyone would pay for it. Then they told me, “I should pay for it so you can start a company and actually work on it full time.” And I said “Okay, if you want me to charge you I guess I will.” So that’s kind of where that all came from.
What problem were you trying to solve?
CA: My friends were looking for a way to play sound effects with a certain amount of sophistication beyond what a CD player could handle, like crossfading more than one sound at a time, but arranged dynamically. It wasn’t like they could record a track in an audio editing program and then just play it from start to end. The dynamic nature of live performance is that sometimes the timing is flexible and you need a component to be triggered by the actor completing a certain action, so the timing is slightly different every night. It was a relatively simple problem to solve – a certain sophistication of sound playback beyond what they could do with the tools available at the time. Then it grew into something a lot bigger.
The first version of it, we just did stereo. We had a document model so they could save a workspace and reopen it. We got to the point where you could build different shows and save the documents and open different documents.
And how has QLab evolved over time?
CA: One of our strongest points is that from the beginning we were incredibly closely connected to a lot of people using the software on a daily basis and we’ve tried to keep that going. We try to give users enough ways to interact with the program that they can add features to it.
For instance, we have one particular user who is a fiend for AppleScript and he’s built entire third-party programs that work alongside QLab, which is kind of amazing because AppleScript is a language that few people actually like. So we’ll look at what he’s written and we’re like “How’d you do this??”
AB: He’s a very sophisticated user – a sound designer by trade. He’s essentially added Power User features to QLab that we never thought of. Like, how to take every cue in the show and modify it in a particular way or renumber it, or how to do documentation on the cue list. Essentially he’s added entire sets of features that we never would have added ourselves.
CA: As QLab has matured there’s a lot of development work that has grown up around it. When we do support for it, our ability to understand what the heck is going on in a user’s computer is critical for us solving problems. So in Version Three we built this whole system to help us get feedback about what the specs of the machine are and what’s been going on in the machine.
Adam is primarily a web developer and he built the entire system that receives and tags crash reports and that kind of stuff. We’ve added a lot of scripting over time to let people do more custom programmer-y things. Adam wrote a supporting Ruby library that allows a Ruby programmer to easily interact with the QLab Machine.
Basically Adam has made it easier for people to make their own modifications or additions, so that spidering out from this central piece of software is a lot of other stuff that other smart people made. There are a lot of moving pieces that a lot of people have contributed to; the depth of the program as a whole goes much further than what you see when you launch it. There’s a lot more going on.
Does feedback from your community of users drive your product development process?
CA: Well, because we are closely connected to our users, we do have a very immediate sense of what people want from QLab that it’s not doing, or how they want it to change from what we thought was a good idea. We tend to make choices based on what we are hearing from them that they really, genuinely need. We don’t make choices in a way that is trying to impose an artistic perspective; I don’t see our job as trying to make an artistic statement about anything. We’re not trying to dictate how you make art, we’re trying to empower how you make art, whatever that happens to be.
Our choices tend to be about how we make a tool that is beautiful, not what people should make with that tool. So when people come to us and say “QLab needs to do x,” we may say, “Well, I disagree with the specific way that you think we should do x,” but we can back that up and ask, “What problem are you trying to solve?” or “What is your ultimate goal?” and see if we can fit that into the world of this tool in a way that doesn’t violate the principles of the tool. If so, we can probably work on that. If not, we may say no.
Are there any features that you’ve added – or not added – because they were controversial?
CA: Well, in our world the Sound Designer is kind of the lead artist and there has been some controversy around what constitutes design. For instance, one of QLab’s features is that you can have a bunch of sounds in a folder and if you press play on this folder it will just play one of the sounds at random, and this ended up being really controversial. A lot of designers say that if you do that, you’ve just stopped designing: if it’s random, it’s not designed, you’ve just thrown your job out the window. There’s a real divide between people who think that is something that is interesting and legitimate to use, and people who think you’ve completely given up your responsibility as a designer if you play something at random. So in the world of sound design that specific feature is a significant point of discussion. People feel pretty strongly about whether or not you use a random element in your design.
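The random-playback feature Ashworth describes amounts to picking one member of a group each time it fires. A minimal sketch of the idea (the function and file names here are hypothetical, not QLab’s API):

```python
import random

# Hypothetical sketch of "play one at random": the group holds several
# candidate sounds, and each GO picks one of them to play.
def pick_random_cue(cues):
    return random.choice(cues)


preshow = ["crickets_a.wav", "crickets_b.wav", "crickets_c.wav"]
chosen = pick_random_cue(preshow)  # a different member may be chosen each night
```

The designer still curates the pool of candidates; only the selection among them is left to chance – which is precisely the point the two camps argue about.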
I can see where people are coming from when they say if it’s random, it’s not designed. But I also think that randomness – and having actors react live, in real time, to a random cue – can create its own more interesting live experience, sort of in the way that I understand Anne Bogart’s strategies.
I’ve seen quotes from her where she talks about loading an actor up with actions to the point where they can’t do them any more, so it kind of breaks them down and out of their heads. If you’re trying to keep track of the five different things you’re doing at once, you stop being able to think “I am performing” and you’re just 100% exclusively trying to keep track of the five different things. At some point something shifts and there’s something more authentic and interesting about what you’re doing.
My understanding is that she purposefully pushes people to the point where there’s just no brain space left and they’re just reacting in the moment to what’s happening. It’s like, “You have to hop on one leg while you clap and do some vocal pattern thing!” And they’re so focused on these tasks that they’re just in the moment. So I think there’s something to the notion of saying, “Here are the ingredients of what might happen,” but since you don’t know exactly what will happen, there’s a certain amount of improvisational reaction and spontaneity.
But in defense of those people who are critical of using that randomness, I think sometimes what happens is someone says, “I don’t want to listen to the same preshow music every night so I’m going to randomize it”. In that situation I think there’s a pretty good argument for saying, “You’re not using randomization to improve the artistic experience, you’re using randomization lazily.”
The people who walk in and are sitting down in the seats at ten minutes to curtain and five minutes to curtain, they’re having an emotional, artistic experience there, the show doesn’t start at curtain and you are just throwing that to chance and have therefore abdicated your responsibility as a designer. I think that’s an example that comes up pretty often and that’s a legitimate place to say randomness is not appropriate. The artistic purpose we are trying to achieve on a more traditional theatrical event is not served by randomness in this particular place. I think they have a legitimate argument.
Speaking of randomness, algorithms are by definition not random. They may create the appearance of randomness but they’re actually highly ordered and complex. A program like QLab allows artists to create increasingly complex audiovisual landscapes. How has the relationship between technological innovation and artistic expression played out in the development of QLab?
CA: As I said before, we don’t try to impose an artistic perspective; we’re not trying to make an artistic statement about anything. We’re not trying to dictate how you make art, we’re trying to empower how you make art, whatever that happens to be.
I see it as two distinct modes of thinking. There’s a very practical mode of thinking where people are saying, “Here’s what we are trying to do today and if you made this change we could achieve it” and do it faster, better, or whatever.
Then there’s a larger, higher-level thing that we’re engaged in right now, which is trying to figure out what directions we may go in terms of new products. This is where we step back from the practical details and have pretty extensive high-level conversations as a company about “What is this tool we have made?”, “What is it serving to do?”, “What are the characteristics that make it it?”, and “If we made new products, how would those solve problems that we are not addressing with this product?”
We’re at a point now where we are explicitly saying this product is for such-and-such and it never will be for such-and-such other thing, but we could make another product that is for that other thing.
Let me give you a specific example:
QLab is designed with an operator in mind who is sitting in a booth operating a show. That is the entire paradigm of that world: a prepackaged show that has variability in how it plays out from night to night but is not, fundamentally, a particularly interactive experience. It may be interactive to the degree that interactivity has kind of bled into it over time – you can get triggers that cause things to happen in an interactive way – but QLab doesn’t have interactivity as a deep foundational principle. So it is prepackaged, it has an operator at the heart of it, there’s a person sitting there, we assume, and even though you can run it without a person, and people do, we’ve built it with a person in mind. So those are high-level principles that define it to be what it is. So we can look at that and say, “Here are the characteristics of this tool that we’ve made; what are the other kinds of tools we can make?”
Now we’re thinking about if we want to move more towards something that is explicitly interactive in a deep way that’s not tacked on later, that may or may not have a human being sitting there running it each time and that maybe exists on more than one machine, that kind of thing.
QLab pretty fundamentally exists on one computer at a time. People sometimes create multiple QLab machines that talk to each other, but that is a function that was tacked on after the fact. So you can send a message over there to trigger that machine, and now you’re even more powerful because you have an audio machine and a video machine and you can have multiple machines running. But it is built out of a fundamentally “one at a time” paradigm and that networking is really an afterthought. So we can recognize that in what we’ve made so far and say, “That defines what we’ve made here, and that’s appropriate for what we’ve made so far.” But it’s interesting to think about interactive operation, networked operation as a foundational principle, what are the things that could be made if we use those as starting points?
AB: Even though it is currently designed for one operator, one room, one space, people are definitely pushing it into multiple spaces. I think we had a user who was a haunted house operator and he used it in multiple rooms, splitting audio channels so that the different rooms all had their own things happening.
Last year I saw an amazing piece called Latency Canons by composer Ray Lustig. He composed the piece to be performed on Google Hangouts. Performers were in different rooms in Carnegie Hall, in Manchester, UK and I think somewhere else. They were all playing in real time, but rather than fighting the microsecond lag, he built it into the score and it was really beautiful.
CA: That’s exactly what we’re talking about. This is a really lovely example of the kinds of things we suspect are out there, and we’re always looking for the artists to give us new ideas, push us. That’s part of the trick of it, really, because we’re not going to be as creative as the artists who use our tools.
Another foundational moment for QLab was when I was at Actors Theater and someone else, another SITI Company person, described to me something that he wanted to do that was essentially an algorithmically produced performance: it would have ingredients about what could happen at various moments in the show, but it wouldn’t be preset. The performance would be generated in real time and then dictated to the performers as the performance went. I thought it was interesting.
That’s awesome. It is kind of what Joe Diebes was doing in WOW!
CA: That’s the thing, right? We’re not trying to be as creative as the artists who use our tools, but we’re trying to recognize places where they may have not even had an opportunity to be creative, and build the tools so they can be creative in those spaces. Networking is something that has finally come to the point where that can be a tool, that can be a part of the artist’s palette.
AB: In a way it is the Rumsfeld problem – the unknown unknowns. The biggest problem we’re confronting with this new idea is coming up with the kinds of art that we can’t think of with the tools we have now.
CA: Which gets to the point that, ultimately, what we have to do is build a thing that’s developed enough for other creative people to spot possibilities in it, and then get it to them as quickly as possible and see if it actually works.
AB: That’s Paul Graham’s whole spiel: make things people want, figure out the minimum viable product, and the longer you sit on it the more wrong ideas you will have and the bigger your humility fail is going to be when you finally release it.
CA: Part of the thing about building software is you want to get it into the hands of the people who are going to be using it as soon as humanly possible, because that’s when it really starts to mature in terms of what are they actually going to be using it for and how are they going to direct it? So if we believe we have a foundational idea that’s valuable, we need to build enough of a foundation that people can somehow use it then start getting them to use it and see …
I’m sure there are a lot of people who will be psyched to see what you guys come up with next. Thanks for talking to me!