Beautiful Errors - An Interview With Wobbly

A Closer Listen

Monitress cover art by Max Allison

Jon Leidecker, aka Wobbly, has produced experimental electronic music since the 1980s, whether on his own or as part of San Francisco collective Negativland. His work has constantly questioned the relationships between musicians and their tools, composition and improvisation, as well as originality and appropriation. Since 2019, he has developed a project called Monitress, a piece for mobile devices designed to both listen and produce sound, resulting in a collaboration between artist and machine. Across seven albums, Wobbly has formulated variations of this piece through interrelated premises, arriving at music that is new yet familiar every time, each release posing different questions and references. Through an email exchange, A Closer Listen was able to talk to Jon about this project and its conceptual background, the relation between technology and culture, and more.

David Murrieta Flores (ACL): Could you please talk to us about how you came to conceive of Monitress as a series?

Wobbly (W): The piece was a reaction to the ways in which mobile phones and tablets were changing our lives. Before 2010, at any show where a musician used a laptop, you’d hear that joke about whether or not they were just checking their e-mail; that stopped with the touchscreens. It wasn’t just that audiences had a better line of sight on your hands — everyone’s relationships with these things were immediately more personal. Soon they’d replaced most of my hardware, and I realized… they were the piece.

From the beginning, some of my favorite apps were the pitch trackers, which converted an audio signal into melody data which could then drive a synth, all running on the same phone. Sing into the mic, and it’ll sing back to you on its speaker — so, that’s not even a metaphor, right? The sound of your own phone listening to you.
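To make that process concrete for readers, here is a minimal sketch of the listen-and-sing-back loop, assuming Python with the numpy and sounddevice packages: record a few seconds from the microphone, estimate one pitch per analysis frame by autocorrelation, then resynthesize the tracked melody as sine tones through the speaker. It is only an illustration of the idea, not the apps Wobbly actually uses, and real trackers are far more careful about overtones and transients.

```python
# A toy version of the "phone sings back to you" loop described above:
# listen, extract a melody, sing it back. Illustrative only; assumes the
# third-party numpy and sounddevice packages for microphone/speaker access.
import numpy as np
import sounddevice as sd

SR = 22050      # sample rate (Hz)
FRAME = 1024    # analysis frame size (~46 ms at this rate)

def track_pitch(frame, sr=SR, fmin=80.0, fmax=800.0):
    """Estimate one fundamental (Hz) for a frame via autocorrelation; 0 if quiet."""
    frame = frame - frame.mean()
    if np.max(np.abs(frame)) < 0.01:              # near-silence: no pitch
        return 0.0
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)       # plausible period range, in samples
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

# 1. Listen: record four seconds of speech or singing.
audio = sd.rec(int(4 * SR), samplerate=SR, channels=1, dtype="float32")
sd.wait()
audio = audio[:, 0]

# 2. Track: one pitch estimate per frame.
pitches = [track_pitch(audio[i:i + FRAME])
           for i in range(0, len(audio) - FRAME, FRAME)]

# 3. Sing back: resynthesize the tracked melody as plain sine tones
#    (frame boundaries will click; a real tracker smooths phase and pitch).
out = np.concatenate([
    0.3 * np.sin(2 * np.pi * f * np.arange(FRAME) / SR) if f > 0
    else np.zeros(FRAME)
    for f in pitches
])
sd.play(out.astype("float32"), SR)
sd.wait()
```

Even at this toy scale, the choices inside track_pitch (the frequency range, the silence threshold, which autocorrelation peak wins) shape what melody comes back, which is where the errors, and the character, come from.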

This went over well in concert — I’d start by saying hi to everyone and describing the process, then turn the devices up until the melodies they were extracting from my speech took over. Then I’d begin playing the synths on the iPads directly, until they were all feeding back on each other. Each device comes up with its own variation on the melody, with lots of beautiful errors. An audience can follow my hand gestures, and see that when I stop moving, the music stops too (or at least simplifies). So there’s the kind of complexity you can only get with an ensemble, but there’s still a human at the center of it.
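As a rough picture of how every device arrives at its own variation, here is a toy model (again in Python, with invented numbers rather than the behavior of any actual app): one source phrase is re-sung by several simulated trackers, each snapping notes to its own tuning grid and occasionally slipping an octave, so the voices agree on the tune while disagreeing on the details.

```python
# Toy model of several devices "mishearing" the same phrase, each in its own
# way. The grids, slip probabilities and phrase are invented for the sketch.
import math
import random

SOURCE = [220.0, 246.9, 261.6, 293.7, 329.6, 293.7, 261.6]  # Hz, a made-up phrase

def device_variation(melody, grid_cents, slip_prob, seed):
    """Re-sing the melody on one device: snap each note to the device's own
    tuning grid, with an occasional octave jump (a 'beautiful error')."""
    rng = random.Random(seed)
    variation = []
    for f in melody:
        cents = 1200 * math.log2(f / 220.0)               # distance from A3, in cents
        snapped = round(cents / grid_cents) * grid_cents  # quantize to this device's grid
        g = 220.0 * 2 ** (snapped / 1200)
        if rng.random() < slip_prob:
            g *= rng.choice([0.5, 2.0])                   # octave slip
        variation.append(round(g, 1))
    return variation

# Three devices: semitone, quarter-tone and eighth-tone grids, each slightly
# more error-prone than the last.
for n, (grid, slip) in enumerate([(100, 0.1), (50, 0.2), (25, 0.3)], start=1):
    print(f"device {n}:", device_variation(SOURCE, grid, slip, seed=n))
```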

ACL: What would you say your aim is with the series as a whole?

W: It’s a way to dance through the hard definitions of improvisation and composition, as well as human and machine. When you hear multiple voices singing in unison, instinctively you hear that as a composition, with all that implies: rehearsals, communities, traditions. Many voices singing the same tune always means culture. Until now — machines with reflexes of less than 30 milliseconds are capable of singing along with you in real time. Even if you know you’re improvising, to the machine, it’s a written score that it can follow, doing as it’s told — though it does make mistakes. As you improvise, their oddities influence your response, either by throwing you off or by becoming your ideas — you can learn to close the loop and follow their lead. At which point you get into some beautifully alien group-mind territory, things that sound collective, but that you can’t do with humans. Music that doesn’t sound either composed or improvised, that sounds loose and spontaneous yet also unarbitrary and coordinated.

Initially I thought any album would have to be a straight document of a concert to make sense. But in the studio, adding and editing additional layers all triggered by one initial human performance only gives you more opportunities to blur the lines between improvisation and composition, and makes for a really inexplicable kind of music. It sounds like questions. And the only way to really illustrate a piece this based on improvisation is to record more than one of them… if there are seven, it’s because this piece is so unbelievably fun to play.

Photograph by Joe Gerhardt, courtesy of Wobbly

ACL: What is it about technology that interests you? How does that relate to musical technology in particular?

W: Technology and culture aren’t separable. When composers take advantage of new technologies, the music often illustrates all the aspects of that technology, either metaphorically or literally. Drone instruments reinforce a continuous sense of time, keyboard instruments give harmonic freedom while standardizing tuning, orchestras demonstrate the hierarchical control of a conductor and a passive audience, record players replace home performance of music; new technologies create forms of music which are models of the societies they emerge from.

So, as electronic music instruments give us new kinds of automation, we have to be careful about the kinds of goals we’re choosing, because your relationship with your tools gets modeled straight into the music. If you’re using your machines to loop and imitate human playing, you’re modeling your own dependency on slave labor, if not a certain kind of isolation. Perhaps you’re following Kraftwerk’s lead and exploring man-machine music, though too many people seem to forget most of their best music was made with two human drummers. But machines can also do things humans aren’t good at doing, such as true random number generation, abstract pattern recognition across micro and macro time scales, or microtunings hard to achieve on acoustic instruments. And once machines make new relationships audible, humans often learn how to play them after the fact; less than five years after drum and bass showed up, drummers learned it. Important information about the way a culture is changing always shows up in the music first.

ACL: What would the difference be between an interpreter of any given piece with extremely strict instructions and a machine programmed to do the same?

W: I’m not entirely sure, but that would depend on those strict instructions. When you see someone manage to perform Cage’s “Music Of Changes”, there’s a definite athleticism to it. That piece remained unperformable until David Tudor somehow found a way to break free of traditional musical logic — the pleasure is in watching the human transcend the score. A machine tasked with performing a piece most humans could do is mere automation — no pleasure in virtuosity without risk. But there is something to be said when you task machines with realizing utterly unperformable music. Nancarrow’s scores for simultaneous, irrational time signatures come to mind. So there, the machine raises the bar; even today, humans can’t play those pieces without dropping the tempo by half.

ACL: How did you reach the concept of the ‘monitress’?

W: It was around the time I began playing this piece that we all began to really understand Silicon Valley’s business model: the predictive power of our collective data, and the potential to predict as well as shape our behavior with it — that was the real commodity. So the pitch tracking that had sucked me in from the beginning was always more than a metaphor. I don’t even remember choosing the title.

ACL: Would you say that the link between you, the ‘monitress’, the sounds, and the audience is a feedback loop?

W: Exactly! There’s a long list of composers included in the liner notes to Popular Monitress, all in the lineage of people exploring cybernetic feedback as a compositional principle, music which responds to or learns from its own output — early pioneers in what now often gets called generative, network or algorithmic music. I have a lecture on it called ‘United Feedback’ which can go pretty long (a short version is on YouTube). Early network music from the Seventies and Eighties was a huge influence — Jim Horton and the League of Automatic Music Composers, David Behrman, George E. Lewis, The Hub — though for them, pitch tracking is only one of many complex data streams they code into their pieces. I’m not a programmer; I’m using these things off the shelf, exploring these half-century-old trackers as a process in and of itself. Not cutting edge at all — but over the last ten years, trackers have turned up as standard features in Logic and Live, so… right as it’s going invisible, that’s when it becomes important.

Photograph by Michael Zelner, courtesy of Wobbly

ACL: To some extent, the titular ‘monitress’ is both a neutral (even neutralized) observer and a mechanically reactive participant of musical processes. For a couple decades now, the argument for conceiving of listeners as active rather than passive has been clearly made, and I believe this way of understanding monitoring could be a fruitful parallel. Would listeners be musical tools in that same way, or is our role different?

W: As Pauline Oliveros put it, good musicians are virtuoso listeners. And even though they’re entirely reflexive, the trackers fuse listening and playing; they add their voices.

Monitress as a process comes across even more clearly when I’m improvising with other musicians, especially ones that play traditional instruments; when an audience sees someone playing an acoustic instrument, but hears a distinct voice locked in… you can hear the disbelief. It’s distinct from hearing electronic signal processing. I learned quickly that it was flat-out unethical to train Monitress upon a musician without telling them about the process first; even if you know it’s coming, the sound of hearing a machine tail you that closely can really shake you. A common response was for the musician to simply trail off and stop, because the machine had fused with their sound so completely that they couldn’t tell what they were doing; they stopped to hear what was them and what wasn’t. Which didn’t help, because of course, the second they stop, the machine stops too…

When playing with Fred Frith or Thurston Moore, the electronics paralleled beautifully with their guitars, turning the overtones into tunes. Playing with Zeena Parkins, who does a lot with controlled feedback on her electric harp, that was a blend so seamless I just fell into it, I lost myself completely. Another successful duo was with the pianist Tania Chen, who played Morton Feldman’s “Triadic Memories” for the trackers, which would often trill a half-step off while trying to lock in on one of his clusters — and of course all those half-steps sounded very Feldman.

ACL: This plays, I think, into the interesting link between programming and improvisation essential to this series. How does one affect or transform the other? Are they more similar to each other than what we normally think?

W: Decades ago, when I first started composing things in the studio, the most striking thing about reel-to-reel tape editing for me was how close it felt in practice to improvisation; all those silences you’d experience while trying out countless edits, well, that is the hidden sound audible within a densely constructed finished piece. And in the years since then, electronic and software instruments have increasingly made editing processes like looping, splicing, and live capturing easier to do in real time. Tech keeps injecting techniques of composition into the practice of improvisation.

ACL: Just to reintroduce an earlier matter: how do these injections made by technology affect the people handling the tech? Does it say something about culture?

W: It usually does. All the software designed to make music production simpler is making decisions in advance about what that music is going to sound like. Which is definitely okay, even if ‘composition’ isn’t really the paradigm you should be using when you’re making music with your average modern loop pedal. But from the beginning of electronic music studio practice in the United States, Louis and Bebe Barron defined their machines as generative collaborators, which they’d steer or guide and interact with rather than command. That’s an insight about the actual relationships we have with our tools: we’re better off if we don’t think we’re in absolute control of them.

ACL: Throughout the liner notes, the idea of errors in the process is key: what have these mistakes revealed in the task of making Monitress?

W: Mistakes make the rules of the systems we’re using audible, and in that moment, you have a split second to question that rule and whether it’s a good one.

There’s no objective way to write code that turns a full-spectrum audio signal into a monophonic melody — at every split second, there are musical choices to make about overtones, transients, tunings, all of which are highly context dependent. You can’t set and forget. I expect most people avoid using the trackers live because of all this variance. But if you like the Residents, if you like the close intervals in Bulgarian or Sardinian choral music, if you like Charles Ives as much as you like Xenakis or frog ponds… you hear them trying to sing in unison, and so you stop hearing mistakes and start hearing variety, you hear their intent.
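A short sketch can make that ambiguity concrete (a toy in Python with numpy, not the algorithm inside any particular tracker app): any periodic signal is also periodic at twice its period, so a simple autocorrelation tracker scores the "true" pitch and the octave below it almost identically, and which candidate it commits to, frame by frame, is exactly the kind of contextual choice he is describing.

```python
# Why there is no single "correct" melody inside a full-spectrum signal:
# the octave-below candidate scores nearly as well as the real fundamental,
# so the tracker has to choose. Toy numpy sketch, not any app's algorithm.
import numpy as np

SR = 22050
t = np.arange(2048) / SR
# a 220 Hz tone with some overtone content
frame = np.sin(2 * np.pi * 220 * t) + 0.6 * np.sin(2 * np.pi * 440 * t)

ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
for hz in (220.0, 110.0):                 # candidate fundamentals
    lag = int(round(SR / hz))             # corresponding period, in samples
    print(f"{hz:5.0f} Hz candidate: autocorrelation score {ac[lag]:.1f}")
# The two scores come out nearly identical; add transients, vibrato or room
# noise and the winner flips from frame to frame. Heard charitably, those
# flips are the half-step trills and octave jumps he describes as intent.
```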

ACL: There are a lot of contemporary tools that exploit the unconscious, that region of ourselves full of mistakes, for profit. You take on the topic with Ethical Monitress, suggesting that we use musical technology against its own designs. Is there a way for us, as an audience, to listen against those designs as well?

W: I wish I could be more certain I’m actually using this technology against its own designs. Just as Facebook hardly minds when you use Facebook to complain about Facebook, I still use my phone. It’s more a hope that this music isn’t lying to you about the world we’re in. If it’s a dangerous world, hopefully what you’ve made is something that keeps you sharp, rather than anesthetized. These tools won’t be around forever — I visited Peter Blasser of Ciat-Lonbarde when he was still in Portland, and his backyard was filled with these light-sensitive, solar-powered synthesizers — he loves electronic music and wants to make sure it’s still in his life if anything ever happens to the grid. Those instruments had a lot of the future in them.

ACL: I’d like to close with a couple of questions about intelligence, which is also an important part of this series, I feel. In particular, I’d like to ask you what the role of intelligence is when it comes to programming and improvisation in your work – does it consist of a “successful” navigation between these two processes, or something different?

W: There’s not much difference in the experience of either composing or improvising when it’s going well; you’re experiencing the same kind of flow state — in music, the degree to which you’re consciously making a decision is usually the degree to which you’ve already blown it. Your brain is only waking up to think because something’s already not working. Of course, during composition, you have the time to stop and find a new solution — so the real navigation is between consciousness and instinct, and using intelligence to balance the two. And that’s what’s fun about Monitress — though the trackers don’t learn from their behavior, they still have intelligence — they listen and respond in emotionally evocative ways. But of course, that’s you projecting into it — and it’s also key that the second you stop playing, and it has nothing but itself to listen to… the sound gets weirdly less complex. It goes numb, which demonstrates something simple about the difference between intelligence and consciousness, somehow.

Ethical Monitress cover art by Wobbly

ACL: Lastly, for the past few decades the concept of emotional intelligence has become commonplace, as a framework to understand how we deal and work with our own emotions. Could the unconscious revelations of errors in these musical processes be understood as a way to frame the emotional dimension of machines making music alongside you?

W: The recent music rendered through neural networks, which train themselves on recordings of human music to produce vaguely familiar recombinations — there’s ever more complexity on display. It’s also the sound of money, given how much compute time is required to create a training model of a single artist’s work. I love listening to the OpenAI renders; it’s definitely new — if I’d tuned into a magic radio station playing this even as recently as ten years ago, I would have had no way to understand it. It’s not training on meaning, or context, or culture, or relationship — just waveforms, and everything we find in it, we’re projecting into it. Certainly, just as I learn from recordings, I learn from completely automated generative processes that aim to stay interesting without further human input, but ethically, the music I’m most interested in is the music that manages to keep humans in the loop, whether as pilots or collaborators. Because that’s the music that’s going to model the world that’s not quite here yet.

ACL: Thank you very much for your time, Wobbly. Is there anything else you’d like to add as a final word to our readers?

W: Over the last six months, I’ve returned to playing live shows. The ethics involved in playing indoor events remain a bit murky given the clear and present dangers. But the screens we’re using to safely distance ourselves from the world have become a greater danger than any of the real threats those screens are depicting. Live music isn’t really about entertainment.

The last album in the series, Patient Monitress, is out now via the artist’s Bandcamp.
