Fifteen Questions Interview with Chris Korda - Routines Won't Suffice

Name: Chris Korda
Occupation: Transgendered suicide cult leader, electronic music composer, digital artist, free software developer
Nationality: American
Current Release: Apologize To The Future on Perlon
Recommendations: My main inspiration for polymeter and phase art is Thomas Wilfred. Look him up on Wikipedia. I saw his work at the Museum of Modern Art in NYC when I was a child. Some of his “Lumia” machines permute for years without repeating. For harmony, I recommend “Drifting Petals” by Ralph Towner and Gary Burton, as well as Ralph Towner’s 1979 “Solo Concert.”

If you enjoyed this interview with Chris Korda, find out more about Chris's work and music on the following pages: Personal website, Twitter, Instagram.

 

When did you start writing/producing music - and what or who were your early passions and influences? What is it about music and/or sound that drew you to it – especially compared to the other activities you've been engaged with?

I started making music as a child by tapping on household objects. I displayed an unusual aptitude for rhythm, but unfortunately it was misdiagnosed as twitchiness. I also improvised on the piano every chance I got, but this was similarly discouraged. I dazzled my schoolmates by using my mouth to accurately imitate rock drumming, beat-boxing long before I knew the term. I also started building my own instruments: for example, I taped a microphone to the end of a wooden recorder, plugged it into a radio, and played wild Hendrix-inspired solos.

Odd time was the height of musical fashion in the late 1960s and early 1970s, and I developed a lifelong fascination with it as a result. My strongest influence was the band Yes, and I still listen to their album “Relayer” regularly and use it as an example of peak complexity in popular music. The rock opera “Jesus Christ Superstar” was another major influence. Many years later, odd time influences such as these predisposed me to discover complex polymeter and use it in my composing.

At the age of twelve I acquired a toy organ, and a cheap acoustic guitar shortly thereafter. I started studying jazz guitar in 1979, practicing countless hours every day, and had a series of excellent teachers, including tenor saxophonist Jerry Bergonzi. I studied composition at Sarah Lawrence, and attended a summer session at Berklee College of Music. The latter was a decisive influence, for two reasons: I learned to sight-read jazz charts, and my roommate introduced me to a group of artists who vastly expanded my musical taste: Pat Metheny, John Abercrombie and Ralph Towner.

For most artists, originality is first preceded by a phase of learning and, often, emulating others. What was this like for you? How would you describe your own development as an artist and the transition towards your own voice? What is the relationship between copying, learning and your own creativity?

My dream since early adolescence was to play guitar in a band, and after some false starts I eventually fulfilled my ambition by playing in a few Boston-area jazz and rock bands. I even spent a summer busking, playing jazz standards on street corners, but ultimately I found the technical aspects of guitar intensely frustrating and limiting.

I was a huge fan of John Abercrombie, attended many of his shows, and consciously imitated his style, for example by transcribing his solos, but it made me increasingly unhappy. Eventually a friend persuaded me that my strategy was mistaken, and that I needed to escape from Abercrombie’s shadow in order to find my own creative path. So in 1991, I quit the guitar, moved to Provincetown, and started a new life as a female impersonator. This drastic transition gave me the inspiration and courage to reinvent myself, first as founder of the Church of Euthanasia, and then as an electronic musician.

What were your main compositional- and production-challenges in the beginning and how have they changed over time, especially after a long break from producing music?

I started producing in 1993 using the MS-DOS version of Cakewalk, which by chance happened to allow each track to have its own independent loop length. Due to this happy accident, I immediately discovered and fell in love with polymeter and phasing. Oscillators with different frequencies drift in and out of sync; this is called phasing. Polymeter is quantized phasing, in which the drift occurs in discrete steps. I soon began composing in complex polymeter, which I define as the simultaneous use of three or more prime meters. For example “Buy”—the opening track of “Six Billion Humans Can’t Be Wrong”—is in 3, 4, 5, 7, 11, 13, 23, and 31, all at once. I didn’t discover Steve Reich’s work until decades later.
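To make the mechanics concrete, here is a minimal sketch of polymeter as quantized phasing, in Python rather than Korda's own C/C++ tools: several loops of pairwise coprime lengths advance against a shared step clock, and the combined pattern repeats only at the least common multiple of their lengths.

```python
from math import lcm  # requires Python 3.9+

# Each track is a loop with its own length in steps (quantized phasing).
# Pairwise coprime lengths mean the combined pattern repeats only at
# the LCM of the loop lengths: here lcm(3, 4, 5) = 60.
tracks = {
    "kick":  [1, 0, 0],        # 3-step loop
    "snare": [0, 0, 1, 0],     # 4-step loop
    "hat":   [1, 0, 1, 0, 1],  # 5-step loop
}

period = lcm(*(len(p) for p in tracks.values()))
print(f"combined pattern repeats every {period} steps")

for t in range(12):  # show the first 12 global steps
    hits = [name for name, p in tracks.items() if p[t % len(p)]]
    print(t, hits)
```

With the eight meters of “Buy” (3, 4, 5, 7, 11, 13, 23, and 31, which are pairwise coprime), the same arithmetic gives a period of 42,822,780 steps, which is why the combined pattern never audibly repeats.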

My immediate problem was that in Cakewalk you were either editing or listening but not both. I wanted to escape from this dichotomy, and improvise my arrangement in real time. I frequently performed live sound-collage, and was influenced by that style’s free-flowing aesthetic.

My goal was to live-arrange my polymeter loops, and have the arrangement recorded, not as audio, nor as MIDI, but as mute automation. In other words, I wanted to record when each track was muted or unmuted. The advantage of this is that the mute events can be edited afterwards—for example to fine-tune the transitions—without disturbing the underlying polymeter loops. The concept is analogous to a stencil. Unmuting tracks cuts holes in the stencil, and the underlying tracks show through the holes, with their phase relationships always preserved.
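The stencil concept lends itself to a tiny illustration. Below is a hedged sketch (my own reconstruction of the idea, not Korda's sequencer): the loops run continuously from step zero, and mute events, recorded as plain data, only gate what is audible, so unmuting reveals a track at its current phase rather than restarting it, and the events can be edited afterwards like any automation.

```python
# Loops hold MIDI pitches (0 = rest); they run continuously from t = 0.
loops = {"A": [60, 62, 64], "B": [48, 0, 0, 0, 55]}

# Mute automation: (time, track, muted) events, editable after recording.
mute_events = [(0, "A", True), (0, "B", True),
               (4, "A", False), (10, "B", False), (16, "A", True)]

def muted_at(track, t):
    """Return the track's mute state at step t (tracks start muted)."""
    state = True
    for time, name, muted in mute_events:
        if name == track and time <= t:
            state = muted
    return state

for t in range(20):
    # Phase is always t % len(loop): unmuting never restarts a loop.
    audible = {name: loop[t % len(loop)]
               for name, loop in loops.items() if not muted_at(name, t)}
    print(t, audible)
```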

I started by hacking Cakewalk, but this proved too limiting, so I developed a live-arranging program of my own—partially modeled on a lighting controller—which eventually grew into a full-fledged polymeter MIDI workstation. It consisted of three separate programs: one for polymeter composing, one for live-arranging and recording the arrangement as mute events, and still another for fine-tuning the resulting arrangement.

It was all written from scratch in C and assembly language—in those days you had to write your own device drivers—and it was clumsy and hard to use by today’s standards. Originally I drove music hardware, but after Reason came out I simplified my rig to two laptops, one running my sequencer and the other running Reason, connected by a hardware MIDI interface.

Decades later, my sequencer has evolved from these humble roots into powerful integrated composing software with many features that aren’t found in commercial DAWs.

What was your first studio like? How and for what reasons has your set-up evolved over the years and what are currently some of the most important pieces of gear for you?

When I started producing electronic music, racks full of hardware—synths, drum machines, effects, mixers and so forth—were still a necessity, but I ditched all that stuff as soon as it became practical to do so. It’s fashionable to be obsessed with hardware, but it reminds me of collecting antique cars. I like unlimited undo.

My studio currently consists of a Windows laptop running my custom composing software (called Polymeter), along with Propellerhead Reason connected to Polymeter via a virtual MIDI loopback cable. I use Reason only to translate Polymeter’s MIDI output into audio. I also have a flat-screen monitor, a digital-to-analog converter, a pair of powered speakers, and headphones for working at night.

How do you make use of technology? Would you say your background as a software developer plays into this? In terms of the feedback mechanism between technology and creativity, what do humans excel at, what do machines excel at?

Academic studies confirm that the complexity of music has declined steadily since my childhood. Music is increasingly made by non-musicians, that is, by people who lack musical training and are unfamiliar with the theory and practice of instrumental music. Music technology corporations are partly to blame, because they market their products by spreading the convenient fiction that music is sound design.

More generally, the widespread adoption of music technology is a double-edged sword. On the one hand, it’s had a democratizing effect: now nearly everyone can be a music producer. But on the other hand, it’s led to de-skilling, often reducing musical expression to the level of a video game. For example, when ReBirth was released, people imagined they could use it to become the next Richie Hawtin, but what they were really doing was playing Richie Hawtin in a simulation.

Like all corporations, music technology corporations seek to maximize their profits, so it shouldn’t surprise us that their products are conceptually conservative and have the effect of reinforcing the musical status quo. If you use the same tools as others, you will have similar degrees of freedom and therefore unavoidably achieve comparable results.

To achieve unique results, you need unique tools and methods, and that’s why I decided long ago to create my own composition tools. My career as a professional software designer made this decision possible, but there were many daunting hurdles. For example, by 2003 my original polymeter MIDI sequencer had become hopelessly obsolete and was limiting my creativity, but at that time I lacked the skill to adapt it to a modern platform. During the fifteen years before I returned to the electronic music scene in 2018, I worked as a software consultant in the 3D printing industry, and it was during those years that I gradually acquired sufficient programming skill to modernize my sequencer.

Since the 1990s I’ve been acutely aware that collaborating with technology could not only allow me to overcome my limitations as an instrumentalist, but more importantly allow me to explore unknown musical territory that would otherwise be inaccessible or even inconceivable. Computers can perform complex calculations accurately in real time, and easily manipulate huge datasets, and these capabilities are indispensable to my artistic process. By offloading music theory computations onto machines, I free myself to approach musical expressiveness in a more abstract and intuitive way.

Above all, I value orthogonality, meaning I strive to isolate fundamental aspects of music—timbre, rhythm, pitch, melody, harmony—into independently controllable parameters, so that for example the rhythm can be changed without changing the harmony, or vice versa.
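As a rough illustration of that orthogonality (a sketch of the principle, not of Korda's software), rhythm and harmony can be stored as separate streams and combined only at render time, so either can be swapped independently:

```python
# Orthogonality sketch: rhythm and harmony as independent parameters.
rhythm  = [1, 0, 1, 1, 0, 1, 0, 1]  # onsets only, no pitch information
harmony = [60, 64, 67, 71]          # pitches only, no timing (Cmaj7)

def render(rhythm, harmony):
    """Assign successive harmony tones to the rhythm's onsets."""
    notes, i = [], 0
    for step, onset in enumerate(rhythm):
        if onset:
            notes.append((step, harmony[i % len(harmony)]))
            i += 1
    return notes

print(render(rhythm, harmony))
# Swap in a new rhythm: same harmony, different groove.
print(render([1, 1, 0, 0, 1, 0, 0, 0], harmony))
```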

You used algorithmic music techniques and a robot choir on the album, tell me a bit about these, please. Do you see a potential for AI in exploring novel musical concepts?

I was writing two albums at once during this period. The other was my “Polymeter” album which consists of generative solo piano and solo guitar, in a fusion of neoclassical and jazz, reminiscent of “stride” piano. I was also teaching myself atonal music theory, and that’s audible on both albums, for example on “Overshoot.”

Polymeter modulation is an outstanding tool for rule-based harmony generation. My software defines a “scale” very abstractly as any collection of pitches, and a “chord” as any subset of a scale. I’m headed away from the common scales, and towards generative atonal harmony, because it has tremendous potential for ambiguity and surprise. Atonal music often suffers from the “cat walking around on the piano” problem—too many adjacent semitones—but I have methods for avoiding that.
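Those definitions are concrete enough to sketch. The following toy example assumes one possible filtering method, not Korda's actual algorithm: treat a scale as an arbitrary pitch collection, enumerate chords as subsets of it, and reject any chord containing adjacent semitones.

```python
from itertools import combinations

scale = [0, 1, 3, 4, 6, 8, 9, 11]  # an arbitrary atonal pitch collection

def has_adjacent_semitones(chord):
    """True if any two chord tones are one semitone apart (octave wrap ignored)."""
    c = sorted(chord)
    return any(b - a == 1 for a, b in zip(c, c[1:]))

# A "chord" is any subset of the scale; keep only the well-behaved ones.
chords = [c for c in combinations(scale, 4) if not has_adjacent_semitones(c)]
print(len(chords), "usable 4-note chords, e.g.", chords[:3])
```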

In 2014 I developed a program called ChordEase that makes it easier to improvise over jazz changes. You can play jazz using only the white keys, because the software automatically translates them to the needed scales in real time. It codifies a lot of knowledge about jazz, and that makes it an expert system, which is a type of AI. It’s also an example of offloading, which is a hot topic in AI. I wrote a paper about it and presented it at NIME. Guess who really hated it? Jazz musicians. I almost got beaten up in a jazz club once just for talking about ChordEase.
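The white-key idea can be sketched in a few lines. This is a toy reconstruction of the concept as described above, not ChordEase's real implementation: each white key is treated as a scale degree and translated to the scale of the current chord.

```python
WHITE = [0, 2, 4, 5, 7, 9, 11]  # pitch classes of C D E F G A B

def remap(note, scale):
    """Map a white-key MIDI note to the matching degree of the current scale."""
    octave, pc = divmod(note, 12)
    degree = WHITE.index(pc)  # assumes only white keys are played
    return octave * 12 + scale[degree % len(scale)]

d_dorian = [2, 4, 5, 7, 9, 11, 12]  # scale for the current chord (e.g. Dm7)
print([remap(n, d_dorian) for n in [60, 62, 64, 67]])  # C D E G -> D E F A
```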

The robot choir draws inspiration from the chorus in classical Greek tragedy. It seems plausible that our machines will outlive us, so it makes sense for them to tell the story of our hubris and demise.

Production tools, from instruments to complex software environments, contribute to the compositional process. For Apologize To The Future, you eventually spent many years developing your own sequencer. How does this manifest itself in your work? Can you describe the co-authorship between yourself and your tools?

Having covered such questions above, I’m going to pivot and talk about the elephant in the room. "Apologize to the Future" relentlessly expounds the pivotal issues of the 21st century: climate change, economic inequality, intergenerational injustice, artificial intelligence, overpopulation and overconsumption, antinatalism, and human extinction. This is unprecedented in electronic music. And yet here we are blithely discussing compositional processes as if nothing were amiss. It's as if I told you an asteroid is headed straight for Earth and you responded by asking me about my childhood influences. It feels like an example of denial, which is another major theme of the record.

My earlier work is often ironic, but on this album I felt an obligation to speak from the heart, in plain language that anyone could understand. “Apologize to the Future” preaches that procreating isn’t just selfish, it’s cruel. There’s no ethical justification for creating new humans only to abandon them on a wrecked planet. Future generations will suffer for crimes they didn’t commit, while the perpetrators abscond, smugly dead.

I have spent nearly thirty years attempting to increase public awareness of the climate crisis and its causes, through art, music, writing, street theatre, culture jamming and more. These efforts were not in vain: public awareness has increased greatly and we may be approaching a cultural tipping point. But the disaster is already upon us, and our usual routines won't suffice. We either wise up fast, or the future won’t include us.

Collaborations can take on many forms. What role do they play in your approach and what are your preferred ways of engaging with other creatives through, for example, file sharing, jamming or just talking about ideas?

I’m a very solitary artist. My musical methods are incomprehensible to most people. I have a friend who is a gifted mathematician and he talks me through some of the thornier problems. I sometimes share unfinished pieces with close friends whose judgement I trust, but only if they’re gentle. Harsh criticism can be very destructive. I try to make everything open source. I should write a book.

Could you take us through a day in your life, from a possible morning routine through to your work? Do you have a fixed schedule? How do music and other aspects of your life feed back into each other - do you separate them or instead try to make them blend seamlessly?

My creativity is erratic and arrives in episodic spasms. I try to create hospitable conditions within myself, so that my muse will be more inclined to visit and stay longer when it does. I drink a lot of coffee and take copious notes. David Lynch is one of my main inspirations in this regard. I have lists and spreadsheets for just about everything. I used to stay up all night for days but my health won’t stand it anymore. I crave solitude and quiet. I’m an incurable workaholic, and oddly I do some of my best work while I’m asleep. I wake up with a solution, and then realize I’m exhausted because I worked all night in my dreams.

Could you describe your creative process on the basis of your new album Apologize To The Future? Where did the ideas come from, how were they transformed in your mind, what did you start with and how did you refine these beginnings into the finished work of art?

The album contains around 1,200 words, all in rhyme. I studied rap in order to better develop my rhyming schemes. The words predate the music, sometimes by months or even years. I read an entire bookcase full of books about the climate crisis. The album springs from those books, and from decades of assimilating and disseminating unpleasant environmental truths.

The idea of apologizing to your children came from Dan Miller’s presentation “A REALLY Inconvenient Truth” which is available on YouTube. He lists things individuals can do, and his first item is “Ask your children for forgiveness.” This led me to a thought experiment, in which I asked myself “How will future generations regard us?” Assuming future generations are lucky—or unlucky?—enough to exist, they’ll resent us for sending them to hell.

Another source was my “Metadelusion” blog, which started with my poem “Less.” The poem’s theme is that it’s too late to avoid catastrophe, but not too late to slow down. As the poem says, “Less can no longer be avoided / Less could be gradual, or sudden / Less will hurt, either way / Sudden will break more bones.”

The Kübler-Ross “five stages of grief” model is another influence. We’re stuck at denial, and we need to get past that to arrive at the crucial final stage of acceptance. Now that disaster is upon us, hating humanity is pointlessly cruel. Instead we should feel sorry for ourselves, since we’re our own worst enemy. This observation is the essence of the post-antihuman Church of Euthanasia.

Still another source is David Quammen’s article “Planet of Weeds” about the paleontology of mass extinctions. “A Thin Layer of Oily Rock” is a reference to the Permian-Triassic extinction, the so-called “Great Dying” which eliminated 96% of all marine species and 70% of terrestrial vertebrate species. A similar mass extinction is already underway.

William R. Catton’s 1980 classic “Overshoot” is yet another influence. Catton viewed humanity through the lens of population biology, and was probably the first to popularize the term “overshoot” in reference to human overpopulation and overconsumption.

There are many descriptions of the ideal state of mind for being creative. What is it like for you? What supports this ideal state of mind and what are distractions? Are there strategies to enter into this state more easily?

In March 2019, I spent a week alone in a rented apartment in Lisbon writing “Singularity,” which features some of my most brutal depictions of the future. Lines like “Picking through the rubble of society / Mountains of toxic trash our legacy” were traumatic to write. It’s horrifying to contemplate a future without civilization or decency, a lawless world in which only criminals are free.

I needed to be alone for long periods in order to transmute my rage into something constructive. “Apologize to the Future” was painful to create, and I still find it painful to listen to. It’s supposed to hurt. Earth’s in disarray, and we need to feel the ugliness of what we’ve done. We need to grieve for what we’ve destroyed, including our own future. Without remorse there can’t be restitution.

How do you see the relationship between the 'sound' aspects of music and the 'composition' aspects? How do you work with sound and timbre to meet certain production ideas and in which way can certain sounds already take on compositional qualities?

While composing, I usually find it sufficient to work with generic timbres such as piano, bass and strings. Sound design distracts me from composing, so I prefer to have fewer options. When synthesizers were relatively new I used to enjoy programming them, but now I find it tedious. As my work becomes more harmonically nuanced, I increasingly use classical acoustic sounds, because they do a much better job of rendering ambiguous or dissonant tonality. A gritty synth patch that may be fine for a pentatonic scale will obliterate a very tense chord.

Our sense of hearing shares intriguing connections to other senses. From your experience, what are some of the most inspiring overlaps between different senses - and what do they tell us about the way our senses work? What happens to sound at its outermost borders?

I don’t experience synesthesia and I haven’t found that musical concepts translate easily into the visual domain, or vice versa. I often try to visualize musical relationships, sometimes successfully, but music itself is uniquely auditory.

Art can be a purpose in its own right, but it can also directly feed back into everyday life, take on a social and political role and lead to more engagement. This seems particularly true with regards to Apologize To The Future. Do you think [techno] music can truly be political? Can you describe your approach to art and being an artist?

Of course techno music can be political! I’ve been making political techno since 1994, starting with “Save the Planet, Kill Yourself.”

My approach to art is to cultivate inspiration and avoid considering the opinions of others. Artists should strive to express their vision faithfully. Art is personal. I’m a peculiar and polarizing person, and my art often reflects these same qualities.

It is remarkable, in a way, that we have arrived in the 21st century with the basic concept of music still intact. Do you have a vision of music, an idea of what music could be beyond its current form?

I don’t find it so remarkable. As long as we exist and have ears we’ll have music. The question reminds me of the period when all-black paintings were in vogue. Ad Reinhardt’s “Last Paintings” weren’t and couldn’t have been the end of art, because we see in color, and would inevitably tire of monochromatic art. Assuming civilization doesn’t collapse—admittedly a big leap of faith—harmonically complex music is bound to return, because people are biologically equipped to hear subtle changes in tonality.

I hope polymeter fulfills its potential in the 21st century. I submit my work as evidence that a vast musical territory remains largely unexplored. I make my software free and open source because I want people to follow in my footsteps and continue exploring this fascinating frontier long after I’m gone. This statement is admittedly hard to reconcile with the climate crisis, but I’m used to living with cognitive dissonance.
