No Sudden Moves

I’ve shared this recording elsewhere, so why not here?

  • Plaits in waveshaping mode (with an LFO over the level) feeds the audio input of Rings in inharmonic string mode, which feeds the ER-301.
  • In ER-301 channel 1, three Schroeder allpass filters in a feedback loop are manually controlled by a 16n Faderbank. In channel 2 there’s just a grain delay in a feedback loop with its time manually controlled. There’s a bit of cross-feedback.
  • The two channels are recorded as mid-side stereo, and some ValhallaRoom is applied.
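The Schroeder allpass chain in channel 1 can be sketched in a few lines. This is a minimal illustration of the structure, not the ER-301's actual implementation; the delay lengths, allpass gain, and feedback amount below are made-up placeholders for the values I was riding on the faders.

```python
import numpy as np

class SchroederAllpass:
    """Schroeder allpass: y[n] = -g*x[n] + x[n-D] + g*y[n-D]."""
    def __init__(self, delay_samples, g):
        self.g = g
        self.xbuf = np.zeros(delay_samples)
        self.ybuf = np.zeros(delay_samples)
        self.idx = 0

    def process(self, x):
        d = len(self.xbuf)
        y = np.empty_like(x)
        for n, xn in enumerate(x):
            xd = self.xbuf[self.idx]       # x[n-D]
            yd = self.ybuf[self.idx]       # y[n-D]
            yn = -self.g * xn + xd + self.g * yd
            self.xbuf[self.idx] = xn
            self.ybuf[self.idx] = yn
            self.idx = (self.idx + 1) % d
            y[n] = yn
        return y

def feedback_chain(x, delays=(113, 337, 557), g=0.5, fb=0.4):
    """Three allpasses in series inside a feedback loop; fb < 1 keeps
    the loop stable, since an allpass passes all frequencies at unity gain."""
    aps = [SchroederAllpass(d, g) for d in delays]
    out = np.zeros_like(x)
    tail = 0.0
    for n in range(len(x)):                # sample-by-sample for clarity
        s = np.array([x[n] + fb * tail])
        for ap in aps:
            s = ap.process(s)
        tail = s[0]
        out[n] = tail
    return out
```

The manual micro-adjustments mentioned below amount to nudging `delays`, `g`, and `fb` in real time, which is exactly what keeps a loop like this from settling into a piercing resonance.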

I’m really enjoying the 16n Faderbank as a controller for all sorts of things. In another recent recording, I used it to control levels in Maschine over USB MIDI, as well as the levels and timbre of a harmonic oscillator in the ER-301. In this one, constant manual micro-adjustments of the allpass filters prevented the feedback from building up into something piercing and unpleasant, and changes in harmonic content were a combination of tweaking Plaits and Rings as well as the filters. The impression I get from this piece is light refracting off the curved surface of some mysterious alien artifact, perhaps… which might have been better title inspiration than what I chose. Ah well.

I’m reading A Study In Honor, a novel set during a near-future civil war. A post-Trump leftist government implements universal healthcare, guarantees LGBT rights, and does much for racial justice and income inequality and so on — and then radicalized right-wing idiots are so upset about it that some states start a war, and federal centrists are in the process of eroding rights and breaking the economy again to placate the crazies. Our protagonist is a wounded veteran of that war, a queer woman of color dealing with PTSD, a poorly fitting, irritating prosthetic arm that the VA won’t replace, and fresh waves of alienation. Needless to say, this has not been a happy story so far. It’s well-written and gripping, though.

So with that bouncing around in my subconscious, last night the infamous Shitgibbon-in-Chief actually appeared in my dreams. This cartoonish con man has been a mental health threat to the entire country for the last 30 months or so, but up until now he’s avoided direct appearances in my brain at night. Well… he’s officially banished.

thinking with the ears

I begin with a digression, because I must share this.

You’re welcome.

I mean. Once you have combined the concepts of “donut”, “prune” and “salad” into a single dish, why not serve it with mayonnaise?

What gets me here is the apparent random anarchy of the ingredient choices, paired with the strictly limited, generic, whitebread pool of possible ingredients that must have been drawn from. There are no spices or seasonings, nothing that would indicate a culture — except we all know it’s got to be “American, white, 1950-1975.” It’s almost mechanical, like it was created with a very crude randomization algorithm that lacks the finesse and charm of a neural network recipe.

I appreciate how this one (brown) leaves certain factors — not least, all of the actual preparation instructions — up to the cook’s improvisational judgement, so that each performance is unique. John Cage would approve.

Maybe the weirdest thing about the donut prune salad recipe is that it’s not unique. Coincidence or conspiracy?

The aesthetics of the first one, such as they are, seem a little better, but I’d honestly rather 86 the mayo and use cream instead of cottage cheese. I’d also rather break my left arm than my right arm.

And in fact yes, I have added “Donut Salad” to my list of potential song titles, but under the category of “probably will never use.”

Anyway, what I was going to write about: I’m currently reading DJ Spooky’s Sound Unbound: Sampling Digital Music and Culture and it’s thrown some provoking thoughts my way. One of them is the idea that our primary mode of thinking is a visual/spatial one, with a coordinate grid, objects that take up space, and the spaces between them. The argument is that this mode spread through Western thought during the Renaissance, with Descartes, the printing press, explorers and maps, etc. It’s probably not much of a stretch to say that movies, television and computers were heavily influenced by, but also strongly reinforced, this spatial paradigm.

It all seems very rational, scientific, and straightforward. Of course, it’s pretty wrong and/or useless at the quantum level, or when considering energy, or for a lot of metaphorical or magical uses, but it’s pervasive and sometimes we try to make things fit anyway. My career has been based on it — 3D graphics and modeling for games and then engineering.

I will speculate with some confidence that the previous mode of thought for most people for most of human history was a bit less spatial and more narrative. When we say “myth” now, unfortunately there’s usually a connotation of falsehood, disdain for the primitive etc. rather than the understanding that the idea of truth itself wasn’t necessarily so fixed and binary.

But steering a little more toward the inspiration from the book: the idea of an “acoustic” mode of thinking, where the measurement of space is more vague, and reality is inhabited by an infinity of interpenetrating fields of energy and motion, pressure and density, transmission and absorption and reflection. There are no distinct “objects,” just a whole where any divisions one makes are arbitrary slices of a spectrum that we know we could have sliced up differently. This ties back into what Curtis Roads was talking about when he claimed electronic music removes dependency on notes.

Of course, we still have a tendency to think of sound in terms of grid coordinate systems:

  • Amplitude over (a very short) time; literally the path the speaker cones will trace, sending waves of pressure through the air but also through wood, water, metal, bone, brick (not very well) etc.
  • Intensity over frequency, on a logarithmic scale; the strength of various frequencies of sound at a single moment (by some mathematical definition).
  • Intensity represented by brightness/color, with the frequency spectrum on the vertical axis and a span of time on the horizontal; we see a note or chord as a series of stripes, and notice that higher frequencies fade away faster than lower ones; there’s some intrusion of broadband noise in the middle.
  • Musical notes in a sequencer. Another kind of frequency scale on the vertical axis and time on the horizontal; much easier to identify pitches, rhythm, music theory type stuff, but it says nothing about tempo, timbre, volume, “expression”, etc. Notes aligned on this grid will be perfectly in time with 32nd, 16th, 8th, quarter, half or whole notes…
  • This rhythm generator is even called Grids.
  • This is a Monome Grid, a button/light controller with many different software- and hardware-based friends, often used for music sequencing.
  • And this module is named for Descartes and is referred to as a “Cartesian sequencer” due to its ability to navigate in two or three dimensions, as opposed to linear sequencers which navigate either forward or backward but are still, conceptually, grids.
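The first three of those grid views are all one computation away from each other, which a minimal NumPy sketch makes concrete: slice amplitude-over-time into frames, take the FFT of each frame for intensity-over-frequency, and stack the frames into a spectrogram. The frame and hop sizes here are arbitrary illustrative choices.

```python
import numpy as np

def spectrogram(signal, frame=512, hop=256):
    """Magnitude spectrogram: rows are frequency bins, columns are time frames."""
    window = np.hanning(frame)              # taper each frame to reduce leakage
    n_frames = 1 + (len(signal) - frame) // hop
    cols = []
    for i in range(n_frames):
        seg = signal[i * hop : i * hop + frame] * window
        cols.append(np.abs(np.fft.rfft(seg)))  # one instant's spectrum
    return np.array(cols).T                 # shape: (frame//2 + 1, n_frames)

# A steady 440 Hz tone at an 8 kHz sample rate shows up as one bright
# horizontal stripe at bin ~ 440 * frame / sample_rate.
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
```

Each column is the second view above (a single moment’s spectrum), and the whole array, drawn as brightness, is the third.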

Grids are certainly a useful paradigm, in music and outside it. But it is also very much worth simultaneously thinking about all those overlapping, permeating, permeable fields of energy. Blobs rather than objects. Sounds, rather than notes. Salads, rather than donuts and prunes (sorry). Not just in terms of music and sound, but whatever else may apply. Personal relationships, memes, influences, cultures, societies? Economies, ecologies? Magic, mysticism? In a sense, I think this “acoustic” view of a universe is closer to the narrative one than the visual view is. (And there’s that word “view”, illustrating the bias… and oh we’re illustrating now, also visual…)

Using some of those grid-based tools above, I did some editing this evening of a recording I’d made earlier. There was a point where a feedback-based drone fell into a particular chord, which I thought made a much nicer ending than what happened later. So I took a bit of that ending, ran it through the granular player in the ER-301 to extend it for several seconds, resampled that and smoothly merged it back into the original audio — one continuous drone. No longer two things spliced, nor five thousand overlapping grains of sound; those metaphors stopped being useful, just like eggs stop being eggs when they’re part of a cake.
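The granular extension trick can be sketched simply: read short windowed grains from the source, but advance the read position more slowly than the write position, so a moment of audio stretches out over several seconds. This is only the idea in miniature, not the ER-301’s granular player; the grain size, hop, and stretch factor are illustrative.

```python
import numpy as np

def granular_stretch(src, factor, grain=2048, hop=512):
    """Extend audio by overlap-adding windowed grains whose read position
    advances `factor` times slower than the output position."""
    window = np.hanning(grain)
    out_len = int(len(src) * factor)
    out = np.zeros(out_len + grain)
    n_grains = out_len // hop
    for i in range(n_grains):
        read = int(i * hop / factor)            # source crawls forward slowly
        read = min(read, len(src) - grain)      # don't run off the end
        out[i * hop : i * hop + grain] += src[read : read + grain] * window
    return out[:out_len]

# Stretch one second of a 110 Hz drone to four seconds.
sr = 8000
drone = np.sin(2 * np.pi * 110 * np.arange(sr) / sr)
longer = granular_stretch(drone, 4.0)
```

With thousands of overlapping grains the seams vanish, which is exactly the point where “two spliced recordings” stops being a useful description of the result.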


I’ve just finished reading Curtis Roads’ Composing Electronic Music: A New Aesthetic.

Roads has some pet techniques and technologies he is fond of (having developed some of them) as well as some pet compositional concepts, and tries to shoehorn mentions of them in everywhere. Granular synthesis is a big one. “Multiscale” anything is another (along with macroscale, mesoscale and microscale). “Dictionary-based pursuit” is one I’ve never heard of before and can’t actually find much about.

Roads comes from the more academic side of electronic music, as opposed to the more “street” end of things, or the artist-hobbyist sphere where I would say I am. But he recognizes that music is very much a human, emotional, irrational, even magical endeavor and that prescriptive theory, formalism, etc. have their limits.

The book was primarily about composition — and by the author’s own admission, his own views of composition. He gives improvisation credit for validity but says it’s outside his sphere. Still, I found some of the thinking relevant to my partially composed, partially improvised style.

At times he pushes a little too much at the idea that electronic music is radically different from everything that came before. For instance, this idea that a note is an atomic, indivisible and homogenous unit, defined only by a pitch and a volume, easily captured in all its completeness in musical notation — it completely flies in the face of pretty much any vocal performance ever, as well as many other instruments. Certainly there have been a handful of composers who believed that the written composition is the real music and it need not actually be heard or performed. But while clearly not agreeing with them, he still claims that it was electronic music that freed composers from the tyranny of the note, and introduced timbre as a compositional element (somebody please show him a pipe organ, or perhaps any orchestral score).

He has something of a point, but he takes it too far. Meanwhile a lot of electronic musicians don’t take advantage of that freedom — especially in popular genres there’s still a fixation on notes and scales and chords as distinct events — that’s why we have MIDI and why it mostly works — and a tendency to treat the timbre of a part as a mostly static thing, like choosing which instrument in the orchestra gets which lines.

And I’m also being picky — it was a thoughtful and thought-provoking book overall. I awkwardly highlighted a few passages on my Kindle, though in some cases I’m not sure why:

  • “There is no such thing as an avant-garde artist. This is an idea fabricated by a lazy public and by the critics that hold them on a leash. The artist is always part of his epoch, because his mission is to create this epoch. It is the public that trails behind, forming an arrière-garde.” (This is an Edgard Varèse quote.)
  • “Music” (well, I can’t argue with that.)
  • “We experience music in real time as a flux of energetic forces. The instantaneous experience of music leaves behind a wake of memories in the face of a fog of anticipation.”
  • “stationary processes”
  • “spatial patterns by subtraction. Algorithmic cavitation”
  • “cavitation”
  • “apophenia”
  • “Computer programs are nothing more than human decisions in coded form. Why should a decision that is coded in a program be more important than a decision that is not coded?”
  • “Most compositional decisions are loosely constrained. That is, there is no unique solution to a given problem; several outcomes are possible. For example, I have often composed several possible alternative solutions to a compositional problem and then had to choose one for the final piece. In some cases, the functional differences between the alternatives are minimal; any one would work as well as another, with only slightly different implications.

    In other circumstances, however, making the inspired choice is absolutely critical. Narrative structures like beginnings, endings, and points of transition and morphosis (on multiple timescales) are especially critical junctures. These points of inflection — the articulators of form — are precisely where algorithmic methods tend to be particularly weak.”

On that last point: form, in my music, tends to be mostly in the domains of improvisation and post-production. The melody lines, rhythmic patterns and so on might be algorithmic or generative, or I might have codified them into a sequence intentionally, or in some cases they might be improvised too. On a broad level, the sounds are designed with a mix of intention and serendipity, while individual events are often a coincidence of various interactions — to which I react while improvising. I think it’s a neat system and it’s a lot of fun to work with.

The algorithmic stuff varies. Some of it’s simply “I want this particular rhythm and I can achieve that with three lines of code”, which is hardly algorithmic at all. Sometimes it’s an interaction of multiple patterns, yielding a result I didn’t “write” in order to get a sort of intentionally inhuman groove. Sometimes it includes behavioral rules that someone else wrote (as when I use Marbles) and/or which has random or chaotic elements, or interactions of analog electronics. And usually as I’m assembling these things it’s in an improvisational, iterative way. It’s certainly not a formal process where I declare a bunch of rules and then that’s the composition I will accept.
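The “interaction of multiple patterns” idea can be shown in a tiny sketch: layer two pulse patterns of coprime lengths and the combination only repeats at their least common multiple, producing a longer groove nobody wrote out by hand. The patterns themselves are made up for illustration, not from any actual piece.

```python
from itertools import cycle, islice
from math import lcm

a = [1, 0, 0, 1, 0]            # a 5-step pulse pattern
b = [1, 0, 1, 0, 0, 0, 1]      # a 7-step pulse pattern

period = lcm(len(a), len(b))   # lcm(5, 7) = 35 steps
# Layer the two cycles: a pulse sounds when either pattern fires.
composite = [x or y for x, y in zip(islice(cycle(a), period),
                                    islice(cycle(b), period))]
```

Neither 5-step nor 7-step loop “contains” the 35-step result; it emerges from their interaction, which is the intentionally inhuman groove mentioned above.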