inharmonic collision

Monday evening, I set up my composition idea as planned: three triangle oscillators feeding a sine shaper, with two of them under polymetric control from Teletype (3-in-8 vs 4-in-9 Euclidean rhythms) and one under manual control.

It was frankly pretty boring when I was just using octaves. So I decided to go off the rails a bit and sequence pitches with the Sputnik 5-Step Voltage Source. I clocked it with the master clock, regardless of the rhythmic pattern; the first voice used a channel directly and the second sampled a channel every 8 clock steps. So what we’d get is a complex pattern that starts something like this:

Where time runs left to right, and each color in each lane represents a knob on the 5-Step (the colors don’t indicate what the pitch value is set to; “red” in the top and bottom lanes isn’t necessarily the same pitch, though sometimes it is, and different colors within the same lane are in some cases tuned to the same pitch). The pattern in the top lane runs for 40 beats before repeating, and the bottom lane runs for 72 beats. Because these two are interacting thanks to the sine shaper, they can’t be thought of as individual parts, so the combined pattern takes lcm(40, 72) = 360 beats to repeat. At the tempo I used, the whole recording covers roughly four fifths of a full cycle. (Or… it would, except I put the two patterns under manual control, suppressing the triggers while letting the sequencer keep clocking. Monkey wrench!)
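The trigger patterns and the period arithmetic can be sketched in a few lines of Python — a rough model of the idea, not what Teletype actually runs, and `euclid` here produces one common rotation of each Euclidean pattern, which may differ from Teletype’s:

```python
from math import lcm

def euclid(pulses, steps):
    """Distribute `pulses` triggers as evenly as possible over `steps` slots
    (a Bresenham-style construction of a Euclidean rhythm)."""
    return [(i * pulses) % steps < pulses for i in range(steps)]

# The two Teletype patterns from the session:
print(euclid(3, 8))  # [True, False, False, True, False, False, True, False]
print(euclid(4, 9))  # 4 evenly spread pulses over 9 steps

# The combined pattern only repeats when both lane periods line up again:
print(lcm(40, 72))  # 360
```

`math.lcm` needs Python 3.9+; on older versions, `steps * pulses // math.gcd(steps, pulses)` does the same job.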

Complex patterns from relatively simple rules. But that was kind of a tangent — the point is, the frequencies I dialed in, relative to each other, often collided in non-integer ratios. Even if they sound good as individual notes played together, when you use them in phase modulation things get a bit dissonant and skronky, with new sidebands at weird frequencies.

An unlikely scenario.

This is the 21st century — beauty is complex, artistic merit isn’t directly tied to beauty, we’re not limiting ourselves to Platonic perfection, and the idea that certain intervals and chords could accidentally invoke Satan isn’t something we lose sleep over anymore. I think the result I got is pretty neat! But it’s not really what I had originally imagined. So I’m going to keep the basics of this idea, follow a different branching path with it and see where that goes.

The third voice, I controlled with the 16n Faderbank — one slider for level, one for pitch. The latter went through the ER-301’s scale quantizer unit, so it always landed on something that fit reasonably well with the other two voices. It turns out this unit supports Scala tuning files, and TIL just how crazy those can get.

Scala is a piece of software and a file format which lets you define scales quite freely — whether you just want to limit something to standard 12TET tuning, or a subset of that (such as pentatonic minor), or just intonation, non-Western scales, xenharmonic tunings, or exactly matching that slightly-off toy piano. The main website for Scala has an archive of 4800 different tuning files and that’s just too much. This is super-specialist stuff with descriptions such as:

  • Archytas[12] (64/63) hobbit, sync beating
  • Supermagic[15] hobbit in 5-limit minimax tuning
  • Big Gulp
  • Degenerate eikosany 3)6 from 1.3.5.9.15.45 tonic 1.3.15
  • Hurdy-Gurdy variation on fractal Gazelle (Rebab tuning)
  • Left Pistol
  • McLaren Rat H1
  • Weak Fokker block tweaked from Dwarf(<14 23 36 40|)
  • Semimarvelous dwarf: 1/4 kleismic dwarf(<16 25 37|)
  • Three circles of four (56/11)^(1/4) fifths with 11/7 as wolf
  • Godzilla-meantone-keemun-flattone wakalix
  • One of the 195 other denizens of the dome of mandala, <14 23 36 40| weakly epimorphic

With all these supermagic hobbits and semimarvelous dwarves and Godzilla, and all the other denizens with their Big Gulps and pistols, where do I even start with this? The answer is, I don’t. I’ll just try making a couple of my own much simpler scales that I can actually understand. Like 5EDO — instead of dividing an octave into 12 tones, divide it into 5.
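A scale that simple is easy to generate by hand. As a sketch — based on my reading of the .scl format, where `!` lines are comments, then come a description line and a note count, the 1/1 root is implicit, and the octave is listed as the last degree — a tiny function can emit any equal division of the octave:

```python
def edo_scl(divisions, name=None):
    """Build the text of a Scala .scl file for an equal division of the
    octave. Cents values must contain a decimal point in this format."""
    name = name or f"{divisions}EDO"
    lines = [
        f"! {name}.scl",
        "!",
        f"{divisions} equal divisions of the octave",
        str(divisions),
        "!",
    ]
    # The 1/1 root is implied; list degrees 1..n, ending on the octave.
    for i in range(1, divisions + 1):
        lines.append(f" {1200.0 * i / divisions:.5f}")
    return "\n".join(lines)

print(edo_scl(5))  # 5EDO: steps of 240 cents
```

For 5EDO that yields pitches at 240, 480, 720, 960 and 1200 cents — five equal steps, none of which land on familiar 12TET intervals except the octave.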

think tank

Today’s an especially slow workday and I’ve been reading a lot of interviews and articles at The Creative Independent. I haven’t had any particular epiphanies as a result, but it’s stirring the brain juices a little.

But I did have a minor revelation this morning about the connection between wavefolding and phase modulation thanks to Open Music Labs being, well, open about their designs. In particular, the Sinulator, which is similar to the Happy Nerding FM Aid — a module I owned once, let go of because I figured I had enough FM/PM capability in my system. (Frequency modulation and phase modulation are very closely related; the simple version is that a continuously advancing phase is frequency, and PM is basically indistinguishable from linear FM in terms of results.) I’ve wished a few times that I’d kept the FM Aid, but could sometimes get similar results out of Crossfold. I didn’t understand why, though.

OML’s description and blessedly simple mathematical formula (no calculus or funny Greek letters!) make me realize, this is basically the same thing described by Navs some time ago (I think in a forum post rather than the blog though). And it ties in with my recent efforts to do nice-sounding wavefolding with the ER-301.

“Sine shaping” is a commonly used shortcut to wavefolding as well as triangle-to-sine shaping. It’s literally just plugging an audio input in as x in the function sin(xg), where g is the gain.
If g is 1 and x happens to be a sawtooth or triangle wave, you’ll get a sine wave out of it. If the input is a sine, you get a sine that folds back on itself a bit… and the higher g goes above 1, the more the output folds over on itself, getting more complex and bright. (Better-sounding analog wavefolders and their digital imitators don’t map to a sine exactly, but it’s a similar-looking curve. Also, they use multiple stages in series for more complex behavior. But a sine totally does work.) What I learned here is that adding another term inside that function shifts the phase of the output… tada, phase modulation exactly how Yamaha did it in the DX series (and then confusingly called it FM). A whole lot of puzzle pieces clicked together.

Anyway… because this model just adds the two inputs before the sine function, it doesn’t really matter which is the carrier and which is the modulator. Why not use independent VCAs on both, and sequence them separately? Maybe some kind of polymetric, occasionally intersecting thing where it’s like two interacting fields, totally fitting the theme of the album I’m working on? To lend form to the piece, one of those inputs can be transposed, have its envelope or intensity changed, or a third input can be added (it’s just addition)…
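To convince myself the puzzle pieces really do click, here’s a toy version in Python — not OML’s circuit or any particular module, just the bare math:

```python
import math

def triangle(phase):
    """Naive triangle wave in [-1, 1]; peak at phase 0, trough at 0.5."""
    return 4.0 * abs((phase % 1.0) - 0.5) - 1.0

def sine_shape(*inputs, gain=1.0):
    """The whole trick: sum the inputs, scale, take the sine.
    sin(g * (a + b)) -- since a and b are simply added, "carrier" and
    "modulator" are interchangeable, and extra terms shift the phase."""
    return math.sin(gain * sum(inputs) * math.pi / 2)

# gain = 1: a full-scale triangle peak (+1) maps to sin(pi/2) = +1,
# so the triangle's corners get rounded into a sine shape.
print(sine_shape(triangle(0.0), gain=1.0))  # 1.0
# gain = 3: the same +1 input lands at sin(3*pi/2) = -1 -- the wave
# has folded back over itself, adding brightness.
print(sine_shape(triangle(0.0), gain=3.0))
```

Feeding a second oscillator in as another argument is phase modulation; swapping which input you call the carrier changes nothing, which is exactly the symmetry that makes the two-VCA idea work.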

I don’t normally plan my compositions quite so much when I’m away from the instrument itself, and I almost never get this… academic about it. (Is that a dirty word?) But I’m eager to try this one.

So there’s a free peek inside a process I don’t usually use.

put some clothes on, Your Majesty

Reading about the history of synths, or about the use of synths in rock, one always comes across worshipful descriptions of Keith Emerson’s “Lucky Man” solo and the Moog Modular he took on tour to perform it.

I never really bothered to check it out. I don’t think I ever heard the song, or paid attention if I did. But I took the authors at face value: that this was a blistering, awesome performance that was part of the pincer maneuver which made Moog more or less a household name and doomed Buchla to relative obscurity (Switched-On Bach being the other) and that Emerson was a master both of modular synthesis and rock performance.

My curiosity was finally prompted by the MST3K riffing on Monster A-Go Go which made references to both “Fly Like An Eagle” and “Lucky Man” during a particularly synthy part of the soundtrack.

So I watched a couple of videos, and… well. Maybe a rock fan in 1970, having seen nothing like it, would have been blown away. But the first thing I noticed is the patch is really, really simple. Five years later he could have been playing that on the one-oscillator Micromoog. At the time, he could have pulled out 95% of the patch cable spaghetti draping the thing. Sure, it had an impressively powerful bass sound which Emerson made good use of, but there was nothing very sophisticated about the patch. The synth was mostly serving as a prop. “Look at all this equipment and all those cables, this guy must be a wizard!”

(I’m not disparaging Emerson’s synthesis skills — maybe this is the exact sound he was going for. Maybe it was set up for a quick between-songs repatch to do something completely different; pull one cable here and plug one in there and it’s ready to go. But I do think a lot of it was for show.)

The second thing is, the timing was really sloppy, at least in the performances I watched. Particularly in a more recent performance, there was a slow portamento, and I wonder if that was throwing off his playing, because he just wasn’t playing to the tempo of the rest of the band. It didn’t feel like expressive timing, just bad timing. Otherwise, what he played was… okay, but not the most acrobatic or virtuosic or creative solo I’ve ever heard by any means.

So, yeah. I guess this is just one of those cases where the historical context was the fuel and the art was a spark; with the fuel burned out we can see that the spark was a small thing.

droning on

I wrote up a forum post in a “how to synthesize drones” thread which, I think, contains the most coherent thoughts I’ve put together on the subject. Maybe that’s not saying much, but here it is for posterity, expanded a little bit.

I use the word “drone” in a more general sense than some people, but more strictly than others. If I control a sound in terms of level rather than “playing notes”, I generally consider it a drone. It’s not an absolute rule, but drones usually have a (more or less) fixed pitch. There may be rhythmic accents.

I don’t quite understand how a band like Earth is considered “drone” when they’re clearly playing riffs, have melodies and standard chord progressions and so on. That’s far too loose a definition for me. Nor does it have to be an unrelenting, 25-minute long pure sine wave.

When I create drone-based music, this is what I think about:

  • Depth, width, power, distance, gentleness vs. forcefulness, cleanliness vs. dirtiness, spectral balance, harmonic structure.
  • Texture. Micro-structure, granularity, etc. This can come from FM or other (near-) audio rate modulation, the beating of inharmonic frequencies against each other, repeating delays, granular synthesis, timestretching, the content of any samples used, or other sources. It could be a “natural” and inherent part of the means of sound production, or it could be intentionally added modulation. As an example, the sound of the carrier of a dial-up modem is a steady beep, which I would categorize as having little or no texture, but when the actual signals modulate it, we can hear structure in it even if it’s too rapid for us to follow — that’s a kind of texture.
  • The balance between stasis and change in the medium term. Perhaps it’s a weakness of mine, but I want some motion to take the place of discrete notes and melodies. That motion could be the result of random or periodic modulation (including rhythm), “natural” feedback processes, or manual (usually improvised) control.
  • Form. That is, structural change over time on the “song” scale.
    Simply fading in, holding steady for some minutes, and fading out is usually unsatisfying, regardless of any meta-narrative about separating music from time, or a temporal window on an endless vibration. Changes in volume, timbre, adding or removing layers, changes in harmonic structure or spatial cues or background noise add interest even when they are not the defining feature of the piece. Usually I don’t plan form in advance, but set up opportunities for improvisation and then let the form flow naturally as I record. If that’s not effective enough, I will edit the recording to enhance or expand these structural changes, or reject the recording if I feel it just doesn’t say anything.

I almost always set up at least two voices, because relative variations in level, spatial characteristics or timbre can be much more interesting than absolute variations of a single voice, and because they can lead to shifts in texture or the creation of new textures. Sometimes extra voices have their origin in the original voice, and just involve additional or different processing.

Although I’m talking about drones here, this corresponds quite a lot to Curtis Roads’ concept of “multiscale composition.” As I’ve said before, my act of composition is spread out between pre-recording, recording and post-recording phases — but it’s all composition, even if there are no “notes”, some is spontaneous, and some a reaction. Why not use the ears as a tool of imagination, and not just the brain?

No Sudden Moves

I’ve shared this recording elsewhere, so why not here?

  • Plaits in waveshaping mode (with an LFO over the level) feeds the audio input of Rings in inharmonic string mode, which feeds the ER-301.
  • In ER-301 channel 1, three Schroeder allpass filters in a feedback loop are manually controlled by a 16n Faderbank. In channel 2 there’s just a grain delay in a feedback loop with its time manually controlled. There’s a bit of cross-feedback.
  • The two channels are recorded as mid-side stereo, and some ValhallaRoom is applied.
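For context on channel 1’s building block: a Schroeder allpass section passes every frequency at the same gain but delays each by a different amount, which is why a few of them in a feedback loop diffuse a sound into a wash. A minimal sketch — the delay length and coefficient here are arbitrary, not the values of the ER-301 unit:

```python
from collections import deque

def schroeder_allpass(signal, delay, g):
    """One Schroeder allpass section: y[n] = -g*x[n] + x[n-M] + g*y[n-M].
    Flat magnitude response, frequency-dependent phase delay."""
    xbuf = deque([0.0] * delay, maxlen=delay)  # oldest sample x[n-M] at index 0
    ybuf = deque([0.0] * delay, maxlen=delay)  # oldest output y[n-M] at index 0
    out = []
    for x in signal:
        y = -g * x + xbuf[0] + g * ybuf[0]
        xbuf.append(x)
        ybuf.append(y)
        out.append(y)
    return out

# Impulse response: a -g spike immediately, then decaying echoes every M samples.
impulse = [1.0] + [0.0] * 7
print(schroeder_allpass(impulse, delay=3, g=0.5))
# -> [-0.5, 0.0, 0.0, 0.75, 0.0, 0.0, 0.375, 0.0]
```

Chaining several sections with mutually prime delay lengths, then feeding the output back to the input, gives the dense smear of echoes that the faders were taming.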

I’m really enjoying the 16n Faderbank as a controller for all sorts of things. In another recent recording, I used it to control levels in Maschine over USB MIDI, as well as the levels and timbre of a harmonic oscillator in the ER-301. In this one, constant manual micro-adjustments of the allpass filters prevented the feedback from building up into something piercing and unpleasant, and changes in harmonic content were a combination of tweaking Plaits and Rings as well as the filters. The impression I get from this piece is light refracting off the curved surface of some mysterious alien artifact, perhaps… which might have been better title inspiration than what I chose. Ah well.


I’m reading A Study In Honor, a novel set during a near-future civil war. A post-Trump leftist government implements universal healthcare, guarantees LGBT rights, and does much for racial justice and income inequality and so on — and then radicalized right-wing idiots are so upset about it that some states start a war, and federal centrists are in the process of eroding rights and breaking the economy again to placate the crazies. Our protagonist is a wounded veteran of that war, a queer woman of color who suffers from PTSD, a poorly fitting, irritating, poorly functioning prosthetic arm that the VA won’t replace, and fresh waves of alienation. Needless to say, this has not been a happy story so far. It’s well-written and gripping, though.

So with that bouncing around in my subconscious, last night the infamous Shitgibbon-in-Chief actually appeared in my dreams. This cartoonish con man has been a mental health threat to the entire country for the last 30 months or so, but up until now he’s avoided direct appearances in my brain at night. Well… he’s officially banished.

it’s just a show, I should really just relax

Recent dreams:

  • I was the third of three drummers for a high school heavy metal band. Like, an official class, taught by the guy who had taught the school’s jazz ensemble in real life. And just like the jazz ensemble in real life, we were really not good.
  • Throwing books at Hitler, while he sat for an interview with an NPR reporter. Nice hardcover editions of The Sandman graphic novels. I’m not sure if Neil Gaiman would approve, or would suggest I find something else, but it was effective — The Kindly Ones really took a chunk out of his arm, messed up his uniform and completely disrupted the interview.

And speaking of relaxation, I have found that while the nasty-tasting CBD oil helps my anxiety and mood, the capsules I bought from a different company (at a higher concentration, even) just don’t do very much. I must be carrying a lot of tension in my back muscles just from the anxiety, because switching back to the oil for a day relieved a knot that had been bothering me for a week. And here I thought it wasn’t doing that much to help physical pain. Hopefully I can find an option that is less yucky, but still effective.


In local music news: those thoughts about an “acoustic universe” — and maybe watching season three of The Expanse — led to a general concept for the next album. The working title is Passing Through, as in both travel and permeation. I’ve got four candidate songs in place now and one rejected, all coming from experiments with QPAS, ER-301, and the Volca Modular.

The VM is a fascinating and sometimes frustrating little beast. It’s rough around the edges and has a lot of limitations, compared to Eurorack modules or software. Some of those are the “do more with less” kind which encourage creativity; some give it character; some are just annoying. But overall it’s pretty amazing for such a tiny, cheap synth.

Some people have been trying to compare it to a Buchla Music Easel (at $3000+) or a Make Noise 0-Coast (at $500+) and that doesn’t seem fair. But I think I can honestly say it’s at least as interesting as an Easel, and I honestly like its wavefolding sound and its LPGs better than the 0-Coast. (But the 0-Coast is really good at big, solid triangle basses, which the VM will never be, and it feels like a really good, well-calibrated, quality instrument and not a toy.)

My new case arrived, and I was eager to get moved into it but my spouse wisely pointed out that it’s probably better to burn it — that is, in the sense of pyrography — before loading it up with fancy electronics. Okay, that makes sense. 🙂 I spent a few hours poring over clip art and tattoo designs of stars, meteors and black holes for inspiration; she spent a day or so working up a rough draft design in a paint program. I think it’s going to be pretty spiffy and I’m eager to see the results!

I’ve occasionally thought about getting a tattoo, but decisiveness was not my strong suit and most of the symbology that meant much to me wasn’t something I’d want to wear on my skin. But it strikes me that it’d be really cool to have a tattoo with a neat design made by my spouse, which has some thematic similarity to her tattoo, and matches the design on my instrument…

thinking with the ears

I begin with a digression, because I must share this.

You’re welcome.

I mean. Once you have combined the concepts of “donut”, “prune” and “salad” into a single dish, why not serve it with mayonnaise?

What gets me here is the apparent random anarchy of the ingredient choices, paired with the strictly limited, generic, whitebread pool of possible ingredients that must have been drawn from. There are no spices or seasonings, nothing that would indicate a culture — except we all know it’s got to be “American, white, 1950-1975.” It’s almost mechanical, like it was created with a very crude randomization algorithm that lacks the finesse and charm of a neural network recipe.

I appreciate how this one (brown) leaves certain factors — not least, all of the actual preparation instructions — up to the cook’s improvisational judgement, so that each performance is unique. John Cage would approve.

Maybe the weirdest thing about the donut prune salad recipe is that it’s not unique. Coincidence or conspiracy?

The aesthetics of the first one, such as they are, seem a little better but I’d honestly rather 86 the mayo and use cream rather than cottage cheese. I’d also rather break my left arm than my right arm.

And in fact yes, I have added “Donut Salad” to my list of potential song titles, but under the category of “probably will never use.”


Anyway, what I was going to write about: I’m currently reading DJ Spooky’s Sound Unbound: Sampling Digital Music and Culture and it’s thrown some provoking thoughts my way. One of them is the idea that our primary mode of thinking is a visual/spatial one, with a coordinate grid, objects that take up space, and the spaces between them. The argument is that this spread in Western thought during the Renaissance with Descartes, the printing press, explorers and maps, etc. It’s probably not much of a stretch to say that movies, television and computers were heavily influenced by, but also strongly reinforced, this spatial paradigm.

It all seems very rational, scientific, and straightforward. Of course, it’s pretty wrong and/or useless at the quantum level, or when considering energy, or for a lot of metaphorical or magical uses, but it’s pervasive and sometimes we try to make things fit anyway. My career has been based on it — 3D graphics and modeling for games and then engineering.

I will speculate with some confidence that the previous mode of thought for most people for most of human history was a bit less spatial and more narrative. When we say “myth” now, unfortunately there’s usually a connotation of falsehood, disdain for the primitive etc. rather than the understanding that the idea of truth itself wasn’t necessarily so fixed and binary.

But steering a little more toward the inspiration from the book: the idea of an “acoustic” mode of thinking, where the measurement of space is more vague, and reality is inhabited by an infinity of interpenetrating fields of energy and motion, pressure and density, transmission and absorption and reflection. There are no distinct “objects,” just a whole where any divisions one makes are arbitrary slices of a spectrum that we know we could have sliced up differently. This ties back into what Curtis Roads was talking about when he claimed electronic music removes dependency on notes.

Of course, we still have a tendency to think of sound in terms of grid coordinate systems:

  • Amplitude over (a very short) time; literally the path the speaker cones will trace, sending waves of pressure through the air but also through wood, water, metal, bone, brick (not very well) etc.
  • Intensity over frequency, on a logarithmic scale; the strength of various frequencies of sound at a single moment (by some mathematical definition).
  • Intensity represented by brightness/color, with the frequency spectrum on the vertical axis and a span of time on the horizontal; we see a note or chord as a series of stripes, and notice that higher frequencies fade away faster than lower ones; there’s some intrusion of broadband noise in the middle.
  • Musical notes in a sequencer. Another kind of frequency scale on the vertical axis and time on the horizontal; much easier to identify pitches, rhythm, music theory type stuff, but it says nothing about tempo, timbre, volume, “expression”, etc. Notes aligned on this grid will be perfectly in time with 32nd, 16th, 8th, quarter, half or whole notes…
  • This rhythm generator is even called Grids.
  • This is a Monome Grid, a button/light controller with many different software- and hardware-based friends, often used for music sequencing.
  • And this module is named for Descartes and is referred to as a “Cartesian sequencer” due to its ability to navigate in two or three dimensions, as opposed to linear sequencers which navigate either forward or backward but are still, conceptually, grids.

Grids are certainly a useful paradigm, in music and outside it. But it is also very much worth simultaneously thinking about all those overlapping, permeating, permeable fields of energy. Blobs rather than objects. Sounds, rather than notes. Salads, rather than donuts and prunes (sorry). Not just in terms of music and sound, but whatever else may apply. Personal relationships, memes, influences, cultures, societies? Economies, ecologies? Magic, mysticism? In a sense, I think this “acoustic” view of a universe is closer to the narrative one than the visual view is. (And there’s that word “view”, illustrating the bias… and oh we’re illustrating now, also visual…)

Using some of those grid-based tools above, I did some editing this evening of a recording I’d made earlier. There was a point where a feedback-based drone fell into a particular chord, which I thought made a much nicer ending than what happened later. So I took a bit of that ending, ran it through the granular player in the ER-301 to extend it for several seconds, resampled that and smoothly merged it back into the original audio — one continuous drone. No longer two things spliced, nor five thousand overlapping grains of sound; those metaphors stopped being useful, just like eggs stop being eggs when they’re part of a cake.
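Underneath the granular extension, the “smoothly merged” step is an equal-power crossfade. A sketch of that splice, with plain Python lists standing in for audio buffers — the sin/cos gain curves are the standard choice for blending uncorrelated material, not anything specific to the ER-301:

```python
import math

def equal_power_crossfade(a, b, overlap):
    """Splice buffer `b` onto buffer `a`, blending the last `overlap`
    samples of `a` into the first `overlap` samples of `b` with
    sin/cos gain curves (roughly constant power across the blend)."""
    out = list(a[:len(a) - overlap])
    for i in range(overlap):
        t = i / max(overlap - 1, 1)
        g_out = math.cos(t * math.pi / 2)  # fades a out: 1 -> 0
        g_in = math.sin(t * math.pi / 2)   # fades b in:  0 -> 1
        out.append(a[len(a) - overlap + i] * g_out + b[i] * g_in)
    out.extend(b[overlap:])
    return out
```

With a long enough overlap on material this similar, there’s no audible seam — which is the whole point.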

highlights

I’ve just finished reading Curtis Roads’ Composing Electronic Music: A New Aesthetic.

Roads has some pet techniques and technologies he is fond of (having developed some of them) as well as some pet compositional concepts, and tries to shoehorn mentions of them in everywhere. Granular synthesis is a big one. “Multiscale” anything is another (along with macroscale, mesoscale and microscale). “Dictionary-based pursuit” is one I’ve never heard of before and can’t actually find much about.

Roads comes from the more academic side of electronic music, as opposed to the more “street” end of things, or the artist-hobbyist sphere where I would say I am. But he recognizes that music is very much a human, emotional, irrational, even magical endeavor and that prescriptive theory, formalism, etc. have their limits.

The book was primarily about composition — and by the author’s own admission, his own views of composition. He gives improvisation credit for validity but says it’s outside his sphere. Still, I found some of the thinking relevant to my partially composed, partially improvised style.

At times he pushes a little too hard on the idea that electronic music is radically different from everything that came before. For instance, the idea that a note is an atomic, indivisible and homogeneous unit, defined only by a pitch and a volume, easily captured in all its completeness in musical notation — it completely flies in the face of pretty much any vocal performance ever, as well as many other instruments. Certainly there have been a handful of composers who believed that the written composition is the real music and it need not actually be heard or performed. But while clearly not agreeing with them, he still claims that it was electronic music that freed composers from the tyranny of the note, and introduced timbre as a compositional element (somebody please show him a pipe organ, or perhaps any orchestral score).

He has something of a point, but he takes it too far. Meanwhile a lot of electronic musicians don’t take advantage of that freedom — especially in popular genres there’s still a fixation on notes and scales and chords as distinct events — that’s why we have MIDI and why it mostly works — and a tendency to treat the timbre of a part as a mostly static thing, like choosing which instrument in the orchestra gets which lines.

And I’m also being picky — it was a thoughtful and thought-provoking book overall. I awkwardly highlighted a few passages on my Kindle, though in some cases I’m not sure why:

  • “There is no such thing as an avant-garde artist. This is an idea fabricated by a lazy public and by the critics that hold them on a leash. The artist is always part of his epoch, because his mission is to create this epoch. It is the public that trails behind, forming an arrière-garde.” (This is an Edgard Varèse quote.)
  • “Music” (well, I can’t argue with that.)
  • “We experience music in real time as a flux of energetic forces. The instantaneous experience of music leaves behind a wake of memories in the face of a fog of anticipation.”
  • “stationary processes”
  • “spatial patterns by subtraction. Algorithmic cavitation”
  • “cavitation”
  • “apophenia”
  • “Computer programs are nothing more than human decisions in coded form. Why should a decision that is coded in a program be more important than a decision that is not coded?”
  • “Most compositional decisions are loosely constrained. That is, there is no unique solution to a given problem; several outcomes are possible. For example, I have often composed several possible alternative solutions to a compositional problem and then had to choose one for the final piece. In some cases, the functional differences between the alternatives are minimal; any one would work as well as another, with only slightly different implications.

    In other circumstances, however, making the inspired choice is absolutely critical. Narrative structures like beginnings, endings, and points of transition and morphosis (on multiple timescales) are especially critical junctures. These points of inflection — the articulators of form — are precisely where algorithmic methods tend to be particularly weak.”

On that last point: form, in my music, tends to be mostly in the domains of improvisation and post-production. The melody lines, rhythmic patterns and so on might be algorithmic or generative, or I might have codified them into a sequence intentionally, or in some cases they might be improvised too. On a broad level, the sounds are designed with a mix of intention and serendipity, while individual events are often a coincidence of various interactions — to which I react while improvising. I think it’s a neat system and it’s a lot of fun to work with.

The algorithmic stuff varies. Some of it’s simply “I want this particular rhythm and I can achieve that with three lines of code”, which is hardly algorithmic at all. Sometimes it’s an interaction of multiple patterns, yielding a result I didn’t “write” in order to get a sort of intentionally inhuman groove. Sometimes it includes behavioral rules that someone else wrote (as when I use Marbles) and/or which has random or chaotic elements, or interactions of analog electronics. And usually as I’m assembling these things it’s in an improvisational, iterative way. It’s certainly not a formal process where I declare a bunch of rules and then that’s the composition I will accept.


a small gesture

All Starthief albums on Bandcamp are now set to “pay what you want.” $0 is enough to download my albums, but people can add a tip if they like (or if the maximum free download count is exceeded and Bandcamp temporarily enforces a minimum price).

Across six albums over a year, I’ve brought in $126. It’s not nothing, but it’s not significant either; a fraction of a fraction of minimum wage, if I wanted to look at it that way. I choose instead to think of it as a collection of tokens of appreciation.

I think “pay what you want” is more consistent with my values. If I made music for the money, I’d be (A) actively trying to seek a wider audience, (B) making the sort of music I think a wider audience would like, (C) playing live, making videos etc. to grow that audience, and (D) probably failing to meet my goals and stressing over it.

Half of that income came from Materials, and I know exactly why. My audience right now is almost entirely fellow electronic musicians, to whom gear demos and technical bits are mostly more enticing than yet another one of their several hundred acquaintances releasing yet another album that takes time to listen to. Conversely, I have two albums that sold $0 (aside from people who bought my whole catalog as a bundle because they liked Materials) but got positive comments.

I actually appreciate the positive comments more. With the income from my day job, validation is rarer than dollars. When fellow musicians, or anyone else, has nice things to say about my work it confirms that my aesthetic sense isn’t completely alien to everyone else’s (or more bluntly, that I’m not terrible and wrong).

I do like it when people like my art, and I recognize that I am not great at finding those people — so I may still wind up trying to get onto a label that will help with that.