slightly delayed

Next weekend is Superbooth — modular synthesis’ biggest trade show / gathering / bunch of performances, in Berlin. There will be announcements of new stuff. I’ll be out in the woods, camping and not following the hype (or if there’s a good connection, checking websites a couple times a day maybe).

But right now when I look at my system, I think “geez, there’s a lot of stuff here to explore” — partially because of the wide and deep ocean that is the Rainmaker. I don’t want to add to that for a bit; I just want to grab a spoon and start digging. So the gear is going to sit as-is for a while, at version 2.05 or whatever it is, and I won’t commit the last 20HP. I have a few thoughts on it, but I’ll reserve most of those for my personal “what if” notes.

One thought I’ve been having is that I kind of miss having nice hands-on complex oscillators. The ER-301 is capable, and is a fantastic blank slate. But neither the unit that I wrote for it nor the Volca Modular is quite filling that ecological niche. I have some thoughts about a different way to solve this in the ER-301 — separating the oscillators onto different channels and routing via a combination of patch cables and “prewired” internal connections. If that doesn’t get me there, I see a Synth Farm 2.2 plan (not one that replaces the ER-301, but other things).

Between the recent release of the long-awaited Valhalla Delay and especially my first explorations of the Rainmaker, I’ve gained some new insights into the relationship between delays (especially multitap) and comb filtering, and what can be done with them. And I’ve taken that insight back into exploring the older Valhalla UberMod, which is a multitap delay with a quite different paradigm. The result is something a little like the big knowledge download I recently got with wavefolding/FM/PM, but more on the intuitive side and much less geometric.
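To make the delay/comb connection concrete: a single feedforward delay tap mixed back with the dry signal is already a comb filter. Here’s a minimal numpy sketch of that (my own illustration, with made-up parameter values — nothing from the Rainmaker or the Valhalla plugins):

```python
import numpy as np

fs = 48000      # sample rate in Hz
delay = 240     # tap delay in samples (5 ms)
g = 0.9         # tap gain

# Feedforward comb: y[n] = x[n] + g * x[n - delay].
# Its magnitude response is |1 + g * exp(-1j * w * delay)|:
# peaks every fs/delay Hz, notches halfway between them.
freqs = np.linspace(1, 2000, 2000)
mag = np.abs(1 + g * np.exp(-1j * 2 * np.pi * freqs / fs * delay))

print("peak spacing:", fs / delay, "Hz")                  # 200 Hz
print("at 200 Hz:", mag[np.argmin(np.abs(freqs - 200))])  # ~1.9 (peak)
print("at 100 Hz:", mag[np.argmin(np.abs(freqs - 100))])  # ~0.1 (notch)
```

Every additional tap adds another term to that sum, which is why a multitap delay can sculpt much more elaborate comb responses than a single echo.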

inharmonic collision

Monday evening, I set up my composition idea as planned: three triangle oscillators feeding a sine shaper, with two of them under polymetric control from Teletype (3-in-8 vs 4-in-9 Euclidean rhythms) and one under manual control.

It was frankly pretty boring when I was just using octaves. So I decided to go off the rails a bit and sequence pitches with the Sputnik 5-Step Voltage Source. I clocked it with the master clock, regardless of the rhythmic pattern; the first voice used a channel directly and the second sampled a channel every 8 clock steps. So what we’d get is a complex pattern that starts something like this:

Time runs left to right, and each color in each lane represents a knob on the 5-Step (the colors don’t indicate specific pitch values; “red” in the top lane isn’t necessarily the same pitch as “red” in the bottom, though sometimes it is, and different colors within the same lane are in some cases tuned to the same pitch). The pattern in the top lane runs for 40 beats before repeating, and the bottom lane runs for 72 beats. Because the two voices interact through the sine shaper, they can’t be thought of as individual parts, so it takes 360 beats (the least common multiple of 40 and 72) for the combined pattern to repeat. At the tempo I used, the whole recording covers a large fraction of one full cycle. (Or… it would, except I put the two patterns under manual control, suppressing the triggers while letting the sequencer keep clocking. Monkey wrench!)
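If you want to play along at home, the arithmetic is easy to sketch in Python (this is my own reconstruction for illustration, not the actual Teletype script):

```python
from math import gcd

def euclid(pulses, steps):
    """Distribute `pulses` as evenly as possible over `steps`
    (a Bresenham-style take on the Euclidean rhythm; it yields a
    rotation of the canonical pattern)."""
    return [(i * pulses) % steps < pulses for i in range(steps)]

def lcm(a, b):
    return a * b // gcd(a, b)

print(euclid(3, 8))   # the 3-in-8 trigger pattern
print(euclid(4, 9))   # the 4-in-9 trigger pattern

# A 5-step pitch sequence against an 8-step trigger pattern repeats
# every lcm(5, 8) = 40 beats; the combined two-voice pattern repeats
# every lcm(40, 72) = 360 beats.
print(lcm(5, 8), lcm(40, 72))
```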

Complex patterns from relatively simple rules. But that was kind of a tangent — the point is, the frequencies I dialed in, relative to each other, often collided in non-integer ratios. Even if they sound good as individual notes played together, when you use them in phase modulation things get a bit dissonant and skronky, with new sidebands at weird frequencies.
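A quick way to see where the skronk comes from: simple phase modulation puts sidebands at the carrier frequency plus and minus integer multiples of the modulator frequency. A toy calculation (the frequencies here are arbitrary, not from my patch):

```python
def sidebands(fc, fm, k=4):
    """First k pairs of PM/FM sidebands: |fc ± n*fm|."""
    return sorted(abs(fc + n * fm) for n in range(-k, k + 1))

# Integer ratio: every sideband lands on a multiple of 100 Hz
print(sidebands(300.0, 100.0))  # [0, 100, 100, 200, 300, ..., 700]

# Non-integer ratio: sidebands land at inharmonic frequencies
print(sidebands(300.0, 137.0))  # [26, 111, 163, 248, 300, 437, ...]
```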

An unlikely scenario.

This is the 21st century — beauty is complex, artistic merit isn’t directly tied to beauty, we’re not limiting ourselves to Platonic perfection, and the idea that certain intervals and chords could accidentally invoke Satan isn’t something we lose sleep over anymore. I think the result I got is pretty neat! But it’s not really what I had originally imagined. So I’m going to keep the basics of this idea, follow a different branching path with it and see where that goes.

The third voice, I controlled with the 16n Faderbank — one slider for level, one for pitch. The latter went through the ER-301’s scale quantizer unit, so it always landed on something that fit reasonably well with the other two voices. It turns out this unit supports Scala tuning files, and TIL just how crazy those can get.

Scala is a piece of software and a file format which lets you define scales quite freely — whether you just want to limit something to standard 12TET tuning, or a subset of that (such as pentatonic minor), or just intonation, non-Western scales, xenharmonic tunings, or exactly matching that slightly-off toy piano. The main website for Scala has an archive of 4800 different tuning files and that’s just too much. This is super-specialist stuff with descriptions such as:

  • Archytas[12] (64/63) hobbit, sync beating
  • Supermagic[15] hobbit in 5-limit minimax tuning
  • Big Gulp
  • Degenerate eikosany 3)6 from 1.3.5.9.15.45 tonic 1.3.15
  • Hurdy-Gurdy variation on fractal Gazelle (Rebab tuning)
  • Left Pistol
  • McLaren Rat H1
  • Weak Fokker block tweaked from Dwarf(<14 23 36 40|)
  • Semimarvelous dwarf: 1/4 kleismic dwarf(<16 25 37|)
  • Three circles of four (56/11)^(1/4) fifths with 11/7 as wolf
  • Godzilla-meantone-keemun-flattone wakalix
  • One of the 195 other denizens of the dome of mandala, <14 23 36 40| weakly epimorphic

With all these supermagic hobbits and semimarvelous dwarves and Godzilla, and all the other denizens with their Big Gulps and pistols, where do I even start with this? The answer is, I don’t. I’ll just try making a couple of my own much simpler scales that I can actually understand. Like 5EDO — instead of dividing an octave into 12 tones, divide it into 5.
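The .scl format itself is pleasantly simple: a description line, the number of notes, then one pitch per line, where cents values must contain a decimal point and ratios are written like 3/2 (the 1/1 tonic is implied). A little generator for scales like my 5EDO idea might look like this (a sketch, not a validated implementation):

```python
def edo_scl(divisions):
    """Build a Scala .scl file for an equal division of the octave.
    Lists `divisions` notes, ending on the 2/1 octave; 1/1 is implicit."""
    step = 1200.0 / divisions
    lines = [f"{divisions}EDO: octave divided into {divisions} equal steps",
             str(divisions)]
    lines += [f"{i * step:.5f}" for i in range(1, divisions)]
    lines.append("2/1")
    return "\n".join(lines) + "\n"

with open("5edo.scl", "w") as f:
    f.write(edo_scl(5))   # steps at 240, 480, 720, 960 cents, then 2/1
```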

highlights

I’ve just finished reading Curtis Roads’ Composing Electronic Music: A New Aesthetic.

Roads has some pet techniques and technologies he is fond of (having developed some of them) as well as some pet compositional concepts, and tries to shoehorn mentions of them in everywhere. Granular synthesis is a big one. “Multiscale” anything is another (along with macroscale, mesoscale and microscale). “Dictionary-based pursuit” is one I’ve never heard of before and can’t actually find much about.

Roads comes from the more academic side of electronic music, as opposed to the more “street” end of things, or the artist-hobbyist sphere where I would say I am. But he recognizes that music is very much a human, emotional, irrational, even magical endeavor and that prescriptive theory, formalism, etc. have their limits.

The book was primarily about composition — and by the author’s own admission, his own views of composition. He gives improvisation credit for validity but says it’s outside his sphere. Still, I found some of the thinking relevant to my partially composed, partially improvised style.

At times he pushes a little too hard at the idea that electronic music is radically different from everything that came before. For instance, the idea that a note is an atomic, indivisible and homogeneous unit, defined only by a pitch and a volume, easily captured in all its completeness in musical notation — it completely flies in the face of pretty much any vocal performance ever, as well as many other instruments. Certainly there have been a handful of composers who believed that the written composition is the real music and it need not actually be heard or performed. But while clearly not agreeing with them, he still claims that it was electronic music that freed composers from the tyranny of the note and introduced timbre as a compositional element (somebody please show him a pipe organ, or perhaps any orchestral score).

He has something of a point, but he takes it too far. Meanwhile, a lot of electronic musicians don’t take advantage of that freedom. Especially in popular genres there’s still a fixation on notes and scales and chords as distinct events (that’s why we have MIDI, and why it mostly works), and a tendency to treat the timbre of a part as a mostly static thing, like choosing which instrument in the orchestra gets which lines.

And I’m also being picky — it was a thoughtful and thought-provoking book overall. I awkwardly highlighted a few passages on my Kindle, though in some cases I’m not sure why:

  • “There is no such thing as an avant-garde artist. This is an idea fabricated by a lazy public and by the critics that hold them on a leash. The artist is always part of his epoch, because his mission is to create this epoch. It is the public that trails behind, forming an arrière-garde.” (This is an Edgard Varèse quote.)
  • “Music” (well, I can’t argue with that.)
  • “We experience music in real time as a flux of energetic forces. The instantaneous experience of music leaves behind a wake of memories in the face of a fog of anticipation.”
  • “stationary processes”
  • “spatial patterns by subtraction. Algorithmic cavitation”
  • “cavitation”
  • “apophenia”
  • “Computer programs are nothing more than human decisions in coded form. Why should a decision that is coded in a program be more important than a decision that is not coded?”
  • “Most compositional decisions are loosely constrained. That is, there is no unique solution to a given problem; several outcomes are possible. For example, I have often composed several possible alternative solutions to a compositional problem and then had to choose one for the final piece. In some cases, the functional differences between the alternatives are minimal; any one would work as well as another, with only slightly different implications.

    In other circumstances, however, making the inspired choice is absolutely critical. Narrative structures like beginnings, endings, and points of transition and morphosis (on multiple timescales) are especially critical junctures. These points of inflection — the articulators of form — are precisely where algorithmic methods tend to be particularly weak.”

On that last point: form, in my music, tends to be mostly in the domains of improvisation and post-production. The melody lines, rhythmic patterns and so on might be algorithmic or generative, or I might have codified them into a sequence intentionally, or in some cases they might be improvised too. On a broad level, the sounds are designed with a mix of intention and serendipity, while individual events are often a coincidence of various interactions — to which I react while improvising. I think it’s a neat system and it’s a lot of fun to work with.

The algorithmic stuff varies. Some of it’s simply “I want this particular rhythm and I can achieve that with three lines of code,” which is hardly algorithmic at all. Sometimes it’s an interaction of multiple patterns, yielding a result I didn’t “write”: a sort of intentionally inhuman groove. Sometimes it includes behavioral rules that someone else wrote (as when I use Marbles), rules with random or chaotic elements, or interactions of analog electronics. And usually, as I’m assembling these things, it’s in an improvisational, iterative way. It’s certainly not a formal process where I declare a bunch of rules and then that’s the composition I will accept.


you had to make final decisions as you went

While this interview goes a little more into recording-engineer geekery (*) than I can appreciate, there’s something to it at the end.

Part of why my process works for me so well is not treating music as something to be assembled jigsaw-like from many little recorded bits. I may not play live in front of audiences, but the recording process is still performance of a sort. I hit record, I do things with my hands that shape the course of the music. It’s usually improvisational to some degree, and it’s usually done in one take. Even when it’s not, there is no separation between what I hear and what I record. All the mixing and effects and stuff are done. Recording is commitment. In some ways it’s more primitive than all the psychedelic rock groups The Ambient Century was praising.

…of course sometimes I will edit my recording, and do things to it that extend and enhance it. But it’s still not cut-and-paste.

(*) Recording engineers are the people who know which microphone to use and exactly where to put it, how to set up the acoustic space, how loud to record on tape (or whatever), how to mix mic signals in ways that sound better instead of causing phase cancellations and such, and all of that. It’s a whole area of expertise that is only tangential to what I do. To me, the voltages in the wires, the data in the computer, and the vibrations in my eardrums are all extremely similar, and the few variables that confound that are constant and familiar. But microphones just don’t “hear” the way our ears do, nor is human attention a factor. If you’ve ever tried to record a neat birdsong, only to have the recording make you aware of traffic noises, an air conditioner, barking dogs in the background that you didn’t notice before, the movement of your hands, and the rustle of your clothes, that’s just a small part of the challenge. And if you’ve ever tried to record extremely loud drums in a concrete warehouse without it sounding like it’s in a concrete warehouse, while still capturing the subtleties of the stick hitting the head, that’s another five or six technical problems to solve.

release the brain clutch

One of the reasons I like the Lines forum so much is getting people’s random insights. Sometimes those come in the form of quotes by other artists. Sometimes they are in the form of art itself. This time it was both:


Bruce Nauman, The True Artist Helps the World by Revealing Mystic Truths, 1967

I don’t know if I agree or disagree with this statement. I reflexively flinch at “true artist”, I don’t think mystic truths can be revealed except by themselves (and mostly they defend themselves), and “helps the world” can sound more than a little conceited. But at the same time… yeah, kinda.

It turns out the artist thought the same thing.

Anyway, the thread where that was posted, along with some other thoughts about what it means to “figure out how to be an artist”, had my head spinning but also inspired me to overcome the lethargy of the last several days and record something.

Four minutes into the first take, which was going excellently, the phone rang. Oops. Gotta remember to put it in airplane mode next time. The next several takes were flubs, but then I finally nailed it.

I’ve decided that the next album will definitely have no theme. In fact, no-theme kind of is its theme. Action over contemplation.

Some time back, while I was Kemetic Orthodox, I was unsure about my musical direction and did some divination to get unstuck. What came to me were the words “use force.” While a lot of music is about control and dexterity (either physical or mental) and is often a very intellectual exercise, sometimes you just have to pull out all the stops. (This phrase refers to the controls on a pipe organ; pulling them all out makes the thing as loud and dense as it can get. It’s like turning the amp up to 11.)

That’s… not exactly what my plan is here, but the idea is: just turn the synth on and do something.

that’s a wrap… almost

I am so bad at gift wrapping.  I think I inherited that from my dad, who is not above using cardboard tubes, newspapers and duct tape to get the job done.  I failed this evening at wrapping a perfectly rectangular package and had to throw the paper out and start over.

I’m doing a better job with the album mastering… except, it turns out, I’ve been doing it wrong.

MusicTech magazine’s current issue has a feature about mastering.  I read it, and most of the advice is on the order of “use this $4,000 worth of software and these $3,000 monitors” and uh, no thanks.  But I did learn that editing the beginning and end of a track is called “topping and tailing,” and that electronic music technology magazines in 2018 are pretty much overpriced garbage.

I got more specific, up-to-date advice from the first website that popped up on a Google search.  It turns out that in general, you should meter in LUFS (“Loudness Units relative to Full Scale”) for loudness and dBTP (decibels True Peak) for peaks.  Nobody thinks you should compress heavily to make your music as loud as possible, because many streaming services normalize everything to the same volume level anyway.  And while I was being relatively gentle with my own work compared to the previous album, I was still going beyond recommended levels.

I’d been ignoring metering plugins because there’s nothing more boring than that, and I assumed dBFS peak and RMS as shown in Sound Forge were good enough anyway.  But the free version of Youlean Loudness Meter shows the relevant info and how I’m breaking the rules.  (-23 LUFS is a European broadcast standard; -14 seems to be a common goal for streaming audio, but the important thing there is more “don’t over-compress.”)  And -1 dBTP is a recommended peak maximum so that MP3 converters don’t accidentally cause clipping.
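If you’d rather measure outside a DAW or plugin, the Python libraries soundfile and pyloudnorm handle the integrated-loudness part; this is a minimal sketch (true peak needs an oversampling meter, which I’m not showing here):

```python
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("track.wav")   # float samples in [-1, 1]

meter = pyln.Meter(rate)            # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)

print(f"integrated loudness: {loudness:.1f} LUFS")
# For reference: -23 LUFS is the EBU R128 broadcast target;
# around -14 LUFS is a common goal for streaming.
```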

Of course it would have been smart to do this research before “nearly finishing” all 11 songs.  In a lot of cases I think I can just turn it down and be fine, but I’ll double-check I didn’t compress too much.

Sound Forge Pro 10 has been crashing on a semi-regular basis, and it’s a few years old now.  I’m happy to see that it’s not abandonware and there is a new version — though Sony (having bought it from Sonic Foundry) sold it to Magix.  Unfortunately, the demo crashes immediately on startup.  I can use it okay after that as long as I never close the bug reporting window, but it doesn’t say a lot about the potential stability, so I’m not sure I want to pay for an upgrade.  Maybe I will look for another tool in the future, though I do like Sound Forge’s dynamics tool and the ease of crossfading every edit.

master blaster

I’ve mentioned I’m in the process of mastering my fifth album of the year.  But what is that, really? Or what is it to me?

What it used to mean was the preparation of a “master” copy of the final mix, to be duplicated — almost like a mold for casting.  For CDs and DVDs, there’s a digital file of course — but for large-scale duplication, a physical glass master is prepared in a cleanroom with a laser burner and a nickel deposition process, and then a “mother” is created as a sort of negative of that, to stamp pits into the actual CDs.

Mastering requires making some adjustments to suit the limitations of the medium.  For instance, if the difference in bass content between the left and right channels on a stereo LP is too great, it will throw the needle right out of the groove.  Digital media have their own limitations, and some master for specific sound systems in clubs.  “Mastering for MP3” or “for iTunes” might be a little snake-oily, but certainly earbuds or headphones are a different sort of target than a big speaker system.  (Generally, I use headphones throughout the whole process, including as my mastering target.)

Historically, recording engineers found this was the best time to make adjustments to the final mix as a whole, so it sounds as consistent and appealing as possible.  That generally means having a nice balance in different frequency bands, but mostly it means loud.

Quiet recordings are more susceptible to noise, from random particles and errors in the medium to cosmic rays and other interference getting amplified along with the music.  Also, louder music generally sounds “better” than quieter music from a psychoacoustic standpoint.  Some stereos have a “loudness” button, which fakes a louder sound by boosting the bass and treble our ears would otherwise miss at low volumes.  But too much loudness causes distortion.

Certain kinds of distortion sound great.  The sound of the electric guitar depends on it.  Different kinds of distortion are involved in synthesis.  Saturation involves nice smooth curvy distortion that sounds “full” and “warm” if it’s kept subtle enough; you can get that by recording to tape a little bit louder than it was designed for.

But distortion can definitely be undesirable, too.  There’s a reason why chords on electric guitars tend to be very simple, such as the open fifth “power chord.”  Distortion creates more harmonics in the signal, and if the harmonic relationships are already complex going in, what comes out will be mushy and gross (technical term).  And a too-loud digital recording is subject to “clipping”, where the peaks of waves are sheared off in a flat, sudden way that is very inharmonic and does not sound natural or organic at all.
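The shapes tell the story. Here’s a toy numpy comparison (illustrative only): tanh bends the wave over gradually, while a hard clip shears it flat and sprays energy much further up the harmonic series.

```python
import numpy as np

t = np.arange(48000) / 48000.0
x = 1.5 * np.sin(2 * np.pi * 100 * t)   # a sine pushed past full scale

saturated = np.tanh(x)                  # smooth, "warm" rounding
clipped = np.clip(x, -1.0, 1.0)         # flat-topped digital clipping

def odd_harmonics(y, fund=100, n=5):
    """Levels of the first n odd harmonics (1-second signal at 48 kHz,
    so FFT bin k corresponds to k Hz)."""
    spec = np.abs(np.fft.rfft(y)) / len(y)
    return [round(float(spec[fund * (2 * k + 1)]), 4) for k in range(n)]

print(odd_harmonics(saturated))  # upper harmonics die off quickly
print(odd_harmonics(clipped))    # upper harmonics linger much longer
```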

Dynamics are important — the balance and change of quiet and loud over time.  Dynamics in playing style create drama and are an important element in groove.  Many instruments, such as drums, are highly dynamic in themselves.  But excessive dynamics in a recording can be annoying (when you constantly have to adjust the volume to hear clearly) and cause technical challenges (the recording is too quiet overall, subtle details are easily missed, or it gets too loud at times).  Often, to make a recording louder and more balanced overall, the engineer has to reduce the dynamics through compression and/or limiting — usually in a way that doesn’t noticeably sound like the dynamics have been changed or anything has been lost — as well as by “riding the gain” more gradually.

The actual dynamics in a file can include all kinds of weirdness we don’t perceive — lots of little spikes of volume that our ears and brains just smooth right over.  That’s why these tricks can work. Both compression and limiting basically just turn down the volume as the signal gets louder, and back up as it calms down — but the devil is in the details.  At what level this attenuation takes place, how smoothly or suddenly it applies on a volume scale, how quickly it applies on a time scale, and so on.  It’s part science and part art.
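As a sketch of those details, here’s a bare-bones feed-forward compressor in Python. It’s nothing like a real mastering plugin, but the moving parts are the same: an envelope follower, a threshold, a ratio, and attack/release times.

```python
import numpy as np

def compress(x, fs, threshold_db=-18.0, ratio=4.0,
             attack_ms=5.0, release_ms=80.0):
    """Turn the signal down by `ratio` once its level crosses the
    threshold; attack/release set how fast the gain reacts."""
    att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(x)
    for n, s in enumerate(x):
        # Envelope follower: fast when the level rises, slow as it falls
        coeff = att if abs(s) > env else rel
        env = coeff * env + (1.0 - coeff) * abs(s)
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over = level_db - threshold_db
        # Gain computer: above threshold, output only rises 1/ratio as fast
        gain_db = -over * (1.0 - 1.0 / ratio) if over > 0.0 else 0.0
        out[n] = s * 10.0 ** (gain_db / 20.0)
    return out
```

A limiter is the same idea taken to an extreme: a very high ratio and a very fast attack.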

(Don’t confuse dynamic compression with the kind of compression that makes an MP3, WMA or OGG file smaller than a WAV file.  Lossless audio compression uses algorithms to represent the same data in less space, and is guaranteed to sound exactly the same as no compression.  Lossy compression removes data that contributes little or nothing to what we can actually perceive, and is generally a compromise between size and perfection.  Blind tests on thousands of listeners have shown that on average there’s a barely discernible difference between a 192kbps VBR MP3 and the CD it was ripped from, and hardly anyone can reliably distinguish 320kbps from the real thing.)

If you lower the relative volume of the spiky bits, you have more room to turn it up overall.  There was something of an arms race, or “Loudness War,” which reached its peak (so to speak) in the mid-2000s, with Metallica’s Death Magnetic frequently cited as one of the most egregious examples.  Things have calmed down a bit since then.

There’s also equalization (EQ): raising or (more usually) lowering the volume of particular frequency ranges to get a nice, balanced, full sound.  This can be combined with dynamics processing in tools such as dynamic equalizers and multiband compressors.
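A single band of parametric EQ, for instance, is just a biquad filter. The coefficients below follow Robert Bristow-Johnson’s widely used Audio EQ Cookbook peaking filter; the wrapper around scipy is my own sketch:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db, q=1.0):
    """One parametric EQ band: boost or cut `gain_db` around f0 Hz,
    with bandwidth set by q (RBJ cookbook peaking filter)."""
    a = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = [1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a]
    den = [1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a]
    return lfilter(b, den, x)   # lfilter normalizes by den[0]

# e.g. tame an over-intense band around 300 Hz by 3 dB:
# y = peaking_eq(x, 44100, f0=300.0, gain_db=-3.0, q=1.4)
```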

Of course both EQ and dynamics can be used for “creative” effects as well; it’s common to compress drums more than is strictly natural-sounding, or to “squash” a singer’s voice into a narrow, telephone-like or old-timey-radio range, or to really bring out the breathiness in a voice or squeaks on a guitar fingerboard, and so on.  Usually that’s done as part of the mix rather than mastering, though.

There are a lot of tools out there to help with mastering.  Some plugins or services promise to do it all automatically with a single button or knob, and usually that’s better than nothing.  I have a whole process and a set of tools I use.


I try to get levels reasonably okay in the original recordings, with the compressor/limiter ToneBoosters Barricade.  I don’t push it very hard at this point because I won’t be able to undo it later.  The idea here is mostly to keep any unexpected spikes from clipping, and to have a good monitoring tool to make sure I’m not recording too quietly with my headphones turned way up, or vice versa.

My first pass at editing in Sound Forge Pro does only a little dynamics work to get levels generally okay — it’s mostly about overall sound, good first and last notes, and so on.  I save the more strenuous mastering work for a separate step.

Sound Forge has a few built-in dynamics tools.  There’s “Normalize,” which can raise everything to within a certain threshold, either by peaks (safest) or RMS (useful for general “perceived” loudness, but it risks pushing the peaks too far), and which is good at reporting maximum peak and average RMS levels for comparing the different songs on an album.  There’s a fantastic graphic dynamics tool that lets you draw the response curve on a graph and compare it to the levels shown in a recording.  There’s a “clip detection and repair” tool that acts as a kind of gentle compressor, lowering peaks to safer levels.  And sometimes I highlight a section and crossfade into and out of a general “volume” tool to raise or lower the level in a specific area.

I use other plugins with Sound Forge as well.  u-he Presswerk is a full-featured compressor that goes a bit beyond my pay grade, but I have some standard favorites among its presets.  I’ll almost always try “A Touch of Glue” and/or “AF Master Transparent” to see if either of them brings out subtle details and reins in peaks a bit, but sometimes neither of them really helps.  Undo is just a click away.  The aforementioned Barricade is also good to try for a big boost; it can produce what look like clipped-off peaks, but in practice they are carefully shaped to sound clean while maximizing overall volume.

I don’t do a whole lot of fiddling with EQ in mastering.  Sometimes I’ll decide that if I cut out some sub-bass I’ll have more room for everything else, or that a particular note or frequency band is a little too intense.  Sound Forge has a good graphic EQ (for more general changes) as well as a parametric EQ (for surgical edits to specific bands).  Sometimes I want to reduce the strongest frequencies a little bit all across the file, whatever they may be, to enhance the timbre and make it “howl” a bit less — for this I use Melda MSpectralDelay‘s level transformation tool, being careful to disable the delay, spectral panning, and frequency shifting first.

EQ changes the dynamics, and often it’s best to cycle between different tools, make small and gradual changes, and keep getting feedback from one’s ears and the various measuring tools in the software.

Write drunk; edit sober.

— not Hemingway, who wrote in the mornings, avoided alcohol until the afternoon, and was careful to avoid hangovers.  The line was actually Peter De Vries’s, and it wasn’t meant literally but as encouragement toward both “spontaneity and restraint, emotion and discipline.”

Between the ultra-close attention this process demands, and the changes to dynamics bringing out more detail, it can expose flaws that were previously unnoticed.  I suspect that sometimes the Firewire connection between my audio interface and computer gets a little overwhelmed, and there are any number of other things that can find their way into a recording.  Usually it’s just subtle quirks of the modules and effects I’m using, or sometimes I pushed something a little hard for effect and got more than I bargained for. I accept a certain amount of this as a part of the process and the charm of working this way, and I’m sure Tony Rolando would agree.  Sometimes I even bring these “flaws” out intentionally, such as enhancing background noise through manipulating dynamics and EQ — or creating the noises intentionally via modular or plugins.

But other times I want to repair things.  Smoothing them out is rarely as easy as using Sound Forge’s “Clicks and Crackles” automatic tool, which has a penchant for making things worse.  Sometimes I just need to zoom way in and literally draw a smooth curve over where there was a sudden jump, an edit affecting the tiniest fraction of a second.  Or it might require some careful copying and pasting from another part of the file, being especially careful to keep the transition smooth, or just cutting out a tiny bit and stitching the edges together.  Reverb can smooth things over so long as it doesn’t cause a sudden shift in timbre, or it’s done in an intentional-sounding way and fits in with the busy things that are already present.  Sometimes mixing in something else will help mask it.  There really are no hard-and-fast rules, and this bit can be time-consuming, but persistence usually pays off.
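For the copy-and-paste flavor of repair, the core move is a crossfaded patch. A crude numpy sketch of the idea (mono audio, hypothetical positions; real repairs still need ears):

```python
import numpy as np

def patch_over(x, start, length, donor_start, fade=64):
    """Replace x[start:start+length] with audio copied from another
    spot in the file, crossfading `fade` samples at each edge so the
    transitions stay smooth."""
    y = x.copy()
    donor = x[donor_start:donor_start + length]
    y[start:start + length] = donor
    ramp = np.linspace(0.0, 1.0, fade)
    # Front edge: fade from the original into the donor audio
    y[start:start + fade] = (1 - ramp) * x[start:start + fade] + ramp * donor[:fade]
    # Back edge: fade from the donor back into the original
    y[start + length - fade:start + length] = (
        (1 - ramp) * donor[-fade:]
        + ramp * x[start + length - fade:start + length]
    )
    return y
```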

One of the goals of mastering is consistency in volume levels across an album, and generally in line with other music of a relatively similar nature.  My goal is to get them where Sound Forge’s Normalize tool reads about -0.3dB peak and -10.5dB RMS.  I wouldn’t read too much into those specific numbers though, because other tools are likely to report differently.  (0 dB is the maximum possible level in a digital recording; “bigger” negative numbers are quieter.)  That allows a little bit of room for the playback device to hopefully not clip, and seems to match the volume levels of other modern albums.  I’m not too worried about a little deviation here, as long as it doesn’t sound so different from one song to the next that you want to reach for a volume control too frequently.
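Those two numbers are easy to sanity-check outside of Sound Forge. A minimal numpy/soundfile sketch (one reason other tools report differently: RMS conventions vary, and some meters add 3 dB for a sine-wave reference):

```python
import numpy as np
import soundfile as sf

data, rate = sf.read("track.wav")   # float samples in [-1, 1]

peak_db = 20 * np.log10(np.max(np.abs(data)))
rms_db = 20 * np.log10(np.sqrt(np.mean(data ** 2)))

print(f"peak: {peak_db:+.2f} dBFS (my target: about -0.3)")
print(f"RMS:  {rms_db:+.2f} dBFS (my target: about -10.5)")
```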

This sort of work can be tiring to the ears and mind, so I break it up a bit, get the headphones off and reset myself.  Mastering Materials has seemed particularly grueling so far, but of course I hope the result is worthwhile.

when in doubt…

When I create songs, I do it in a single session whenever possible, or two if necessary.  From the point where experimenting/jamming crosses my “I have to make this a song” threshold to when I put the headphones down and walk away, it is rarely more than 4-5 hours.

Usually it’s fine.

Usually the first thing I say to myself when I finish is “I’m not sure about this one.”  It’s too boring, it’s too harsh, it’s too weird, it’ll never work with the rest of the album…

The next day when I listen to it?  It’s fine!

It’s not that I have some kind of house elf who comes in and fixes my recordings while I’m off sleeping or playing video games, it’s just my perception.  (A) I’ve been listening to variations on the same thing for a few hours and am getting pretty jaded, vs. (B) I have fresh ears and am listening to a 4-9 minute song in the context of other things I’ve recorded recently.

Not everything I record gets released.  Turning songs into an album usually requires filtering a few things out to make it a stronger whole.  Some of them are weaker, some of them just don’t really fit.  I currently have 16 songs in a folder called “other unreleased” and another 6 simply called “no.”


This method of working quickly has been really helpful to me.  Through 2016-2017 I recorded over 380 songs this way.  After several months, I found myself consciously critiquing my overall output and realizing that some things worked and others didn’t.  Decades of scattershot music-making — trying to do almost everything that interests me, which is a lot — were brought into focus.  After a few more months of refining both my gear and technique, I started recording albums again with this new focus.

David Bayles and Ted Orland, Art & Fear.  One of my favorite things I’ve read this year.

Going back to make little tweaks and adjustments and additions to a song dozens of times didn’t really serve me so well.  Many changes didn’t necessarily improve anything; they just made it different, satisfying my ear fatigue.  More importantly, those changes only affected a single song — what I needed was to improve my whole practice.  I stopped arranging the pine needles in my forest just so when I realized I preferred oaks anyway.

It would be nice to tell the story that I went to this single-session thing as part of a grand plan.  Really, the single-session thing was driven by my transition from 100% software music-making to a mixed approach — and some faulty, unreliable little desktop synths I was trying to work with at the time — and the fact that using these synths meant taking up space on my desk that I needed for other things.  It was easier to just get it done and put it away, than to keep it set up and running for multiple days while hoping nothing would crash or get accidentally unplugged!

I stuck with these habits — to me they fit perfectly with the transitory nature of patching a modular synthesizer.  About which I’ll just throw out another quote:

“I think the modular sound has less to do with timbre and more to do with the fact that when people are patching a modular, they seem to be less interested in micro-management as a music-making process.  The extreme magnification of musical event time and pitch provided by modern DAWs seems to curate what people believe to be perfect music through the aid of a machine.


Music made with the modular system is, in my opinion, a pure and interesting collaboration between human and machine.  It displays well the beauty and the blemish in both (human and machine)… it might be less perfect by judgement of mainstream music taste, but perhaps more exciting to those of us seeking a deeper connection to the music.”

Tony Rolando, founder of Make Noise

My process is this:  I set up the entire song (sound design, composition, sequencing and/or performance plan, mixing, effects, all of it), and then I record it “live” to a stereo channel.  If I feel it’s a good take, that’s it — it’s committed.  I take notes to satisfy my later curiosity, shut it down and unpatch the modular.  There’s no multitracking, no going back and making small changes or revisiting mixing decisions.  The only editing possible from that point on is on the “finished” mix.

Sometimes a lot does happen in that editing, but generally it falls into “mastering” enhancement and cleanup, or bold-stroke creative changes — not revisiting past decisions.  Always moving forward, no going back.

Regret that I can’t make those changes is rare and minor at most.  For all of the fear that some people have of working with a synthesizer that can’t save presets, this is never a thing that has bothered me about modular synths.  Instead of saving and loading sounds with perfect recall, I remember general techniques that will lead to new creations in the future.  Always moving forward!

blob blob concern

Maybe you’ve never wondered how I come up with song titles, but there is a thread on Ambient Online about that question, and reading it today coincided with an update from one of my favorite sources of name inspiration.

I’m sure I’ll write plenty later about my process(es) for creating music, but this is what happens after.  Or during.  Or before!

Sometimes, songs name themselves.  I’ll be finishing one up and stop to listen through, about to record, and some impression will strike me and lead to a name.  Or not, and I might just pick a temporary name so I can save the project file, and get around to a real name later.

Original → Final
Textura I → Kermadec Trench
Textura II → Bathyal
Assorted Citrus → Hadal Pressure
Tarn → Whale Fall
Five Bool → Loki’s Castle

Sometimes I have a theme in mind for the album, and that helps me choose a name.  Although in the case of Nereus, I already had most of the album done when the theme struck me, and I wound up renaming several songs (some of them, several times.)  I had to keep a chart for a while so I wouldn’t lose track. 

When all else fails, I consult my list.  Whenever I invent or find a turn of phrase that I think has a remote chance of working — or when my spouse suggests it — I put it on my “Song Names” note in Simplenote.  Most of the things on this list will never be used, and sometimes I cull the least interesting and least likely.  But sometimes going over the list and finding these goofy phrases will trigger a better idea.

Also contributing to this list:  the neural network antics featured on AI Weirdness.  During my period of prolific exploration in 2016-2017, I leaned on it quite heavily, yielding such fantastic titles as “Zuby Glong,” “Crab Water,” and “Corcaunitiol.”  I haven’t used it so much on my album releases, but again, sometimes those random strings trigger ideas.

Here are some great bits from today’s blog post over there:

lower blob blob blob blob blob blob blob blob blob blob blob blob dragon right , screamed . , as sneak pet ruined a whatever their sole elven found chief of their kind , at which involving died other bastard dwarven blob blob blob blob blob blob blob blob blob blob blob concern

he was a wizard, and explained that he was in a small town of stars. 

a rat in the darkness

in the blood of curious

How could one fail to be inspired by such poetry?

(I really like “Zuby Glong” though.)

Another great source of names, phrases, and inspiration is Botnik Predictive Writer.  Being both themed and somewhat human-directed, it tends to make actual coherent phrases.

With Salt On Your Arms
Of This Debris is a World Built
A Small Change of Wavelength

(I may actually use one or all of these.)

Really though, coming up with names is not the hard part.  I feel like, if you’re creative enough to do all of the other stuff then you should have no trouble with…

…nevermind.