yes, master

I spent much of my weekend mastering four tracks from Vultur Cadens. Just as important as the specific things I learned: I’m more invested in the result, and have the goal of making my own mastering as consistent as possible with that of an experienced professional. It’s a challenge, but I’m very happy with the results I’m getting so far.

Nathan Moody was a guest recently at the Velocity synth gathering in Seattle, and gave a talk on “Mixing Modular Music” — which includes and complements the advice he gave me about my mixes.

Trying not to make this too long and technical, here are the things I’ve learned that I’m applying now:

  • It’s important when using a spectrum analyzer to set the block size high enough to detect infrasonic content that needs filtering out. Voxengo Span at a block size of 8192 and a minimum frequency of 5 Hz will do the job (see the sketch after this list for the bin-width arithmetic, plus a correlation/mid-side demo for the next bullet).
  • Stereo phase correlation matters even if you’re not cutting vinyl. Voxengo Span and Correlometer are easy to read: 0 to +1 is good, anything more than brief dips below 0 is bad. Toneboosters EQ4 is an excellent mid-side EQ that can correct it — after a few hours of struggle I’ve found a set of techniques that not only fixes these issues, but often results in a better sounding stereo field. I don’t necessarily care about “natural” here and I’m not aligning multiple microphones, so what I’m learning for myself doesn’t necessarily apply to every mix.
  • Fixing big resonances. One thing that makes some synth sounds harsh and strident (with both the Lyra and with feedback-based synthesis), and leads to listener fatigue, is individual frequencies that are really loud compared to the rest of the spectrum. Often bringing them down just a little bit makes them sound great — and then subsequent compression and limiting might mean having to shave them again. Again, EQ4 is great for targeting these. For my purposes, finding these manually and fixing them once they’re already recorded is fine.
  • Compression: this can be complex, subtle and subjective. I’m just loosely imitating what Nathan Moody said he did, mostly with Klanghelm MJUC or NI Solid Bus Comp so far, tweaking until I feel like it’s doing something positive. If I were mixing a rock band or making techno or hip-hop there might be more of a system to it. I still feel I have a lot more to learn here.
  • With limiting, I still rely on Toneboosters Barricade and Bitwig’s peak limiter for ease of use, transparency and simplicity. But I have a new favorite preset as a starting point in Presswerk.
  • In terms of other flavor/vibe/etc., I really like u-he Uhbik-Q for flavor EQ. For saturation, though, I’m still very much in “try things at random and see what works, if anything” mode. Another area where I want to learn more — I hope to someday find a favorite secret sauce, and/or learn to recognize what to use without needing as much experimentation every time.
  • Overall I find myself bouncing between Bitwig Studio and Sound Forge Pro 13 for mastering. The former is better for chains of plugins — the effects and the analyzers to monitor them, or EQ that wants tweaking because I tweaked a compressor. The latter has a few handy tools (dynamics statistics, some noise/crackle removal options, and easy fade in/out and crossfading of effects).
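
Since the first two bullets above really boil down to arithmetic, here is a minimal Python sketch of it (my own illustration, not how Span, Correlometer, or EQ4 work internally): bin width from block size, a phase-correlation measurement, and the mid-side encoding that an M/S EQ operates in.

```python
import numpy as np

SAMPLE_RATE = 44100

# (1) Why a big FFT block size matters for infrasonic content:
# frequency resolution per bin is sample_rate / block_size.
for block_size in (1024, 8192):
    print(f"block {block_size}: {SAMPLE_RATE / block_size:.2f} Hz per bin")
# 1024 -> ~43 Hz per bin: everything below ~43 Hz lands in one bin.
# 8192 -> ~5.4 Hz per bin: fine enough to actually see a 10 Hz rumble.

# (2) Stereo phase correlation: +1 = fully correlated/mono-compatible,
# 0 = uncorrelated, negative = out-of-phase content that collapses in mono.
def correlation(left: np.ndarray, right: np.ndarray) -> float:
    denom = np.sqrt(np.mean(left**2) * np.mean(right**2))
    return float(np.mean(left * right) / denom) if denom > 0 else 0.0

# (3) Mid-side encoding, the representation a mid-side EQ works in.
def ms_encode(left, right):
    return (left + right) / 2, (left - right) / 2  # mid, side

def ms_decode(mid, side):
    return mid + side, mid - side  # left, right

# Quick check: attenuating the side channel pulls correlation toward +1.
rng = np.random.default_rng(0)
left = rng.standard_normal(SAMPLE_RATE)
right = -0.5 * left + 0.5 * rng.standard_normal(SAMPLE_RATE)  # partly out of phase
mid, side = ms_encode(left, right)
for side_gain in (1.0, 0.5, 0.25):
    l2, r2 = ms_decode(mid, side * side_gain)
    print(f"side x{side_gain}: correlation = {correlation(l2, r2):+.2f}")
```

That last loop is the crude version of one of the fixes: dipping the side channel (broadband here, per-band in EQ4) trades width for correlation.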

“The Grid. A digital frontier.”

No, this isn’t a follow-up about TRON Legacy or its soundtrack (which was decent, but not as fascinatingly weird as 1982’s TRON… the visuals were way better, though). Rather, this is about Bitwig Studio 3.

I know they look similar, but don’t be confused!

I’ve been using Maschine as my DAW ever since 2.0 was shiny and new. It began as a drum sampler/groovebox similar to Akai’s MPC, but it gained full MIDI sequencing and VST plugin support, and gradually expanded into something more full-fledged. It was an easy switch from FL Studio back in the day — cleaner and more structured, where FLS had 13 years of features bolted on and very little consistency to its UI. But its “beat making” nature is a really strange fit for the kind of music that I make now.

MCP, MPC, whatever. You know, that villain looked better in the video game than the movie, and I’m glad they didn’t bring it back for TRON Legacy. End of line.

I’ve stuck with Maschine only because I was used to it, and learning a new DAW with a completely different paradigm takes some effort. I’ve given Ableton Live and Reaper demo versions very brief attempts, as well as a newer version of FL Studio, but each time I just felt more frustrated and returned to the path of least resistance.

But I’m between albums and have a new computer on the way (*), so this is a good time. Plugins are starting to be released in VST3 format only — which Maschine doesn’t support yet — and I’ve heard a lot of praise for Bitwig’s modernity and modular orientation.

So I’ve been looking into it. I have to say the experience of learning it could have been made a lot smoother with a single official, officially endorsed, or built-in tutorial — rather than the bewildering array of unorganized, free and paid tutorial series that both the official site and a web search throw at you. It’s a bit intimidating. But I’ve found a couple of recommendations, and after a few hours I feel like I can record with it, start to finish, without getting too lost.

For what it’s worth, Thomas Foster’s “Bitwig Studio 3 Tutorial for Beginners” and Brian Bollman’s “Migrating to Bitwig Studio” series have been helpful so far.

There are deeper things to learn, such as the aforementioned Grid (a modular effect/synth builder) and various shortcuts and customizations, as well as optional hardware controller integration.

There’s also the possibility that I could use Bitwig for post-processing and mastering, without the need for Sound Forge or something else. We’ll see.

(*) I was notified yesterday that the computer was in production. I don’t expect it to take more than a couple of days for experienced builders to assemble it, install Windows and drivers, and do a burn-in test, even with a customized parts list.

accidental mimicry

People who don’t know much about electronic music, or people who are very good and very patient at sound design, might assume that most of the sounds I come up with are intentional.

It’s more like: I experiment with sounds, hear something that becomes the inspiration for the song, and everything else follows. The more parts I add after that, the more I find a need for specific sounds: a sub drone, a whooshy noise, a melodic counterpoint made from simple beeps, or even occasionally a sample. But there’s usually still some seat-of-the-pants element to the assembly.

If I’m in that later stage and need a Rings-like sound, I usually reach for Rings. The path of least resistance, you know?

Yet this keeps happening: I put together a voice, and it occurs to me afterward that it sounds like Rings, even though it isn’t — and in the same song, I have used Rings for something else that doesn’t sound like Rings.

On “Rat Facts,” which I just recorded today, there’s a lovely Rings-esque “guitar” sound which is Hertz Donut mk3 through Natural Gate and Prism. There’s also a big deep sub bass, which is Rings.

On “Soliton,” also from the new album, there’s a drum-like voice, more djembe than “bongo” (*), which sounds a lot like drums I’ve made with Rings. But it’s raw pulses from Teletype through Rainmaker. A different voice, a sort of filtered, crackly noise that blends in with other things, is Kermit through Rings, FM’d by DPO.

From Internal Reflections, “Who is This?” had non-Rings “guitar strumming”, which was E370 through Natural Gate into Rainmaker. But I used Rings for a “broken reverb” effect on an E370/QPAS voice.

(*) a common sound/trope in modular synthesis is “Buchla Bongos.” It involves pinging a lowpass gate with a trigger while some inharmonic FM stuff goes through it. It’s kind of part of the trend of calling every hand drum (and some that aren’t) a “bongo.”

Djembe are not bongos.
Congas are not bongos.
Tablas are not bongos.
Doumbeks are not bongos.
Taiko are definitely not bongos.

Don’t do that. Pet peeve.

(Finding Star Wars gifs for this entry was also not something I originally designed. I just happened to want a “that’s not how this works” and when I saw what came up, I knew exactly what the rest of the entry needed. And that is how the Force works.)

focus on focus

First off: that’s right, spammer, I’m not monetizing my website.

Okay then. I’ve been pondering my modular journey and where it goes next. I feel like Synth Farm 2.0 is really good, but not perfect yet. There’s something of a conflict between focus and flexibility that I feel I need to resolve one way or the other if possible. I started my modular journey with exploration, but I’m making specific music. There are questions I need to ask myself about what’s essential, what’s optional, what’s irrelevant or distracting.

Instruments like the Lyra-8 really appeal to me: focused on human expression and improvisation, made for the general region of sound and feel that my music has. I may yet end up with one, but I want to make sure it’s not just “greener grass” and that it doesn’t create more redundancies. If I would use it to create the kind of music I’m already making, do I really need it? …or if I can make the same kind of music with a Lyra-8 as I can with an entire modular synth, do I need the modular synth?

I’ve been listening to the new-ish Esoteric Modulation podcast, among other things. It deals with the more boutique and exotic electronic instruments even beyond Eurorack, the intersection of music and other arts, and the thought that goes into instrument design. So it’s some excellent food for this kind of thought.

I’ve also been thinking about Knobcon, which is coming up in about six weeks. Two years ago I was at a different place in my journey: I had a good feel for the synthesis techniques I wanted to work with, but I had a smaller system, hadn’t gotten very deep into sequencing and control questions, and hadn’t really found “the Starthief sound” yet. I went in hoping to try a few specific things and to get some overall perspective. What I wound up with was impressions of specific modules and instruments, some enjoyable performances, and a feeling of being overwhelmed (I also didn’t have a handle on anxiety at the time).

My goals for this Knobcon are to relax and take it slow, retreat or stop and collect my thoughts when I need to — and to try things in the context of the music that I make, and think about focus.

slightly delayed

Next weekend is Superbooth — modular synthesis’ biggest trade show / gathering / bunch of performances, in Berlin. There will be announcements of new stuff. I’ll be out in the woods, camping and not following the hype (or if there’s a good connection, checking websites a couple times a day maybe).

But right now when I look at my system, I think “geez, there’s a lot of stuff here to explore” — partially because of the wide and deep ocean that is the Rainmaker. I don’t want to add to that for a bit; I just want to grab a spoon and start digging. So the gear is going to sit as-is for a while, at version 2.05 or whatever it is, and I won’t commit the last 20HP yet. I have a few thoughts on it, but I’ll reserve most of those for my personal “what if” notes.

One thought I’ve been having is that I kind of miss having nice hands-on complex oscillators. The ER-301 is capable, and is a fantastic blank slate, but neither the unit that I wrote for it nor the Volca Modular is quite filling that ecological niche. I have some thoughts about a different way to solve this in the ER-301 — separating the oscillators onto different channels and routing via a combination of patch cables and “prewired” internal connections. If that doesn’t get me there, I see a Synth Farm 2.2 plan (not one that replaces the ER-301, but other things).

Between the recent release of the long-awaited Valhalla Delay and especially my first explorations of the Rainmaker, I’ve gained some new insights into the relationship between delays (especially multitap) and comb filtering, and what can be done with them. And I’ve taken that insight back into exploring the older Valhalla UberMod, which is a multitap delay with a quite different paradigm. The result is something a little like the big knowledge download I recently got with wavefolding/FM/PM, but more on the intuitive side and much less geometric.
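
To make the delay/comb connection concrete, here is a quick numerical sketch (my own, in Python; the 5 ms tap and 0.9 gain are arbitrary illustration values): a single feedforward tap mixed with the dry signal has the classic comb response, and a multitap delay stacks several of these combs on top of each other.

```python
# Magnitude response of y[n] = x[n] + g * x(t - d): |1 + g * e^(-j*2*pi*f*t)|.
# Peaks fall at multiples of 1/t, notches halfway between them.
import numpy as np

gain = 0.9             # tap level relative to dry
delay_seconds = 0.005  # 5 ms tap -> comb spacing of 1/t = 200 Hz

freqs = np.arange(0, 1001, 100)  # check every 100 Hz up to 1 kHz
response = np.abs(1 + gain * np.exp(-2j * np.pi * freqs * delay_seconds))

for f, mag_db in zip(freqs, 20 * np.log10(response)):
    print(f"{f:6.0f} Hz: {mag_db:+6.2f} dB")
# Output alternates: ~+5.6 dB at 0, 200, 400... Hz and -20 dB at 100, 300... Hz.
```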

inharmonic collision

Monday evening, I set up my composition idea as planned: three triangle oscillators feeding a sine shaper, with two of them under polymetric control from Teletype (3-in-8 vs 4-in-9 Euclidean rhythms) and one under manual control.
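
For anyone unfamiliar with Euclidean rhythms, here’s a minimal sketch of the two patterns involved (in Python, nothing to do with the actual Teletype code; the floor-difference construction yields a rotation of the canonical Bjorklund pattern, which is equivalent for this purpose):

```python
from math import floor

def euclidean(pulses: int, steps: int) -> str:
    """Spread `pulses` hits as evenly as possible across `steps` slots."""
    return "".join(
        "x" if floor((i + 1) * pulses / steps) - floor(i * pulses / steps) else "."
        for i in range(steps)
    )

print(euclidean(3, 8))  # ..x..x.x   (3-in-8, a rotation of the tresillo)
print(euclidean(4, 9))  # ..x.x.x.x  (4-in-9)
# Run against each other, the 8-step and 9-step patterns only realign
# every lcm(8, 9) = 72 steps.
```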

It was frankly pretty boring when I was just using octaves. So I decided to go off the rails a bit and sequence pitches with the Sputnik 5-Step Voltage Source. I clocked it with the master clock, regardless of the rhythmic pattern; the first voice used a channel directly and the second sampled a channel every 8 clock steps. So what we’d get is a complex pattern that starts something like this:

Where time runs left to right, and each color in each lane represents a knob on the 5-Step (not necessarily indicating what the pitch value is set to; “red” in the top lane isn’t necessarily the same pitch as “red” in the bottom, though sometimes it is; and different colors within the same lane are in some cases tuned to the same pitch). The pattern in the top lane runs for 40 beats before repeating, and the bottom lane runs for 72 beats. Because these two interact through the sine shaper, they can’t be thought of as individual parts, so it takes 360 beats (the least common multiple of 40 and 72) for the combined pattern to repeat. At the tempo I used, the whole recording covers roughly four-fifths of a full cycle. (Or… it would, except I put the two patterns under manual control, suppressing the triggers while letting the sequencer keep clocking. Monkey wrench!)
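
The combined period is just the least common multiple of the two lane lengths; a couple of lines of Python confirm it:

```python
import math

print(math.lcm(40, 72))  # 360

# Brute force, for the skeptical: the first beat at which both lanes are
# simultaneously back at their starting points.
beat = 1
while beat % 40 or beat % 72:
    beat += 1
print(beat)  # 360
```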

Complex patterns from relatively simple rules. But that was kind of a tangent — the point is, the frequencies I dialed in, relative to each other, often collided in non-integer ratios. Even if they sound good as individual notes played together, when you use them in phase modulation things get a bit dissonant and skronky, with new sidebands at weird frequencies.
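
Here’s a rough sketch of why that happens (my own illustration; the 220 Hz and 167 Hz values are arbitrary): modulating a carrier’s phase with a sinusoid puts energy at the carrier frequency plus and minus integer multiples of the modulator frequency, so a non-integer ratio scatters the sidebands inharmonically.

```python
carrier = 220.0    # Hz
modulator = 167.0  # Hz, a non-integer ratio to the carrier

# Phase modulation by a sinusoid creates sidebands at f_c + k * f_m;
# negative frequencies fold back around 0 Hz.
for k in range(-3, 4):
    f = abs(carrier + k * modulator)
    print(f"k={k:+d}: {f:6.1f} Hz")
# 281, 114, 53, 220, 387, 554, 721 Hz -- none of which sit on the
# 220/440/660 Hz harmonic series, hence the skronk.
```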

An unlikely scenario.

This is the 21st century — beauty is complex, artistic merit isn’t directly tied to beauty, we’re not limiting ourselves to Platonic perfection, and the idea that certain intervals and chords could accidentally invoke Satan isn’t something we lose sleep over anymore. I think the result I got is pretty neat! But it’s not really what I had originally imagined. So I’m going to keep the basics of this idea, follow a different branching path with it and see where that goes.

The third voice, I controlled with the 16n Faderbank — one slider for level, one for pitch. The latter went through the ER-301’s scale quantizer unit, so it always landed on something that fit reasonably well with the other two voices. It turns out this unit supports Scala tuning files, and TIL just how crazy those can get.

Scala is a piece of software and a file format which lets you define scales quite freely — whether you just want to limit something to standard 12TET tuning, or a subset of that (such as pentatonic minor), or just intonation, non-Western scales, xenharmonic tunings, or exactly matching that slightly-off toy piano. The main website for Scala has an archive of 4800 different tuning files and that’s just too much. This is super-specialist stuff with descriptions such as:

  • Archytas[12] (64/63) hobbit, sync beating
  • Supermagic[15] hobbit in 5-limit minimax tuning
  • Big Gulp
  • Degenerate eikosany 3)6 from 1.3.5.9.15.45 tonic 1.3.15
  • Hurdy-Gurdy variation on fractal Gazelle (Rebab tuning)
  • Left Pistol
  • McLaren Rat H1
  • Weak Fokker block tweaked from Dwarf(<14 23 36 40|)
  • Semimarvelous dwarf: 1/4 kleismic dwarf(<16 25 37|)
  • Three circles of four (56/11)^(1/4) fifths with 11/7 as wolf
  • Godzilla-meantone-keemun-flattone wakalix
  • One of the 195 other denizens of the dome of mandala, <14 23 36 40| weakly epimorphic

With all these supermagic hobbits and semimarvelous dwarves and Godzilla, and all the other denizens with their Big Gulps and pistols, where do I even start with this? The answer is, I don’t. I’ll just try making a couple of my own much simpler scales that I can actually understand. Like 5EDO — instead of dividing an octave into 12 tones, divide it into 5.
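
As a sketch of how simple such a file can be, here’s Python that writes a 5EDO scale in the .scl format as I understand it (comment lines start with “!”, then one description line, the note count, and one pitch per line, where a value containing a decimal point is read as cents and anything else as a ratio; 1/1 is implied):

```python
divisions = 5
cents_per_step = 1200 / divisions  # 240 cents per step

lines = [
    "! 5edo.scl",
    "!",
    f"{divisions} equal divisions of the octave",
    str(divisions),
]
for step in range(1, divisions):
    lines.append(f"{step * cents_per_step:.5f}")  # 240.00000, 480.00000, ...
lines.append("2/1")  # the octave closes the scale

with open("5edo.scl", "w") as f:
    f.write("\n".join(lines) + "\n")
```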

highlights

I’ve just finished reading Curtis Roads’ Composing Electronic Music: A New Aesthetic.

Roads has some pet techniques and technologies he is fond of (having developed some of them) as well as some pet compositional concepts, and tries to shoehorn mentions of them in everywhere. Granular synthesis is a big one. “Multiscale” anything is another (along with macroscale, mesoscale and microscale). “Dictionary-based pursuit” is one I’ve never heard of before and can’t actually find much about.

Roads comes from the more academic side of electronic music, as opposed to the more “street” end of things, or the artist-hobbyist sphere where I would say I am. But he recognizes that music is very much a human, emotional, irrational, even magical endeavor and that prescriptive theory, formalism, etc. have their limits.

The book was primarily about composition — and by the author’s own admission, his own views of composition. He gives improvisation credit for validity but says it’s outside his sphere. Still, I found some of the thinking relevant to my partially composed, partially improvised style.

At times he pushes a little too hard at the idea that electronic music is radically different from everything that came before. For instance, the idea that a note is an atomic, indivisible and homogeneous unit, defined only by a pitch and a volume, easily captured in all its completeness in musical notation — it completely flies in the face of pretty much any vocal performance ever, as well as many other instruments. Certainly there have been a handful of composers who believed that the written composition is the real music and it need not actually be heard or performed. But while clearly not agreeing with them, he still claims that it was electronic music that freed composers from the tyranny of the note, and introduced timbre as a compositional element (somebody please show him a pipe organ, or perhaps any orchestral score).

He has something of a point, but he takes it too far. Meanwhile, a lot of electronic musicians don’t take advantage of that freedom. Especially in popular genres there’s still a fixation on notes and scales and chords as distinct events (that’s why we have MIDI, and why it mostly works), and a tendency to treat the timbre of a part as a mostly static thing, like choosing which instrument in the orchestra gets which lines.

And I’m also being picky — it was a thoughtful and thought-provoking book overall. I awkwardly highlighted a few passages on my Kindle, though in some cases I’m not sure why:

  • “There is no such thing as an avant-garde artist. This is an idea fabricated by a lazy public and by the critics that hold them on a leash. The artist is always part of his epoch, because his mission is to create this epoch. It is the public that trails behind, forming an arrière-garde.” (this is an Edgard Varèse quote.)
  • “Music” (well, I can’t argue with that.)
  • “We experience music in real time as a flux of energetic forces. The instantaneous experience of music leaves behind a wake of memories in the face of a fog of anticipation.”
  • “stationary processes”
  • “spatial patterns by subtraction. Algorithmic cavitation”
  • “cavitation”
  • “apophenia”
  • “Computer programs are nothing more than human decisions in coded form. Why should a decision that is coded in a program be more important than a decision that is not coded?”
  • “Most compositional decisions are loosely constrained. That is, there is no unique solution to a given problem; several outcomes are possible. For example, I have often composed several possible alternative solutions to a compositional problem and then had to choose one for the final piece. In some cases, the functional differences between the alternatives are minimal; any one would work as well as another, with only slightly different implications.

    “In other circumstances, however, making the inspired choice is absolutely critical. Narrative structures like beginnings, endings, and points of transition and morphosis (on multiple timescales) are especially critical junctures. These points of inflection — the articulators of form — are precisely where algorithmic methods tend to be particularly weak.”

On that last point: form, in my music, tends to be mostly in the domains of improvisation and post-production. The melody lines, rhythmic patterns and so on might be algorithmic or generative, or I might have codified them into a sequence intentionally, or in some cases they might be improvised too. On a broad level, the sounds are designed with a mix of intention and serendipity, while individual events are often a coincidence of various interactions — to which I react while improvising. I think it’s a neat system and it’s a lot of fun to work with.

The algorithmic stuff varies. Some of it’s simply “I want this particular rhythm and I can achieve that with three lines of code”, which is hardly algorithmic at all. Sometimes it’s an interaction of multiple patterns, yielding a result I didn’t “write” in order to get a sort of intentionally inhuman groove. Sometimes it includes behavioral rules that someone else wrote (as when I use Marbles) and/or which has random or chaotic elements, or interactions of analog electronics. And usually as I’m assembling these things it’s in an improvisational, iterative way. It’s certainly not a formal process where I declare a bunch of rules and then that’s the composition I will accept.


you had to make final decisions as you went

While this interview goes a little more into recording engineer geekery (*) than I can appreciate, there’s something to it at the end.

Part of why my process works for me so well is not treating music as something to be assembled jigsaw-like from many little recorded bits. I may not play live in front of audiences, but the recording process is still performance of a sort. I hit record, I do things with my hands that shape the course of the music. It’s usually improvisational to some degree, and it’s usually done in one take. Even when it’s not, there is no separation between what I hear and what I record. All the mixing and effects and stuff are done. Recording is commitment. In some ways it’s more primitive than all the psychedelic rock groups The Ambient Century was praising.

…of course sometimes I will edit my recording, and do things to it that extend and enhance it. But it’s still not cut-and-paste.

(*) Recording engineers are the people who know which microphone to use and exactly where to put it, how to set up the acoustic space, how loud to record on tape (or whatever), how to mix mic signals in ways that sound better instead of causing phase cancellations and such, and all of that. It’s a whole area of expertise that is only tangential to what I do. To me, the voltages in the wires, the data in the computer, and the vibrations in my eardrums are all extremely similar, and the few variables that confound that are constant and familiar. But microphones just don’t “hear” the way our ears do, nor is human attention a factor. If you’ve ever tried to record a neat birdsong, only to have the recording make you aware of traffic noises, an air conditioner, barking dogs in the background that you didn’t notice before, the movement of your hands and the rustle of your clothes, that’s just a small part of the challenge. And if you’ve ever tried to record extremely loud drums in a concrete warehouse without it sounding like it’s in a concrete warehouse, while still capturing the subtleties of the stick hitting the head, that’s another five or six technical problems to solve.

release the brain clutch

One of the reasons I like the Lines forum so much is to get people’s random insights. Sometimes those come in the form of quotes by other artists. Sometimes they are in the form of art itself. This time it was both:


Bruce Nauman, The True Artist Helps the World by Revealing Mystic Truths, 1967

I don’t know if I agree or disagree with this statement. I reflexively flinch at “true artist”, I don’t think mystic truths can be revealed except by themselves (and mostly they defend themselves), and “helps the world” can sound more than a little conceited. But at the same time… yeah, kinda.

It turns out the artist thought the same thing.

Anyway, the thread where that was posted, along with some other thoughts about what it means to “figure out how to be an artist”, had my head spinning but also inspired me to overcome the lethargy of the last several days and record something.

Four minutes into the first take, which was going excellently, the phone rang. Oops. Gotta remember to put it in airplane mode next time. The next several takes were flubs, but then I finally nailed it.

I’ve decided that the next album will definitely have no theme. In fact, no-theme kind of is its theme. Action over contemplation.

Some time back, while I was Kemetic Orthodox, I was unsure about my musical direction at the time and did some divination to get unstuck. What came to me were the words “use force.” While a lot of music is about control and dexterity (either physical or mental) and is often a very intellectual exercise, sometimes you just have to pull out all the stops. (This phrase refers to the controls on a pipe organ; pulling them all out makes the thing as loud and dense as it can get. It’s like turning the amp up to 11.)

That’s… not exactly what my plan is here, but the idea is: just turn the synth on and do something.

that’s a wrap… almost

I am so bad at gift wrapping.  I think I inherited that from my dad, who is not above using cardboard tubes, newspapers and duct tape to get the job done.  I failed this evening at wrapping a perfectly rectangular package and had to throw the paper out and start over.

I’m doing a better job with the album mastering… except, it turns out, I’ve been doing it wrong.

MusicTech magazine’s current issue has a feature about mastering.  I read it, and most of the advice is on the order of “use this $4000 worth of software and these $3000 monitors” and uh, no thanks.  But I did learn that editing the beginning and end of a track is called “topping and tailing,” and that electronic music technology magazines in 2018 are pretty much overpriced garbage.

I got more specific, up-to-date advice from the first website that popped up on a Google search.  It turns out that in general, you should meter in LUFS (“Loudness Units relative to Full Scale”) for loudness and dBTP (decibels True Peak) for peaks.  Nobody thinks you should compress heavily to make your music as loud as possible, because many streaming services normalize everything to the same volume level anyway.  And while I was being relatively gentle with my own work compared to the previous album, I was still going beyond recommended levels.

I’d been ignoring metering plugins because there’s nothing more boring than that, and I assumed dBFS peak and RMS as shown in Sound Forge were good enough anyway.  But the free version of Youlean Loudness Meter shows the relevant info and how I’m breaking the rules.  (-23 LUFS is a European broadcast standard; -14 seems to be a common goal for streaming audio, but the important thing there is more “don’t over-compress”.)  And -1 dBTP is a recommended peak maximum so that MP3 converters don’t accidentally cause clipping.
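
For the curious, the same checks can be scripted; here’s a minimal sketch assuming the soundfile, pyloudnorm, and scipy packages (and a hypothetical master.wav). It applies the same standards the meters do (BS.1770 loudness; true peak approximated by oversampling), though it’s no substitute for Youlean’s display:

```python
import numpy as np
import soundfile as sf
import pyloudnorm as pyln
from scipy.signal import resample_poly

data, rate = sf.read("master.wav")  # hypothetical file name

# Integrated loudness per ITU-R BS.1770 / EBU R 128.
meter = pyln.Meter(rate)
lufs = meter.integrated_loudness(data)

# Approximate true peak by oversampling 4x before taking the peak
# (inter-sample peaks can exceed the plain sample peak).
oversampled = resample_poly(data, up=4, down=1, axis=0)
true_peak_db = 20 * np.log10(np.max(np.abs(oversampled)))

print(f"integrated loudness: {lufs:.1f} LUFS (streaming target ~ -14)")
print(f"approx. true peak:   {true_peak_db:.1f} dBTP (keep under -1)")
```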

Of course it would have been smart to do this research before “nearly finishing” all 11 songs.  In a lot of cases I think I can just turn it down and be fine, but I’ll double-check I didn’t compress too much.

Sound Forge Pro 10 has been crashing on a semi-regular basis, and it’s a few years old now.  I’m happy to see that it’s not abandonware and there is a new version — though Sony (having bought it from Sonic Foundry) sold it to Magix.  Unfortunately, the demo crashes immediately on startup.  I can use it okay after that as long as I never close the bug reporting window, but that doesn’t say much for its stability, so I’m not sure I want to pay for an upgrade.  Maybe I will look for another tool in the future, though I do like Sound Forge’s dynamics tool and the ease of crossfading every edit.