that’s a wrap… almost

I am so bad at gift wrapping.  I think I inherited that from my dad, who is not above using cardboard tubes, newspapers and duct tape to get the job done.  I failed this evening at wrapping a perfectly rectangular package and had to throw the paper out and start over.

I’m doing a better job with the album mastering… except, it turns out, I’ve been doing it wrong.

MusicTech magazine’s current issue has a feature about mastering.  I read it, and most of the advice is on the order of “use this $4000 worth of software and these $3000 monitors” and uh, no thanks.  But I did learn that editing the beginning and end of a track is called “topping and tailing,” and that electronic music technology magazines in 2018 are pretty much overpriced garbage.

I got more specific, up-to-date advice from the first website that popped up on a Google search.  It turns out that in general, you should meter in LUFS (“Loudness Units relative to Full Scale”) for loudness and dBTP (decibels True Peak) for peaks.  Nobody thinks you should compress heavily to make your music as loud as possible, because many streaming services normalize everything to the same volume level anyway.  And while I was being relatively gentle with my own work compared to the previous album, I was still going beyond recommended levels.

I’d been ignoring metering plugins because there’s nothing more boring than that, and I assumed dBFS peak and RMS as shown in Sound Forge were good enough anyway.  But the free version of Youlean Loudness Meter shows the relevant info and how I’m breaking the rules.  (-23 LUFS is a European broadcast standard; -14 seems to be a common goal for streaming audio, but the important thing there is more “don’t over-compress.”)  And -1 dBTP is a recommended peak maximum so that MP3 converters don’t accidentally cause clipping.
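
The difference between these meters comes down to how the level is computed.  Here’s a minimal numpy sketch of the two simplest measurements (my own illustration, not how Youlean or Sound Forge actually compute things); note that true peak additionally requires oversampling to catch inter-sample peaks, and LUFS adds K-weighting and gating per ITU-R BS.1770:

```python
import numpy as np

def peak_dbfs(x):
    # Sample-peak level in dBFS.  A true-peak (dBTP) meter would first
    # oversample (e.g. 4x) to catch peaks that fall between samples.
    return 20 * np.log10(np.max(np.abs(x)))

def rms_dbfs(x):
    # Plain RMS level in dBFS.  LUFS is conceptually similar but applies
    # K-weighting filters and gating before averaging.
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# A 440 Hz sine at half of full scale for demonstration.
tone = 0.5 * np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
print(round(peak_dbfs(tone), 1))   # about -6.0 dBFS
print(round(rms_dbfs(tone), 1))    # about -9.0 dBFS
```

For a pure sine the RMS sits about 3 dB below the peak; for real music the gap is much wider, which is why the two kinds of meters tell such different stories.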

Of course it would have been smart to do this research before “nearly finishing” all 11 songs.  In a lot of cases I think I can just turn it down and be fine, but I’ll double-check I didn’t compress too much.

Sound Forge Pro 10 has been crashing on a semi-regular basis, and it’s a few years old now.  I’m happy to see that it’s not abandonware and there is a new version — though Sony (having bought it from Sonic Foundry) sold it to Magix.  Unfortunately, the demo crashes immediately on startup.  I can use it okay after that as long as I never close the bug reporting window, but it doesn’t say a lot about the potential stability, so I’m not sure I want to pay for an upgrade.  Maybe I will look for another tool in the future, though I do like Sound Forge’s dynamics tool and the ease of crossfading every edit.

master blaster

I’ve mentioned I’m in the process of mastering my fifth album of the year.  But what is that, really? Or what is it to me?

What it used to mean was the preparation of a “master” copy of the final mix, to be duplicated — almost like a mold for casting.  For CDs and DVDs, there’s a digital file of course — but for large-scale duplication, a physical glass master is prepared in a cleanroom with a laser burner and a nickel deposition process, and then a “mother” is created as a sort of negative of that, to stamp pits into the actual CDs.

Mastering requires making some adjustments to suit the limitations of the medium.  For instance, if the difference in bass content between the left and right channels on a stereo LP is too great, it will throw the needle right out of the groove.  Digital media have their own limitations, and some master for specific sound systems in clubs.  “Mastering for MP3” or “for iTunes” might be a little snake-oily, but certainly earbuds or headphones are a different sort of target than a big speaker system.  (Generally, I use headphones throughout the whole process, including as my mastering target.)

Historically, recording engineers found this was the best time to make adjustments to the final mix as a whole, so it sounds as consistent and appealing as possible.  That generally means having a nice balance in different frequency bands, but mostly it means loud.

Quiet recordings are more susceptible to noise, from random particles and errors in the medium to cosmic rays and other interference getting amplified along with the music.  Also, louder music generally sounds “better” than quieter music from a psychoacoustic standpoint.  Some stereos have a “loudness” button which fakes a louder sound by boosting bass and treble, compensating for the ear’s reduced sensitivity to those extremes at lower volumes.  But too much loudness causes distortion.

Certain kinds of distortion sound great.  The sound of the electric guitar is dependent on it.  Different kinds of distortion are involved in synthesis.  Saturation involves nice smooth curvy distortion that sounds “full” and “warm” if it’s kept subtle enough; you can get that by recording to tape a little bit louder than it was designed for.

But distortion can definitely be undesirable, too.  There’s a reason why chords on electric guitars tend to be very simple, such as the open fifth “power chord.”  Distortion creates more harmonics in the signal, and if the harmonic relationships are already complex going in, what comes out will be mushy and gross (technical term).  And a too-loud digital recording is subject to “clipping”, where the peaks of waves are sheared off in a flat, sudden way that is very inharmonic and does not sound natural or organic at all.
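
You can see the “sheared-off peaks” directly.  In this hypothetical numpy demo, a sine recorded “too hot” gets hard-clipped at full scale, and the flat tops show up in the spectrum as new harmonics that weren’t in the original:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
sine = 1.5 * np.sin(2 * np.pi * 440 * t)   # "too hot": peaks about 3.5 dB over full scale
clipped = np.clip(sine, -1.0, 1.0)         # hard clipping shears the peaks off flat

# Symmetric clipping adds odd harmonics (1320 Hz, 2200 Hz, ...) while
# even harmonics (880 Hz, ...) stay essentially absent.
spectrum = np.abs(np.fft.rfft(clipped))
```

With a 1 Hz FFT resolution here, bin 1320 (the third harmonic) carries real energy while bin 880 is only numerical noise; softer, saturation-style curves add harmonics too, but far more gradually and gently.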

Dynamics are important — the balance and change of quiet and loud over time.  Dynamics in playing style create drama, and are an important element in groove.  Many instruments, such as drums, are highly dynamic in themselves.  But excessive dynamics in a recording can be annoying (when you constantly have to adjust the volume to hear clearly) and cause technical challenges (if it’s too quiet overall, subtle details are easily missed, or the recording gets too loud at times).  Often, to make a recording louder and more balanced overall, the engineer has to reduce the dynamics through compression and/or limiting — usually in a way that doesn’t noticeably sound like the dynamics have been changed or anything has been lost — as well as by “riding the gain” more gradually.

The actual dynamics in a file can include all kinds of weirdness we don’t perceive — lots of little spikes of volume that our ears and brains just smooth right over.  That’s why these tricks can work. Both compression and limiting basically just turn down the volume as the signal gets louder, and back up as it calms down — but the devil is in the details.  At what level this attenuation takes place, how smoothly or suddenly it applies on a volume scale, how quickly it applies on a time scale, and so on.  It’s part science and part art.
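
Those details can be sketched in a few lines of Python.  This is a generic feed-forward design of my own for illustration, not any particular plugin’s algorithm: the threshold and ratio are the “at what level and how much,” and the attack/release envelope is the “how quickly.”

```python
import numpy as np

def compress(x, fs, threshold_db=-18.0, ratio=4.0, attack_ms=5.0, release_ms=100.0):
    # One-pole smoothing coefficients: fast attack, slower release.
    atk = np.exp(-1000.0 / (attack_ms * fs))
    rel = np.exp(-1000.0 / (release_ms * fs))
    env = 0.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        c = atk if level > env else rel
        env = c * env + (1.0 - c) * level      # smoothed level detector
        level_db = 20 * np.log10(max(env, 1e-9))
        over_db = level_db - threshold_db
        # Gain computer: above the threshold, keep only 1/ratio of the overshoot.
        gain_db = over_db * (1.0 / ratio - 1.0) if over_db > 0 else 0.0
        out[i] = s * 10 ** (gain_db / 20)
    return out
```

A constant signal 12 dB over the threshold at 4:1 settles about 9 dB quieter, while anything below the threshold passes through untouched; real compressors add knee shaping, lookahead, makeup gain and far more refined detectors, but the skeleton is the same.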

(Don’t confuse dynamic compression with the kind of compression that makes an MP3, WMA or OGG file smaller than a WAV file.  Lossless audio compression uses algorithms to represent the same data in less space, and is guaranteed to sound exactly the same as the uncompressed original.  Lossy compression removes data that contributes little or nothing to what we can actually perceive, and is generally a compromise between size and perfection.  Blind tests on thousands of listeners have shown that on average there’s a barely discernible difference between a 192kbps VBR MP3 and the CD it was ripped from, and hardly anybody can distinguish 320kbps from the real thing.)

If you lower the relative volume of the spiky bits, you have more room to turn it up overall.  There was something of an arms race or “Loudness War” which reached its peak (so to speak) in the mid 2000s, with Metallica’s Death Magnetic frequently cited as one of the most egregious examples.  Things have calmed a bit since then.

There’s also equalization (EQ) — raising or (more usually) lowering the volume of particular frequency ranges to get a nice, balanced, full sound.  This can be combined with dynamics processing in tools such as dynamic equalizers and multiband compressors.

Of course both EQ and dynamics can be used for “creative” effects as well; it’s common to compress drums more than is strictly natural-sounding, or to “squash” a singer’s voice into a narrow, telephone-like or old-timey-radio range, or to really bring out the breathiness in a voice or squeaks on a guitar fingerboard, and so on.  Usually that’s done as part of the mix rather than mastering, though.

There are a lot of tools out there to help with mastering.  Some plugins or services promise to do it all automatically with a single button or knob, and usually that’s better than nothing.  I have a whole process and a set of tools I use.


I try to get levels reasonably okay in the original recordings, with the compressor/limiter ToneBoosters Barricade.  I don’t push it very hard at this point because I won’t be able to undo it.  The idea here is mostly to keep any unexpected spikes from clipping, and to have a good monitoring tool to make sure I’m not recording too quietly with my headphones turned way up, or vice versa.

My first pass at editing in Sound Forge Pro does only a little dynamics work to get levels generally okay — it’s mostly about overall sound, good first and last notes, and so on.  I save the more strenuous mastering work for a separate step.

Sound Forge has a few built-in dynamics tools.  There’s “normalize,” which can raise everything to a target level, either by peaks (safest) or RMS (useful for general “perceived” loudness but risks pushing the peaks too far), and is good at reporting maximum peak and average RMS levels to compare the different songs on an album.  There’s a fantastic graphic dynamics tool that lets you draw the response on a graph and compare it to the levels shown in a recording.  There’s a “clip detection and repair” tool that acts as a kind of gentle compressor, lowering peaks to safer levels.  And sometimes I highlight a section and crossfade into and out of a general “volume” tool to raise or lower the volume in a specific area.
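
The peak-vs-RMS choice boils down to a simple calculation.  Here’s a hypothetical numpy version (my own sketch, not Sound Forge’s actual code) that shows why RMS mode can push peaks too far — it can recommend more gain than the peaks have room for:

```python
import numpy as np

def normalize_gain_db(x, target_db, mode="peak"):
    # How many dB of gain would bring the file's peak (or RMS) to target_db.
    if mode == "peak":
        level = np.max(np.abs(x))
    else:
        # RMS tracks perceived loudness better, but applying this much gain
        # can push the peaks of dynamic material past 0 dBFS.
        level = np.sqrt(np.mean(x ** 2))
    return target_db - 20 * np.log10(level)

tone = 0.5 * np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
print(round(normalize_gain_db(tone, -0.3), 1))               # +5.7 dB to hit a -0.3 dB peak
print(round(normalize_gain_db(tone, -10.5, mode="rms"), 1))  # -1.5 dB to hit a -10.5 dB RMS
```

On a sine the two answers are close together; on spiky material like drums, the RMS answer can exceed the peak answer by many dB, which is exactly when clipping risk appears.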

I use other plugins with Sound Forge as well.  u-he Presswerk is a full-featured compressor that goes a bit beyond my pay grade, but I have some standard favorites among its presets.  I’ll almost always try “A Touch of Glue” and/or “AF Master Transparent” to see if either of them brings out subtle details and reins in peaks a bit, but sometimes neither of them really helps.  Undo is just a click away.  The aforementioned Barricade is also good to try for a big boost; it can produce what look like clipped-off peaks but are in practice carefully set to sound clean while maximizing overall volume.

I don’t do a whole lot of fiddling with EQ in mastering.  Sometimes I’ll decide that if I cut out some sub-bass I’ll have more room for everything else, or that a particular note or frequency band is a little too intense.  Sound Forge has a good graphic EQ (for more general changes) as well as a parametric EQ (for surgical edits to specific bands).  Sometimes I want to reduce the strongest frequencies a little bit all across the file, whatever they may be, to enhance the timbre and make it “howl” a bit less — for this I use Melda MSpectralDelay’s level transformation tool, being careful to disable the delay, spectral panning, and frequency shifting first.

EQ changes the dynamics, and often it’s best to cycle between different tools, make small and gradual changes, and keep getting feedback from one’s ears and the various measuring tools in the software.

Write drunk; edit sober.

— not Hemingway, who wrote in the mornings, avoided alcohol until the afternoon, and was careful to avoid hangovers.  It was actually Peter de Vries, and it was not meant literally but to encourage both “spontaneity and restraint, emotion and discipline.”

Between the ultra-close attention this process demands, and the changes to dynamics bringing out more detail, it can expose flaws that were previously unnoticed.  I suspect that sometimes the Firewire connection between my audio interface and computer gets a little overwhelmed, and there are any number of other things that can find their way into a recording.  Usually it’s just subtle quirks of the modules and effects I’m using, or sometimes I pushed something a little hard for effect and got more than I bargained for. I accept a certain amount of this as a part of the process and the charm of working this way, and I’m sure Tony Rolando would agree.  Sometimes I even bring these “flaws” out intentionally, such as enhancing background noise through manipulating dynamics and EQ — or creating the noises intentionally via modular or plugins.

But other times I want to repair things.  Smoothing them out is rarely as easy as using Sound Forge’s “Clicks and Crackles” automatic tool, which has a penchant for making things worse.  Sometimes I just need to zoom way in and literally draw a smooth curve over where there was a sudden jump, an edit affecting the tiniest fraction of a second.  Or it might require some careful copying and pasting from another part of the file, being especially careful to keep the transition smooth, or just cutting out a tiny bit and stitching the edges together.  Reverb can smooth things over so long as it doesn’t cause a sudden shift in timbre, or it’s done in an intentional-sounding way and fits in with the busy things that are already present.  Sometimes mixing in something else will help mask it.  There really are no hard-and-fast rules, and this bit can be time-consuming, but persistence usually pays off.
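
That zoom-in-and-draw-a-smooth-curve repair is essentially interpolation across the damaged samples.  A hypothetical numpy sketch of the simplest (linear) version, with made-up function and parameter names:

```python
import numpy as np

def repair_click(x, start, end):
    # Replace x[start:end] with a straight line drawn between the last good
    # sample before the click and the first good sample after it.  Over a
    # span of just a few samples this is usually inaudible.
    y = x.copy()
    y[start:end] = np.interp(np.arange(start, end),
                             [start - 1, end],
                             [x[start - 1], x[end]])
    return y
```

Drawing by hand in an editor effectively does the same thing with a curvier line; for longer gaps, spline interpolation or copying a similar stretch of audio (as described above) works better than a straight line.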

One of the goals of mastering is consistency in volume levels across an album, and generally in line with other music of a relatively similar nature.  My goal is to get them where Sound Forge’s Normalize tool reads about -0.3 dB peak and -10.5 dB RMS.
I wouldn’t read too much into those specific numbers, though, because other tools are likely to report differently.  (0 dB is the maximum possible level in a digital recording; “bigger” negative numbers are quieter.)  The peak target allows a little headroom so the playback device hopefully won’t clip, and the RMS target seems to match the volume levels of other modern albums.  I’m not too worried about a little deviation here, as long as it doesn’t sound so different from one song to the next that you want to reach for a volume control too frequently.
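
Those dB figures map back to linear amplitude with the standard 20*log10 rule; a quick sanity check in plain Python, just for illustration:

```python
def db_to_linear(db):
    # 0 dBFS is full scale (1.0); every -6 dB roughly halves the amplitude.
    return 10 ** (db / 20)

print(round(db_to_linear(-0.3), 3))    # 0.966 -- the peak target leaves ~3% headroom
print(round(db_to_linear(-10.5), 3))   # 0.299 -- the RMS target
```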

This sort of work can be tiring to the ears and mind, so I break it up a bit, get the headphones off and reset myself.  Mastering Materials has seemed particularly grueling so far, but of course I hope the result is worthwhile.

when in doubt…

When I create songs, I do it in a single session whenever possible, or two if necessary.  From the point where experimenting/jamming crosses my “I have to make this a song” threshold to when I put the headphones down and walk away, it is rarely more than 4-5 hours.

Usually it’s fine.

Usually the first thing I say to myself when I have done so is “I’m not sure about this one.”  It’s too boring, it’s too harsh, it’s too weird, it’ll never work with the rest of the album…

The next day when I listen to it?  It’s fine!

It’s not that I have some kind of house elf who comes in and fixes my recordings while I’m off sleeping or playing video games, it’s just my perception.  (A) I’ve been listening to variations on the same thing for a few hours and am getting pretty jaded, vs. (B) I have fresh ears and am listening to a 4-9 minute song in the context of other things I’ve recorded recently.

Not everything I record gets released.  Turning songs into an album usually requires filtering a few things out to make it a stronger whole.  Some of them are weaker, some of them just don’t really fit.  I currently have 16 songs in a folder called “other unreleased” and another 6 simply called “no.”


This method of working quickly has been really helpful to me.  Through 2016-2017 I recorded over 380 songs this way.  After several months, I found myself consciously critiquing my overall output and realizing that some things worked and others didn’t.  Decades of scattershot music-making — trying to do almost everything that interests me, which is a lot — were brought into focus.  After a few more months of refining both my gear and technique, I started recording albums again with this new focus.

David Bayles and Ted Orland, Art & Fear.  One of my favorite things I’ve read this year.

Going back to make little tweaks and adjustments and additions to a song dozens of times didn’t really serve me so well.  Many changes didn’t necessarily improve anything; they just made it different, satisfying my ear fatigue.  More importantly, those changes only affected a single song — what I needed was to improve my whole practice.  I stopped arranging the pine needles in my forest just so when I realized I preferred oaks anyway.

It would be nice to tell the story that I went to this single-session thing as part of a grand plan.  Really, the single-session thing was driven by my transition from 100% software music-making to a mixed approach — and some faulty, unreliable little desktop synths I was trying to work with at the time — and the fact that using these synths meant taking up space on my desk that I needed for other things.  It was easier to just get it done and put it away, than to keep it set up and running for multiple days while hoping nothing would crash or get accidentally unplugged!

I stuck with these habits — to me they fit perfectly with the transitory nature of patching a modular synthesizer.  About which I’ll just throw out another quote:

“I think the modular sound has less to do with timbre and more to do with the fact that when people are patching a modular, they seem to be less interested in micro-management as a music-making process.  The extreme magnification of musical event time and pitch provided by modern DAWs seems to curate what people believe to be perfect music through the aid of a machine.


Music made with the modular system is, in my opinion, a pure and interesting collaboration between human and machine.  It displays well the beauty and the blemish in both (human and machine)… it might be less perfect by judgement of mainstream music taste, but perhaps more exciting to those of us seeking a deeper connection to the music.”

Tony Rolando, founder of Make Noise

My process is this:  I set up the entire song:  sound design, composition, sequencing and/or performance plan, mixing, effects, all of it.  And then I record it “live” to a stereo channel.  If I feel it’s a good take, that’s it — it’s committed.  I take notes to satisfy my later curiosity, shut it down and unpatch the modular.  There’s no multitracking, no going back and making small changes or revisiting mixing decisions.  The only editing possible from that point is on the “finished” mix.

Sometimes a lot does happen in that editing, but generally it falls into “mastering” enhancement and cleanup, or bold-stroke creative changes — not revisiting past decisions.  Always moving forward, no going back.

Regret that I can’t make those changes is rare and minor at most.  For all of the fear that some people have of working with a synthesizer that can’t save presets, this is never a thing that has bothered me about modular synths.  Instead of saving and loading sounds with perfect recall, I remember general techniques that will lead to new creations in the future.  Always moving forward!

blob blob concern

Maybe you’ve never wondered how I come up with song titles, but there is a thread on Ambient Online about that question, and reading it today coincided with an update from one of my favorite sources of name inspiration.

I’m sure I’ll write plenty later about my process(es) for creating music, but this is what happens after.  Or during.  Or before!

Sometimes, songs name themselves.  I’ll be finishing one up and stop to listen through, about to record, and some impression will strike me and lead to a name.  Or not, and I might just pick a temporary name so I can save the project file, and get around to a real name later.

Original → Final
Textura I → Kermadec Trench
Textura II → Bathyal
Assorted Citrus → Hadal Pressure
Tarn → Whale Fall
Five Bool → Loki’s Castle

Sometimes I have a theme in mind for the album, and that helps me choose a name.  Although in the case of Nereus, I already had most of the album done when the theme struck me, and I wound up renaming several songs (some of them, several times.)  I had to keep a chart for a while so I wouldn’t lose track. 

When all else fails, I consult my list.  Whenever I invent or find a turn of phrase that I think has a remote chance of working — or when my spouse suggests it — I put it on my “Song Names” note in Simplenote.  Most of the things on this list will never be used, and sometimes I cull the least interesting and least likely.  But sometimes going over the list and finding these goofy phrases will trigger a better idea.

Also contributing to this list:  the neural network antics featured on AI Weirdness.  During my period of prolific exploration in 2016-2017, I leaned on it quite heavily, yielding such fantastic titles as “Zuby Glong,” “Crab Water,” and “Corcaunitiol.”  I haven’t used it so much on my album releases, but again, sometimes those random strings trigger some ideas.

Here are some great bits from today’s blog post over there:

lower blob blob blob blob blob blob blob blob blob blob blob blob dragon right , screamed . , as sneak pet ruined a whatever their sole elven found chief of their kind , at which involving died other bastard dwarven blob blob blob blob blob blob blob blob blob blob blob concern

he was a wizard, and explained that he was in a small town of stars. 

a rat in the darkness

in the blood of curious

How could one fail to be inspired by such poetry?

(I really like “Zuby Glong” though.)

Another great source of names, phrases, and inspiration is Botnik Predictive Writer.  Being both themed and somewhat human-directed, it tends to make actual coherent phrases.

With Salt On Your Arms
Of This Debris is a World Built
A Small Change of Wavelength

(I may actually use one or all of these.)

Really though, coming up with names is not the hard part.  I feel like, if you’re creative enough to do all of the other stuff then you should have no trouble with…

…nevermind.