Description and patch notes are here.
With about 5 hours of minimal effort last night, Internal Reflections is mastered. Once again, I didn’t really leave myself anything difficult to work with, just a few spikes to manually tame, a couple of generally-too-loud tracks and a couple that benefited from a pass with a compressor/limiter.
I’m sure if I hired a professional who’s used to this genre, like Nathan Moody, to master my work it’d come out a bit better. But I don’t think I can justify the expense as it is. That’s almost a reason to wish I had a bigger audience right there though 🙂
I’m certainly happier with my own mastering work than with super-cheap or free services I’ve heard that seem to either pass everything through a single algorithmic process, or… completely neglect to address major differences in loudness between tracks on the same album so you wonder whether they did anything at all.
I have put together some high-contrast art this time — not the original idea I was going to work with, but I think it’s better — and I’m trying to decide how to work the text in. I might even forgo text, but I have some graphic design ideas for it that I’d like to work in somehow. I also have the concept blurb finally hashed out, and making the patch notes more readable isn’t that much work… so the release will be quite soon!
The Panharmonium got held back for a month for some new software features their testers asked for, which, as I see it, just gives me more time to get familiar with the DPO before learning something new. I’ve had a few insights with it — figuring out why the FM felt so wild at first, delineating where the “sweet spots” for less noisy sounds are, and coming up with a set of experiments I want to try.
I’ve got a bit over 58 minutes recorded for the new album. I’ve just gone through a full listen, and aside from two minor edits, mastering and artwork are next (and I have a solid idea about the artwork). I will still need to bash on the accompanying text, because the initial concept sort of proved itself, but also proved itself trivial? It’s hard to explain, and that’s why I need to work on that explanation some more.
In some sense I feel like the album’s cohesion arises naturally rather than due to conscious effort on my part. Aspects of the composition, sound, feel, etc. just come together a certain way. The previous album was different, and the next will be different again, but this one hangs together. This is a big part of why I prefer albums.
I’ve been playing a lot of Guild Wars 2 recently. I finished the Personal Story for the first time — despite having had several level 80 characters previously. There was some tedium and frustration and eye-rolling, but I made it.
Then I started on the Path of Fire expansion. This skips 3 years of “Living World” story and a prior expansion (and apparently enough happening to the player’s character to make them much more brash and forceful in personality), so I read up on that and… wow. This game and the lore behind it are huge, and kind of crazy at times. There’s a frightening amount of content, past and present.
I was hoping to unlock the Mirage specialization for my character quickly, but circumstances require a bit more effort. Meanwhile I’m mostly enjoying the ride with the story, though the area design — based on training various mounts for jumping, flying etc. — has stymied me a bit. The setting is much more gorgeous and creative than I expected, with minimal “faux Egypt” elements and much more “desert/oasis region with its own rich history and present story.” Overall, it feels like a different game — still partially an open world explore-fest, but far more like a single-player, story-driven adventure.
Here’s the new album!
And here are the patch notes (and other notes, really).
Mastering’s done. I’m happy with the sound and I’m working on the image and the words. I have some patch notes and a bit of explanation to write up. That was going to happen tonight but I wound up exploring some sound experiments a little instead, and reading The Rhesus Chart.
I don’t often like to choose single favorites among wide categories. But it’s safe to say that The Laundry Files is my favorite series in the horror-comedy-spy-fantasy-software development genre. It’s up for a Hugo award this time (and it’s got good company; the Sick Puppy bloc aka “everything must be made by, for and about white manly men” must be too busy with QAnon or MAGA rallies these days to bother with merely extinguishing diversity and creativity in genre fiction).
A survey of the patch notes from Passing Through told me:
The newest synth trade show, Synthplex, took place in California last weekend. Less news than I expected came out of it given the amount of hype around it, but the Rossum Electro-Music Panharmonium stood out. It’s basically an FFT spectrum analyzer which then controls a cluster of analog oscillators — not quite a vocoder, but an odd and intriguing take on a spectral resynthesizer. I literally had dreams about the thing. I may find myself picking one up before Knobcon after all, once I’ve sold a little more gear to fund it 100%.
Speaking of synth trade shows, Knobcon has now also missed its postponed date for opening up ticket sales. The Facebook page still says March 1, with no updates since January. The website itself still says “Tickets On Sale in March 2019” and the “Buy Tickets” link still goes to the exhibitor registration page (which sometimes appears broken or closed). I hope things are okay with everyone involved.
Writing this while mastering the album. A few tracks have given me minor difficulties, and Sound Forge Pro 10 continues to be about as stable as a game of Jenga running on a Packard Bell laptop running Windows Vista on the back of a neurotic chihuahua on a ship in a storm. Or something like that. But it progresses.
Because listening to the same new songs several times in a row and making minor adjustments isn’t enough, I guess, I’ve started a sequential listen through my Starthief albums. And I noticed something.
A few months before starting Nereus I had felt like I’d “found my sound” and was refining it. If you listen over the course of a few hundred songs in 2017 it does sound a bit like I’m closing in on something, and Nereus is the pinnacle as well as the end of that phase. The album is full of sequenced bass/melody lines with hard attacks and exponential decays and octave leaps; lots of snappy LPG plucks and saturated triangle waves, and backgrounds made busy with exotic modulation techniques.
And then there’s a line. Or perhaps an ellipsis…
And then there’s Shelter In Place. That was when I really got into the improvisational, drones-and-rhythm thing. As I listened to it, my thought was “I bet that was when I traded away the 0-Coast.” I just double-checked, and yes, it was. In a sense, SIP is really the first proper Starthief album, and Nereus is the end of the transitional phase that created Starthief.
My 2019 albums felt more like they stood on opposite sides of a line: this change from “modular 1.1” to “2.0” that I kept on about. One saw me paring down my gear, the other saw the first usage of a lot of new stuff. But the gear change was carefully arranged to preserve and streamline the things I wanted to keep doing, and so these two albums really aren’t very far apart in composition, technique or sound.
My interpretation of the Passing Through theme varied per song — sometimes I carved out more breathing space (which is where I think the album sounds a little more different), and sometimes I layered things on like those interpenetrating energy fields I was talking about. In both cases, I was exploring some new technique as well as the gear. But it all still sounds like Starthief to me, and I should know 😉
Monday evening, I set up my composition idea as planned: three triangle oscillators feeding a sine shaper, with two of them under polymetric control from Teletype (3-in-8 vs 4-in-9 Euclidean rhythms) and one under manual control.
It was frankly pretty boring when I was just using octaves. So I decided to go off the rails a bit and sequence pitches with the Sputnik 5-Step Voltage Source. I clocked it with the master clock, regardless of the rhythmic pattern; the first voice used a channel directly and the second sampled a channel every 8 clock steps. So what we’d get is a complex pattern that starts something like this:
Where time runs left to right, and each color in each lane represents a knob on the 5-Step (not necessarily indicating what the pitch value is set to; “red” in the top lane and “red” in the bottom aren’t necessarily the same pitch, though sometimes they are, and different colors within the same lane are in some cases tuned to the same pitch). The pattern in the top lane runs for 40 beats before repeating, and the bottom lane runs for 72 beats. Because these two interact through the sine shaper, they can’t be thought of as individual parts, so it takes 360 beats (the least common multiple of 40 and 72) for the joint pattern to repeat. At the tempo I used, the whole recording covers less than one full cycle. (Or… it would, except I put the two patterns under manual control, suppressing the triggers while letting the sequencer keep clocking. Monkey wrench!)
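Out of curiosity, here’s a throwaway sketch of the pattern arithmetic. The closed-form Euclidean formula below is one common formulation (it yields a rotation of the canonical Bjorklund pattern), and the lane periods are taken as givens from the diagram:

```python
from math import gcd

def euclid(pulses: int, steps: int) -> list[int]:
    """A common closed form for Euclidean rhythms: fire on step i when
    (i * pulses) mod steps wraps below pulses. This gives a rotation of
    the canonical Bjorklund pattern."""
    return [1 if (i * pulses) % steps < pulses else 0 for i in range(steps)]

def lcm(a: int, b: int) -> int:
    return a * b // gcd(a, b)

print(euclid(3, 8))  # the 3-in-8 trigger pattern
print(euclid(4, 9))  # the 4-in-9 trigger pattern

# Lane periods: the 5-step sequencer against an 8-step rhythm for the
# top lane, and the 72-beat bottom lane as read off the diagram.
top_cycle = lcm(5, 8)        # 40 beats
joint = lcm(top_cycle, 72)   # beats before the joint pattern repeats
print(top_cycle, joint)
```

Since each lane is strictly periodic in the master clock, the joint period is just the least common multiple of the two lane periods.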
Complex patterns from relatively simple rules. But that was kind of a tangent — the point is, the frequencies I dialed in, relative to each other, often collided in non-integer ratios. Even if they sound good as individual notes played together, when you use them in phase modulation things get a bit dissonant and skronky, with new sidebands at weird frequencies.
This is the 21st century — beauty is complex, artistic merit isn’t directly tied to beauty, we’re not limiting ourselves to Platonic perfection, and the idea that certain intervals and chords could accidentally invoke Satan isn’t something we lose sleep over anymore. I think the result I got is pretty neat! But it’s not really what I had originally imagined. So I’m going to keep the basics of this idea, follow a different branching path with it and see where that goes.
The third voice, I controlled with the 16n Faderbank — one slider for level, one for pitch. The latter went through the ER-301’s scale quantizer unit, so it always landed on something that fit reasonably well with the other two voices. It turns out this unit supports Scala tuning files, and TIL just how crazy those can get.
Scala is a piece of software and a file format which lets you define scales quite freely — whether you just want to limit something to standard 12TET tuning, or a subset of that (such as pentatonic minor), or just intonation, non-Western scales, xenharmonic tunings, or exactly matching that slightly-off toy piano. The main website for Scala has an archive of 4800 different tuning files and that’s just too much. This is super-specialist stuff with descriptions such as:
With all these supermagic hobbits and semimarvelous dwarves and Godzilla, and all the other denizens with their Big Gulps and pistols, where do I even start with this? The answer is, I don’t. I’ll just try making a couple of my own much simpler scales that I can actually understand. Like 5EDO — instead of dividing an octave into 12 tones, divide it into 5.
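In fact, a 5EDO file is short enough to generate with a few lines of code. This is my reading of the .scl format (a description line, a note count, then one pitch per line, where values containing a decimal point are read as cents; the 1/1 root is implicit and the octave closes the list):

```python
def edo_scl(divisions: int, name: str) -> str:
    """Build an equal-divisions-of-the-octave tuning as Scala .scl text.
    Lines starting with '!' are comments per the format."""
    cents = [1200.0 * (i + 1) / divisions for i in range(divisions)]
    lines = [f"! {name}.scl", "!",
             f"{divisions} equal divisions of the octave",
             f" {divisions}", "!"]
    lines += [f" {c:.5f}" for c in cents]
    return "\n".join(lines) + "\n"

print(edo_scl(5, "5edo"))
```

Each 5EDO step comes out to 240 cents, so every degree falls in between familiar 12TET intervals, which is exactly the kind of thing I want to hear for myself.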
Today’s an especially slow workday and I’ve been reading a lot of interviews and articles at The Creative Independent. I haven’t had any particular epiphanies as a result, but it’s stirring the brain juices a little.
But I did have a minor revelation this morning about the connection between wavefolding and phase modulation, thanks to Open Music Labs being, well, open about their designs. In particular, the Sinulator, which is similar to the Happy Nerding FM Aid — a module I once owned and let go of because I figured I had enough FM/PM capability in my system. (Frequency modulation and phase modulation are very closely related; the simple version is that a continuously advancing phase is frequency, and PM is basically indistinguishable from linear FM in terms of results.) I’ve wished a few times that I’d kept the FM Aid, but could sometimes get similar results out of Crossfold. I didn’t understand why, though.
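To make that parenthetical concrete, here’s a little numeric sketch (sample rate, frequencies, and depth are all arbitrary choices of mine): phase modulation and linear FM produce the same signal when the FM modulator is the time-derivative of the PM modulator, because frequency is just the rate of phase change.

```python
from math import sin, cos, tau

SR = 48_000        # sample rate, Hz
CARRIER = 220.0    # carrier frequency, Hz
MOD = 110.0        # modulator frequency, Hz
INDEX = 2.0        # modulation depth, radians

pm, fm = [], []
phase = 0.0
for n in range(SR // 10):  # 100 ms of audio
    t = n / SR
    # PM: shove the modulator straight into the carrier's phase.
    pm.append(sin(tau * CARRIER * t + INDEX * sin(tau * MOD * t)))
    # FM: accumulate instantaneous frequency, where the deviation term
    # is the time-derivative of the PM modulator (divided by 2*pi).
    inst_freq = CARRIER + INDEX * MOD * cos(tau * MOD * t)
    fm.append(sin(phase))
    phase += tau * inst_freq / SR

max_diff = max(abs(a - b) for a, b in zip(pm, fm))
print(max_diff)  # small, and shrinks further as SR rises
```

The leftover difference is just the error of the crude sample-by-sample integration, not a difference between the two techniques.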
OML’s description and blessedly simple mathematical formula (no calculus or funny Greek letters!) make me realize, this is basically the same thing described by Navs some time ago (I think in a forum post rather than the blog though). And it ties in with my recent efforts to do nice-sounding wavefolding with the ER-301.
“Sine shaping” is a commonly used shortcut to wavefolding as well as triangle-to-sine shaping. It’s literally just plugging an audio input in as x in the function sin(g·x), where g is the gain.
If g is 1, and x happens to be a sawtooth or triangle wave, you’ll get a sine wave out of it. If the input is a sine, you get a sine that folds back on itself a bit… and the higher g goes above 1, the more the output will fold over on itself and get more complex and bright. (Better-sounding analog wavefolders and their digital imitators don’t map to a sine exactly, but it’s a similar-looking curve. Also they use multiple stages in series for more complex behavior. But a sine totally does work.) What I learned here is that adding another term inside that function will shift the phase of the output… tada, phase modulation exactly how Yamaha did it in the DX series (and then confusingly called it FM). A whole lot of puzzle pieces clicked together.
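Here’s a toy demonstration of the shaper itself. One normalization assumption of mine: “g = 1” means the input is scaled to span the ±π/2 region of the sine, which is what makes a full-scale triangle map onto a sine.

```python
from math import sin, cos, pi, tau, sqrt

def sine_shape(x: float, g: float) -> float:
    # My normalization: unity gain spans the +/- pi/2 linear-ish region.
    return sin((pi / 2) * g * x)

def triangle(p: float) -> float:
    """One period of a triangle wave for phase p in [0, 1)."""
    return 1.0 - 4.0 * abs(p - 0.5)

def harmonic_amp(samples: list[float], k: int) -> float:
    """Amplitude of harmonic k via a naive single-bin DFT."""
    n = len(samples)
    a = sum(s * cos(tau * k * i / n) for i, s in enumerate(samples))
    b = sum(s * sin(tau * k * i / n) for i, s in enumerate(samples))
    return 2.0 * sqrt(a * a + b * b) / n

N = 1024
for g in (1.0, 2.0, 3.0):
    out = [sine_shape(triangle(i / N), g) for i in range(N)]
    amps = [round(harmonic_amp(out, k), 3) for k in (1, 2, 3)]
    print(f"g={g}: harmonics 1..3 = {amps}")
# At g=1 the triangle comes out as a pure fundamental (triangle -> sine);
# pushing the gain up folds the wave and moves energy into upper harmonics.
```

Real analog folders won’t behave this cleanly, but the basic folding-equals-brightness behavior is the same.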
Anyway… in this model because one just adds the two inputs, it doesn’t really matter which is the carrier and which is the modulator. Why not use independent VCAs on both, and sequence them separately? Maybe some kind of polymetric, occasionally intersecting thing where it’s like two interacting fields, totally fitting the theme of the album I’m working on? To lend form to the piece, one of those inputs can be transposed, have its envelope or intensity changed, or a third input can be added (it’s just addition)…
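A trivial numeric check of that symmetry, in the same toy framing (the function and signal choices here are mine, not anything from the modules themselves):

```python
from math import sin, pi

def shaper(*inputs: float, g: float = 1.0) -> float:
    # The shaper only ever sees the *sum* of its inputs.
    return sin((pi / 2) * g * sum(inputs))

N = 512
x = [sin(2 * pi * 5 * n / N) for n in range(N)]  # "carrier": 5 cycles
y = [sin(2 * pi * 3 * n / N) for n in range(N)]  # "modulator": 3 cycles

ab = [shaper(a, b, g=2.0) for a, b in zip(x, y)]
ba = [shaper(b, a, g=2.0) for a, b in zip(x, y)]
print(ab == ba)  # True: swapping carrier and modulator changes nothing

# A third voice is literally one more term in the sum (a constant
# stands in for a real signal here).
abc = [shaper(a, b, 0.3, g=2.0) for a, b in zip(x, y)]
```

Which is exactly why sequencing both inputs through independent VCAs seems so promising: neither one is privileged, so either can carry the “melody” at any given moment.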
I don’t normally plan my compositions quite so much when I’m away from the instrument itself, and I almost never get this… academic about it. (Is that a dirty word?) But I’m eager to try this one.
So there’s a free peek inside a process I don’t usually use.
Reading about the history of synths, or about the use of synths in rock, one always comes across worshipful descriptions of Keith Emerson’s “Lucky Man” solo and the Moog Modular he took on tour to perform it.
I never really bothered to check it out. I don’t think I ever heard the song, or paid attention if I did. But I took the authors at face value: that this was a blistering, awesome performance that was part of the pincer maneuver which made Moog more or less a household name and doomed Buchla to relative obscurity (Switched-On Bach being the other half of that pincer), and that Emerson was a master both of modular synthesis and rock performance.
My curiosity was finally prompted by the MST3K riffing on Monster A-Go Go which made references to both “Fly Like An Eagle” and “Lucky Man” during a particularly synthy part of the soundtrack.
So I watched a couple of videos, and… well. Maybe a rock fan in 1970, having seen nothing like it, would have been blown away. But the first thing I noticed is the patch is really, really simple. Five years later he could have been playing that on the one-oscillator Micromoog. At the time, he could have pulled out 95% of the patch cable spaghetti draping the thing. Sure, it had an impressively powerful bass sound which Emerson made good use of, but there was nothing very sophisticated about the patch. The synth was mostly serving as a prop. “Look at all this equipment and all those cables, this guy must be a wizard!”
(I’m not disparaging Emerson’s synthesis skills — maybe this is the exact sound he was going for. Maybe it was set up for a quick between-songs repatch to do something completely different; pull one cable here and plug one in there and it’s ready to go. But I do think a lot of it was for show.)
The second thing is, the timing was really sloppy, at least in the performances I watched. Particularly in a more recent performance, there was a slow portamento and I wonder if that’s throwing off his playing, because he’s just not playing to the tempo of the rest of the band. It didn’t feel like expressive timing but just bad timing. Otherwise, what he played was… okay, but not the most acrobatic or virtuosic or creative solo I’ve ever heard by any means.
So, yeah. I guess this is just one of those cases where the historical context was the fuel and the art was a spark; with the fuel burned out we can see that the spark was a small thing.
I wrote up a forum post in a “how to synthesize drones” thread which, I think, contains the most coherent thoughts I’ve put together on the subject. Maybe that’s not saying much, but here it is for posterity, expanded a little bit.
I use the word “drone” in a more general sense than some people, but more strictly than others. If I control a sound in terms of level rather than “playing notes”, I generally consider it a drone. It’s not an absolute rule, but drones usually have a (more or less) fixed pitch. There may be rhythmic accents.
I don’t quite understand how a band like Earth is considered “drone” when they’re clearly playing riffs, have melodies and standard chord progressions and so on. That’s far too loose a definition for me. Nor does it have to be an unrelenting, 25-minute-long pure sine wave.
When I create drone-based music, this is what I think about:
I almost always set up at least two voices, because relative variations in level, spatial characteristics or timbre can be much more interesting than absolute variations of a single voice, and because they can lead to shifts in texture or the creation of new textures. Sometimes extra voices have their origin in the original voice, and just involve additional or different processing.
Although I’m talking about drones here, this corresponds quite a lot to Curtis Roads’ concept of “multiscale composition.” As I’ve said before, my act of composition is spread out between pre-recording, recording and post-recording phases — but it’s all composition, even if there are no “notes”, some is spontaneous, and some a reaction. Why not use the ears as a tool of imagination, and not just the brain?