think tank

Today’s an especially slow workday and I’ve been reading a lot of interviews and articles at The Creative Independent. I haven’t had any particular epiphanies as a result, but it’s stirring the brain juices a little.

But I did have a minor revelation this morning about the connection between wavefolding and phase modulation, thanks to Open Music Labs being, well, open about their designs. In particular, the Sinulator, which is similar to the Happy Nerding FM Aid — a module I once owned and let go of because I figured I had enough FM/PM capability in my system. (Frequency modulation and phase modulation are very closely related; the simple version is that a continuously advancing phase is frequency, and PM is basically indistinguishable from linear FM in terms of results.) I've wished a few times that I'd kept the FM Aid, but I could sometimes get similar results out of Crossfold. I didn't understand why, though.

OML's description and blessedly simple mathematical formula (no calculus or funny Greek letters!) make me realize that this is basically the same thing Navs described some time ago (I think in a forum post rather than the blog, though). And it ties in with my recent efforts to do nice-sounding wavefolding with the ER-301.

“Sine shaping” is a commonly used shortcut to wavefolding as well as triangle-to-sine shaping. It’s literally just plugging an audio input in as x in the function sin(xg), where g is the gain.
If g is 1, and x happens to be a sawtooth or triangle wave, you'll get a sine wave out of it. If the input is a sine, you get a sine that folds back on itself a bit… and the higher g goes above 1, the more the output will fold over on itself, getting more complex and bright. (Better-sounding analog wavefolders and their digital imitators don't map to a sine exactly, but it's a similar-looking curve. Also, they use multiple stages in series for more complex behavior. But a sine totally does work.) What I learned here is that adding another term inside that function will shift the phase of the output… tada, phase modulation exactly how Yamaha did it in the DX series (and then confusingly called it FM). A whole lot of puzzle pieces clicked together.
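The whole idea fits in a few lines of NumPy. This is just my own sketch of the sin(xg) formula above — the function and variable names (and the test frequencies) are mine, not OML's:

```python
import numpy as np

def sine_shape(x, gain=1.0, phase=0.0):
    # gain > 1 folds the input back on itself; a varying phase term
    # is phase modulation, DX-style. Names here are my own.
    return np.sin(gain * x + phase)

t = np.linspace(0.0, 1.0, 4096, endpoint=False)

# One cycle of a triangle wave scaled to +/- pi/2: at unity gain,
# sine shaping turns it into an exact sine (here, cos(2*pi*t)).
tri = (np.pi / 2) * (4.0 * np.abs(t - 0.5) - 1.0)
shaped = sine_shape(tri)

# Higher gain: the same triangle folds over itself and gets brighter,
# but the output stays within +/- 1.
folded = sine_shape(tri, gain=3.0)

# Adding a term inside the sine is phase modulation: a 220 Hz carrier
# phase-modulated by a 110 Hz sine (arbitrary example frequencies).
pm = sine_shape(2 * np.pi * 220 * t, gain=1.0,
                phase=2.0 * np.sin(2 * np.pi * 110 * t))
```

The triangle case is cute because it's exact: each linear segment of a ±π/2 triangle passes through sin() as a quarter-cycle of a true sine, which is why the shortcut works so well.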

Anyway… because this model just adds the two inputs, it doesn't really matter which is the carrier and which is the modulator. Why not use independent VCAs on both, and sequence them separately? Maybe some kind of polymetric, occasionally intersecting thing where it's like two interacting fields, totally fitting the theme of the album I'm working on? To lend form to the piece, one of those inputs can be transposed, have its envelope or intensity changed, or a third input can be added (it's just addition)…
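That symmetry is trivial to demonstrate in the sum-then-sine model; the gains and frequencies below are arbitrary illustrative choices of mine, with each gain standing in for an independent VCA:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 48000, endpoint=False)

# Three inputs, each through its own "VCA" (just a fixed gain here).
a = 0.7 * np.sin(2 * np.pi * 220 * t)
b = 0.4 * np.sin(2 * np.pi * 330 * t)
c = 0.2 * np.sin(2 * np.pi * 110 * t)   # the optional third input

# The shaper only ever sees the sum, so "carrier" and "modulator"
# are interchangeable labels: reordering the inputs changes nothing.
out_abc = np.sin(a + b + c)
out_cba = np.sin(c + b + a)
```

So sequencing the two (or three) VCAs independently just moves energy around inside one sum; no patch change is needed to swap their roles.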

I don’t normally plan my compositions quite so much when I’m away from the instrument itself, and I almost never get this… academic about it. (Is that a dirty word?) But I’m eager to try this one.

So there’s a free peek inside a process I don’t usually use.

put some clothes on, Your Majesty

Reading about the history of synths, or about the use of synths in rock, one always comes across worshipful descriptions of Keith Emerson’s “Lucky Man” solo and the Moog Modular he took on tour to perform it.

I never really bothered to check it out. I don't think I ever heard the song, or paid attention if I did. But I took the authors at face value: that this was a blistering, awesome performance; that it was one arm of the pincer maneuver (Switched-On Bach being the other) which made Moog more or less a household name and doomed Buchla to relative obscurity; and that Emerson was a master of both modular synthesis and rock performance.

My curiosity was finally prompted by the MST3K riffing on Monster A-Go Go, which made references to both “Fly Like An Eagle” and “Lucky Man” during a particularly synthy part of the soundtrack.

So I watched a couple of videos, and… well. Maybe a rock fan in 1970, having seen nothing like it, would have been blown away. But the first thing I noticed is the patch is really, really simple. Five years later he could have been playing that on the one-oscillator Micromoog. At the time, he could have pulled out 95% of the patch cable spaghetti draping the thing. Sure, it had an impressively powerful bass sound which Emerson made good use of, but there was nothing very sophisticated about the patch. The synth was mostly serving as a prop. “Look at all this equipment and all those cables, this guy must be a wizard!”

(I’m not disparaging Emerson’s synthesis skills — maybe this is the exact sound he was going for. Maybe it was set up for a quick between-songs repatch to do something completely different; pull one cable here and plug one in there and it’s ready to go. But I do think a lot of it was for show.)

The second thing is, the timing was really sloppy, at least in the performances I watched. In a more recent performance in particular, there was a slow portamento, and I wondered if that was throwing off his playing, because he just wasn't playing to the tempo of the rest of the band. It didn't feel like expressive timing, just bad timing. Otherwise, what he played was… okay, but not the most acrobatic or virtuosic or creative solo I've ever heard, by any means.

So, yeah. I guess this is just one of those cases where the historical context was the fuel and the art was a spark; with the fuel burned out we can see that the spark was a small thing.