good reads

I’m a software developer, or software engineer if you like. (Which isn’t “IT,” nor really the old-fashioned “data processing,” though I’ll allow “computer programming.”) I’ve long had a certain skepticism about computer science as a discipline, or at least about the way people in my experience have tried to apply it to practical problems.

During my career I’ve had a few coworkers who had been Computer Science majors, and who wanted to (and sometimes did) build complex structures full of indirection and obfuscation, with multiple layers of “controllers” that didn’t control anything, “handlers” that didn’t handle anything, generic interfaces always used for one specific purpose to which they were not well suited, and so on. The code took longer to develop, was harder to follow and maintain, and often performed worse than a more direct approach would have. Some of them also disliked caching or shortcutting algorithms as not “elegant,” even when those techniques improved performance very noticeably once I was asked to speed things up. A classic collision of theory and practice, I thought.

The second-to-last book I read, though, was Algorithms to Live By: The Computer Science of Human Decisions, and I’ll allow that maybe the field wasn’t well represented by its advocates. A lot of the book (and the field) deals with finding “good enough” solutions to intractable problems, or making decisions with insufficient or unreliable data. The book gives examples in several different categories and shows how they can apply to personal decisions as well as to political approaches and policies.

It turns out to be very practical. Like, sometimes the best sort algorithm is not to sort at all, because the time required to sort the data isn’t justified by the time saved when searching it. To make that call, you have to have a pretty good idea of how much data you’re going to deal with and how it’s going to be used.
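A minimal sketch of that tradeoff in Python (with sizes and query counts I made up purely for illustration):

```python
import bisect
import random

# Hypothetical workload: 100,000 items, searched only 5 times.
data = [random.random() for _ in range(100_000)]
queries = random.sample(data, 5)

# Option 1: leave the data unsorted and scan for each query -- O(k * n).
hits_unsorted = sum(q in data for q in queries)

# Option 2: pay O(n log n) up front to sort, then O(log n) per query.
ordered = sorted(data)

def contains(xs, q):
    i = bisect.bisect_left(xs, q)
    return i < len(xs) and xs[i] == q

hits_sorted = sum(contains(ordered, q) for q in queries)

# With only a handful of queries, the up-front sort dominates the total
# cost; with thousands of queries, it quickly pays for itself.
```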

Sorting efficiently is non-trivial, but it’s a solved problem, and developers are just going to call a standard library function and not worry about it. Other problems are quite different, especially when human behavior gets involved.

In life there are plenty of situations where everyone behaving rationally in their best interests turns out worse for everyone, and occasional situations where acting irrationally or taking risks turns out better for everyone. In almost every case, it’s because the system sucks, and a change to the rules will improve everyone’s outcomes. This applies to everything from network traffic to auctions to economic systems and policy to criminal justice. (So the meta-question, as always, is: how do you change the entrenched political system so that it’s willing to actually apply changes that are in everyone’s best interests?)
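The canonical toy example of rational-for-each, worse-for-all is the prisoner’s dilemma. Here’s a minimal sketch, with payoffs I made up for illustration (years in prison, so lower is better):

```python
# PAYOFF[(my_move, their_move)] -> (my_years, their_years)
PAYOFF = {
    ("cooperate", "cooperate"): (1, 1),
    ("cooperate", "defect"):    (3, 0),
    ("defect",    "cooperate"): (0, 3),
    ("defect",    "defect"):    (2, 2),
}

def best_response(their_move):
    # Each player, reasoning alone, minimizes their own sentence.
    return min(["cooperate", "defect"],
               key=lambda mine: PAYOFF[(mine, their_move)][0])

# Defecting is the "rational" choice no matter what the other player does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection (2, 2) leaves both players worse off than mutual
# cooperation (1, 1). The only fix is to change the rules -- the payoffs.
```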

There were a couple of computer science metaphors where I could see applications to my own creative endeavors. For instance, Early Stopping and perhaps Overfitting. There’s a point where you should just stop working on a piece rather than trying to perfect it, because the changes you’re making are no longer improvements. In the case of art/music, I think it’s because you’ve been exposed to the piece too long, and anything different has novelty value. It’s often better to stick with first instincts. This is a lesson I learned some time back, but with more recent changes to my editing process, where I tend to layer in more things after the initial recording, it’s good to have a reminder.
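In machine-learning terms (my own sketch, not the book’s), early stopping usually looks something like this: keep going while a held-out measure of quality improves, and quit once it has stopped improving for a while, keeping the best version you saw rather than the last one.

```python
# Hypothetical training loop for illustration; `train_step` and `validate`
# stand in for whatever "work on it more" and "judge it fresh" mean.
def train_with_early_stopping(train_step, validate, patience=3, max_epochs=100):
    best_score, best_model, stale = float("inf"), None, 0
    for epoch in range(max_epochs):
        model = train_step(epoch)      # one more round of polishing
        score = validate(model)        # lower is better here
        if score < best_score:
            best_score, best_model, stale = score, model, 0
        else:
            stale += 1                 # a change, but not an improvement
            if stale >= patience:
                break                  # further effort is just overfitting
    return best_model                  # keep the best version, not the last
```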

Simulated Annealing comes from metallurgy, where annealing is a process of heating a metal and letting it cool very gradually, allowing the crystalline structure to settle and relieving internal stresses. Heat is just random motion, so carrying the metaphor over to simulations and modeling, the idea is to begin with a large amount of randomness and decrease it with each iteration. This turns out to be a good way to “jiggle” a solution out of local maxima it would otherwise get stuck in.
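A minimal annealing sketch, maximizing a bumpy one-dimensional function I invented for illustration (the classic trick is accepting occasional *downhill* moves, with a probability that shrinks as the temperature cools):

```python
import math
import random

def f(x):
    # A made-up function with several local maxima.
    return math.sin(5 * x) + math.sin(x) - 0.05 * x * x

x = random.uniform(-10, 10)
temperature = 10.0
while temperature > 1e-3:
    candidate = x + random.gauss(0, temperature)  # bigger jiggles when hot
    delta = f(candidate) - f(x)
    # Always accept improvements; accept setbacks with a probability
    # exp(delta / T), which shrinks as the temperature cools.
    if delta > 0 or random.random() < math.exp(delta / temperature):
        x = candidate
    temperature *= 0.99                           # the cooling schedule

print(f"settled at x = {x:.3f}, f(x) = {f(x):.3f}")
```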

It’s also, if I understand correctly (and my understanding is admittedly vague, though better informed than some), roughly the way “AI” art algorithms work, in a very general sense: the generator network starts from raw random noise and iteratively refines it to try to satisfy the discriminator network (which decides whether the pixels look like a duck), each iteration using less randomness and more of its “knowledge.”

My creative process has a similar pattern. I generally start with an idea, take wilder swings and wobbles at variations, and then gradually settle down. The last couple of voices I add during my patching phase (not necessarily the last to be played chronologically) are designed to complement the rest; there’s less experimentation and more drawing on existing experience.


The latest book was No One Is Talking About This. I didn’t know much about the book before checking it out, just that it was highly recommended and had something to do with internet culture. I had the impression that it was going to lean toward science fiction, about someone trapped online (either in a more literal cyberpunk sense, or in a mental-health sense of obsession/addiction).

Instead… it’s almost non-fiction, an extremely relatable stream-of-consciousness journal of 2016-2019, a sort of satire by way of simply reporting life and culture and letting the absurdity stand out on its own. To summarize, I’d have to say it was about how people connect (or don’t), whether they’re strangers or family, and about how people react to each other. I will avoid spoilers, but about halfway through, the narrator is shocked by personal life events out of their Extremely Online life into something else, and it’s both heartbreaking and heartwarming.

It felt very odd somehow to read a story so clearly about the Trump era and our cultural/political response to it, yet one that cuts off right before the pandemic and the January 6 fiasco, both of which loom so large now. It’s almost scary to think about, but it goes back to what I had said about COVID being one of those definite “before” and “after” points in history.