Last night was our performance for the 80s Alternative winter session at School of Rock. I got to play some more synth-forward songs with a bunch of talented adult musicians, many of the same folks from the prior session.

Here was our set list:

  • Genius of Love - Tom Tom Club
  • Wild Wild Life - Talking Heads
  • Waiting Room - Fugazi
  • Topaz - The B52s
  • Girl U Want - Devo
  • Teenage Riot - Sonic Youth
  • The Metro - Berlin
  • Cars - Gary Numan
  • Where Is My Mind? - The Pixies
  • Weird Science - Oingo Boingo
  • Running Up That Hill - Kate Bush
  • Omegaman - The Police

In the prior performance, because I joined late, I only got to play three songs. In this show, I had to learn and play seven songs, plus a bit of a saxophone sound on Topaz. It was a lot to learn and keep in my head!

One different aspect of this show was learning the instrument itself. The Downingtown School of Rock has Korg Krome keyboards, which are pretty nice, but because I don’t get to spend time with them outside of our rehearsals, it’s really hard to prepare the right sound for a song. If you listened, you’d probably not be surprised at how much difference the right sound makes, from the “bwuippy” sound in Genius of Love to the iconic Vox Humana synth sound of Gary Numan’s music. I asked our music director if I could bring in my own keyboard instead of using the Krome, since there was an extra slot in the stand and I would then be able to do some sound design at home - he agreed!

Over the holiday break between performance sessions, I got an Arturia AstroLab 61 and the Arturia V Collection. The V Collection is a library of classic synthesizer simulators. The interface lets you tune the individual knobs and switches from the original classic synths, like the Fairlight CMI, which is the synthesizer that Kate Bush originally used to make the iconic sounds in Running Up That Hill. If the V Collection can make the sound, then there’s a pretty good chance that the sound can be exported to the AstroLab to play without being connected to the computer, which is a pretty big deal. Most stage keyboards produce their own sound, and while it’s possible to connect to a computer via MIDI and have the computer do all of the work, having the keyboard make the sound just feels like the more “pro” way to go.

Some of the sounds were fun to design. The Genius of Love song was possibly the most satisfying when I finally got it locked in. The Vox Humana sound for Cars was really easy to get - just use that specific vintage synth - after I learned that’s what the song called for. I spent a lot of time on YouTube watching breakdowns of these songs or the other songs of the same artists to try to get at just the right sounds within my rig. A couple of the sounds were named presets in my keyboard. For example, the entire set of sounds from The Metro were presets that were already there, including not just the bass sound that I played in performance, but also the ambulance sound effect that plays a couple of times during the song.
Sadly, switching sound presets on stage is tricky business, since these simulations are not small and take a second to load. If you try to play the new sound before it’s ready, you’ll end up playing the old one, and it’ll “hang on” to the old sound for longer, which often defeats the purpose. I tried splitting the keyboard so that the halves play different instruments, but sometimes this doesn’t work because the preset is actually two synths layered on top of each other, consuming both synthesis engines. In the end, I didn’t split the keyboard for any of the songs; it was enough just to memorize and perform one sound for each song. Learning to play the organ and the horns and the sequenced xylophone sound for Weird Science was just too much.

I did manage to create a sound for Topaz using Pigments, which is Arturia’s alternative to software synthesizers like Serum and Massive. Again, things you design in Pigments can be directly exported to the AstroLab, which is very nice, although the latest version of Pigments, version 7, does not yet have a compatible firmware version in the AstroLab. I had to downgrade to version 6, which was fine, but then some presets couldn’t be manipulated in the Analog Lab software that is used to connect to the keyboard. I made it work, but I’ll be glad for a firmware upgrade that lets the AstroLab support Pigments 7 sounds directly. In any case, I used the sampler in Pigments to split three of the Topaz sounds across octaves of the keyboard. So when I play C3, it’d be the opening sax, but when I play C5, it plays the rocket launching sound from later in the song. Unfortunately, I didn’t get all of this in the keyboard before our other keyboard player figured out how to make the Krome make those sounds (I think he either owns a Krome and/or spends a lot more time at School of Rock), so I only got to play the sax part. Still, I’ll know how for next time!

Some challenges with this performance were the same as last time. At least for the adult performances (and possibly for the student ones too), School of Rock just gives you the names of the songs they expect you to perform, and you have to figure out the rest. They don’t give you music. They don’t tell you what you’re supposed to play. All you have to go on is what you hear and the eight rehearsal sessions with the rest of the band. When you have two keyboard players, there doesn’t seem to be a magic formula for figuring out who plays what. And so I practiced certain parts of certain songs that I didn’t get to play, just so that I would be able to play whatever part we’d negotiate at rehearsal time. I honestly don’t know how else you might do this, but it does lead to having to listen and learn a lot more than you might otherwise. Is this a good thing? Maybe?

The weird performance issue with monitors was a thing in this show, too. I have said many times that I really don’t hear what I’m playing on stage. There’s a monitor behind me while I play, so I should be able to get a mix of the things I need to hear for cues and what I’m playing to ensure that I’m hearing myself. And even during sound check these things sound ok, but when the performance comes, it’s all banging on keys and hearing nothing. For next time, assuming I’m still using my own keyboard, I might bring the wireless IEM pack and plug it into the headphone port of the keyboard. The headphone out doesn’t turn off the mains on my keyboard, so I should be able to play and hear myself in one IEM, then hear the rest of the band in the other ear. Something worth trying in rehearsal, anyway. I wish the mixing board were a little more robust; being able to set my own monitor levels from my phone would be sweet.

This time I really enjoyed playing Cars. The sound in the keyboard was dead-on, and was my only real fancy instrument switch, where I started with a “wah” pedal effect and turned it off as the song picked up. The harmony at the end of the song was all me and sounded fantastic.

The Metro was a lot of fun to play, too. In spite of what I said earlier about the instrument presets for this song, I’m a little disappointed that the part I played used a “bass” sound in the instrument itself, because this limits the sound to being monophonic. As a result, I couldn’t play some of the nice power chords during the chorus. Nonetheless, the song is a driver, and playing all of those double notes so many times while also pausing for the choruses at the right time was a lot of fun to get right.

The other keyboard player didn’t seem fond of Devo as a band and didn’t like Girl U Want at all, but I thought it had a nice groove and was fun to play. We also had a female vocalist for this song, which was an interesting twist. It’s a goofy jam, and that’s fun.

My most confident song was Omegaman. There isn’t much to playing it, and it all sounds terrible. The dissonance in the vocals and power chords seemed impressive to some of the other musicians in our group, but the song didn’t impress me at all. But at least I got to play some (two!) actual chords instead of just melodies, so that’s nice.

I did not like playing Weird Science for a number of reasons. As fun as the song is, trying to listen to it to tear it down into playable parts is a nightmare. There are so many sounds happening at so many random times, it’s a mess. There is a lot for keyboards to play, but it’s all different sounds, and as I mentioned, it’s hard to quickly and reliably switch between instruments on the keyboard. I ended up playing only the organ parts, which was both boring and frustrating, because halfway through the song, the sequence just happens randomly. I think I might have been able to do better with some MIDI-triggered sounds, but given the tight timeline (I only got my own keyboard into the last two rehearsals) I didn’t get to work that out. Getting real-sounding horns and that xylophone sound into the mix might have been cool. Alas.

I really had a lot of fun with this session. I’ve been noticing that my ear for playing music is developing, which is the whole point. I mentioned to Berta that getting better is “all about the reps” and I really think this is true. Much like the online EDM mastery class that I have been taking, I think you just need to do it. Make a song, break down a song, play a song, etc. Put some dedicated time in and aim for an output; don’t just fiddle and never complete anything. And sure, it’s probably not perfect, but what is? And the more you do it, the better you get, and the better it sounds.

I’m not going to be able to join the next session, “Arena Rock”, which is a real bummer. I won’t be able to commit to practice in the way I’d want because we’re having to pack up the house for the kitchen remodel (a post for a different time). I will nonetheless continue with the weekly lessons, and I hope to return to practicing for another performance in May.

Comments

To comment on this post, search for this URL in your ActivityPub client (such as Mastodon): https://asymptomatic.net/posts/2026-02-28-school-of-rock-round-two
