Static State

Hi Everybody-

Just some quick hits tonight. First, thanks to the people who have shown interest and offered words of encouragement recently - I appreciate it. As you can tell from all of the sketches I've been tweeting out, I've been doing a lot of experimentation with Spectrasonics' Moog Tribute Library and enjoying it a lot. You should probably expect to hear a lot of Moogish instrumentation on the upcoming stream. I have three interval structures from Static Void pretty well fleshed out, so I'm taking a break to do some software work while I have the energy.

I've been listening to as much generative music as I can over the past few weeks. It's a little tough with me being such a rap & rock guy, and interesting because a lot of generative music is either ambient or almost purely rhythmic. I seem to be the only game in town in terms of cohesively generative melody and/or harmony. That said, there are a lot of generative audio artists doing some interesting things - Renick Bell specifically is doing some really great work in the space. Also, you might wanna listen to Rob Clouth and some of the other Leisure System stuff. A lot of this music isn't purely generative, but will have some generative or algorithmic elements. I'm still looking for an area where I might fit in a little more and be part of the conversation...something closer to generative pop. So far I'm not seeing it though...maybe that's not a bad thing.

I'm not gonna get into Autechre or Brian Eno. If you're reading this, you already know about them.

Anyway...Static Void. Unfortunately, I have to do the thing I hate the most for a little while: run endurance tests with the Aleator, with a little regression thrown in. I'm introducing a ton of changes and there were known issues with the plugin running indefinitely as it was. I'm making some progress...if I get it to the point where it is consistently running for 3 days or more, I can live with that.

Once I get past that I will add a feature or two. Dynamic instrumentation (i.e. not having all the instruments always playing) is definite, and I might look into either fades or tuning the drum kits. I'm definitely going to add one more kit as well - I'll go with four total for this stream. Considering the fact that Facets only has one, that's plenty. Then finally, I'll get three more structures in there (6 total). I'm really excited about how the sketches are turning out and think the stream is going to shock a lot of people into believing.

In the meantime, I'll keep pumping out the sketches when I get time to tide you guys over. Thanks again for caring...

Space Case

One of the main reasons I started working on Staggered was because I wanted some insight into my own musical tendencies. My knowledge of music theory was (and remains, relatively) limited. I just played shit that I thought sounded cool and never really thought about exactly why I felt that way about it.

Part of the fun of actually working on streams as opposed to working on the Aleator itself or the site is that I get to analyze my stylistic decisions as a songwriter, specifically as they relate to chord coloring and harmony. If we imagine melody and harmony occurring on the x and y axes, respectively, contemporary pop music is horizontally focused. Meaning - the most important thing is the melodic hook...that's where the money gets made. There is a ton of sound being crammed down our throats as listeners - whether it be words, weird ass noises, effects or just pure saturation. Harmony requires a certain amount of aural space to really have an impact and as a result it seems that harmonic concerns aren't really at the forefront right now. That makes the topic very interesting to me.

I want to open up my process a little for you guys and provide some visibility into how these streams come into being. First off, everything starts with acoustic guitar. That may seem counterintuitive, but it's the truth. So, let's have a listen to a quick sketch of a piece I'm working on called Space Case. As a song, it's not overly complicated:

So, a quick rundown of what's going on here in terms of progressions. I am in C major, but I work key-agnostic, so I'm providing Roman numeral analysis. For part "A", each chord is held for a measure. I alternate between Cmaj9sus2 (I) and Am7 (vi) three times, and then for the final cadence move to Em7 (iii) <--> Dm7 (ii). Then, for part "B", I modulate to the relative minor (A minor in this case) and hang out on the newly established tonic. This time I'm cramming two chords into each measure, but it's a similar pattern. I alternate between Am7(b13) (i) and Am7 (i) for the first three measures, then in the final measure go from Em7 (v) to F (VI).
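
To make the key-agnostic idea concrete: I think of each chord as a scale degree plus a coloring, and the key only enters when the progression gets realized. A quick sketch of that (Python just for illustration - this isn't the Aleator's actual data model):

```python
# Each chord is just (scale degree, coloring) - no key anywhere.
PART_A = [("I", "maj9sus2"), ("vi", "m7")] * 3 + [("iii", "m7"), ("ii", "m7")]
PART_B = [("i", "m7(b13)"), ("i", "m7")] * 3 + [("v", "m7"), ("VI", "")]

# Semitone offsets from whatever tonic we bind to later.
DEGREE_OFFSETS = {"I": 0, "i": 0, "ii": 2, "iii": 4, "IV": 5,
                  "v": 7, "V": 7, "VI": 8, "vi": 9}
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def realize(progression, tonic):
    """Bind a key-agnostic progression to a concrete tonic."""
    root = NOTES.index(tonic)
    return [NOTES[(root + DEGREE_OFFSETS[degree]) % 12] + quality
            for degree, quality in progression]

# realize(PART_A, "C") names the part "A" chords above;
# realize(PART_B, "A") names the part "B" chords.
```

The payoff is that the same structure can be dropped into any key by changing one argument.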

Right away you'll notice a lot of 7th chords, and some are even extended. I've always known that I tended to use a lot of 7ths and 9ths, but my use of suspended and extended chords was news to me. This was all pretty problematic with Facets, as the Aleator could only play basic triads at that point. As a result, it lacks a certain amount of nuance; I'm hoping that the work I did on the Aleator in the fall and winter makes Static Void a lot richer and that some of the subtleties that come through on the acoustic can find their way through everyone's speakers.

Anyway, when transposed into XML for the Aleator to consume (with another tiny part added), we get:
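
To give a flavor of the idea, part "A" might be encoded something like this (the element and attribute names here are illustrative shorthand, not the Aleator's real schema):

```xml
<progression id="SpaceCase_A" beatsPerChord="4">
  <chord degree="I"   quality="maj9sus2" />
  <chord degree="vi"  quality="m7" />
  <chord degree="I"   quality="maj9sus2" />
  <chord degree="vi"  quality="m7" />
  <chord degree="I"   quality="maj9sus2" />
  <chord degree="vi"  quality="m7" />
  <chord degree="iii" quality="m7" />
  <chord degree="ii"  quality="m7" />
</progression>
```

The point is that nothing in there references C major - the tonic gets decided at render time.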


With this, the first step is complete; the harmonic framework for this particular passage is represented in XML with no reference to key. Next time I'll talk a little about some of the challenges that have arisen as a result of the additional intervals, altered 5ths, etcetera. Space Case is the first Staggered piece to incorporate any of that so we'll see how it goes. In the meantime I'll keep tweeting sketches.

Nth World Problems


To anyone who has paid even the slightest bit of attention to what I'm doing, I want to say Happy Holidays & Happy New Year.

As far as Staggered is concerned, this has easily been the most frustrating and humbling year of work since year one. In a lot of ways, it was worse. In 2013, I was working in total abstraction and didn't know for a fact that live, generative MIDI was even possible. As far as I knew, nothing like it had been attempted before. It was an amazing moment when my code generated those first rhythmic pulses of white noise. It wasn't music (or was it!?), but I knew that I was onto something.

In a lot of ways, not much has changed. Musicians are dipping their toes into generative techniques using software, the same as they were before I started working. However, in the realm of music, two facts placed Staggered on an island in 2013 and still do:

  • Code is my primary instrument
  • My output is ephemeral by design - I do not aspire to be a recording artist

The second point cannot be overstated. Since I am streaming live 24/7, I have infrastructure concerns that even another musician working primarily in generative MIDI would not: availability, fallbacks, recovery, etc. In this space, there is no one for me to defer to and no known blueprint. I'm still working from scratch.

For this reason, I deployed the minimum viable product in early 2014 as Facets, which is still what you hear on this site. Before I can move forward, that prototype must become an application with all of the features I need to create the desired output. This is what I've been working on all year, and it has not been easy. The work itself has been painstaking and tedious, with no real gratification until recently.

Starting in the New Year, I will be posting test runs from Static Void. This isn't some sort of planned rollout - it's just coincidence that I've been able to complete the implementation of some of these features over the last few weeks. These include:

  • Changing drum kits from one composition to the next (currently only one kit)
  • Changing chords on 8th notes (currently only done on quarters)
  • Lead melody playing 16ths (currently only 8ths)
  • 4+ note chords (currently only triads)

While these might not seem like a big deal and might not even be detected by a casual listener, the mathematical component of the changes made them exceedingly difficult to debug, especially in my spare time. That's really what happened to 2016. I had nothing to disclose on a given day other than what specific algebraic hell I had sunk into.

I will be transposing my notes to XML, experimenting with new presets and doing some test recordings (gasp!) for you guys in the coming days just to pull you in on the process. You know, the fun stuff. The reason I started doing this in the first place. There's still some low hanging fruit in terms of features (for example: varying the instrumentation instead of having all instruments always playing), but I need a break from dev for a while.

As I move forward, please post ideas in the comments or tweet. Thanks again for your interest and again, Happy New Year!



Reverse Proxy: Two Birds, One Stone

Hey it's been a minute so I thought I should speak upon some nerd shit. You know, for posterity. 

Today's topic will be the reverse proxy. As you may or may not have realized, I was incapable of streaming through most corporate firewalls previously. I use SHOUTcast as my streaming server and the audio comes through on port 2199. I couldn't figure out how to change that on the SHOUTcast side; I don't believe it's possible. I've been operating at such a low level though that it really wasn't pressing. The result was simply that most people (myself included) couldn't listen to Facets - or anything else that I stream - from their work computer. That made me feel like this:

Seriously, I moved on pretty quickly. However, when I started trying to implement the visualization in the dashboard using the Web Audio API (another thing still in progress), I realized I had another, related problem. I couldn't use an audio buffer source for my stream because it never ends. It's impossible for me to fill the buffer since the onload event will never fire; the request never technically "loads". In other words, this shit will not work, b:

      var url = ";";
      var request = new XMLHttpRequest();"GET", url, true);
      request.responseType = "arraybuffer";

      /* Good luck ever hitting this, dumbass */
      request.onload = function() {
         /* Create the sound source */
         soundSource = context.createBufferSource();
         soundBuffer = context.createBuffer(request.response, true);
         soundSource.buffer = soundBuffer;
      };

      request.send();

That meant that I needed to use a media source. No problemo. Oh wait - still one problemo: CORS. For you non-developers who are weird enough to be reading this, CORS stands for Cross-Origin Resource Sharing. It basically means that you can't use JavaScript to load resources from a domain other than the one your application is running on without consent on the other end. In this case, I was fucked. I can't just make SHOUTcast allow me to make JavaScript requests to resources on their domain. What to do...

With a little elbow grease (aka Google), I had my solution: set up a reverse proxy. The basic idea is that if you are administering a web server, you can set it up as a sort of relay and allow cross-origin access to the resource there. You configure your web server to forward requests meeting the desired criteria to the destination of your choice (in this case, my SHOUTcast URL). Then, on the front end (e.g. Squarespace), you send your request to your web server. In IIS, it looks like this:

Click URL Rewrite


Click Add Rule(s), then Reverse Proxy in the subsequent window and follow instructions


All set

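For anyone who'd rather skip the GUI, those clicks boil down to a rewrite rule in web.config. Something along these lines (the inbound path and host below are placeholders, not my actual setup, and this assumes the Application Request Routing module is installed):

```xml
<system.webServer>
  <rewrite>
    <rules>
      <rule name="ShoutcastProxy" stopProcessing="true">
        <!-- requests to /stream/* get relayed to the SHOUTcast server -->
        <match url="^stream/(.*)" />
        <action type="Rewrite" url="http://my-shoutcast-host:2199/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```
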

Not sure about Apache and other servers, you're on your own there. Of course, as a side effect of the default IIS configuration the resulting audio is exposed on port 80, which alleviates any port/firewall problems. Pay me.

Don't Mind Me...

You may have noticed some changes around here recently. We're redoing the site on the fly so just act like everything's normal. As you can see, we're installing a custom player. You can see the new functionality in the upper left hand corner of the site, but we're (obviously) still working on styling. As a side note, the nav items have been moved from the center to the upper right.

In the coming weeks we'll be updating banners and...(drum roll)...adding our new dashboard to the navigation. It's still under construction, but you can preview it here.

Y'all finished or y'all done?


Don't say I never did anything for you.



Happy Saturday, true believers. Another weekend, another immense dev task to tackle in my spare time. Today (and for the foreseeable future) I am working on changes to the Aleator that will allow chord changes on 8th notes; currently they only happen on quarters. This is one of the four major changes I was planning on implementing for Static Void:

  • Chord changes on 8ths
  • Drum kit changes between passages
  • Increased chord coloring (7ths, 9ths, etc)
  • Arrangement variation (inclusion/omission of instruments in a given passage)

Given where I am, though, I will probably just move forward with the first three and implement arrangement variation in a phase 2. There have been a ton of changes since I last pushed to production, which isn't good, so I will look to get all of that stuff tested and deployed, and then get the new XML sets in place before any further Aleator changes.

Last week, my DAW (Reaper) stopped producing audio for a very long time, but didn't crash. This is the worst-case scenario, since it leads to silence in the production environment (a crash results in fallback files being played). Before I launch the next stream, I have to figure out a way to stop this from happening, or at least bring down Reaper when the Aleator stops producing notes. That's what we call a "P1" in the biz.
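
I haven't written it yet, but the shape of the fix is a watchdog: track when the last note was produced and flag a restart once that goes stale. A sketch of the decision logic (Python just for illustration - the real thing would sit next to the Aleator and kill/relaunch Reaper, and all the names here are mine, not actual code):

```python
import time

SILENCE_LIMIT = 120  # seconds with no notes before we assume Reaper is wedged

class Watchdog:
    def __init__(self, limit=SILENCE_LIMIT, clock=time.monotonic):
        self.limit = limit
        self.clock = clock          # injectable for testing
        self.last_note = clock()

    def note_played(self):
        """Call this every time the Aleator emits a note."""
        self.last_note = self.clock()

    def should_restart(self):
        """True once we've been silent longer than the limit."""
        return self.clock() - self.last_note > self.limit
```

The actual restart (killing and relaunching the DAW process) is the easy part; knowing when to pull the trigger is the point of this.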

Wanna see how boring this shit is to implement? Here's a peep at the method I'm currently altering - this one determines drum totals if the Aleator has decided that it's going to play a Reggae style beat. Ugh.

Static Void

Long time no see. I just wanted to take a moment to let anyone reading this know that after a long time just coding, we are getting close to having what you might call an alpha. That means that activity around these parts will see a welcome increase. There are still a lot of improvements to be made on the software, but the framework is in a place where we can bring the project in front of the public.

In the coming months we will be releasing Aleator improvements to allow for more chord coloring and dynamic capabilities, as well as launching a Kickstarter campaign and continuing development on our forthcoming stream, Static Void. I feel like that sentence should have ended with at least one exclamation point but it seemed stupid so I just went with the period. Anyway, the goal of the Kickstarter campaign will be to fund both streams (SV as well as 2014's Facets) for two years, so it's important that it go well. A lot of effort and energy are being put there. We also will be optimizing the site for mobile and making some aesthetic changes, so pardon us in advance.

I'd like to thank The Melissas for their continued support - it's very much appreciated. And now, allow me to leave you with a very nice integer notation diagram that's helped me immensely:

Never forget.





When Facets went live in February, it seemed like a pretty big accomplishment, and it was. The Aleator plugin had been in development for an extremely long time and we were finally at a point where we could attempt to run it in a production environment. Of course, attempting and accomplishing are entirely different things.

If I've learned one thing over the last 4 months, it's that this project is still in its infancy from a technology standpoint. The problem of delivering live audio 24/7 using the methods described here isn't a simple one - there are many points of failure, and it's taken a long time to work through them. Some of them have been detailed in this blog and some of them haven't. Most recently, I discovered that when responding to the data received from the DAW's transport, there are instances when it instructs the Aleator to stop playing. That has been fixed in the most recent assembly, but even now there are instances when the stream inexplicably disconnects from the SHOUTcast server. When that happens, the connection has to be manually reestablished in Reaper. These sorts of things can be near impossible to debug, as it can take days to observe the behavior. For this reason, it was necessary to implement pretty extensive logging - otherwise it gets pretty hard to tell what precipitated certain events.

Did any of that sound exciting? It's hard to post blog entries and tweet about this stuff because it's incredibly boring, but it's nonetheless necessary. Before there can be any real promotional effort behind Facets, the uptime has to be considerably higher. That means we need to figure out what is causing these disconnections, or at least figure out a way to be notified when they occur. The more of these problems get addressed, the more confident we can be that the stream will be active when listeners try to access it. Only when that confidence is high does it make sense to aggressively promote it.

In light of all this, I've come to the conclusion that putting a lot of effort toward the release of an assembly doesn't make sense right now. We will maintain the CodePlex project for posterity (and because we need source control), but we will not be focusing on building out that project in any formal sense.

Finally, I am starting to put some of the primary building blocks in place for the next release. Just sketches really, but I am getting a sense of the palette. Should be fun.

The Next Episode

Long time no talk. Lots going on with Staggered in recent days, especially on the tech side. If you're reading this, you probably know that tempo changes have been implemented in production. Whew, that was something. We also switched streaming data centers - we were initially set up in the EU, since that's the default location with our host. The result of the switch is much less latency, meaning that if we are doing anything on the production server, we can hear the live impact a lot more quickly.

One great thing that happened: iTunes Radio has picked up Facets...crazy, right? Proof:

We exist


Announced with all of our deserved fanfare.

So the next thing that will happen is that our production server will be migrated to the new data center. This is going to result in a significant outage tomorrow night (3/30/14) into Monday afternoon. We will provide additional fallback files before the outage, so if you happen to visit during that time you may not notice a change, but that audio will be prerecorded.

Most important of all, we have begun writing for our next stream. There's a lot to consider there...a lot of aspects to musical composition that we didn't have the time or resources to cover for Facets. A big one is the notion of commonality between different passages in a movement. When a (good) songwriter crafts a song, it isn't just a bunch of parts arbitrarily crammed together. The various sections of the song are related to each other in some sort of musical sense - same keys, complementary melodic patterns, rhythmic phrases supporting or countering one another...maybe a combination of different things. As a result, Facets can feel somewhat jerky at times. It's fine if that is the desired effect, but one thing we will definitely address before we go to production with another stream is making our compositions more cohesive, unless the goal for a particular one is a more fragmented feel. We will get into more of this stuff in the coming weeks, but just know that we definitely want to take what we are doing to another level in terms of quality (a higher one, to be specific).

Ok everybody enjoy the weekend -  catch me on Titanfall. 1!


According To My Calculations Part II

Taking a little break from the tempo change implementation to do a follow up on the first post as promised...

I won't spend a lot of time here going over the same topics I did in Part I. Continuing with the scenario discussed there, let's assume we are approximating a One Drop rhythm.



Pete approves. So we have guessed that there are going to be two kicks to the measure. That's great, but where do they go? This is where it really becomes more of an art form than a development exercise. You are really writing algorithms to apply a certain sense of style to the phrases you generate, and the kick is a really good example in this situation.

For these sorts of rhythms, the routine is pretty simplistic. We determine the basic sort of beat based on the number of kicks predicted per measure. If we guessed 1 or 2, we know this is going to be a One Drop, and as such will look to place all kicks on a 2nd or 4th beat - remember we are counting half time, so that may be considered the 3 by some. If we only guessed 1 (as opposed to 2) or ended up with an odd total, a given note may be placed randomly on either the 2nd or 4th beat of the measure. This logic gets extrapolated out to Rockers, meaning that if we guessed four kicks per measure or fewer, we will look to place all kicks on quarters. This expands to Steppers in the same manner (except with eighths).
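
The placement step above can be sketched like so (Python just for illustration - slots are eighth-note positions 0-7 in a half-time measure, and this is a simplification of what the C# actually does):

```python
import random

# Candidate kick positions per style, as eighth-note slots 0-7
# in a single half-time measure.
CANDIDATES = {
    "one_drop": [2, 6],                     # beats 2 and 4
    "rockers":  [0, 2, 4, 6],               # every quarter
    "steppers": [0, 1, 2, 3, 4, 5, 6, 7],   # every eighth
}

def place_kicks(style, total, rng=random):
    """Place `total` kicks in one measure on the style's candidate slots.
    If the guess saturates the candidates, use them all; otherwise
    pick randomly among them (the odd-total case from the post)."""
    slots = CANDIDATES[style]
    if total >= len(slots):
        return list(slots)
    return sorted(rng.sample(slots, total))
```

So a One Drop guess of 2 fills both backbeat slots, while a guess of 1 lands randomly on the 2 or the 4 - exactly the behavior described above.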

Now, these are by far the most straightforward calculations we make with respect to generating the note lists. Most of the time we make these predictions using distribution models like the one mentioned in Part I. We finely tune the parameters to the point where we are comfortable with the possibilities, and let chance sort out the rest. That's part of the art of it, so I won't get into details - if you are reading this you are obviously interested in implementing your own solution, so just do it the way you think it should be done.

Tempo variation is coming...not easy...




Since we don't have a "Links" section, I'm gonna take a little space here to quickly mention a couple of interesting sites I came across this weekend in my dealins. Caliper is a blog that focuses specifically on instrumental and experimental's extremely well curated, and you can hear some really great stuff on there. I actually thought the The Use song that's up there sounded not unlike something we might generate. Ugh, I hate being forced to use two "thes" in a row. Anyway - the guys were even nice enough to give us a mention here.

The second thing I wanted to throw at you guys was this piece by solo.op:

Kota is a label that likes to bend minds, and this drone bomb is definitely in line with that aesthetic. RanDrone is generative much like one of our streams, but...you'll just have to visit and hear for yourself. Far out.

For the party people, you should really listen to the new Major Lazer EP, Apocalypse Soon. If you know Major Lazer, there are no surprises - just the dusted dancehall we've grown to love.

Damn I gotta get to that other post. Sorrryyyyy...



Abe looking pensive.


Happy Prez Day, everybody. Made a lot of progress this weekend on the overall stability of the stream, and a lot of it was due to a conceptual shift in what a "stream" represents. I'd like to wax philosophical about it for a moment.

It takes a while for the Aleator to render the MIDI for an entire movement, let alone a set of them. As a listener, my favorite format of recorded music is the album. Songs are great, but putting together a well crafted album that is engaging for its entire duration is extremely difficult to do.

When applied to the stream though, the problem is that 99.9% of all listeners will be joining it in progress. There is no real way to control what is perceived as the beginning, middle or end of the stream. In essence, they don't exist.

Previously, the MIDI for the entire set of compositions would be loaded into memory at the beginning of each cycle. As I said - that took a while, so I would start a separate background thread to spin up the new set as the current one was ending. This multithreading worked fine in the short term, but seemed to cause memory problems when the plugin ran for an extended amount of time (4+ hours). My gut instinct tells me that something in one or some of the libraries I am using isn't thread safe, but I digress.

Looking back on it, it's awful design...I just fell into the trap of thinking of the whole thing as a single unit that needed to be dealt with as such. The fact that I can't force a listener to the beginning really forced me out of that mentality. Now, one composition is loaded at a time, and added to a list of compositions that have already been played. When looking to load the next composition, the plugin randomly selects one that hasn't already been played. This continues until all compositions have been played, at which time that list is cleared out and the process starts all over again. It will still play through all included compositions without repeating, but the sequence is random.

This approach is really in line with the general direction of the project, a part of which is to walk the line between control and chance in our algorithms. And the best part, of course, is that loading one composition at a time allows everything to run on a single thread, leading to increased stability overall.

It doesn't make sense to think of these resulting streams in a linear sense with respect to any sort of track sequence. There literally isn't one. Ideally, we want the listener to observe a stream aurally in much the same way they would take in a piece of visual art. There can be edges or borders, but it doesn't really have beginnings and endings. It's just there, existing.

I'm gonna step away from the dev stuff for a second and do another music post next time, promise.

The Memory Pit

So I know I was supposed to do a post on how notes get distributed on the virtual staff, and I will get to that. I just wanted to post a quick update and talk about what's been going on with the project lately.

One of the ideas that I put forth when describing the project is that a stream is equivalent to an "endless album". It should loop through the same progressions indefinitely but should spin up new sets of notes with each iteration. Full disclosure - I discovered a bug in the plugin in late January and realized that wasn't happening. The same notes were being spit out with each cycle, the only difference in the resulting audio being the different presets that happened to be loaded in my synths at the time.

The fix for that was easy enough. It basically amounted to rearranging a few blocks of code. However, making this change exposed an enormous memory leak in the plugin. This one was actually more like a memory pit - with each iteration through the composition lists, memory allocated for certain objects (most notably compositions, progressions and notes) basically doubled, along with the object counts. Why?

To get to the bottom of this, I fired up dotTrace memory profiler. I'm not too used to dealing with memory leaks in .NET - generally garbage collection will release resources for all eligible objects, and I am pretty careful not to create objects willy-nilly. Point being, it took me a little while to figure out what exactly I was looking at and how to really go about diagnosing the leak. The view that made the most sense to me was the root path, as that was the easiest way for me to visualize where the references to old objects were originating. The following two images are screenshots from iterations 2 and 3 of an Aleator run:

Iteration 2


Iteration 3


I know the images are small - I am looking specifically at the memory allocated on the heap for Composition objects. As you may or may not be able to see, there were 5 compositions in the second node during iteration 2, but 10 for iteration 3. There are only supposed to be five compositions in memory for Facets, so basically we have a smoking gun here.

As it happens, I lucked out with respect to a solution. Digging down further into the node, we can see that old notes have references in a list called m_isPlayingList. This was actually a holdover from a time when I wanted to keep track of currently playing notes so I could turn them off if they were hanging after a progression or composition switch. That is no longer needed, since at this point "all notes off" messages are sent on all channels except for drums if the progression is changing. I was able to simply remove all references to this list and that in and of itself fixed the leak. The fact that notes hold references to their parent progressions and compositions meant that none of these objects were ever eligible for garbage collection. Ugh. Since the leak was causing intermittent OutOfMemory exceptions, I am hopeful that this fix will result in a little more stability for the stream.
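
The moral applies in any garbage-collected runtime, not just .NET: an object is never eligible for collection while something reachable still references it. Here's the same trap reproduced in Python (the weakref is only there so we can observe the object's lifetime):

```python
import gc
import weakref

class Note:
    """Stand-in for a note that back-references its parent structures."""
    pass

is_playing_list = []            # the lingering holdover list
note = Note()
is_playing_list.append(note)    # this reference is the leak
probe = weakref.ref(note)

del note
gc.collect()
alive_before = probe() is not None   # True: the list still holds it

is_playing_list.clear()         # drop the stale reference...
gc.collect()
alive_after = probe() is not None    # ...and the object can finally die
```

One stale list reference was enough to pin every note - and, through the back-references, every progression and composition - in memory forever.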

At the end of the day, I guess this is really an advertisement for JetBrains dotTrace. If you run into trouble with memory or performance during your dev adventures, it's a pretty nice tool...

According To My Calculations Part I

Back To The Beat (AKA Totals)

Ok so...we're going to use this space to discuss some of our algorithms and how we've arrived at the specific implementations used in our code. Although we write in C#, we'll keep the conversation at a relatively high level and refrain from referencing particular code blocks. Since we've made no mention of how we build drum patterns in the other pages, that seems like a good first entry.

The word 'Techniques' is an important one to us - it is actually a namespace within the Aleator application that contains all of the phrase building code. You may notice when listening to Facets that there really are no driving rhythms at all. Most of the beats you will hear are either (for lack of better terms) bouncy or slower, almost reggae influenced. This is because we simply haven't written an algorithm to generate those more driving (read: rock) rhythms.

As of this post, we have two rhythm techniques - 'Bounce' and 'Vibe'. These are both classes that derive from a Technique base class that houses some of the common properties and methods. Let's concentrate on the latter of these two.

As mentioned above, the Vibe technique was really written to generate Reggae influenced drum patterns. If you know anything about Reggae drumming, you know that those patterns tend to fall into one of three groups: One Drops, Rockers, and Steppers. You can find all of the history associated with these riddims on the interwebs if you so desire, but what we will do here is briefly describe what differentiates these grooves from each other and how we seek to represent them in our code. For all of these, our approach to the hi-hat is to start with 8th notes and vary it from there by adding or removing a few, or perhaps a combination of the two. You really have a lot of wiggle room with the hi-hat.

The name One Drop comes from the fact that only one beat tends to be emphasized when playing rhythms of this type - the 3 (3rd beat in the measure if you are musically challenged and still reading this). This is usually executed with a kick, a rim shot or maybe both. You can play around with other rim shots occasionally, but the important thing is that the heavy emphasis is on the 3 and the 3 only. The Rockers beat adds emphasis on the 1 (making the 1 AND 3 important), and the Steppers beat emphasizes all four beats in the measure. Again, this is usually happening with the kick, but you can mix in rim shots at different points in the measure to put some sauce on there. Just for reference...

One Drop - Legalize It (Peter Tosh)

Rockers - Sponji Reggae (Black Uhuru)

Steppers - Exodus (Bob)

It's the interpretation of these guidelines that gets tricky. If we are employing the Vibe technique, we want the resulting beat to be somewhere in the neighborhood of one of the types described above, but we don't want it to be exactly the same all the time. It also needs to be reiterated that these are only guidelines - a drummer can of course do whatever he or she wants. So we try to guess where kicks, snares and hi-hats will fall using probability.

The kind of rhythm that will accompany a particular phrase is determined randomly when the phrase is built. Obviously if you were playing in a live setting with other musicians you would never work this way, but we are in the business of chance. If the type of beat to be generated is a One Drop, Steppers or Rockers beat, the Aleator knows to use the Vibe class to build it. The first thing we need to do when building any rhythm phrase is determine the number of kick, snare (or rim shot), and hi-hat notes that will be played. Within the Vibe technique, we use a normal distribution random number generator to get the kick total. A continuous generator is used for snare and hi-hat totals, but that's another story.

Normal distribution is just your standard bell curve:


In the case of the One Drop, we know that most of the time we are expecting a single kick per measure (on the 3). Now, within the Aleator, we actually count Reggae beats half time. This means that instead of counting 1 2 3 4 1 2 3 4, we count 1 & 2 & 3 & 4 &. Same thing, just a different way of counting it. That means we are expecting 2 kicks per measure, counting half time. Therefore, when calculating the kick total for a given phrase using a One Drop, we set Mu equal to 2. To allow for a fair amount of variation, we use .5 as our Sigma. Referencing the graph above, that's basically taking the green curve and moving the apex over to +2. We use our normal distribution object to retrieve a random integer along that curve. The result is that most of the time, there will be 2 kicks per measure but every once in a while there may be 1 or 3. In those instances the rhythm isn't a true One Drop, but really nobody cares. We use similar techniques to generate totals for all instruments and all phrase types across the application. Isn't music mathy!?
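
In code, that guess boils down to something like this (Python just for illustration - the real implementation uses a .NET distribution library, but the idea is identical):

```python
import random

def kick_total(mu=2, sigma=0.5, rng=random):
    """Draw a per-measure kick count from a bell curve centered on mu.
    Clamp at 1 so a measure never goes completely kickless."""
    return max(1, round(rng.gauss(mu, sigma)))
```

Most draws land on 2; the occasional 1 or 3 keeps the groove from being robotic.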

That's just to figure out how many kicks are going to be in a One Drop drum phrase, which is really the simplest of all the calculations we perform. We'll take a look at placing these kicks on the virtual staff in Part II - Revenge of the Beat.