When Ableton first announced Max for Live, I knew this plugin/add-on would be a game changer. I had to get a copy as soon as possible.
For the uninitiated, Max for Live basically lets you use a simple graphical programming language, called Max/MSP, within Ableton Live. This lets advanced users program custom interfaces and controls that have access to every piece of information Ableton Live knows about the audio. The possibilities for extending the application's features are limited only by the programming skill and creative needs of the user. I used the software a little bit in school, but I haven't had the opportunity to program many of my own patches for any specialized function yet.
The following tutorial video shows how quick and simple some of these patches can be. Check it out:
A few weeks ago I made a remix of a song by a band that I’ve been a really big fan of for a couple years, Vibrasphere. You can check out the link above to get caught up with the song I’m about to describe and break down.
In this post I’m going to explain how I put together this arrangement, as well as some details on how I’m configuring Ableton Live to play around with the samples before I get to the arrangement stage.
I save a lot of time by playing around with musical ideas in the Session View before committing myself to an arrangement. After I create the arrangement I usually don't need to spend more than an hour or two making automation changes and other mixing tweaks, since I've been able to organize so much ahead of time.
If you haven’t heard the original song, you should check out this YouTube video of Wasteland:
Stems vs. Individual Samples:
All of the remix parts that I used can be purchased at the Vibrasphere music store for $6.80 US. I bought a whole pack of them because I think this band is really great! The samples are really well sliced, unlike some remix packs that I have bought. Generally what producers provide is a “stem” of all or some of the tracks but Vibrasphere has provided all the samples in their original form. In cases where effects have been applied, they have provided both a wet (effected) and dry version of the sound. For those of you not familiar with what a stem is I have provided a definition from Sound on Sound below. You can always find a handy list of technical definitions at the Sound on Sound Glossary.
STEMS: When mixing complex audio material it is often useful to divide the tracks into related sections and mix those sections separately before combining the whole. In mixing film soundtracks, the material would often be grouped as a dialogue stem, a music stem, an effects stem and so on. Each stem might be mono, stereo or multichannel, as appropriate to the situation. In music mixing, stems might be used for the rhythm section, backline instruments, frontline instruments, backing vocals, lead vocals and effects — or any other combination that suited the particular project.
The main disadvantage of stems is low versatility. If multiple layers make up a synth part and they are mixed down into a single stem, there is no way to separate those parts for editing later on. I also often notice that producers mix effects, such as reverb and delay, into the stem, leaving one entire drum or vocal track baked together. This limits your creative options unless you recreate and re-synthesize the sound from scratch. With this project I was able to re-synthesize the individual vocal samples, which allows much more freedom compared to working with a single vocal stem.
This is the mixer layout I used in Ableton.
Session View and saving time:
Recently I began working with a standard arrangement template that I borrowed from JazzMutant. While I don't own any of their products (Lemur or Dexter), I was interested to see what their Max4Live code looked like, since I had been interested in an OSC project a while back. It wasn't very useful from that standpoint since the code was impossible for me to decipher, but I did get a fantastic layout for most "electronic"-based songs, which I have begun using to lay out ideas in a playable format.
This basic layout has tracks for the following drum sounds: Kick, Clap, Hats, and additional Perc. There is a track for bass and four melody tracks that can play any of the synth sounds that I might want to play. I also have a "steam" channel that is always sending some of its signal to sends A and B, a reverb and a delay. This steam track provides a broadband hiss (like . . . steam) that I can use at any point as a transition sound. This is such a great component of the set that I often use it when DJing with Live, since it provides a nice sound and is very light on the CPU (but big on the dance floor!).
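If you want to approximate that steam sound outside of Live, broadband hiss is essentially white noise. Here's a minimal Python sketch of the idea; the function name and `level` parameter are my own, not anything from Live or my actual device chain:

```python
import random

def steam_hiss(num_samples, level=0.2, seed=None):
    """Broadband hiss: uniform white-noise samples in [-level, +level]."""
    rng = random.Random(seed)
    return [rng.uniform(-level, level) for _ in range(num_samples)]

# One second of hiss at a 44.1 kHz sample rate.
hiss = steam_hiss(44100)
```

In practice you would write these samples to a WAV file (or just use Live's own noise sources) and ride the channel fader into the reverb and delay sends for the transition effect.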
I have begun to make use of Scenes within Live so that I have a playable version of the song. This makes adding compatible parts really simple since I can just record the changes in the Session view and work with new sounds until something clicks. Check out the photo to see what the Session view looked like with my samples loaded in.
Using the Slice to New MIDI Track command to chop up the vocal
Vocal Sample as point of interest:
I really wanted to work with the vocal sample, and usually to do this I would load the sample into another audio application and start applying effects and edits. This time, rather than destructively editing the original sample, I loaded it into the Sampler instrument in Live so that I could re-edit the pattern using MIDI. This is easy to do with the "Slice to New MIDI Track" command that you can apply to any audio clip. Using the 1/16 note preset, it automatically creates 16 slices for every bar. You can then individually manipulate the MIDI notes to rearrange the playback. I also applied some distortion and EQ effects to certain slices. This is what provides the variation from the original vocal theme.
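The 16-slices-per-bar figure is simple arithmetic in 4/4: four beats per bar, four sixteenth notes per beat. As a quick illustration (the function and its defaults are my own, purely illustrative, not anything from Live), here is where each slice lands in seconds for a given tempo:

```python
def slice_times(bpm, bars=1, slices_per_bar=16, beats_per_bar=4):
    """Start time (seconds) of each 16th-note slice in a 4/4 clip."""
    seconds_per_beat = 60.0 / bpm
    bar_seconds = seconds_per_beat * beats_per_bar
    step = bar_seconds / slices_per_bar
    return [round(i * step, 4) for i in range(bars * slices_per_bar)]

# At 120 BPM a bar lasts 2 seconds, so slices fall every 0.125 s.
times = slice_times(120)
```

Once the slices are mapped to MIDI notes, shuffling the playback order is just a matter of moving notes around in the piano roll.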
I would love to hear your comments, thanks for reading!
A realtime performance of creating a Max patch, starting from zero. This is a lot like the final project in third-year computer music at Carleton University. The only difference is we had all semester to make something that complex. The other minor difference was that we had to make a MIDI note engine rather than an audio engine, but I'm sure this would pass as acceptable.
You have 6 minutes to build a Max patch and do a performance with it.
Start with an empty patch.
Only use the standard objects that are part of Max/MSP/Jitter.
Don’t use externals, pre-built external data files, help files, or anything of that kind.
This short performance was created for a Max Live Coding contest. The participant receiving the loudest applause from the audience would win.
This is a demo of some of the powerful stuff you can do with Max for Live, the new Max API for Ableton Live 8. It makes for some very unique and exciting human-computer interaction.
The video demonstrates a webcam plugin that tracks on-camera movement, quantizes it to pre-defined notes, and then snaps those notes to a preset musical scale. The result is an instant MIDI note generator controlled by your movement in front of a camera.
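That quantize-to-scale step is what keeps the output listenable, and the idea is easy to sketch outside of Max. Assuming the motion tracker emits a normalized 0–1 value (the names, root note, and range below are my assumptions, not the plugin's actual code), a Python version might snap that value to the nearest note of a C major scale:

```python
MAJOR = [0, 2, 4, 5, 7, 9, 11]  # scale degrees in semitones

def quantize_to_scale(value, root=60, scale=MAJOR, span=24):
    """Map a normalized 0..1 value to the nearest scale note (MIDI number)."""
    # Build the allowed MIDI notes across `span` semitones above the root.
    allowed = [root + octave * 12 + deg
               for octave in range(span // 12 + 1)
               for deg in scale
               if octave * 12 + deg <= span]
    raw = root + value * span       # un-quantized pitch
    return min(allowed, key=lambda n: abs(n - raw))
```

With `root=60` (middle C) and a two-octave span, any tracked movement lands on a C major note, which is exactly why even random waving in front of the camera sounds vaguely musical.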
Is it musical? Maybe not, but it certainly is cool, and has some definite applications for sound design.
I wonder what this does to the artistic merit of music. If a great song gets composed using any of this technology, it takes a lot of human design out of music. Theoretically, your robotic vacuum cleaner could compose a symphony while you're at work, provided you set up the software correctly beforehand.