Saturday, November 17, 2007

Audio Arts Major Project

As mentioned previously, the game I chose was Nexuiz. First Person Shooters are my preferred video-game genre, so this experience helped me make decisions throughout the creation of my assets. My primary aim was to create the sound effects for one playable level of the game - the level I chose is called "Diesel" - and make them sound better than the originals. I play-tested all the levels before picking one, and chose Diesel because it had the best selection of weapons and also had jumppads, moving platforms and teleporters. There were no ambient sounds in this level, but they tended to overload the sound in other levels anyway.

I mostly used Audacity for sound manipulation, and ProTools for the recordings. I recorded myself for the "Triple Kill" and associated sounds, taking some creative licence from the Halo series' multiplayer voiceover. I originally wanted to record a female voice for the time-related voiceovers (1 minute remaining), however a heavy workload combined with a lack of suitably voiced candidates meant no such recording was made. My asset creation was quite long-winded, as most of the sounds I created sounded different when played in-game; it sounds like there are strange EQs in the sound engine. To accommodate this, I would create a batch of assets, drop them into the sounds folder and test them out, often having to pitch them down or lowpass the high end to shave off annoying high pitches. My favourite sound would have to be the grenade launcher - the PVC pipe I used worked exactly how I had hoped, and the grenade bounces are much, much less annoying than the originals. See the assets list for more detailed creation information.
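In hindsight, a lot of that repetition could have been scripted rather than done one file at a time in Audacity. Just as a rough sketch (not what I actually used - the cutoff value, filter choice and folder path here are made up for illustration), something like this would lowpass a whole folder of wavs in one go before an in-game test:

```python
from pathlib import Path
from scipy.io import wavfile
from scipy.signal import butter, filtfilt

CUTOFF_HZ = 6000   # placeholder cutoff - in practice I picked values by ear, per sound

def lowpass_folder(folder):
    """Lowpass every .wav in a folder to tame harsh high end before an in-game test."""
    for path in Path(folder).glob("*.wav"):
        sr, data = wavfile.read(str(path))
        # 4th-order Butterworth lowpass, cutoff normalised to the Nyquist frequency
        b, a = butter(4, CUTOFF_HZ / (sr / 2), btype="low")
        filtered = filtfilt(b, a, data.astype(float), axis=0)
        wavfile.write(str(path), sr, filtered.astype(data.dtype))

lowpass_folder("sound/weapons")   # hypothetical path into the game's sounds folder
```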

I had an issue with the creation of music, as my asset list was so full of more important sounds to create, and there was already music in the game files to keep me interested. If there had been time I would have created a heavy metal track (as I had originally hoped to do), however such an undertaking would be similar to our project for Audio Arts last semester - and I cannot do three projects for the price of two! In any case, I found a royalty-free dance/dark-ambient song on soundsnap.com, and played around with it so it would fit the game. Aside from the music, the issues I had with sound creation were all Internet related. The cost of downloading quality sound files built up to a point where I had to just work with what I had, but really only the 'Hagar' gun and its associated files bore the brunt of that (I didn't do it). I approached a composition student to compose some music for the main menu, however they later became unavailable to do so.

Overall I am pleased with the result, and it certainly sounds a lot better than the original sounds (success!). I had great fun creating the sounds, especially the foley recordings. In retrospect I think the shotgun could have sounded a bit more shotgun-like, however the original was absolutely horrendous. In the end I simply tried to make sure the sounds did not get annoying over time, and this was often a reason for going back to the drawing board with a sound.

Creative Computing Major Project

My creative computing project went through several transformations during its development, but out of the other end came one of my most interesting creations - a Granular Feedback effect. If you think of how a delay line works, my patch works in a similar fashion, but with drastically different results. The concept is basically this: a microphone sends a constant flow of sound into a buffer, which is continuously recording in a 20 second loop (or however long you want). Using granular synthesis, small clips of the ever-changing buffer are played back over the top of one another. This playback is then re-recorded into the buffer, meaning the granulated sound will now be granulated again.

What surprised me was how you can listen to a sound being recorded, then hear it granulated again and again until all that is left is a wall of sound, with the most prominent pitches humming along in a continuous note. For example, at the very beginning of the recording I drop my keys in front of the microphone. Initially there are obvious repetitions of the sound, but within about 15 seconds you start to notice echoes and reverb appearing, and of course when this is re-granulated it proliferates through the soundscape. It is very analogous to the butterfly effect, as the slightest change in sound at the beginning can have drastic effects on the outcome a minute down the track. It is this cause-and-effect that gave my patch the name Granular Genesis, as you can pretty much listen to sounds reproduce, and even form a sort of equilibrium with each other. My patch and recording were put into the dropbox but not saved to my hard drive, so for now you'll have to use your imagination.
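Since the patch itself is missing in action for now, here is a rough offline sketch of the idea in Python/NumPy rather than Max/MSP. The buffer length, feedback amount and grain settings are placeholder numbers for illustration, not the actual patch settings:

```python
import numpy as np

SR = 44100            # sample rate (assumed)
BUF_SECONDS = 20      # length of the circular record buffer, as in the patch
BLOCK = 1024          # processing block size
FEEDBACK = 0.8        # how much of the granulated playback is re-recorded (placeholder)

def granular_feedback(mic, grains_per_block=2, seed=0):
    """Offline sketch of the Granular Genesis loop.

    mic : 1-D float array standing in for the live microphone signal.
    Returns the granulated output, roughly the same length as mic.
    """
    rng = np.random.default_rng(seed)
    buf = np.zeros(BUF_SECONDS * SR)   # the continuously recording 20-second loop
    out = np.zeros(len(mic))
    write = 0                          # circular write position in the buffer

    for start in range(0, len(mic) - BLOCK, BLOCK):
        block_out = np.zeros(BLOCK)

        # Play back a few short, randomly chosen clips of the buffer,
        # windowed and overlapped on top of one another.
        for _ in range(grains_per_block):
            g_len = int(rng.integers(256, BLOCK + 1))
            pos = rng.integers(0, len(buf))
            idx = (pos + np.arange(g_len)) % len(buf)
            grain = buf[idx] * np.hanning(g_len)
            offset = rng.integers(0, BLOCK - g_len + 1)
            block_out[offset:offset + g_len] += grain

        out[start:start + BLOCK] = block_out

        # Record the live input *plus* the granulated playback back into the
        # buffer, so the granulated sound gets granulated again - the feedback.
        rec = mic[start:start + BLOCK] + FEEDBACK * block_out
        idx = (write + np.arange(BLOCK)) % len(buf)
        buf[idx] = rec
        write = (write + BLOCK) % len(buf)

    return out
```

Feed a short recording in as mic and listen to out to get a rough feel for how a single sound smears into a continuous wash; a real-time version would also want some gain control so the loop does not run away.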

My initial aim was for a granular version of 'I Am Sitting In A Room', however upon testing this it turned out to be fundamentally flawed. The issue resides in the randomness of the granular synthesis, and what gets played back when. My patch is constantly updating the buffer, which means that at any point a grain can be selected which is being recorded in a room at that very moment. This of course results in feedback, and not the good kind. Even worse, this feedback gets recorded into the buffer, then granulated out into the room and recorded again - you see where I'm going. The amazing results from my internal feedback loop were the nail in the coffin, so I made an executive decision to accept the 'not following preproduction' lost marks and centre my patch around what works best.

Speaking of what works best, during the last couple of days of patch building I would often become sidetracked experimenting with different sounds in the feedback loop. One of the best outcomes was the singing voice - if I recorded myself humming the same note for 10 seconds, it would turn into a choir 15 seconds later. A minute later it would be a continuous pitch, with any variations of pitch in my voice smoothed out. As the buffer is always recording, I would then add in intervals, which would go through the same process and eventually sound like a never-ending chord. Plosive and percussive sounds are also successful. If you record a couple of mouth clicks, pops and squelches, they permeate through the feedback and often result in a wall of noise - strangely relaxing noise.

Well, I'm aware I've spent the majority of this entry bragging about how cool it is, but it really is that cool. I think I'll try to turn it into a plugin, and see how well it works in that regard. I am in the process of re-acquiring the patch and recording, so watch this space.

I found a picture of the interface, but it is only the background image without the dials and sliders etc.



I should quickly discuss the issues I had. I've already mentioned the fundamental flaw in my initial proposal. I had one problem getting my patch to build into an application. The issue was with an aiff file that needed to be included: I would add it as a file in the build window, but the finished build could not find it. I tried this every which way, but then ran out of time. This was the only issue with the app building, as without that aiff file the panners would not work, meaning all granulations are panned to zero - silence. If I could have solved this, the app would have built with no trouble. I also initially had phase vocoding in the patch, but the strain on the CPU and the less-than-worth-it results meant I had to drop it.

Special thanks are due to Matt Mazzone for helping me out in various ways. We always seem to be the only two who see the sun rise on the due date.

UPDATE: I'm not going to use 7 MB of upload for something that has probably already been marked, so no MP3.

Thursday, November 01, 2007

Putting Things Into Perspective




I think Matt Mazzone just passed Perspectives in Music Tech.