Tuesday, December 04, 2007

Avatar



Here's a little ditty I made.

Saturday, November 17, 2007

Audio Arts Major Project

As mentioned previously, the game I chose was Nexuiz. First Person Shooters are my preferred video-game genre, so this experience helped me to make decisions throughout the creation of my assets. My primary aim was to create the sound effects for one playable level of the game - the level I chose is called "Diesel"- and make them sound better than the original sounds. I play-tested all levels before picking one, and chose Diesel because it had the best selection of weapons and also had jumppads, moving platforms and teleporters. There were no ambient sounds in this level, but they tended to overload the sound in other levels anyway.

I mostly used Audacity for sound manipulation, and Pro Tools for the recordings. I recorded myself for the "Triple Kill" and associated sounds, taking some creative licence from the Halo series' multiplayer voiceover. I originally wanted to record a female for the time-related voiceovers (1 minute remaining), however a heavy workload combined with unsuitably-voiced females resulted in no recording of that kind. My asset creation was quite long-winded, as most of the sounds that I created sounded different when played in-game; it sounds like there are strange EQs in the sound engine. To accommodate this, I would create a batch of assets, drop them into the sounds folder and test them out, often having to pitch them down or lowpass the high end to shave off annoying high pitches. My favourite sound would have to be the grenade launcher- the PVC pipe I used worked exactly how I had hoped, and the grenade bounces are much, much less annoying than the originals. See the assets list for more detailed creation information.

I had an issue with the creation of music, as my asset list was so full of more important sounds to create, and there was already music in the game files to keep me interested. If there had been time I would have created a heavy metal track (as I had originally hoped to do), however such an undertaking would be similar to our audio arts project last semester- and I cannot do three projects for the price of two! In any case, I found a royalty-free dance/dark-ambient song on soundsnap.com, and played around with it to make it fit the game. Aside from the music, the issues I had with sound creation were all Internet related. The cost of downloading quality sound files built up to the point where I had to just work with what I had, but really only the 'Hagar' gun and its associated files bore the brunt of that (I didn't do it). I approached a composition student to compose some music for the main menu, however they later became unavailable to do so.

Overall I am pleased with the result, and it certainly sounds a lot better than the original sounds (success!). I had great fun creating the sounds, especially the foley recordings. In retrospect I think the shotgun could have sounded a bit more shotgun-like, however the original was absolutely horrendous. In the end I simply tried to make sure the sounds did not get annoying over time, and this was often a reason for going back to the drawing board with a sound.

Creative Computing Major Project

My creative computing project went through several transformations during its development, but out of the other end came one of my most interesting creations- a Granular Feedback effect. If you think of how a delay line works, my patch works in a similar fashion to that, but with drastically different results. The concept is basically this- a microphone sends a constant flow of sound into a buffer, which is continuously recording in a 20 second loop (or however long you want). Using granular mathematics, small clips of the ever-changing buffer are played back over the top of one another. This playback is then re-recorded into the buffer, meaning the granulated sound will now be granulated again. What surprised me was how you can listen to a sound being recorded, then hear it granulated again and again until all that is left is a wall of sound, with the most prominent pitches humming along in a continuous note. For example, at the very beginning of the recording I drop my keys in front of the microphone. Initially there are obvious reflections of the sound, but within about 15 seconds you start to notice echoes and reverb appearing, and of course when this is re-granulated it proliferates through the soundscape. It is very analogous to the butterfly effect, as the slightest change in sound at the beginning can have drastic effects on the outcome 1 minute down the track. It is this cause-and-effect that gave my patch the name Granular Genesis, as you can pretty much listen to sounds reproduce, and even form a sort of equilibrium with each other. My patch and recording were put into the dropbox but not saved to my hard drive, so for now you'll have to use your imagination.
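For the curious, the core of the idea can be sketched in a few lines of Python. This is a rough stand-in for the Max buffer~/grain logic, not the actual patch- the grain size, grain count and feedback amount here are all made-up illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)

SR = 44100
BUF_LEN = SR * 20        # 20-second circular buffer, as in the patch
GRAIN = 2048             # grain length in samples (illustrative)

buf = np.zeros(BUF_LEN)
write_pos = 0

def process_block(mic_block, feedback=0.6, n_grains=4):
    """One audio block of the feedback granulator: play back a few grains
    from random positions in the buffer, then record the input PLUS that
    granulated output back into the buffer, so grains get re-granulated."""
    global write_pos
    n = len(mic_block)
    out = np.zeros(n)
    win = np.hanning(GRAIN)
    for _ in range(n_grains):
        start = int(rng.integers(0, BUF_LEN - GRAIN))
        g = buf[start:start + GRAIN] * win
        m = min(n, GRAIN)
        out[:m] += g[:m] / n_grains       # overlap the grains
    # Re-record: input + granulated output goes back into the loop,
    # which is what makes earlier sounds proliferate over time.
    for i in range(n):
        buf[(write_pos + i) % BUF_LEN] = mic_block[i] + feedback * out[i]
    write_pos = (write_pos + n) % BUF_LEN
    return out
```

Feed it successive microphone blocks and each one comes back out increasingly smeared across the whole 20 seconds- the "butterfly effect" falls straight out of the feedback term.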

My initial aim was for a granular version of 'I Am Sitting In A Room', however upon testing this it turned out to be fundamentally flawed. The issue resides in the randomness of the granular synthesis, and what gets played back when. My patch is constantly updating the buffer, which means that at any point a grain can be selected which is being recorded in a room at that very moment. This of course results in feedback, and not the good kind. Even worse, this feedback gets recorded into the buffer, then granulated out into the room and recorded again- you see where I'm going. The amazing results from my internal feedback loop were the nail in the coffin, so I made an executive decision to accept the 'not following preproduction' lost marks and centre my patch around what works best.

Speaking of what works best, during the last couple days of patch building I would often become sidetracked experimenting with different sounds in the feedback loop. One of the best outcomes was the singing voice- if I recorded myself humming the same note for 10 seconds, it would turn into a choir 15 seconds later. 1 minute later it would be a continuous pitch, with any variations of pitch in my voice smoothed out. As the buffer is always recording, I would then add in intervals, which would go through the same process and eventually sound like a never-ending chord. Also successful are plosive or percussive sounds. If you record a couple of mouth clicks, pops and squelches they permeate through the feedback and often result in a wall of noise- strangely relaxing noise.

Well, I'm aware I've spent the majority of this entry bragging about how cool it is, but it really is that cool. I think I'll try to turn it into a plugin, and see how well it works in that regard. I am in the process of re-acquiring the patch and recording, so watch this space.

I found a picture of the interface, but it is only the background image without the dials and sliders etc.



I should quickly discuss the issues I had. I've already mentioned the fundamental flaw in my initial proposal. I had one problem getting my patch to build into an application. The issue was with an aiff file that needed to be included: I would add it as a file in the build window, but when the build ran it could not find it. I tried this every which way, but then ran out of time. This was the only issue with the app building, as without that aiff file the panners would not work, meaning that all granulations are panned to zero; silence. If I could have solved this, the app would have built no trouble. I also initially had phase vocoding in the patch, but the strain on the CPU and the less-than-worth-it results meant I had to drop it.

Special thanks is due to Matt Mazzone for helping me out in various ways. We always seem to be the only two who see the sun rise on the due date.

UPDATE: I'm not going to use 7 MB of upload for something that has probably already been marked, so no MP3.

Thursday, November 01, 2007

Putting Things Into Perspective




I think Matt Mazzone just passed Perspectives in Music Tech.

Wednesday, October 24, 2007

AA2 - Week 11 - Soundtracks

CC2 - Week 11 - Panning

I'm not sure exactly how we're supposed to hand up a recording of a four-channel patch. I guess I could record the front and the back as separate stereo files, but I'm running out of online credit both at home and at uni.



I originally tried to implement multi-channel panning in my granular synthesis patch, but I could not get it to run below 85% CPU, so it kept chugging. I thus created a new patch, which allows you to run a stereo signal (or two entirely different signals) into it and pan each one within a quad speaker setup. You can also throw the sound to a random position, and even allow each channel to be continuously moved around the sound field in an entirely random manner. This was simple; the hard part was my implementation of "cat and mouse" and "opposing" reactive panning.



First, cat and mouse. Wherever signal 1 is positioned, signal 2 will go. How quickly signal 2 reacts is determined by the user, meaning you can set signal 1 to randomly move around while signal 2 'chases' it. Moving on, opposing reactive panning means that wherever signal 1 is positioned, signal 2 is as far opposite as it can get, e.g. if signal 1 is in the front left speaker, signal 2 will be in the rear right. This works particularly well with a stereo sound file, as the stereo field is constantly moving, while sounds that are centred in the file remain relatively still.
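Outside of Max, both behaviours reduce to a couple of lines. Here's a Python sketch of the position logic only (speaker positions as (x, y) coordinates in a square; the 'rate' knob is my user-controlled chase speed):

```python
import numpy as np

def chase(pos1, pos2, rate):
    """'Cat and mouse': signal 2 moves a fraction of the way toward
    signal 1's position each step. rate in (0, 1] sets how fast it
    catches up."""
    return pos2 + rate * (pos1 - pos2)

def oppose(pos1):
    """'Opposing': signal 2 sits diametrically opposite signal 1,
    e.g. front-left (-1, 1) maps to rear-right (1, -1)."""
    return -pos1

# Example: signal 1 jumps to the front-left speaker; signal 2 chases it.
p1 = np.array([-1.0, 1.0])   # front left
p2 = np.array([1.0, -1.0])   # rear right
for _ in range(10):
    p2 = chase(p1, p2, rate=0.5)
# p2 is now very close to p1
```

The actual patch does the equivalent with line~ ramps driving the quad panner, but the cause-and-effect is the same.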

Feel free to plug your own sounds into the patch, and it works very well as a bPatcher.

~PAT 9KB~

Thursday, October 18, 2007

CC2 - Week 10 - Delay



Nothing like a fresh pot of delay in the morning. This was relatively simple, as the tapin~ and tapout~ objects are logical and easy to implement. In fact most of my time was spent on the interface, and on creating perfect Pink Floyd loops. Speaking of loops, I made a humorous little sample mixer this week, which is included in this patch. Slide the "Song mix" slider right or left to mix between the wavetables.
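For anyone who hasn't met tapin~/tapout~: tapin~ writes the signal into a delay buffer and tapout~ reads it back some time later. A minimal Python sketch of that same feedback delay line (parameter values here are just examples):

```python
import numpy as np

def delay(signal, delay_samples, feedback=0.4, mix=0.5):
    """Basic feedback delay, analogous to a tapin~/tapout~ pair:
    write into a circular buffer, read back delay_samples later,
    and feed a portion of the read back into the write."""
    out = np.zeros_like(signal, dtype=float)
    buf = np.zeros(delay_samples)        # the 'tapin~' buffer
    idx = 0
    for i, x in enumerate(signal):
        delayed = buf[idx]               # the 'tapout~' read
        out[i] = (1 - mix) * x + mix * delayed
        buf[idx] = x + feedback * delayed
        idx = (idx + 1) % delay_samples
    return out
```

An impulse fed in comes back as a train of echoes, each one quieter by the feedback factor- exactly what the patch does to the Pink Floyd loops.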



~MP3 1.9MB~
~ZIP 2.29MB~
Sorry 'bout the big size, but them stereo wav's be big buggers.

Monday, October 15, 2007

AA2 - Week 10 - Ambience

She has beautiful 'eyes'...

Wednesday, October 10, 2007

CC2 - Week 9 - Processing - FFT

I did it. Did what? IT, motherlover. It took 3 hours of tutorial reading (which was like chewing on broken glass), then a further 7 hours of hardcore Max mania, and what do I have to show for it? Every grain in my granular synth patch can now be manipulated with its own phase vocoder. HORRIBLE trying to implement it, rather interesting results. Now there is an additional control for 'grain stretching', which takes each grain and (surprise!) stretches it for any amount of time you so wish. I plan to have this control automated by the next incarnation, so each grain is not only randomly selected from within a range, but has an individual length from quick to ultra slow. This will undoubtedly be used in my project, and I plan to use granular synthesis as one of the main... things.
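To give a feel for what 'grain stretching' means without ten hours of pfft~ wrangling, here's a deliberately simplified Python sketch. It stretches a grain by overlap-add alone- the real patch uses a phase vocoder inside pfft~, which also rotates bin phases to avoid artefacts, so treat this as the idea only, not the method:

```python
import numpy as np

def stretch_grain(grain, factor, win=1024, hop=256):
    """Toy time-stretch: windowed slices of the grain are re-laid at a
    hop widened by `factor`, so the grain lasts `factor` times longer.
    (A phase vocoder would do this in the spectral domain instead.)"""
    w = np.hanning(win)
    n_frames = max(1, (len(grain) - win) // hop)
    out = np.zeros(int(len(grain) * factor) + win)
    for f in range(n_frames):
        src = f * hop                    # read position in the original
        dst = int(f * hop * factor)      # write position, spread out
        out[dst:dst + win] += grain[src:src + win] * w
    return out

# A 4096-sample grain stretched to roughly twice its length
grain = np.sin(np.linspace(0, 40 * np.pi, 4096))
stretched = stretch_grain(grain, 2.0)
```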

Getting all the numbers into
the pfft~ in the poly~... Ugh.


The most difficult things... can wait until I have more time to list them all.

Other things to do:
Change the visual aesthetic - totally over it.
Fix the buzz that occurs on grain 0.
Find the comfortable limits of use - and place limits there.
Use better sounds.


Interior of pfft~, with buffer/record/index namer.
Names each grain's recorded sample.

I will attach the zip of the files, however the controls are not fully integrated into the patch and most of them don't have loadmess objects to initialise them. I'll fix this too, but I have stuff due on Monday that I haven't started. The file you want is bkp.grain.vocode.pat. If you can't be bothered figuring it out, here's an MP3 I made of it in full swing. Note the size ~MP3 2.75MB~

~ZIP 766KB~

Monday, October 08, 2007

AA2 - Week 9 - Game Audio Design (1) - Assets

This week we were asked to create 4 sound assets for our chosen game, so fingers crossed our preproductions go through. I decided to make sounds for a health pickup, a gun being shot, a voiceover (TRIPLE KILL!!!) and the sound of a jumppad. The associated articles were somewhat helpful, however as Christian informed us they were pretty simple, and I did not really learn anything new, just re-established prior knowledge. An issue that has come about is that I don't think my game has inbuilt reverb or other environmental effects, meaning such effects will need to be included within each of my assets. More in-depth descriptions of the processes I used are detailed in the asset.xls within the attached zip file.

The gun I chose to create the sound for was the "Crylink", as this had by far the most uninspiring sound effect in the game. Originally it sounded like somebody flushing a miniature toilet, so I hoped to give it a bit more of a punch by using watery sounds mixed with real gun sounds.

Crylink

I also made the sound for a medium health pickup, which was a bit more complicated than I expected. Considering there is nothing even remotely similar to a health pickup in the real world, imagining the sound was difficult. In a way I guess it should have been easier because I was free to use any sound and it wouldn't technically be 'wrong', however I wanted a sound that the player would look forward to hearing, making the health pickups all the more enticing. I ended up settling for an unobtrusive reversed bongo arrangement, which has quite a happy sound to it.

Medium Health


For the voice asset I recorded myself saying "Triple Kill", used in the game when a player kills three other players in quick succession. This was more 'same old' for me- with all the voiceover stuff I've done this year I'm beginning to run out of voices. I was either going to have a female voice (done by a girl of course, although my falsetto does sound a little feminine) or my clichéd SuperMegaTestosterone Man voice. For last-minute-rush purposes I used my own voice. I might start listening to women a little more discerningly, keeping an ear out for a good voice. I think her voice would need to be feminine but still strong and authoritative, kind of like "I'll slap you to death if you look at me again".

The last sound I made was for the jumppad, which is used in the game to jump twice as high as normal. It was a simple combination of a click, clunk & whoosh; the sounds of which came from the deadroom door and white noise.

~ZIP 123KB~

Wednesday, September 19, 2007

CC2 - Week 8 - MIDI and MSP

After much tribulation, here it is. I have gone a little further than just applying MIDI controls, and built a whole new concept of selecting sections of a waveform. The user can select a control device, which is used to change the size of the selection. This selection can then be shifted left or right along the wavetable using the pitchbend wheel. This pic shows how full on it was just to make it do this.



Although it took ages, the result is really cool and worth the hassle. Some of the other patch controls are also controllable by a MIDI device, however we all know how that works. Space bar now turns the patch on and off. There will be no huge MP3 of this week's task, as it would sound identical to last week, however here is a little snippet of what my new control scheme can do... cancel that, I can't get anything to record on the Macs. Why is there no option to record just the system sound? Nothing in Audacity or Peak, the recorder in Max has "unable to access..." errors, and I don't have any MIDI input devices for my laptop to do it on there. Grr. Here's the patch etc.
~ZIP 544KB~
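The pitchbend-to-selection mapping boils down to very little once it's out of Max. A Python sketch of the idea (the wavetable and selection sizes here are made-up examples, not the patch's values):

```python
def selection_from_bend(bend, sel_size, table_len):
    """Map a 14-bit MIDI pitchbend value (0-16383) to the start of a
    selection window of sel_size samples within a wavetable: bend at
    zero parks the window at the left edge, full bend at the right."""
    span = table_len - sel_size          # how far the window can travel
    start = int(round(bend / 16383 * span))
    return start, start + sel_size

# e.g. a 1000-sample selection shifted along a 44100-sample table
start, end = selection_from_bend(8192, 1000, 44100)
```

In the patch, sel_size is what the chosen control device changes, and the bend value streams in from the pitchbend wheel.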

Wednesday, September 12, 2007

Forum - Week 5 - Circuit Bending II

This week seems to have slipped through the cracks, as Jake and I seem to have forgotten to post about it. In any case, all of our toys had been broken, except for the phone, so we endeavored to permanently attach a pitch-changing potentiometer. Which we did, along with speaker wires and battery wires. Here's the picture:



We did try buying other toys, however they indubitably turned out crap, such as:



Which didn't even work. Although we did get some cool robot-head LED's from them. Good times!



Seb Tomczak. "Music Technology Forum - Circuit Bending II" Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 23rd August, 2007.

Forum - Week 6 - Physical Computing

Square wave in ya face! This week was my first experience with physical computing, and undoubtedly not my last. Controlling physical circuits with Max and vice-versa was good fun (when it worked), and it seems like there would be a huge range of possibilities when introducing a computer to a simple circuit.



The above video shows a Max slider controlling the data flow into the breadboard, which not surprisingly has a square wave sound (what with those 0's and 1's). The following video shows how we created the square wave sound without Max, using a potentiometer instead.



As you cannot see the circuit very well in the video, here's a photo of it.



The final exercise was the most interesting- controlling sliders in Max with voltage variations on a circuit. Jake and I did not get this working until 10 minutes after the lesson had finished, so unfortunately I have no video. This exercise sparked ideas of visual permutations dependent on voltage, which may even crawl into my forum instrument somehow...

Christian Haines. “Forum – Week 6 – Physical Computing.” Workshop presented at the Audio Lab, Level 4, Schultz Building, University of Adelaide, 30th of August 2007.

Sebastian Tomczak. “Forum – Week 6 – Physical Computing.” Workshop presented at the Audio Lab, Level 4, Schultz Building, University of Adelaide, 30th of August 2007.[1]

1. David Dowling "Forum - Week 7 - Physical Computing." Blog entry reference. Online http://www.notesdontmatter.blogspot.com/

Forum - Week 7 - Physical Computing 2

For "physical" computing there sure was a lot of sitting down.

This week we continued our physical computing capers, this time controlling one of our circuit-bend toys using Max. Our circuit bend toy is still the phone, as all subsequent toys we have purchased have not worked. We decided to control the battery terminal connection in the circuit using Max, and of course we ended up plugging a metro into the kslider so that the circuit would turn on and off rapidly (every 50 ms in the video).



Exercise 4 proved to be troublesome, as even though we followed the instructions directly, and under the watchful eye of Seb, it did not work. Every time something doesn't work it doubles the amount of time required, so from now on Jake and I will 1. Follow the instructions. 2. Repeat instructions if necessary. 3. Blog that result. So here is our first 'doesn't work' video. The circuit took a good 15 minutes to set up, so we were a little disappointed when it had no effect on the sound.



Christian Haines. “Forum – Week 7 – Physical Computing (2).” Workshop presented at the Audio Lab, Level 4, Schultz Building, University of Adelaide, 6th of September 2007.[1]

Sebastian Tomczak. “Forum – Week 7 – Physical Computing (2).” Workshop presented at the Audio Lab, Level 4, Schultz Building, University of Adelaide, 6th of September 2007.[1]

1. David Dowling "Forum - Week 7 - Physical Computing 2" Blog entry reference. Online http://www.notesdontmatter.blogspot.com/

CC2 - Week 7 - Sampling II

Continuing with my enjoyment from last week, I have gone all black and white in order to create an old-school granulation synthesiser- just like mama used to make. Those signalmeter~'s are soo retro...



As you can see, I have taken the knowledgeable face of Sigmund "Ol' Dirty Bastard" Freud and used it as a distraction from the rest of the patch. Genius, no? Jake and I used a different object to the prescribed "ranger" one, as this new one actually provides visual reference to the wavetable for range selection. A problem we encountered was envelope placement outside of the poly~ object- it just can't be done successfully. It would work very, very easily if function objects could be sent data straight from other function objects, however this is always unsuccessful. Before you say that it's possible, TRY IT. The function object is an absolute joke, and I hope we don't have to use it for much longer.



A last minute addition was my panning... thingy. As seen below, a random number between 0 and 1 is picked, and that number increases the volume of one channel and decreases the other proportionally. E.g. the number 0.80 is picked, so the right channel volume is multiplied by 0.8 and the left channel is multiplied by 0.2. The result of this is pretty good, as each instance in the poly~ has its own pan value set each time the instance occurs, and the grain plays at this pan position until the grain is finished.
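That per-grain pan is one multiply per channel. A Python sketch of the same thing (note this is linear panning, as in my patch; an equal-power panner would use square roots of the gains to keep perceived loudness constant in the middle of the field):

```python
import numpy as np

def pan_grain(mono, p):
    """Pan a mono grain with one random value p in [0, 1]:
    right channel scaled by p, left by (1 - p), exactly as
    described above. Returns an (n, 2) stereo array."""
    return np.column_stack(((1 - p) * mono, p * mono))  # (left, right)

grain = np.ones(4)
stereo = pan_grain(grain, 0.8)   # right at 0.8, left at 0.2
```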



Remember, a Freudian slip is when you say one thing but mean your mother...

Patch and wavetable files
~ZIP 540KB~

Sonic result
~MP3 670KB~


Christian Haines. “Creative Computing Week 7, Sampling II.” Lecture presented at Tutorial Room 408, Level 4, Schultz building, University of Adelaide, 6th of September, 2007.

Monday, September 10, 2007

Forum - Instrument Proposal

Instrument Name: Victoria

My instrument will take concepts created by Will and I earlier in the semester in regards to Victorian Synthesis. The instrument will primarily play itself, however there will be much involvement from the 'musician' to change tempo and component involvement.

The concept is a drum-machine-style conglomeration of rhythmically intertwined Victorian Synths (see Forum Week 1) that would hopefully provide an infectious tempo for the Forum improvisations. The overall size of the instrument is yet to be established, however I aim to fit it in a shoe box or two. As is required for the concept, I will be using copious amounts of aluminium foil, which always looks pretty.

I hope to allow other students' instruments to contaminate the 'circuit', either by running their output sound through my speakers (thus affecting tempo and rhythm) or by including my rhythmic electrical beats in their circuits.

Although it will take some 'hacking', I aim to control volume (and thus tempo) using the Arduino board and Max. What I presume needs to be done is to have the voltage of the circuit controlled by Max, which seems very possible.

I surmise that my instrument will be more aesthetically pleasing than aurally pleasing (what with all of the aluminium foil) which is where coloured lighting comes in - my instrument will act like a mirror ball!

Wednesday, September 05, 2007

CC2 - Week 6 - Sampling 1

What's this? My OWN work? How wonderful. I have figured out that the weekly Max tasks are a lot less tedious if I try to make them fun, i.e. base the whole thing on James Brown (rest in peace).



I created my own samples from James Brown songs (I enjoy creating loops for some reason) and the picture is just from Google images. Another way to make it fun is by not doing the poly~ version, as this doubles the time taken and total tedium in patch creation. I will probably do it eventually, but you know how it is.



I was tentative about putting white sliders in, but brown ones were too hard to see over the red (and I had to have red to match the clothes). Get Funky!

Sonic result
~MP3 885KB~

Patch files
~PAT 1.96MB~

[1] Christian Haines. "Creative Computing: Semester 2, Week 6. Sampling I." Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 30/08/2007.[2]

[2] John Delany. "Creative Computing: Semester 2, Week 6. Sampling I." Blog reference at http://johndelany.blogspot.com/

Monday, September 03, 2007

CC2 - Week 5 - Synthesis (2) - Vibrato, FM and Waveshaping.

The outcomes of FM synthesis seem to be far richer than additive synthesis- the most complex sound can come about so easily, sometimes in under a minute. I look forward to utilising this synthesis technique in my major project. I encountered some issues when trying to incorporate a 'function' envelope into the harmonicity ratio in the help file, as I wanted a choice between either a static value or an envelope. The solution ended up using a combination of gate and gate~, so only one of the controls can be used at a time. I apologise for the lack of interesting interfaces, but as you may have guessed this was all a little last minute.

FM synthesiser

FM synth GUI

FM synth help file


Here is the interesting sonic result.
~MP3 294KB~

And here's the patches.
~PAT 5KB~

Saturday, September 01, 2007

Wednesday, August 29, 2007

AA2 - Week 6 - FMOD

"Jet" Engine

Utilising the RPM tool in FMOD I have created a jet engine simulation. I used sound files from findsounds.com (not recommended for quality). Chopping up the wave files into loopable segments was easy compared to using my FMOD session on any computer other than the one I created it on. Even if I set the audio source directory to the correct place, save off and reload the session, it cannot find the files. To run the session again I had to load it on computer 8 in the Audio Lab, with the folders in their exact position from when the session was created. Maybe I'm missing something? In any case, I split up the sound of a jet into several areas:

1. Starter click
2. Turbine 1
3. Turbine 2
4. Jet engine warm up
5. Jet engine throttle
6. Jet engine full throttle
7. Jet engine top speed
8. Jet engine off throttle
9. Wind resistance

Event Editor window.

I would have liked to have done a 'sonic boom' sound for just before the top speed sound, however I could not easily find a suitable sound. The tutorial was helpful in some ways, but infuriating in others, such as: "37. Add a 'Volume' effect to each layer." - without stating that this should be done after clicking on the 'load' parameter. For this reason I did not realise that the volume change effects needed to be placed on the 'load' parameters, or even that there was a difference between selecting the rpm or load parameters. I did figure it out, but only after looking at the "example.fdp" and clicking around the "car" event editor window randomly.
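For anyone trying to picture what those volume effects on the parameter are actually doing: each layer covers a band of the rpm/load range, with its gain ramping up and back down across the band so that neighbouring layers crossfade smoothly. A Python sketch of the idea (the band edges below are invented for illustration- they're not my actual session values):

```python
def layer_gains(rpm, layers):
    """FMOD-style layer crossfading: each layer is an (lo, hi) rpm band.
    Gain ramps 0 -> 1 over the first half of the band and 1 -> 0 over
    the second, so overlapping bands blend into each other."""
    gains = []
    for lo, hi in layers:
        if rpm <= lo or rpm >= hi:
            gains.append(0.0)
        else:
            mid = (lo + hi) / 2
            if rpm < mid:
                gains.append((rpm - lo) / (mid - lo))   # fading in
            else:
                gains.append((hi - rpm) / (hi - mid))   # fading out
    return gains

# Idle, mid-throttle and full-throttle bands (overlapping on purpose)
bands = [(0, 3000), (2000, 6000), (5000, 9000)]
g = layer_gains(4000, bands)   # the mid band dominates at 4000 rpm
```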

Sound Definitions tree.

The stereo sound I added after the fact in Audacity, just to show off my mad skillz in stereoising a mono sound.

~MP3 1.02MB~

__________________________________________________

You've got to be kidding...

Monday, August 27, 2007

AA2 - Week 5 - FMOD

Aside from the (spectacular) car example, I'm having trouble seeing the use of looping the events and fading them together etc. Is there some other use for this function? I mixed together some vocal samples, which worked okay, but the only use I could see for it in a game is maybe for a dream sequence or something, where multiple layers of voice would be quite freaky. The pitch randomisation on the vocals reminds me of a Slipknot track. Those were the days.

~FMOD.MP3 400KB~

~FMOD FILE~ (Good luck playing anything without soundfiles)


My hierarchy, or lack thereof.


Crossfade mania


The reverb I set up

Wednesday, August 22, 2007

Forum - Week 4 - Circuit Bending: Toys

I have always wanted to mess around with toy circuitry, so this week's forum was great fun. The first toy was a Light Saber, which made sword sounds when a button was pressed. Initially Jake and I hooked up a potentiometer that ran through the touchpad and resistor, which allowed us to play back the sound extremely slowly.


Then we convoluted the signal with the LED, giving it a nice bleeping effect.


Finally, we gave it a soul and set it free.



The weekly assignment was to bend some circuitry on a toy of our own.

The Rap Car


Jake and I pulled apart a little rapping car, which proudly shouts the words "Uhh, You're a Dead Man!". Unfortunately the resistors on the circuit were so small we couldn't get anything to permanently attach to it, which severely limited our outcomes.

Circuit diagram


Nonetheless, we still conjured some weird and wonderful sounds. This was the first time we tried circuit bending - by which I mean we actually bent the circuit, which can be seen in the 'gradual sound speed up' video below.

Toy car before & gradual sound speed up


Jake and I were unsure of how we were actually creating the sounds in the following clips, as we did not have anything connected that we hadn't already tried before.

Tone created with circuit & strange 'boing' sound


Our hatred of the limitations of the circuit and of lo-fi toys in general meant we had no qualms in smashing the toy to bits afterwards.

Jake's distaste for toys & my distaste for toys

Tuesday, August 21, 2007

CC2 - Week 4 - Synthesis (1) - Additive, Tremolo, RM

This was definitely the most fun I've had with Max this semester- the elusive 'church organ' sound took up much of my time (to no avail). I made an additive synthesis patch, which of course uses the 'function' object as an envelope controller. The most difficult part was trying to figure out a way to make a primarily visual-based patch (re: the function object) into a poly~ object. After much deliberation (and subsequent complication) I managed to make a 16-or-so input poly-capable version, which is as much fun to use as it was to create (so, not much). Nonetheless, the gui version seems to work quite wonderfully, and can be seen in action in the .help file.
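Stripped of the function-object envelopes, the additive part of the patch is just a sum of sines. A minimal Python sketch (static partial amplitudes for brevity- in the patch each partial has its own envelope; the amplitude list below is an arbitrary example):

```python
import numpy as np

SR = 44100

def additive(f0, amps, dur=1.0):
    """Additive synthesis: sum harmonics of f0, the k-th partial at
    frequency (k+1)*f0 with amplitude amps[k]. Output is normalised
    by the amplitude sum so it can never clip."""
    t = np.arange(int(SR * dur)) / SR
    out = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
              for k, a in enumerate(amps))
    return out / max(sum(amps), 1e-9)

# A rough organ-ish tone: strong fundamental plus a few upper partials
tone = additive(220, [1.0, 0.5, 0.33, 0.25, 0.2])
```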


Additive synth unlocked

Additive synth poly~ edition.

Additive synth help.


Here are some samples of the sonic results.
~MP3 104KB~

And here are the patch files.
~PAT 6KB~

Monday, August 20, 2007

AA2 - Week 4 - Game Engine

Burnout Revenge uses the elusive Renderware engine. After watching Dave struggle to find anything (his game also used Renderware) and an hour or so of my own searching (you would think www.renderware.com would be a goldmine) I emailed the company only to receive no reply. Thus, the following analysis is purely my own speculation combined with the associated Wikipedia article (which in itself is bereft of content).



Renderware has been used for numerous games, seemingly due to its ease of implementation and consistently successful outcomes. The Grand Theft Auto series comprises the most popular games to use the software, though its implementation there has been criticised as relatively unsuccessful. The Burnout series uses the game engine to spectacular effect, particularly with Burnout Revenge on the Playstation 2, which has some of the best graphics ever seen on the (dated) console.



Most Renderware games that I have played (barring GTA) have had a certain Renderware 'sheen', where crisp resolution, minimal anti-aliasing and a smooth framerate are indicative of its use. With the advent of in-game surround sound on the XBOX console, the Renderware engine seems able to cater for a range of audio technologies.




Renderware has now fallen out of use, with the last few games using the engine suffering from the engine's limitations on the next generation consoles.

Wednesday, August 15, 2007

CC2 - Week 3 - Polyphony and Instancing

This week's exploration of instancing seems to be a very important topic when it comes to audio-rate data; the relief that the poly~ object allows the CPU would be very beneficial to the smooth use of the program. A bit of a brain teaser this week was ensuring that the output from the poly~ object did not 'red-line'. Clip on output, in other words. This was solved by dividing 1 by the number of instances, then multiplying the output by that number, e.g. 8 instances = each instance's amplitude is multiplied by 0.125. This ensured the optimal volume for any number of voices. I won't beat you over the head with pictures, as most of the patches look almost identical to these.
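The normalisation trick in one breath, as a Python sketch (n unit-amplitude voices summed and scaled by 1/n can never exceed full scale):

```python
import numpy as np

def mix_voices(voices):
    """Sum poly~-style voices and scale by 1/n so that n full-scale
    voices can never clip: 8 voices -> each effectively at 0.125."""
    n = len(voices)
    return np.sum(voices, axis=0) / max(n, 1)

# 8 identical full-scale sines would sum to a peak of 8.0;
# the 1/n scaling brings the mix back under full scale.
t = np.arange(1000) / 44100
voices = [np.sin(2 * np.pi * 440 * t) for _ in range(8)]
out = mix_voices(voices)
```

The cost, of course, is that a single sounding voice is much quieter than it needs to be- the 1/n gain assumes the worst case of all voices peaking at once.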

Inside bkp.cycle.poly.pat

Inside bkp.phasor.poly.pat

Phasor.poly help file.


Here are the patches.
~PAT 8KB~

This sound file is pretty plain, but you get the idea.
~MP3 82KB~

Tuesday, August 14, 2007

Forum - Week 3 - Breadboarding...

...only with a considerable lack of bread.

For something a little different, Jake and I ran my MP3 player through the ring modulator circuit. The result was quite good, however the video and sound were just typical mobile-phone quality. If only we had some water and a waterproof piezo, we could have had the whole shebang. During forum, Jake and I used the waterproof piezo as a speaker inside a plastic cup of water, and picked up the (Nine Inch Nails) vibrations with another piezo jammed underneath the cup. We then swapped the piezos' roles - the waterproof piezo became the microphone and the piezo underneath became the speaker. The latter worked better, as moving the underwater microphone around in the glass created a variety of timbral effects on the music (particularly when blowing bubbles through a straw). It was only fair that Reznor got a rest and Tool got a try, so this time we played 'Jimmy' through a ring modulator.
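For reference, ring modulation itself is simple to sketch digitally - just multiplying the input by a sine carrier. A minimal Python illustration of what the analogue circuit does (my own sketch, not the breadboard schematic):

```python
import math

def ring_modulate(signal, carrier_hz, sample_rate=44100):
    """Multiply each input sample by a sine-wave carrier - the digital
    equivalent of the analogue ring modulator on the breadboard."""
    return [s * math.sin(2 * math.pi * carrier_hz * n / sample_rate)
            for n, s in enumerate(signal)]
```

Running an MP3's samples through this produces the same metallic sum-and-difference sidebands we heard from the circuit.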


Dual potentiometers



Light sensitive potentiometer



Ring Modulated Tool

Monday, August 13, 2007

AA2 - Week 3 - Process and Planning


I have analysed a section from the XBOX game Burnout Revenge by Criterion. The Burnout series is known for its extreme realisation of speed and spectacular crashes, and Burnout Revenge is one of the fastest and most explosive incarnations. The music is inconsequential to the game, as it is simply a compilation of unheard-of pop-rock bands ("EA TRAX"), and as such I will leave it out. The beginning of the recording comprises main menu navigation sounds, including up/down selection, confirming a selection, changing car colour, etc. The race itself begins with a running start, so actual player interaction does not occur until a noticeable change in the severity of acceleration is heard (about 4 seconds in). The ensuing sound consists primarily of engine/boost noises (player, opponents and traffic), car damage/crashes (same again) and extraneous sounds (the environment whooshing past, slow-motion SFX, timer and score 'dings').



Special mention must be made regarding the sound when using the slow motion feature, which is used after crashing to steer your wreckage into opponent cars. The on-screen sound has a filter placed over it which makes it sound as if it has been covered with a blanket, while an avant-garde-style screeching metal sound effect is placed over the top. The sound is reminiscent of sharpening a kitchen knife, or perhaps a church bell tolling in reverse. This can be heard after the boost-then-crash that occurs at 1'30". I wish I could have recorded it in stereo, however I do not want to lug my XBOX all the way into uni on a bus for an assignment that isn't actually an assignment. I had to stop noting each different crash sound heard, as there is a ridiculously large variety. I recall that from Burnout 3: Takedown onwards (Revenge is #4) the developers had the backing of Electronic Arts as their publisher, so the in-game sound effects are mostly real recordings. They used real cars and smashed them up with bricks and axes and (of course) other cars, building up a library of sounds that could be layered at convenient points in the game. As most of us know, the perfect sound effect for an object does not necessarily require that object to make the sound (e.g. coconuts for horse hooves), however the use of real cars for the sounds of twisting metal and smashed windscreens etc. is very believable in Burnout Revenge.
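That 'blanket' muffling is essentially a lowpass filter. A minimal one-pole version can be sketched in Python (my own illustration of the idea, not Criterion's actual DSP):

```python
def one_pole_lowpass(signal, alpha=0.1):
    """One-pole lowpass filter: each output sample moves a fraction
    `alpha` of the way towards the input, dulling high frequencies.
    Smaller alpha = more 'blanket'. A sketch, not Criterion's code."""
    out, y = [], 0.0
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

# A sudden step [1, 1, 1] is smeared into a gradual rise:
print(one_pole_lowpass([1.0, 1.0, 1.0], alpha=0.5))  # → [0.5, 0.75, 0.875]
```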



The realisation of speed is helped by the 'whoosh'-ing that occurs fairly frequently during gameplay, triggered by passing traffic, opponents or significant landmarks such as bridges, trees or even road signs. In surround sound (or even stereo) the speed at which the whoosh moves past is analogous to the speed the player is moving past the object, which makes the extreme speed seem all the more treacherous.





~MP3 1.77MB~

~XLS 35KB~

I only did a couple of file names, as you get the idea pretty quickly. Of course, if this were real it would have all of the file names.

Wednesday, August 08, 2007

Forum - Week 2 - Piezo Party

I have kowtowed to popular culture - here are my YouTube videos. Ugh. I am having the most fun with the soldering iron - I fixed a bass guitar and created a lead for it. Huzzah, I hear you say, well my reply to that is I don't appreciate sarcasm. Oh, you meant it? Well aren't you just the model citizen, you've got my vote in the next council election.

Tuesday, August 07, 2007

CC2 - Week 2 - Signal Switching and Routing

..........................*sigh*.

After 6 hours I decided to stop, as I was approaching the point of extreme frustration. To 'add mute functionality' to the patches, the mute~ object needs to be sent into the patch from outside. This would technically not be adding mute functionality to the patch itself, as the mute~ object would sit outside it. I ended up encapsulating the interior of the patch into an embedded patcher object and running the mute~ into that, so all that was required from outside the patch was a toggle. I don't believe this was a very economical way of muting a patch, but it seems to be the only way to include mute functionality.

My other time-consuming (and as yet unresolved) problem was passing arguments to a bpatcher. I am aware that you place #1 etc. on an object within the patch, and that this gets replaced with the corresponding argument. I just wanted to send an hslider an argument, so I connected a message box to it with #1 written in it. Upon testing, the argument replaced the #1, however the number was not sent to the hslider. I queried Dave on this, and when he showed me his patch he had the same problem - the argument not being sent from a message box. I moved on to creating the GUI, on which I spent (and will subsequently spend) minimal time, considering the detrimental waste of time it was last semester.

I know we did not have to read the tutorials this week, however I ended up having to read them anyway to even remotely complete the task. I did not complete the two new objects, although the phasor~ etc. tutorials seemed to make sense. Actual implementation of the information into two new objects would add another 2 hours, so yeah. No.

~MSP Files .ZIP 5KB~


Monday, August 06, 2007

AA2 - Week 2 - Game Audio Analysis

Surprise, surprise.

The game I will be aurally exploring is Halo: Combat Evolved (2001), developed by Bungie Studios. Halo was a launch title on the original XBOX, and was also one of the first console games to use digital 5.1 surround sound in-game. The level I concentrated on was the beginning of level 4, "The Silent Cartographer", as the first 3 minutes of this level have quite intense layering of music and SFX, especially when heard in full 5.1. ~This audio recording (3.1MB)~ is of me playing the PC counterpart, with the sound quality set to maximum so that it almost sounds like surround sound (through headphones). ~This youtube link~ is a very amateur run through the start of the level, probably on the PC version (note the lack of water VFX, and player invulnerability. n00bs). A free PC demo of the level can be downloaded from Bill himself.

Aerial shot of the start point.


The level starts with a cut scene of dropships flying over an ocean, approaching an island. The music is a full orchestral score, with a female voice over, radio chatter and the sound of the dropship engines also heard. The camera changes to first person, revealing that you are on one of the dropships. As the dropships 'drop' you off on the beach, your allies open fire on the (alien) enemies, often shouting cheers of enthusiasm or shouts of panic and sometimes asking for assistance, depending on the ferocity of the enemy. The enemies are equally chatty, shouting alien obscenities ("Wort! Wort! Wort!") or screaming in fear as they see their squad depleted around them; Halo is noted for having 'over 3,000 unique vocal excerpts' (1).

First person view from the dropship


In this first section of the level there are 7 different weapons being used, all of which have distinctive sounds, such as the predictable combustion sound of the human pistol and assault rifle, or the electronic 'fizz' of the alien plasma weapons. The guns have mechanical and muzzle sounds, and the bullets themselves also have airborne and impact sounds which change depending on the type of bullet and what it is hitting. Explosions are frequent, with human and alien grenades being used by each side respectively. At the end of this initial fight, the music track finishes conveniently close to when the last enemy is killed, normally with an actual musical 'outro' rather than a simple fade-out. This occurs numerous times throughout the game, where a soundtrack for a particular action sequence will play until the objective is completed, at which point the music moves into the outro of the piece. Composer Marty O'Donnell has said that the score "could be dissembled and remixed in such a way that would give [it] multiple, interchangeable loops that could be randomly recombined in order to keep the piece interesting as well as a variable length." He also made "alternative middle sections that could be transitioned to if the game called for such a change (i.e. less or more intense)" (2).

In 5.1 all sound effects occur in real-time surround, so plasma bullets can be heard whizzing past, or team-mates shout from behind you and fire off rounds past your head. The use of 5.1 becomes a gameplay mechanism, as identifying enemy positions is much faster when you know not only that they are behind you, but their general direction in a 360° space.
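That direction-finding idea boils down to a relative-bearing calculation. Here is a hypothetical Python sketch (the function and its names are my own illustration, not Bungie's code):

```python
import math

def relative_bearing(listener_pos, facing_deg, source_pos):
    """Bearing of a sound source relative to the direction the player
    faces, in degrees: 0 = straight ahead, 90 = hard right, -180 =
    directly behind. A 2D illustration, not Halo's actual maths."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    absolute = math.degrees(math.atan2(dx, dy))  # 0 deg = +y axis
    # Wrap into the range (-180, 180] relative to the facing direction.
    return (absolute - facing_deg + 180.0) % 360.0 - 180.0

# A source due east of a player facing north is at 90 deg (hard right):
print(relative_bearing((0, 0), 0.0, (1, 0)))  # → 90.0
```

A 5.1 panner would then map that angle onto the speaker array, which is exactly the cue the player exploits.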

A ground-breaking (at the time) aspect of the sound in Halo is the 'SFX context mapping' (for lack of the real name), where a sound occurring in one environment will sound different to the same sound occurring in a different environment. The reverberation time and timbral quality of the reverb change depending on the size of the space, with small rooms having a very closed-in sound and gigantic rooms having huge reverb and echo qualities. The size and timbre of the reverb are processed in real-time, with actual room dimensions used in the calculations (1). In addition to this, the speed of sound is also coded, such that the further away a sound occurs, the longer it takes to reach your ears. Such effects help to increase the spatial realism of the game's environments.
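The speed-of-sound behaviour amounts to a distance-dependent delay. A minimal Python sketch (my own illustration, assuming the standard ~343 m/s figure, not Bungie's code):

```python
SPEED_OF_SOUND_M_S = 343.0  # dry air at roughly 20 degrees C

def propagation_delay(distance_m):
    """Seconds between a distant event and its sound reaching the
    listener - a sketch of the distance-delay idea, not Halo's engine."""
    return distance_m / SPEED_OF_SOUND_M_S

# An explosion 343 m away is heard a full second after it is seen.
print(propagation_delay(343.0))  # → 1.0
```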

NERD.

1. Ben Probert's memory of Australian XBOX Magazine.
2. Aaron Marks and Martin O'Donnell "The Use and Effectiveness of Audio in Halo: Game Music Evolved" http://www.music4games.net
3. Christian Haines. "Audio Arts: Semester 2, Week 2. Game Audio Analysis." Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 31st July, 2007.

Here's Some I Prepared Earlier...