Monday, December 01, 2008

Creative Computing Major Project

For my project I endeavoured to make a multiple-sample break-beat cutting program. The idea was that you could use it in a live situation to cut up and mix samples into a real-time composition.

"MultiCutter" ultimately did not reach its final form, however the concepts were all realised in some sense. The program uses GUI controls to change the values of BBCut algorithms, which in turn change the playback of the samples being cut up by each respective BBCut. I feel that the biggest success of my program was the efficiency of the GUI creation and BBCut instancing. The GUI slider, label and number box are only written once in the code, and are then reproduced for each BBCut control parameter; this process is repeated for each BBCut, so the slider, label and numbox are duplicated and assigned a parameter for every instance. The BBCut itself is also only written once, but is duplicated for each sample. This easily expandable environment would have eventually been used to change the GUI elements and the number of BBCuts depending on the number of samples chosen by the user.
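The duplication idea might be sketched like this (a hypothetical reconstruction: the parameter names, layout and actions here are invented, not the actual MultiCutter code):

```supercollider
(
var win, params, numInstances = 3;
params = [\tempo, \phrase, \amp];  // invented parameter names
win = SCWindow("MultiCutter", Rect(100, 100, 420, numInstances * 100)).front;
numInstances.do { |i|
    params.do { |param, j|
        var x = (j * 135) + 10, y = (i * 100) + 10;
        // label, slider and numbox written once, stamped out per parameter
        SCStaticText(win, Rect(x, y, 120, 18)).string_(param.asString);
        SCSlider(win, Rect(x, y + 22, 120, 18)).action_({ |sl|
            // here the value would be sent to BBCut instance i
            ("instance " ++ i ++ ": " ++ param ++ " = " ++ sl.value).postln;
        });
        SCNumberBox(win, Rect(x, y + 44, 120, 18));
    };
};
)
```

Each slider's action closure captures its own instance and parameter indices, which is what lets one piece of code address every control of every BBCut.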

I had two fatal bugs in my program; the worst involved trying to get each instance of BBCut to read an individual sample source. Each instance was assigned a number (under the argument xPos), which I attempted to replace with a letter so that BBCut could use it as a buffer address. Replacing the instance number with a letter was easy (I left that bit of code in comments), however it simply would not work in BBCut as a buffer address. This meant that each instance of BBCut would not load a different sound file. Replacing the buffer address with a constant (“g”) resulted in each BBCut playing the same sample. This is how I have left the patch, as it proves that each instance of BBCut is individually controllable by its respective GUI.
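The number-to-letter conversion itself is trivial in SuperCollider; something like this (a sketch of the idea, not the commented-out code from the patch):

```supercollider
(
var xPos = 2;                      // the instance number
var letter = (xPos + 97).asAscii;  // 97 is ASCII 'a', so instance 2 becomes $c
letter.asString.postln;
)
```

The harder part, as described above, is getting BBCut to accept that generated name as a buffer address.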

The other bug involved using a master tempo slider to control every instance of BBCut. Looking back, I would have needed to redesign the code so that the TempoClock was outside of the instance-creating do-loop.
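That redesign might look something like this (hypothetical, since the original loop isn't shown): create one TempoClock before the loop and hand it to every instance, so the master slider only ever touches the one clock.

```supercollider
(
var clock = TempoClock(2);  // one shared clock (2 beats/sec = 120 bpm)
3.do { |i|
    // each BBCut instance would be built here and passed `clock`
};
// the master tempo slider's action then only needs something like:
// { |sl| clock.tempo = sl.value.linlin(0, 1, 0.5, 4) }
)
```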

MultiCutter + Sound Samples : 370kB

Audio Arts Major Project

NOSFERATU

Sound and Music by Ben Probert



The segment of Nosferatu I chose to create sound and music for has five scenes: The village talking about the vampire, the first Knock chase scene, Ellen sewing, the second Knock chase scene, and the Orlock and Ellen scene. I based the sound design around the fusing of two approaches: old-sounding piano and modern sound design.

SFX

THE PIANO
To make the piano sound old I recorded a slightly out-of-tune piano and cut off the low and high frequencies using an equaliser, which resulted in a tinny sound reminiscent of an old recording on a vinyl LP. I also recorded the pops and scratches from the start of a real LP so that it sounds like somebody putting on a record at the beginning of the clip. I originally had the LP crackling throughout the entire clip, however it became too distracting.

THE CHOIR
The choir recording originally consisted of my own voice layered several times for each part, however the soprano part ultimately sounded too much like a male falsetto. I asked fellow Music Tech student Poppi Doser to sing the soprano and alto parts, which certainly helped with the falsetto issue. I was very grateful that she could reach the high B flat, as it is the pivotal part of the end of the scene. I filled out the ‘rest’ of the choir using a lot of reverb.


SCENE 1 : VILLAGERS DISCUSSING THE VAMPIRE
The sound effects in the first scene are fairly subliminal. For a later scene I recorded the sound of a table dragging across a hard floor, and for this scene I used some of the accidental sounds from the room, reversed to add an unnatural feel. There is also heavy stereo delay applied to the sound, which gave it a more swirling feel.

SCENE 2 : KNOCK CHASE PART 1
The obvious sound required for this scene is the footsteps. After much experimentation I decided that creating the footstep sounds for a crowd of 70 people is beyond the scope of this project, and the sounds I did create overwhelmed the piano. The sound I used is very much underneath the piano sound, except where there is only a single person running. For a couple of the Knock running scenes where he is the only person in frame I matched up footstep recordings one foot at a time.

SCENE 3 : ELLEN SEWING
When Ellen looks up from her sewing I introduced a raspy scraping sound (a table being dragged along a floor) combined with a deep growl (my own voice), which would later be connected to Orlock. I used the sound here to insinuate that Ellen was somehow connected to Orlock, however this would not become apparent until the final Ellen and Orlock scene.

SCENE 4 : KNOCK CHASE PART 2
Like the previous chase scene there were many footsteps to account for, which ultimately took a back seat to the music. As this chase is in the wilderness, I superimposed a recording of outdoor ambience including bird sounds. The very end of this scene uses a transitional sound effect I created at my work by hitting a large bin with a hammer. This sound was run through numerous reverbs until it sounded like a giant wave of sound. For the scene I only used the build-up of the sound, then cut it off suddenly to emphasise the sudden change of scene.

SCENE 5 : ELLEN AND ORLOCK
For this scene I went all-out with the sound design, as I wanted to change the feeling from action to suspense. The first sound that can be noticed is the Orlock sound mentioned previously, using a table dragging sound mixed with a low growl. I use this throughout the scene whenever Orlock is on screen, making it louder as the scene progresses. The table dragging sound is also used as a suspense device, as I mixed together many different recordings of the table being dragged at different severities so that the sound builds up over the entire scene. The main benefit from this sound is its sub-harmonic quality, which sounds particularly good through a sub-woofer. I also used a recording I made at work of a bunch of industrial-sized bins being towed by a little car. The rumble that these bins make can be heard from the fourth floor of the building, so it was particularly useful for creating suspense towards the end of the scene. It can be heard easily during the credits. Another sound I used was screams that were recorded in the stairwell of the Schulz building. The natural reverberation that occurs in the 12-storey room meant all I had to do was reverse the sound, so you can hear the reverb before the sound itself. I used this most prominently as the lead-in to the choir section.

MUSIC

SCENE 1 : VILLAGERS DISCUSSING THE VAMPIRE
For this scene I created a heavy, unmetered piano part that uses chromatic lines to mimic speech. This is to supplement the superimposed text that occurs when the villagers are discussing the vampire. The left hand is playing an ostinato between D, G# and D. This tritone hides any true tonality, and when combined with the chromatic ‘vocal’ lines in the right hand it creates a very uneasy feeling.


SCENE 2 : KNOCK CHASE PART 1
The first chase scene uses a low repeated D pounded in a constant rhythm, with a clichéd horror melody played in octaves with the right hand.

There is also a small three-note motif played, which is later established as the Knock theme. The end of this scene finishes with a repeating high D octave, ultimately landing on a low cluster chord to coincide with Knock jumping off a roof. The high D octave is in fact a segue to the next scene.

SCENE 3 : ELLEN SEWING
The Ellen Sewing scene begins with a haunting melody starting on D. The intervals minor 2nd and minor 3rd (also a diminished 4th from the root) were chosen to express that despite the scene being a fairly relaxing moment, there was still tension regarding events occurring elsewhere.


SCENE 4 : KNOCK CHASE PART 2
The previous scene quickly leads to the introduction of the second Knock chase scene theme. A simple six-finger pattern is played on five notes.


This then fades into the same pattern three octaves lower, forming the start of the chase scene. The 6-note pattern moves across several chords, generally only changing two notes at a time. The Knock theme is also played over the top of the pattern in the second half of the chase scene.

SCENE 5 : ORLOCK AND ELLEN
The final scene is a much more heavily composed piece, and the changes in the composition generally mimic changes in the clip. I included voice for the climax as the piano simply wasn’t able to bring the scene to life. While doing so certainly takes a step away from the ‘old piano’ concept, I feel it was a necessary step to properly realise the scene, and still falls under the banner of ‘modern sound design’. Following is the complete score for the Orlock and Ellen scene, sans the sound effects.

Orlock & Ellen Sheet Music 137kB


Monday, October 27, 2008

Blog Crash

Finally fixed the template; it turns out Ripway.com (where my pictures are stored) was incorrectly registering download bandwidth and maxing out my limit every day. I sent an email to Ripway support and within an hour they had fixed the problem. Not bad for a free service!

Thursday, October 09, 2008

Forum | Week 9 | Student Presentations III


Today the remainder of the 3rd Year students presented to the Music Technology horde. I was first, presenting my failed MaxMSP patch from Semester 1 2007, Fantastical Metal. The patch seemed to work well (in its own way), but I would have liked to present my SuperCollider patch from last semester so as to round off a 'My Failures' theme.

Luke's SuperCollider patch was aesthetically pleasing to listen to, especially the combined organ sounds. The end was a little abrupt, and I think it may benefit from a nice thumpy kick drum throughout the piece.

Will presented his Max patch from last year, which we didn't really get to listen to. His environment sounds for a video game were well done, however they seemed to pale in comparison to the Half-Life 2 SFX.

Dave's patch obviously took a lot of effort to create, and the inclusion of drum beats and random samples helped to 'keep it from being boring'. I like the idea that the patch sounds fairly different each time it is run.

Last was Jake, who played us his SuperCollider patch from last semester. This was a 5.1 channel industrial ambient piece, and was certainly intense and very enjoyable. Jake obviously has a good understanding of the 5.1 sound field, with a 'spawning' effect perceivable throughout.

Thursday, September 18, 2008

Forum | Week 8 | My Favourite Things III


David "Dirty Harry" Harris was back in the driver's seat this week, and steered the bumbling cohort of disenchanted musicians (also known as Mus-Tech Students) towards the insanity of David Lynch. I have seen very few of David Lynch's films, but apparently Eraserhead was a good place to start. Well, so I was told, however I found myself developing a rather shocking headache throughout the movie. I know it's kind-of the point, but I found all of the scenes in which the conversations travel at a snail's pace so incredibly, incredibly boring. Silence is only funny for so long, then it just gets old.

From a sound-design aspect, the film was very interesting, but the tedium of the on-screen action (using the term loosely), particularly in the first half hour, was often too overwhelming to maintain my interest in any aspect of the film.


David Harris. "Music Technology Forum - Semester 2, Week 8 - My Favourite Things III." Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 18/09/2008.

Thursday, September 11, 2008

Forum | Week 7 | Student Presentations II


This week I had to bail on forum due to an over-active teaching practicum, a disease which I apparently contracted when I first started University. According to other students' blogs, it was student presentations. When I found out what Doug's MaxMSP project last semester was (a chord progression generator), I had a brainwave of what to do for my own presentation - my failed projects. I'll start with my Auto-Piano MaxMSP thingy I made semester 1 last year, then finish off with my craptabulous Rainforest generator I created last semester in SuperCollusion.

Can't wait!

1. Stephen Whittington. "Music Technology Forum - Week 7 - 2nd and 3rd Year Presentations." Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 11/09/2008.

Saturday, September 06, 2008

Audio Arts | Week 6 | Presentations


This week we presented the work that we had done thus far on our Film Sound project. I was the only one who didn't have something synced to the movie already, mainly because I didn't realise we needed one this early on. I played the musical ideas I have had on the piano, and informed the class of my plans to make the music sound as old as the film, meaning bad quality and inconsistencies. To be honest I believe this will be more difficult than just recording straight piano, as I am having to record to an aesthetic. The new Zoom portable recorders will no doubt come in handy when sourcing an old piano.

John and Dave are taking a more traditional approach to the sound design, using symphonic instruments and/or pounding bass-driven beats. Jake's approach is quite compelling, using mostly ambient sounds interspersed with pulsing rhythms. Luke has approached the project with a theme of 'aggravation', whereby the audience is made to feel uncomfortable at convenient parts of the film. I really hope this works out, as I would love to see (and feel) the results.

Luke Harrald. "Audio Arts 3: Semester 2, Week 6. Student Presentations." Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 2/09/2008.

Thursday, September 04, 2008

Forum | Week 6 | Electronic Music Organisation (EMO)

Baby crying competitions - Welcome To China!

This week several lucky students had the chance to present their collected Indian Aesthetic sounds that they spent all yesterday evening working on, and the rest of the class had to Guess The Emotion™. Most notable was Freddie's, where he explained how studies have proven that infant humans are able to communicate emotions through different types of crying. While he was playing the well-researched sounds, I, like most of the other people in the room, attempted to 'understand' what the baby wanted, based purely on the way it was crying. It was not until after all the samples were played that Freddie informed us that we were supposed to be writing how the cry made US feel, not what we thought the baby was feeling. I love a good twist at the end of a boring movie!

Mrmmrr mmmrrmr uhhmmmrrr

Sunday, August 31, 2008

Forum | Week 5 | My Favourite Things II


This week Stephen Whittington (1) played us a selection of works from Negativland. Since 1980, Negativland have been creating records, CDs, video, books, radio and live performance using 'appropriated' sound, image and text. Mixing original materials and original music with things taken from corporately owned mass culture and the world around them, Negativland re-arranges these found bits and pieces to make them say and suggest things that they never intended to.(2) The outcome of this seems far too 80s for my liking, and also has a little too much of a 'cool to hate' vibe. While I can tell a lot of effort went into the creation of their new DVD Our Favourite Things, ultimately I did not enjoy it. Perhaps this is due to my inability to accept pop art as a respectable creative field, but in the end I just did not enjoy the sound or the visuals or the combination of the two.

1. Stephen Whittington. "Music Technology Forum - Week 5 - My Favourite Things II." Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 28/08/2008.

2. Negativland biography, http://www.negativland.com/index.php?opt=bio&subopt=neglandbio

Saturday, August 23, 2008

Forum | Week 4 | Classical Indian Aesthetics

Here are some of my interpretations of Indian Emotions in a musical/auditory context.
HASYA (HAPPINESS/JOY/COMIC)
Major tonality
Consonance
Bouncy rhythm
Laughter

ADBHUTA (WONDER/REVERENCE)
Augmented chords
Elongated notes
High pitched notes

VEERA (COURAGE/HEROIC)
Resolution
Major tonality
Loud
Trumpets/horns

KARUNA (COMPASSION/LOVE/EROTIC)
Slow tempo
Saxophone
Sparse instrumentation
Kiss sound (etc.)

KRODHA (ANGER/FURIOUS)
Loud volume
Clustered pitches
Dissonance
Short note length
Yelling/shouting

BHIBASTA (DISGUST/LOATHING)
Uneven rhythmic patterns
Minor tonality

BHAYANAKA (FEAR/TERROR)
Fast/accelerating tempo
Quick succession of notes
Dissonance
Screaming

SHOKA (SORROW)
Minor tonality
Slow tempo
Melodic
Elongated notes
Crying

SHANTA (SERENITY/TRANQUILITY)
Erhu
Sparse notation
Minimal rhythm

Thursday, August 14, 2008

Forum | Week 3 | Student Presentations 1

This week some of the 1st Year students presented whatever they wanted.

The first couple of students played their Musique Concrete pieces from last semester, which all sounded kind of the same.

Another guy presented the music he created for a video game. It was a little clichéd, but there was good use of Taiko-style drumming. It would have greatly benefited from a better sound bank, as it was a little General MIDI sounding.

Another one played a recording he mixed and mastered. This was quite impressive, particularly with the recording of violin and sax, which I know from experience are difficult to get sounding good.

The following guy presented his Musique Concrete, which used found sounds from his house. The result was quite good, particularly the 'thunder' from the shed door.

Monday, August 11, 2008

Forum | Week 2 | David Harris and His Contrabulous Fabtraption

David Harris presented all the things that make him warm inside - Terra Rapture (his own composition) and a Schubert work for strings. His own piece made great use of sul ponticello, which requires the player to bow the string very close to the bridge. The sound is scratchy and airy, and is very effective when done by a quartet. I enjoyed Terra Rapture more than the Schubert, as it was more unpredictable and less contrived. This class made me pine for the Forums of old, where DH would play us music from his eclectic collection of LPs.

Stay tuned for a real Photoshoppery.

Sunday, August 10, 2008

CC | Week 2 | GUI (2)

All I wanted was to get some array-built SCSliders to control some things in a Pbind. Sounds easy enough, but none of the examples used SCSliders. It was either EZSlider or SCRangeSlider. Considering how whole-heartedly the EZSlider was disapproved of, I found it irritating that there was no explanation of how to implement the SCSlider into an array-built interface. I tried for ages just to get the example patches to work with SCSliders, but it wouldn't work. The SCSlider Help just uses the copypaste method. I'm sure there is some tiny syntax thing I have to do to make it work, but if I haven't been shown it and it isn't in the Help file then I can't know what it is.

I ended up having to use the EZSlider, which I (eventually) got working for controlling the speed of the Pbind. Then when I tried to control another part of the Pbind (\octave), the speed was reset to a value not even in my code. And now when I change the speed, the octave gets reset. And none of this actually appears on the interface, you can only hear it.
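For the record, one way array-built SCSliders can drive a Pbind is via Pfunc, with the sliders writing into an array that the pattern reads from each event (a sketch with made-up mappings, not the class examples or my actual patch):

```supercollider
(
var win, vals;
vals = [0.2, 0.5];  // normalised starting values for [speed, octave]
win = SCWindow("Pbind control", Rect(200, 200, 220, 80)).front;
2.do { |i|
    SCSlider(win, Rect(10, 10 + (i * 30), 200, 20))
        .value_(vals[i])
        .action_({ |sl| vals[i] = sl.value });  // slider writes into the array
};
Pbind(
    \dur, Pfunc { vals[0].linlin(0, 1, 1, 0.1) },        // higher = faster
    \octave, Pfunc { vals[1].linlin(0, 1, 3, 7).round },  // higher = higher
    \degree, Pseq([0, 2, 4, 7], inf)
).play;
)
```

Because the Pbind polls the array on every event rather than holding its own copies, the speed and octave shouldn't be able to reset each other the way described above.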

I have to say that MaxMSP sh!ts on SuperCollider for GUI. If making a crap-looking interface takes this long, I don't even want to think about making a good-looking one. Needless to say, I spent no time at all on the prettiness.

80sBass.RTF

80sBass.MP3 1.7MB


Only took a couple minutes...

Saturday, August 09, 2008

Audio Arts | Week 2 | Matrix Lobby Scene

The Matrix has been one of the most influential action movies ever made, most notably with its pioneering use of 'bullet-time'. I have probably watched it over 50 times in my life, and the famed lobby scene will always be one of my favourite moments in any film.

The lobby scene begins with the pile driver sound that started in the previous scene with Agent Smith and Morpheus. The presence of this sound in the previous scene seems to pre-empt Neo's imminent arrival in the building. The sound is linked to Neo's feet, giving a great weight to his footsteps. This is then joined by a fast hihat rhythm, increasing the tension of the scene as soon as you see the high-level security. There is a strange 'zip' sound as Neo puts his bag on the x-ray conveyor belt, which does not seem to fit the on-screen action.

When Neo opens his coat to expose his personal artillery to a security guard, a large whooshing sound enhances the significance of such a simple action. At this moment a pile driver hit echoes and the hihat stops, mimicking the shock of the security guard. Word count reached. Holy shit!

Tuesday, August 05, 2008

Creative Computing | Week 1 | Graphical User Interface (1)


Oh how I miss MaxMSP. I just want to add a picture for the background! Whatever. This week I attempted to create a GUI that could be used for sound file playback, looping and surround panning. Rather than copypaste the same code several times for each instance, I placed one set of code into a do function and let it populate the window itself. This also allowed for quick customisation of the interface - you can choose how many instances you want with just a single number change. The window automatically resizes to fit the number of instances. This also led to a whole bag of troubles regarding GUI object output, but whatever.
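The auto-resizing idea, roughly (a hypothetical sketch; the real patch's controls, sizes and names differ):

```supercollider
(
var n = 4, rowHeight = 60, win;
// window height is computed from the chosen instance count up front,
// so changing `n` is the single-number customisation described above
win = SCWindow("Surrounder", Rect(100, 100, 320, (n * rowHeight) + 10)).front;
n.do { |i|
    SCButton(win, Rect(10, (i * rowHeight) + 10, 80, 24))
        .states_([["play " ++ i]]);
    SCSlider(win, Rect(100, (i * rowHeight) + 10, 200, 24));
};
)
```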

I also ripped off Dave's background code to replace the metal one, but I couldn't get a good mix of colours, and the metal looks so cool anyway.



Considering how much effort is required, the outcome is quite lame.
Surrounder.RTF


Haines, Christian. "Creative Computing - Week 1 - Semester 2, 2008: Graphical User Interface (1)." Lecture presented at Tutorial room 408, level 4, Schultz building, University of Adelaide, 31st of July 2008.

Monday, August 04, 2008

Audio Arts | Week 1 | Nosferatu


Film sound is not an interest of mine, partly due to the 'Sans Video' we had to do in 1st year *shudder*. I understand it takes a great amount of experience to successfully produce the sound for a movie, and I am curious as to where this experience should be coming from for our projects. I have not done the Sound and Media elective or studied composition, so I am hinging on the weekly classes having some digestible/regurgitatible content. I am also bemused about what exactly we have to do, as there is no project documentation yet. It is strange that we have to choose the clip this early on, when we don't know the project requirements. For example, I would like to produce the sound for clip 3, however if we have to produce all the foley with found sounds AND produce our own musical compositions then this clip would be ridiculously complex to produce.

Regardless, an idea I have had is to produce the sound as if it were recorded with the archaic sound recording equipment available around the time of the movie. I feel that this 'lo-fi' approach would better suit the film than some grandiose Bourne Identity style production.

Again, I don't know if this would be allowed, so it's really just all speculation.

Also, I haven't ever seen Nosferatu before.

Harrald, Luke. “Audio Arts – Week 1 – Semester 2, 2008.” Lecture presented at the Audio Lab, Level 4, Schultz building, University of Adelaide, 29th of July 2008.

Sunday, August 03, 2008

Forum | Week 1 | Music and Society


Just like old times. Our first forum for this last-for-some-but-not-for-me semester somewhat centred around the role music plays in our society. Personally, music is generally used as a tool for boredom alleviation or experience augmentation. For example, today I spent 2 hours catching buses and trains from Modbury to Gawler, during which I listened to some 80's classics, such as:

Hall & Oates: Out Of Touch
Wang Chung: Dance Hall Days
Michael Jackson: Billie Jean
Go West: Call Me
INXS: Kiss the Dirt
Yes: Owner of a Lonely Heart
Aneka: Japanese Boy
Talk Talk: Life's What You Make It
Joe Jackson: Stepping Out
The Fixx: One Thing Leads to Another

I know what you're thinking: "What are you, gay?", but no, these songs were in fact a part of an hour-long MP3 which I ripped from the Grand Theft Auto: Vice City game disc. You may know it as "Flash FM", and it even features all the adverts and DJ banter. In this instance I was listening to music to alleviate boredom (and it should be noted that this music could not be used for much else). As for 'experience augmentation', this often occurs when in a moving vehicle (eg. RATM), walking (recently Earth, Wind and Fire for some reason) or dancing (Parliament or funk in general). Word count w00t!

1. Stephen Whittington. Music Technology Forum, Semester 2, Week 1. "Technology Theory & Culture." Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 31/07/2008.

Thursday, July 03, 2008

Creative Computing Major Project

I suppose late is better than never, except at university. My major project for Creative Computing is centred around a concept of environment and weather, in particular rain in a rainforest.

In the main composition, the rain that builds up and then subsides was the original concept for the program, and is the main controller of thematic material, namely drums and bass (both natural inhabitants of a rainforest).

The animal sounds were downloaded from www.a1freesoundeffects.com and http://www.sound-effect.com/ for free. The low quality of the sounds meant I had to change the sample rate (from 8000Hz to 44100Hz) and use Noise Removal in Audacity. Once the files were played in a SynthDef with a reverb, the low quality was barely noticeable. I wanted to use Gverb on every animal sample, however this would crash the server after about 5 animals were initiated. I considered using FreeVerb, which would allow me to pan many sounds within a stereo field without panning the reverb, however the implementation took far too long and I eventually cut my losses and recorded the patch in sections. Due to such CPU/memory issues, I did not implement an automatic play-through code, as the server would inevitably crash.
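The dry-panned-sample-plus-shared-reverb idea might look like this (a sketch only; the synth names and the reverb bus number are invented, and the mix values are guesses):

```supercollider
(
// each animal plays its buffer dry and panned, plus a send to a shared bus
SynthDef(\animal, { |bufnum, pan = 0, amp = 0.5|
    var sig = PlayBuf.ar(1, bufnum, BufRateScale.kr(bufnum), doneAction: 2);
    Out.ar(0, Pan2.ar(sig, pan, amp));  // dry signal, panned per animal
    Out.ar(10, sig * 0.3);              // send to the shared reverb bus
}).add;

// one FreeVerb for everything, output centred so the reverb itself isn't panned
SynthDef(\sharedVerb, {
    Out.ar(0, FreeVerb.ar(In.ar(10), 1, 0.8) ! 2);
}).add;
)
```

Running one reverb on a summed bus instead of one Gverb per animal is what keeps the server from being overwhelmed after a handful of voices.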

Oh, and all soundfiles were originally mono (even the thunder).

The drum SynthDef was created by accident while trying to build a flute/pan pipe instrument. Utilising a nested sequence of Pseqs (based on a weekly task), I created a form of drum sequencer, a part of which includes a random order sequencer and a random beat sequencer. I particularly like the effect I created for this drum sound, as I am running a ClipNoise into a Klank, meaning the amount of resonance in the drum depends on how hard it is hit (ie how loud the ClipNoise is). Combine this with the drum sequencer, which has attenuable amplitudes, and the result is fairly cool.
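A minimal sketch of the ClipNoise-into-Klank effect (the frequencies and ring times here are invented, not the patch's values):

```supercollider
(
SynthDef(\klankDrum, { |amp = 0.5|
    var exciter, sig;
    // a short burst of ClipNoise excites the resonator; a louder burst
    // drives more energy into the Klank, hence more resonance per hit
    exciter = ClipNoise.ar(amp) * EnvGen.kr(Env.perc(0.001, 0.02));
    sig = Klank.ar(`[[80, 121, 190], nil, [0.4, 0.3, 0.2]], exciter);
    DetectSilence.ar(sig, doneAction: 2);  // free the synth once the ring dies
    Out.ar(0, sig ! 2);
}).add;
)
// Synth(\klankDrum, [\amp, 0.9]);  // a harder hit
```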

By far the most taxing task was trying to create a realistic rain sound. While my result is not yet indistinguishable from the real thing, I feel I brought it fairly close. Dust, Dust and more Dust. Ironically.
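Layered Dust comes out roughly like this (the densities and filter cutoff are guesses, not my actual rain code):

```supercollider
(
{
    // three layers of Dust2 at increasing densities approximate raindrops
    var drops = Mix.fill(3, { |i| Dust2.ar(400 * (i + 1)) * 0.1 });
    LPF.ar(drops ! 2, 4000);  // lowpass softens the crackle into a wash
}.play;
)
```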


My score is as good as they ever are, basically just showing a timeline of events.

If I had more time/better organisational skills, I would have created a cricket and frog synth, as this seems to be the main thing missing in the mix. You can imagine my state of mind after working on it for far too long in one sitting, all while listening to rainforest ambience - relaxing sounds are not the best stay-awake technique. Oh how I wish I still drank coffee.

A big shout out to my main man SuperCollider Help, who pulled me out of a dark time in my coding life and carried me to safety. Did I say shout out? I meant middle finger. David Cottle's book was very useful, as were the weekly tasks, so an actual shout out to them.

My composition is best heard with speakers capable of producing low frequencies well. Or with a subwoofer. Doof doof squawk.
Composition: MP3 4.53MB

My SuperCollider code is in this order: Ocean breeze (background noise), Ambientator (ambient synth), Rain, Conga (feat. Drum Synth, Reverb Synth & Drummer), Bass (feat. Bass Synth & Bassist) and finally Animals (about 9 sections, includes Thunder, the king of all animals).
Rainforest Code: RTF 27KB

You do not have to download the animal sounds to use my patch, unless you actually want to play them. Don't, they aren't very interesting. You can hear them in the above composition anyway.
Animal Audio: ZIP 2.3MB

My patch in layman's terms. John Layman. Hmm, if he exists he should write one of those 'for Dummies' books.
Program Note: PDF 38KB

Thursday, June 26, 2008

AA Major Project

CHOIR

I recorded a choir that is based in Murray Bridge, consisting of 9 singers and a pianist. I had a microphone and set of headphones for every person, which was a feat in itself.
'May It Be' MP3 3MB

I have decided I don't like the sound I achieved through mastering, and will remaster it when creating the choir's album. Overall too much compression, the piano sounds like arse and some singers are too loud (which was on purpose at the time, so as to hide the less talented ones). I think I should've stuck with a music genre I have done before, instead of trying something completely alien to me (Alien as an adjective? And alliteration!? Awesome!). It was still a great experience, and I look forward to doing a more formal choir recording sometime.

UPDATE: Here are the tracks I will be giving to the choir, including a remastered 'May It Be'. I think I've figured out the trick to these sorts of recordings- a light touch is always best. Light EQ, light limiter, light compressor. If you only download one, make it 'Yesterday'.

Sweet Georgia Brown MP3 1.4MB
Splish Splash MP3 2.2MB
Yesterday MP3 3.3MB
May It Be MP3 3MB




BAND

The band I recorded was called Save And Exit- yes, named after the computer task. Despite this they aren't nerds, and perform an interesting take on pop rock, with a slightly heavier edge. The recordings went pretty smoothly, in particular the singer only took a couple of takes to get each track.

The first song is probably the best mix I've done so far, however if I had more time I would turn the vocals down so the UltraMaximiser wouldn't squash the crap out of everything when the vox combined with the backing vox. Also I would like it slightly less compressed, however it was fun seeing how loud I could get it. Overall I didn't spend enough time mastering, but the results are still serviceable.
Song 1 MP3 4.5MB

The second song from the band is a little less well presented, but still 'does the job'. I'm finding that the more I mix, the more I am able to hear what needs to be mixed. Which is a good thing I guess. This track in particular ended up having the bass too loud, mostly due to the use of AmpliTube over the top to 'meat' it out a bit. I do however like the guitar tone I got for the bridge - it was exactly what I was trying to achieve.
Song 2 MP3 3MB



Just an update, the band took the ProTools sessions and gave them to the guy who produced their last couple tracks. Nice.

Wednesday, May 28, 2008

Forum - Week 9 - Masters of Music Technology


Seb Tomczak presented his work on electronics and of course his work in the Milkcrate sessions. Something I have found strange about Milkcrate is the seemingly exclusive invitation list, which (perhaps unintentionally) restricts the attendance to the 2005 graduates. It is unfortunate that such an interesting concept will not be passed on to future generations of EMUers.

Darren Curtis updated us on his research into medicinal music, space music, vibrational research and... other stuff. The other stuff involved ancient buildings and their auditory properties, the most interesting being the temple that has a bird call "stored" within its design, which can be recalled with a simple clap. I would have thought this concept had been thoroughly explored already considering the advanced technology available today, however it seems to be a rather untapped field of exploration.

Tuesday, May 27, 2008

Forum - Week 8 - Peter Dowdall and Sound Engineering


This week took me back to the glories of first year, where Forum involved industry professionals presenting their work and experience to us. It was a most welcome flashback, particularly with Peter being so experienced and successful in the industry. His work for Pepsi's Super Bowl advertisement exemplified the exorbitant amount of money and effort that can be required for a simple 30-second advert. It was also a little disconcerting, as the complete commercialisation of music apparently results in the removal of many of the musical or lexical normalities that we as students have been trained to pay special attention to retaining. In the Pepsi example, there were basically no dynamics of sound, with horrific over-compression present throughout, and some of the singing had the ends of words cut off just to appease the music. While the money would be lovely, I would feel as if I had sold my soul should I ever participate in such a blatantly non-musical music industry.

Dowdall, Peter. “Sound Engineering and Session Management.” Lecture presented at EMU space, University of Adelaide, level 5, Schultz Building, 8th of May 2008.

Thursday, May 15, 2008

Forum - Week 7 - Tristram Cary

First there was nothing...

This week's forum was devoted to Tristram Cary, electronic artist and founder of the Electronic Music Unit we all know and love. Tristram's contribution to the world of electronic music is immeasurable, so I was glad we paid tribute to him after his passing. I thought it would be fitting to do a photoshoppery featuring the man himself in his rightful place, considering that's all I'm good at. I have since been informed the concept of this picture may be somewhat taboo, but until I'm asked to remove it I think it'll stay up.

Whittington, Stephen. “Tristram Cary.” Workshop presented at EMU Space, Level 5, Schultz building, University of Adelaide, 1st of May 2008.

Wednesday, April 30, 2008

Forum - Week 6 - Vicki Bennett and the Art of Sound Collage



I found Vicki's work quite engaging. Mash-ups are a little old hat nowadays, but her development of the genre shows how no horse should be left unflogged. Despite the length of the piece played, the class remained fixated on the musical progression, listening for a recognisable song, melody or voice. In particular I spotted Marilyn Manson's voice, which was somewhat difficult to discern considering it was just a raw recording of his voice with no effects, unlike the majority of his recordings.

It was also good to have David Harris back in the mix (so to speak), and I hope we get some more subjugation to Harrisian values throughout the year.


Forum, Week 6, “Vicki Bennett and the Art of Sound Collage.” Workshop presented at EMU Space, Level 5 Schultz building, University of Adelaide, 10th of April 2008.

Monday, April 14, 2008

CC3 - Week 6 - Something or other.



Interesting that we find out the blog is now worth 40% of our overall grade, six weeks into the semester. Sure, it's extra incentive - next semester. I've got a funny feeling week 2 of these holidays will be jam-packed with SuperfunhappyColliding.

Sunday, April 13, 2008

AA3 - Week 6 - Production "Outside The Square" (3)


Sidechaining is not something I have had a use for, perhaps because I never really knew what context to use it in. For this week's exercises, I used some samples from the Mac library. Each MP3 has an untouched sample followed by the... touched... sample...

1. I tested out sidechaining a gate, so I used a percussive sound (trumpet honks) to control the gate on a continuous sound (a bass synth). After boosting the Threshold and extending the Hold time, I had added a pseudo-bassline to the trumpets.

2. I extended this idea to an actual percussion sound (a drum loop) and used the same bass as before. I used the low pass filter on the sidechain controls so only the lower sounds of the drum loop would open the gate for the bass. While this worked, it sounded a bit dodgy.

3. I then used a (good sounding) drum loop to control the gate on a didgeridoo sound. Only sounds below 89Hz opened the gate, meaning only the kick drum had any effect. I had to play around with the ratio, release and threshold, but I eventually made it sound like someone was actually playing the didgeridoo rhythmically. Finally, I swapped the gate for a compressor, and made the didgeridoo turn down dramatically on each kick drum hit. The result is very Daft Punk, and I can't wait to find a use for it.
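Although I did all this in the DAW, the same gate-then-duck idea can be sketched in SuperCollider, whose Compander UGen takes a separate sidechain (control) input. This is a rough sketch of the concept, not the actual plugin settings I used; the sources and threshold values here are made up:

```supercollider
// Sidechain ducking sketch: a kick-style impulse train ducks a sustained bass.
// Compander's first argument is the signal to process; the second is the
// control signal that drives the gain changes.
(
{
    var kick, bass, ducked;
    kick = Decay2.ar(Impulse.ar(2), 0.005, 0.2) * SinOsc.ar(60);
    bass = Saw.ar(55, 0.3);
    ducked = Compander.ar(bass, kick,
        thresh: 0.1,
        slopeBelow: 1,     // > 1 here would gate instead
        slopeAbove: 0.1,   // heavy ducking on each kick hit
        clampTime: 0.01,
        relaxTime: 0.1
    );
    (ducked + kick).dup
}.play
)
```

Swapping slopeAbove and slopeBelow flips the effect between the ducking of exercise 3 and the gating of exercises 1 and 2.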

BenAAwk6.ZIP 712KB

Wednesday, April 09, 2008

CC3 - Week 5 - Sound Generation (0)




This is not a markable entry. This week has thrown me about quite a bit, so I guess I'll have to take another 0. I had spent a while creating ambience patches that "demonstrate (my) understanding of unit generators", then reread the planner only to spot that every patch must use either mouse or envelope control. This might have been okay for retrospective implementation if MouseX/Y actually worked in PsyCollider. I doubt I'll have the time to use the Macs until Friday afternoon (MusEd and cello work due Thursday, working Thursday night, MusEd due Friday).

I guess planning and prioritising are the two main skills that one must possess to be successful at University, so I don't doubt it's my own fault. I may have to scale back the amount of (paid) work I'm doing, as I have been working all Saturday, Sunday and Monday, then Thursday nights as well. Also I've been falling asleep around 5-6pm while I'm trying to do homework, which tends to wreck my study time. Right now I'm writing this at 2am after waking up on the couch with PsyCollider still running. Maybe it was the ocean sound I was creating...
// Ocean Breeze

(
{ // Main controller
    a = LFTri.kr(
        freq: 0.2,
        iphase: 3,
        mul: LFNoise1.kr(0.3, 0.3, 0.4),
        add: 1
    );

    // Create noise and stereoise
    b = PinkNoise.ar([a, a]);

    // Sweep the filter's centre frequency according to "a"
    c = Resonz.ar(b, a * 150 + 250, 0.8);
}.play
)
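For what it's worth, had MouseX behaved in PsyCollider, the planner requirement could probably have been met with a small variant of the ocean patch, putting the cursor in charge of the filter sweep. A sketch only, untested on that setup:

```supercollider
(
{
    var a, b, c;
    a = LFTri.kr(0.2, 3, LFNoise1.kr(0.3, 0.3, 0.4), 1);
    b = PinkNoise.ar([a, a]);
    // MouseX sweeps the resonant centre frequency from 250 to 2000 Hz
    c = Resonz.ar(b, MouseX.kr(250, 2000, \exponential), 0.8);
    c
}.play
)
```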

Saturday, April 05, 2008

AA3 - Week 5 - Production Outside the Square (2)



This week Luke and I teamed up to experiment with the Antares Auto-tune plugin. I have used this quite extensively in the past, so we tried to use it out of its intended context. Note - my MP3s are the same as Luke's.

For this first example I sang a glissando line from low to high, then used Auto-tune set to C minor with the retune speed set to fastest. The result is more humorous than it is useful.
OverTune MP3

The next example took the previous concept a step further. I sang a guitar solo, then we added Auto-tune with the same settings as before and ran it through Amplitube to add distortion. The result is a fairly believable-sounding guitar solo.
Lead Vocals MP3

Finally we tested out auto-tune on my cello. I purposefully did not tune it beforehand, so the results would be more obvious. Auto-tune was set to D minor with medium retune speed. Pitch-correction can be heard on the second note, which means the G string was probably out of tune.
Celloooo MP3

Grice, David. “Production Outside the Square." Lecture presented at Studio One, Level 5, Schultz Building, University of Adelaide, 1st of April 2008.

Friday, April 04, 2008

Thursday, April 03, 2008

Forum - Week 5 - Pierre Henry



Music Technology Forum has almost always been an engaging and thought-provoking medium for technology-based talk; however, this week was quite disappointing. Watching a 90-minute video on Pierre Henry is not what I would call a successful use of two hours, particularly because all of the second and third years would already know who he is and what he did. I understand that Henry helped develop Music Technology into what it is today, but that is seriously all we need to know. How he did it, using now-outdated technology, is rather irrelevant in the context of Music Technology today.

Whittington, Stephen. "Music Technology Forum - Week 5 - Pierre Henry." Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 3rd April 2008.

Wednesday, April 02, 2008

Forum - Week 4 - Student Presentations 2



Due to the complex nature of my presentation I got mine out of the way first. While segregating myself from the speakers (and thus the audience) seemed like a good idea, I wasn't prepared for how dehumanising it is to talk to a computer screen. Hearing everything I say repeated back half a second later didn't help either. In any case, my patch didn't crash, so I shouldn't be complaining. I am glad that no one asked what mark I got, as it would have been a bit disconcerting to the first years.

Luke was up next, showing off his impressive work for the Fringe and his new band. I love the sound of his band; they are obviously heavily influenced by the likes of Godspeed You! Black Emperor. Freddy then unleashed his major project from last year. I'm not sure what I did wrong for mine, but if this got a distinction I must have missed the point entirely.

Matt used the opportunity of a captive audience to present his latest Fresh FM job applications. I don't listen to the radio, so I guess it wasn't really my 'bag'. Finally Douglas presented a screen video capture of a live rendition of his major project. Much like Freddy's, the use of a guitar was prevalent.

Whittington, Stephen. "Music Technology Forum - Student Presentations - Week 4." Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 27th March 2008.

CC3 - Week 4 - Sound Generation

I decided to try my hand at additive synthesis again; this time I created a violin. Considering how long it took to create, the code itself doesn't look too impressive. Luckily the sound is quite convincing, although it does sound exactly like a synthesised violin. As with any attempt to mimic real life on a computer, realism stems from imperfection. A computer-generated face just doesn't look real without hairs, blemishes, discolouration etc. The same can be applied to musical synthesis. What my violin sound needs is imperfection: maybe a scratchy start, shaky volume as the string is bowed, less instant vibrato, the sound of a finger pressing a string against a fingerboard, and so on. The inherent problem with computers is that we have built them to be so mechanical, so inorganic, that forcing them to imitate real life requires vast amounts of human input. Is the outcome worth the effort? Couldn't I just learn to play a real violin instead?
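The kind of imperfections I mean could be bolted onto an additive patch along these lines. This is a minimal sketch (not my actual violin code) in which each harmonic drifts slowly in level, like an unsteady bow, and the vibrato fades in rather than starting instantly:

```supercollider
(
{
    var freq = 440, vibrato, partials;
    // Vibrato depth ramps up over one second instead of starting instantly
    vibrato = SinOsc.kr(5, 0, freq * 0.01 * Line.kr(0, 1, 1));
    partials = Mix.fill(6, { |i|
        var n = i + 1;
        // Each harmonic's amplitude wanders slowly around 0.3/n
        SinOsc.ar((freq + vibrato) * n,
            0,
            (1 / n) * LFNoise1.kr(3, 0.1, 0.3))
    });
    (partials * 0.2).dup
}.play
)
```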



CCwk4.RTF (No promises)
CCwk4.SC

Haines, Christian. "Creative Computing - Week 4 - Sound Generation." Lecture presented at Tutorial Room 408, Level 4, Schultz Building, University of Adelaide, 27th of March 2008.

Sunday, March 30, 2008

AA3 - Week 4 - Production 'Outside The Square'


This week we were to use unorthodox methods of production to create interesting and original sounds. Audacity was my weapon of choice, as most of you would have already guessed. The first sound (1n0te.mp3) I created uses a single pluck from a violin, which I then (in this order):
Compressed it,
Amplified it,
Slowed it down,
Time stretched it (a lot),
Duplicated and time-shifted it so that the time stretching artifacts are filled in,
Removed noise,
Added the original pluck to the start.

The second sound (GuitarShimmer.mp3) started as a snippet from a band that I had recorded in first year. First I ran it through numerous reverbs, both normally and in reverse. I then duplicated it and pitch shifted one up an octave, adding a bit more 'meat'.

The final sound (RevMetal.mp3) is something I have been meaning to try for a while but never got around to - reverse delay metalisation. The idea behind metalisation is that a delay set to a delay time of less than 0.2 seconds adds its own pitches, so I wanted to try this in reverse, much like reverse reverb. I recorded myself talking, then reversed it, added two lots of delay, then re-reversed it. For something so simple it certainly sounds cool. I had to layer the original recording quietly over the top, as the clarity of the words was somewhat lost.
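The "adds its own pitches" effect is comb filtering: a delay of t seconds with feedback reinforces frequencies at multiples of 1/t, so delays of a few tens of milliseconds or less become audibly pitched (20 ms gives a 50 Hz fundamental). The forward half of the trick might be sketched in SuperCollider with CombN; the reversing steps would still happen in an editor like Audacity. A sketch of the principle, not what I actually did:

```supercollider
(
{
    var src, metal;
    // A short noise burst once a second stands in for the speech
    src = WhiteNoise.ar(0.1) * Decay2.ar(Impulse.ar(1), 0.01, 0.3);
    // 20 ms feedback delay line: resonates at ~50 Hz and its harmonics
    metal = CombN.ar(src, 0.02, 0.02, 2);
    metal.dup
}.play
)
```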

BenAAwk4.ZIP 907kB

Grice, David. "Audio Arts: Semester 1, Week 4. Production 'Outside The Square'." Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 25th March 2008.

Tuesday, March 25, 2008

CC3 - Week 3 - SCOSC

I had originally been a part of the Tuesday night shenanigans for this week's CC, however I was unfortunately too busy becoming an uncle for the first time to my new niece Caitlin.

I still plan to complete the tasks, however not in the graded time frame.

Just give me 0 if you have to, I won't regret it regardless.

Forum - Week 3 - Student Presentations 1

This week we kicked off the student presentations with Sanad, Jake, Peter, Dave and Johnny "Kick A Hole In The Sky" Delany. Sanad presented a fantastic composition that used Iranian singing and found sounds to create a very cool piece. I wouldn't mind a copy of it, as it would certainly become a staple song on my MP3 player.

Jake probably presented something interesting, however Dave took the opportunity of a captive audience to unleash the beast, more commonly known as "aquaDementia" (at least that's what I would have called it). His MaxMSP patch, although complex, showed a promising amount of depth, perhaps even more than the average Reason sampler.


I have always been a fan of John's Perspectives composition from last year, so it was great to hear it again. If there was a subwoofer set up I would like to have heard his other piece "Towards The Centre", however the enforced stereo set up would not have done it justice. Finally, Peter kept us up-to-date with his latest Plogue Bidule experiments. Just the sight of those wavy cables gave me shivers.

Thursday, March 20, 2008

AA3 - Week 3 - Recording "Outside The Square"


This week I had originally organised to record some Tibetan bowls, however while setting up for the recording I realised I had forgotten to send a reminder email to the... bowlist. So instead I recorded myself on acoustic guitar, as I wanted to try out the dual figure 8 mic setup in the Michael Stavrou reading.

So obviously my first recording was exactly this. I played a blues scale and sang it (!) at the same time, with two Neumann U87s placed strategically so that each dead spot was aimed right where the respective unwanted sound source was. For demonstration's sake I have panned each mic hard left and right. I was impressed with the outcome, and I will no doubt use this many times in the future.

(word count!)

Recording 2 : My attempt at finding the 'tip of the flame' with a U87. Somewhat successful. I now realise the mic was still set to Fig8.

Recording 3 : Leaving the 'flame' mic in position, I placed another U87 behind the guitar body, Fig8. Did the Midside trick on this mic (invert etc.). Very nice and warm result. (Now called "Rearside".)

Recording 4 : (Pictured) Placed two Fig8 U87s directly in front of guitar, with dead spots aimed at guitar body on perpendicular angles. Panned left and right respectively, interesting result. Would work well as backing rhythm guitar. Apologies for the excess noise and breath sounds (something to consider in future recordings).

BenAAwk3.ZIP 454kB

Grice, David. "Audio Arts: Semester 1, Week 3. Recording 'Outside The Square'." Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 18th March 2008.

Tuesday, March 18, 2008

Forum - Week 2 - Blogging


Not the best way to break in the first years, I must say. I recall when I first started my course I seriously doubted the integrity of the Music Technology department, what with all the 'weird' electronic music. I didn't even know what 'avant-garde' was. While I knew that Stephen was taking the proverbial piss, I could also see that the first years didn't know whether to laugh at the jokes or cry because of the money they had spent on the course. Regardless, once the conversation turned from urbandictionary to the ethics of the weblog, the room once again resembled a typical Tech Forum.

An issue that I would have raised had the conversation not changed direction was the dilution of the purpose of the weblog. I imagine before teenagers got their grubby little hands on it, a blog would have been quite a respectable place for people to log their adventures in their chosen field so like-minded people could share the experience. I can imagine a scientist researching something, with other scientists in the field keeping up with the progress of the research and offering thoughts and tips. Alas, the ravages of youth have eroded much of the merit that was once possible, and now a scientist doing any such thing would be devaluing his/her work. IMO (sic).

The world's greatest blog. Press the 'stop' button before it loads all the pictures. But let it load some, otherwise nothing will make sense.

Whittington, Stephen. "Music Technology Forum - Blogs, Assessment & Student Discussion." Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 13th March 2008.

CC3 - Week 2 - Introduction to Supercollider (2)


This week was slightly easier than last week. Exercise 1 involved embedding an if function within the 'no' answer of another if function. Exercise 2 meant adding an additional equation to discern the frequency deviation. Exercises 3 and 4 required copy-pasting from the class notes. Ex. 5 meant changing the for loop from Exercise 4 to a forBy, allowing a customised step size. Ex. 6 used the switch control, and Ex. 7 was just Ex. 6 with an added modulo and different names.
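For reference, the forBy and switch constructs from these exercises look something like this. Generic examples rather than my actual answers:

```supercollider
// forBy: like for, but with a custom step size
forBy(0, 20, 5, { |i| i.postln });   // posts 0, 5, 10, 15, 20

// switch: branch on a value, with an optional default function at the end
(
var x = 2;
switch(x,
    1, { "one".postln },
    2, { "two".postln },
    { "something else".postln }
);
)
```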

As I completed this in PsyCollider, I saved it in .sc format, which I know is recognised by Mac.

BenCCwk2.sc

Haines, Christian. “Creative Computing – Week 2 – “Intro to SuperCollider (2).” Lecture presented at Tutorial Room 408, Level 4, Schultz building, University of Adelaide, 13th of March 2008.

Friday, March 14, 2008

AA3 - Week 2 - Multi Micing (2)


Working with surround sound is something that has interested me for a while, and considering Gricey can only listen to our results in stereo, I attempted to mix the recording in 'Virtual Surround'.

I teamed up with Luke this week, first to record in the Space using the infamous Decca Tree setup. For the recording, I entered the room from 'behind the tree', walked past it on the left and sat down at the piano directly in front. After playing a bit, I got up, walked past the tree on the right, and exited the room at the back right. The left, centre and right channels were panned respectively. For the rear left channel, I duplicated the track, inverted one copy and panned them left and right. The phase effect is somewhat analogous to hearing a sound behind you, so I simply eased off the pan on one of the tracks, which moved the sound to the rear left. I then repeated this for the rear right channel.
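The phase trick boils down to playing a channel against its own inversion on opposite sides, then biasing the pan. In SuperCollider terms it might look like this; a sketch of the idea only (SoundIn here is a stand-in for the recorded rear channel, not the actual ProTools session):

```supercollider
(
{
    var src;
    src = SoundIn.ar(0) * 0.5;   // stand-in for the recorded rear channel
    // Signal on the left, its inversion on the right: the out-of-phase
    // content reads as 'behind'; easing the pan off-centre (-0.3) then
    // shifts the image toward the rear left
    Balance2.ar(src, src.neg, -0.3)
}.play
)
```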

The second mic setup was a Digance endeavour, in which we placed the microphones evenly around the room facing into the centre. I then proceeded to run in circles around the space, playing Aphex Twin on my mobile phone.

Overall I think the concept of recording in surround is flawed; however, my virtual surround experiments seemed to work alright. If the recordings were made normally (close micing) and then mixed in virtual surround, I believe the effect would be a lot more... effective.

AA Week 2 .ZIP 1.0MB

Grice, David. "Audio Arts: Semester 1, Week 2. Overview of 5.1 microphone configurations." Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 11th March 2008.

Wednesday, March 12, 2008

Forum - Week 1 - "The Synergy Project"

"Synergy" to me carries the same weight as "Synesthesia" - not much. I have found such blatant word-dropping to be a tactic that the more weird art nuts use to try to establish themselves as pioneers of an emerging industry. I find a striking similarity to this kind of faux self-actualisation within the musical genre of death metal, where bands use track titles like "Lavaging Expectorate of Lysergide Composition" in an attempt to seem educated, whereas the track itself will probably only have two notes in it. I also intentionally avoided spelling it Synaesthesia - ooh, let's add an 'a' so people will think of encyclopaedia!


All in all, the theme of collaboration was touched upon several times, mostly when the artists were not self-promoting. Ross Bencina was definitely the most engaging, yet ironically spoke the least about collaboration. While his life story was illuminating, it dimmed in the bright lights emanating from the Reactable videos. Oh, if only we could get access to one. That would be absolutely maaad.

"Her name was Deborah Polson..."



Bencina, Ross. "The Synergy Project." Forum presented at University of SA – City West Campus, Sir Hans Heysen Building, Level 3, Lecture Theatre HH308, Friday 7th of March.

Gardiner, Matthew. "The Synergy Project." Forum presented at University of SA – City West Campus, Sir Hans Heysen Building, Level 3, Lecture Theatre HH308, Friday 7th of March.

Polson, Deb. "The Synergy Project." Forum presented at University of SA – City West Campus, Sir Hans Heysen Building, Level 3, Lecture Theatre HH308, Friday 7th of March.

Plumley, Fee. "The Synergy Project." Forum presented at University of SA – City West Campus, Sir Hans Heysen Building, Level 3, Lecture Theatre HH308, Friday 7th of March.

Rackham, Melinda. "The Synergy Project." Forum presented at University of SA – City West Campus, Sir Hans Heysen Building, Level 3, Lecture Theatre HH308, Friday 7th of March.

Wilde, Danielle. "The Synergy Project." Forum presented at University of SA – City West Campus, Sir Hans Heysen Building, Level 3, Lecture Theatre HH308, Friday 7th of March.

Tuesday, March 11, 2008

CC3 - Week 1 - Introduction to SuperCollider (1)


An inauspicious start to the year. With such a vast amount of computer language terminology and logic to absorb in one (very busy) week, I'd be lying if I said it was fun. So-called "help" files aside, I have managed to gain enough understanding to somewhat fluently create mathematical equations using SuperC, to the extent that was covered in class. As for the other set tasks, "putting words into sentences" was a little vague so I assumed the sentences could span across lines in the post window, and "printing an input string multiple times" was seemingly out of reach, despite the 45 minutes spent.

My first impression of the actual sound-creating example codes was surprise, as such a small amount of code is required for some very interesting sonic results. Unfortunately, after I tried writing my own 'functions', I can now see that even a small amount of code can take hours, and I assume creating something that is aurally pleasing would add even more hours of trial and error to the mix.
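For the record, the "printing an input string multiple times" task turns out to be a one-liner once you know the do method (which I did not, at the time), assuming the string is simply typed into the code rather than read interactively:

```supercollider
// Post a string five times in the post window
5.do { "hello, post window".postln };
```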

BenWk1.rtf

Haines, Christian. "Creative Computing - Week 1 - Introduction to SuperCollider (1)". Lecture presented at Tutorial room 408, Level 4, Schultz Building, University of Adelaide, 6th March, 2008.

Sunday, March 09, 2008

AA - Week 1 - Stereo Micing


This week was stereo micing, and rather than show off what I know I decided to venture into uncharted territory (ie. make it up as I go along). Jake and I recorded piano due to the ease of access of the Steinway Grand. All the results and pictures are in the .zip at the bottom of this entry.

We started with an XY positioning of two Neumann KM84s, the difference being we placed them above the pianist's head. The result was fairly analogous to the sound a pianist would hear, but is pretty boring to listen to. Note: one of the KMs had considerably more noise than the other- I have sent a support form already.

Next was what I considered the complete opposite of the previous recording, a spaced pair of KM84s placed at either front corner of the piano and pointed into the body with the lid closed. The result is much nicer than the last, with a much healthier stereo field represented.

We then moved on to using Neumann U87s. First we simply used a spaced stereo pair as I had never actually tried it before. The outcome ended up a little lacklustre, particularly in comparison to some of...*

Week 1 Audio Arts (.ZIP, 1.35MB)

Grice, David. "Audio Arts: Semester 1, Week 1. Overview of stereo recording for live performance." Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 4th March 2008.

______________________________________

*(200 word count reached, feel free to disregard the following)... the more tricky micing techniques possible. I find it does not really capture the sound of the piano accurately, and feels somewhat fake.

We then elaborated on the Midside technique with the aptly-named "Bumside". This involved a U87 placed in the body of the piano, and another underneath the piano. I originally performed the 'duplicate, invert and pan' on the U87 in the piano body, however this made the mic underneath more prominent, meaning a loss of high end. Thus I simply recycled the process for the mic underneath, and the result was quite nice. One complaint I have is that having a mic underneath means the clunk of the pedal is much more prominent, so I probably would not use an omni polar pattern underneath again. (You can also hear my phone get a message right at the end of the recording- whoops!)

Finally we used the Midside technique for a room mic recording. I am surprised at how well this came out, the added ambience of the room really brightens up the sound. I imagine this would work very well when combined with the spaced stereo pair of KM84s, or even on its own in a more acoustically piano-friendly environment.

Tuesday, March 04, 2008

2008

New year, new blog template. I went with 'Green', apparently. In case anyone cares, the transparencies were done using 24-bit PNG pictures; easily the best thing since the automatic bread slicer. In a similar vein to Weimer, I have tried to keep clutter to an absolute minimum, and thus done away with the profile crap. Previous blog entries can be accessed at the bottom of the page; I might move this to the top at Christian's request.

I don't know about y'all, but I had totally radical holidays. That's right, radical.

Here's Some I Prepared Earlier...