Saturday 10 November 2007

2007 - Sem 2 - CC1 Major Project - Electroacoustic Performance

Project Plan
After doing the Integrated Setup assignments, I fell upon the idea of doing a version of a canonic piece. Whether it is a true canon I'm not entirely sure, but I would like to attempt using Live to record a guitar to a click (so it is in time); this loop will in turn be placed in Live and played against the guitar while it is still playing. The guitar will have its effects automated in Bidule by various 'links' made from various oscillators. The looping process will continue, say after every eight bars, with each new loop placed in Live and played against the former loops in real time. Eight bars will hopefully be enough time to get the loop into the right place in Live and have it play on time. A kind of rhythm between myself and the performer will need to be established to keep everything going for three minutes. Having Bidule save the guitar loops straight to the Live folder in use should keep things going smoothly and will have the loops quickly accessible for Live. Two separate record outs from Bidule will be needed: one for the guitar itself straight to Live, and the other for the main mixer that will record the whole piece. I'm thinking a score might be better, but since I am running out of time for this, I am thinking that sticking to a simple key like C major and playing quarter notes (maybe going to 16th notes as it progresses) will keep it simple; few notes should clash and this way will not complicate things.
-
Research
Creating "electroacoustic music" on a computer kind of defeats the purpose, I think. Using pieces like John Cage's Williams Mix as inspiration was rather difficult. The difficult part wasn't creating an electroacoustic piece, but rather creating the chance events or 'flukes of sound' that occur naturally while making pieces like this. A computer is rather locked in: we can adjust things rather easily after the event, we end up looking at a screen more than hearing sound, it is mostly keyboards and mouse clicking, and 'chance' sound events end up sounding rather faked.

To get around this 'fake' sound, I approached this recording a little differently. I had hoped to create a canonic piece in the classical vein by grabbing loops from a performer while he played and placing them in Live in real time while he performed against them at differing intervals. This proved difficult, as the loops were never in time: the computer (in Bidule) takes half a second or so to start recording, which put all the loops out of time. Also, the drum beat I was using to keep time for the player (and to keep my loop captures in sync) just sounded laborious, monotonous and, well, too in time.

Cage and his use of eight tracks of tape cut up into six themes would not have the same effect done today on a computer. This, for me, is the challenge of making electroacoustic music on a computer: the challenge to create music that does not sound faked.
Electroacoustic music is more a collection of aural variables than music, and it often seems to have some sort of visual element to it. If I was to sit down and simply listen to John Cage's work, I would get bored of it within five minutes (and that is stretching it), but if it was a live performance, seeing him loop, splice, set up and actually perform his piece would be much more interesting and entertaining. I guess the word entertaining is one that does not go too well with electroacoustic music. It is more art, I guess, with a focus on the performance aspect (as it is fun to create) rather than audience approval.
-
Composition Analysis
As explained in the Project Plan, my initial concept was to create a canonic piece based on playing loops back against the performer in real time. This proved difficult, as latency was an issue, as well as the basic timing in general. The latency setting was dropped down to 32 samples in Plogue Bidule but made little difference. I did not want pre-sequenced instruments for a soloist to simply come in and play a track over. The main aim was to have the musician playing guitar while I captured sections of his performance, manipulated them in either Live or Bidule and played them back against his ongoing playing, building the piece into a crescendo of timbral intensity. The first few times we used a pre-sequenced MIDI loop in Ableton Live, but the rigidity of it was off-putting. It was originally there mainly for me to count to for capturing loops. Although it utilised and demonstrated the MIDI sequence in Live made from cut-up pieces of guitar from the CC week 11 lecture, I decided it was best to drop it. It sounded good, but the loose feel I was after was not happening. A much more relaxed and sonically pleasing outcome was achieved when I decided to scrap the MIDI sequence and play with a freestyle feel. So after the eighth take we had something that both of us smiled about. The full take goes for half an hour; we were in a world of our own. I can't send you the whole thing, so I took the first five minutes and made that the piece for marking. Listening back to it, if I wasn't there recording it I wouldn't believe that this is one guitar 'technically' playing by itself, recorded in one take.
-
Program Note
Mushrooms Aplenty
Freddie May
5'02"

This was recorded with one acoustic guitar in one take. Using a simple score of "play the instrument in an unusual way every two or so minutes, but make sure it is different from the two or so minutes before it," I was able to capture loops, manipulate them texturally via adjustments to pitch, frequency or timing, and play them alongside the instrument, all in real time using a sequencer. The piece starts with the one guitar playing alone and eventually builds in intensity and grows sonically as more loops are taken and added to the original lone guitar. It is a spatial, ambient piece that grows over time into a densely layered audio experience that creeps up on you. Acoustic guitar playing provided by Chris Coventry.
-
-
-
Haines, Christian. 2007. “CC1 – Electroacoustic Performance.” Seminars presented at the University of Adelaide.

Live. 1999-2007. Ableton AG.
Plogue Bidule. 2001-2007. Plogue Art et Technologie, Inc.

2007 - AA1 Major Project - Environment Analysis and Spatialisation

The process. Well, it was a hassle with a capital HOFF! I had so much grief making artificial versions of these birds. The trouble, I see now, is that they are organic creatures; I believe mechanical noises would be easier to replicate. Stupid birds. Oh well, it is done now, sort of. I am not happy with the results, but something has to be handed in. In the end I took Christian's advice and took existing bird noises out of context from the recording I made. I settled for a goose. It kind of worked, but the sound still has 'goose qualities.' I used Pro Tools to edit it and twist it into a poor facsimile of the birds on my recording. I even tried revisiting the torn paper and Spear, without success. Bleh! Hmm, Bleh? Isn't Bleh that hot chick's cousin that what's-his-name bet he could screw for $50 on Drawn Together? Anyway, when I finished making sound effects I made a number of separate tracks in Pro Tools and lined up the SFX with the original recording. This gave me eight tracks with the sound effects on them, separated and ready to mix in Cubase. I bounced them as individual mono tracks, also running them through some mild reverb. This is why there is nothing filled in time-wise on the assets list: all the files are 1:38, as I didn't bounce them as short, individual files. Although I recorded outside, reverb is still needed to make something sound natural. I tried to make it sound as if it was outside by removing early reflections and keeping the decay under a second. I think it worked. It sounds like it is at night now to me. Kinda cool.
Time to mix in surround. Since all the tracks were pretty much done, it was a simple matter of panning and using different volumes and filters to simulate distance, i.e. rolling off the top end to make certain sounds more distant than closer sounds, which keep their tops. I am reluctant to use the term hyper-real to describe this work, but I guess I could, as the sounds made are exaggerated versions of the sound, albeit not exaggerated versions of the real thing, and hyper-real means a sound that over-emphasises the sonic effect. They could possibly be foley, as the sounds are made from other existing items taken out of their original context, although that would imply vision accompanying it; then again, perhaps not, as I am pretty sure the sound effects people working on radio dramas were called foley artists. Perhaps I will just stick with the term sound effects. It is a simple but vague term that can mean a lot of different things. For this piece the term will mean 'sounds created artificially using synthesis, or original sounds taken from their original context and edited and/or manipulated for an unrelated specific sonic purpose.'
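As a rough sketch of that distance trick (hypothetical code, not the actual Cubase moves; assumes a mono float signal), attenuate with distance and roll off the top end with a one-pole low-pass:

```python
import numpy as np

SR = 44100

def distance_cue(signal, distance):
    """Crude distance cue: quieter and duller the further away.

    signal: mono float array; distance: 1.0 = close, bigger = further.
    """
    gain = 1.0 / distance                   # inverse-distance attenuation
    cutoff = 8000.0 / distance              # more top-end roll-off when distant
    a = np.exp(-2.0 * np.pi * cutoff / SR)  # one-pole low-pass coefficient
    out = np.zeros(len(signal))
    y = 0.0
    for i, x in enumerate(signal):
        y = (1.0 - a) * x + a * y           # smooth away the highs
        out[i] = y
    return gain * out
```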
-
-
-
Haines, Christian. 2007. “AA1 – Environment Analysis and Spatialisation.” Seminars presented at the University of Adelaide.

Cubase. 2003. Steinberg Media Technologies.

Plogue Bidule. 2001-2007. Plogue Art et Technologie, Inc.

Digidesign. 1996-2007. Avid Technology, Inc. All Rights Reserved.

Monday 5 November 2007

2007 – Sem2 – AA1 – Week 12 – Sound Generation – Spatialisation

It may sound like a contradiction, but adjusting the time delay between two copies of the same signal not only shifts that signal in time, it shifts where the sound sits in the stereo field when it reaches the ears. This 'bending' of sound is a direct result of the Haas effect and of the IAD (interaural amplitude difference) and ITD (interaural time difference) principles, the same ones exploited by dummy-head, ORTF and NOS techniques. Although we are using one signal to create a 3D effect while those techniques use near-coincident mic placement, the same result (or something very similar; I'm of the mind that if you want a true stereo spatial effect, use two mics) should theoretically be achievable.
So, let's have a look. Each sound has a left and right channel. First I delayed the left channel by 34 ms while keeping both panned centre. The delayed sound naturally sounded quieter, so I didn't roll off any tops. There are two distinct sounds in this part; the next three are single sounds. Next I tried an interference pan: I pitch-shifted the left channel down by half a semitone, which sounds more 'filtered', almost comb-filtered. The third sound is a forward-sound version, with the right channel fader lowered by 3 dB to push the sound more to the left. Finally, the stereoised version used two methods: I panned the channels at 3 and 9 o'clock respectively, delayed the right channel by 34 ms and also cut its dominant frequency at 3 kHz.
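For anyone curious, here's a minimal sketch of the 34 ms trick (my own hypothetical code, not the actual session), assuming a stereo float array at 44.1 kHz:

```python
import numpy as np

SR = 44100

def haas_delay(stereo, delay_ms=34.0, channel=0):
    """Delay one channel of a stereo signal so the image shifts (Haas effect).

    stereo: float array of shape (num_samples, 2); channel 0 = left.
    """
    d = int(SR * delay_ms / 1000.0)
    out = stereo.copy()
    shifted = np.concatenate([np.zeros(d), stereo[:, channel]])
    out[:, channel] = shifted[: len(stereo)]  # pad, then trim back to length
    return out
```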


-
Audio
-
Haines, Christian. 2007. "AA1-Spatialisation." Seminar presented at the University of Adelaide, 25th October.

Wednesday 24 October 2007

2007 – Sem2 – CC1 – Week 11 – Integrated Setup (3)


This function in Live is surprisingly cool, once Edward showed me how to add the Impulses, which I seemed to have missed during the lecture, that is. This is a rather simple little tune created from random snippets of the supplied guitar file, but it demonstrates the MIDI Impulse function well. I can see a tune like this being used in an adventure game. Again, the surprising ease of use, and the fact that one sound can be morphed this easily into something different, is great and totally practical in soundscapes and effects. I'll be using this to create my rhythm track for the major assignment.
-
-
Haines, Christian. 2007. "CC1 – Integrated Setup (3)." Seminar presented at the University of Adelaide, 16th October.

Live. 1999-2007. Ableton AG.

2007 – Sem2 – AA1 – Week 11 – Sound Generation – Spectral Synthesis



FFT for me will certainly have its uses in the future, but at the moment I am lost in the thought of knowing how to make an exact sound I want from scratch, i.e. not by randomly fiddling about. I would like to get to a point where I know the exact frequencies and shapes to create for the sound I need at that particular time. At present it all seems a little random, and whatever sounds we get seem to be by pure chance. Most of the sounds I ended up with when experimenting with white noise were variations of the "raindrops" that were made in class. I did expect to be able to make pretty much anything imaginable from white noise, since it contains every frequency. I got better results by importing real audio and changing the window parameters, varying the frequency of the filter and speeding up or slowing down the audio itself. If I was to manipulate audio at its sinusoidal level, then Spear would be quicker and easier, but of course we could not add envelopes and such to it. Speaking of which, I tried adding an ADSR to an audio file and realised it only accepted a gate input, and so far all I can see with gate outputs is a MIDI bidule such as Note Grabber. Can anyone tell me how to add an ADSR to affect audio? Pretty please.
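For the record, the envelope itself is simple enough outside of Bidule; here's a minimal sketch of a linear ADSR applied straight to an audio buffer (a hypothetical helper, not an answer to the Bidule gate question):

```python
import numpy as np

def adsr(n, sr, attack=0.01, decay=0.1, sustain=0.7, release=0.2):
    """Build a linear ADSR envelope n samples long (times in seconds)."""
    a, d, r = int(attack * sr), int(decay * sr), int(release * sr)
    s = max(n - a - d - r, 0)  # samples held at the sustain level
    env = np.concatenate([
        np.linspace(0.0, 1.0, a, endpoint=False),      # attack
        np.linspace(1.0, sustain, d, endpoint=False),  # decay
        np.full(s, sustain),                           # sustain
        np.linspace(sustain, 0.0, r),                  # release
    ])
    return env[:n]

# enveloped = audio * adsr(len(audio), 44100)
```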
-
-
Haines, Christian. 2007. "AA1-Spectral Synthesis." Seminar presented at the University of Adelaide, 18th October.

Plogue Bidule. 2001-2007. Plogue Art et Technologie, Inc.

Monday 15 October 2007

2007 – Sem2 – Forum – Week 10 – Instrument Show And Tell

Well, we got to show off the instruments we have been building/making/destroying. It was interesting. I particularly liked the square wave generators a couple of students had made, which were performed by placing the hands over light-sensitive resistors. I would like to make one of these for myself some time down the track, as I think my kids would get a kick out of playing with something like this. Unfortunately some students' instruments didn't work or were being temperamental on the day, including mine, but hopefully it will be sorted by next week. All in all pretty cool, and I will no doubt explore the art of circuit bending more in the holidays. I was one of the more vocal sceptics, but I must admit this exercise has opened up more ideas and avenues for musical creation. I don't look at toys the same way anymore; there is always a thought of how one could be bent into something cool.
-
Whittington, Stephen, and all us Tech dudes. 2007. "Circuit Bending Forum." Seminar presented at the University of Adelaide, 11th October.

2007 – Sem2 – CC1 – Week 10 – Integrated Setup (2)

Things went rather well with this. Since I took up the last blog with AA stuff, I thought I would just put everything in this blog. Last week was to use Live and Bidule together and link a few variables for automation; this week we expanded on that and used more links to automate different things. I worked with Bradley in setting up an oscillator to control the panning of channel 1 on the mixer, which also controlled the wet/dry amount on a reverb, while Tristan played guitar that was recorded into Live. This was a rather simple setup, but the fact that loops were recorded straight into Live and placed into the slots easily and quickly opened up a possible idea for my CC major project that I'll send to you now.


-
-
Haines, Christian. 2007. "CC1 – Integrated Setup (2)." Seminar presented at the University of Adelaide, 11th October.

Plogue Bidule. 2001-2007. Plogue Art et Technologie, Inc.

Saturday 13 October 2007

2007 – Sem2 – AA1 – Week 10 – Sound Generation – Additive Synthesis

I was quite pleased with my recent understanding of Bidule. I found making this Bidule patch easy. I ended up making an additive synth with ten waveform possibilities. I was going to add an LFO today, but the trial has expired, so instead of starting from scratch at Uni, I'll blog what I have. I am confident I have this right, as I was able to reproduce the 'pumpkin smile' with the first two waveforms, one starting at 0 and the other at +0.75. It's not even Halloween yet.
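As a sketch of what I think the patch is doing (my guess in code, not Bidule itself): summing sine partials where the phase offsets are given as fractions of a cycle, e.g. the second waveform starting at +0.75:

```python
import numpy as np

SR = 44100

def additive(partials, dur=1.0):
    """Sum sine partials; each is (freq_hz, amplitude, phase_in_cycles)."""
    t = np.arange(int(SR * dur)) / SR
    out = np.zeros(len(t))
    for freq, amp, phase in partials:
        out += amp * np.sin(2.0 * np.pi * (freq * t + phase))
    return out / len(partials)  # keep the sum in range

# Two partials, the second starting at +0.75 of a cycle:
wave = additive([(220.0, 1.0, 0.0), (440.0, 1.0, 0.75)])
```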






I posted the waveform below because I thought it was interesting. Essentially, a waveform within a waveform was produced by panning the first wave left and the other centred while mucking around with frequency settings.



Although my sound file does not include sounds I will use for my final project (as the trial period expired), I will definitely be able to make some chirping sounds when I remake the patch at Uni.

-
The audio is basically me mucking around with the settings, so there aren't really any parts of significance and I didn't include a pic of it with markings. Until I add the filters and the LFO, it is just a lot of different frequency tones and not synthesised sounds based on anything real.
-
Haines, Christian. 2007. "AA1-Additive Synthesis." Seminar presented at the University of Adelaide, 9th October.

Plogue Bidule. 2001-2007. Plogue Art et Technologie, Inc.

Monday 8 October 2007

2007 – Sem2 – Forum – Week 9 – Bent Leather workshop

I was quite interested in this. I thought the unusual instruments were extremely cool and made great sounds. That is, until they said the instrument sounds were generated by VST plugins. What a letdown. After weeks of making, programming and bending our own instruments, I thought they had done a similar thing but on a larger and more complicated scale, but really it was just a homemade bassoon and a MIDI light harp. No different, really, from me bringing my homemade bass to Uni and playing through plugins to create a bunch of weird sounds. The only real thing to take from this was that mic placement had a great effect on the timbral quality. For example, the wind instruments were basically the same as any other wind instrument, except the pickup was placed near the reed so as to pick up higher harmonics, creating scratchier sounds to be manipulated with plugins. Also, the next time you record, say, a bassoon or sax, just bang your finger on the contact mic while they're playing. It was entertaining, but from an instrument-building perspective, meh. Personally, I found Reed Ghazala's work more interesting. He even looks more interesting.

Look at me. I'm a crazy circuit bender that collects mushrooms.




Now this is a home made instrument.

2007 – Sem2 – CC1 – Week 9 – Integrated Setup (1)

Ok, I'm a bit behind. Edward and I got it all to work in Studio 5, but it kept crashing quite a few times, so we gave up and decided to record our own tunes at home. Well, I'm having trouble again, as I can't seem to automate panning and such on its own. When I link things together, the panning, for example, only moves when I physically change the wave type on my oscillator with the mouse. I'm obviously missing something, so I'll ask Edward tomorrow and hopefully have something to blog then.
The good thing is that after talking with Edward about my FM patch that didn't seem to work, the problem turned out to be in the frequency settings. The modulator's amplitude was too low and its frequency too high to audibly affect the carrier (doh!), so I lowered the frequency setting, made the amplitude setting larger and, hey presto, it works. Yay!!! It's the wrong blog, but here's the FM patch and a pic. Since I was on a roll, I also added an LFO and played around with that. Interestingly, and obviously in hindsight, the output waveform is an exact replica of the waveform selected in the LFO. The lower pic (in which I used the LFO) starts with a sine wave, and then I flicked it to a sawtooth. I am smurt.
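A minimal sketch of the general idea (generic two-operator FM, not my actual patch): the modulator's amplitude and frequency together decide how much the carrier gets bent, which is exactly what I had set wrong:

```python
import numpy as np

SR = 44100

def fm(carrier_hz=220.0, mod_hz=110.0, index=5.0, dur=1.0):
    """Two-operator FM: a sine modulator bends the carrier's phase.

    index scales the modulator's amplitude; if it is too small (or the
    modulator frequency too high relative to it), the carrier is barely
    affected -- the mistake described above.
    """
    t = np.arange(int(SR * dur)) / SR
    modulator = index * np.sin(2.0 * np.pi * mod_hz * t)
    return np.sin(2.0 * np.pi * carrier_hz * t + modulator)
```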
-




-
Haines, Christian. 2007. "CC1 – Integrated Setup (1)." Seminar presented at the University of Adelaide, 4th October.

Plogue Bidule. 2001-2007. Plogue Art et Technologie, Inc.

2007 – Sem2 – AA1 – Week 9 – Sound Generation – FM Synthesis

Sigh. Well, I can't get any sound out of my FM patch. It all seemed simple enough in theory: have a carrier wave modulated by another wave. How can this possibly be difficult??? It doesn't seem to work for me, and after everything else stuffing up, I guess I should have expected this. As for calculating sidebands, it might as well be written in Japanese. I don't understand it at all.
After some more mucking around I seem to have sound, but it doesn't make the same variety of sounds as in class. As for part 3 of the exercise, this patch just makes the one noise, so it is probably wrong, and it therefore seemed pointless to continue.
-
-
Haines, Christian. 2007. "AA1-FM Synthesis." Seminar presented at the University of Adelaide, 2nd October.
Plogue Bidule. 2001-2007. Plogue Art et Technologie, Inc.

Wednesday 3 October 2007

2007 – Sem2 – Forum – Week 9 - The Instrument

Judging by the other blogs, we're supposed to post a write-up about our instrument. Ok, it worked until a day ago, when the circuit board broke. So I've wasted weeks of work and about $60 on parts, tools and paint for a piece of plastic shit that no longer works. I have no instrument, so I guess just put me down for another fail. Have a nice day.
Ok, I figured I might as well put up my progress shots of the instrument getting painted. It seems painting all those model X-Wings, Warhammer figures and Klingon Birds of Prey finally came in handy.


A bit of a sand and it is ready to be painted.


Applying the undercoat.


A bit of paint to spruce it up.




All done.


Thursday 20 September 2007

2007 – Sem2 – CC1 – Week 8 – Performance Sequencing (2)



Geez, what can I say? Why didn't I ever install this? Again, I've been amazed at the simple way of creating songs, and equally amazed that, apart from the obvious bass and brief guitar part, all of this is drums?! The guitar I switched back off pretty much straight away; it jumps in too suddenly, and besides, I was having too much fun warping the drums.
As far as the subject of limited functions goes, I have fallen into the trap of recording track after track of an instrument part in the past, only to be left with a nightmare to edit, and the track really just ends up as lifeless nonsense anyway. Limited tracks or slots force you to commit.

-
-
Haines, Christian. 2007. "CC1 – Performance Sequencing (2)." Seminar presented at the University of Adelaide, 13th September.

Live. 1999-2007. Ableton AG.

Wednesday 19 September 2007

2007 – Sem2 – AA1 – Week 8 – Sound Generation – AM Synthesis



Again, not being the sharpest tool in the shed when it comes to programming, this is going straight over my head. I didn't understand how to make the LFO in Bidule, so I'm way behind with this program and with understanding synthesis. As far as I can gather, AM synthesis is pretty much the same as intermodulation distortion, except we actually want the added artifacts present in AM synthesis.
I think I've done it right. I followed the little sketch I made in class, and after reading the articles I think I'm making the right noises. Both the sine and the triangle waveforms (in the right-hand collection of bidules, the one connected to the main out) seem to be combining to create weird tremolo and beating tones as I adjust the sliders. The bidules on the left are my first attempt (which isn't connected to the main outs). That works too, but I'm not sure they're connected right, as they are probably beating due to the variables being 'out of tune'; the beating stops when both variables are the same value. How I'm meant to make anything sound natural from this is beyond me.
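For reference, a minimal sketch of what I understand the patch to be doing (generic AM, not the Bidule file itself): a slow modulator gives the tremolo, an audio-rate one gives the sideband artifacts:

```python
import numpy as np

SR = 44100

def am(carrier_hz=440.0, mod_hz=3.0, depth=0.5, dur=2.0):
    """Amplitude-modulate a sine carrier.

    A slow modulator (a few Hz) is heard as tremolo/beating; an audio-rate
    modulator instead adds sidebands at carrier_hz +/- mod_hz, the
    'added artifacts' AM synthesis is after.
    """
    t = np.arange(int(SR * dur)) / SR
    carrier = np.sin(2.0 * np.pi * carrier_hz * t)
    envelope = 1.0 + depth * np.sin(2.0 * np.pi * mod_hz * t)
    return 0.5 * carrier * envelope
```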
-
Audio
-
Haines, Christian. 2007. "AA1-AM Synthesis." Seminar presented at the University of Adelaide, 11th September.

Plogue Bidule. 2001-2007. Plogue Art et Technologie, Inc.

Sunday 9 September 2007

2007 – Sem2 – Forum - Proposal for Forum Instrument

Well, my limited brain function isn't going to create the next Toncolarium anytime soon, so the base instrument is a Wiggles toy squeezebox I picked up at an op shop that I plan to circuit bend. It plays various songs and noises at the push of various buttons. I aim to use most of the things in the "ReadingsComp" handout in my instrument, including the addition of potentiometers for pitch and speed variations, a 1-to-X bend, a panic button, a line output, etc. There is another small circuit board inside that is connected to the 'squeezebox' part of the toy. I've disconnected the cord, which can't be seen in the pic, but I plan on bending that too. I've considered making a new box for it, but I kind of like the quirkiness of a squeezebox.

Saturday 8 September 2007

2007 – Sem2 – Forum – Week 7 – Physical Computing (2)

2007 – Sem2 – CC1 – Week 7 – Performance Sequencing (1)


Even though this is a degree in the understanding of sound and its applications in different technological contexts, it is especially nice to do some work with music from time to time instead of simply making noise in the guise of music. It is interesting that we've just spent the last two weeks trying to program MIDI tracks to sound like naturally played real instruments, and now we've gone to looping real audio to sound robotic and repetitive. Just thought I'd mention that observation. :)
Live is bloody awesome. I've had it at home all this time (an LE version came with the 002) but never installed it, as I figured it was 'some toy' like Fruity Loops. Wow, how wrong was I. It is just so quick and easy to create a great-sounding song. The only problem, as I mentioned, is the repetitiveness of music created this way, but that is easy to rectify. In future, I'll be creating my loops in Pro Tools, importing them into Live, creating the basic song, then exporting and importing the song back into Pro Tools to record live playing with the track, to get away from the robotic sound. This may seem completely obvious, but I don't think enough bands do this when creating songs via looping, NIN being the obvious exception. Every rhythmic track (and melodic, for that matter) doesn't need to be created with loops. It's interesting that the Scissor Sisters recorded their latest album this way[1], although Scott Hoffman uses Battery, not Live (along with recording digital tracks back onto 2" tape and mixing on a Neve, but we won't go there), and even though it's an electronic album it doesn't sound overly robotic like electronic music of the past, or even some that has been released recently.
-
-
I mucked it up around 1:35, as I missed the beat, so it was stuffed up for about 7 seconds, but after trying about 30 times to get it spot on I settled for it as is. I assure everyone this is all done live without automation, even though there are automation dots in the pic; they're from playing it live and then rendering the file afterwards. This mix would have been easier to do with more tracks, but I thought I'd keep it to four tracks with four slots so we're all even. Nice, me, eh? ;)
-
Haines, Christian. 2007. "CC1 – Performance Sequencing (1)." Seminar presented at the University of Adelaide, 6th September.

Live. 1999-2007. Ableton AG.

[1] Tingen, Paul. 2006. "Recording The Scissor Sisters." Audio Technology, Issue 51. A. Stewart and C. Holder, eds. Alchemedia Publishing Pty Ltd. pp. 37-40.

2007 – Sem2 – AA1 – Week 7 – Sound Generation – Basic Synthesis



Although this was a bit of trial and error, I managed to get some understanding of a synthesiser. I did this on the Juno-6. The LFO was one thing I never really understood, but I think it's clearer now how it works and what it does. I really don't know what else to say about this. I managed to get a couple of 'unique natural sounds', I think, although I did include the ever-popular wind sound.
-
-
Haines, Christian. 2007. "AA1-Basic Synthesis." Seminar presented at the University of Adelaide, 4th September.

2007 – Sem2 – Forum – Week 6 – Physical Computing (1)

Wednesday 5 September 2007

2007 – Sem2 – CC1 – Week 6 - MIDI Sequencing (3)


Well, I'm guessing that this assignment was to add EQ and effects to our song to 'naturalise' it some more. Thing is, I naturalised it as much as I could last week by fiddling with the synth settings of the A-1, and you can't use VST EQs on MIDI tracks, so I'm confused. Anyway, I fiddled about with the track some more and bussed out to an external effect. It's pretty cheesy. What have I learnt? Apart from finding my way around Cubase, not much. Sorry, but I just feel that this is akin to a surgeon having his scalpels taken away, trying to perform brain surgery with a screwdriver and a rubber mallet, and people expecting the same result. MIDI will always sound computerised, fake, robotic and just plain cheesy while using synthesisers. An audibly realistic rendering of a non-computerised instrument such as a guitar, with its complex waveform and its crucial reliance on resonance from the material the instrument is made from and on the performer's actions, can only be achieved with the MIDI data hooked up to a sampler. (And yes, before anyone says anything, I know we are supposed to be trying to replicate performer actions with this assignment, but my point is that a guitar is a little more complex than a violin. Another instrument that springs to mind as virtually impossible to replicate convincingly with MIDI is the harmonica.) A guitar is so complex, much like the human voice, and a computer is never going to talk like a human anytime soon.
Unless there are EQ plugins and such at Uni that work on MIDI tracks; I'll need to suss that out tomorrow. This blog needs to go up now, so here's my music. Yay.
-
Audio
-
Haines, Christian. 2007. “CC1-MIDI Sequencing.” Seminar presented at the University of Adelaide, 30th August.

Cubase. 2003. Steinberg Media Technologies.

2007 – Sem2 – AA1 - Week 6 – Interaction Design and Sound

I like the thinking behind the difference between 'product design' and 'experience design.' To me, this is along the lines of an alarm clock, for example. In years gone by, a digital alarm clock's alarm would be quite harsh, square-waved, sharp and abrasive sounding in its attempt to wake you up. In recent years the thinking is apparently that it's not the tone of the sound but how loud it is that matters, so a softer, rounder and less harsh sound is used in digital alarm clocks. Hopefully microwave manufacturers and the like will follow suit and get rid of the harsh piezo beep.
Legibility and musicality are important aspects to keep in mind. It's pretty pointless to have a great soundtrack or audio 'gimmick' if it's completely unintelligible through a particular medium, just as most music people listen to involves a lot more than simple piezo beeps.
-
I know this blog is incomplete. Sorry.
-
Lord, Max. 2004. "Why Is That Thing Beeping? A Sound Design Primer." http://www.boxesandarrows.com/view/why_is_that_thing_beeping_a_sound_design_primer. Accessed 1st September 2007.
Tannen, Rob. 2006. "Acoustics and Product Design: An Introduction." http://humanfactors.typepad.com/idsa/2006/01/acoustics_and_p.html#more. Accessed 1st September 2007.
-
Haines, Christian. 2007. “AA1-Interaction Design and Sound.” Seminar presented at the University of Adelaide, 28th August.

Saturday 1 September 2007

2007 – Sem2 – Forum - Week 5 – Circuit Bending (2)

This seems pretty straightforward: find some bends and add some switches to the circuit. The bend I chose was across the resistor. By removing the resistor and soldering in a potentiometer, I was able to adjust the pitch of the sound by turning the pot. A line output could have been soldered into the speaker wire path to enable an audio output suitable for a PA, for example. I didn't bother doing that with this, but I'll probably do it for my main instrument. A DI would be a safe method of using the output.

Some good sites I've been looking at for ideas are here:

http://www.anti-theory.com/soundart/circuitbend/

Haines, Christian. 2007. “Forum-Circuit Bending.” Seminar presented at the University of Adelaide, 23rd August.

Tuesday 28 August 2007

2007 – Sem2 – CC1 - Week 5 – MIDI Sequencing (2)

To get a MIDI file to come even remotely close to sounding like a guitar, you need to program the file to play the same notes as the guitar to begin with. As with most MIDI files, this one has the guitar parts obviously played in on a keyboard without even acknowledging the chord inversions that occur when playing a guitar. Guitars rarely play root-position triads, and playing a guitar part on a keyboard won't produce the same inversions without careful thought and planning. This is why I believe it is always better to manually write the notes in a scorewriter of some sort and export the MIDI file. So after looking around for a note-accurate version of any song at all with open guitar chords, I'll use this pretty good version of Tonight, Tonight by the Smashing Pumpkins, although it sounds different now I've taken out the strings.
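To make the inversion point concrete, here's a hypothetical comparison (my own illustration, not taken from the file) of the MIDI note numbers involved:

```python
# Open E major as strummed on guitar (low to high): E2 B2 E3 G#3 B3 E4
guitar_voicing = [40, 47, 52, 56, 59, 64]

# The same chord as a keyboard-style root-position triad: E4 G#4 B4
keyboard_triad = [64, 68, 71]
```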

As far as drums go, when you hear MIDI versions it is usually a case of "there's a kick" and "there's a snare" or whatever, but never a case of "there's the drums" as a whole. Without recording them in a live environment and mixing the room sound back in (or bussing them all to a reverb unit), the drums never become a cohesive unit.
I had a bit of trouble getting around Cubase, but after chatting with Darren I found out how to turn Snap off and where the velocity window was: it was covered by the Transport bar. Arghhh. I could have saved myself about four hours of frustration, but I got there in the end.
Before (I forgot to change the tempo before I bounced it, so it's a little fast, but that's how the file was originally)

This is the guitar track before.


After

-
This is the finished drum track. It was all just straight across before.

-

Haines, Christian. 2007. “CC1-MIDI Sequencing.” Seminar presented at the University of Adelaide, 23rd August.

Cubase. 2003. Steinberg Media Technologies.

Monday 27 August 2007

2007 – Sem2 – AA1 - Week 5 – Sound Art

Butterfly Piano – Percy Grainger and Burnett Cross, 1951.

George Percy Grainger was born on 8th July 1882 at Brighton, Victoria. He had his first concert tour when he was twelve and soon after travelled to Germany with his mother to further his training as a pianist and composer. Between 1901 and 1914, Percy and his mother lived in London, where his talents flourished. During this time, Colonial Song and Mock Morris were published. Towards the end of his life he worked on his ideas of Free Music, a type of music not limited by time or pitch intervals. The Free Music machines he created in association with the scientist Burnett Cross may be regarded as crude forerunners of the modern electronic synthesiser. On 20th February 1961 he died in New York, and he is buried in the family grave in Adelaide, South Australia.

http://www.percygrainger.net/

http://www.obsolete.com/120_years/machines/free_music_machine/index.html

This piece is called Butterfly Piano. The piano has been retuned in a particular way to create "6th tones or three divisions to the half tone," which I'm assuming means one semitone has been divided into three notes. This kind of music will always get my eyes rolling, but I can appreciate that these artists broke rules laid down by classical composers to realise their musical styles. Without these people, I highly doubt that diverse film scores such as the one for Wolf Creek, or sound designers like Ben Burtt, would be where they are today.
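If my reading is right (an assumption on my part), the maths is straightforward: three divisions per semitone means 36 equal steps per octave, so each sixth-tone step multiplies frequency by 2^(1/36):

```python
# Sixth-tone (36 equal steps per octave) pitches, assuming A4 = 440 Hz.
def sixth_tone_freq(steps_from_a4):
    """Frequency n sixth-tone steps above (or below, if negative) A4."""
    return 440.0 * 2.0 ** (steps_from_a4 / 36.0)

print(sixth_tone_freq(1))  # one sixth-tone up: ~448.6 Hz
print(sixth_tone_freq(3))  # three steps = one semitone: ~466.2 Hz
```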
-
-
Haines, Christian. 2007. “AA1-Sound Art.” Seminar presented at the University of Adelaide, 21st August.

2007 - Sem 2 - Forum - Week 4 - Circuit Bending (1)



This is a Wiggles Squeeze Box toy I picked up in an op shop for $3. The red points would pause the music, which then continued when they were released. It would stop completely if held too long, about three seconds. I thought I'd broken it at first, but removing a battery brought it back to life. A reset switch soldered across the on/off button might be in order. The blue points changed the volume; there was a huge volume drop when these two points were touched. The yellow points seemed to control the 'squeeze box' part of the toy, triggering that sound. Most of the others just created static and scratchy noises. I found the trigger points for all the band members, but I don't see any real point to that, as I can trigger all those separately and together via the buttons on the front. The resistors I'll take out and add some potentiometers, of course. I'm thinking I'll try to find a sliding version to put under the thumb to control while playing.

Haines, Christian. 2007. “Forum Workshop – Circuit Bending.” Seminar presented at the University of Adelaide, 16th August.

Seb Tomczak. 2007. “Forum Workshop – Circuit Bending.” Seminar presented at the University of Adelaide, 16th August.

Wednesday 22 August 2007

2007 – Sem 2 – CC1 – MIDI Sequencing (1)

"Apply the concepts presented in the tutorials to create your own sound track." Well, it seems I missed that part of the text, although 'own' could be interpreted as 'own sounds', which I think I'll go with, as I have run out of time for this. Besides being a crap keyboard player, I have always sequenced manually in a score writer, exported the MIDI file and then imported it into the sequencer. I did try the manual way of playing the keyboard and recording the MIDI data, but it was terrible, and no amount of quantising was going to fix it. Also, the latency is horrendous: it must have been at least two seconds, and yes, I turned Delay Compensation on and off with no difference. Completely ridiculous. So in the end I got a MIDI file of a popular song and applied the VIs to it. For some reason two instruments cannot be heard, though they were definitely playing; I couldn't check the bounced file as I couldn't hear audio from iTunes at Uni. Weird. Cubase is cool in some regards. This has nothing to do with MIDI, but I like the fact that you can record straight in at 16-bit 44.1 kHz so it's ready to burn to CD, and there's no 'bouncing' of audio like in Pro Tools. As far as MIDI goes, I still prefer Guitar Pro and, given the choice, would create my MIDI files in that and not go anywhere near Cubase or Pro Tools (or Logic for that matter) for MIDI file creation.
Haines, Christian. 2007. “CC1-MIDI Sequencing.” Seminar presented at the University of Adelaide, 16th August.

Cubase. 2003. Steinberg Media Technologies.

Monday 20 August 2007

2007 – Sem2 – AA1 - Week 4 – Scene Sound

Ad 1 - UltraTune Kids

This ad uses a play on words to get the message across. The main actor's voice may have been ADR'd, as it's such a direct sound and there's no background ambient noise at all from the surroundings, but it may have just been a boom mic. The sounds are all diegetic. I've only put two tracks on the chart, even though the knocks and bumps from the kids could be considered another track; I included them with the kids.






Ad 2 – SAAB. Born from jets

The ad is filled with direct sound consisting of a lot of sliding and shifting of metal plates that originate from the car itself. I didn't mark each detail in this 'sliding metal soundscape', as it is so detailed in its structure, but it is worth noting the fullness and complexity of the finished sound, which would no doubt have been layered from various elements. There is a background music track playing the entire time. The whole ad, apart from the narration, is produced sound; this isn't sound a normal car makes. Again, this is diegetic sound, but is it hyper-real? The doors slamming could probably have been foley work.



I chose these two ads for their completely differing styles and sound content. The first is very simple but to the point in its message, while the second is very produced and has a lot of sound production in it.

Haines, Christian. 2007. “AA1-Scene Sound.” Seminar presented at the University of Adelaide, 14th August.

Friday 17 August 2007

2007 – Sem 2 – Forum – Breadboarding

Well, the Transformer joke went over really well. The look on Christian's face was actually priceless, even though the joke backfired on me a bit. His face was stunned with a mix of 'you're kidding, right?' + 'I'm lost for words' + 'How old are you?' + 'You are a bloody idiot, Freddie.' Anyway, on with the assignment.
This was very straightforward and quite fun. Although I didn't get the third part working properly (it was modulating its own static but not the guitar itself), I was pleased I could get this done rather well after my escapades with Bidule. Instead of the standard resistor, I played around with the light-sensitive one. I'm thinking I might incorporate these into my own instrument.

-

-

-

-
Haines, Christian. 2007. “Forum Workshop – Electronics, Instrument Building and Improvisation.” Seminar presented at the University of Adelaide, 9th August.

Seb Tomczak. 2007. “Forum Workshop – Electronics, Instrument Building and Improvisation.” Seminar presented at the University of Adelaide, 9th August.

Wednesday 15 August 2007

2007 – Sem2 – Week 3 – CC1 – Modular Programming (3)

After talking to Christian I thought things were clearer, but after fiddling about with Bidule again, no. Personally, I think the tutorials are way confusing and don't tell, say or explain why things go where they go. Maybe they do explain and it's just going over my head. If that was made clear to me, I think I would understand how to make LFOs and such from scratch on my own. Until then, I'm in the dark. Anyway, here's the tutorial.



-
Haines, Christian. 2007. "CC1-Modular Programming." Seminar presented at the University of Adelaide, 9th August.

Plogue Bidule. 2001-2007. Plogue Art et Technologie, Inc.

Sunday 12 August 2007

2007 – Sem2 – Week 3 – AA1 – Sound Scene


I know there's a Terminator picture on the week 3 handout, but I was going to do that film anyway as soon as I discovered what the assignment was. I wanted to focus on a film with non-diegetic sound in it, and Terminator 2's narrative intro by Sarah Connor (Linda Hamilton) was the first to spring to mind.
The general soundscape is composed of mechanical noises with no sounds of human civilisation, which in turn depicts a cold, sterile environment. There are no natural sounds captured from the location; it was most likely filmed in a combination of soundstage, outside location and miniatures, with all the sounds added in post. There is a lot of reverb in these opening scenes. The panning isn't overly hard left or right for the sound effects, although the music is panned further out. Except for a couple of people screaming as they die, there is no dialogue. I chose this because it was a scene full of action and therefore has a lot of sound and effects, but halfway through I was regretting it. Analysing works in this manner reveals what can be faked and left out, what absolutely needs to be in the soundtrack, and the skill of working sound (or silence) to enhance what is happening on screen.

Haines, Christian. 2007. "AA1-Sound Scene." Seminar presented at the University of Adelaide, 7th August.
"Terminator 2: Judgment Day." Universal Studios, 1991.
-
Ok, in case I don't get to turn all this into a picture, I'll just post it as text. I know it's ridiculously huge. Sorry.
Terminator 2: Judgment Day. Chapter 1.
16 seconds in. Scene involving children on a swing in an LA park before Judgment Day.

Non-diegetic: Music starts. A deep rumbling grows louder, followed by a reverse sound effect, assumed to simulate an explosion.

Diegetic: Sounds of children playing on a swing.

47 seconds in. The same park in LA after Judgment Day.

Non-diegetic: 1:13. Narration by Sarah Connor begins.
1:40. Music starts. It's dramatic without any real melody, filled with droning, mechanical sounds, yet even though it's mechanical it still seems like a living pulse. The intent was obviously to imply the machines are actually alive and unstoppable. It grows in intensity with a sense of impending doom throughout this whole scene.
1:56. Spot SFX (single synth note) as a soldier is shot.

Diegetic: The roof of a car squeaks as it wobbles in the breeze. Wind and dust are heard blowing. There are different sounds for the wind: sometimes it's a low sound, other times a higher-pitched whistle. The main wind sound is a boxy, hollow-sounding drone. Rustling sounds can be heard behind Sarah's narration.
1:19. Sound of a skull being crushed as a terminator steps on it. Explosions and laser cannons start.
1:40. Sounds of multiple skulls being crushed as an HK rolls over them. The high-pitched whirring sound of the HKs can be heard.
1:56. Sounds of a man being shot.
1:58. Close-up of the cannon on a 'tank HK.' As it moves, a lower-pitched whirring sound is heard.
2:00. The high-pitched whirring sound of a 'flying HK' is made more prominent in the mix as it flies toward the camera. More sounds of people getting shot.
2:07. A rocket launcher is fired by a soldier from the back of a car.
2:13 to 2:19. The car that the rocket was fired from is now heard. Interestingly, it wasn't heard in the earlier shot; there was no doubt more important action to worry about. Another interesting point is the shots changing back and forth between the car and the flying HK: the car is only heard when the car is on screen, and the HK only when the HK is on screen. In reality, we'd be hearing both continuously at the same time.
2:20. The car is shot, explodes and goes into a roll. Suitable effects are used for this.
2:28. The terminators' guns are heard as they walk past the camera. The closer terminator turns and fires at the camera; the gun sound is louder for this terminator.
2:33. The flying HK is shown again and can be heard as it flies in front of the camera and fires on another car. Another rocket is fired from this car at the HK; this is heard. The HK is hit and its left engine explodes. This is a pretty good sound: instead of a simple explosion, it sounds more like twisting, bending metal, with a medium-pitched drone that gets pitch-shifted down as the HK goes down and explodes into the ground.

2:40. Switch to the interior of the human soldiers' base of operations.
Non-diegetic:
Music style changes to a more heroic theme as John Connor walks down a hallway. The style is similar, but this has a distinct melody and is filled with strings. The strings soften the music so it can be identified as the 'hero's theme.'
2:44. Sarah continues with more narration.
3:27. The main Terminator theme song starts.

Diegetic:
2:40. Footsteps are heard as John Connor and his men walk down a hallway. The battle can still be heard, muffled, in the background.
2:44. As John emerges from the hallway into the open, the sounds of the distant battle are raised slightly in the mix.
3:21. The screen is filled with fire. As Sarah continues her narration, a deep rumble starts as the flames roll out.
3:35. A metal 'clang' is heard as the "Terminator 2: Judgment Day" logo is slammed together in two halves. Fire is heard burning.
3:55. More rumbles and sounds of flames burning as the camera pans in front of a burning playground.
5:19. The same metal slamming sound is heard as the front grill of a truck slams down in two halves, just as the Terminator 2 logo did at 3:35. The film proper starts.
-
The music really pushes this whole chapter. The way it grows, moves, rises and dips makes it come alive; without the music, it's just another war scene. After all this analysing, though, I think analysing the music and why it works would be too much.

Wednesday 8 August 2007

2007 – Sem2 – Week 2 – CC1 – Modular Programming (2)


Well, I followed the tutorial and got the monosynth to work, but I still don't have any clue what I did; it was just a case of painting by numbers for me. I tried changing a couple of the sounds (which really did nothing much at all), but if I was asked to make a synth from scratch I'd have no idea. The analogy I'd use is: it's not knowing how to use a compressor, but knowing how to make one. I'm guessing this is our intro to plugin making, and it's a no-brainer that I've got a lot to learn. I didn't bother uploading the noise, as it is a rehash of the tutorial, and that's exactly what Christian didn't want.

Haines, Christian. 2007. "CC1-Modular Programming (2)." Seminar presented at the University of Adelaide, 2nd August.

Plogue Bidule. 2001-2007. Plogue Art et Technologie, Inc.

Saturday 4 August 2007

2007 – Sem2 - Week 2 – AA1 – Environment Analysis


Well, I stuck the mic out in the back yard and recorded away. Yes, pretty much straight away a police car can be heard wailing down Main North Road, but after Friday's Aural class I'm not in the mood for Elizabeth jokes, so just leave it. This soundscape has mostly distant sounds in it, although one pigeon seemed interested in the mic at the start of the track. This is helpful in identifying the individual sounds that make up an environment 'soundscape.' Identifying and including the appropriate diegetic sounds in film will make the film sound natural.

Soundscape

Haines, Christian. 2007. "AA1-Environment Analysis." Seminar presented at the University of Adelaide, 31st July.

Monday 30 July 2007

2007 - Sem2 - Week1 - CC1 - Modular Programming(1)


This program seems pretty cool. The only thing I can see myself having trouble with is patching things in any old way. I'm not really used to that (unless I make a mistake), as I've always been yelled at if I patch something wrong. I did patch things in 'wrong', but it didn't do anything, so I patched everything in a normal chain.
The 'tune' is fairly random. I played around with the 16-step sequencer and particle arpeggiator and got some quirky notes happening. I put some low-level distortion in via the Deconcrisseur and two-band distortion plugs.
Overall my soundscape isn't anything to write home about, but I can see this program being a useful application.

Soundscape
Bidule file
Haines, Christian. 2007. "CC1-Modular Programming (1)." Seminar presented at the University of Adelaide, 26th July.

Plogue Bidule. 2001-2007. Plogue Art et Technologie, Inc.

Sunday 29 July 2007

2007 – Sem 2 – Week1 - Forum – Electronics, Instrument Building and Improvisation

Circuit bending: the art of rummaging through bins to make 'instruments' from other people's junk. Good Lord! After today's bombardment of buzzes, pops and zaps, I can only imagine what the Ode to Headache Symphony No. 1 will sound like in week 8. What can I say about the Victorian Synth? I don't know, but I do know I was doing this exact same thing with speakers as a kid, to bounce marbles off; they've just put a name to it now. When you're actually making the noise yourself it is fun, I guess, although it gets boring rather quickly, but from an audience perspective I found it completely boring. It takes no musical ability at all and just solidifies the old adage that computer geeks are trying to be musicians and producers. Once upon a time one could buy a computer and suddenly, rudely, call themselves a Mastering Engineer. Now they can rummage through a bin and call themselves a musician, or maybe even a Musical Instrument Engineer? Good grief. And judging by a few YouTube videos, this really just seems to be a hobby for the unemployed to waste their waking hours on, but that's just my cynical opinion. ;)

Buzz Buzz Zap Bzzzztt

Haines, Christian. 2007. “Forum Workshop – Electronics, Instrument Building and Improvisation.” Seminar presented at the University of Adelaide, 26th July.

Whittington, Stephen. 2007. “Forum Workshop – Electronics, Instrument Building and Improvisation”. Forum presented at the University of Adelaide, 26th July.

Seb Tomczak. 2007. “Forum Workshop – Electronics, Instrument Building and Improvisation.” Seminar presented at the University of Adelaide, 26th July.

Friday 27 July 2007

2007 - Sem2- Week 1 – AA1 – What is Sound Design?

Sound design, as we discussed in class, can cover a fair range of work, from film work to toasters. A manufactured sound should have a purpose or function, but does that sound faithfully reflect that function? The form of the sound may deliberately be non-complementary to the image or device it will be attached to. Perhaps it would be acceptable for the latest version of Unreal Tournament to have a baby burp each time a gun is fired instead of a huge machine gun sound, or perhaps each time your microwave sets off the annoying piezo ding it could instead say "Oi dickhead, get your crap out of this oven!!" Hmm, perhaps not.

I was going to jump right into talking about Ben Burtt, but as far as film sound is concerned I couldn't possibly blog on the subject without mentioning Jack Foley. So there you go: I mentioned him, and so I don't go too far over the word count, you'll have to look him up yourself.



The 'lightsaber effect' created by Ben Burtt for the 1977 film Star Wars is no doubt one of the most recognisable sound effects there is; people immediately recognise it and associate it with George Lucas' epic saga. It has been sampled and used in a few other films since then, but people more often than not make a comment like "hey, that's a lightsaber noise." The whole idea of the sound was to be unique while still sounding organic. Lucas apparently wanted to get away from the simple electronic sounds and effects of the science fiction films of the time for his 'used universe.' A lightsaber obviously doesn't exist in the real world, so Burtt was able to use artistic ideas to represent the form of the sound, but he still kept a certain functional element to it by recording a doppler effect (which involved waving a microphone in front of a speaker while it played the buzz/hum noise) to create a sense of movement. For anyone interested in knowing more about the work of Ben Burtt, there is an excellent documentary on the Star Wars Episode II – Attack of the Clones DVD named "Films Are Not Released; They Escape."


And while I’m in full nerd mode, a free cookie goes to the person who guesses what this is from.

Haines, Christian. 2007. “Audio Arts-What is Sound Design?” Seminar presented at the University of Adelaide, 24th July.

Singer, Philip Rodrigues. "The Art of Foley: The Story of Jack Foley." http://www.marblehead.net/foley/jack.html (accessed 26th July 2007).
Carlsson, Sven E. "Sound Design of Star Wars." FilmSound.org. http://www.filmsound.org/starwars/ (accessed 26th July 2007).

Tuesday 26 June 2007

Week 15 – AA1 – Final Recording

I recorded a young three-piece band named Unknown Truth. An eight-hour recording session, and everything got done. Apart from a little mishap with the routing of the headphone sends, everything went well. The recording itself went very smoothly. Four takes were done, and the final one was judged as having the best vibe. Only one quick drop-in each was required for the guitar and the bass. The vocals were recorded in the live room. The guitar ended up being double-tracked. Overall I'm pretty pleased with the recording, although I still feel I have a long way to go. The snare especially sounds flat, and I probably should've used baffles more to tame the room sound. The band was extremely happy with everything, so I am happy about that. They got to take a rough mix on CD home with them at the end of the session and got yelled at by some old man from the Scott Theatre telling them to turn it down when they blared it in the parking lot. Ahh, rock n roll.

The initial mix took around 15 hours. Much of this time was spent mucking about with the snare drum and the vocals. The snare doesn't have any 'crack' to it, and no amount of equalising, as I found, was going to help. The vocals (and the drums in general) have the room sound imprinted on the recording; baffles around the back of the kit and around the vocalist would have helped. In the end I scrapped the first mix and started again with a fresh outlook. This time I used the room sound instead of fighting against it. This kind of thinking certainly helped: the mix went smoother the second time around, and I got it done in six hours.
-
-
-
Fieldhouse, Steve. 2007. “Audio Arts 1.” All seminars presented at the University of Adelaide from Feb to June.

Digidesign. 1996-2007. Avid Technology, Inc. All Rights Reserved.

Sunday 24 June 2007

Week 15 - CC1 - Final Composition

The idea for this came from a collection of sounds I recorded a couple of years ago. The original sounds were recorded on analogue tape via a Marantz tape recorder using a RODE NT3. The sounds are scrapes recorded from a rusty metal louvre on a pig pen and a metal ashtray from a car being slid in and out. A small section from this collection is used in this composition. A section with as much sustain as possible was found; the longest 'note' was around one and a half seconds, so a lot of time stretching was going to be needed.

The piece itself is made from four tracks which represent a string quartet. Each track comprises small sections of recorded scrapes pitch-shifted to create musical notes. A short piece was written in B minor and used as the template for the scrapes. I think the idea of sounds of nature replacing traditional orchestral sounds has potential, although this piece didn't really represent it well. With more work and time, I'm sure it could get to a level I would find acceptable.

This piece could easily have been put together via MIDI, but it would have had a different result. If the sound was triggered by a sampler, the sounds would no doubt have artifacts from being pitch-shifted, with the side effect of each sample's tempo being automatically altered up or down depending on the keyboard note. Having full control by choosing the sound which most closely resembles a particular note gives much better results, although it is a very time-consuming method.
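That tempo side effect falls straight out of the maths: a plain resampling pitch shift of n semitones changes the playback rate by 2^(n/12), so the duration changes with it. A quick illustration (my own numbers, not the actual session):

```python
def resample_shift(duration_s, semitones):
    """Return (playback_rate, new_duration) for a resampled pitch shift."""
    rate = 2.0 ** (semitones / 12.0)
    return rate, duration_s / rate

# A 1.5 s scrape shifted up a fifth (7 semitones) now lasts only ~1.0 s.
print(resample_shift(1.5, 7))  # (approx. 1.498, approx. 1.001)
```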
-
-
-
Haines, Christian. 2007. “Creative Computing.” Seminars presented at the University of Adelaide between Feb and June 2007.
Carrol, Mark. 2007. "Perspectives in Music Technology 1A." Seminar presented at the University of Adelaide, Week 7.

Digidesign. 1996-2007. Avid Technology, Inc. All Rights Reserved.

Thursday 31 May 2007

Week 12 - Forum - Composition and Improvisation

If a musical piece has only chords written and no melody, does adding a random melody 'on the fly' during a performance constitute improvisation? Maybe. I thought so before this forum, but now I realise it could be considered a composition, since a framework, i.e. the chords, was already in place. I still like to think the idea of randomness is separate from improvisation: certainly related, but separate nonetheless. This is all an interesting paradox, but one my brain can do without at the moment. My idea of improvisation is being given a task to do and making it happen with whatever limited means are available at that given time. That could mean playing a musical piece without any sheet music, recording a performance with limited equipment, or breaking out of prison with no more than a paper clip and a drinking straw.

Whittington, Stephen. 2007. Forum Workshop "Composition and Improvisation". Forum presented at the University of Adelaide, 31st May.

Harris, David. 2007. Forum Workshop "Composition and Improvisation". Forum presented at the University of Adelaide, 31st May.