Saturday 10 November 2007

2007 - Sem 2 - CC1 Major Project - Electroacoustic Performance

Project Plan
After doing the Integrated Setup assignments, I fell upon the idea of doing a version of a canonic piece. Whether it is a true canon, I’m not entirely sure, but I would like to attempt using Live to record a guitar to a click (so it is in time). This loop will in turn be placed in Live and played against the guitar while it is still playing. The guitar will have its effects automated in Bidule by various ‘links’ made from various oscillators. The looping process will continue, say after every eight bars, with each new loop placed in Live and played against the former loops in real time. Eight bars will hopefully be enough time to get a loop into the right place in Live and have it play on time. A kind of rhythm between myself and the performer will need to be established to keep everything going for three minutes. Having Bidule save the guitar loops straight to the Live folder in use should keep things running smoothly and make the loops quickly accessible for Live. Two separate record outs from Bidule will be needed: one for the guitar itself, straight to Live, and the other from the main mixer, to record the whole piece. I’m thinking a score might be better, but since I am running out of time for this, I am thinking sticking to a simple key like C major and playing quarter notes (maybe moving to 16th notes as it progresses) will keep it simple; few notes should clash and this way will not complicate things.
-
Research
Creating “electroacoustic music” on a computer kind of defeats the purpose, I think. Using pieces like John Cage’s Williams Mix as inspiration was rather difficult. The difficult part wasn’t creating an electroacoustic piece, but rather recreating the chance events, or ‘flukes of sound’, that occur naturally while creating pieces like this. A computer is rather locked in: we can adjust things rather easily after the event, we end up looking at a screen more than hearing sound, it is mostly keyboard and mouse clicking, and ‘chance’ sound events sound rather faked.

To get around this ‘fake’ sound, I approached this recording a little differently. I had hoped to create a canonic piece in the classical vein by grabbing loops from a performer while he played, placing them in Live in real time, and having him perform against them at differing intervals. This proved difficult, as the loops were never in time: the computer (in Bidule) takes half a second or so to start recording, putting all the loops out of time. Also, the drum beat I was using to keep time for the player (and to keep my loop captures in sync) just sounded laborious, monotonous and, well, too in time.
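If the recording-start delay is roughly constant, the fix amounts to chopping that much off the front of each captured loop before dropping it into Live. A rough Python/NumPy sketch of the idea — the half-second figure and the sample rate are assumptions for illustration, not measurements from Bidule:

```python
import numpy as np

SR = 44100          # assumed sample rate (Hz)
LATENCY_S = 0.5     # assumed constant recording-start delay (illustrative figure)

def trim_capture_latency(loop: np.ndarray, latency_s: float = LATENCY_S,
                         sr: int = SR) -> np.ndarray:
    """Drop the dead samples at the head of a captured loop so it lines
    up with the beat it was recorded against."""
    offset = int(round(latency_s * sr))
    return loop[offset:]

# e.g. a captured loop with half a second of dead air at the front
captured = np.concatenate([np.zeros(22050), np.ones(1000)])
aligned = trim_capture_latency(captured)
```

Of course this only helps if the delay really is constant from capture to capture; if it wanders, the loops drift no matter what you trim.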

Cage and his use of eight tracks of tape cut up into six themes would not have the same effect done today on a computer. This, for me, is the challenge of making electroacoustic music on a computer: the challenge to create music that does not sound faked.
Electroacoustic music is more a type of aural variable than music, and often seems to have some sort of visual element to it. If I were to sit down and simply listen to John Cage’s work, I would get bored of it within five minutes (and that is stretching it), but if it was a live performance, seeing him loop, splice, set up and actually perform his piece would be much more interesting and entertaining. I guess ‘entertaining’ is a word that does not go too well with electroacoustic music. It is more art, I guess, with a focus more on the performance aspect (as it is fun to create) than on audience approval.
-
Composition Analysis
As explained in the Project Plan, my initial concept was to create a canonic piece based on playing loops back against the performer in real time. This proved difficult, as latency was an issue, as well as the basic timing in general. The latency was dropped to 32 samples in Plogue Bidule but made little difference. I did not want pre-sequenced instruments for a soloist to simply come in and play over. The main aim was to have the musician playing guitar while I captured sections of his performance, manipulated them in either Live or Bidule and played them back against his ongoing playing, building the piece into a crescendo of timbral intensity. The first few times we used a pre-sequenced MIDI loop in Ableton Live, but the rigidity of it was off-putting. It was originally there mainly for me to count against when capturing loops. Although it utilised and demonstrated the MIDI sequence in Live made from cut-up pieces of guitar from the CC week 11 lecture, I decided it was best to drop it. It sounded good, but the loose feel I was after was not happening. A much more relaxed and sonically pleasing outcome was achieved when I decided to scrap the MIDI sequence and play with a freestyle feel. So after the eighth take we had something that both of us smiled about. The full take goes for half an hour; we were in a world of our own. I can’t send you the whole thing, so I took the first five minutes and made that the piece for marking. Listening back to it, if I wasn’t there recording it I wouldn’t believe that this is one guitar ‘technically’ playing by itself, recorded in one take.
-
Program Note
Mushrooms Aplenty
Freddie May
5′02″

This was recorded with one acoustic guitar in one take. By using a simple score of “play the instrument in an unusual way every two or so minutes, but make sure it is different from the two or so minutes before it,” I was able to capture loops, manipulate them texturally via adjustments to pitch, frequency or timing, and play them alongside the instrument, all in real time using a sequencer. The piece starts with the one guitar playing alone and eventually builds with intensity and grows sonically as more loops are taken and added to the original lone guitar. It is a spatial ambient piece that grows over a period of time into a densely layered audio experience that creeps up on you. Acoustic guitar playing provided by Chris Coventry.
-
-
-
Haines, Christian. 2007. “CC1 – Electroacoustic Performance.” Seminars presented at the University of Adelaide.

Live. 1999–2007. Ableton AG.
Plogue Bidule. 2001–2007. Plogue Art et Technologie, Inc.

2007 - AA1 Major Project - Environment Analysis and Spatialisation

The process? Well, it was a hassle with a capital HOFF! I had so much grief making artificial versions of these birds. The trouble, I see now, is that they are organic creatures; I believe mechanical noises would be easier to replicate. Stupid birds. Oh well, it is done now, sort of. I am not happy with the results, but something has to be handed in. In the end I took Christian’s advice and took existing bird noises out of context from the recording I made. I settled for a goose. It kind of worked, but the sound still has ‘goose qualities’. I used Pro Tools to edit it and twist it up into a poor facsimile of the birds on my recording. I even tried revisiting the torn paper and SPEAR without success. Bleh!
Anyway, when I finished making sound effects I made a number of separate tracks in Pro Tools and lined up the SFX with the original recording. This gave me eight tracks with the sound effects separated, ready to mix in Cubase. I bounced them as individual mono tracks, and also through some mild reverb. This is why there is nothing filled in time-wise on the assets list: all the files are 1:38, as I didn’t bounce them as short, individual files. Although I recorded outside, reverb is still needed to make something sound natural. I tried to make it sound as if it was outside by removing the early reflections and keeping the decay under a second. I think it worked; it sounds like it is at night now to me. Kinda cool.
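That ‘outdoor’ reverb idea — no early reflections, decay under a second — amounts to convolving with a short, purely diffuse impulse response. A rough Python/NumPy sketch, where the decay and wet-mix values are illustrative, not the settings I actually dialled in:

```python
import numpy as np

def outdoor_reverb(dry: np.ndarray, sr: int = 44100,
                   decay_s: float = 0.8, wet: float = 0.15) -> np.ndarray:
    """Convolve with exponentially decaying noise: a diffuse tail with no
    discrete early reflections, dying away in under a second."""
    rng = np.random.default_rng(0)
    t = np.arange(int(decay_s * sr)) / sr
    # exp(-6.9 t / decay_s) is roughly -60 dB by t = decay_s
    ir = rng.standard_normal(t.size) * np.exp(-6.9 * t / decay_s)
    ir /= np.sqrt(np.sum(ir ** 2))          # normalise IR energy
    tail = np.convolve(dry, ir)[: dry.size]
    return (1 - wet) * dry + wet * tail

dry = np.zeros(4096)
dry[0] = 1.0                                # impulse as a quick smoke test
out = outdoor_reverb(dry)
```

A real room would put strong early reflections ahead of the tail; leaving them out is exactly what makes it read as open air.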
Time to mix in surround. Since all the tracks were pretty much done, it was a simple matter of panning and using different volumes and filters to simulate distance, i.e. rolling off the top end to make certain sounds more distant than closer sounds with no tops rolled off. I am reluctant to use the term ‘hyperreal’ to describe this work, but I guess I could: the sounds made are exaggerated versions of the sound (albeit not exaggerated versions of the real thing), and hyperreal sound is sound that over-emphasises the sonic effect. They could possibly be Foley, as the sounds are made from other existing items taken out of their original context, although that would imply vision accompanying it; then again, perhaps not, as I am pretty sure the sound effects people working on radio dramas were called Foley artists. Perhaps I will just stick with the term ‘sound effects’. It is a simple but vague term that can mean a lot of different things. For this piece the term will mean ‘sounds created artificially using synthesis, or original sounds taken from their original context and edited and/or manipulated for an unrelated specific sonic purpose.’
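The distance trick — roll off the top end and drop the level — can be sketched as a one-pole low-pass plus gain. Python/NumPy again, with an assumed cutoff and attenuation (the real values were set by ear, not by numbers):

```python
import numpy as np

def simulate_distance(x: np.ndarray, sr: int = 44100,
                      cutoff_hz: float = 3000.0,
                      atten_db: float = -6.0) -> np.ndarray:
    """Roll off the top end with a one-pole low-pass and drop the level,
    so the sound reads as farther away than an unfiltered, louder one."""
    a = np.exp(-2.0 * np.pi * cutoff_hz / sr)   # one-pole coefficient
    y = np.empty(x.size, dtype=float)
    z = 0.0
    for i, s in enumerate(x):
        z = (1.0 - a) * s + a * z               # low-pass state update
        y[i] = z
    return y * 10.0 ** (atten_db / 20.0)        # dB to linear gain

x = np.ones(2000)                               # DC input as a sanity check
y = simulate_distance(x)
```

In air, distance also attenuates the highs faster than the lows, which is why the low-pass on its own already reads as “further away” even before you touch the fader.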
-
-
-
Haines, Christian. 2007. “AA1 – Environment Analysis and Spatialisation.” Seminars presented at the University of Adelaide.

Cubase. 2003. Steinberg Media Technologies.

Plogue Bidule. 2001–2007. Plogue Art et Technologie, Inc.

Digidesign Pro Tools. 1996–2007. Avid Technology, Inc.

Monday 5 November 2007

2007 – Sem 2 – AA1 – Week 12 – Sound Generation – Spatialisation

It may sound like a contradiction, but adjusting the time delay of a similar signal not only adjusts the position of that signal in space, it adjusts the stereo sound field when it reaches the ears. This ‘bending’ of sound is a direct result of the Haas effect, the dummy-head mic, ORTF and NOS techniques, and the theory of IAD (interaural amplitude difference) and ITD (interaural time difference). Although we are using one signal to create a 3D effect and those techniques use near-coincident mic placement, the same result, or a very similar one, should theoretically be achievable. (I’m of the mind that if you want a stereo spatial effect, use two mics.)
So, let’s have a look. Each sound has a left and right channel. First I delayed the left channel by 34 ms while keeping them both panned centre. The delayed sound naturally sounded quieter, so I didn’t roll off any tops. There are two distinct sounds in this part; the next three are single sounds. Next I tried an interference pan: I pitch-shifted the left channel down by half a semitone, which sounds more ‘filtered’, kind of comb-filtered. The third sound is a forward-sound version: this had the right channel fader lowered 3 dB to push the sound more to the left. Finally, the stereoised version used two methods: I panned the channels at 3 and 9 o’clock respectively, delayed the right channel by 34 ms, and also cut its dominant frequency at 3 kHz.
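The 34 ms Haas delay is easy to show in code: duplicate a mono signal, delay one channel, and the image pulls toward the earlier (undelayed) side. A Python/NumPy sketch, with the sample rate assumed:

```python
import numpy as np

def haas_widen(mono: np.ndarray, sr: int = 44100,
               delay_ms: float = 34.0) -> np.ndarray:
    """Make a stereo pair from a mono signal by delaying one channel;
    the interaural time difference shifts the image toward the
    undelayed side (the Haas / precedence effect)."""
    d = int(round(delay_ms / 1000.0 * sr))
    left = mono                                              # undelayed
    right = np.concatenate([np.zeros(d), mono])[: mono.size] # delayed copy
    return np.stack([left, right], axis=0)                   # shape (2, n)

mono = np.ones(5000)
stereo = haas_widen(mono)
```

At 34 ms the delay sits right at the edge of the precedence zone; much longer and the ear starts to hear a discrete echo rather than one shifted image.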


-
Audio
-
Haines, Christian. 2007. “AA1 – Spatialisation.” Seminar presented at the University of Adelaide, 25 October.