Friday, 30 March 2007

Week 5 - Forum - "Collaborations (2)"

The first presentation was by Luke Digance on the collaboration between Radiohead and Merce Cunningham. Yawn. More collaborations of people. Ok, that may be a bit harsh, but I believe there is a lot more to the term “collaborations in music” than simply musicians getting together. There are artists, obviously, but there are also producers, engineers, distributors, equipment and devices being combined into new types of technology, and even animals and the weather. Having the music play during his presentation was not a good idea. At the end of the presentation, David Harris asked Luke, “How do you feel about the result of the collaboration? Is it negative or positive?” An “I don’t know” was the reply. O...K...

The second presentation was by Daniel Murtagh about Mike Patton and his dozens of collaborations with other artists. Another one who insists on talking with the CD playing at the same time. Roll off the tops, turn it down so we can hear you, or TURN IT OFF! I’m sure it was a lovely talk; I just couldn’t hear all of it. The presentation, from what I could hear, was again about the result of the collaboration and not the process of it. It was more a presentation of his opinion on how great Mike Patton is. C’mon. If fast drums and distorted guitars suddenly make music ‘death metal’, then does the inclusion of orchestral instruments on albums by Pink or Gwen Stefani obviously make their albums classical music? Mike Patton was described as “interesting” by Daniel. “Interesting” is certainly one word for the ear-bashing I received. What I heard, in my opinion, was a poor attempt at various musical genres, obviously the result of too many drugs and an out-of-control mental illness. At last!! David Harris saved the day yet again and coaxed an actual collaboration statement from Daniel, about Mike Patton and Norah Jones collaborating via the internet, sending audio files to and fro to create a single.

Darren Slynn was third. Holy crap!! Great presentation. Clear, strong voice. Confident and direct, without his face buried in a handful of notes. He walked the floor and had good presence. His enthusiasm made me sit up and listen. The best presentation so far, in my opinion. I hope I can muster up a presentation half as good. He made the point of distinguishing between a facilitator and a collaborator, referencing Frank Zappa.

Alfred Essemyr was last. His presentation focused on what I mentioned earlier. There was no mention of any particular artists collaborating as such; he focused on collaborating in various areas to get your music heard by the general public. He explained that collaboration can be much more than just two people combining ideas. I loved his innocent statement, “collaborate with yourself.” It makes no sense at all but at the same time makes complete sense.

Looking back on the topic “Collaborations,” the majority of presentations have really just been a show and tell on the style of music one likes. It has always been a one-sided view of the collaboration, with the presentation often focusing on the result of said collaboration and very little on the working methods and processes during the actual collaboration itself.
-

Whittington, Stephen. 2007. Forum Workshop “Collaborations Pt 2”. Forum presented at the University of Adelaide, 29th March.

Digance, Luke. 2007. Forum Workshop “Collaborations Pt 2”. Forum presented at the University of Adelaide, 29th March.

Murtagh, Daniel. 2007. Forum Workshop “Collaborations Pt 2”. Forum presented at the University of Adelaide, 29th March.

Slynn, Darren. 2007. Forum Workshop “Collaborations Pt 2”. Forum presented at the University of Adelaide, 29th March.

Essemyr, Alfred. 2007. Forum Workshop “Collaborations Pt 2”. Forum presented at the University of Adelaide, 29th March.

Tuesday, 27 March 2007

Week 4 - Forum - "Collaborations (1)"

It was interesting to see differing opinions on the concept of collaborations. The first presentation was by David, who talked about Metallica and Michael Kamen joining forces for the album S&M.[1] In my opinion these sorts of collaborations are always a one-way street. By that I mean the orchestra has to adapt to playing the band’s way. There is often very little compromise from the band. For example, during the presentation I learned that the orchestra had to learn to play with in-ear monitoring, was separated by clear baffles, had all its instruments mic’d and had to get used to playing at huge volumes. Apart from the technical aspect, there was no compromise in the song arrangements. They are exactly the same as the album versions, except with an orchestra jammed in there. Metallica weren’t the first and won’t be the last to do this, but honestly I never saw the point in it all, and thank goodness the trend seems to be over.
Vinnie’s presentation on Trilok Gurtu and the collaboration of culture, religion, beliefs and music was interesting. I’d never heard his music before, but I found it appealing. A lot of world music trades on its cultural origins alone, and this was a good blend of culture and ‘commercial’ sounds. If only Vinnie knew what was to come in the final presentation.
William presented on the collaboration between music, sound design and the game development industry. I must apologise, because a lot of this presentation fell on deaf ears as I actually started falling asleep.
And then there was Sanad. Yes, well.
I was glad I didn’t have to present in the first group. Not only was it good to see what was expected from the presentations it also demonstrated what not to do.
-
[1]Metallica. 1999. S&M (album). DVD. Elektra Records.

Monday, 26 March 2007

Week 4 - CC1 - Sequencing (1)

I agree completely with Christian's comments about "decoupling the brain from technology."[1] Computers create the trap of 'looking' at sound instead of listening to it. Although I had done MIDI sequencing before, this was a new experience for me. I was intrigued by the idea of 'sequencing without using MIDI.' This experience has brought a new and positive approach to sequencing for me.

This week we needed to imagine a soundscape which would include the twelve sounds we created over the last few weeks. Instead of doing the “imagining part” I did something a little different. I felt I needed a set goal or else I’d be there “imagining” forever. I made little icons and shapes to represent each sound. I took the two longest sounds and used those as the main track over the 45 seconds, laying their respective shapes (a bunny and a long piece with a spiky section) at the bottom. I then took handfuls of random icons and dropped and flicked them onto the space. Where they landed is where their respective sound would appear in the mix. I lined them up vertically so they were neat and decided to add a few extra “beats” for some sounds where I thought it might sound good. At this point I had no idea what it would sound like. I then sequenced the audio files in Pro Tools so they would faithfully represent the sequence on paper. All done.
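The icon-flicking process above is essentially a chance procedure for placing sounds on a timeline. Here is a minimal Python sketch of the same idea, assuming twelve sound names and a 45 second track; the sound names, seed and use of a random number generator (rather than real paper icons) are my own illustration.

```python
import random

SOUNDS = [f"sound_{i:02d}" for i in range(1, 13)]   # the twelve sounds (names made up)
TRACK_LENGTH = 45.0                                  # track length in seconds

random.seed(7)   # any seed works; the paper version used real flicked icons, not a PRNG

# "Flick" each icon onto the timeline: wherever it lands is that sound's start time
placements = sorted((random.uniform(0, TRACK_LENGTH), s) for s in SOUNDS)

# Print the resulting sequence, ready to be laid out faithfully in a DAW
for start, sound in placements:
    print(f"{start:5.1f}s  {sound}")
```

The point of the sketch is only that the composer decides the constraints (which sounds, how long) and lets chance decide the placement, exactly as with the dropped icons.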

This track has no panning or fader automation. How the icons landed is how they got represented so some sounds are louder than others but I think it gives the track some depth. The only tracks that were altered were the two long ones at the bottom which had Time Expansion applied to stretch them out closer to 45 seconds.

-
[1] Haines, Christian. 2007. "Perspectives in Music Technology 1A." Seminar presented at the University of Adelaide, 21 March.

Friday, 23 March 2007

Week 4 - AA1 – Input, Output and Monitoring

Today we covered a number of topics: the wallbay, the Mackie inputs, the patchbay and the final step in planning out the session, floor plans. I figured it best to sort the wallbay out first. The wallbay is set up so Studio 2 can use the Dead Room and the Live Room, and Studio 1 can use the Dead Room, the Live Room and Studio 2 as recording rooms. With the wallbay sorted, I can concentrate on signal flow.








Patchbays are fairly simple devices: basically two rows of sockets, with the top row used as outputs and the bottom row as inputs. Patchbays are a remnant of the old telephone operator days. They are used to conveniently patch outboard audio devices like effects and compressors into the signal path, or to take the signal path elsewhere. The settings for patchbays that we discussed were normalised and half-normalised. Normalised means a signal flows directly from the output above to the input below when no patch lead is inserted, and inserting a lead breaks that connection. Half-normalised means that inserting a lead into the output splits the signal: it is taken out on the lead while still flowing into the input below.
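The normalled and half-normalled behaviour can be sketched as a toy routing rule. This is purely my own illustration of the logic, nothing to do with EMU's actual patchbay wiring, and the function and labels are invented for the example:

```python
def patch_point(mode, top_plugged, bottom_plugged):
    """Return where the top-row (output) signal flows for one patch point.

    mode: "normalled" or "half-normalled"
    top_plugged / bottom_plugged: is a patch lead inserted in that row?
    """
    destinations = []
    if top_plugged:
        destinations.append("patch lead (top)")  # signal taken out on the lead
        # Half-normalled: a lead in the top row SPLITS the signal --
        # it still flows down to the bottom-row input as well.
        if mode == "half-normalled":
            destinations.append("bottom-row input")
    else:
        # Nothing plugged in the top row: the normal connection carries
        # the signal straight down to the bottom-row input.
        destinations.append("bottom-row input")
    if bottom_plugged:
        # A lead in the bottom row always breaks the normal and feeds
        # that input from the lead instead.
        destinations = [d for d in destinations if d != "bottom-row input"]
    return destinations

# Fully normalled: a lead in the top row breaks the normal entirely
print(patch_point("normalled", top_plugged=True, bottom_plugged=False))
# Half-normalled: the same lead splits the signal instead
print(patch_point("half-normalled", top_plugged=True, bottom_plugged=False))
```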

For the floor plan, signals 1-12 (drums) and 14-15 (bass) are patched into the Live Room’s wallbay through inputs 1-14 and sent to Studio 2. Signals 13 and 16 (Guitar 1 and guide vox) are sent to Studio 2 through the Dead Room wallbay via inputs 1 and 2. Guitar 1 goes through input 1 in the Dead Room and is sent through to the Live Room (or EMU Space, as it’s written on the chart above) to feed Amp 1.


Well, there was a lot more discussed today, like auxiliaries, busses and headphone sends, but I am already over the 300 word count. Oh yeah. And remember to book the studio rooms with a pencil, or fear the wrath of the roaming pompousaurus.

Tuesday, 20 March 2007

Week 3 - Forum - "There's no noise or sound that isn't music."

This forum involved the students performing a 45 minute piece written by David Harris called “Compossible with Nineteen Parts.” It involved different instruments and voices playing set notes, chords or spoken words freely within different set timeframes as written on the score. It created a random mingling and overlap of notes and sentences that, as David put it, creates the “intentional removal of beat.” The way I understand it, it introduces randomness in that if it were played a hundred times, the composition would be performed differently a hundred times and thus sound completely different a hundred times. Whether that is a good thing or a bad thing is, I guess, up for debate. From a compositional and musical perspective, it was great. It was interesting and fun. Dinging the bell was a highlight for me. Some of us probably had a little too much fun, judging by the murmurs and giggling that regularly occurred from different corners of the room throughout the performance. With so much laughter it was difficult to tell whether this was to be taken as a serious musical piece. Personally, it was more like playing a game than a performance.
When the piece concluded we were invited to ask questions or comment on the experience. This is where the forum was spoiled for me. I got the distinct impression he was not impressed by me asking how the performance was relevant to music technology. I was slightly annoyed, as the only “answers” I received seemed to question my intelligence to understand why we were doing it in the first place, and that “the use of traditional instruments to make noise was to open our minds as to what music can be.”[1] To use the argument that it was to open our minds was a stretch, as I strongly believe that as tech students we should already have open minds as to what music can be. My other comment, which was ignored, was that instruments always make music. It would only be noise if an individual wanted it to be. I didn’t appreciate someone telling me how I should discern between noise and music. That is for the individual to decide. This piece was not noise to me. It was music in a random fashion, but music nonetheless.
-
[1] Harris, David. 2007. "Forum - Music Technology" Seminar presented at the University of Adelaide, 15 March.

Monday, 19 March 2007

Week 2 - AA1 - Studio Quickstart

Ok. What fun to be had. After going through the power-up phase, which involved the girl's name DORA, Laura, Amy and I thought we'd be on track for a recording. BU DUM! After fiddling about for half an hour, I figured we should get some help. We then found out that the inputs on the 192 interface were set to +4 instead of -10 in the Setup menu. With that problem solved we could go ahead and record the wonderful song emanating from the radio. The song was some band howling away. Good grief!
Anyway, back to DORA. We have been instructed to power up the system in a certain order using that name as an acronym: D is for desk, O is for outboard, R is for recorder and A is for amps, powered up in that order whenever using a large studio. Since Studio 2 is a little different from that, we were instructed to use a startup procedure like this:
1. Turn on mains power on wall
2. Turn on Phonic Power Conditioner
3. Turn on the 192 interface
4. Turn on the Mac
5. Turn on the amps
Since the speakers are active, these are the amps.
We set up a 24 bit, 44.1 kHz session and recorded with a Neumann pencil mic. The mic was plugged into the lead, which was plugged into number 1 on the wallbay. That was feeding channel 1 on the Mackie, which was recorded onto track 1 in Pro Tools. With a good strong signal from the radio, we set up a gain structure with the Gain pot on the Mackie so we were metering just under clip. We didn't test the headphone send, as the headphones were all apparently broken and had been taken away from the storeroom. We made our tracks with the shortcut "Shift + apple + N" and flicked between the Mix and Edit screens using "apple + =".

After finishing our recording we followed the DORA procedure but in reverse, ie amps, recorder, outboard then desk, or in Studio 2's case, as listed earlier, 5, 3, 2 then 1. Curiously, they prefer you to skip step 4 and leave the computer on until it's automatically turned off with step 1. Neutralising a desk involves putting all faders, knobs and switches back in their "zeroed", flat or off state so the next people to use it can start with a clean slate and not have to worry about switches left on or off, knobs panned oddly, etc. After shutting the system down, turning off the radio, neutralising the desk and putting away the mic and lead, we were done. Lovely.

Week 3 – CC1 – Desktop Music Environment

This week we were introduced to a small application named Sinusoidal Partial Editing Analysis and Resynthesis, or SPEAR for short. SPEAR is an application for audio analysis, editing and synthesis.[1] As far as editing goes, not only can you cut along the vertical plane as most other editing programs let you do along the waveform, SPEAR also lets you edit along the horizontal plane, thus eliminating specific harmonics or overtones from the sound. I mentioned the waveform just then, but SPEAR doesn’t represent its information as a waveform. It displays hundreds of little lines which represent the individual partials, hence the word “sinusoidal” in its name. It transforms a complex waveform into its sinusoidal parts by a process called Fourier analysis, named after Joseph Fourier.[2] I’d never seen an editor function like this, or even heard of Fourier analysis before, and I was impressed by such a small, and free I might add, program.
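The basic idea of Fourier analysis can be sketched in a few lines of Python with numpy. This is only a toy illustration of the principle, not how SPEAR itself works internally (SPEAR uses a more sophisticated partial-tracking analysis); the signal, sample rate and threshold here are all made up for the example. A "complex waveform" built from three sines is pulled back apart into its sinusoidal partials:

```python
import numpy as np

# Build a toy "complex waveform": three sine partials at 220, 440 and 660 Hz
sr = 8000                      # sample rate in Hz
t = np.arange(sr) / sr         # one second of time
signal = (1.0 * np.sin(2 * np.pi * 220 * t)
          + 0.5 * np.sin(2 * np.pi * 440 * t)
          + 0.25 * np.sin(2 * np.pi * 660 * t))

# Fourier analysis: transform the waveform into the frequency domain
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / sr)
mags = np.abs(spectrum) / (len(signal) / 2)   # scale to sine amplitudes

# Keep only the prominent partials, much as SPEAR's display draws a line per partial
partials = [(f, m) for f, m in zip(freqs, mags) if m > 0.1]
for f, m in partials:
    print(f"{f:.0f} Hz  amplitude {m:.2f}")
```

Deleting one of those partials before resynthesising is, in spirit, the "horizontal cut" described above: the sound is rebuilt from the remaining sines minus that harmonic.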

Different levels of amplitude can be identified by the darker or lighter shading of the partials: darker for louder, lighter for quieter. There is also a palette with editing functions that allow you to select vertically, select horizontally, timestretch, and adjust the amplitude and frequency, plus a lasso tool that lets you select individual areas to edit.
We were asked to take the paper sounds we recorded last week, stick them in SPEAR and experiment with its functions. This is what I came up with.
Sound 1
Just some cutting and hacking away to start with.



Sound 2
I imported this at both 100 Hz and 8000 Hz. At 100 Hz I couldn't hear any difference, but at 8000 Hz (which took forever to import, by the way) a lot of bottom end was missing. I didn't end up saving the 8000 Hz version, but after chopping and playing around with the control sliders I came up with this.



Sound 3
I cut holes in this one, pitched it down and cut some mids. It ended up like some weird howl. Pretty cool though.
I copied square shapes from the file and pasted them back in randomly, then stretched these squares up and down. It sounds a bit 'staticky', like a TV.
Instead of doing random things, I thought I'd do a crosshatch pattern all over it. That didn't really sound very interesting at all, so I selected sections along the time scale and pitch-shifted them up and down. I also randomly selected parts with the lasso tool and deleted small sections.


I removed a lot of partials from this one. I listened for 'weirdness' and kept those that I thought were interesting.

Overall it's a pretty cool application. Now that I've downloaded it, I'm sure I'll come up with other weird and wonderful sounds in the future.
-
[1] Klingbeil, Michael. 1995-2007. "SPEAR: Sinusoidal Partial Editing Analysis and Resynthesis." http://www.klingbeil.com/spear/ (19 March 2007).
[2] Paraphrased from: Wikimedia Foundation. 2001. "Sine Wave." http://en.wikipedia.org/wiki/Sinusoidal (19 March 2007).

Tuesday, 13 March 2007

Week 2 - Forum - Music Technology

I must admit I was slightly confused by this forum. Perhaps that was due to my own interpretation of originality. I am always open to new concepts and ways of thinking, but the lecture did seem rather “black and white” to me. Although we are meant to reflect on the lecture content and express our understanding of it, since after a week I’m still confused by it, I think I’ll talk about my interpretation of originality instead.
To me, originality is something completely new, or something old that has been put into a completely new context. I hear you say, “What is new?” Under the first interpretation, nothing is new. That would mean nothing in history has ever been original. Not even the Biblical character Eve is original, as she was apparently made, or cloned if you will, from Adam’s rib. But if we go with my second interpretation of originality, then that rib has been used in a new context and she has become the original woman. So 'new' could be considered something that hasn't been around very long and is unfamiliar.
The merging of old ideas with new ways of thinking, in my opinion, creates the evolution of new ideas; hence ‘originality’ is born. Outside influences are often introduced into so-called original thinking, thus creating originality. Perhaps it’s not a stretch to think that imitative counterpoint and chordal function being introduced to Renaissance music spurred the creation of a new musical style that happened to become Baroque music.
Pink Floyd’s 1973 album The Dark Side of the Moon is considered by many as the benchmark of what depth in a mix is all about. The music may or may not have been original, but the mix was, since it is the first album considered the benchmark by so many people. The separate elements of music and its production should perhaps be considered and identified as new before the whole can be labelled as original.
After saying all this though, I found the discussion on Erik Satie interesting. I was inspired by the fact that he had no prior formal knowledge of composition yet was not only recognised for his music, but created a new style that has been emulated over the years.
Hopefully after writing this blog I won’t end up like the poor soul who failed his exam forty years straight, but as I said, I was a little confused by the forum and, to be quite honest, I still am.

Sunday, 11 March 2007

Week 2 - CC1 - Editing(1)

Well, that was fun. Saturday night, an hour in, and I still had not managed to record anything. Grrr. Everything seemed correctly patched. The software seemed to be set up correctly. The monitoring was working, but no signal was being recorded. Grrr. In the end I gave up and opened a saved session that was on the computer. I don’t know who made the session, but it recorded audio. Yay. I obviously missed something in the program setup, so I’ll have to follow that up later, but for now, I can record. Yay. Grrr.
With an NT3 resting on the desk and a piece of paper waiting to be tortured in my hands, I hit record and proceeded to twist, rub, tear, fold, scrunch and rip the once proud tree. With around two minutes worth of noises recorded I set forth listening back and creating separate regions of the sounds I liked.



Those regions were then copy/pasted into separate sessions for manipulation.



Sound 1
Tried the Gain Envelope plugin. Pretty basic manipulation.




Sound 2
What would a sound collection be if at least one sound wasn’t reversed? I also used the Mda Ambience plugin on it.




Sound 3
The Boomerang plugin. Interesting.




Sound 4
I used the Gain Envelope again. This time I actually made some random changes to the presets. I also couldn’t resist using the Wunderverb3 plugin.



Sound 5
I normalized this one up 10% as it was very quiet. I liked it better without the normalize, but at least this way the volume on the computer won’t need to be turned up as loud. *Sigh* I couldn’t get the Pencil tool to draw amplitude changes, so again with the Gain Envelope.




Sound 6
Fade-ins. An exciting manipulation, I know, but I made it slightly interesting by starting the fade-ins at selected points along the sound wave.




I was going to cut all six samples into one-second-long pieces and randomly put them together end to end as one of the finished sounds, but I ran out of time so I didn’t end up doing it. In hindsight the sounds probably aren’t manipulated enough for this exercise, and, resisting the urge to manipulate them further at home, I went with the more natural sounds. I actually liked the natural sounds, and if used in a different assignment they could easily be manipulated further when a more specific visual or audio requirement is needed. For example, a slow scrunching sound that I recorded could be used to simulate fire.

Sound editing/design is sort of like magic, in the way a sound is taken out of its original context and used in a new and exciting way. I find it very interesting to find out what sounds were used and manipulated for different films. For example, in George Lucas' 1977 film Star Wars, the sound of the Imperial TIE fighters is an elephant roar mixed with the spray of a car driving past on a wet road. I did find it frustrating since I am used to Pro Tools, but I can see the benefit of using Peak as an introduction to digital editing.

Wednesday, 7 March 2007

Week 3 - AA1 - Session Planning and Management

Planning. Whether it’s a space shuttle launch, the latest slugging match between Rudd and Howard or Grandma’s pot roast, they all need some sort of planning to have a remote chance of a successful outcome. So too with a recording session. This week we learned that a good deal of time can be wasted fumbling about, aimlessly plugging and unplugging microphones and outboard gear, without some thought put into the upcoming recording session. A session plan does not necessarily set the mic selection in stone, and it is not just a plan for mic selection. It is a plan covering the musicians’ instruments, outboard gear, floor plans and signal flow in general. It should also be written so that anyone in the session can quickly reference it later for troubleshooting if something goes wrong with any part of the signal flow. Habits can form, like using the same old mics on the same old sources, which can lead to the same old predictable sounds being recorded; a session plan can help identify these recording patterns and inspire experimentation with new ideas. Session plans can therefore also be used to reflect on past recordings.
Often, many items of outboard gear are patched into one signal, and the same signal can be routed to other places. For example, a kick drum could be patched directly into an outboard preamp, straight into a compressor and into the line in on a channel. It could then have a gate inserted over it. It could also be split and routed out of a spare bus and into a spare channel, which could have another compressor inserted over it to parallel compress the kick, with both channels bussed to a single track. Then there could be an auxiliary being used for an outboard reverb effect on the kick for the headphone mix. Good Lord! That was just to record one signal with one mic!? After 24 tracks there would be more leads than Inspector Morse would care to hear about. It is easy to see how quickly things can get confusing without some sort of template or plan to fall back on for reference. This would all be written down on the session plan before even getting to the studio, preferably after a couple of preproduction sessions with the musicians.
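That kick-drum chain can be written down in a structured way, which is really all a session plan is. Here is a minimal Python sketch of one plan entry; the structure and field names are entirely my own invention, not any standard session-plan format:

```python
# A toy session-plan entry for the kick-drum chain described above --
# the structure and field names are made up for illustration.
kick_plan = {
    "source": "kick drum (mic 1)",
    "main_chain": ["outboard preamp", "compressor", "channel line in", "gate (insert)"],
    "split": {
        "via": "spare bus -> spare channel",
        "chain": ["compressor (insert, parallel)"],
    },
    "record_to": "both channels bussed to track 1",
    "aux": "outboard reverb -> headphone mix",
}

def troubleshoot_order(plan):
    """Flatten the plan into the order you'd check the signal path."""
    steps = [plan["source"], *plan["main_chain"]]
    steps += [plan["split"]["via"], *plan["split"]["chain"]]
    steps += [plan["record_to"], plan["aux"]]
    return steps

# Print a checklist: exactly the quick-glance troubleshooting reference a
# written session plan gives you when something in the chain goes quiet.
for i, step in enumerate(troubleshoot_order(kick_plan), 1):
    print(f"{i}. {step}")
```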
A good plan can save heaps of time at the start of a session, allowing time for some quick experimentation with final mic selection and placement. It saves time if something goes wrong and the signal flow needs troubleshooting: all the signal flow and equipment used for each instrument can be clearly seen at a glance. It saves the headaches and stress of wading through mounds of leads to find out what is going where. Quite frankly, I think you’d have to be either really, really on the ball or completely nuts to go into the studio without some sort of session plan. Besides, when I’m sitting there calm and the recording session is roaring full steam ahead because we sorted out in two minutes a problem that could potentially have stopped the session for half an hour or more, I’ll smile and look at my session plan, because I love it when a plan comes together.
-
Here is a basic session plan for a typical rock band that I put together from the information in Tuesday's tutorial.



Hannibal also loves it when a plan comes together.

Picture taken from the DVD "The Ultimate A-Team." Universal Pictures (Australasia), 2004.

Week 1 - CC1 – Systems

Apparently not much happened during this lesson, which is a good thing, because I was in enough pain to warrant lying in bed on morphine for four days.

Sunday, 4 March 2007

Week 1 - AA1 - Facilities Introduction

Recording studios. Some make you feel warm and cosy. Others feel like hospital operating rooms. Either way, they’re designed with one thing in mind: recording noise. EMU is no exception. From my early observations it’s not only designed to record noise, it’s designed to encourage the recording of great noises. It’s very well laid out, with computer labs lined with M-Boxes and G5s, several smaller recording rooms, a large control room decked out with Pro Tools HD and a Control 24, top quality microphones and a large floating room for use as the main recording space, all just crying out to be used creatively. Speaking of the floating room, it is completely isolated except for the roof. Since the roof is not isolated, the stomps of dance practice leak down to the studio from Floor 6. (Mental note to self: check the CASM timetable for their dance rehearsals.)
A problem I foresee is not wanting to ever go home. Well, maybe not that extreme, but I’m sure the allocated studio time will just fly by when a recording session is underway. Since I can’t comment on the acoustics of the rooms as yet, I’ll comment on the aesthetics. My first impressions of the studio spaces were clean lines and openness, but a little sterile. Some nice wood panelling would warm it up, but it is a class environment after all, and this small nitpick on my behalf is probably unwarranted, especially since I haven’t heard any sounds recorded in there yet. One area that may be problematic is the focus on primarily digital recording. Yes, digital recording has taken over everywhere in the world, although I think it might have been nice to have the opportunity to learn a little of the old analogue skills. I'd just love to record through an old Neve one day. Having said that, I do have a lot to learn about digital recording, so I should probably shut up now. It’s unfortunate for current students that a lot of EMU’s analogue gear has been put away for historical reasons, but it is good that these pieces of equipment are being preserved for future generations to appreciate.
Another thing I did notice is that I keep stumbling on an incline leading into the doorway of the main recording space. Even though I know it is there, I keep tripping on it. Perhaps some sort of marking, like yellow tape, could be put in the corners on the floor. Perhaps that has been considered. Perhaps it is not needed. Perhaps I’m the only idiot who has managed to consistently trip on it. Who knows. Recording studios. Here’s looking forward to recording some noise that makes me feel warm and cosy.