Friday, 30 March 2007

Week 5 - Forum - "Collaborations (2)"

The first presentation was by Luke Digance on the collaboration between Radiohead and Merce Cunningham. Yawn. More collaborations of people. OK, that may be a bit harsh, but I believe there is a lot more to the term “collaborations in music” than simply musicians getting together. There are artists, obviously, but there are also producers, engineers, distributors, equipment and devices being combined into new types of technology, and even animals and the weather. Having the music play during his presentation was not a good idea. At the end of the presentation, David Harris asked Luke the question, “How do you feel about the result of the collaboration? Is it negative or positive?” An “I don’t know” was the reply. O...K...

The second presentation was by Daniel Murtagh about Mike Patton and his dozens of different collaborations with other artists. Another one who insists on talking with the CD playing at the same time. Roll off the tops, turn it down so we can hear you, or TURN IT OFF! I’m sure it was a lovely talk; I just couldn’t hear all of it. The presentation, from what I could hear, was again about the result of the collaboration and not the process of the collaboration. It was more a presentation of his opinion on how great Mike Patton is. C’mon. If fast drums and distorted guitars suddenly make music ‘death metal’, then surely the inclusion of orchestral instruments on albums by Pink or Gwen Stefani makes their albums classical music? Mike Patton was described as “interesting” by Daniel. “Interesting” is certainly one word for the ear-bashing I received. What I heard, in my opinion, was a poor attempt at various musical genres, obviously the result of too many drugs and an out-of-control mental illness. At last!! David Harris saved the day yet again and coaxed an actual collaboration statement from Daniel, about Mike Patton and Norah Jones collaborating via the internet, sending audio files to and fro to create a single.

Darren Slynn was third. Holy crap!! Great presentation. Clear, strong voice. Confident and direct, without his face being buried in a handful of notes. He walked the floor and had good presence. His enthusiasm made me sit up and listen to him. The best presentation so far, in my opinion. I hope I can muster up a presentation half as good. He made the point of distinguishing between a facilitator and a collaborator, referencing Frank Zappa.

Alfred Essemyr was last. His presentation focused on what I mentioned earlier. There was no mention of any particular artists collaborating as such; he focused on collaborating in various areas to get your music heard by the general public. He explained that collaboration can be much more than just two people combining ideas. I loved his innocent statement “collaborate with yourself.” It makes no sense at all, but at the same time makes complete sense.

Looking back on the topic “Collaborations”, the majority of presentations have really just been show and tell on the style of music one likes. It has always been a one-sided view of the collaboration, with the presentation often focusing on the result of the said collaboration and very little on the working methods and processes during the actual collaboration itself.
-

Whittington, Stephen. 2007. Forum Workshop “Collaborations Pt 2”. Forum presented at the University of Adelaide, 29 March.

Digance, Luke. 2007. Forum Workshop “Collaborations Pt 2”. Forum presented at the University of Adelaide, 29 March.

Murtagh, Daniel. 2007. Forum Workshop “Collaborations Pt 2”. Forum presented at the University of Adelaide, 29 March.

Slynn, Darren. 2007. Forum Workshop “Collaborations Pt 2”. Forum presented at the University of Adelaide, 29 March.

Essemyr, Alfred. 2007. Forum Workshop “Collaborations Pt 2”. Forum presented at the University of Adelaide, 29 March.

Tuesday, 27 March 2007

Week 4 - Forum - "Collaborations (1)"

It was interesting to see differing opinions on the concept of collaborations. The first presentation was by David, who talked about Metallica and Michael Kamen joining forces for the album S&M.[1] In my opinion these sorts of collaborations are always a one-way street. By that I mean the orchestra has to adapt to playing the band’s way. There is often very little compromise from the band. For example, during the presentation I learned that the orchestra had to learn to play with in-ear monitoring, was separated by clear baffles, had all its instruments mic’d, and had to get used to playing at huge volumes. Apart from the technical aspect, there was no compromise in the song arrangements. They are exactly the same as the album versions, except with an orchestra jammed in there. Metallica weren’t the first and won’t be the last to do this, but honestly I never saw the point in it all, and thank goodness the trend seems to be over.
Vinnie’s presentation on Trilok Gurtu and the collaboration of culture, religion, beliefs and music was interesting. I’d never heard his music before, but I found it appealing. A lot of world music hangs on cultural balance alone, and this was a good blend of culture and ‘commercial’ sounds. If only Vinnie knew what was to come in the final presentation.
William presented on collaboration between music, sound design and the game development industry. I must apologise, because a lot of this presentation fell on deaf ears as I actually started falling asleep.
And then there was Sanad. Yes, well.
I was glad I didn’t have to present in the first group. Not only was it good to see what was expected from the presentations, it also demonstrated what not to do.
-
[1] Metallica. 1999. S&M. DVD. Elektra Records.

Monday, 26 March 2007

Week 4 - CC1 - Sequencing (1)

I agree completely with Christian's comments about "decoupling the brain from technology."[1] Computers create the trap of 'looking' at sound instead of listening to it. Although I had done MIDI sequencing before, this was a new experience for me. I was intrigued by the idea of 'sequencing without using MIDI.' This experience has brought a new and positive approach to sequencing for me.

This week we needed to imagine a soundscape which would include the twelve sounds we created over the last few weeks. Instead of doing the "imagining part" I did something a little different. I felt I needed a set goal or else I'd be there "imagining" forever. I made little icons and shapes to represent each sound. I took the two longest sounds and used those as the main track over the 45 seconds, laying their respective shapes (a bunny and a long piece with a spiky section) at the bottom. I then took handfuls of random icons and dropped and flicked them onto the space. Where they landed is where their respective sound would appear in the mix. I lined them up vertically so they were neat, and decided to add a few extra "beats" for some sounds where I thought it might sound good. At this point I had no idea what it would sound like. I then sequenced the audio files in Pro Tools so they would faithfully represent the sequence on paper. All done.
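
As a rough sketch of how this process could be mocked up in code (a toy Python model, not my actual Pro Tools session; the noise bursts are stand-ins for the twelve recorded sounds, which would really be loaded from disk):

    import numpy as np

    rng = np.random.default_rng()
    sr, length_s = 44100, 45
    mix = np.zeros(sr * length_s)

    # Stand-ins for the twelve sounds; each is a burst of random length.
    sounds = [0.2 * rng.standard_normal(rng.integers(sr // 4, 2 * sr))
              for _ in range(12)]

    # "Dropping the icons": each sound lands at a random start time
    # and is summed into the 45-second timeline right where it fell.
    for snd in sounds:
        start = rng.integers(0, len(mix) - len(snd))
        mix[start:start + len(snd)] += snd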

This track has no panning or fader automation. How the icons landed is how they were represented, so some sounds are louder than others, but I think it gives the track some depth. The only tracks that were altered were the two long ones at the bottom, which had Time Expansion applied to stretch them out closer to 45 seconds.
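
For the curious, a Time Expansion-style stretch can be roughed out with a naive overlap-add scheme. This is only a toy sketch that lengthens a signal while roughly keeping its pitch; Pro Tools' actual algorithm is far more sophisticated:

    import numpy as np

    def ola_stretch(x, factor, sr, grain_ms=50):
        """Naive overlap-add time stretch: factor > 1 lengthens the
        audio while roughly preserving pitch. Expect some warble."""
        grain = int(sr * grain_ms / 1000)
        hop_in = grain // 2                  # analysis hop
        hop_out = int(hop_in * factor)       # synthesis hop
        window = np.hanning(grain)
        n = max(1, (len(x) - grain) // hop_in)
        out = np.zeros((n - 1) * hop_out + grain)
        for i in range(n):
            out[i * hop_out:i * hop_out + grain] += (
                x[i * hop_in:i * hop_in + grain] * window)
        return out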

-
[1] Haines, Christian. 2007. "Perspectives in Music Technology 1A." Seminar presented at the University of Adelaide, 21 March.

Friday, 23 March 2007

Week 4 - AA1 - Input, Output and Monitoring

Today we covered a number of topics: the wallbay, the Mackie inputs, the patchbay, and the final step in planning out a session, floor plans. I figured it best to sort that wallbay out first. The wallbay is set up so Studio 2 can use the Dead Room and the Live Room, and Studio 1 can use the Dead Room, the Live Room and Studio 2 as recording rooms. So with the wallbay sorted I can concentrate on signal flow.

[Floor plan / signal flow chart]
Patchbays are fairly simple devices. A patchbay is basically two rows of jacks, with the top row used as outputs and the bottom row as inputs. Patchbays are a remnant of the old telephone operator days. They are used to conveniently patch outboard audio devices like effects and compressors into the signal path, or to take the signal path elsewhere. The patchbay configurations we discussed were normalled and half-normalled. Normalled means a signal flows directly from output to input when no patch lead is inserted; half-normalled means a patch lead inserted in the output splits the signal, which still flows through to the input. (On a fully normalled point, inserting a lead into either row breaks that internal link.)
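
To make the normalling logic concrete, here is a small Python sketch of a single patch point. It's a toy model of the behaviour described above, not any particular patchbay:

    def destinations(mode, top_plugged, bottom_plugged):
        """Where does the source (wired to the top/output jack) go?
        mode is 'full' (normalled) or 'half' (half-normalled)."""
        paths = []
        if top_plugged:
            paths.append("out via the patch lead in the top row")
        # The 'normal' is the internal link from top row to bottom row.
        normal_broken = bottom_plugged or (mode == "full" and top_plugged)
        if not normal_broken:
            paths.append("normalled through to the bottom-row destination")
        if bottom_plugged:
            paths.append("bottom-row destination fed by its patch lead instead")
        return paths

    # Half-normalled with a lead in the top row: the signal splits.
    print(destinations("half", top_plugged=True, bottom_plugged=False))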

For the floor plan, signals 1-12 (drums) and 14-15 (bass) are patched into the Live Room’s wallbay through inputs 1-14 and sent to Studio 2. Signals 13 and 16 (Guitar 1 and guide vox) are sent to Studio 2 through the Dead Room wallbay via inputs 1 and 2. Guitar 1 is sent through input 1 in the Dead Room and through to the Live Room (or EMU space, as it’s written on the chart above) to feed Amp 1.


Well, there was a lot more discussed today, like auxiliaries, busses and headphone sends, but I am already over the 300 word count. Oh yeah. And remember to book the studio rooms with a pencil, or fear the wrath of the roaming pompousaurus.

Tuesday, 20 March 2007

Week 3 - Forum - "There's no noise or sound that isn't music."

This forum involved the students performing a 45-minute piece written by David Harris called “Compossible with Nineteen Parts.” It involved the use of different instruments and voices playing set notes, chords or spoken words freely within different set timeframes as written on the score. It created a random mingling and overlap of notes and sentences that, as David put it, creates the “intentional removal of beat.” The way I understand it, it introduces randomness in that if it were played a hundred times, the composition would be performed differently a hundred times, thus sounding completely different a hundred times. Whether that is a good thing or a bad thing, I guess, is up for debate. From a compositional and musical perspective, it was great. It was interesting and fun. Dinging the bell was a highlight for me. Some of us probably had a little too much fun, judging by the murmurs and giggling that regularly occurred from different corners of the room throughout the performance. With so much laughter it was difficult to tell whether this was to be taken as a serious musical piece. Personally, for me it was more like playing a game than a performance.
When the piece concluded we were invited to ask questions or comment on the experience. This is where the forum was spoiled for me. I got the distinct impression he was not impressed by me asking how the performance was relevant to music technology. I was slightly annoyed, as the only “answers” that I received seemed to question my ability to understand why we were doing it in the first place, and that “the use of traditional instruments to make noise was to open our minds as to what music can be.”[1] To use the argument that it was to open our minds was a stretch, as I strongly believe that as tech students we should already have open minds as to what music can be. My other comment, which was ignored, was that instruments always make music. It would only be noise if an individual wanted it to be. I didn’t appreciate someone telling me how I should discern between noise and music. That is for the individual to decide. This piece was not noise to me. It was music in a random fashion, but music nonetheless.
-
[1] Harris, David. 2007. "Forum - Music Technology." Seminar presented at the University of Adelaide, 15 March.

Monday, 19 March 2007

Week 2 - AA1 - Studio Quickstart

OK. What fun to be had. After going through the power-up phase, which involved the girl's name DORA, Laura, Amy and I thought we'd be on track for a recording. BU DUM! After fiddling about for half an hour, I figured we should get some help. We then found out that the inputs on the 192 interface were set to +4 instead of -10 in the Setup menu. With that problem solved we could go ahead and record the wonderful song emanating from the radio. The song was some band howling away. Good grief!
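
For the record, +4 and -10 aren't just labels; they reference different standards and voltages (+4 dBu for professional gear, -10 dBV for consumer gear), which works out to a level difference of roughly 11.8 dB. A quick Python check of the arithmetic:

    import math

    v_pro = 0.775 * 10 ** (4 / 20)       # +4 dBu (dBu references 0.775 V)
    v_consumer = 1.0 * 10 ** (-10 / 20)  # -10 dBV (dBV references 1 V)

    diff_db = 20 * math.log10(v_pro / v_consumer)
    print(f"+4 dBu  = {v_pro:.3f} V RMS")       # about 1.228 V
    print(f"-10 dBV = {v_consumer:.3f} V RMS")  # about 0.316 V
    print(f"difference = {diff_db:.1f} dB")     # about 11.8 dB
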
Anyway, back to DORA. We have been instructed to power up the system in a certain order using that name as an acronym: D is for desk, O is for outboard, R is for recorder and A is for amps, for startup in that order whenever using a large studio. Since Studio 2 is a little different from that, we were instructed to use a startup procedure like this:
1. Turn on mains power on wall
2. Turn on Phonic Power Conditioner
3. Turn on the 192 interface
4. Turn on the Mac
5. Turn the amps on.
Since the speakers are active, these are the amps.
We set up a 24-bit, 44.1 kHz session and recorded with a Neumann pencil mic. The mic was plugged into the lead, which was plugged into number 1 on the wallbay. That fed channel 1 on the Mackie, which was recorded onto track 1 in Pro Tools. With a good strong signal from the radio, we set up a gain structure with the Gain pot on the Mackie so we were metering just under clip. We didn't test the headphone sends, as they were apparently all broken and had been taken away. We made our tracks with the shortcut "Shift + apple + N" and flicked between the Mix and Edit screens using "apple + =".

After finishing our recording we followed the DORA procedure, but in reverse, i.e. amps, recorder, outboard, then desk; in Studio 2's case, steps 5, 3, 2 then 1 as listed earlier. Curiously, they prefer you to skip step 4 and leave the computer on until it's automatically turned off with step 1.

Neutralising a desk involves putting all faders, knobs and switches back in their "zeroed", flat or off state, so the next people to use it can start on a clean slate and not have to worry about switches that may be left on or off, knobs that are panned oddly, etc. After shutting the system down, turning off the radio, neutralising the desk and putting away the mic and lead, we were done. Lovely.

Week 3 - CC1 - Desktop Music Environment

This week we were introduced to a small application named Sinusoidal Partial Editing Analysis and Resynthesis, or SPEAR for short. SPEAR is an application for audio analysis, editing and synthesis.[1] As far as editing goes, not only can you cut along the vertical plane, as most other editing programs let you do with a waveform; SPEAR also lets you edit along the horizontal plane, eliminating specific harmonics or overtones from the sound. I mentioned waveforms just then, but SPEAR doesn’t represent its information as a waveform. It displays hundreds of little lines which represent the individual partials, hence the word “sinusoidal” in its name. It transforms a complex waveform into its sinusoidal parts by a process called Fourier analysis, named after Joseph Fourier.[2] I’d never seen an editor function like this or even heard of Fourier analysis before, and I was impressed by such a small, and free I might add, program.
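
Fourier analysis is easy to demonstrate. This minimal numpy sketch builds a complex tone from three sinusoids (a made-up 220/440/660 Hz example) and then recovers those partials with an FFT, the same decomposition that underlies SPEAR's display:

    import numpy as np

    sr = 44100
    t = np.arange(sr) / sr
    # A complex tone: a fundamental plus two overtones at lower levels.
    x = (1.00 * np.sin(2 * np.pi * 220 * t)
         + 0.50 * np.sin(2 * np.pi * 440 * t)
         + 0.25 * np.sin(2 * np.pi * 660 * t))

    mags = np.abs(np.fft.rfft(x)) / (len(x) / 2)  # normalised amplitudes
    freqs = np.fft.rfftfreq(len(x), 1 / sr)

    # List the prominent partials, loudest first, the same idea as
    # SPEAR's darker-equals-louder shading.
    for f, a in sorted(zip(freqs, mags), key=lambda p: -p[1])[:3]:
        print(f"{f:7.1f} Hz  amplitude {a:.2f}")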

Different levels of amplitude can be identified by the darker or lighter shading of the partials: darker for louder, lighter for quieter. There is also a palette of editing functions that allow you to select vertically, select horizontally, time-stretch, and adjust the amplitude and frequency, plus a lasso tool that allows you to select individual areas to edit.
We were asked to take the paper sounds that we had recorded last week, stick them in SPEAR and experiment with its functions. This is what I came up with.
Sound 1
Just some cutting and hacking away to start with.

Sound 2
I imported this at both 100 Hz and 8000 Hz. At 100 Hz I couldn't hear any difference, but at 8000 Hz (which took forever to import, by the way) a lot of bottom end was missing. I didn't end up saving the 8000 Hz version, but after chopping and playing around with the control sliders I came up with this.

Sound 3
I cut holes in this one, pitched it down and cut some mids. It ended up like some weird howl. Pretty cool, though.
I copied square shapes from the file and pasted them back on randomly, stretching the squares up and down. It sounds a bit staticky, like a TV.
Instead of doing random things, I thought I'd do a cross-hatch pattern all over it. That didn't really sound very interesting at all, so I selected sections along the timescale and pitch-shifted them up and down. I also randomly selected parts with the lasso tool and deleted small sections.

I removed a lot of partials from this one. I listened for 'weirdness' and kept those that I thought were interesting.

Overall, it's a pretty cool application. Now that I've downloaded it, I'm sure I'll come up with other weird and wonderful sounds in the future.
-
[1] Klingbeil, Michael. 1995-2007. "SPEAR: Sinusoidal Partial Editing Analysis and Resynthesis." http://www.klingbeil.com/spear/ (19 March 2007).
[2] Paraphrased from: Wikimedia Foundation. 2001. "Sine Wave." http://en.wikipedia.org/wiki/Sinusoidal (19 March 2007).