Friday 21 November 2008

CC2 Major Assignment

A microphone is placed against a wall, facing a speaker placed against the opposite wall. The feedback is controlled via a filter and the volume raised until resonance occurs in the room. An audio file, “CCfreddie2008” (included in the zip file), was recorded and played through the audio illusion part of the patch in the two separate playback patches that can be seen in the bottom right. Unfortunately the resonance was too loud to hear properly in the recording, although it sounded quite cool live in the room, where I was sitting closer to the speaker than the mic was. The audio illusion uses the resonant wavelength to determine speed and delay time so as to create an overlap in the playing of the two files. I made some manual adjustments on the fly near the end of the recording to add some variety.
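For anyone curious, here is a rough Python sketch of the relationship I was playing with between the room's resonance and the delay/speed settings. The room length and speed of sound are assumptions for illustration, not measurements from the actual room.

```python
# Hypothetical sketch (not the actual patch): relating a room's resonant
# wavelength to a playback delay time, assuming simple axial room modes.
SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C

def axial_mode(room_length_m, harmonic=1):
    """Frequency (Hz) and wavelength (m) of an axial room mode."""
    freq = harmonic * SPEED_OF_SOUND / (2 * room_length_m)
    wavelength = SPEED_OF_SOUND / freq
    return freq, wavelength

def one_way_delay_ms(room_length_m):
    """Time (ms) for sound to travel wall to wall once."""
    return 1000.0 * room_length_m / SPEED_OF_SOUND

if __name__ == "__main__":
    L = 8.0  # assumed room length in metres
    f, lam = axial_mode(L)
    print(f"Fundamental resonance ~{f:.1f} Hz, wavelength {lam:.1f} m")
    print(f"Wall-to-wall delay ~{one_way_delay_ms(L):.1f} ms")
```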
-
-
Haines, Christian. 2008. “CC2 – Creative Computing Semester 2.” Seminars presented at the University of Adelaide.

1990-2005 Cycling 74/IRCAM

Thursday 16 October 2008

CC2 Week 8 MIDI & MSP

I focused on getting the $1 argument to work. I figured this would give the most versatility; in the end, it just became more complicated. The patch itself is very simple, but creating the GUI was more work than I anticipated. I think it is a little over the top for what a patch like this needs, but it works as far as I can tell.
-
-
Haines, Christian. 2008. “CC2 – Creative Computing Semester 2.” Seminars presented at the University of Adelaide.
1990-2005 Cycling 74/IRCAM

Wednesday 15 October 2008

Forum Week 9

Who can you see in Mr. Do-Bee’s Magic Mirror?
I was going to write a comparison of music versus noise and how people interpret the context of the two in certain circumstances, but it appears my blogs have been construed as mere rants of late, so I'll focus my efforts this week on other projects. I will say, however, that Luke and David's Sound Collider projects were quite impressive.

Whittington, Steven. 2008. “Forum.” Seminar presented at the University of Adelaide, 9th October.

Tuesday 30 September 2008

AA2 Game Sound week 7

My aesthetic analysis of this particular genre is that the soundscape needs dramatic weight. It is a game built around warfare, and a soundtrack or effects on a par with something like the film “The Sound of Music” simply will not cut it. I would imagine a soundtrack of deep droning sounds, percussive mechanical sounds, a quick marching feel to the tempo, something with a darker edge; some sort of music that draws on the emotion of war itself and represents conflict.
The sound effects need to bring a sense of reality to the game, bringing the on-screen images ‘alive’. To a certain degree they must match our own physical reality for a player to connect with them. The vehicle and creature movements can be a little imaginative in their sounds, but the explosions should remain modest in their sound design so the player has something sonically recognisable.

Edward: sound designer (VO)

Doug: composer

Sanad: sound designer (FX)

Freddie: sound engineer, effects and mixing (team leader)





-
Haines, Christian. 2008. “AA2 –Game Audio.” Seminars presented at the University of Adelaide.

Sunday 28 September 2008

AA2 Game Sound week 8


This is basically a penguin version of Super Mario Brothers. An adventure game of this type usually requires the creation of ‘cutesy’ sounds and a ‘happy’ soundtrack. The assets consist of single sounds; there won’t be multiple versions of a jump sound, for example, in this genre. The asset list will consist of the usual sounds: jump, fall, explode, kick, squish, skid, life-up and so on, 20 in all. I will be using foley and sound editing software to create the sounds and placing them directly into the game, overwriting the original sounds.

Haines, Christian. 2008. “AA2 –Game Audio. Asset Integration.” Seminars presented at the University of Adelaide.
“SuperTux” Bill Kendrick. Accessed 21st August 2008. <>

Thursday 25 September 2008

Again, this was rather straightforward this week. I did run into a small problem: having two separate buttons to operate the basic functions was cumbersome. One button opened a destination for a recording (or opened a file for playback) and another actually started the recording/playback, so I overcame this by banging the two stages at once. That ended up opening a save window every time recording/playback was stopped, so a select object was used so that the second press of the keyboard shortcut ‘r’ or ‘p’ (when the user wants to end recording/playback) is sent only to sfrecord~ or sfplay~ and bypasses the ‘open’ bang.
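The gating logic is easier to show than to describe, so here is a minimal Python sketch of the behaviour. This is not the Max patch itself, and the method names are made up.

```python
# Minimal sketch of the gating described above: the first press of 'r' opens
# a destination and starts recording; the second press only stops the
# recorder and skips the 'open' step entirely.
class RecorderGate:
    def __init__(self):
        self.recording = False

    def press_r(self):
        if not self.recording:
            self.open_destination()   # analogous to banging 'open'
            self.start_recording()    # analogous to starting sfrecord~
            self.recording = True
        else:
            self.stop_recording()     # second press bypasses 'open'
            self.recording = False

    def open_destination(self):
        print("open save dialog")

    def start_recording(self):
        print("start recording")

    def stop_recording(self):
        print("stop recording")

gate = RecorderGate()
gate.press_r()  # opens dialog, starts recording
gate.press_r()  # stops only
```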
-
-
Haines, Christian. 2008. “CC2 – Creative Computing Semester 2.” Seminars presented at the University of Adelaide.
1990-2005 Cycling 74/IRCAM

Friday 19 September 2008

Forum week 8

Get a haircut
After a good few weeks of interaction at Forum we reverted to the old format of insomnia-curing movie viewing. It is my opinion that the ‘favourite things’ format needs reviewing. One could say you could take from this whatever you wanted, and I would say yes: you could take absolutely nothing and yet you could take absolutely everything, and then some. Are we to be analysing the film? Meh. Everyone will no doubt do this in their blogs anyway. Are we to be analysing the presenter on their choice of presented favourite thing? This could go pretty deep. Are we to be analysing students? This could go even deeper. Are we to be analysing the score? I wonder if anyone even blogs this. We are music students after all. Are we to be analysing anything at all? Are we to be awake? Or are we merely being entertained with a show and tell, which brings me back to my review comment.

If something is being presented as a ‘favourite thing’ I would much like to hear why it is their favourite thing. I am far more interested in what makes this particular thing appeal to the presenter than in a history lesson about the ‘thing’ or random tidbits and trivia. It would be much more entertaining to hear whether this ‘favourite thing’ has inspired the presenter in some way, directly or indirectly, in their work or life. Most importantly, why is this a favourite thing of theirs? Was it life changing? How did it influence them? Did it change their career? Did it change their musical choices, writing or listening? Simply stating that they like the sound of it, talking about some history and then pressing play and sitting back is a pretty easy way out if you ask me. Why could we not have focused on a few chapters only? We could have viewed those, perhaps one at a time, received a commentary on why they are significant and then possibly discussed them in more detail with the class.
Presenting something personal as a ‘favourite thing’, such as one’s own composition or art piece, is different: most people already acknowledge the presenter as having a close connection to it, and an audience will respect it as a favourite thing. A video, song, painting or whatever by some other composer or artist, with no real personal connection other than perhaps sentiment, does not pull the same automatic respect from an audience. Without an articulated explanation as to why the film Eraserhead is one of David’s favourite things, the mind is allowed to wander as to the reasons why, which in itself could be dangerous given the nature of the film.

Harris, David. 2008. “Forum.” Seminar presented at the University of Adelaide, 18th September

Sunday 14 September 2008

Forum week 7

Bombings and a bit of boob wobbling? All on September 11? Again? It really seems of little point to merely repeat here what was presented in Forum or voice an opinion on each presentation with a couple of sentences, so I shall explore a slight ‘trend’ that appears to be happening lately. Maybe it is just my imagination, but it appears the boundaries of humour have been tested in recent weeks. I just hope we are not stepping too close to the edge with what is perceived as humour. I guess as long as we all laugh together it is funny; if people are offended or the class goes silent, it is not.

Quoting good old Wikipedia (which may be humorous in itself), humour occurs when:
1. An alternative (or surprising) shift in perception or answer is given that still shows relevance and can explain a situation.
2. Sudden relief occurs from a tense situation. "Humourific" as formerly applied in comedy referred to the interpretation of the sublime and the ridiculous, a relation also known as bathos. In this context, humour is often a subjective experience as it depends on a special mood or perspective from its audience to be effective.
3. Two ideas or things are juxtaposed that are very distant in meaning emotionally or conceptually, that is, having a significant incongruity.
4. We laugh at something that points out another's errors, lack of intelligence, or unfortunate circumstances; granting a sense of superiority.

Well there you have it. People can do with that as they see fit.

Anyway (and not intended to be related to the above paragraphs, but it will appear to be so)… this appears to be evidence that Sanad should perhaps have played this composition in last week's Forum instead of what he improvised then. This piece seems to have drawn a much more emotional response. :)

Whittington, Steven. 2008. “Forum.” Seminar presented at the University of Adelaide, 11th September.

MSP shakes head in disbelief. LOL!!




After all these errors coming up, I guess in the end MSP gave up on me. Hilarious.

CC2 Week 6 Sampling 1

This seemed a lot more straightforward than previous weeks. It is more like the routing and patching of real audio equipment, so it was more familiar to me in my mind. I am still a little lost as to the actual difference between groove~ and wave~. Judging by the reading, wave~ seems to be more useful for waveshaping. I am probably wrong in this thinking. It would not be the first time. Happy smile.
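Purely as a note to myself, and going only on my reading of the docs (so treat it as an assumption), here is a rough Python sketch of how I currently picture the difference:

```python
import math

# Rough conceptual sketch, not the MSP internals: a groove~-style player
# advances through the sample itself at a playback rate, while a wave~-style
# reader is driven by an external 0..1 phase (usually a phasor~), which is
# why it suits wavetable/waveshaping uses.
def groove_style(buffer, rate, n):
    pos, out = 0.0, []
    for _ in range(n):
        out.append(buffer[int(pos) % len(buffer)])
        pos += rate                      # the player owns the position
    return out

def wave_style(buffer, phases):
    # the caller owns the position: each phase in 0..1 maps onto the buffer
    return [buffer[int((p % 1.0) * len(buffer))] for p in phases]

table = [math.sin(2 * math.pi * i / 64) for i in range(64)]
print(groove_style(table, 0.5, 4))
print(wave_style(table, [0.0, 0.25, 0.5, 0.75]))
```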
-
MSP
-
Haines, Christian. 2008. “CC2 – Creative Computing Semester 2.” Seminars presented at the University of Adelaide.

1990-2005 Cycling 74/IRCAM

Monday 8 September 2008

AA2 Game Sound week 6


Well, I am now officially confused by all of this. I am well aware that an asset is merely a timeframe taken from one particular sound file, but here we are looking at modifying a game's audio files, with the example being about seven separate audio files representing one weapon, all referred to as assets. It seems the term ‘assets’ is used loosely, and I shall take that on board.
I have used three separate texture sounds of walking on various surfaces (wood, metal and swamp). I assume from the readings that these will all operate under the same global volume settings and will be triggered by the programming code according to what surface is being walked on and at what speed. This is about the extent of my understanding, and I know it is not very much. What a shame I am not a mathematical genius as well. To be honest there are far too many letters and numbers floating about in our classes this semester for my peanut to take in. Give me lateral over logic any day. So… when are we moving on from programming and going back to sound and music again?
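Here is a hypothetical Python sketch of how I imagine the engine picks and paces the footstep assets; the file names and numbers are made up, not taken from the game or FMOD.

```python
import random

# Hypothetical sketch of how a surface-driven footstep system might work:
# the surface under the player selects the texture sound, and walking speed
# sets how often it retriggers; everything here is illustrative only.
FOOTSTEPS = {
    "wood":  ["wood_step_1.wav", "wood_step_2.wav"],
    "metal": ["metal_step_1.wav", "metal_step_2.wav"],
    "swamp": ["swamp_step_1.wav", "swamp_step_2.wav"],
}

def footstep_event(surface, speed_m_per_s, global_volume=1.0):
    sample = random.choice(FOOTSTEPS[surface])
    interval_s = 0.9 / max(speed_m_per_s, 0.1)   # faster walk = shorter gap
    return {"sample": sample, "interval_s": interval_s, "volume": global_volume}

print(footstep_event("metal", 2.0))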

-
MP3
-
Haines, Christian. 2008. “AA2 –Game Audio. FMOD.” Seminars presented at the University of Adelaide.

FMOD. 2006-2007.

Friday 5 September 2008

Forum Week 6

Today was the most fun and enjoyable forum I have been involved in to date. A lot of this had to do with the direct interaction between presenters and listeners and the relaxed nature of class discussion, although some may just call this clowning around. Instead of listing my opinions on each presenter's choices I will now elaborate on my own choices and waffle on a bit more.
The third year students will probably recall my presentation last year on so-called formulas in music and generating particular emotional responses with sound. Since psychoacoustics is already an area of interest to me, I found this forum very enjoyable.
It is my understanding that the Indian rasas are meant to trigger a direct emotional response in the audience and not simply to project an aural cliché* of that emotion, therefore I made a conscious decision not to simply play musical sounds or manipulated sound effects. I wanted my audio choice to be a single subject with many possible interpretations based on timbre, pitch, frequency and generally on how it made me physically respond. I wanted the sound to be organic so as to relate to humans more. I considered various animal noises, and then realised which creature spurs the most emotional response from other human beings on the planet: babies.
Instead of repeating what I said at forum, I will move on at this point. It would appear that to get a ‘true’ emotional response from a sound, and not a mere clichéd memory of a sound, one would need to be in a theta state. In other words, you would not need to think about it; it would just happen. You have suddenly gotten goosebumps while hearing something, for example. Quite often during today's presentations we needed to do a lot of thinking, or simply guess what emotion the sound was representing. That sends us up into beta states. So does that mean a piece of the puzzle of emotional response to sound lies somewhere around 5 to 8 Hz? Is emotion in sound based on sub frequencies? Possibly, but I am only speculating. If there is a way to identify what makes us respond a particular way to certain sounds, could it be extracted and included in non-clichéd, 'off the wall' music to create the same intended emotional response as the clichéd music? One thing I foresee in such a future is that music would lose all melody and harmony. Would this be a good thing for music? Perhaps deep down we all like the clichés and familiarities in music and would not want it any other way.
-
For anyone interested in the video files I used today, here they are in the order I played them in flash format.
bhayanaka (fear/terror)
-
Whittington, Steven. 2008. “Forum.” Seminar presented at the University of Adelaide, 4th September.

* Edward Kelly had stated that we [the public] had grown used to Hollywood's clichéd use of the same chord progressions and sounds to express a particular emotion.

CC2 Week 5 Synthesis 2

I do not know why this does not work. I divided the modulator by the carrier as the formula stipulates. The ‘FM test’ works without the harmoniser, as the sideband pics in the earlier blog demonstrate, but I fail to see where the error lies now. Perhaps in future I should just copy the tutorials into my patches as other students are suggesting.
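For reference, the textbook FM relationship I was trying to patch looks something like this as a quick Python sketch. The frequencies, ratio and index here are arbitrary, not the values in my patch.

```python
import math

# Minimal FM sketch: carrier frequency fc, modulator frequency fm = fc * ratio,
# and modulation index I. Sidebands appear at fc +/- k*fm, and the ratio
# between carrier and modulator sets how harmonic the result sounds.
def fm_sample(t, fc=440.0, ratio=0.5, index=2.0, amp=0.5):
    fm = fc * ratio
    return amp * math.sin(2 * math.pi * fc * t + index * math.sin(2 * math.pi * fm * t))

sr = 44100
samples = [fm_sample(n / sr) for n in range(sr)]  # one second of audio
print(min(samples), max(samples))
```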
-
Although this is now late, the patch is now working if anyone is interested. Apparently there were conflicting patch cords sending extra data to the same place.
-
Patch
-
Haines, Christian. 2008. “CC2 – Creative Computing Semester 2.” Seminars presented at the University of Adelaide.

1990-2005 Cycling 74/IRCAM

Monday 1 September 2008

Forum Week 5


Negativland. Do they have a point?

What the!? Negativland is a group that rebels against corporate America and its social and political influences (or simply because they feel like being a pain to a particular celebrity on any given day) by taking videos, music or advertisements owned by these corporations and editing them to reflect Negativland's own personal and/or political views instead? What the!? Who's trying to brainwash whom? Pfft. Honestly, if a satirical approach to copyright-protected works is what they were trying to accomplish, I think they missed the mark, as artists such as Weird Al Yankovic produce satirical pieces with just a tad more humour. And I do mean just a tad. I think Negativland has no point at all, much like films such as Flying High, but I guess I could find people ready to queue up and tell me all about the cinematic genius of films such as these. I am certainly not offended by them, I simply do not find them amusing. I guess I am hard to please.

And the use of the above pic is in no way meant to reflect these opinions back on the presenter. The forum was about Steven's favourite things, and each to their own. I just didn't want to waste the pic, to be honest.

Whittington, Steven. 2008. “Forum.” Seminar presented at the University of Adelaide, 28th August.

AA2 Game Sound week 5 Audio Engine Overview


“Most new users simply want to add a set of wave files and start building things. However, this is not how the FMOD designer workflow is structured.”[1] This statement literally made me moan out loud. I really hate unintuitive programs. The tutorial is difficult to follow in places. The section “Using the Auto-pitch feature” said to select the option ‘sound definition properties.’ Unfortunately this menu is actually named differently, and again I wasted time looking through dropdown menus until I found the right one. The Simulating Distance tutorial was completely confusing. There did not seem to be an undo function, which was also annoying; maybe there was one but it was simply unintuitive in its location. I got there in the end and played around with the sounds slightly to recreate an old plane dogfighter game. I did find the FMOD flange always distorted the crossfades, so I reduced the overall levels. Once again it boils down to me getting familiar with these technologies so they can simply be used as creative tools. I did find this a lot easier than learning Max/MSP, which still continues to cause stress.
-
-
Haines, Christian. 2008. “AA2 –Game Audio. FMOD.” Seminars presented at the University of Adelaide.

[1] “Overview and Workflow Philosophy.” Chapter 2: Getting Started. FMOD Designer: User Manual. Creative Labs, 2007, p. 19.

FMOD. 2006-2007.

Thursday 28 August 2008

sideband art

This has nothing to do with any assignments per se, but I just had to blog these pics after discovering the sonogram object. I came up with these psychedelic sideband art pieces that are quite Doctor Who-ish.




Wednesday 27 August 2008

CC2 Week 4


I don't know if this is any good. It may be too simple. I think I understand the concept, but I am not confident. Making the patch seems difficult, yet I seem to be able to see how to modify it after the fact. What I was thinking of doing was adding an audio input and splitting it into the poly~ to create a reverb. I assume delays could be set between each function window, say 5 ms, to determine space. This could also be manually adjusted with a single knob connected to a series of multipliers between each function, essentially making a logarithmic decay. I would assume there would need to be at least 500 function windows to make this work half decently. One thing I could not do was bang a list directly into the function to act as presets.
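To make the decay idea clearer to myself, here is a very rough Python sketch under my own assumptions (far fewer taps than the 500 mentioned, and arbitrary spacing and decay values):

```python
# Rough sketch of the idea above: many short delay taps whose gains decay
# multiplicatively, which is roughly what a series of multipliers between
# each stage would give.
def multitap_reverb(dry, tap_spacing_samples=220, taps=50, decay=0.9):
    out = list(dry)
    gain = 1.0
    for k in range(1, taps + 1):
        gain *= decay                         # the "series of multipliers"
        offset = k * tap_spacing_samples      # ~5 ms per tap at 44.1 kHz
        for i, s in enumerate(dry):
            j = i + offset
            if j < len(out):
                out[j] += s * gain
    return out

impulse = [1.0] + [0.0] * 44100
tail = multitap_reverb(impulse)
print(tail[220], tail[440])  # first two taps: 0.9, 0.81
```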
-
-
Haines, Christian. 2008. “CC2 – Creative Computing Semester 2.” Seminars presented at the University of Adelaide.

1990-2005 Cycling 74/IRCAM

Tuesday 26 August 2008

Forum Week 4

After searching around, the best source seemed to be www.rasas.info, as none of the other sources related to sound itself; it was all about spiritualism, cult movements or yoga.

Hasya appears to be about laughter, but separated from forced laughter that simply becomes sarcastic: an inner joy that cannot be self-created, such as a direct connection with God.

Adbhuta - “The key to wonder is to remain open-minded toward the miracle of life, which can be experienced in everything.”[1] Very cryptic as most of these spiritual cult leaders are.

Veera explains that apparently the desire for freedom and independence is an illusion, as everything and everyone is interdependent. Eh? More religious philosophical gobbledegook.

Loneliness in general is the main cause of karuna.

What has this got to do with sound? Nothing as far as I can tell so I will have to make my own interpretations of this as will everyone else. A copy of the Natya Shastra would be handy.


Whittington, Steven. 2008. “Forum. Composition Workshop” Seminar presented at the University of Adelaide, 21st August.

Harris, David. 2008. “Forum. Composition Workshop” Seminar presented at the University of Adelaide, 21st August.

[1] “9 Rasas: The Yoga of Nine Emotions” www.rasas.info. Accessed 25th August 2008. http://www.rasas.info/wonder_mystery_curiosity_ashtonishment_adbhuta_adbhut_rasa.htm

“Rasa (aesthetics)” Wikimedia Foundation Inc. Accessed 25th August 2008
http://en.wikipedia.org/wiki/Navarasa


“Natya Shastra” Wikimedia Foundation Inc. Accessed 25th August 2008. http://en.wikipedia.org/wiki/Natya_Shastra_of_Bharata

Monday 25 August 2008

AA2 Game Sound week 4 Game Engine Overview


Command & Conquer 3: Tiberium Wars

Game engine: SAGE (Strategy Action Game Engine) highly modified
Electronic Arts, June 14 2001.

Audio engine: PATHFINDER is a proprietary audio engine of EA, although I could not find evidence of whether it is used in C&C3 or not.

After hours of searching this is as much as I could find on this game for this week's exercise. I am not sure why we are doing this anyway, as a company would surely give this information in a brief. I believe we need to know how to do the work, and the delivery requirements of the assets should be spelled out by the game developers. The capabilities and limitations of a particular audio engine are something that should be addressed right at the start by the sound designers. It is no different from us learning to record a band but ignoring the fact that a 192 kHz sample rate will not work on CD, or that the recording was supposed to end up on vinyl. We need to know the final medium as early as possible, not start Googling or looking in the library to find out what type of engine an employing company will use for any given game. Either way, I would be pretty confident that a company would state what the engine is in a brief so the sound designers can work within the particular parameters and requirements needed for the game.

“SAGE Engine” Wikipedia. Accessed 24th August 2008. http://en.wikipedia.org/wiki/SAGE_engine

“SAGE” Mod DB. Accessed 24th August 2008. http://www.moddb.com/engines/sage-strategy-action-game-engine

Haines, Christian. 2008. “AA2 – Game Audio Analysis.” Seminars presented at the University of Adelaide.


“Command & Conquer 3: Tiberium Wars” EA games, 2007.

AA2 Game Sound week 3 – Process and Planning


Command & Conquer 3: Tiberium Wars
Electronic Arts, 2007

Without a direct audio folder to open, I could not examine the files directly. DirectX is used, so I assume they are wave files. I am not sure of the metadata or file names the designers used, so I applied my own based on obvious choices. It is probably not 44.1 kHz audio either. Spoken dialogue is a large component of this game. There are many character classes. The main class is not an in-game character but an emotionless voice (male or female depending on the faction chosen: NOD or GDI) from the command centre telling the player certain things have been done or are being done. Each vehicle, soldier and device has its own characteristic sounds or dialogue. The ‘driver’ of a vehicle speaks different dialogue when commanded around the battlefield. I noticed in the “Almost too quiet”[1] article there is a column named ‘chance’; I assume this shows the programmers how often a sound is to be triggered, depending on which parameters are in effect in the game at that particular moment. The GDI faction has been chosen for this analysis.
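A small Python sketch of my guess at what the ‘chance’ column does; the events, file names and probabilities are purely illustrative, not taken from the game data.

```python
import random

# Guess at the 'chance' mechanism: each listed sound carries a probability,
# and the engine rolls against it whenever the matching game event fires.
SOUND_TABLE = [
    {"event": "unit_select", "file": "gdi_yes_sir.wav", "chance": 0.6},
    {"event": "unit_select", "file": "gdi_orders.wav",  "chance": 0.4},
]

def trigger(event):
    for entry in SOUND_TABLE:
        if entry["event"] == event and random.random() < entry["chance"]:
            return entry["file"]
    return None  # silence is also a valid outcome

print(trigger("unit_select"))
```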
-
-
[1] Lampert, Mark. 2006. “It’s Quiet … Almost Too Quiet.” Bethesda Softworks.

Childs, G. W. 2006. “Chapter 2: Sound Database.” Creating Music and Sound for Games. Thomson Course Technology.

Haines, Christian. 2008. “AA2 – Game Audio Analysis.” Seminars presented at the University of Adelaide.

“Command & Conquer 3: Tiberium Wars” EA games, 2007.

Wednesday 20 August 2008

CC2 Week 3 Polyphony and Instancing


Basically it appears that any mono audio signal path can easily be made polyphonic by inserting the poly~ object into the signal path. Now I understand what Christian meant when he referenced a reverb effect: one can create the initial algorithm and then use poly~ to easily create all the instances of delay. I am unsure why “args” now has to be placed in the object to identify my second argument (the division by the number of voices to prevent overloading), as anyone using the object has to know to put that in to make it work, but I guess that is the way it has to be.
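As a loose analogy only (this is not how poly~ is implemented, just how I picture it), something like this in Python:

```python
# Loose analogy of instancing: one voice definition is copied N times and
# note events are farmed out to the copies. The round-robin allocation and
# the per-voice divisor argument are my own assumptions for the sketch.
class Voice:
    def __init__(self, voice_id, divisor):
        self.voice_id = voice_id
        self.divisor = divisor      # like the extra argument passed to each copy
        self.note = None

    def play(self, note):
        self.note = note
        print(f"voice {self.voice_id}: note {note}, gain 1/{self.divisor}")

class Poly:
    def __init__(self, n_voices):
        self.voices = [Voice(i, n_voices) for i in range(n_voices)]
        self.next = 0

    def note_on(self, note):
        self.voices[self.next].play(note)      # simple round-robin allocation
        self.next = (self.next + 1) % len(self.voices)

p = Poly(4)
for n in (60, 64, 67):
    p.note_on(n)
```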
-
-
Haines, Christian. 2008. “CC2 – Creative Computing Semester 2.” Seminars presented at the University of Adelaide.

1990-2005 Cycling 74/IRCAM

Grosse, Darwin. 2006. “The Poly Papers (Part 1).” http://www.cycling74.com/story/2005/5/3/135756/4001

Monday 18 August 2008

Forum Week 3

It seems that the double booking, or lack of booking, has somewhat overshadowed this week's forum. I'm not so disappointed with the booking hiccup; what annoys me is the unfortunate disruption of chatter from the classical students. Did they think they were walking into the canteen? The kick in the teeth continued when a lecturer even interrupted proceedings by barking instructions over a student's presentation. Of course we, and even Steven, literally had to grin and bear it. Anyway, as much as everyone loves to hear my opinions on such matters, we'll move along. Josh's work on the animated film Toothbrush Moustache was a highlight for me. It was very well mixed and completely added to the vision. The other piece, and sorry I don't know his name, was the last one, where I think a table soccer game was recorded. The stereo track was separated and one side moved along. This is truly in the essence of musique concrète. Time and time again it seems that, out of necessity, we hear editing, cutting, pasting and effects added to create the finished track. The editing tends to guide the finished track too much, and this kind of approach is really no different from editing a ‘normal’ song. Simply misaligning one side of a stereo track creates the unexpected sonic surprises and quirkiness of musique concrète without the piece actually being heavily digitally edited. Brilliantly simple. I wish I'd thought of it for my project last year.

Whittington, Steven. 2008. “Forum: 1st Year Presentations” Seminar presented at the University of Adelaide, 14th August.

Friday 8 August 2008

CC2 Week 2 Signal Switching & Routing

I was not entirely sure what to do with the phasor~ and cycle~ objects, so I just experimented a bit. I ended up with a cheesy, but very cool, old-school science fiction sound generator. At the risk of losing marks, I did include a pic of a UFO for the nostalgia effect; I do believe it is a relevant choice, however. I also made the patch completely stand-alone with the dac~ included, which again I was unsure about. It would be quite simple to alter later if need be. In hindsight, it would probably have been simpler and more convenient to make only a mono fader for now instead of trying to get fancy with a pair of stereo-linked faders.
-
MSP files
-
Haines, Christian. 2008. “CC2 – Creative Computing Semester 2.” Seminars presented at the University of Adelaide.

1990-2005 Cycling 74/IRCAM

Forum Week 2

Program notes: The for and against.

As stated in my earlier blog, the composer often invites the recollection of memories and imagery to influence our mindset for the enjoyment of a piece. Often such a piece is ambiguous with its lyrics, or has none at all, thus allowing the listener to interpret the story their own way. Sometimes the composer would prefer the program notes be a ‘guide’ to the setting and influence the memories recalled, so as to place the listener directly inside that particular environment. This is often true where the composer has a distinct story to tell and would prefer the listener directly understand the composer's meaning of the song. The composer's story is told and understood, but at the same time memory recall is still in effect; the memories are now guided and placed into the composer's environment.
This was particularly true for me while listening to David Harris' composition “Terra Rapta.” Memories of my visit to the Pitjantjatjara lands back in 2005 came back to me. For example, Section I reminded me of a thunderstorm that came rolling through the area, which in turn reminded me of the good humour and joking about the situation in the camp. Section L and Section Q reminded me of bird sounds, which in turn reminded me of wandering around a huge dry river bed with a condenser mic, recording galahs on my tape recorder. The whole piece placed me back in the Pitjantjatjara lands, an environment where my mind could make a connection with what Australia used to be like. A memory of a thunderstorm rolling over me or a recording of bird sounds could be recalled from anywhere, but these particular memories of Fregon and Ernabella were recalled directly under the influence of David Harris' program notes; thus he was directly influencing my memory recall while I listened to the piece.

Harris, David. 2008. “Forum: My favourite things” Seminar presented at the University of Adelaide, 7th August.

AA2 Game Sound week 2 - Game Audio Analysis

TOMB RAIDER
Core Design 1996
Sony Playstation

Analysis of game in sequence.

SFX
Two differently pitched footstep sounds alternate randomly when walking, and are always the same no matter the surface. Landing from a jump has a deeper sound, similar to dropping a large coat. Animal noises use volume changes to give a sense of distance. Levers sliding and doors opening both use a grinding sound, although at a different pitch for each.

MUSIC
There are two distinct soundtracks. One is comprised of ambient sounds using wind, synth tones, a reverberant gong and percussive noises, creating an atmosphere of space and emptiness.
When opening a door or entering a secret area, a brief string cue starts, signalling that one is on the right track. Orchestrated music also starts when in danger or nearing the end of the level. These are the only times music is used; at all other times the spacious, ambient track is used.

VO
Lara makes different grunting noises when hurt, climbing or jumping, although not every time she jumps. There is also a VO of “aha” when she finds a secret area or item.

What was significant about 1993 in regard to video games? The arrival of the first-person shooter and the dawn of multiplayer online gaming: Doom.

MP3

Haines, Christian. 2008. “AA2 – Game Audio Analysis.” Seminars presented at the University of Adelaide.

“Tomb Raider Lvl 1.” YouTube. Accessed 8th August 2008.
http://www.youtube.com/watch?v=3-cr_1UlW10&feature=related

“DOOM.” YouTube. Accessed 6th August 2008. http://www.youtube.com/watch?v=yr-lQZzevwA

Sunday 3 August 2008

Forum Sem2 Week 1

The 'art' of listening. Is the listener the artist? Does the listener become the canvas, the final masterpiece, and the musicians merely the paintbrush? So who then is really creating the art?
Is the listener, based on their own life experiences, translating what they hear into something completely different from what the artist intended? Is this form of subconscious reconstruction of elements similar to, if not the same as, an artist making a piece of work from various sources to create something new? Is this then not art?
Musical genres that come across as ‘loud’ seem to suggest the musician does not want the song to be misinterpreted or retranslated; they not only leave the ear fatigued from volume bombardment, but leave no room for the listener's mind to wander, thus preventing memories gained from other senses from impairing the musician's message. It would then appear that ‘softer’ music allows, perhaps even invites, the listener to utilise memories collected from the other senses. The listener's mind can then add the final pieces of the work to create the final masterpiece.
If the listener is the art, why then would so many musicians ignore this final element?

Whittington, Steven. 2008. “Forum: Listening” Seminar presented at the University of Adelaide, 31st August.

“Types of Listening” AIR University. Accessed 1st August 2008. http://www.au.af.mil/au/awc/awcgate/kline-listen/b10ch4.htm

“Attentitive and Critical Listening: Description.” 1999, Strategic Communication. Accessed 1st August 2008. http://www.chass.ncsu.edu/ccstm/SCMH/morelisten.html

Saturday 2 August 2008

CC2 week 1 Introduction to MSP


I had problems understanding the “volume changer” part of the assignment. I was still in the mindset that if the sample rate is faster than the control rate, then the control rate should not be an issue, as it can't keep up. It dawned on me later that my thinking on this was back to front: the control rate is indeed too slow to keep up with the sample rate, therefore we must alter the audio signal to compensate. I was also thinking the ramp time for volume changes would be constant, i.e. it should ‘duck to zero’ whether I make an adjustment from 45 to 46 or from 2 to 96. Only once I realised that the ramp was noticeable only over large volume changes, and could only work like this anyway because smaller steps are too subtle, did I feel comfortable with my understanding of the task.
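A minimal Python sketch of the ramp idea, with an assumed ramp length of about 10 ms at 44.1 kHz:

```python
# Instead of jumping to the new gain at control rate (which clicks), the gain
# is interpolated across a short ramp of samples, then held at the new value.
def apply_gain_with_ramp(samples, old_gain, new_gain, ramp_len=441):
    out = []
    for i, s in enumerate(samples):
        if i < ramp_len:
            g = old_gain + (new_gain - old_gain) * (i / ramp_len)
        else:
            g = new_gain
        out.append(s * g)
    return out

block = [1.0] * 1000
ramped = apply_gain_with_ramp(block, 0.2, 0.8)
print(ramped[0], ramped[220], ramped[999])  # 0.2 -> ~0.5 -> 0.8
```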
-
-
Haines, Christian. 2008. “CC2 – Creative Computing Semester 2.” Seminars presented at the University of Adelaide.

1990-2005 Cycling 74/IRCAM

Monday 14 July 2008

...and they all lived happily ever after

A few people at Uni are aware of my drama concerning a Transformer I bought off eBay America, so I thought I would blog an update. After nearly five months of phone calls and letter writing to various government departments, I walked out of SA Customs today with a Masterpiece Megatron. This was seized back in February and it seemed highly unlikely that I would be able to get it, so I am glad the money wasn't wasted. Woohoo! The customs firearms official said I was the only person in SA (that she is aware of, anyway) who has managed to get a B709A for a Megatron since the change in the law regarding replica firearms. I was pretty rapt, so I took a 'victory photo' outside customs.



Friday 27 June 2008

Semester 1 is DONE



Now more effort can be spent examining the philosophical ideals of time and space, of which I think this photo is a fine example.

Thursday 26 June 2008

Is this really necessary? Do people really find this amusing? Am I overreacting? I may be overreacting to this particular image in itself, but I (and others) have tolerated Doug's racist slurs and backstabbing jokes directed at various students for the last eighteen months and at this point I have had a gutful.
I can only assume it is retaliation for my post on his blog because, as far as I can tell, this image has nothing to do with any theme in his CC2 patch and, as usual, he has done this to deliberately incite a hostile reaction. The catchphrase in this picture is ironic, as very soon he will no doubt be a little bit blue too.

http://loudmandoug.blogspot.com/2008/06/cc-assesment-sem1-2008.html

Wednesday 25 June 2008

AA2 Semester1 Major Assignment

Band1: White Light
Practice, practice, practice. After the numerous preproduction meetings where I mentioned the bass playing was not up to scratch and more practice was needed, it was evident on the day of the recording that the bass player had only ever practiced during band rehearsals. That makes three practice sessions in three weeks. Not good enough. All my effort to help these guys get to a level where they might actually get a half-decent recording was pretty much wasted because of lack of practice by both the lead guitarist and the bass player. The majority of my mixing time has been spent attempting to make the bass player sound decent. It was as if a three-year-old had gotten drunk and on the spur of the moment decided to start bowing a double bass. It was that bad. Nothing was played in time. Very few notes were even played properly. And don't get me started on the lead guitarist's girlfriend offering me her 'production' advice all night. The rhythm guitarist and lead singer: brilliant. For people who have never performed before, they practiced hard and it showed. From where they were a couple of months ago to where they are now is a huge improvement, though yes, once you hear the recording you will agree they still need improving. Now don't get me wrong, I really like the song. It has so much potential, but there would need to be some rerecording of parts. Also, the percussion did not get recorded, but it wouldn't be hard to imagine where it goes. This is not only a demonstration of my engineering skills: since I was friends with members of this group, and they trusted me, I had the opportunity to expand on my producing skills as well. This song ended up being edited and mixed rather differently, in regard to structure and instrumentation arrangement, from how it was recorded on the day.
-
-
Band2: CASM allstars
This went very smoothly, apart from the fact that the drummer could not make it in the end, but we survived with a little ingenuity, as described in the documentation. There are a couple of bum notes here and there, but hey, once is a mistake, twice is jazz. It all adds to the style. It was all recorded in three hours and mixed the next day. If only everyone could come in and lay a track down in only three takes and let the engineer do his job. They took my suggestions well and tried a few different things, and when I said "let's do another take, because I think you can do it better" they did, without questioning anything. Great stuff.
-

Sunday 22 June 2008

CC2 Sem1 Major Assignment

For some reason, during this past week this project has had trouble working properly on a Mac. It has stumped me, as it worked fine until a few days ago and everything except the GUI was done at Uni. The GUI was also operating fine at Uni until this week. It works fine on XP. The problem is that the Fibonacci scaler stops adding to itself after 47 steps. As I said, it works fine and continues counting till the end of time on XP. Strange.
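Pure speculation on my part, but the number 47 is suspicious: the 47th Fibonacci number is the first one that no longer fits in a signed 32-bit integer, so a difference in how the two platforms handle that overflow could explain it. A quick Python check:

```python
# Speculative check only: F(47) = 2,971,215,073 exceeds the signed 32-bit
# limit of 2,147,483,647, so a patch doing the addition in 32-bit integers
# could wrap or stall right around the 47th term.
INT32_MAX = 2**31 - 1

a, b = 1, 1
for n in range(3, 50):
    a, b = b, a + b
    if b > INT32_MAX:
        print(f"F({n}) = {b} overflows 32-bit signed integers")
        break
```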

Warning: Operation of this Max patch may induce a trance like state.

MAX patch
Documentation
-
Haines, Christian. 2008. “CC2 – Creative Computing Semester 1.” Seminars presented at the University of Adelaide.

1990-2005 Cycling 74/IRCAM

Reason. 2008 Propellorhead Software

Wednesday 11 June 2008

Forum Week 12

I’ve had to endure a bit of rap and scratching in my time, notably the experience of monitoring for DMX and putting up with a mate of mine's stories of operating FOH for the Hilltop Hoods national tour (which he raved about, mind you; nothing is more ridiculous than seeing a forty-year-old carrying on like an eighteen-year-old rapper. “Hey yo!”), but nothing compared to the moron we had to endure at the last forum from the video “How to rock a party.” At least HTH aren't complete morons. Even the turntablist who opened for DMX got a thunderous roar when he concluded his act, although it was probably the fact that he had actually finished that the crowd were applauding. The only difference in monitoring scratchers rather than 'real' musicians is that their desks are set on a separate auxiliary so they can control it themselves. They know this, and are told this about five times before they go on, yet the number of times I got strange looks and a finger-pointing-up sign from the performer asking to get the foldback turned up was ridiculous. Morons. I told them numerous times I would have no control over the level; they adjust their own monitoring themselves from a feed off the monitoring console. That is how all turntablists have their monitoring set up. It is not a band situation where the levels are fairly constant and need minor tweaking during a show. The records they use to scratch have widely differing volumes, and it is easier for them to mix the monitoring levels as they see fit. Well, it looks like that is the end of my ranting for this semester.

Oh wait! One more thing I stumbled upon. Are they serious?! This has to be right up there with knitting and origami on the list of the world's most useless courses.


Whittington, Steven. 2008. “Forum.” Seminar presented at the University of Adelaide, 5th June

Saturday 31 May 2008

Forum Week 11

After my brain got over the initial shock of listening to rap music for two hours at last week's forum, I was able to recollect my neurons and focus on the subject of this week: are humans being integrated into technology? I would say yes, though I have the naïve thought that it should be the other way around. Integration of technology and humans reminds me of a possible future I imagined about fifteen years ago. It is quite possible that there will be tech/human hybrids competing at the Paralympic Games. These games, filled with technological ‘freaks’ with artificial limbs created for the sole purpose of breaking records, will overshadow and be more popular than the regular Olympic Games. Another thought: if we can replace limbs with so-called better ones, what will the outcome be for humans? Will humanity go down the path of replacing healthy limbs for the sake of an ‘upgrade’? It could simply be considered an expansion of cosmetic surgery for some people, which basically makes it an ethical issue. An issue I think I will leave alone for now.

As for a greener University we could adopt this:
We should only wash half our face on Monday.
Perhaps even a ‘tits out’ Tuesday.
Smoke weed on Wednesday.
Eat only tofu on Thursday.
And wear flares and tiptoe through tulips on Friday.

On a final note is technology attractive, seductive and sexy as Steven pointed out? I guess it is.
















Whittington, Steven. 2008. “Forum.” Seminar presented at the University of Adelaide, 29th May.

Harris, David. 2008. “Forum.” Seminar presented at the University of Adelaide, 29th May.

AA2 Week 11


Mastering

I mastered songs from Edward's[1] and Sanad's[2] major assignments from last year.

With the VUs calibrated at –15 (RMS), I used EQ to bring out the vocals more and to add some air in the top end. I also used EQ to reduce the percussive sound from the acoustic guitar. Compression at 4:1 was also used. The main thing I found was that the second time “folly” is sung, the sound goes dull then bright again. I don't know whether it had to do with mic technique or frequency masking in the mixdown; I am guessing it was the latter. The mix had a bit of out-of-phase content, but nowhere near as troublesome as the next song.

Sanad's song basically sounded better the minute I flipped one side out of phase, or in this particular case, back in phase. Again the VU was calibrated at –15; this time the song was off the scale, with the VU completely pegged all the way through. Calibrated at –6 it still went over +1. This time the mastered version is actually quieter than the original, but at least now it should translate well on a variety of different stereo systems.
Personally I am not interested in ‘loudness wars’, so I focused on the tone of the sounds rather than simply making them louder.
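For anyone interested, here is a small Python sketch of how one might spot that sort of out-of-phase content numerically; the sample values are invented for the example.

```python
# A normalised correlation between the two channels near -1 suggests the
# channels largely cancel in mono; flipping one side's polarity fixes it.
def correlation(left, right):
    num = sum(l * r for l, r in zip(left, right))
    den = (sum(l * l for l in left) * sum(r * r for r in right)) ** 0.5
    return num / den if den else 0.0

def flip_polarity(channel):
    return [-s for s in channel]

left = [0.5, -0.3, 0.8, -0.1]
right = [-0.5, 0.3, -0.8, 0.1]                  # inverted copy of the left channel
print(correlation(left, right))                 # -1.0: badly out of phase
print(correlation(left, flip_polarity(right)))  # 1.0 after the flip
```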
-
-
Sanad Mastered
-
Grice, David. 2008. “AA2 – Mixing.” Seminar presented at the University of Adelaide 27th May.

[1]Louth-Robbins, Tristan “Tragic.” 2007

[2]Sanad & Co. "The song with no Tech students." 2007

Digidesign. 1996-2007 Avid Technology, Inc All Rights Reserved.

Thursday 22 May 2008

CC2 Week 10

Well, it seems I am out of time with this one. Unfortunately the patch is still a mess. The main thing I can take from this is to create all the 'add-ons' in a separate patch to start with. I had issues with the menubar: for some reason the text would not save in the menubar object window. Basically a disappointing blog entry.
-
patch
-
Haines, Christian. 2008. “CC2 – MIDI and Virtual Instrumentation.” Seminar presented at the University of Adelaide, 22nd May.

1990-2005 Cycling 74/IRCAM

AA2 Week 10

This is another song I revisited from a couple of years ago. The original recording was done by someone else, but I thought the song was pretty cool so I offered to give it a second mix. The first file is not the original mix, but it sounds close to it. I recall it had no instrument separation, sounded quite dull, was mostly mono and had no pizzazz, if you know what I mean. It sounded rather flat and deserved so much more; it had nothing to make the listener want to actually listen to it. The main point of the song to me was the bass line. Why that was not prominent in the initial mix is beyond me. A rebalance of the lead vocals versus backing vocals was definitely needed too, as I recall the backing vocal overshadowing the lead. The rest I simply cleaned up as best I could with filters and compression to give it a little lift and to provide some much needed vibrancy and energy.
-
before
after
-
Grice, David. 2008. “AA2 – Mixing.” Seminar presented at the University of Adelaide 22nd May.

Kartinyeri, James. "Land Rights." Ruwini. Kruize Kontrol Entertainment, 2005.

Digidesign. 1996-2007 Avid Technology, Inc All Rights Reserved.

Forum Week 10

I am always surprised by the lack of spelling ability when it comes to the actual letters of the genre referred to as rap. People always seem to forget the C at the beginning.

Whittington, Steven. 2008. “Forum.” Seminar presented at the University of Adelaide, 22nd May.

Tuesday 20 May 2008

CC2 Week 9

I created an object named “infograbber” that contains the coll. I added the coll section as an effect of sorts: the data from coll can be collected on the fly whenever the user desires, then sent out and activated when felt appropriate. Each data list can be selected and its data sent to different MIDI channels or control devices. Logically everything seems to make sense; I just get stuck sometimes remembering which objects do what. All in all, when a patch is thought through in small sections rather than as one big finished assignment, I find it is just a matter of logical thinking.
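A rough Python analogy of how I am using the coll inside infograbber; the index, data and channel numbers are made up for illustration.

```python
# Lists are captured on the fly under an index (like coll entries), then
# recalled later and routed to whichever MIDI channel the user picks.
class InfoGrabber:
    def __init__(self):
        self.store = {}          # index -> list of values

    def grab(self, index, values):
        self.store[index] = list(values)

    def recall(self, index, channel):
        data = self.store.get(index, [])
        return {"channel": channel, "data": data}

grabber = InfoGrabber()
grabber.grab(1, [60, 64, 67, 100])
print(grabber.recall(1, channel=2))
```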
-
-
encapsulated object2 from week7
-
Haines, Christian. 2008. “CC2 – MIDI and Virtual Instrumentation.” Seminar presented at the University of Adelaide, 15th May.

1990-2005 Cycling 74/IRCAM

Reason. 2008 Propellorhead Software

Friday 16 May 2008

Forum Week 9

It was extremely interesting to hear how the notes were chosen by Darren Curtis for his works. The Giza pyramid set up to create 0, 90, 180 and 360 AC amplitude, the fact that the sarcophagus happened to be red, referencing 440 Hz (out of interest, I have always thought the key of A 'sounds' red), and the discovery of F#, A and C# in his observations were intriguing. I could not help but think that these 'discoveries' are, a lot of the time, made to fit inside theories in an effort to create fact. I also think a lot of effort was put into what at the end of the day was essentially just playing an F# minor chord. I do think I hit the nail on the head when I asked him whether he believed his work was a “sonic reiki”, opening chakras and such, which is fine if you believe that sort of thing, but from my perspective all the 'discoveries' could have been put into researching more tangible health benefits such as cell realignment, fat loss or even a cure for cancer. It also seemed to me that this was made into another musical art show to justify all the research.
Seb, get down to IP Australia and patent your Water-chimes. You are sitting on a gold mine.


Tomczak, Sebastian. “Masters Student Presentation.” Workshop presented at the University of Adelaide 15th of May 2008.
Curtis, Darren. “Masters Student Presentation.” Workshop presented at the University of Adelaide 15th of May 2008.

Thursday 15 May 2008

AA2 Week 9 Mixing


I dug up an old drum recording I had forgotten about from about five years ago. The first mix is simple level adjustments, panning and a bit of EQ and compression on the kick. The second has EQ and compression on all the tracks. The third has EQ, compression, a 27 ms delay on the snare to thicken it, a 58 Hz sine wave gated across the kick drum, and finally some reverb added to the snare and overheads. The reverb has a quick decay with a 20 ms predelay to try to steer clear of the '80s sound. I am pleased that I can not only start to get better sounds, but can actually imagine a sound and create what I hear in my head.
-
Mix1
Mix2
Mix3
-
Grice, David. 2008. “AA2 – Mixing.” Seminar presented at the University of Adelaide 13th May.

Digidesign. 1996-2007 Avid Technology, Inc All Rights Reserved.

Thursday 8 May 2008

CC2 Week 8


It is no secret that I always struggle to create music this way; I do prefer a set format in a composition. I did find this much easier than last week's exercise, though. I controlled the mod wheel in Max via a metro going into a counter. The counter is set to 2 so it counts up and down, then goes into scale, set to 0 127, so it covers all 128 MIDI values. Two number boxes let the user manually control the range of the mod wheel. I also made a Max controller for the ADSR section in Subtractor. This is simply a metro going into random and then into scale. It was going to be used four times, so it was more convenient to encapsulate it; I called it randslide, short for random slider. All controls are manually adjustable by the user. The result is not overly musical, but the random results are interesting. Kind of like R2-D2 on crack.
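The metro, counter and scale chain is easier to see in code, so here is a small Python sketch of the same idea; the counter top and the number of steps are arbitrary.

```python
# An index bounces between 0 and some top value (like a counter in up/down
# mode) and is rescaled to the full 0-127 MIDI range for the mod wheel.
def up_down_counter(top):
    while True:
        for i in range(top + 1):          # counting up
            yield i
        for i in range(top - 1, 0, -1):   # counting back down
            yield i

def scale_to_midi(value, in_max):
    return round(value / in_max * 127)

counter = up_down_counter(8)
values = [scale_to_midi(next(counter), 8) for _ in range(20)]
print(values)  # ramps 0..127 and back, like the mod wheel sweep
```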
-
-
Haines, Christian. 2008. “CC2 – MIDI and Virtual Instrumentation.” Seminar presented at the University of Adelaide, 8th May.

1990-2005 Cycling 74/IRCAM

Reason. 2008 Propellorhead Software

Forum Week 8


VU meters and peak meters should be used in conjunction with each other. They monitor audio in two completely different ways and one should not be neglected in favour of the other. The VU in a VU meter actually stands for volume units, read from an averaged signal voltage, although these days most people just take it to mean volume. A bit like the simpletons who changed the V in DVD to mean versatile. How can one distinguish between high and low frequencies and relative volume with a peak meter? You cannot. It is a fantasy, just like the fantasy of waking up next to Jessica Alba. It will never happen. Peak meters simply show when you have exceeded the ceiling of the digital medium, not when distortion occurs. Even if the red light appears, that does not necessarily mean you have 'gone over'; it may mean you have simply met your mark. And just because the peak meter did not flash red is no indication that there is no ugly distortion like Julian Clary going on. All a peak meter can do is display transients. That is it. It will never identify the contour or relative volume of a sound. Sure, some VU meters operate with LEDs, but they have VU ballistics and are not to be confused with peak meters. Ears are analogous to VU meters and move with the power of the sound. Do not listen to the down-and-out pill-popping junkie bums who think a VU needle is something you stick in your arm or that DC regulated means they copped a court order to wear an ankle bracelet. Use a peak meter. Use a VU meter. Use them both. Know what they can accomplish and what they cannot. And most of all… know the difference between the two and never use the term VU loosely. There is a big difference between VU meters and peak meters, just like peanuts and cashews.
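To illustrate the point, a quick Python comparison of peak level against an averaged, VU-like RMS reading; the test signals are invented for the example.

```python
import math

# Peak level and average (RMS) level measure different things, so two signals
# can share a peak reading yet carry very different power.
def peak_db(samples):
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_db(samples):
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

n = 1000
sine  = [math.sin(2 * math.pi * 50 * i / n) for i in range(n)]
click = [1.0 if i == 0 else 0.0 for i in range(n)]   # single transient

print(f"sine:  peak {peak_db(sine):5.1f} dB, rms {rms_db(sine):6.1f} dB")
print(f"click: peak {peak_db(click):5.1f} dB, rms {rms_db(click):6.1f} dB")
```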

Dowdall, Peter. 2008. “Forum.” Seminar presented at the University of Adelaide, 8th May.

AA2 Week 8 Mixing


I will present the three mixes in three stages of progress.
Mix 1 has the guitars panned slightly off centre. The bass is placed to be subtle and add depth without being overbearing. The vocals have a telephone effect on them via EQ.
Mix 2 has reverb added to the guitars and the telephone effect removed.
Mix 3 has an added delay of 68 ms on the vocals. I also put a modulation effect on the rhythm guitar and a compressor on the melodic guitar, and automated the volume to duck at the word ‘sun’ to reduce sibilance; this needs fine tuning though.
With only four tracks I kept the differences subtle and ‘more real’, sparing people any ridiculous panning effects or over-the-top automation. I felt keeping the emotion in the song would be more challenging and appropriate. I will most likely use these ideas in different places in the final song and thought I would practice making a real-world mix.
-
-
Grice, David. 2008. “AA2 – Mixing.” Seminar presented at the University of Adelaide 8th May.

Digidesign. 1996-2007 Avid Technology, Inc All Rights Reserved.

Forum week 7

Good Lord. I just realised that the ‘TC’ in the company name TC Electronic obviously stands for Tristram Cary. I think today's presentation by Steven was an appropriate and insightful look into the life of Tristram. EMU would most likely not exist if not for him, as he had a major influence on its creation. I think we need more wars so the military can throw away their ‘junk’ at the end and we can scavenge it all. Who am I kidding? The military would probably shoot anyone these days if they even thought of taking anything.

Whittington, Steven. 2008. “Forum.” Seminar presented at the University of Adelaide, 1st May.

Wednesday 7 May 2008

CC2 week 7

The multislider sends out a set of number values that are summed by a line of + objects. The total is then tested by ‘less than’ and ‘greater than’ objects, and a note is triggered somewhere between 0 and the summed value.
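My reading of the patch, sketched in Python; the slider values below are made up.

```python
import random

# The multislider's values are summed, then a note is chosen somewhere
# between 0 and that sum, clamped to the MIDI range.
def trigger_note(slider_values):
    total = sum(slider_values)
    note = random.randint(0, max(total, 0))
    return min(note, 127)           # keep it a valid MIDI note number

print(trigger_note([12, 30, 7, 25]))
```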
-
-
Haines, Christian. 2008. “CC2 – Introduction to Max.” Seminar presented at the University of Adelaide, 1st May.

1990-2005 Cycling 74/IRCAM

Saturday 3 May 2008

AA2 Week 7 Piano

I set up an MS pair with the negative side tilted down towards the bottom end and the positive side tilted up and away from the top end, as demonstrated in Michael Stavrou's book Mixing With Your Mind (pp. 92-93). This sounded good as it had ‘air’, but the top notes sounded a little too close to my ears. I then placed the mics about 15 cm further out, which resulted in a darker, more even sound, with the bottom end solid and the top notes fading off. It sounded better to me but still had a slightly middy quality. A bit more adjusting, finding the mid-heavy area and pointing the dead side of the MS pair directly at it, should get better results in the future. A spaced pair was also set up, consisting of NT5s, each placed midway between strings and lid at either end of the piano; it sounded overly bright to me, but stereo-wise it sounded even across the speakers.
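For reference, the standard mid-side decode I had in mind (a general formula, not anything specific to this session):

```python
# Mid-side decode: left = mid + side, right = mid - side, with the side gain
# setting the stereo width.
def ms_decode(mid, side, width=1.0):
    left  = [m + width * s for m, s in zip(mid, side)]
    right = [m - width * s for m, s in zip(mid, side)]
    return left, right

mid  = [0.5, 0.4, 0.3]
side = [0.1, -0.2, 0.05]
L, R = ms_decode(mid, side)
print(L, R)
```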

Grice, David. 2008. “AA2 – Recording Guitar.” Seminar presented at the University of Adelaide 29th April.

Digidesign. 1996-2007 Avid Technology, Inc All Rights Reserved.

Stavrou, Michael. “Chapter 6. PNO Secrets.” In Mixing with your mind. Christopher Holder ed. Flux Research. 2003.

Saturday 12 April 2008

CC2 Week 6


I wanted the drunk object to play random notes on the keyboard. My Max keyboard goes from key 24 to 95, so I set that as the range, with a step window of 50. The tempo of the notes can be adjusted by the user, but I felt it needed more; the patch was not ‘drunk enough.’ I continued by adding the random object and made it send random numbers up to 1000 to change the metronome speed, which created a very sporadic tempo. This second stage of drunkenness I nicknamed ‘paro.’ The patch is included as a separate Max object in the folder, along with the MIDI filter that was made to select black/white/all keys. I made some LEDs and a simple dial for the delay in Photoshop and put those in pictctrl objects to finish it all off.
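A quick Python sketch of the ‘drunk’ behaviour described above, treated as a bounded random walk; the key range and step window come from the paragraph, the rest is assumed.

```python
import random

# Bounded random walk over the keyboard, plus a randomised gap between notes
# (the "second stage of drunkenness").
def drunk_walk(low=24, high=95, window=50, steps=16, start=60):
    note = start
    for _ in range(steps):
        note += random.randint(-window // 2, window // 2)
        note = max(low, min(high, note))       # keep it on the keyboard
        tempo_ms = random.randint(1, 1000)     # sporadic metro speed
        yield note, tempo_ms

for note, gap in drunk_walk():
    print(f"note {note}, next hit in {gap} ms")
```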


Haines, Christian. 2008. “CC2 – Introduction to Max.” Seminar presented at the University of Adelaide, 20th March.
1990-2005 Cycling 74/IRCAM

Forum Week 6

There is an assumption that we like this musique concrète stuff, or at least a hope that we will eventually cave in and love it. As I alluded to in last week's blog, the only ‘real’ future for this type of music is in psychoacoustics. Perhaps even as a form of torture. Now, I hear the lecturers and a few third years say “we must appreciate the history” and “we wouldn't be here without these pioneers”, and that is all well and fine, but Burke and Wills were pioneers and yet we don't get carted around Uni in a horse and carriage every day, or get expected to trek to Uni for four days when we can use an automobile of some kind to get here. There is more to electronic music than musique concrète and its sleep-inducing derivatives. AND THIS WEEK I MISSED IT!!! BAH!! If what I gather from other people's blogs is correct, then I love the sort of thing that was presented. Manipulating sound and pictures to represent some sort of ‘wonkiness’, or to create the sensation that something isn't quite right, is where I want to go. Apparently David presented a video with out-of-sync sound. I'm not sure of the content of the video, but yes, I find it completely annoying when a video drifts out of sync by even half a second throughout. Damn lazy people who cannot convert 29.97 non-drop to 25 fps properly and think no one will notice. From an artistic viewpoint though, the creation of the psychoacoustic phenomena of visual and audio ‘nervousness and uneasiness’ is definitely an area of interest.

Whittington, Steven. 2008. “Forum.” Seminar presented at the University of Adelaide, 10th April.
Harris, David. 2008. “Forum.” Seminar presented at the University of Adelaide, 10th April.