Friday, November 21, 2008

Week 10 Creative Computing - Integrated setup - Ableton Sampler

I'm not sure if this was week 9 or 10.....

We (Jamie and I) tried to demonstrate the concepts taught in class by quickly creating a sampler instrument using some short snippets of clarinet that we recorded in class. 

As demonstrated, we used a haphazard approach, grabbing any samples and processing as needed to make drum sounds within the Sampler. It's striking how little the source material matters, as long as it's interesting. This philosophy even applied to the rhythms to some extent: many of them were created by randomly pasting MIDI data around and transposing it.

The effects available in Live also make it easy to do something obvious quickly. We used the Grain Delay and the Erosion on a bus to add a pleasing layer of white noise to the sound.

With two layers of drums, one proper tonal instrument, and a bit of turning loops on and off, we made 1'28" of electronic music!


Reference: Christian Haines. "Integrated Setup II." Lecture presented at the Electronic Music Unit, University of Adelaide, 14 October 2008.

Thursday, November 20, 2008

Audio Arts Major Project


Draft Mix MP3

I aimed to recreate the sound of the windy seaside staying quite faithful to the events of the recording I made. The main elements I included were wind, birds, cars, dog, rumble, rummaging, and footsteps. Part of the objective was for the final product to have a hyper-real sheen to it, which allows for some deviation from the original recording.

In order to achieve the hyper-real texture, synthesis was my main approach to sound creation. My main synthesizer was Plogue Bidule, as its flexible modules and routing options allow for a wide variety of timbres. I found some sounds far harder than others to generate.

Wind was a main focus because it was dominant in the original recording. I created a Bidule patch with 8 channels of noise, each with its own filter. The filter frequencies were driven by independent random oscillators plus a master control, so the 8 channels are independent but linked to some degree. A rumble adds depth and cinematic hyper-realism. While I have tried to evoke the sea using wind and rumble sounds, I think the real ocean makes water sounds that my simulation lacks.
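As a rough sketch of the idea (in pure Python rather than Bidule, with all parameter values invented for illustration), each channel is just white noise through a filter whose cutoff wanders randomly, and a master value scales every channel's base cutoff so they stay loosely linked:

```python
import math
import random

def wind_channel(n_samples, sr=44100, base_cutoff=400.0, seed=0):
    """One noise channel through a one-pole low-pass whose cutoff
    wanders slowly -- a stand-in for one of the eight Bidule
    noise/filter chains (all names and values are illustrative)."""
    rng = random.Random(seed)
    out = []
    y = 0.0
    cutoff = base_cutoff
    for _ in range(n_samples):
        # slow random walk stands in for the random LFO on the filter
        cutoff = max(50.0, cutoff + rng.uniform(-0.5, 0.5))
        # one-pole low-pass coefficient from the current cutoff
        a = math.exp(-2.0 * math.pi * cutoff / sr)
        x = rng.uniform(-1.0, 1.0)  # white noise source
        y = (1.0 - a) * x + a * y
        out.append(y)
    return out

def wind_mix(n_samples, master=1.0, channels=8):
    """Sum eight independent channels; `master` scales every base
    cutoff, linking the channels the way a master control would."""
    mix = [0.0] * n_samples
    for ch in range(channels):
        c = wind_channel(n_samples,
                         base_cutoff=master * (200.0 + 100.0 * ch),
                         seed=ch)
        for i, v in enumerate(c):
            mix[i] += v / channels
    return mix
```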

Glass was also tackled with Bidule, using an FM synthesis patch with modulation envelopes. After generating files of randomly varied glass sounds, I compiled them in Logic to simulate the simultaneous clinking of many bottles. Several different sequences were bounced from separate Logic sessions and later used in the main session.

Percussive hitting sounds proved to be some of the most difficult to recreate. I tried processing white noise, but my results were largely corny and reminiscent of poor films. In this case, I abandoned synthesis and quickly recorded myself hitting various objects on the desk in front of me. By taking small portions of this recording, time-stretching them, and applying EQ, reverb and enveloping, I was able to create some marginally better Foley sounds. I think these percussive sounds are the weak point of the work, particularly the footsteps.

In the final mix, heavy EQ and short reverbs proved to increase the realism of many of the sounds. Pan automation allowed elements such as cars, bikes and dogs to move around.

I’m satisfied with my final product; however, I think that improvement is possible, particularly with the rummaging sounds. Through this exercise, I have realised how difficult synthesizing real sounds is.

Creative Computing Major Project



Performance Recording Mix

This electroacoustic performance work for trumpet and computer builds from a gentle beginning by layering sound electronically. To explore a live improvisation aesthetic, the performance uses no pre-existing audio recordings or rhythmic data. Only a basic form for the work was pre-decided, and the trumpet player largely improvised. This demonstrates that the musical outcome of electronic processing relies little on the source material and more on the types of processes used. The work also explores what defines a piece of electronic performance music. The only unique pre-existing aspect of this work is a particular processing and temporary recording array that has been configured to be interacted with in a particular way. One computer file, the corresponding software, and any acoustic instrument are the only materials needed to reproduce the work. Thus, a file replaces a traditional music score.

Every sound that is heard originated from the instrumentalist at some point during the performance. The introduction contains a prominent pulsing that results from realtime processing of the trumpet. In order for the work to progress, the role of the software operator is largely in planning ahead by recording useful excerpts, which can later be creatively manipulated. Jamie was occupied with providing interesting source material through his self-taught trumpet style.

The relationship between the electronics and the acoustic instrument is different in different parts of the work. In the gentle sections, Jamie was able to improvise with the setup and receive immediate feedback. To create the beat, he interacted with the equipment in a fixed, pre-decided way by making drum sounds. In louder sections, he plays over the music in a traditional manner, as if the electronics were a separate musician.

Saturday, November 1, 2008

Week 12 Forum - Stephen steers the university battleship through oceans of amusement


Basically we discussed the course (and anything else that came up).

There were some really interesting points raised about the nature of education, particularly in an artistic field. Is it actually possible to have an argument with Stephen? I suspect not - even if you get close, he'll pull out a wise cooking metaphor and make you stop and consider.

I agree with David that studying art that you don't (yet) enjoy is a mind opening process that is always a good thing, particularly if it's very developed art.

The stuff I said in forum was kind of just a rant and wasn't very clear, so I don't think it was very helpful. But when I mentioned Year 12 Music Tech, what I actually had at the back of my mind was that it's kind of opposite to this course. Here, we are all taught a wide range of techniques, must analyse a wide range of fixed styles, and then have to create music that uses particular techniques in a particular style. Last year, it was entirely about thinking of a sound you like and then heading for it, and all the tuition was about developing your particular compositions.

Well, the outcome is that I consider pretty much everything I've done musically before the last 6 months to be embarrassingly boring really... so I think it's better to learn techniques, then take a breath and quickly make some music every now and then. Maybe this is because in electronic mediums, much of the originality comes from the techniques of creating sound, and there are limitless undiscovered methods.

Reference: Stephen Whittington. "Music Studies (Music Technology) Course Feedback." Lecture presented at the Electronic Music Unit, University of Adelaide, 30 October 2008.

Wednesday, October 29, 2008

Week 9 Audio Arts - FM Synthesis

Audio Demo

Basic FM Synth
A simple FM synth based on the readings, using one carrier and one modulator. The modulator's amplitude is scaled so that at a depth of 1.0, the carrier frequency moves between 0 and 2f (where f is the frequency of the note being played).

One envelope controls the overall output, while another controls the modulation depth.
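The same carrier/modulator arrangement could be sketched in Python roughly as follows; the envelope shapes and parameter names here are my own stand-ins, not the actual Bidule patch:

```python
import math

def fm_note(freq, dur=1.0, ratio=1.0, depth=1.0, sr=44100):
    """One-carrier/one-modulator FM voice. At depth 1.0 the
    instantaneous carrier frequency swings between 0 and 2*freq.
    The two exponential envelopes (output level and modulation
    depth) are assumed shapes for illustration."""
    n = int(dur * sr)
    out = []
    phase = 0.0
    for i in range(n):
        t = i / sr
        env = math.exp(-3.0 * t)       # overall output envelope (assumed)
        mod_env = math.exp(-5.0 * t)   # modulation-depth envelope (assumed)
        mod = math.sin(2.0 * math.pi * freq * ratio * t)
        # instantaneous frequency: 0..2f when depth * mod_env == 1.0
        inst_freq = freq * (1.0 + depth * mod_env * mod)
        phase += 2.0 * math.pi * inst_freq / sr
        out.append(env * math.sin(phase))
    return out
```

Inharmonic `ratio` values plus a fast-decaying depth envelope give the metallic, gamelan-like attacks described below.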

I found that this simple FM setup was actually very effective for recreating real-world sounds, particularly metallic percussion. I was so proud of my gamelan emulation (reverb helped) that I had to design some upbeat gamelan elevator music (see audio demo). I think we should install this in the new Schulz elevator when no-one's looking.

Badass FM Synth
I tried to expand the concept by using five oscillators linked in a chain, so that each oscillator modulates the next; I suppose there are four modulators and one carrier. Each modulation stage has its own modulation-depth envelope. You can take audio feeds from the last three oscillators in the chain and mix them in stereo.
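A rough sketch of the serial chain (the per-stage depth envelopes are collapsed to constant depths for brevity, and all names are illustrative):

```python
import math

def fm_chain(freq, depths, dur=0.5, sr=44100):
    """Serial FM chain: oscillator k frequency-modulates
    oscillator k+1. `depths` holds one modulation depth per
    stage, so five oscillators means four entries. Only the
    last oscillator's output is returned here; the real patch
    could tap the last three and pan them in stereo."""
    n = int(dur * sr)
    phases = [0.0] * (len(depths) + 1)
    out = []
    for _ in range(n):
        prev = 0.0
        for k in range(len(phases)):
            # first oscillator is unmodulated; the rest are driven
            # by the previous oscillator's output
            d = depths[k - 1] if k > 0 else 0.0
            inst_freq = freq * (1.0 + d * prev)
            phases[k] += 2.0 * math.pi * inst_freq / sr
            prev = math.sin(phases[k])
        out.append(prev)
    return out
```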

It did create some complex, noisy textures; however, I found it hard to create anything particularly real-world using the extra oscillators.

Reference: Christian Haines. "Additive Synthesis." Lecture presented at the Electronic Music Unit, University of Adelaide, 14 October 2008.

Tuesday, October 28, 2008

Week 10 Audio Arts - Additive Synthesis

Can I have the award for artistic patching? Impressionism vs Bidule layouts....

Audio Demo


It's a MIDI-controlled additive synth. You get 6 oscillators, and you choose the ratio of their frequencies to the MIDI note, their amplitude, and their wave type. A simple amplitude envelope controls the shape of the sound. It's polyphonic.

More Extreme Additive Synth (pictured)
There are 16 oscillators, each contained in a group. The oscillators are automatically mapped as harmonics in relation to the defined fundamental, however if you want a rougher tone, they can be scattered slightly using the "Freq Freakout Factor".

The amplitude of each oscillator is a fixed ratio of the previous oscillator's, creating an exponentially decreasing curve. For example, if the "amp taper factor" is 0.5, every harmonic has half the amplitude of the previous one.

The amplitude of odd and even harmonics can be boosted and cut.
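The harmonic and taper logic could be sketched like this (parameter names are mine, and the random per-oscillator variation is omitted for brevity):

```python
import math

def additive_tone(f0, dur=0.5, partials=16, taper=0.5,
                  odd_gain=1.0, even_gain=1.0, sr=44100):
    """Additive tone along the lines of the 16-oscillator patch:
    harmonics of f0 with exponentially tapering amplitudes
    (taper=0.5 halves each successive harmonic) and separate
    gains for odd and even harmonics. Names are illustrative."""
    # build the amplitude for each harmonic
    amps = []
    a = 1.0
    for k in range(1, partials + 1):
        gain = odd_gain if k % 2 else even_gain
        amps.append(a * gain)
        a *= taper
    norm = sum(amps) or 1.0  # normalise so the mix stays within [-1, 1]
    n = int(dur * sr)
    out = []
    for i in range(n):
        t = i / sr
        s = sum(amp * math.sin(2.0 * math.pi * f0 * k * t)
                for k, amp in enumerate(amps, start=1))
        out.append(s / norm)
    return out
```

Setting `even_gain=0.0` gives the odd-harmonics-only, square-ish tone that the odd/even boost and cut controls allow.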

Frequency, pan, and amplitude can be varied individually using random value generators for each oscillator, creating evolving textures.

It's all pretty processor-intensive, because there are a total of 64 oscillators running simultaneously (including the random modulators).

Reference: Christian Haines. "Additive Synthesis." Lecture presented at the Electronic Music Unit, University of Adelaide, 21 October 2008.

Saturday, October 25, 2008

Week 8 Audio Arts - Amplitude/Ring Modulation


Image (it won't insert properly)


I created an amplitude modulation patch and a ring modulation patch.

The amplitude modulation patch basically chains modulating oscillators in series, each processing the carrier one after the other.

My second patch (pictured) uses three oscillators. In each "iteration", the three possible pairings of two signals are ring modulated, creating three new signals. The process is performed 8 times, though there is rarely much signal left by that point. An iteration selector fades between the 8 stages so you can dynamically move from slight modulation to grainy noise. The 3 outputs can be individually panned to create a rich stereo signal whose channels are different but related.
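The iteration scheme might be sketched like so (three sine oscillators and illustrative names; an iteration selector would then crossfade between the returned stages):

```python
import math

def ring_iterations(freqs, n_samples, iterations=8, sr=44100):
    """Iterated ring modulation: start with three sine
    oscillators, then at each iteration replace the three
    signals with the three pairwise products (ring modulation).
    Returns every stage so a selector can fade between them."""
    # stage 0: the three raw oscillators
    sigs = [[math.sin(2.0 * math.pi * f * i / sr)
             for i in range(n_samples)]
            for f in freqs]
    stages = [sigs]
    for _ in range(iterations):
        a, b, c = stages[-1]
        stages.append([
            [x * y for x, y in zip(a, b)],  # ring mod of pair (a, b)
            [x * y for x, y in zip(b, c)],  # ring mod of pair (b, c)
            [x * y for x, y in zip(a, c)],  # ring mod of pair (a, c)
        ])
    return stages
```

Because each product of two signals bounded by 1 is itself bounded by 1 and generally smaller, the later stages shrink toward silence, which matches the observation that little signal survives all 8 iterations.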

I wasn't very successful in creating organic sounds, however many types of rich machine sounds were easily accessible. An engine noise from this patch should be usable for the major project. Putting noise through the system created some interesting buffeting wind. The stereo output should be useful in a soundscape.

Reference: Christian Haines. "Amplitude and Ring Modulation." Lecture presented at the Electronic Music Unit, University of Adelaide, 7 October 2008.

Week 9 Creative Computing - Integrated Setup



Jamie and I created a piece by improvising using an integrated setup involving Bidule, Live, the Novation controller, an acoustic guitar, an SM57, and the Mackie mixer.

Within the computer, the Bidule patch was something I'd been experimenting with at home in preparation for the major project; this was the first time I'd used it with live audio. The string sound is from a pleasing sample we quickly loaded into Simpler in Live (with reverb and delay). Bidule was the ReWire host, and Live was simply rewired in to provide the string sound.

I played synth strings via the Novation and tweaked minor parameters of the Bidule patch during the performance while Jamie played guitar. I would have liked more control over Bidule, but that will require some further tweaking. I think that using the software with live audio input, where we could experiment and interact with it, helped us get better results.

Obviously we used much pre-existing patching in the setup, however I think that we still created an interesting live performance by adding sound and interacting with the patch.

Reference: Christian Haines. "Integrated Setup I." Lecture presented at the Electronic Music Unit, University of Adelaide,  7 October 2008.

Thursday, October 16, 2008

Week 10 Music Tech Forum - Honours Presentations


Presentations from the honours students!

I really loved the animation projects of the first speaker. The animations were really pretty and I think the music fitted them perfectly. I loved the piano style and the vocals and I think I could listen to it on CD for enjoyment. A very comfortable contrast to what we normally hear in forum... but there is a place for that too.

The concept of the program that sonifies network data was totally awesome. I really like the way that it artistically represents the modern world where we're surrounded by machines that are automatically sending each other huge amounts of information before we even ask them to fetch us anything. It's a noisy society where we're bombarded with information from all angles, so why not convert it into music?

I guess what the program currently lacks on the musical front is a way to give the results some kind of macro form...

I was also wondering if the program can pull much meaning from the data values? For example, is it very different to modulating the values of the synth with a random oscillator? But I think these problems can be solved (don't ask me how). 

Reference: Stephen Whittington. "Week 10 Music Technology Forum - Honours Student Presentations." Lecture presented at the Electronic Music Unit, University of Adelaide, 16 October 2008.

Wednesday, October 15, 2008

Week 9 Forum - 3rd Year Presentations!

The third year presentations were all really impressive. I'll just describe the ones that come to mind first.....

Probably my favourite listening experience was the surround SuperCollider piece. This is partly because it's not something I'm used to, but I reckon it's a worthwhile endeavour. I think the current consumer surround craze is a little stupid, because I'd much prefer decent stereo sound to poor surround with a lower-mids gap between the sub and satellite speakers....

But luckily the Blue Skys have no such problems.... I loved being able to close my eyes and imagine I was in a jungle of crazy machines. I enjoy SuperCollider's glassy sound sometimes (during 3rd year presentations). I'd like to hear it used with other contrasting elements.

The piano chord generator really interested me. I understand the chord-choosing algorithm, but I didn't quite get how the rhythm generator works. I think it was a really impressive example of this kind of patch. The output sounded really quite musical, I think because of the clever chord-voicing system.

The melody warper was a rad idea that worked well, I think, though I was a bit distracted by all the changes of tempo (which were a good idea). I think it would be awesome to record the MIDI and render it elsewhere...

Reference: Stephen Whittington. "Week 9 Music Technology Forum - 3rd Year Presentations." Lecture presented at the Electronic Music Unit, University of Adelaide, 9 October 2008.

Wednesday, September 24, 2008

Week 8 Forum - Eraserhead

I was fairly excited to watch Eraserhead because I keep hearing about Lynch and I don't think I've seen any of his other films. I really enjoyed it - I'd definitely prefer to see something totally different than another boring movie stamped out of the usual mould.

There are some favourite moments.... I love the way Lynch manages to ironically reserve dialogue for completely obvious, pointless remarks. "Oh... you are sick" was insanely funny in my opinion, as well as "like regular chickens" and a few others. That explains why there is an Amon Tobin song of that name....

I like the way we do very different things in forum every week, and I think that watching a fairly experimental movie is a worthwhile endeavour when many of us are interested in film sound design and music. The constant rumbles and drones created a strong atmosphere and I thought they had a pleasantly solid tonal character. The contrast of the music and noise was very effective. 

I also think that all art forms influence each other in interesting ways and so it is a good thing to find interesting works in other disciplines.

Reference: David Harris. "Week 8 Music Technology Forum - Eraserhead." Lecture presented at the Electronic Music Unit, University of Adelaide, 18 September 2008.

Friday, September 12, 2008

Week 7 Audio Arts - Analog Synthesizers!

Above is a Roland SH-5, which I attempted to create some organic sounds with. It feels great to be playing with something that isn't a computer. The many hands-on controls make realtime modulation easy, without bothering to assign things to an LFO or mod wheel. I used the pitch lever often to control pitch or the filter, moving it exactly as I needed.


I found it quite surprising that it isn't too hard at all to obtain a few reasonably organic sounds, though I ran out of inspiration quickly.

The wind was created by filtering white noise with the LP and BP filters, and I added a slight whistle created by ring modulating the oscillators together.

I think I did the bird noises by increasing the resonance of the filter till it self oscillated. I then set up a suitable envelope and swept the filter with the pitch stick while playing notes.

Other sounds are fairly obvious! The white noise oscillator makes it all possible because otherwise we could only do pitched sounds.

Reference: Christian Haines. "Week 7 Audio Arts - Analog Synthesizers." Lecture presented at the Electronic Music Unit, University of Adelaide, 6 September 2008.


Week 8 Creative Computing - Ableton Live 2 - Let's Do the Time Warp Again Please


The Audio File

Here's my project for this week - it uses a bunch of samples from a recording of a school concert, a musical, and a rock band.

The warp markers were extremely useful to me. This interface for matching loops is so intuitive and accurate and makes it easy to work with looser beats that may fluctuate in tempo. It is so quick it would be feasible to do minor editing with this feature while playing live. I think the performance aspect of this software can be good when working with samples, because often really interesting combinations are found by trial and error which is easy in a performance environment.

I find myself losing touch with the macro form of the piece when sequencing live, however this is easily tidied up in the arrangement view. In creating this 45 second piece, I actually created a 2:20 groove and then raised the tempo and judiciously used "delete time" to get rid of the less interesting parts.

I had to just go for it and work quickly knowing that I couldn't save and improve the song incrementally over many sessions. This approach forced me to be more productive, creative and heavy-handed.

Reference: Christian Haines. "Week 7 Creative Computing - Ableton Live". Lecture presented at the Electronic Music Unit, University of Adelaide, 6 September 2008.

Week 7 Creative Computing


Wasn't sure if it needs to be 45 seconds or not?

Groove 2 MP3 (cut to 45 seconds)


I practiced a little at trying to get all the samples loaded really quickly and then switching scenes at the right moment. I think that the realtime nature of this software makes it an instrument that requires practice to really be able to improvise quickly. I wasn't able to do very much at all after 45 seconds, however with a little longer I was able to explore more, changing the loop length in order to switch time signature and create polyrhythms.

There are some quite good live DJ-type effects, such as the Grain Delay, which I used on Trent's vocal. The XY controls are obviously geared towards performance.

I think the key/midi control options would really improve the agility of the performance if the "player" is skilled, however some setting up of the project may be required to make use of them.

The immediate nature of this performance sequencer and its restriction on saving meant that in some grooves I had happy accidents, discovering a new technique, effect, or sample combination while improvising. This is an example of the way in which restrictions can be conducive to creativity, which is then sped up by the focus on live performance.

Reference: Christian Haines. "Week 7 Creative Computing - Ableton Live." Lecture presented at the Electronic Music Unit, University of Adelaide, 6 September 2008.

Thursday, September 11, 2008

Week 7 Forum - Second Year Presentations

Super Collider

The second years and some third years showed us what they have been up to. This consisted mainly of Max and SuperCollider patches.

I enjoyed it all! Sanad's dance piece was pretty extensive for something he made last night.... I enjoyed it. He suggested that it's quite different from the dance music that people like, but I'm not so sure.... Other than the 6/8 time signature, I don't think it's that different from what I've heard. I think this is probably because I don't understand the intricacies of dance music at all.

Edward's visual patch was way cool - very impressive and I liked that he could tweak parameters and get really different patterns.

The chord progression generator seems like a very useful device, even if it is just there for inspiration. The grid probability approach seems quite intuitive.

Freddie's "game" idea for his Max patch was really hilarious - I love it. Never would have thought about turning algorithmic composition into a game! Get your kids into max/msp this way......

I thought the SuperCollider stuff was interesting, but it looks like heaps of work to get decent organic textures. That said, John Delaney did an amazing job of getting beautiful sounds from the code, and Matt's Fatboy Slim sample was absurdly rad.

Reference: Stephen Whittington. "Week 7 Music Technology Forum - 2nd and 3rd year presentations." Lecture presented at the Electronic Music Unit, University of Adelaide, 11 September 2008.

Week 6 Creative Computing - More Logic Skills



I found it rather difficult to apply many of the new skills taught in the tutorial to a song that had already been completed. In order to try to demonstrate the techniques and make a noticeable change to the song, I've made some adjustments.....

  • Added the Enveloper to the drums. I've overdone it to the point where there's too much attack, but at least you can tell it's there. I found this plugin more useful on my own recordings.
  • Humanized the drums. The timing and velocity are just slightly randomized, and I think it makes the looping a little less obvious.
  • Added reverb to the instruments so they sound a little more dub. The drums in particular have a very fake cavernous effect (which I chose deliberately in Space Designer).
  • Redid the arrangement a little.
  • Last minor touch.... REPLAYED THE MELODY THROUGH AN ARPEGGIATOR! which I lovingly configured in the Environment as per the screenshot above. One note is converted to a chord via transformers and then tastefully arpeggiated [to hell and back] to develop the melody and add a minimalist flavour.
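The note-to-chord-to-arpeggio chain could be sketched like this (intervals and timings are purely illustrative; the real version lives in Logic's Environment as transformer objects):

```python
def arpeggiate(root, chord_intervals=(0, 4, 7, 12),
               step_ms=125.0, cycles=2):
    """One incoming note becomes a chord (the interval list stands
    in for the Logic transformers), and the chord tones are then
    stepped through in time. Returns (time_ms, pitch) pairs."""
    chord = [root + i for i in chord_intervals]  # note -> chord
    events = []
    t = 0.0
    for _ in range(cycles):
        for pitch in chord:  # chord -> arpeggio, one tone per step
            events.append((t, pitch))
            t += step_ms
    return events
```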

Reference: Christian Haines. "Week 6 Creative Computing - Logic Skills 3." Lecture presented at the Electronic Music Unit, University of Adelaide, 2 September 2008.

Week 6 Audio Arts - Interaction Design - Age of Mythology


The menu includes options related to which ancient God you choose to base your civilization around, what kind of enemy you will face, and the layout of the map that the game is set in. Actual gameplay involves controlling the numerous buildings and personnel of your empire. The game was created by Ensemble Studios and is marketed primarily to children.


Sampling technology is used to trigger sounds as the user interacts with the controls.

Some of the sound effects are directly related to the user's actions while others are designed to alert the player to issues such as an enemy attack. Glyphs are used to identify different units as they are selected.

In the menu screen, sounds are used to indicate mouse over as well as various selections. A paper rustle indicates scrolling through the available Gods which applies the ancient, mystical aesthetic of the game to a practical interface sound. Other sounds such as the divine thunder for final selections follow a similar theme. This is an example of form meeting function.

The interaction design is similar to other types of sound design in that it uses sounds in a symbolic way to convey information while fitting with the aesthetic of the product.


Reference: Christian Haines. "Week 6 Audio Arts - Interaction design." Lecture presented at the Electronic Music Unit, University of Adelaide, 3 September 2008.

Thursday, September 4, 2008

Week 6 Forum - AUDIO RASA CHARADE FUN GAME PERSIAN BABY

In today's exercise we attempted to recognize sonically represented emotions. Some radically different approaches were used in portraying the "rasa" such as:
  • Sampling different styles of commercial music
  • Downloading the pure, instinctive cries of yet-to-be-conditioned babies from youtube
  • Traditional western art music conventions
  • Moody synthesized textures
  • Drawing on the cliches of mainstream cinema
  • Speaking about emotional subjects in a foreign language
Some techniques definitely seemed more effective than others. While I thought it was really cool, we didn't seem to be very good at understanding Sanad's Persian even with strong inflections. People who drew strongly on obvious cliches and used a range of different forms (eg acoustic instrument, synth, nature sound) seemed to be most successful.

I think it is very interesting that many of the baby cries were quite understandable and Stephen suggested that there may be some aspects of language and aural association that are fundamentally built into our physiology and not learnt. Thanks to Freddy for this interesting point.

I was particularly impressed by the guy that used a lot of synthesized/processed sounds. I think he did a great job at portraying emotion through sonic texture (which is what we should be focusing on in our course) and without using obvious cliches.

Reference: Stephen Whittington. "Week 6 Music Technology Forum - Emo Music." Lecture presented at the Electronic Music Unit, University of Adelaide, 5 September 2008.

Sunday, August 31, 2008

Week 5 Audio Arts - Sound Art of Laurie Anderson


Laurie Anderson is a performance artist from Illinois whose work spans across many fields including visual art, poetry, photography, film, and music. She began performing in the 1970s and found fame in 1980 with the hit "O Superman". Her performances vary from spoken word to large multimedia events. She has published six books and her visual works have appeared in major museums. Laurie was the first artist-in-residence of NASA.


"Two Songs for Tape Bow Violin" combines spoken word, piano and tape bow violin (her own creation). The spoken word is a plain style that reminds me of a recording of John Cage's Einstein on the Beach. I think the tape bow violin is an interesting medium as it is both an instrument and an audio playback device and it is both a mechanical and electronic sound medium.

The work makes me feel quite sentimental. The piano and violin change the way we emotionally interpret the spoken words just as sound design may do for movie dialogue. I think the warped playback of the tape helps to portray a manipulated sense of time or a reliving of past events. I think this sound work was designed with a feeling in mind rather than musical form.

References: 
Christian Haines. "Week 5 Audio Arts - Sound Art." Lecture presented at the Electronic Music Unit, University of Adelaide, 26 August 2008.

"Laurie Anderson." Laurie Anderson Official Website. http://www.laurieanderson.com/downloads/LaurieAndersonBio.pdf (1 September 2008)

Saturday, August 30, 2008

Week 5 Forum - Negativland

I really enjoyed Stephen's Negativland presentation. Watching him nod his head happily and quietly sing along with his opening track was an absolute joy (and treat).

For me, Negativland was a chance to think about the nature of satire and humour in art. I'm sure Stephen would agree with me that there is a place for jokes in music (the other Steven has been known to exclaim that Haydn is a very funny man). I think that Negativland's strategy of satirising everything, whether they have a problem with it or not, has good spirit, because they are also satirising themselves and the idea of satire at the same time as making a point. I agree that "Christianity is Stupid" probably does not represent Negativland's actual opinion of the religion; in fact, they are possibly making fun of the very idea of satirising or complaining about religion. I am not offended at all by their work because I think it is obvious that they are never entirely serious.

That said, I think that some works were clearly more serious than others. For me, 'My Favourite Things' was the most light-hearted while 'Guns' was the most pungent and affecting (and not very funny).

Thanks!

Reference: Stephen Whittington. "Week 5 Music Technology Forum - Negativland." Lecture presented at the Electronic Music Unit, University of Adelaide, 28 August 2008.


Week 5 Creative Computing - Isolation (MIDI Reggae Version)

Isolation MIDI Reggae Version MP3

I thought it would be interesting to do reggae, as opposed to rock or something, because it's got a really loose feel that's hard to emulate with MIDI. I used a large dynamic range in the hi-hat programming to emulate the heavy accenting of a real reggae drummer. I also let the notes fall slightly off the grid, with the kick and rim click landing just behind the beat. I emulated strumming on the guitar, which also lags behind the beat. I programmed the bass by ear, using a varying amount of swing in different parts of the groove. I noticed while programming that quantized rhythms did not sound right at all in the bass part, due to the style's complex feel. I added a slide, which instantly makes the part seem less synthetic, as we're not used to hearing MIDI instruments imitate this technique.
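The behind-the-beat and jitter idea can be sketched as a simple transform over note events (the tuple format and all parameter names are my own simplification, not any particular sequencer's API):

```python
import random

def humanize(notes, push_ms=0.0, jitter_ms=8.0, vel_jitter=10, seed=1):
    """Loosen a quantized part: shift every note a fixed amount
    behind the beat (`push_ms`, e.g. for a laid-back kick or rim
    click) and add small random timing/velocity offsets. `notes`
    is a list of (time_ms, pitch, velocity) tuples."""
    rng = random.Random(seed)
    out = []
    for t, pitch, vel in notes:
        # fixed push behind the beat plus a little random jitter
        t2 = max(0.0, t + push_ms + rng.uniform(-jitter_ms, jitter_ms))
        # velocity jitter, clamped to the valid MIDI range 1..127
        v2 = min(127, max(1, vel + rng.randint(-vel_jitter, vel_jitter)))
        out.append((t2, pitch, v2))
    return out
```

A varying swing amount, as used in the bass part, would amount to making `push_ms` depend on the note's position within the bar rather than being constant.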

Apart from all that, I think that Joy Division should have replaced Ian with a trombone player and kept playing their old stuff.....

Reference: Christian Haines. "Week 5 Creative Computing - MIDI Programming and Humanization." Lecture presented at the Electronic Music Unit, University of Adelaide, 26 August 2008.

Tuesday, August 26, 2008

Week 4 Creative Computing - Logic!


NIN Ripoff Attempt MP3

In order to explore Logic a little and demonstrate a MIDI-sequenced project, I decided to shamelessly rip off Nine Inch Nails a little. Listening to them afterwards, this project sounds nowhere near as aggressive, but I don't think that's important.

I created most of the original MIDI data by hand and quantized it for a horribly rigid sound, then edited it to add some new material. I created the (barely audible) triplet by changing the grid spacing to 1/24. The hat build is obviously manual. The drums were created from two instances of Ultrabeat, one of which was distorted. The synths were slightly edited presets of the ES1, with some distortion for a bit of extra grit.

The bell sounds are FM, and there is a piano, an organ with a fast rotary speaker and another with a slow one.

Reference: Christian Haines. "Week 4 Creative Computing - Logic." Lecture presented at the Electronic Music Unit, University of Adelaide, 19 August 2008.

Monday, August 25, 2008

Week 4 Audio Arts - Analysing Sound for Ads

This toothbrush ad uses comical, overdone sound design, in the manner of classic cartoons, to accompany a similar use of slapstick comedy. The comical xylophone tinkling appears to mock the cat while it meows curiously. When the cat falls over, the crash includes cymbals, referencing the orchestral effects once used in cartoons, as does the classic brass wah-wah when the cat meets its fate. The swanky jazz is also reminiscent of older films and makes you feel ever smarter than the cat.


This higher-budget CGI advertisement uses hyper-real mechanical noises in a similar way to modern action films. When we enter the world of the foosball table, the bassy grinding sounds suggest a huge-scale version that could fit human soccer players inside it. The kicking sounds are also larger than life, creating a strong sense of power and toughness. Non-diegetic crowd sounds add atmosphere and heighten the unreality of the foosball players coming to life. The action sounds contrast with the sparseness of the beginning and end of the ad, which are set in the real world. The ticking clock makes this contrast more dramatic and foreshadows the event that is about to happen.

Reference: Christian Haines. "Week 4 Audio Arts - Advertisement Sound Design Analysis." Lecture presented at the Electronic Music Unit, University of Adelaide, 17 August 2008.

Saturday, August 23, 2008

Week 4 Forum - Indian Classical Music

Haysa
Joy, Humour - we can laugh by being happy.

Orchestra pianissimo major chord, twittering birds, crowd of people laughing, happy sigh, sustained synth tone

Adbhuta
Mainly Wonder - rasas.info says "When we understand that there are things that we do not understand, it makes life beautiful and exciting." The related ideas include curiosity and astonishment.

Windows 95 Startup sound, Major 9 Chord, Reverbed sigh, lush digital synth pad, rising shepherd tones, shiny FM tones

Veera
Courage; also determination, pride, concentration. Interesting: talking about your powers will reduce them, and courage does not mean independence.

Viking metal riff, V-I in a minor key, cello solo within an orchestra, solo military trumpet

Karuna
Sadness, pity and compassion. The aim is to try to feel a less self-centred sadness and have pity for others. 

Minor 7 chord, Cry of a single bird/any dying animal, a constantly evolving noise with unsettled filter sweeps which appears to be unable to resolve

Krodha
Anger.

Distorted sounds eg guitar/drums, diminished chord, growl, metal whack, grinding noises, white noise swell 

Bhibasta
Disgust

Loosely distorted flapping guitar, clipped recording, hihats, sounds of vomiting, spitting, violin bowed incorrectly, untuned year 8 band, very bright low analog synth tone

Bhayanaka
Fear and worry, nervousness, jealousy etc.

Raised 7 in melodic minor scale moving up to 1, listening to the radio when it's almost tuned to a station and you can only hear unrecognisable voices, whimpering animals, creaking of doors/any creaking, breathing

Shoka
Grief, remorse, sorrow, misery

Quiet minor flute melody, quiet swell of diminished triad, whispers that you can't quite make out, radio white noise (the non-scary kind)


Shanta
Peace. Also: calmness, exhaustion. Sounds like you get there by obtaining the things you want as well as repaying your debts to society and thus making everything harmonious and chilled.

Root position tonic triad with octave below, single analog synth bass note, soft filtered characterful white noise, jungle ambience, cat purr, wind in trees, sigh

I've also come across Raudra - anger, shringara - love, vibhatsya - disgust.

References: 
Stephen Whittington. "Week 4 Music Technology Forum - Indian Classical Music Emotions." Lecture presented at the Electronic Music Unit, University of Adelaide, 21 August 2008.

"9 Rasas - The Yoga of 9 Emotions." Rasas.info. http://rasas.info (31 August 2008)



"In adolescents, periods of sadness may come when one feels neglected and tries to produce pity in others." - mmmm agreed.

Tuesday, August 19, 2008

Week 3 Forum - First Year Presentations


Cool to see what everyone's been up to!

For me, one highlight was Josh's composition and sound for the short animation project. I think I noticed a clever segue or two where the music changed in mood while cleverly retaining elements of the previous section. Can't believe this is just a hobby for him.

It was great to hear everyone's music concrete, though as the second- and third-years commented, the genre can be a little harsh on the ears if you are subjected to it for long periods. I really enjoyed everyone's for different reasons, especially Jamie's trumpet skills.

Was nice to hear a contrasting AA project from Alex. Was definitely a good recording and sounds better on nice speakers (sounds incredible in studio 1). I reckon a wider mix would really help to make things sound clearer and more interesting.

I thought it was interesting that we all turned out these themed concrete pieces that were meant to tell a story (I was guilty but luckily no-one could recognize my kitchen sounds and plot of using the microwave). Then Stephen announced that concrete was more about abstracting real sounds to create something that may not have any explicit meaning. Hmmmm we look kind of stupid.....

We need to hear from Lisa...

Reference: Stephen Whittington. "Week 3 Music Technology Forum - First Year Presentations." Lecture presented at the Electronic Music Unit, University of Adelaide, 14 August 2008.

Monday, August 18, 2008

Week 3 Creative Computing - Modding the Spectral Freeze Crossfader

Here is the updated synth:
It was already quite complete with a fair few modulation options so I haven't changed much.

Additions:
Key -> Filter Option
  • Check the box to sync the filter frequency to the pitch of the note, so you can use the filter to emphasize certain harmonics or the fundamental and have it track the right range for any note.
FM Synthesis
  • You can modulate the pitch of the original oscillator with another high-frequency oscillator by a variable amount.
  • The frequency of the modulator is locked to the carrier and is defined by its ratio to the carrier.
  • The original oscillator can now be sent to the outputs, bypassing the spectral section, and a mix can be created between the original FM section and the freezer section (which is also fed from the FM section).
FM->Freezer Envelope Crossfade
  • You can make notes begin with the emphasis on the FM section and then fade towards the more atmospheric spectral section over a variable time.
FX
  • Recorded the audio demo with some FX.
  • One of them is this stereo modulated delay thing. 
  • Basically the delay time is modulated by an oscillator, and the oscillator is opposite in phase in each channel.
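The FM addition above can be sketched numerically. This is the generic two-operator FM formula with the modulator locked to the carrier by a ratio, as described; it is not the actual Bidule graph, and the function name and defaults are illustrative:

```python
import math

def fm_tone(freq, ratio=2.0, index=3.0, dur=0.5, sr=44100):
    """Two-operator FM: the modulator's frequency is locked to the
    carrier by `ratio`, and `index` sets the modulation depth."""
    n = int(dur * sr)
    mod_freq = freq * ratio          # modulator defined by its ratio to the carrier
    return [
        math.sin(2 * math.pi * freq * t / sr
                 + index * math.sin(2 * math.pi * mod_freq * t / sr))
        for t in range(n)
    ]

samples = fm_tone(220.0, ratio=1.5, index=4.0)
print(len(samples))
```

Integer ratios give harmonic spectra; non-integer ratios like 1.5 give the more clangorous tones that suit the spectral freezer.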
Reference: Christian Haines. "Week 3 Creative Computing - Modular Programming with Bidule." Lecture presented at the Electronic Music Unit, University of Adelaide, 12 August 2008.

Sunday, August 17, 2008

Week 3 Audio Arts - Deja Vu Sound Design Analysis

Deja Vu (2006)
I chose this scene because it contains cuts, tempo changes, and copious sound effects. All the sounds are very exaggerated, particularly the stylised machine sounds. This analysis helps me to understand how sound design is used to vary the rhythm and intensity of a film and make it more dramatic.

The first scene in the excerpt shows a character preparing to time-travel. It is hard to determine whether the rising synthesized whirs are intended as diegetic machine noises or non-diegetic mood effects. The keyboard-typing foley sounds acoustic. Crashes and volume swells increase the tension, and edited, reversed dialogue gives a sense of time travel.

Cut to a hospital where a character is being treated for serious injury. The rattling of his convulsing body changes in timbre and increases in amplitude as we cut to a closer shot of him. Non-diegetic drum hits add intensity. After the buzz of the defibrillator, most other sounds are suddenly replaced by a dramatic rumble.

We jump to a more peaceful time several hours later when the character regains consciousness. Quiet footsteps, the murmur of the TV and a single beeping machine contrast with the rising tempo of the previous section.

Reference: Christian Haines. "Week 3 Audio Arts - Movie Scene analysis." Lecture presented at the Electronic Music Unit, University of Adelaide, 12 August 2008.

Sunday, August 10, 2008

Week 2 Forum - Some of David Harris's favourite things.



The first item was David's latest composition "Terra Rapta", which was written for the Grainger String Quartet. I really enjoyed it, particularly the disjointedness of the rhythms. Perhaps it really helps having such great musicians interpreting the music, because they add an extra layer of musicality to the aggressive writing. I find Stephen's view on program notes interesting. I think they can particularly narrow your mind for instrumental music, where there is no text and your mind is free to wander. But I understand that this piece was written with a particular theme in mind, and so David was keen to express it.

The second part of the forum involved listening to an entire Schubert string work. It was definitely a great experience for me to listen closely for this amount of time (relating back to Stephen's discussion of listening habits last week); however, I didn't enjoy the majority of the piece much. I was very prepared to listen to it because I could tell that David really sees something special in it, and I wanted to try to see the magic that he sees, but I missed out.

References: David Harris. "Week 2 Music Technology Forum - My Favourite Things." Lecture presented at the Electronic Music Unit, University of Adelaide, 7 August 2008.

Week 2 Creative Computing - FFT Freeze Crossfader Synth

After experimenting with the "FFT Freeze" bidule, I attempted to build a synth around its glassy digital sound. I fed the FFT chain from an oscillator emitting the required frequency. The FFT freezer is triggered on note-on so that each new freeze captures the new note. I added a harsh, randomized peak filter between the oscillator and the FFT analyser so that every freeze has a different character.

To make the sound less static, I implemented a refreeze at regular time intervals. To hide the refreeze transitions, I added a second freezer with a freeze cycle 180° out of phase with the first, plus a crossfader between the two freezes, phase-synchronised so that refreezes are never heard and the sound stays liquid and musical. A compressor reduces the swelling caused by the primitive crossfader.
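The phase-synchronised crossfade can be sketched as a pair of gain curves: each freezer refreezes at the instant its own gain passes through zero, so the splice is inaudible. A minimal sketch (the function name and cycle length are mine, not from the Bidule patch):

```python
import math

def freeze_gains(t, cycle=2.0):
    """Equal-power crossfade between two freezers whose refreeze
    points are half a cycle apart. Each freezer refreezes exactly
    when its gain is zero, so the transition is inaudible."""
    phase = (t % cycle) / cycle           # 0..1 over one cycle
    g1 = math.sin(math.pi * phase)        # zero at phase 0 -> freezer 1 refreezes
    g2 = abs(math.cos(math.pi * phase))   # zero at phase 0.5 -> freezer 2 refreezes
    return g1, g2

# Freezer 1 is silent (and free to refreeze) at t = 0; freezer 2 at t = 1.
print(freeze_gains(0.0))  # -> (0.0, 1.0)
print(freeze_gains(1.0))
```

Because sin² + cos² = 1, the combined level stays roughly constant, which is why a cruder crossfade needed the compressor.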

Added amp and filter envelopes. 

I had heaps of issues trying to stop the patch glitching when polyphonised, possibly due to the several CPU-intensive FFT bidules. As you can hear, I still have some problems; however, the slow-attack pads this synth is suited to are generally click-free.

Audio Example - Includes delay and reverb.



Reference: Christian Haines. "Week 2 Creative Computing - Modular Programming." Lecture presented at the Electronic Music Unit, University of Adelaide, 5 August 2008.

Sunday, August 3, 2008

Week 2 Audio Arts - Soundscape Analysis


I attempted to record the interesting soundscape of the train; however, my phone's recording quality turned out to be unusable because few sounds could be distinguished. I then settled for a room soundscape containing various interesting elements, including a TV. I tried to stir up the cats because I hear most people are indifferent to cats, so they won't be offended.

For my notation system I tried to find graphical ways of representing many of the attributes of sound such that information can be precisely conveyed. I think the strength of the system is that it represents volume, prominent frequency range, distance, L-R placement, and kind (e.g. percussive, continuous, intermittent) without resorting to messy symbols or labels. Its weakness is that it is hard to represent the envelope of a sound using just line thickness. Issues like these could be improved upon by using better graphics software that allows tapered lines, fading colours, etc. to represent smooth changes over time; the notation system would be compatible with these improvements. Perhaps I should have used height for amplitude and thickness for frequency, as amplitude may be more important to present accurately.

Reference: Christian Haines. "Week 2 Audio Arts - Environment Analysis." Lecture presented at the Electronic Music Unit, University of Adelaide, 5 August 2008.

Week 1 Music Tech Forum - Listening Culture

Excessive volume levels
I think that this is a problem, and it may exist because it's easy to crank up an iPod to full volume without annoying the neighbours.

Lower listening quality
Possibly people are paying less attention to music when they listen to it. One thought is that there is nothing wrong with music designed to be listened to with reduced attention. Often its duration is longer than normal music, so you can still perceive the same amount of detail, though it is more spread out.

My Life in the Bush of Ghosts
Stephen was talking about the whole cocoon idea, where we bring our own music to provide comfort in unfamiliar situations or to shut out the scary outside world. Perhaps this is really a cultural shift towards people preferring their lives to have an artificial movie soundtrack (as Stephen describes it). Well, as scary as it sounds, I think this is half interesting. Maybe we are all just striving to make our lives like the stereotypes we see in the movies, but actually I think it's fun to crank Philip Glass, walk around the city and pretend you're living in Koyaanisqatsi. Also, observing the quiet businessmen on the train to a soundtrack of Mr. Bungle can be quite hilarious.

References: Stephen Whittington. "Week 1 Music Technology Forum - Listening Culture." Lecture presented at the Electronic Music Unit, University of Adelaide, 31 July 2008.

Week 1 Audio Arts - Windows 95 Startup Sound

Copy and Paste URL - http://www.angelfire.com/games5/clockmaster/themicrosoftsound.wav

by Brian Eno
This work is an example of sound design as it was created for a particular purpose in a product that has other features. Eno was given particular guidelines for creating the sound and it had to be suitable for its role within Microsoft's product.

This sound has the technical function of telling the user that the computer has nearly finished booting up, as well as the more psychological purpose of making the operating system seem helpful and user-friendly. To complement the improved interface, the operating system has a sound that contrasts with the harsh bleeps associated with more primitive systems and makes the user feel more at home. Familiarity with this sound may help users feel comfortable on all Windows 95 machines.

The sample contains a synthesized electric piano arpeggio followed by an echoing acoustic piano note and a rising and falling synthesized string chord. The sparse arrangement exudes a calm, futuristic mood. The combination of the synth, piano and stylistically fake strings juxtaposes old and new elements to create a sense of movement with sentimentality. This is supported by transitioning from sound to sound rather than layering them up. One consideration is that a sonically simple work may suffer fewer adverse effects from the usually mediocre playback equipment.

References: Selvin, Joel. "Q and A with Brian Eno." The San Francisco Chronicle, June 1996. http://www.sfgate.com/cgi-bin/article.cgi?file=/chronicle/archive/1996/06/02/PK70006.DTL (accessed 3 August 2008).

Friday, August 1, 2008

Week 1 Creative Computing - Bidule experimentation

I experimented with a few different modules to create a sound work. After trying the step sequencers, I decided to focus on the 'stochastic midi note list' as the source for most of my MIDI data. In this object-based environment, where the emphasis is on creative routing and processing, I think it makes sense to leave the source notes (to some degree) to chance rather than using copious amounts of sequencer objects.

Beat
  • Separate stochastic note list for each drum
  • Different settings for each drum
  • Used a transposer to change note from A to desired note
Toned backing sounds
  • Various synths fed by a sequencer or stochastics
FFT
  • Some sounds were created from FFT data created by analysing audio from a synth controlled by both stochastic note list and live midi input
  • FFT data resynthesized after processing
  • FFT data is converted to midi and triggers synths (claw and waveshaped drum)
  • The midi data is then looped back to the synth that feeds the FFT analysis to create an interesting data loop.
  • During the audio example performance I played some MIDI data live.
Occasional FX used eg reverb, delay, stereo spread reduction
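The stochastic-note-list approach above can be imitated in a few lines of Python. This is a loose analogue, not Bidule's actual algorithm; the function name, weights and ranges are made up:

```python
import random

def stochastic_notes(n, pitches, weights,
                     min_gap=120, max_gap=480, vel_range=(60, 115)):
    """Generate n weighted-random MIDI notes: pitch drawn from a
    weighted set, timing gaps and velocity from uniform ranges.
    Returns (start_tick, pitch, velocity) tuples."""
    t, notes = 0, []
    for _ in range(n):
        pitch = random.choices(pitches, weights=weights)[0]
        vel = random.randint(*vel_range)
        notes.append((t, pitch, vel))
        t += random.randint(min_gap, max_gap)   # irregular note spacing
    return notes

random.seed(42)
# a kick/snare/hat pool, hats weighted heaviest
print(stochastic_notes(4, pitches=[36, 38, 42], weights=[3, 2, 5]))
```

Feeding one of these lists per drum, then transposing each to the desired note, mirrors the beat-building approach described above.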

If only I'd experimented a bit more first, because I've only now discovered the wonders of parameter linking and the XY pad.

Thursday, June 26, 2008

Creative Computing Project - I Say Concrete Without a French Accent?

This piece explores the different musical outcomes brought about by the use of different equipment and associated techniques. The entire work is bound by the music concrete ideology and was created only from the sounds of food preparation equipment.

Part 1
This section follows the journey of a food preparer using traditional music concrete techniques. The microwave's whirring soothes the user into a relaxed state, then they are roused by the chaotic beeping of several timers and the boiling of the kettle. Processing includes reverb, delay, flanger, varispeed and compression. The style of the result is inspired by pioneering concrete artists such as Pierre Henry and Pierre Schaeffer.

Part 2
Sounds of a microwave, kettle, beaters, glass jar and bowl are intensely processed using unpredictable software such as Soundhack and Fscape and sequenced in Pro Tools and Reason. This part draws inspiration from modern electronic artists from the “IDM” subgenre.

Part 3
A return to the style and techniques of classic 50s concrete as in part 1. The dishes are washed, however one cannot resist making music with the sounds of the glasses and cutlery while undertaking this task. The main sounds used are running water, a gas stove being lit, the ringing of a struck knife and the sound of a glass being played percussively.

I Say Concrete Without a French Accent? mp3

Screenshot of Pro Tools session for part 2.

Wednesday, June 25, 2008

AA Recording Project - The Notorious Daughters 'Rain'

I recorded this three-piece band on 2 June. I put the drums in the live room in order to use a room mic, and the guitar amp in the dead room to isolate it. The bass was DIed and the vocals were overdubbed in the control room.

After auditioning the U87s, I chose the AKG C414s as overheads. We pulled the resonant skin off the kick drum to get a cleaner tone, as we weren't happy with the timbre with the skin on. Standard mic configurations were used on the rest of the kit, including an SM57 on snare, a Beta 52 on kick and Beta 56s on toms. An SM57 and an MD421 were used on the guitar amp, and the guitar track was double-tracked with identical settings and mic configuration to achieve the uniform, symmetrical wall of noise common in 90s grunge.

In the mix I applied EQ and compression judiciously to every track. The room mic was gated so that it opens with the snare, giving the snare in particular some room sound. I aimed to fit the kick underneath the bass guitar using EQ, and also used a compressor on the bass triggered by the kick to help clean up the low end.
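The kick-triggered bass compressor is a sidechain ducker: the kick's level, not the bass's, drives the gain reduction. A rough sketch assuming a simple one-pole envelope follower (all names and constants are illustrative, not the plug-in actually used):

```python
import math

def sidechain(bass, kick, threshold=0.3, ratio=4.0,
              attack=0.01, release=0.2, sr=44100):
    """Duck the bass whenever the kick's envelope exceeds `threshold`,
    clearing low-end space for the kick. `bass` and `kick` are mono
    sample lists in -1..1."""
    a = 1.0 - math.exp(-1.0 / (attack * sr))    # fast rise coefficient
    r = 1.0 - math.exp(-1.0 / (release * sr))   # slow fall coefficient
    env, out = 0.0, []
    for b, k in zip(bass, kick):
        target = abs(k)
        env += (a if target > env else r) * (target - env)
        if env > threshold:
            # gain reduction above threshold, softened by the ratio
            gain = (threshold + (env - threshold) / ratio) / env
        else:
            gain = 1.0
        out.append(b * gain)
    return out

bass = [0.5] * 1000
kick = [1.0] * 500 + [0.0] * 500   # one sustained kick hit, then silence
ducked = sidechain(bass, kick)
```

While the kick sounds, the bass is pulled down; as the envelope releases, the bass swells back, which is what cleans up the low end.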

Sunday, June 8, 2008

Week 12 Forum - Scratch Tutorials, Windowlicker

We watched some special features from the "Scratch" documentary. Many of the segments were merely amusing, but there were interesting parts too. The tutorial on how to rock a party was quite enlightening, because I've always wondered what a DJ is actually supposed to do at a party other than play some records. There is obviously much skill in tempo matching, especially in real time. It was quite a show of skill to watch the DJ keep two records in time while both sped up at different rates.

My other favourite part of the session was the Windowlicker video clip, which I hadn't seen before. I love the way that, although I assume the video parodies other videos that sell music with sex, it includes all the same glossy production techniques. It's great to see the two guys fail miserably and embarrass themselves while picking up chicks. Also fantastic is the exaggerated limousine shot, which corresponds to a glitchy effect in the style of Aphex Twin that also sounds like a crash with skidding. And everyone loves to see a happy man dance. Many of the events work well with the music and increase the satirical effect.

Reference: Stephen Whittington. "Week 12 Music Technology Forum - Scratch Special Features and Video Clips." Lecture presented at the Electronic Music Unit, University of Adelaide, 5 June 2008.

Saturday, June 7, 2008

Week 11 Forum - Philosophy of Music/Environmental Discussion

This week we discussed the philosophy of music which quickly led to a discussion of the environment in general.

Generally I think I kind of overlook the impact my music-making has on the environment and take the attitude of "I need it, therefore there's no point stressing about its impact". I found another guy's blog interesting, with its kind of existentialist vibe. But I disagree - I believe in right and wrong, and I think everyone makes a difference. I do think it's slightly strange that we always assume we know what is good and bad. Meursault was so addictive.

The most interesting thing I heard in forum was Stephen's comment about an Aboriginal group that decreased its palette of tools because it didn't need as many in a new location. Forgot that that was possible - doesn't seem to be the Western way. We have such an obsession with hoarding resources and increasing everything we can. I don't think we could ever stop now (without some kind of crisis), but it doesn't seem very worthwhile because it obviously doesn't actually make us happy. But I also agree with Stephen that technology has got us into this mess and hopefully it can get us out.

References: 
  • Stephen Whittington. "Week 11 Music Technology Forum - Philosophy of Music." Lecture presented at the Electronic Music Unit, University of Adelaide, 29 May 2008.
  • David Harris. "Week 11 Music Technology Forum - Philosophy of Music." Lecture presented at the Electronic Music Unit, University of Adelaide, 29 May 2008.

Tuesday, May 27, 2008

Week 10 Creative Computing - Meta Synth

Here is my Metasynth soundwork

There are several sections that I composed using the image synth and the effects room.

Part 1
Used the image synth's sampler instrument loaded with quote samples
Sequenced a piece using the colours to pan the sound

Part 2
Used the image synth with its default instrument and a pitch scale based on the harmonic series.
Much smudging was used to create cloudy ambience
Bleeps created on a second layer using the tool that creates repeated notes

Part 3
I used reverb, compression and inertia on part 5 to create a pad sound, then used the harmonics effect to add extra interest

Part 4
"Come" sample was resonant filtered in stereo then turned into a beat using the delightful "shuffler".

Part 5
Used some "grain" and "stereoecho" to freak out the straight rhythms of part 4 and then used inertia to make the hits ring. Reminds me of the first track of Confield by Autechre.

Sunday, May 25, 2008

Week 10 Audio Arts - Acoustic Guitar

I like this as a more lo-fi option. It might require bass roll-off to be usable. The harmonics of the notes stand out while the picking is very suppressed. It lacks punch and definition but is warm and soft, if a little dull.

The small-diaphragm condenser is less coloured and provides a quality hi-fi stereo field with lots of pick attack. A coincident pair of identical mics makes for a natural image. It would suit a solo situation due to the large, natural sound.

It's interesting how the thin, inarticulate 57 tone adds a bit of interest to the larger Neumann sound, which also captures the pick attack. Overall it lacks clarity and sounds thin, and the SM57 on the neck picks up unwanted fret buzz.

I really like the way the revoiced guitar (NT4, left) sits above the original (U87 right) track. The Neumann's smooth rich tone fills out the brighter NT4. Separate panning of the NT4's capsules adds depth. Nice rich stereo field would provide a good bed for an acoustic track.

I like the way there seems to be a clear image that separates the percussive strumming and the ringing of the steel strings. Emphasizes harshness of steel strings, a little thin.

Friday, May 23, 2008

Week 10 Music Technology Forum - Turntablism

This week's forum presentation was the DVD documentary "Scratch", about the history of turntablism. Stephen introduced the DVD by explaining that the turntable had been used as a musical instrument long before "turntablism"; one example is early, pre-magnetic-tape music concrete, where vinyl recordings were mixed and rerecorded.

I found the DVD very interesting. I never realized the extent of the DJ "solo" culture - I only knew them as accompanists to rappers with the occasional flourish of a complex scratch. The DMC world championships seem to hold considerable prestige for many.

The amount of musical exploration that has come from experimentation with a former domestic playback device is amazing, and I think it illustrates that interesting music can be gleaned from all kinds of areas if someone is willing to study them as the early DJs did.

The "Amen Break" video was further evidence of this point (and was also very amusing). It's even more impressive that there are entire genres heavily relying on this sample. Perhaps this 5 second recording could be considered their instrument. Many sample maestros write for the same sample just as many composers write for the same instruments.

Reference: Stephen Whittington. "Week 10 Music Technology Forum - Turntablism and the Amen Break." Lecture presented at the Electronic Music Unit, University of Adelaide, 22 May 2008.

Monday, May 19, 2008

Week 9 Creative Computing - SoundHack/FScape/NN19 GlitchFest

I tried to explore SoundHack and FScape as much as possible this week while creating a sampler instrument of glitchy effects that could be used in genres such as IDM in combination with fuller drum sounds. I started with my library of voice samples and began processing with both applications. Sometimes I chose files because they contrasted and I thought they would yield interesting results, while other times I chose randomly. Every so often I listened to the library and deleted the boring files.

I used SoundHack more as a utility, lengthening or pitch-shifting files with the phase vocoder and creating more synthetic tones with convolution and mutation. FScape was used to further process these sounds and erratically introduce more hardcore glitchiness. I found that the selection of audio files generally affected the result quite strongly, so experimentation was important for interesting results. The manual is hilariously vague at times and very technical at others.
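Convolution of two sounds, as in SoundHack, smears one file through the other: every sample of the first triggers a scaled copy of the second, blending both timbres. A toy time-domain sketch (SoundHack works in the frequency domain for speed; the function name is mine):

```python
def convolve(a, b):
    """Time-domain convolution of two mono sample lists: each sample
    of `a` triggers a scaled, delayed copy of `b`. The output is
    len(a) + len(b) - 1 samples long."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

click = [1.0, 0.0, 0.0]            # a unit impulse...
decay = [0.5, 0.25, 0.125]         # ...convolved with a short decay
print(convolve(click, decay))      # -> [0.5, 0.25, 0.125, 0.0, 0.0]
```

This is why the choice of input files matters so much: only frequencies present in both sounds survive strongly in the result.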


I ended up with 44 samples, which I imported into the NN19 and automapped (one sample per key). I then played the instrument for 45 seconds using the pitch and modulation wheels (the mod wheel assigned to a low-pass filter) and aftertouch (also assigned to the LP filter).

Sunday, May 18, 2008

Week 9 Audio Arts - The Viscous Groove of Ska

30 Second Ska Groove by Jamie and Miles

A group of us spent 3 hours recording 4 different excerpts and this was the one I chose to mix. The mics are:
  • 2 U87s in omni, spaced pair low over the kit
  • Beta52 inside kick 1 inch from beater area
  • Beta52 outside kick 30cm from resonant head
  • Sm57 on top snare angled inwards
  • Beta57 on bottom snare 2 inches from snare
  • Beta56 on hi-Tom
  • md421 on mid-tom
  • md421 on floor-tom
  • NT5 2 inches from top of hi-hats
  • c414 behind a baffle in the room
  • Avalon DI - Bass
  • md421 perpendicular to guitar amp
  • sm57 facing inwards - guitar amp
  • c414 facing at guitar amp
Mixing
Kick
  • Cut boxy lower mids, boosted bass, upper mid slap area
  • Positioned the kick higher than the bass
  • Gated for less mud
  • Compressed with long attack for slap
Snare
  • Cut boxy mid area
  • Boosted cutting treble
  • Gated for less mud
  • Compressed with moderate attack
Toms
  • Boosted higher frequency slap area
  • Cut excessive bass on some toms
  • Manually gated via editing
Overheads
  • Cut some bass, lower mids to decrease mud
  • Compressed slightly
Drum Bus
  • Compressed for pumping effect
Guitars
  • Blended together
  • Cut chirpy upper mids
  • Boosted treble, lower mids
Bass
  • Cut mids
  • Compressed with slow attack

Master Bus
  • Cut lower mids, compressed slightly
The snare seems to have just enough roominess while remaining clear in the busy fills. Guitars cut without harshness. The bass is rich. I think the kick sounds a little unnatural and isn't quite gelling with the bass. The ride bell cuts through.
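The gating applied above to the kick, snare and (manually) the toms can be sketched as a simple hold gate: mute below a threshold, stay open briefly after each hit. A toy model; real gates add attack/release ramps, and these names and values are mine:

```python
def gate(samples, threshold=0.1, hold=64):
    """Hard noise gate: output is muted until the input exceeds
    `threshold`, then stays open for `hold` samples after it drops
    below, reducing bleed and mud between drum hits."""
    out, open_for = [], 0
    for s in samples:
        if abs(s) > threshold:
            open_for = hold        # retrigger the hold counter on each hit
        out.append(s if open_for > 0 else 0.0)
        if open_for > 0:
            open_for -= 1
    return out

hit = [0.9, 0.5, 0.05, 0.02] + [0.01] * 100   # drum hit, then bleed
print(gate(hit, threshold=0.1, hold=2)[:6])   # -> [0.9, 0.5, 0.05, 0.0, 0.0, 0.0]
```

Manually gating the toms by editing, as described above, achieves the same end with full control over where the gate opens.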

Thursday, May 15, 2008

Week 9 Music Tech Forum

Today's forum was a complete eye-opener to the world of music tech. Each of the experienced music technologists was involved in so many interesting projects.

I particularly enjoyed Seb's "milk crate" music. I really enjoyed much of it, and I think it's a really interesting idea to "force" a lot of music out of a short amount of time. It's a fantastic way to avoid procrastinating and spending time making decisions when it is possibly more productive to experiment and plan as little as possible (something I'm poor at). The milk crate results support the value of this fast working style.

The water-based controller was also very very cool and obviously not a fully exploited concept yet, but I can understand why with the amount of zany stuff Seb is up to.

The second presentation was more chilled with some discussion of the relationship of music to science and history. Loved the quartz bowls, and also loved the theories about ancient constructions. Just been reading up on ancient greek musical philosophy for history and the importance ancient civilizations placed on music is really interesting. I'm beginning to think that there's not much difference between coincidence and real conspiracy anyway with regards to the wide use of the A# "natural" frequency.

Reference: Sebastian Tomczak, Darrent Curtis. "Week 9 Music Technology Forum - Recent Works." Lecture presented at the Electronic Music Unit, University of Adelaide, 15 May 2008.

Tuesday, May 13, 2008

Week 8 Creative Computing - Sampling


Sorry that it's mainly novelty noises......

I took around 12 very short phoneme samples from the quote and assigned them to sections of the keyboard. I found myself drawn to the snare- and hat-like "shhh" and "sss" sounds (no processing was used) and also an "mmmm" with a stable pitch. I rearranged the sample layout so that the percussive sounds would be easy to play and I had a large range of "mmm" to groove on. The loop function was used on some sounds. I assigned the rest of the phonemes to the rest of the keyboard as icing. I altered the root note of each sample using command-click so that some play back above their original pitch and some below.
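The root-note trick works because a keymapped sampler resamples by the pitch ratio between the played key and the sample's root: moving the root makes the same key play the sample faster or slower. A sketch of that relationship (the function name is mine, not any sampler's API):

```python
def playback_rate(note, root):
    """Resampling ratio for a keymapped sample: the played MIDI note
    relative to the sample's root note, in equal temperament.
    Raising the root lowers the playback rate, and vice versa."""
    return 2 ** ((note - root) / 12)

print(playback_rate(72, 60))  # octave above the root -> 2.0 (double speed)
print(playback_rate(48, 60))  # octave below the root -> 0.5 (half speed)
```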

For extra expression, I assigned controllers to modulation destinations as follows:
  • Aftertouch - filter frequency decrease + LFO level increase
  • LFO (noise wave) - panning
  • Modwheel - filter frequency decrease and resonance increase
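These routings all follow the same source-amount-destination pattern. A hypothetical sketch of how such a modulation matrix sums onto a filter-cutoff parameter (the function, names and depth values are my own assumptions for illustration, not Sampler's actual internals):

```python
# Hypothetical modulation matrix: each source is a normalized 0..1
# controller value scaled by a signed amount, summed onto the base cutoff.
def modulated_cutoff(base_hz: float, aftertouch: float, modwheel: float) -> float:
    """Both routings above *decrease* filter frequency,
    so the modulation amounts are negative."""
    at_amount = -2000.0   # Hz of cutoff drop at full aftertouch (assumed depth)
    mw_amount = -3000.0   # Hz of cutoff drop at full modwheel (assumed depth)
    cutoff = base_hz + aftertouch * at_amount + modwheel * mw_amount
    return max(20.0, cutoff)  # clamp to an audible floor

print(modulated_cutoff(5000.0, 0.0, 0.0))  # no modulation -> 5000.0
print(modulated_cutoff(5000.0, 1.0, 1.0))  # both maxed -> 20.0 (floored)
```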

I experimented with the envelopes, and while they would be very useful for creating interesting sounds, I did not use them much in this exercise: applying the same harsh envelope to every sample would make the sounds similar and stop them contrasting effectively.

Friday, May 9, 2008

Week 8 Audio Arts - Drum Recording

We recorded the in-house drum kit in two different multi-track configurations. I roughly processed the tracks, since processing is usual for drums, and I decided it would help me compare the possible results of each setup.

Overheads: U87s (cardioid), spaced pair
Kick: 2 Beta 52s, in hole and on skin
Snare: SM57 top, angled 45°; Beta 57 bottom
Hi tom: Beta 56
Mid and low toms: MD421s
Hi-hat: NT5
Room: AKG C414 (omni) in corner of room
  • Soft, polished sound 
  • Wide stereo spread
  • Separated elements
  • Reasonably natural
  • Room mic adds depth and body to snare
  • The kick does not have very much definition in slap, maybe too much air movement
  • Imaging of overheads perhaps a little unstable. Maybe too much separation of mics
  • Probably better for most modern, commercial pop/rock

1 U87 (omni) high over the kit
SM57 about 12 cm from the snare, facing inwards
Beta 52 about a foot back from the kick
  • Lo-fi but present tone
  • Kick sounds huge and natural and snare cuts nicely, though toms are not very loud or defined. 
  • Kit elements less separated due to further-away mics and no stereo panning
  • Kick and snare can be mixed quite loud while keeping the kit homogeneous. Important elements get priority.
  • Mono makes drums more compact so they could fit nicely into a mix without dominating
  • Nice natural ambience from omni overhead
  • Maybe useful for more vintage production styles or to sample in electronic pieces
Note: Sorry about the excessive bass - I mixed these on cans

Thursday, May 8, 2008

Week 8 Forum - Peter Dowdall - Audio Engineering, Session Management

This week experienced audio engineer Peter Dowdall spoke to us about many aspects of recording, editing and mixing for bands or advertising agencies.

First he discussed technical details and session management concerns on a recent recording of the "Mike Stewart Big Band" at EMU. It was good to hear a quality commercial recording done in EMU using nearly all in-house equipment. I was surprised at the amount of editing that Peter used even on Big Band music with skilled players. The editing was not audible and the resulting product was tight.

I found Peter's stories about his work in advertising and relating to clients interesting. He has had to record and edit without soloing tracks because the people sitting behind him in the control room needed to hear the whole mix. It's quite important to remember that clients don't know which edits are easy or difficult, so you have to foresee future requests and protect yourself, for example by creating submixes so vocals can be replaced without redoing the instrumental mix. I liked Peter's suggestion that sounds considered wrong today are likely to be fashionable tomorrow, and I appreciated advertising music more when he explained some of the art of achieving "maximum impact".

Reference: Peter Dowdall. "Week 8 Music Technology Forum - Audio Engineering and Session Management". Lecture presented at the Electronic Music Unit, University of Adelaide, 8 May 2008.

Wednesday, May 7, 2008

Week 7 Creative Computing - Sample Library

My compiled library.

Instrument Sound: Clarinet note becoming multiphonic
Found Sound: Clarinet case and accessories being dropped
Generated noise: Mid-range saw wave

Ascending filter sweep
  • Filtered a clarinet note and cut out a section
  • Timestretched it and pitch shifted for a chord
  • Automated a sweeping low pass filter and a peak band (for resonance). Two EQ plugins used for heavy boost
  • Pitched-down heavily filtered saw wave underneath
Descending filter sweep
  • Clarinet portion of above reversed
R2D2 Rotting in Hell
  • Sections of the clarinet case
  • Slowed down, lowered in pitch, 2 tracks
  • Reverbed
  • Automated peak EQ frequency by dragging knob insanely
  • Backwards reverb trail created from snippet, left channel pitchshifted up a semitone
Chilled Synth Pad
  • Many layers of pitch shifted saw wave sample with many layers of EQ and volume envelope
Panned Bleeping
  • Short toned portion of clarinet case drop (the metal mouthpiece cover rang) timestretched and pitch shifted, placed on multiple panned tracks
  • Backwards reverb
Descending flutter then back up
  • Heaps of tracks with systematically varied panning, EQ peaks and delay time
  • Clarinet note and clarinet case excerpts were placed on descending tracks
R2D2 Finally and Efficiently Dying in Hell
  • Buzzy sound created by clarinet with very short delay
  • Clarinet case section with long delay and reverb
Clarinet with glitchiness
  • Clarinet, its case and Pro Tools operator playing as a trio
Marching Beat
  • Beat from clarinet case with pitch shift, EQ, fades etc.
  • AmpliTube, delay, fades on clarinet
Glitchiness
  • Edited, reversed, timeshifted clarinet case
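The "buzzy" sound from pairing the clarinet with a very short delay is essentially comb filtering: mixing a signal with a copy of itself delayed by t seconds reinforces frequencies at multiples of 1/t, so a delay of around 2 ms imposes a buzz near 500 Hz. A tiny feedback-comb sketch (the delay time here is my own example, not the setting actually used):

```python
def comb_filter(signal, delay_samples: int, feedback: float = 0.7):
    """Feedback comb: y[n] = x[n] + feedback * y[n - delay].
    At sample rate sr, the buzz pitch is roughly sr / delay_samples Hz."""
    out = []
    for n, x in enumerate(signal):
        delayed = out[n - delay_samples] if n >= delay_samples else 0.0
        out.append(x + feedback * delayed)
    return out

sr = 44100
delay = sr // 500                  # ~2 ms of delay -> ~500 Hz buzz
impulse = [1.0] + [0.0] * 400
ringing = comb_filter(impulse, delay)

# The impulse recurs every `delay` samples, decaying by `feedback` each pass.
print(ringing[delay])              # 0.7
print(round(ringing[2 * delay], 2))  # 0.49
```

The shorter the delay, the higher the imposed pitch, which is why sub-10 ms delays read as timbre (buzz) rather than as echo.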


Saturday, May 3, 2008

Week 7 Audio Arts - Electric Bass Recording

We recorded 3 tracks simultaneously with Jamie playing to our previous track. We discarded the SM57 because it sounded thinner than the Beta 52. The amp was fed by a split from a Behringer DI box which also fed the Avalon preamp mic input. The three tracks were time-aligned later.

Beta 52, side of cone, perpendicular to amp face, 30 cm
Much upper mid presence, though this presence is slightly hard and cheap-sounding. Lacking stability in the bass frequencies. Very audible hum from the amp. Undesirable string noise stands out.

Solid sound with round warm bass. Has attack, without harshness. Maybe a little too much lower mids. Lacks upper mid presence which might be needed for some styles eg funk. Darker sound may sit well in a mix under other instruments.

Generally a nice compromise between the soft bass of the DI and the presence of the miked cabinet. The mic and DI signals perhaps sound a little separate still.

Mic and DI blend a little better. Notes sound much more even in volume compared to before, and some notes do not jump out anymore. More sustain, would probably sit much better in a mix. Attack is there but contained.

Thursday, May 1, 2008

Week 7 Forum - Tristram Cary

Tristram Cary was the founder of the Electronic Music Unit that we all know and love, and also an electronic music pioneer, developing synthesizers almost concurrently with Robert Moog. Last week Tristram passed away, and so Stephen presented a tribute forum session about him.

This wasn't immediately apparent, because the session began with a bleeping sound installation that referenced Tristram's time as a navy radar operator (though the 2nd and 3rd years couldn't see the connection).

Stephen explained some of Tristram's work, and then we watched a documentary about him and his fellow "Electronic Music Studios" members. I was surprised that I hadn't heard of any of these pioneers while the house of Moog gets so much attention.

In reply to Stephen's suggestion that the study of history is worthwhile: after spending two minutes with Cary's relatively simple "picnic" synthesizer (as used by Pink Floyd), I agree. I think that by looking at the origins of electronic music we can find new pathways that were never fully explored; in two minutes I heard sounds from the Synthi with character that I have never experienced from the software that dominates today. Another discussion I found interesting was the early electronic musicians' differing attitudes towards popular modern electronic genres.

Reference: Stephen Whittington. "Week 7 Music Technology Forum - Tristram Cary". Lecture presented at the Electronic Music Unit, University of Adelaide, 1 May 2008.