Sunday, March 30, 2008

Week 4 Audio Arts - Acoustic Guitar Recording

Jamie and I recorded acoustic guitar in several different ways in the dead room. In each case the microphone(s) were fed into the Mackie desk preamps and then into Pro Tools. Levels were matched as closely as possible using preamp gain on the way in, and then more precisely in Logic for a fair comparison.
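
The level-matching step is essentially RMS gain matching. A minimal numpy sketch (function and signal names are my own, not anything from the session):

```python
import numpy as np

def match_rms(signal, reference):
    """Scale `signal` so its RMS level matches `reference`."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    gain = rms(reference) / rms(signal)
    return signal * gain, 20 * np.log10(gain)  # scaled signal, gain in dB

# Example: a quiet take matched to a louder one
t = np.linspace(0, 1, 44100)
loud = 0.5 * np.sin(2 * np.pi * 220 * t)
quiet = 0.1 * np.sin(2 * np.pi * 220 * t)
matched, gain_db = match_rms(quiet, loud)
```

Doing this after the fact in the DAW (rather than only with preamp gain) means the comparison isn't biased by one mic simply being louder.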

The microphones used were:
  • Rode NT5 - small diaphragm condenser
  • Shure SM57 - cardioid dynamic
Samples:
NT5 at 2 inches
Nice solo hi-fi sound with plenty of treble and bass. Perhaps too much bass due to proximity effect.

SM57 at 2 inches
More punchy but less detailed on the transients compared to NT5. Less treble and bass. Perhaps a solid sound to sit in a mix nicely without too much extraneous treble and bass information. More preamp gain needed.

A more natural sound focussed in the upper mids with less treble and bass. More lo-fi vibe that might be very nice in some production styles.
Seems quite scooped in mids compared to above sample. Mainly included to demonstrate the differences with above sample.
Stereo image seems very unstable. Perhaps the mics were too close or at too great an angle. Unwanted string/finger/fret noises seem to be exaggerated.

Saturday, March 29, 2008

Week 4 Forum - Student Presentations Part II

This week we heard work by the remaining second- and third-year students.

The first presentation was about a granulator processor coded in Max/MSP. It was extremely cool. I think it could be a really interesting live performance tool because it makes cool electronic sounds in real time using audio that is also recorded on the fly - more interesting than triggering samples.

Another interesting presentation was about some dance music that someone had created. A discussion followed about whether Max/MSP is useful for listenable/commercial music. Some people expressed the view that this experimental, program-it-yourself software is only useful for obscure music, and that more traditional tools make for a more efficient workflow when creating commercial music.

One imaginative music tech student created a surround-sound installation in an art exhibition space. I thought the project was really cool and I'd love to do something like that one day.

Finally, the last two presentations were about performance art, again using tools to manipulate audio in real time for live performance. The ambient guitar piece achieved some really cool sounds, and the maker of the less ambient piece demonstrated uber micro in a furious battle with various audio software.

Reference: Stephen Whittington. "Week 4 Music Technology Forum - 2nd and 3rd Year Presentations." Lecture presented at the Electronic Music Unit, University of Adelaide, 27 March 2008.

This post was edited to conform to the word limit policy on 3/4/08

Sunday, March 23, 2008

Week 3 Creative Computing - SPEARing the paper samples

  • Start with paper rubbing sample
  • Rectangles were transposed and shifted to form dense tones
  • Tones were dragged around to create "melody"
  • Overall pitch lowered
  • Sculpted existing sound to cut treble
Alien Barn Dance
  • Original sample was a piece of paper being flapped
  • Selected each flap individually and changed its pitch using the transpose tool so that the pitch increases
  • Selected a chunk of audio and used transpose to turn it into a dense rectangle. Copied and pasted square and moved it to create chirping melody.
  • Created synthy pad by timestretching the last flap
  • Copied and pasted entire document and transposed down then moved up
  • Copied and pasted the entire document again, then selected and rhythmically displaced frequency bands
  • Copied and pasted bass hits creatively
  • Began with short snippet and timestretched it
  • Dragged parts up and down to change pitch
  • I did similar things to "Alien Spew"
  • More copying and pasting and pitch shifting to create blips
  • Transposed all the audio down
  • Cut time gaps in it to make a stuttering effect
  • Selected all the audio, copied and pasted it and timestretched it to be shorter/faster
  • Dragged these copies into different frequency bands at different times over the original
  • Cut down into the audio using lasso to make a filter sweep
  • Dragged rectangles of audio around to make high pitched drones
  • Boosted lower frequencies
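
SPEAR works on sinusoidal partials rather than raw audio, so the transpose and timestretch moves listed above reduce to simple arithmetic on partial data. A toy sketch (the tuple layout and function names are my own invention, not SPEAR's actual format):

```python
# Each partial is modelled as (start_time, duration, frequency_hz, amplitude).

def transpose(partials, semitones):
    """Shift every partial's frequency by a number of semitones."""
    factor = 2 ** (semitones / 12)
    return [(t, d, f * factor, a) for (t, d, f, a) in partials]

def timestretch(partials, ratio):
    """Stretch start times and durations; frequencies are unchanged."""
    return [(t * ratio, d * ratio, f, a) for (t, d, f, a) in partials]

# A short "flap" of two partials
flap = [(0.0, 0.1, 440.0, 0.8), (0.05, 0.2, 880.0, 0.4)]
lowered = transpose(flap, -12)   # one octave down
pad = timestretch(flap, 8.0)     # long synthy pad from a short flap
```

Because pitch and time are independent dimensions of the partial data, timestretching never smears the pitch and transposing never changes the timing - which is what makes these edits so clean in SPEAR.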

Friday, March 21, 2008

Week 3 Forum - 2nd and 3rd year presentations

This week at forum we watched presentations by second and third year students of their projects from last year. They inspired me and I look forward to the open possibilities of the course in later years.

I like the way that the film project had sections which were created with a theme in mind even if this theme is not explicitly detailed in the film. The middle section was particularly musically interesting.

The projects that used Max/MSP seemed like they would be quite different to undertake in practice, with the process quite removed from the music and the musical result unpredictable by design. Much of the skill seems to lie in coming up with ideas to try and then exploring what sounds your program can make. I find it interesting that the musical result is not necessarily reached by imagining it first and aiming for it.

For sheer musical enjoyment I really liked the project with Iranian vocals. I was very impressed that the entire backing was performed in one take, with only a small pre-sequenced component.

Can’t wait to have a crack at a longer project myself.

Reference: Stephen Whittington. "Week 3 Music Technology - 2nd and 3rd Year Presentations." Lecture presented at the Electronic Music Unit, University of Adelaide, 20 March 2008.

Edited on 4/4/08 to conform to the word limit policy

Week 3 Audio Arts - Headphone Monitoring

When I was practicing creating headphone mixes, the live room was in use, so I checked the results by plugging my headphones directly into the amplifier in the control room. I successfully monitored my voice through a microphone. As demonstrated by David during the seminar, the headphone mix is easily accessed in the live room via Studio 2 ports 3 & 4.

Thoughts on Headphone Monitoring
  • If monitor mixes are to be created from analog input channels, record-armed tracks should be muted in Pro Tools to avoid monitoring through two separate paths.
  • Two aux sends may be used per monitor mix to provide stereo.
  • Aux sends should be set to pre-fader so that the engineer can listen to things at will without affecting the musicians' mixes.
  • As the inserts on the Mackie desk sit after the recording outputs, effects (e.g. a compressor) may be patched into the signal path for monitoring purposes while the dry sound is recorded.
  • It is very important for the sound engineer to pay close attention to the musicians' headphone mix so that they feel comfortable and can perform to their full ability.
  • The aux sends (and therefore the headphone mixes) can be monitored in the control room using the aux solo button for each send.
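
The pre-fader point can be sketched as simple gain stages. This is a toy model in Python (the `Channel` class and its names are my own, not a real console API):

```python
class Channel:
    """One mixer channel feeding both the mains (via fader) and an aux send."""

    def __init__(self, fader=1.0, aux_send=1.0, prefader=True):
        self.fader = fader          # channel fader gain
        self.aux_send = aux_send    # aux send gain
        self.prefader = prefader    # send tapped before or after the fader

    def monitor_out(self, signal):
        """Level feeding the musician's headphone mix."""
        if self.prefader:
            return signal * self.aux_send           # fader has no effect
        return signal * self.fader * self.aux_send  # fader alters the mix

ch = Channel(fader=1.0, aux_send=0.5, prefader=True)
before = ch.monitor_out(1.0)
ch.fader = 0.0   # engineer pulls the fader right down to audition something
after = ch.monitor_out(1.0)
# with a pre-fader send, before == after: the musician's mix is untouched
```

A post-fader send would have gone silent the moment the fader moved, which is exactly why cue mixes are tapped pre-fader.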
Reference: David Lokan. Lecture presented in the EMU Space, University of Adelaide, 18 March 2008.

Monday, March 17, 2008

Week 2 Creative Computing - Paper Samples

I ended up with 6 finished samples.

  • An amalgamation of a variety of mainly glitchy segments
  • Fast clicking is paper rubbing against itself and catching momentarily
  • The pan function was used on very short snippets of audio
  • The ending was created by repeatedly pasting an audio snippet but cutting it slightly shorter each time

  • Created by copying and pasting a limited number of samples
  • Samples were cut and pitch shifted to provide different drum functions
  • Cut to equal length
  • Reverb selectively applied

  • Two original samples
  • The first sample was divided into segments and the pitch of each segment changed so that the overall pitch gradually decreases
  • The second section is a short snippet of ripping that was continually pasted and increased in pitch each time

  • A scrunching sample with much backwards delay and reverb
  1. The sample was reversed
  2. Silence was added to the end to contain the reverb/delay tail
  3. Delay and reverb were applied and bounced
  4. The sample was reversed again

  • After pitch shifting a sample of paper waving I noticed that there was a mild ring to it
  • I emphasized the ring with an EQ
  • I selected sections of the sample and pitch shifted and panned them to create a disorientating effect

  • I took a paper hit sample and reversed only one channel
  • I took a small part of this swish and looped it for a period, creating a stereo stuttering effect
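
The four numbered backwards-delay/reverb steps above can be sketched in numpy with a synthetic exponential-decay "reverb" (a toy impulse response standing in for the real plugin):

```python
import numpy as np

def backwards_reverb(x, ir_len=1000, decay=0.995, tail_pad=1000):
    reversed_x = x[::-1]                                     # 1. reverse
    padded = np.concatenate([reversed_x, np.zeros(tail_pad)])# 2. room for tail
    ir = decay ** np.arange(ir_len)                          # toy decaying IR
    wet = np.convolve(padded, ir)                            # 3. "reverb"
    return wet[::-1]                                         # 4. reverse again

hit = np.zeros(2000)
hit[1500] = 1.0                # a single paper hit late in the clip
out = backwards_reverb(hit)
# the reverb tail now swells *before* the hit instead of decaying after it
```

Reversing twice means the causal decay of the reverb ends up running backwards in time relative to the hit, which is the whole trick.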

Saturday, March 15, 2008

Week 2 Audio Arts - Startup/Shutdown procedures for Studio 2

I booked a couple of sessions this week to practice the art of safely powering up and down all the equipment. By following the instructions on the EMU website I managed to turn on the necessary equipment and achieve signal flow from a microphone and a guitar into Pro Tools without any dangerous clicks through the monitors.

The startup order makes sense because it essentially works through the signal chain towards the monitors. A pop emitted by a piece of gear being turned on is only ever transmitted down the chain to a unit that is turned off and can't be damaged.

When I began trying to patch in outboard equipment I initially had issues when I accidentally mixed up the input and output jacks on the patch bay. This should become easy to avoid, as outputs are always on the top row and inputs on the bottom row. I monitored through the Genelec speakers; however, headphone monitoring should be easy to patch by feeding the headphone amp with the mixer outputs, or by creating cue mixes using aux sends. One issue in live monitoring is that a signal may be fed to the monitors through its mixer channel and Pro Tools simultaneously. To avoid hearing both signals mixed (they will probably sound phasey), either Pro Tools or analog mixer monitoring should be disabled.
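
That "phasey" sound has a simple explanation: the DAW path is slightly delayed, and summing a signal with a delayed copy of itself comb-filters it. A numpy sketch with illustrative numbers (the 0.5 ms latency figure is an assumption, not a measured value):

```python
import numpy as np

sr = 48000
delay = 24                  # 0.5 ms of converter/DAW latency, in samples
t = np.arange(sr) / sr      # one second of time

def summed_level(freq):
    """Peak level of a sine monitored through both paths at once."""
    direct = np.sin(2 * np.pi * freq * t)
    delayed = np.sin(2 * np.pi * freq * (t - delay / sr))
    return np.max(np.abs(direct + delayed))

notch = sr / (2 * delay)    # 1 kHz: delay is half a period -> cancellation
boost = sr / delay          # 2 kHz: delay is a full period -> reinforcement
```

Every frequency where the delay lands on half a period is notched out and every multiple of the full period is doubled, which is exactly the hollow comb-filter colouration you hear.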


References: 
  • Studio 2 Guide, EMU website, accessed 15/3/08 - http://emu.adelaide.edu.au/resources/guides/spaces/pdfs/studio2.guide.pdf
  • David Lokan. Lecture presented on 11/3/08

Week 2 Forum - Blogging Discussion

This week in forum we discussed blogging and whether it represents the democratisation of the internet or not. We also talked about our use of blogging in the music technology courses.

After Stephen explained some useful terms for the discussion, e.g. "bloggorrhea", "blogademia" and "blog fodder", in a very convincingly serious tone, we began to discuss blogs, YouTube, etc. (which I think are essentially the same for the purposes of this discussion).

It was raised that as the amount of information on the internet increases, its value decreases, because the supply becomes so great compared to the demand. This relates to the idea that the internet is "communist" in that it allows small players with little wealth or influence but good ideas to compete equally with larger players in providing information, entertainment and services. Christian suggested that this isn't valid because what matters is how the information is found, i.e. with search engines, which a webpage can be optimized for. I agree that there are ways established forces can maintain their influence using their financial advantage; however, I do think that in some cases the internet evens the balance of power between the individual and the corporate machine.

Reference: Stephen Whittington. Forum presentation at the Schulz Building, University of Adelaide, 13 March 2008.

Edited on 4/4/08 to conform (exactly!) to the word limit policy

Week 1 Forum - The Synergy Project

This week we attended "The Synergy Project" which was a presentation about collaboration between people of different skills in technological art (as far as I could tell anyway). There were several speakers who each talked about their own projects.

I particularly enjoyed Matthew Gardiner, creator of mechanical origami flowers amongst other things. One of his projects had a musical aspect to it in that the electronic flowers were accompanied by a percussionist who played a musical score that was dynamically generated depending on various factors. I thought his interest in making art that is controlled by media such as news feeds or weather reports was very cool. I prefer the idea of things that react in some way to the world instead of depending on pure chance.

Ross Bencina, creator of AudioMulch (live electronic performance software), was also great, though I would much rather have watched him use AudioMulch than talk about it. He talked about collaboration in electronic music and experimental electronic instruments, and I'm excited that this is becoming more possible, as I haven't read much about electronic musicians working together before. I guess this is one aspect of the shift from programming towards real-time performance.

Reference: Matthew Gardiner and Ross Bencina. Presentations at The Synergy Project, Sir Hans Heysen Building, University of South Australia, 7 March 2008.

Edited on 4/4/08 to conform to word limit policy

Thursday, March 6, 2008

Week 1 Creative Computing - Creating a MIDI network over LAN

After opening "Audio MIDI Setup", we created a new "session" on Jamie's computer, which I then connected to. "MidiKeys" was used to test communication over the network. On Jamie's computer, MidiKeys was set to listen to the Novation controller and "thru" the information to the network session. On my computer it was set to listen to the network port and, as expected, when Jamie's Novation controller was played, MidiKeys lit up on my computer.

To demonstrate the practical uses of a MIDI network we used Logic as a sequencer on Jamie's computer. I opened the (primitive) synthesizer application "SimpleSynth" on my own computer, and MIDI information recorded in Logic could be played back using my computer for the synthesis. This workflow could be useful for distributing tasks between multiple computers in a sequencing setup involving very CPU-intensive virtual instruments. A MIDI network could also be used to share MIDI timecode between multiple sequencing computers, and to share outboard MIDI-controlled hardware in a multi-computer, multi-user environment.

During our general experimentation the latency did not register on the latency bar, which suggests that this setup would be practical for musical use, as the delay in communication should be unnoticeable.


One issue we came across was the accidental creation of a MIDI loop, which quickly crashed Logic and finally created some action in the latency bar of Audio MIDI Setup. We resolved the problem by disengaging the "thru" function that we were experimenting with in MidiKeys.
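
The runaway thru loop can be sketched as a toy simulation (the function and names are illustrative; real MIDI software has no built-in hop limit, which is why the loop is fatal in practice):

```python
def run_network(thru_enabled, max_hops=100):
    """Count how many times one note event is delivered before it dies."""
    deliveries, hops = 0, 0
    pending = ["note_on"]          # one key press enters the network
    while pending and hops < max_hops:
        msg = pending.pop()
        deliveries += 1
        hops += 1
        if thru_enabled:           # the receiving endpoint echoes it back
            pending.append(msg)
    return deliveries

looped = run_network(thru_enabled=True)    # hits the hop cap: runaway loop
safe = run_network(thru_enabled=False)     # delivered once, then silence
```

With thru enabled at both ends, a single note event circulates indefinitely, so the traffic (and CPU load) grows until something gives way - here, Logic.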

Wednesday, March 5, 2008

Week 1 Audio Arts - Hypothetical Pink Floyd Recording

In order to record Pink Floyd with all instruments playing live, some tradeoff may be needed between the vocal recording and live band chemistry. Ideally, Waters and Gilmour would sing in isolation booths so that a condenser microphone would be practical (considering its lower directionality). If the band preferred to play in the same room together, then a dynamic microphone could be substituted for the U87 to decrease spill. In this case, amplifiers could be placed away from vocal microphones, or possibly in an isolation booth, to decrease spill. Baffles could also be used to isolate amplifiers in the main room; however, they may not be suitable around the drumkit, as Nick Mason may need eye contact with the other players. Fairly traditional mic choices/techniques are used for a classic sound. Tape could be substituted for Pro Tools for a more traditional Pink Floyd sound.

Microphones Required

  • Vocals - Gilmour - Neumann U87
  • Vocals - Waters - Neumann U87
  • Backup Vocals - RCA 44BX x3 (for a mellower sound that keeps the main vocals in front)
  • Guitar - Sennheiser MD421 close and Royer R-121 slightly back
  • Bass - Electro-Voice RE20 in conjunction with DI
  • Organ Leslie top - Shure SM57
  • Organ Leslie bottom - Sennheiser MD421
  • Drum Overheads - AKG C414 x2, spaced pair
  • Snare Top - Shure SM57
  • Snare Bottom - Sennheiser MD441
  • Toms - Sennheiser MD421 x3
  • Kick Inside - AKG D112
  • Kick Outside (further back) - Electro-Voice RE20

Other Equipment
  • DI box
  • Hammond B3 with Leslie
  • Pro Tools - at least 24 channels of A/D
  • 24 channels of preamp
  • Approximately 20 mic stands and cables
  • Approximately 7 pairs of isolated headphones
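
As a sanity check on the channel count, tallying the microphone list above (a quick sketch; the DI rides its own channel alongside the bass mic):

```python
# One entry per line of the microphone list, plus the bass DI.
inputs = {
    "vocals_gilmour": 1, "vocals_waters": 1, "backup_vocals": 3,
    "guitar": 2, "bass_mic": 1, "bass_di": 1,
    "leslie_top": 1, "leslie_bottom": 1,
    "overheads": 2, "snare_top": 1, "snare_bottom": 1,
    "toms": 3, "kick_inside": 1, "kick_outside": 1,
}
total = sum(inputs.values())   # 20 inputs, comfortably inside 24 channels
```

Twenty simultaneous inputs matches the "approximately 20 mic stands" estimate and leaves four spare A/D channels for room mics or retakes.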

Studio Requirements
  • Isolation booths for Waters, Gilmour and backing singers
  • Assistant Engineer