AF’s Weblog

June 14, 2012

Top 10 Things That Can Never Be Taught Often Enough In Audio

Filed under: Live Sound — audiofanzine @ 8:48 am

10. Musicians feel most comfortable and play best when they hear what they need to hear on the stage. Of course, the experienced monitor guys and recording guys already know this. But it’s something for those less experienced to think about. No, it’s not about how much power you have or what kind of monitor wedges you use. It’s about psychology.

And I think it’s true that if you become good at monitors and understand how to please musicians, you’re 90 percent of the way to becoming a good mix engineer.

Sure, the last 10 percent might be the “magic” but you can’t make magic without the basics.

9.  Sound travels at about 1,130 feet per second in air at sea level and 68 degrees F. This is important because if you understand how sound propagates, you’ll automatically know more about microphone placement, setting delay towers, and things like delaying the mains to the backline. You should also know that the speed changes with temperature above all, and only slightly with humidity; altitude matters mostly because the air up there tends to be colder. (If you don’t know the numbers, it’s a good idea to look them up.)
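
To make the propagation math concrete, here’s a minimal Python sketch using the standard first-order approximation for dry air (speed ≈ 331.4 + 0.6 × T m/s, with T in degrees C); the 150-foot tower distance is just a made-up example:

```python
# Sketch: first-order speed-of-sound formula for dry air, and the
# delay-tower timing it implies.

def speed_of_sound_fts(temp_f=68.0):
    """Approximate speed of sound in air, in feet per second."""
    temp_c = (temp_f - 32.0) * 5.0 / 9.0
    return (331.4 + 0.6 * temp_c) * 3.28084  # m/s -> ft/s

def delay_tower_ms(distance_ft, temp_f=68.0):
    """Delay (ms) to time-align a tower 'distance_ft' feet past the mains."""
    return distance_ft / speed_of_sound_fts(temp_f) * 1000.0

print(speed_of_sound_fts(68.0))   # ~1,127 ft/s, close to the 1,130 figure
print(delay_tower_ms(150.0))      # ~133 ms for a tower 150 ft downstream
print(speed_of_sound_fts(95.0))   # ~1,156 ft/s on a hot outdoor stage
```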

8.  The Inverse Square Law. You know, the thing about doubling the distance from a source cutting the acoustic intensity to one quarter (a 6 dB drop), right? This applies all over the place, from mic technique to loudspeaker arrays. And it determines how much power you will need from the power amplifiers.

For instance, if you normally cover an audience at 20 to 60 feet from your stacks, but at the next gig the audience will be 40 to 100 feet away, how much more power will you need to maintain about the same level out in the crowd? About four times as much! Maybe think about delay stacks (see #9).
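
If you want to verify that “four times” figure yourself, the arithmetic is just a distance ratio squared; a quick sketch:

```python
import math

# Inverse square law sketch: doubling distance costs 6 dB of SPL, and
# holding SPL constant at the listener takes (d2/d1)^2 times the power.

def spl_change_db(d1_ft, d2_ft):
    """SPL change (dB) moving from d1 to d2 feet from a point source."""
    return 20.0 * math.log10(d1_ft / d2_ft)

def power_multiplier(d1_ft, d2_ft):
    """Amplifier power multiplier to hold the same SPL at d2 as at d1."""
    return (d2_ft / d1_ft) ** 2

print(spl_change_db(20, 40))      # -6.0 dB if the power stays the same
print(power_multiplier(20, 40))   # 4.0x at the front of the audience
print(power_multiplier(60, 100))  # ~2.8x at the back row
```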

Let’s see some more pointers…

2. Grounding. Let’s not mince words here: this is a subject you need to understand. If you have more than one path to ground in your audio system, and those paths don’t sit at exactly the same potential, current will circulate between them (a ground loop) and you will have problems with hum and buzz.

Related to this is how you terminate your connections, especially if any part of the system goes back and forth between balanced and unbalanced terminations.

It’s also a good idea to learn the sonic signatures of the different kinds of hum and buzz, so you can troubleshoot faster when the time comes. Some types of buzz are not grounding problems at all; they may be power supply issues, for instance.

1.  Gain structure, baby. This is the main one, the real deal. The thing that, if you never learned it, don’t understand it, or have forgotten it, will get you into more trouble than anything else. Get it wrong and there will be more noise and/or more distortion in the system, and less gain before feedback, too.

So here’s the deal: every input and every device has an optimum range of levels it wants to work with. If you feed something a signal that is too low, you have to make up that level somewhere, and in doing so you’ll bring up the noise floor more than you should. And that noise will be in your signal from then on.

Oh, sure, there are noise reduction devices you can use, but why do that when proper gain structure will take care of it for you? And really, we should use the least processing possible to get the job done because things sound better that way.

Alternatively, if an input or a mix bus is fed too much signal, headroom runs out, which means you’re adding distortion. And this, too, cannot be removed later. Artistically adding distortion via plug-ins, hot-rodded guitar amps or certain outboard gear can be cool. Adding it by slamming your inputs or your mix bus is not cool.

For instance, if a wireless microphone receiver’s output can be set to line level, but you set it to mic level and connect it to a mic input on your mixer, you will have more noise than if you had connected the line output to the line input. Why? Because you’re essentially padding the output down and then boosting it right back up with a high-gain mic preamp.
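
Here’s the gain-structure arithmetic behind that example as a sketch; the 40 dB pad and the preamp noise figure are assumed round numbers for illustration, not the specs of any particular unit:

```python
# Sketch: pad a line-level signal down to mic level and the mic preamp
# must add the gain back, boosting its own noise along with the signal.
# All figures below are illustrative assumptions.

signal_dbu = 0.0         # hypothetical line-level receiver output
preamp_ein_dbu = -125.0  # hypothetical preamp equivalent input noise

# Case A: line output into a line input, roughly unity gain.
snr_line_db = signal_dbu - preamp_ein_dbu          # 125 dB

# Case B: 40 dB pad down to mic level, then 40 dB of mic gain to make
# it up; the preamp's noise now sits 40 dB closer to the signal.
snr_mic_db = (signal_dbu - 40.0) - preamp_ein_dbu  # 85 dB

print(snr_line_db, snr_mic_db)   # 40 dB of signal-to-noise thrown away
```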

Sure, sometimes you might want to put the signal through a transformer or other “good” distortion device—just be aware that from a gain structure point of view, this is not ideal.

OK, that’s the list. If you’ve already mastered these things, great! You’re probably doing better mixes, with more gain before feedback, better coverage and happier musicians than those who don’t. But please don’t rest on your laurels – get out there and learn as much as you can.

Those of us going to your shows will know it when we hear it!

To read the full detailed article see:  Top 10 Things That Can Never be Taught Often Enough in Audio

October 14, 2009

Basics of Acoustics: Time (I)

Filed under: Instructional articles — audiofanzine @ 6:01 am

Time (I)

With stopwatch in hand, our perception of time seems straightforward. But in everyday life we’re not always watching the clock, and everyone knows that the passage of time is relative. It differs from one person to another and especially from one activity to another: An hour spent watching a great movie doesn’t feel as long as an hour in traffic.

Fig. 1a: Sound signal with reverb. Note the progressive attenuation of the sound level.

Scientists may conceive of time in seconds, but most musicians feel it in a more fluctuating manner: in the speeding up of a tempo, or in a note that drifts slightly off pitch under stress. In fact pitch, which is defined by frequency, is a value linked to time and depends on our perception of a second. If that second seems longer or shorter, the note can seem sharp or flat.

It’s said that during the Middle Ages, long before the invention of the metronome in 1816, a person’s pulse was used as the tempo reference. It was therefore better to choose a musician who was calm.

Fig. 1b: The same signal through a reverse effect: the maximum level is at the end of the signal.

An Experiment

For those of you who remember magnetic tape: a piano note played in reverse doesn’t sound at all like a piano, and a verse of Shakespeare in reverse sounds strangely like… Swedish. In fact, what our ears perceive as a single homogeneous sound is really like a small train made up of four different cars: depending on whether it passes us moving forwards or backwards, the cars arrive in a different order, and our perception of the sound changes accordingly.

It’s this idea that is expressed in the notion of the A.D.S.R. curve, also called the envelope curve. A “reverse” preset found on some reverbs manipulates nothing but the reverb envelope. A natural reverb tail is a decreasing sound, as in figure 1a; play it in reverse, and the end arrives before the beginning (figure 1b).
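
For the curious, an envelope like this is easy to sketch in code: below is a minimal piecewise-linear A.D.S.R. built as a NumPy array (all times and levels are arbitrary), and reversing the array is the entire “reverse” trick.

```python
import numpy as np

def adsr(attack_s, decay_s, sustain_level, sustain_s, release_s, sr=44100):
    """Piecewise-linear A.D.S.R. envelope; the four 'cars' of the train."""
    a = np.linspace(0.0, 1.0, int(attack_s * sr), endpoint=False)
    d = np.linspace(1.0, sustain_level, int(decay_s * sr), endpoint=False)
    s = np.full(int(sustain_s * sr), sustain_level)
    r = np.linspace(sustain_level, 0.0, int(release_s * sr))
    return np.concatenate([a, d, s, r])

env = adsr(0.01, 0.10, 0.7, 0.50, 0.30)
reversed_env = env[::-1]   # figure 1b: the peak now arrives at the end
```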

Compression

The second case in which a musician-technician will find themselves manipulating an envelope is the compressor. A compressor usually has envelope adjustments that change the action time of the compression effect (fig. 5).

Depending on the gear, you’ll usually find an Attack adjustment, which sets how quickly the compression kicks in once the signal crosses the threshold. Set it slow and the compression is much more discreet, letting you apply a certain amount of gain reduction without it being too audible (for classical music, for example); on the other hand, sudden peaks with short attacks will escape treatment. A short attack setting lets the compressor react almost instantly, but you’ll hear that typical compressed, “punchy” sound. In today’s music this can be a desirable and interesting effect, if used in moderation.

You can also adjust the Release, which sets the time the compressor takes to bring the gain back to its initial level once the signal drops again. As with the attack, a middle setting is more discreet, easing the level back gently. At the other extreme, a release set to zero can, if the compressor intervenes often, produce a disagreeable pumping effect. (A toy code sketch of these two controls follows the figures below.)

Fig. 5a: Cubase’s standard compressor

Fig. 5b: TC Electronic CL1B plug-in, which models a tube compressor
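
As promised, here’s a toy sketch of what the attack and release knobs do inside the box: they set how quickly the gain computer tracks the signal level. This is a generic feed-forward design with one-pole smoothing, not a model of either plug-in pictured; all parameter values are illustrative.

```python
import numpy as np

def compress(x, sr, threshold_db=-20.0, ratio=4.0,
             attack_ms=10.0, release_ms=100.0):
    """Toy feed-forward compressor with attack/release envelope smoothing."""
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = 0.0
    y = np.empty_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        # Attack coefficient while the level rises, release while it falls:
        # this is exactly what the two front-panel knobs control.
        coeff = atk if level > env else rel
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over_db = max(0.0, level_db - threshold_db)
        gain_db = -over_db * (1.0 - 1.0 / ratio)  # reduce only above threshold
        y[i] = s * 10.0 ** (gain_db / 20.0)
    return y
```

With a long attack the envelope rises slowly, so short peaks slip through untreated; with the release near zero the gain snaps back instantly, which is where the pumping comes from.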

To read the full detailed article see:  Basics of Acoustics: Time

October 6, 2009

Music Notation Basics

Filed under: Instructional articles — audiofanzine @ 8:11 am
Learning to Read Music
It’s never too late to learn how to read music, and with a little practice and tenacity the basics can be picked up quite quickly. And because knowing how to read and write music still has its benefits and uses, no matter what style of music you’re into, this article presents the most important of those basics.

Table Of Contents:

Notes: Duration Values, Tuplets, Beams, Note Names, Octave

Accidentals: Sharp, Flat, Natural, Double Accidentals

The Staff: Ledger Lines

Measures/Bars: Barlines

Clefs: G Clef, F Clef, C Clef

Rests

Time Signatures

Key Signatures

Notes

Notes represent a sound’s duration and pitch. The duration is represented by the type of note head (the oval part of the note) and/or its stem and flag. Pitch is represented by the note’s position on the staff: the higher the note sits on the staff, the higher the pitch, and vice versa.

Duration Values

1 whole note = 2 half notes = 4 quarter notes = 8 eighth notes = 16 sixteenth notes = 32 thirty-second notes = 64 sixty-fourth notes, etc.

In Britain the names for these note values are different: whole note = semibreve, half note = minim, quarter note = crotchet, eighth note = quaver, sixteenth note = semiquaver, thirty-second note = demisemiquaver, sixty-fourth note = hemidemisemiquaver, etc.
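
Since each value is half the one before it, the whole table is just powers of two. A tiny sketch holding both naming systems:

```python
from fractions import Fraction

# Each note value is half the previous one: powers of two all the way down.
NOTE_VALUES = {
    "whole (semibreve)": Fraction(1, 1),
    "half (minim)": Fraction(1, 2),
    "quarter (crotchet)": Fraction(1, 4),
    "eighth (quaver)": Fraction(1, 8),
    "sixteenth (semiquaver)": Fraction(1, 16),
    "thirty-second (demisemiquaver)": Fraction(1, 32),
    "sixty-fourth (hemidemisemiquaver)": Fraction(1, 64),
}

# e.g. how many sixteenths fit in a half note?
print(Fraction(1, 2) / Fraction(1, 16))   # 8
```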

Now let’s explore other elements of music notation…

To read the full detailed article see:  Music Notation Basics

September 30, 2009

Understanding Reverb

When we hear sounds in the “real world,” we hear them in an acoustic space. For example, suppose you are playing acoustic guitar in your living room. You hear not only the guitar’s direct sound, but also the sound waves it generates bouncing off the walls, ceiling, and floor. Some of these sound waves return to your ears, and because of their longer travel through the air, they arrive somewhat delayed compared to the direct sound of the guitar.

The sound resulting from all these reflections is extremely complex, and it is called reverberation. As sound waves bounce off objects, they lose energy, and their level and tone change. If a sound wave hits a pillow or curtain, it will be absorbed more than if it hits a hard surface. High frequencies tend to be absorbed more readily than lower frequencies, so the longer a sound wave travels around, the “duller” it sounds; this is called damping. As another example, a concert hall filled with people sounds different than the same hall when empty, because the people (and their clothing) absorb sound.

Reverberation is important because it gives a sense of space. For live recordings, two or more mics are often set up to pick up the room sound, which can be mixed in with the instrument sounds. Some recording studios have “live” rooms that allow lots of reflections, others have “dead” rooms that have been acoustically treated to reduce reflections to a minimum, and some have “live/dead” rooms with sound-absorbing materials at one end and hard surfaces at the other. Drummers often prefer to record in large, live rooms so there are lots of natural reflections; vocalists frequently record in dead rooms, like vocal booths, then add artificial reverb during mixdown to create a sense of acoustic space.

Whether generated naturally or artificially, reverb has become an essential part of today’s recordings. This article covers artificial reverb – what it offers, and how it works. A companion article covers tips and tricks on how to make the best use of reverb.

Now let’s take a sneak peek into the nitty gritty of reverb…

….

Advanced Parameters II

High and low frequency attenuation. These parameters restrict the frequencies going into the reverb. If your reverb sounds metallic, try reducing the highs starting somewhere in the 4–8 kHz range. Note that many of the great-sounding plate reverbs didn’t have much response above 5 kHz, so don’t worry if your reverb doesn’t provide lots of high-frequency brilliance – it’s not crucial.

Reducing the low frequencies going into the reverb reduces muddiness; try attenuating from 100–200 Hz on down.
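
If you’re curious what those two attenuation controls amount to in code, here’s a minimal sketch assuming simple one-pole filters on the reverb send (real units may use steeper slopes; the default cutoffs just follow the suggestions above):

```python
import numpy as np

def one_pole_coeff(cutoff_hz, sr):
    """Feedback coefficient for a one-pole filter at the given cutoff."""
    return np.exp(-2.0 * np.pi * cutoff_hz / sr)

def trim_reverb_send(x, sr, high_cut_hz=6000.0, low_cut_hz=150.0):
    """Band-limit the signal feeding the reverb: cut the highs and lows."""
    a_hi = one_pole_coeff(high_cut_hz, sr)
    a_lo = one_pole_coeff(low_cut_hz, sr)
    lp = lows = 0.0
    y = np.empty_like(x)
    for i, s in enumerate(x):
        lp = (1.0 - a_hi) * s + a_hi * lp        # high-cut: one-pole lowpass
        lows = (1.0 - a_lo) * lp + a_lo * lows   # track what's below low_cut
        y[i] = lp - lows                         # low-cut: subtract the lows
    return y
```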

Early reflections diffusion (sometimes just called diffusion). Increasing diffusion pushes the early reflections closer together, which thickens the sound. Reducing diffusion produces a sound that tends more toward individual echoes than a wash of sound. For vocals or sustained keyboard sounds (organ, synth), reduced diffusion can give a beautiful reverberant effect that doesn’t overpower the source sound. On the other hand, percussive instruments like drums work better with more diffusion, so there’s a smooth, even decay instead of what can sound like marbles bouncing on a steel plate (at least with inexpensive reverbs). You’ll hear the difference in the following two audio examples.

Audio examples: Maximum Diffusion / No Diffusion

The reverb tail itself may have a separate diffusion control (the same general guidelines apply about setting this), or both diffusion parameters may be combined into a single control.

Early reflections predelay. It takes a few milliseconds before sounds hit the room surfaces and start to produce reflections. This parameter, usually variable from 0 to around 100 ms, simulates this effect. Increase it to give the feeling of a bigger space; for example, if you’ve dialed in a large room size, you’ll probably want to add a reasonable amount of predelay as well.
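
Predelay is about the simplest reverb parameter there is to implement: it’s just silence inserted ahead of the wet signal. A minimal sketch:

```python
import numpy as np

def predelay(wet, sr, ms=40.0):
    """Insert 'ms' milliseconds of silence before the wet (reverb) signal."""
    gap = np.zeros(int(sr * ms / 1000.0))
    return np.concatenate([gap, wet])

# 40 ms corresponds to roughly 45 feet of sound travel
# (0.040 s x ~1,130 ft/s), i.e. the first reflections of a large room.
```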

Reverb density. Lower densities give more space between the reverb’s first reflection and subsequent reflections. Higher densities place these closer together. Generally, I prefer higher densities on percussive content, and lower densities for vocals and sustained sounds.

Early reflections level. This sets the early reflections level compared to the overall reverb decay; balance them so that the early reflections are neither obvious, discrete echoes, nor masked by the decay. Lowering the early reflections level also places the listener further back in the hall, and more toward the middle.

High frequency decay and low frequency decay. Some reverbs have separate decay times for high and low frequencies. These frequencies may be fixed, or there may be an additional crossover parameter that sets the dividing line between low and high frequencies.

These controls have a huge effect on the overall reverb character. Increasing the low frequency decay creates a bigger, more “massive” sound. Increasing high frequency decay gives a more “ethereal” type of effect. With few exceptions this is not the way sound works in nature, but it can sound very good on vocals, as it adds more reverb to sibilants and fricatives while minimizing reverb on plosives and the lower vocal range. The result avoids a “muddy” reverberation that competes with the vocals.
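
Under the hood, frequency-dependent decay typically comes from damping inside the reverb’s feedback paths. Here’s a sketch of a single damped feedback comb filter, the classic algorithmic-reverb building block (the delay time and coefficients are arbitrary, Freeverb-style values); raising damp makes the highs die away faster than the lows:

```python
import numpy as np

def damped_comb(x, sr, delay_ms=29.7, feedback=0.84, damp=0.2):
    """Feedback comb filter with a one-pole lowpass inside the loop,
    so high frequencies decay faster than low frequencies."""
    n = int(sr * delay_ms / 1000.0)
    buf = np.zeros(n)
    idx, lp = 0, 0.0
    y = np.empty_like(x)
    for i, s in enumerate(x):
        out = buf[idx]
        lp = (1.0 - damp) * out + damp * lp   # damping: lowpass the feedback
        buf[idx] = s + feedback * lp
        idx = (idx + 1) % n
        y[i] = out
    return y
```

A full reverb runs several of these in parallel (plus allpass stages for diffusion); separate high and low decay controls effectively expose this damping to the user.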

THE NEXT STEP: APPLYING REVERB

Now that we know how reverb works, we can think about how to apply it to our music – but that requires its own article! So, see the article “Applying Reverb” for more information.

To read the full detailed article see:  Understanding Reverb
