AF’s Weblog

March 2, 2011

Tips for Mixing Rap Vocals

If I had to pick the question I get asked most often, it would have to be “how do I mix rap vocals?” Or some variation thereof. At least once a week, if not more often.

I mix a new rap vocal four or five times a week – much more if you count different rappers on the same song. I have developed an approach – sort of a formula to create a formula. In truth, we know that all songs, vocals, captures, and performances are different. There can never be one formula to mix all vocals effectively. And there are many approaches to conceptualizing a vocal treatment – mine is one of many.

The Concept

It all starts with the concept. I say this time and time again, and it only gets more true as I say it – in order to mix anything, you need an end game. There has to be some kind of idea of where the vocal is going to go before you start getting it there. That idea can and probably will change along the way, but there has to be some direction – or else why do anything at all?

The big problem most people have with mixing rap vocals is that they think of the word “vocals” without considering the word “rap.” Rap is supremely general – there are big differences between 1994 NY style rap vocals, and 2010 LA style rap vocals.

Now let’s have a listen to some mixing samples…

Processing

Now you have the vocals clean (or maybe they came in clean to begin with). It’s time to decide what to do with them. Now, I can’t write how you should or should not process your vocals, but I can give some insight into things to consider and think about.

Balance

Figuring out the relationship between the vocals and other instruments in the same frequency area is extremely important. At its core, Hip Hop is all about the relationship between the vocals and the drums – and the number one competitor with the voice is the snare. Finding a way to make both the vocals and the snare prominent without stepping on each other will make the rest of the mix fall nicely into place.

In “1nce Again,” you’ll notice that the snare is a little louder than the vocals, and seems to be concentrated into the brighter area of the frequency spectrum, while the vocals are just an inch down, and living more in the mid range. This was a conscious decision made in the mix. But mixes like Loungin’ have the vocals on par with the snare. And Massive Attack has the vocals up – but it’s not really a snare, it’s a percussive instrument holding down the 2 and 4 that lives primarily in the lower mid region.

“Air”

Hip Hop vocals generally do not have much in the way of reverb. There are three primary reasons for this. 1) Rap vocals tend to move faster and hold more of a rhythmic function than sung vocals – and long reverb tails can blur the rhythm and articulation. 2) The idea of Hip Hop is to be “up front and in your face,” whereas reverb tends to sink things back in the stereo field. 3) Everyone else is mixing their vocals that way. Not a good reason, but kind of true.

 

However, vocals usually do benefit from a sense of 3-D sculpting, or “air” – a sense of space around the vocals that makes them more lively and vivid. Very short, wide, quiet reverb can really do the trick here. Another good thing to try is using delay (echo) – pushing the delay way into the background, with a lot of high end rolled off of it. This creates the sense of a very deep three-dimensional space, which by contrast makes the vocal seem even more forward. Lastly, if you are in a good tracking situation, carefully bringing out the natural space of the tracking room can be a good way to get super dry vocals with a sense of air around them. Compression with a very slow attack and a relatively quick release, plus a boost to the super-treble range, can often bring out that natural air.
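To make the dark-delay trick above concrete, here is a minimal Python/NumPy sketch: a feedback delay with a one-pole low-pass rolling the top end off the repeats, mixed far below the dry vocal so it reads as depth rather than as an obvious echo. The variable names (vocal, sr) and every parameter value are illustrative assumptions, not settings from the article.

import numpy as np

def dark_delay(vocal, sr, delay_ms=375.0, feedback=0.35, damping=0.8, mix=0.12):
    # Quiet, dark echo behind a dry vocal. `vocal` is a float mono array, `sr` its sample rate.
    delay_samples = int(sr * delay_ms / 1000.0)
    buf = np.zeros(delay_samples)        # circular delay buffer
    out = np.zeros(len(vocal))
    lp_state = 0.0                       # one-pole low-pass state: rolls the highs off the echo
    idx = 0
    for n, x in enumerate(vocal):
        delayed = buf[idx]
        lp_state += (1.0 - damping) * (delayed - lp_state)   # darken the repeat
        out[n] = lp_state
        buf[idx] = x + feedback * lp_state                   # feed the darkened echo back in
        idx = (idx + 1) % delay_samples
    return vocal + mix * out             # the dry vocal stays dominant; the echo sits way back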

Shape & Consistency

A little compression is often nice on vocals, just to sit them into a mix and add a little tone. On a sparse mix, a little dab’ll do ya. The most common mistake people make when processing vocals for Hip Hop is to over-compress. Heavy compression is really only beneficial to a mix when there is a lot of stuff fighting for sonic space. When you read about rappers’ vocals going through four compressors and really getting squeezed, it’s probably because there are tons of things already going on in the mix, and the compression is necessary for the vocals to cut through. Or because it’s a stylistic choice to really crunch the vocals.

Filtering

What’s going on around the voice is just as important to the vocals as the vocals themselves. Carefully picking what to get rid of to help the vocals along is very important. For example, most engineers hi-pass filter almost everything except the kick and bass. That clears up room for the low-end information. But the importance of low-pass filtering is often overlooked. Synths, even bass synths, can have a lot of high-end information that is just not necessary to the mix and leaves the “air” range around the vocals feeling choked. A couple of well-placed low-passes could very well bring your vocals to life.

 

Also, back to the subject of hi-passing: unless you are doing the heavy-handed Bob Power thing, you really don’t need to be hard hi-passing your vocals at 120 Hz. The human voice, male and female, has chest resonance that goes down to 80 Hz (and sometimes even lower). Try a gentle hi-pass at around 70 or 80 Hz to start with if you’re clearing up the vocals. Or maybe no hi-pass at all…
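As an illustration of the filtering moves described above, here is a small sketch using SciPy Butterworth filters: a gentle (low-order) hi-pass around 80 Hz for a vocal, and a low-pass on a synth that is crowding the “air” range. The corner frequencies and the track variable names are assumptions for the example; as always, listen rather than mixing by numbers.

import numpy as np
from scipy.signal import butter, sosfilt

def gentle_highpass(signal, sr, cutoff_hz=80.0, order=2):
    # low-order slope = "gentle"; clears rumble without thinning the chest
    sos = butter(order, cutoff_hz, btype='highpass', fs=sr, output='sos')
    return sosfilt(sos, signal)

def synth_lowpass(signal, sr, cutoff_hz=9000.0, order=2):
    # trims unneeded top end from a synth so it stops choking the vocal's air range
    sos = butter(order, cutoff_hz, btype='lowpass', fs=sr, output='sos')
    return sosfilt(sos, signal)

# vocal = gentle_highpass(vocal, sr)   # hypothetical float arrays at sample rate sr
# pad   = synth_lowpass(pad, sr)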

Presence

Deciding where the vocal lives frequency-wise is important. Mid-heavy, “telephonic” vocals can be cool at times, and low-mid “warm” sounding vocals certainly have their place. Commonly, the practice is to hype the natural presence of the vocals by getting rid of the “throat” tones and proximity build-up, which generally live around the 250–600 Hz range (but don’t mix by numbers – listen, listen, listen). This in turn exaggerates the chest sound and the head sound – particularly the sounds that form at the front of the mouth, tongue, and teeth. These are the tones we use to pronounce our words, and they generally live in the upper midrange (2–5 kHz – again, no numbers, listen, listen, listen).
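For readers who like to see the presence move in code, here is a sketch built on the standard RBJ peaking-EQ biquad: a broad cut in the low-mid “throat”/proximity region and a gentle lift in the upper mids where the consonants live. The center frequencies, gains, and Q values are illustrative assumptions only – starting points, not a recipe.

import numpy as np
from scipy.signal import lfilter

def peaking_eq(signal, sr, freq_hz, gain_db, q=1.0):
    # RBJ "Audio EQ Cookbook" peaking filter
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * freq_hz / sr
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return lfilter(b / a[0], a / a[0], signal)

# vocal = peaking_eq(vocal, sr, freq_hz=400.0, gain_db=-3.0, q=0.8)    # tame throat/proximity build-up
# vocal = peaking_eq(vocal, sr, freq_hz=3500.0, gain_db=+2.0, q=0.9)   # bring the consonants forward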

I think that about covers the basics of what to listen for when working your vocals.

To read the full detailed article with sound samples visit:  Mixing Rap Vocals


December 22, 2010

A Guitarist’s Guide to Multiband Distortion

If you’re a guitarist and you’re not into multiband distortion…well, you should be. Just as multiband compression delivers a smoother, more transparent form of dynamics control, multiband distortion delivers a “dirty” sound like no other.

Not only does it give a smoother effect with guitar, it’s a useful tool for drums, bass, and believe it or not, program material – some people (you know who you are!) have even used it with mastering to add a distinctive, unique “edge.”

As far as I know, the first example of multiband distortion was a do-it-yourself project, the Quadrafuzz, that I wrote up in the mid-’80s for Guitar Player magazine. It remains available from PAiA Electronics (www.paia.com), and is described in the book “Do It Yourself Projects for Guitarists” (BackBeat Books, ISBN #0-87930-359-X).

I came up with the idea because I had heard hex fuzz effects with MIDI guitar, where each string was distorted individually, and liked the sound. But it was almost too clean; at the same time, I wasn’t a fan of all the intermodulation problems of conventional distortion. Multiband distortion was the answer. However, we’ve come a long way since the mid-’80s, and now there are a number of ways to achieve this effect with software.

How it Works

As with multiband compression, the first step is to split the incoming signal into multiple frequency bands (typically three or four). These usually have variable crossover points, so each band can cover a variable frequency range. This is particularly important with drums, as it’s common to have the low band zero in on the kick and distort it a bit, while leaving higher frequencies (cymbals etc.) untouched.

Then, each band is distorted individually (incidentally, this is where major differences show up among units). Each band will also usually have a volume control so you can adjust the relative levels among bands. For example, it’s common to pull back on the highs a bit to avoid “screech,” or boost the upper midrange so the guitar “speaks” a little better.

With guitar, you can hit a power chord and the low strings will have minimal intermodulation with the high strings, or bend a chord’s higher strings without causing beating with the lower ones.
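Here is a rough sketch of that signal flow in Python/NumPy: split the input at two crossover points into three bands, distort each band on its own (simple tanh soft clipping here, standing in for whatever distortion circuit a real unit uses), then give each band its own output level before summing. Crossover points, drive amounts, and band levels are all illustrative assumptions.

import numpy as np
from scipy.signal import butter, sosfilt

def split_bands(signal, sr, low_xover=200.0, high_xover=2000.0, order=4):
    # two crossover points -> three bands (low / mid / high)
    lo  = sosfilt(butter(order, low_xover, 'lowpass', fs=sr, output='sos'), signal)
    mid = sosfilt(butter(order, [low_xover, high_xover], 'bandpass', fs=sr, output='sos'), signal)
    hi  = sosfilt(butter(order, high_xover, 'highpass', fs=sr, output='sos'), signal)
    return lo, mid, hi

def multiband_distortion(signal, sr, drives=(2.0, 4.0, 3.0), levels=(1.0, 1.0, 0.7)):
    out = np.zeros(len(signal))
    for band, drive, level in zip(split_bands(signal, sr), drives, levels):
        out += level * np.tanh(drive * band)   # each band distorted independently, with its own level
    return out

Pulling the high band’s level back (0.7 here) is the “avoid screech” move mentioned above; raising the mid band’s drive is the “make the guitar speak” move.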

Now let’s take a closer look at some plugins…

Rolling Your Own

You’re not constrained to dedicated plug-ins. For example, Native Instruments’ Guitar Rig has enough options to let you create your own multiband distortion. A Crossover module allows splitting a signal into two bands; placing a Split module before two Crossover modules gives the required four bands. Of course, you can go nuts with more splits and create more bands. You can then apply a variety of amp and/or distortion modules to each frequency split.

Yet another option is to copy a track in your DAW as many times as you want bands of distortion. For each track, insert the filter and distortion plug-ins of your choice. One advantage of this approach is that each band can have its own aux send controls, as well as panning. Spreading the various bands from left to right (or all around you, for surround fans!) adds yet another level of satisfying mayhem.

Here a guitar track has been “cloned” three extra times in Sonar, with each instance feeding an EQ and distortion plug-in. These have been adjusted, along with panning, to create multi-band distortion.

And Best of All….

Thanks to today’s fast computers, sound cards, and drivers, you can play guitar through plug-ins in near-real time, so you can tweak away while playing crunchy power chords that rattle the walls. Happy distorting!

To read the full detailed article see:  A Guitarist’s Guide to Multiband Distortion

April 9, 2010

Music Making with a Computer (Part 1)

A computer to make music? Sounds great. Which computer should I get and with what specification? Good question. But first things first: what is a computer and how does it work?

Computers revolutionized the way we work, regardless of what you call work: music production, accounting, management. Can you imagine having to write your CV with a typewriter (carbon copy included) instead of a text editor? Of course you can’t. The same applies to music recording and producing: it’s hard to do it without a computer… You’ll certainly find vintage fundamentalists here and there, but we all have to resign ourselves to the fact that all songs released these days have been processed in one way or another with a computer before they hit the market – even if just because all formats are digital nowadays (CD, MP3; except for the DJ and hi-fi freak vinyl niche market).

It is indeed still possible to record an album with a good, old multitrack recorder, and to enjoy that special sound character a tape provides, but you have to admit that it requires a lot of time and money (service, tapes, etc.), and thus it is an expensive hobby for the rich. Unless you are Jack White or Lenny Kravitz or you have enough money to rent Abbey Road for three months to edit tapes with glue and scissors, you’ll have to make do with a computer to make your music – like 99% of home studio owners and sound engineers.

What’s the purpose? With a suitable interface and software, you can control all sorts of electronic MIDI instruments (synth, sampler, etc.) and virtual instruments, you can record and mix audio with all necessary effects… What’s more, you can save as many variations as you want, repair mistakes and enjoy the wonders of cutting, copying and pasting; live or in the studio. And all of that for a ridiculous price, considering what you had to pay to do the same 30 years ago.

In short: you need a computer! OK, but which one? Mac? PC? With which processor? And what hard drive? How much RAM? But, most importantly, how can I choose from the options available if I don’t know what a CPU is or what RAM does?

Don’t panic! We’ll help you get things straight…

Computer Parts

Regardless of whether you have a Mac or a PC, computers generally work the same, all the more so since Apple started using Intel processors. The difference between these two platforms resides mainly in the operating system (Windows, Mac OS X, Linux, etc.), their design and the software available. It doesn’t matter if you decide to assemble your own computer or buy a pre-assembled model from a given manufacturer – a quick overview of the different parts of a computer will be very useful in order to understand their roles…

CPU

The CPU (Central Processing Unit), or microprocessor, is often compared to the brain of the computer because it manages all calculations. Considering that all data passes through the CPU, its processing power is of utmost importance for the overall performance of the computer. When it comes to audio, for example, it processes a reverb effect while displaying the graphical user interface and managing all other computer instructions (keyboard input, etc.). To use a musical metaphor, you could say it’s the conductor of your computer.

Basically, a CPU is a small silicon square on which millions of transistors are assembled: over 60 million on a Pentium 4 and more than 731 million on the Core i7, thanks to the continuous progress in miniaturization. More transistors in the silicon brain generally provide more power, but the number of transistors is not the only factor: the processor design and its speed also come into consideration.

The faster the CPU, the more calculations it will be able to process in a given time. This speed is measured in gigahertz (GHz). When a CPU is clocked at 2 GHz, it means it can process two billion cycles per second. But what is a cycle? Good question! To keep it short and simple, let’s say that a cycle is a basic calculation, like adding two numbers. Multiplying two numbers takes several cycles, and dividing them takes even more. Why? Because a CPU is extremely limited compared to the human brain. But it is extremely fast, so the user doesn’t really notice, which gives the impression that the machine is more intelligent than it is.

But keep in mind that this clock frequency is only theoretical because, in real life, our processors rarely work at full capacity. Why? Because they are slowed down by other components like RAM (Random-Access Memory). Furthermore, increasing CPU frequency is neither the only nor the simplest way to increase a computer’s power. In fact, the latest CPU generations have improved their architecture by implementing multi-core processors.

A multi-core CPU is a chip including several processors connected in parallel. You can find dual-core (two cores), quad-core (four cores) and even octo-core CPUs (eight cores).

By using this technology, it is now possible to improve the processing power without increasing the CPU clock, thus avoiding heat generation problems due to higher speeds.

Nowadays, these types of CPUs (mainly dual-core and quad-core) are mounted in all computers on the market regardless of whether it is a Mac or a PC.

Now let’s take a look at some other parts…

Conclusion

Now that you have been educated on the basic parts of a computer and what they do, in the next article we will deal with the specific setup choices available to have a computer ready for making music. And you can trust us, there are plenty of choices…

To read the full detailed article see:  Making Music with a Computer

October 14, 2009

Basics of Acoustics: Time (I)

Filed under: Instructional articles — audiofanzine @ 6:01 am

Time (1)

With stopwatch in hand, our perception of time seems straightforward. But in everyday life we’re not always watching the clock, and everyone knows that the passage of time is relative. It differs from one person to another and especially from one activity to another: An hour spent watching a great movie doesn’t feel as long as an hour in traffic.

Fig. 1a: Sound signal with reverb. Note the progressive attenuation of the sound level.

Scientists may conceive of time in seconds, but most musicians feel it in a more fluctuating manner: whether in the speeding up of a tempo or a slightly off-pitch note due to stress. In fact, pitch, which is defined by frequency, is a value linked to time and depends on our perception of a second. If that second seems longer or shorter, the note can seem sharp or flat.

It’s said that during the Middle Ages, long before the invention of the metronome in 1816, a person’s pulse was used as the reference. It was therefore better to choose a musician who was calm.

Fig. 1b: The same signal, through a reverse effect: maximum gain is at the end of the signal.

An Experiment

For those of you who can remember magnetic tape, a piano note played in reverse doesn’t sound at all like a piano, and a verse of Shakespeare in reverse sounds strangely like…Swedish. In fact, what our ears perceive as a single homogenous sound is really like a small train made up of four different cars: if we watch it as it moves forwards or backwards, the order of arrival won’t be the same and therefore our perception of the sound will be different.

It’s this idea that’s expressed through the notion of the ADSR curve, also called the envelope curve. A “reverse” preset found on some reverbs manipulates nothing but the reverb envelope. The reverb will normally be a decreasing sound and look like figure 1a. If it’s played in reverse, the end will therefore be played before the beginning (figure 1b).
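A tiny sketch of that idea in code: take a signal whose envelope decays, like a reverb tail, and play it backwards, so that the loudest part arrives last (as in figure 1b). The decaying noise burst below just stands in for any decaying sound; the numbers are arbitrary.

import numpy as np

sr = 44100
envelope = np.exp(-np.linspace(0.0, 6.0, sr))   # decreasing envelope, like fig. 1a
tail = np.random.randn(sr) * envelope           # a decaying "reverb tail"
reversed_tail = tail[::-1]                      # played backwards: maximum level at the end (fig. 1b)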

Compression

The second case in which a musician-technician might find themselves having to manipulate an envelope is the compressor. A compressor usually has envelope adjustments that change the action time of the compression effect (fig. 5).

Depending on the gear, you’ll usually find an Attack adjustment, which corresponds to the time the compression takes to kick in once the signal crosses the compression threshold. By setting it slow, the compression will be much more discreet, letting you apply a certain amount of compression without it being too noticeable (for classical music, for example). On the other hand, all sudden peaks corresponding to short attacks will escape treatment. A short Attack setting will allow the compressor to react instantly, but that typical compressed, “punchy” sound will be heard. In today’s music this can be a desired and interesting effect, if used with moderation. You can also modify the Release, which adjusts the time it takes the compressor to bring the gain back to its initial level. As with the attack setting, a middle value will be more discreet and more delicate in bringing the level back down. At the other extreme, a release set to zero can, if the compressor intervenes often, give a disagreeable pumping (“wave”) effect.

Fig. 5a: Cubase’s standard compressor

Fig. 5b: TC Electronic’s CL1B plug-in, which models a tube compressor
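For the curious, here is a bare-bones sketch of how the Attack and Release settings act on the gain: an envelope follower whose rise speed is set by the attack time and whose fall speed by the release time, driving a simple hard-knee gain computer. This is a teaching illustration, not the algorithm of the plug-ins pictured; the threshold and ratio values are illustrative assumptions.

import numpy as np

def compress(signal, sr, threshold_db=-18.0, ratio=4.0, attack_ms=10.0, release_ms=120.0):
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))    # longer attack -> slower reaction to rising level
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))   # longer release -> slower return to unity gain
    env = 0.0
    out = np.empty(len(signal))
    for n, x in enumerate(signal):
        level = abs(x)
        coeff = atk if level > env else rel           # attack when the signal rises, release when it falls
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over = level_db - threshold_db
        gain_db = -over * (1.0 - 1.0 / ratio) if over > 0.0 else 0.0
        out[n] = x * 10 ** (gain_db / 20.0)
    return out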

To read the full detailed article see:  Basics of Acoustics: Time

October 6, 2009

Music Notation Basics

Filed under: Instructional articles — audiofanzine @ 8:11 am

Learning to Read Music

It’s never too late to learn how to read music, and with a little practice and tenacity, the basics can be picked up quite quickly. And because knowing how to read and write music still has its benefits and uses, no matter what style of music you’re into, this article presents the most important of those basics.

Table Of Contents:

Notes: Duration Values, Tuplets, Beams, Note Names, Octave

Accidentals: Sharp, Flat, Natural, Double Accidentals

The Staff: Ledger Lines

Measures/Bars: Barlines

Clefs: G Clef, F Clef, C Clef

Rests

Time Signatures

Key Signatures

Notes


Notes represent a sound’s duration and pitch. The duration is represented by the type of note head (the oval part of a note) and/or its stem and flag. Pitch is represented by the note’s position on the staff. The higher the note on the staff, the higher the pitch, and vice versa.

Duration Values

1 whole note = 2 half notes = 4 quarter notes = 8 eighth notes = 16 sixteenth notes = 32 thirty-second notes = 64 sixty-fourth notes, etc.

In Britain the names for these note values are different:

whole note = semibreve, half note = minim, quarter note = crotchet, eighth note = quaver, sixteenth note = semiquaver, thirty-second note = demisemiquaver, sixty-fourth note = hemidemisemiquaver, etc.
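Those relationships can be summed up in a small lookup table; the number in the last column is the fraction of a whole note that each value lasts.

NOTE_VALUES = {
    # American name        (British name,          fraction of a whole note)
    'whole note':          ('semibreve',           1),
    'half note':           ('minim',               1 / 2),
    'quarter note':        ('crotchet',            1 / 4),
    'eighth note':         ('quaver',              1 / 8),
    'sixteenth note':      ('semiquaver',          1 / 16),
    'thirty-second note':  ('demisemiquaver',      1 / 32),
    'sixty-fourth note':   ('hemidemisemiquaver',  1 / 64),
}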

Now let’s explore other elements of music notation…

To read the full detailed article see:  Music Notation Basics

September 30, 2009

Understanding Reverb

When we hear sounds in the “real world,” they are in an acoustic space. For example, suppose you are playing acoustic guitar in your living room. You hear not only the guitar’s direct sound, but also the sound waves it generates bouncing off the walls, the ceiling, and the floor. Some of these sound waves return to your ears and, because of the time spent traveling through the air, arrive somewhat delayed compared to the direct sound of the guitar.

The resulting sound from all these reflections is extremely complex and is called reverberation. As the sound waves bounce off objects, they lose energy, and their level and tone change. If a sound wave hits a pillow or curtain, it will be absorbed more than if it hits a hard surface. High frequencies tend to be absorbed more easily than lower frequencies, so the longer a sound wave travels around, the “duller” its sound becomes. This is called damping. As another example, a concert hall filled with people will sound different than when the hall is empty, because the people (and their clothing) absorb sound.

Reverberation is important because it gives a sense of space. For live recordings, there are often two or more mics set up to pick up the room sound, which can be mixed in with the instrument sounds. Some recording studios have “live” rooms that allow lots of reflections, others have “dead” rooms that have been acoustically treated to reduce reflections to a minimum – or “live/dead” rooms, which may have sound-absorbing materials at one end and hard surfaces at the other. Drummers often prefer to record in large, live rooms so there are lots of natural reflections; vocalists frequently record in dead rooms, like vocal booths, then add artificial reverb during mixdown to create a sense of acoustic space.

Whether generated naturally or artificially, reverb has become an essential part of today’s recordings. This article covers artificial reverb – what it offers, and how it works. A companion article covers tips and tricks on how to make the best use of reverb.

Now let’s take a sneak peek into the nitty-gritty of reverb…

….

Advanced Parameters II

High and low frequency attenuation. These parameters restrict the frequencies going into the reverb. If your reverb sounds metallic, try reducing the highs starting at 4–8 kHz. Note that many of the great-sounding plate reverbs didn’t have much response above 5 kHz, so don’t worry if your reverb doesn’t provide a lot of high-frequency brilliance – it’s not crucial.

Reducing low frequencies going into the reverb reduces muddiness; try attenuating from 100–200 Hz on down.

Early reflections diffusion (sometimes just called diffusion). Increasing diffusion pushes the early reflections closer together, which thickens the sound. Reducing diffusion produces a sound that tends more toward individual echoes than a wash of sound. For vocals or sustained keyboard sounds (organ, synth), reduced diffusion can give a beautiful reverberant effect that doesn’t overpower the source sound. On the other hand, percussive instruments like drums work better with more diffusion, so there’s a smooth, even decay instead of what can sound like marbles bouncing on a steel plate (at least with inexpensive reverbs). You’ll hear the difference in the following two audio examples.

Audio examples: Maximum Diffusion / No Diffusion

The reverb tail itself may have a separate diffusion control (the same general guidelines apply about setting this), or both diffusion parameters may be combined into a single control.

Early reflections predelay. It takes a few milliseconds before sounds hit the room surfaces and start to produce reflections. This parameter, usually variable from 0 to around 100ms, simulates this effect. Increase the parameter’s duration to give the feeling of a bigger space; for example, if you’ve dialed in a large room size, you’ll probably want to add a reasonable amount of pre-delay as well.

Reverb density. Lower densities give more space between the reverb’s first reflection and subsequent reflections. Higher densities place these closer together. Generally, I prefer higher densities on percussive content, and lower densities for vocals and sustained sounds.

Early reflections level. This sets the early reflections level compared to the overall reverb decay; balance them so that the early reflections are neither obvious, discrete echoes, nor masked by the decay. Lowering the early reflections level also places the listener further back in the hall, and more toward the middle.

High frequency decay and low frequency decay. Some reverbs have separate decay times for high and low frequencies. These frequencies may be fixed, or there may be an additional crossover parameter that sets the dividing line between low and high frequencies.

These controls have a huge effect on the overall reverb character. Increasing the low frequency decay creates a bigger, more “massive” sound. Increasing high frequency decay gives a more “ethereal” type of effect. With few exceptions this is not the way sound works in nature, but it can sound very good on vocals, as it adds more reverb to sibilants and fricatives while minimizing reverb on plosives and lower vocal ranges. This avoids a “muddy” reverberation effect and keeps the reverb from competing with the vocals.
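To relate a few of these parameters to what actually happens to the signal, here is a toy Python sketch: pre-delay is simply a delay placed in front of the reverb, and frequency-dependent decay comes from a damping low-pass inside the feedback loop, so the highs die away faster than the lows. This is a bare parallel-comb illustration, not any plug-in’s actual algorithm; every value is an illustrative assumption.

import numpy as np

def toy_reverb(signal, sr, predelay_ms=20.0, decay=0.75, damping=0.4,
               comb_delays_ms=(29.7, 37.1, 41.1, 43.7)):
    # pre-delay: silence inserted before the reverb starts
    pre = np.concatenate([np.zeros(int(sr * predelay_ms / 1000.0)), signal])
    wet = np.zeros(len(pre))
    for d_ms in comb_delays_ms:                    # a few parallel feedback combs
        d = int(sr * d_ms / 1000.0)
        buf = np.zeros(d)
        lp = 0.0
        idx = 0
        for n, x in enumerate(pre):
            y = buf[idx]
            lp += (1.0 - damping) * (y - lp)       # damping low-pass in the loop: highs decay faster
            buf[idx] = x + decay * lp
            wet[n] += y / len(comb_delays_ms)
            idx = (idx + 1) % d
    return wet                                     # wet only; blend with the dry track to taste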

THE NEXT STEP: APPLYING REVERB

Now that we know how reverb works, we can think about how to apply it to our music – but that requires its own article! So, see the article “Applying Reverb” for more information.

To read the full detailed article see:  Understanding Reverb

September 25, 2009

Spice it up! An Introduction To Modes

Filed under: Instructional articles — audiofanzine @ 8:24 am

Context!!!

The aim of this article is to serve as a simple and straightforward introduction, or re-introduction, to the wonderful, but sometimes confusing world of modes.

There are quite a few misconceptions about modes and how they work. Much of the confusion comes from the word “mode” itself, since it implies more of a reference to another scale than an actual scale in its own right. We’ve all heard, or read, and half understood, that modes are based on this or that scale (usually the major scale), and that all you need to do is play from a certain degree (note) up or down the scale one octave to the same note and you get the mode in question. And, of course, when you tried it, you didn’t hear any difference so you gave up!

The problem with this over-simplification (though technically it is true) is that it overlooks the most important aspect of modes and possibly of music itself: context! If you are playing over a C drone or C major chord progression, and your ear hears C major, you can play E Phrygian or F Lydian (two modes “based” on the C major scale) until you’re blue in the face, but you’ll never hear anything but a C major sound (see Ex. 1)! Context is everything: if you play, over that same C drone, a C Lydian or C Phrygian scale (mode), then you definitely will hear a change and a different flavor (see Ex. 2). Sometimes the flavor change is slight and sometimes it’s radical!

Ex. 1: C Major, E Phrygian, F Lydian over a C drone: the C major sound is unbroken even though E Phrygian and F Lydian are being played

Ex. 2: C Phrygian, C Mixolydian, and C Lydian over a C drone: you should hear three distinct flavors
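A small sketch makes the point about context concrete: every diatonic mode is a rotation of the major scale’s whole/half-step pattern, but the notes you get depend on the root you build it from. E Phrygian derived from C major yields exactly the notes of C major (hence no change of flavor over a C drone), while C Lydian built from its own root changes a single note – the #4. The note spelling below uses sharps only, for simplicity.

NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]   # whole/half-step pattern of the major (Ionian) scale
MODES = ['Ionian', 'Dorian', 'Phrygian', 'Lydian', 'Mixolydian', 'Aeolian', 'Locrian']

def mode_notes(root, mode):
    # rotate the major-scale step pattern to the chosen mode, then walk it up from the root
    shift = MODES.index(mode)
    steps = MAJOR_STEPS[shift:] + MAJOR_STEPS[:shift]
    pitch = NOTE_NAMES.index(root)
    notes = [root]
    for step in steps[:-1]:
        pitch = (pitch + step) % 12
        notes.append(NOTE_NAMES[pitch])
    return notes

print(mode_notes('E', 'Phrygian'))   # ['E', 'F', 'G', 'A', 'B', 'C', 'D'] – the same seven notes as C major
print(mode_notes('C', 'Lydian'))     # ['C', 'D', 'E', 'F#', 'G', 'A', 'B'] – one note changed: the #4
print(mode_notes('C', 'Phrygian'))   # ['C', 'C#', 'D#', 'F', 'G', 'G#', 'A#'] (i.e. C Db Eb F G Ab Bb)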

Static versus Changing Harmonies

In this article we won’t be dealing with modes in the context of jazz or changing harmonies. We’ll be concentrating on static or “modal” harmonies. This means that even though there may be more than one chord, the harmony, or mode, or key center will stay the same (or in the case of a drone: neutral).

The reason for this is that in non-modal jazz there’s usually a quick succession of chords and changing key centers, and modes in this context just fly by, making it difficult to feel or hear any kind of flavor or get any kind of appreciation for the mode. Plus, modes in this traditional jazz context are often just a means of playing the right notes (“playing in”) or purposely playing dissonant or “wrong” notes over a given chord.

By taking our time and playing over static modal harmonies, or just a drone, we’ll be able to hear and eventually recognize the different flavors of each mode.

Scale or Mode?

I won’t be making any distinction between scale and mode because they are virtually the same thing. For all intents and purposes, any mode is also a scale (sort of like the particle/wave duality of light), and any scale could be considered a mode of another related scale. For the moment, the goal is to try and simplify things and cut away some of the jargon.

Now let’s take a deeper look…

Conclusion

The modes presented here are just 3 of the 7 diatonic modes. Two of the other 4 should already be familiar to you: Ionian, which is nothing more or less than the major scale, and Aeolian, which is the natural minor scale. This leaves Dorian, which is very similar to the natural minor scale, and Locrian, which is probably one of the least used scales/modes in music (except maybe to solo over certain chords).

Spice your music!

For the moment, you should concentrate on these three modes, and make sure you learn and hear them well before moving on to other stuff. Just like with other aspects of music, you need to build strong foundations. Learning too many scales at a time will only ensure that you play none of them well. Try to remember that every mode has its distinctive flavor, and it’s usually just one or two notes (intervals) that create that distinctive flavor. It’s these notes that you should try to recognize. For example, if you listen to Sting’s “When We Dance,” at first it just sounds like a basic major-scale sound, but then he sings that #4 and everything just changes. Just that one note gives the whole song a different feel and flavor. And this is important: each mode has a different spice or flavor to it, and they often have an effect on our emotions. Movie composers know that well, and have been using changing modes to play with or heighten our emotions since the beginning of cinema.

If you decide to jump ahead and look at other modes, don’t forget about context! It’s good to know what scale each mode is derived from, but remember that if you’re playing mode X (related to or derived from mode Y), you should be hearing an X tonality (try a drone on X), and not Y. If you’re hearing Y as the tonal center while trying to play mode X, then you’re just wasting your time.

As stated before, listening and recognizing are crucial. Record yourself playing different modes and see if you can tell which one is being played and where the characteristic notes are. Also, a good way of seeing if you have understood something is to try to explain it to others. So go and find someone patient (preferably a musician) and see if you can teach them what you’ve learned.

To read the full detailed article see:  An Introduction to Modes

April 13, 2009

Audio Encoding: What Lies Ahead?

Introduction to Audio Encoding

In today’s high definition world, we want the best quality in every film, picture, and sound we encounter. Blu-ray discs seem poised to take over the market, pushing the inferior DVD down the same path as the VHS tape. In much the same way, we see digital audio pushing for the same quality. Although the popularity of the iPod and MP3s has given rise to an age of over-compressed, low-quality music, we also see a rise in vinyl sales, as well as developments such as Sony’s Super Audio CD (SACD), which implements a relatively new process to encode audio, paving the way for a massive change in the quality of the music we listen to – if it is accepted.

Analog vs. Digital

In order to understand audio encoding, the difference between analog and digital must be understood. An analog signal is uninterrupted and continuous, like a pure, natural sound. The human voice, a guitar, and a vinyl record are all examples of analog sound. When a vinyl record is cut, a needle senses the vibrations from an audio source and cuts them directly into the vinyl. This is why vinyl could be said to have the highest sound quality, and why it is still popular today, despite many alternatives and developments. An analog signal can encompass all frequencies, even inaudible ones. This is why a live orchestra sounds more “full” than a recording, even one of the highest quality. Audiophiles will argue that the energy of the inaudible frequencies adds to the quality of sound, even though they cannot be perceived by the ear.


A digital signal is a replication of an audio signal by a series of ones and zeros, in the same way a picture on a computer screen is replicated by thousands of intensity values represented in binary code. The music on an iPod, a compact disc, and an MP3 are examples of digital replication. Even many modern musical instruments have implemented digital sound, from digital keyboards to electronic drums to guitar pedals. Digital sounds can be made with a programmable chip, rather than a circuit, and are much more reliable, inexpensive, and easy to mass produce as a result. However, there are drawbacks to digital sound that sacrifice sound quality, and digital technology is constantly being developed and upgraded to replicate an analog signal more precisely.
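To make “ones and zeros” concrete, here is a minimal sketch of the scheme used on compact discs (PCM): sample a sine wave 44,100 times per second and round each sample to a 16-bit integer. The 1 kHz frequency and the printed samples are purely illustrative.

import numpy as np

SR = 44100                                     # samples per second (CD rate)
BITS = 16                                      # bits per sample (CD word length)

t = np.arange(SR) / SR                         # one second of sample times
analog = np.sin(2 * np.pi * 1000 * t)          # the "analog" waveform, in the range -1..1
scale = 2 ** (BITS - 1) - 1
digital = np.round(analog * scale).astype(np.int16)   # 16-bit PCM samples

print(digital[:5])                             # the first few samples as integers...
print(np.binary_repr(int(digital[1]), width=BITS))    # ...and one of them as ones and zeros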

Now let’s take a closer look…

….

Conclusion

Thus, 1-Bit modulation has been implemented in many new “high definition” audio devices, and developers continue to use this process to expand into multi-bit modulators and other hybrid converters. It has become accepted among audiophiles, and is slowly taking the place of PCM. Is one better than the other? It is still debatable. However, 1-Bit modulation allows for simpler circuitry and much better noise shaping in lower frequency bands. The SNR is much better than PCM, except in the higher frequency range, where much of the noise is inaudible anyway. The design is simpler, using more digital implementation than PCM, and as programming advances, digital functions like noise shaping will be enhanced.
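As an illustration of the 1-bit principle, here is a textbook first-order delta-sigma modulator: the output is only ever +1 or -1, but the density of +1s tracks the input, and feeding the quantization error back through an integrator pushes the noise toward high frequencies – the noise shaping mentioned above. This is a teaching sketch, not any product’s converter design; the oversampling rate and test signal are assumptions.

import numpy as np

def delta_sigma_1bit(signal):
    integrator = 0.0
    y_prev = 0.0
    bits = np.empty(len(signal))
    for n, x in enumerate(signal):
        integrator += x - y_prev                  # accumulate the error between input and last output bit
        y_prev = 1.0 if integrator >= 0.0 else -1.0
        bits[n] = y_prev
    return bits

sr = 64 * 44100                                   # heavy oversampling, as in real 1-bit converters
t = np.arange(4096) / sr
x = 0.5 * np.sin(2 * np.pi * 1000 * t)            # a quiet 1 kHz sine as test input
bits = delta_sigma_1bit(x)
print(bits[:16])                                  # a stream of +/-1 whose short-term average follows the sine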

Once 1-Bit modulation starts to become affordable, consumers will begin to realize the poor quality of MP3s. Steps will be made to expand 1-Bit audio into the portable market, and “high definition” audio will become the norm. Until then, only the select few who have heard the difference will know how much better sound quality can be, and they will strive to educate the rest.

To read the full detailed article see Audio Encoding

August 26, 2008

Sound Techniques: Basics of Acoustics

Filed under: Instructional articles — audiofanzine @ 3:49 pm

With stopwatch in hand, our perception of time seems straightforward. But in everyday life we’re not always watching the clock, and everyone knows that the passage of time is relative. It differs from one person to another and especially from one activity to another: An hour spent watching a great movie doesn’t feel as long as an hour in traffic.

Read the article about basics of acoustics on Audiofanzine.
