AF’s Weblog

October 17, 2012

The Importance of Space in a Mix: Part I

Filed under: Mixing reviews — audiofanzine @ 12:26 pm

To read the full detailed article see:  The Importance of Space in a Mix: Part 1

We spend a great deal of time considering individual sounds in a space. We ascribe attributes to the instruments and the players in order to organize our thoughts about the sounds and how they blend. We may say a singer is “mid-rangy,” a snare is “ringy,” or perhaps the acoustic guitar is “warm.” We do the same for microphones, pre-amps, compressors, and so on. It is surprising how little time is spent considering the sound of the rooms, reverbs, delays, and whatever other spaces coexist within our mix. Considering that sound is defined by air vibrating within a space, one would think the room would be held in equal importance to whatever is resonating in it. And yet, when entering a new space, how often do we consider its sonic characteristics? And more to the point, when building a mix, how often do we think of the space (or spaces) as a sonic element in its own right?

Perhaps more often than we realize. After all, why do we spend so much time rolling through reverb presets trying to find the perfect one – when we seldom know in advance what the right one will be? And why does a plate sound good one time, but a hall sound better the next? Something instinctive is motivating these decisions. As with any sound source, we are, on some fundamental level, listening for – and striving for – tone, rhythm, and coherence.

Reverb

The purpose of having customizable reverb is to find the one that perfectly complements the sound source – or the surrounding sound sources. We can pick and choose a reverb with a certain sound that highlights the tones or rhythms in our mix. And frequently, we will send multiple sound sources to the same reverb for the sake of coherence.

The complication comes in when there are multiple spaces present in the mix. After all, how can one element exist in two spaces at once? Or three? Or, why is it that the choir sounds like it’s in a church while the lead vocalist sounds like he or she is in a concert hall?

Sonic Cues for the Listener

Of course, the end listener is not listening on such a discerning level. The end listener is only picking up on subtle sonic cues that either indicate the sound is coherent or disjointed. So our task is to lead the listener’s ear where we want it to go. Do we want a unified sense of space, or something surreal?

That’s our job as the artist, producer, or engineer: to orchestrate all the sounds and consider what feelings and emotions they evoke. The key word here is “orchestrate.” A random piling of sounds will certainly sound “unmixed” or, perhaps more importantly, “ineffective.” Reverb and space are no exception.

Listening for Spatial Characteristics

The first step in understanding and sussing out any mix is listening. When listening to the drums, bass, vocals, strings, and so on, we should also make a point of listening to the space in the capture. If you’re not used to listening for space, a compressor used as a listening tool – fast attack, fast release, and a low threshold – will exaggerate the room sound in the capture. Everything has spatial characteristics. A DI’d bass has no room sound – but that, too, is a spatial characteristic and must be considered. After all, if everything is close-miked in isolation rooms, or DI’d, the capture is going to come out very dry, for better or worse (usually worse).
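If you want to hear what that listening trick does, below is a minimal sketch of the idea in Python: a crude feed-forward compressor with a fast envelope, a low threshold, and heavy make-up gain. The parameter values and function name are my own illustration rather than settings from the article, and it is meant as an ear-training tool, not a mixing move.

```python
import numpy as np

def exaggerate_room(x, sr, threshold_db=-40.0, ratio=8.0,
                    attack_ms=0.5, release_ms=30.0, makeup_db=18.0):
    """Squash a mono float signal hard so the quiet room sound comes forward."""
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env, y = 0.0, np.zeros_like(x)
    for n, s in enumerate(x):
        level = abs(s)
        coeff = atk if level > env else rel          # fast attack, fast release
        env = coeff * env + (1.0 - coeff) * level    # envelope follower
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over_db = max(level_db - threshold_db, 0.0)  # amount above the low threshold
        gr_db = over_db * (1.0 - 1.0 / ratio)        # gain reduction at the set ratio
        y[n] = s * 10.0 ** ((makeup_db - gr_db) / 20.0)
    return y
```

Solo a track, run it through something like this, and the reverb tails and room reflections jump out. The exaggerated version is just for listening, not for printing.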

While listening to spatial sound, we are inherently listening to the front-to-back sound field. A DI’d bass is going to sound extremely forward, while a drum kit miked from thirty feet away will naturally sound way back. This is a major advantage when organizing the image of our mix – the tracks can be recorded strategically to do the front-to-back work for us.

Tonal Cues

The trickier part of listening to space is the tonal cues. This is an immensely complex task, but it can effectively be boiled down to frequency response and “texture.” That, in turn, can be reduced to an even more fundamental question: are the room sounds complementing each other or clashing? A bright, open Lex PCM 96 Hall reverb might sound fantastic on vocals, but if the acoustic guitar was recorded in a dark-sounding, dense room, the two space sounds will clash (or at least sound incoherent). While every mix is different, by and large this combination will yield something that sounds “unmixed.”

Mix the Ambience

A brilliant colleague of mine named Gregory Scott turned me on to a unique but supremely effective concept. He said that one of the fastest ways to improve one’s mix is to “mix the ambience.” I’ve taken this to mean mixing not just with the space sound(s) in mind, but actually taking the time to bring all your room mics, reverbs, and delays up front, or into a group solo, and mixing them. Get the plate slap on the snare sounding like it belongs with the room capture on the guitar. Or – if you have a surreal space – make sure it’s orchestrated in a way where the entire sense of space is working in the mix, or the focus of the space moves in an evocative way (more on this in the next article). Once all the ambience tracks are mixed, start bringing in the elements that have the most space in them – drum overheads and mid-distant strings, for example – and focus specifically on their space and how it sits with the other spaces.

Tools for Mixing the Space

As with all facets of mixing and recording, the source sounds are paramount.

Choosing the best reverb(s) for the job up front will ultimately determine the end result. So, even before we get into mixing the space, let’s talk about sound selection. In a musical piece, we can treat the reverb like any other sound source, with four basic components: rhythm, volume, tone, and texture.

Rhythm

One of the key elements of any reverb is its decay. The length of the tail is often an indication of the expanse of the space. However, it also determines how long the reflections sustain in the mix – and that’s a rhythmic concern.

A long, sustaining tail in a fast or rhythmically complex piece is going to mask elements of the mix and generally slur the overall rhythm. A quickly decaying tail in a slow piece, on the other hand, will leave a lot of empty space with very little impact from the reverb. Find a tail length that complements the speed of the piece.
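As a rough starting point (my own rule of thumb, not a prescription from the article), you can time the tail so it dies away around the next bar line and then tune by ear:

```python
# Time the reverb tail to roughly one bar, minus the pre-delay, then adjust by ear.
# All values here are illustrative assumptions.
def tail_seconds(bpm, beats_per_bar=4, pre_delay_ms=20.0):
    beat = 60.0 / bpm                   # one beat, in seconds
    bar = beat * beats_per_bar          # one bar, in seconds
    return bar - pre_delay_ms / 1000.0  # decay that lands near the next bar line

print(round(tail_seconds(128), 2))      # ~1.86 s for a 4/4 piece at 128 BPM
```

Faster or busier material usually wants something shorter than a full bar; slow ballads can take more.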

Pre-Delay

Another rhythmic consideration is the length of the pre-delay. Pre-delay is a key element in determining the front-to-back relationship between the dry sound and the space it exists in. In other words, pre-delay helps the ear recognize how close or far away the dry sound is. Generally speaking, the longer the pre-delay, the closer the dry sound. A zero-millisecond pre-delay means that the reflections and the dry sound reach the ear simultaneously – which pushes the dry sound far away. This acoustic phenomenon could be an article all to itself, but we’ll leave it at that for now.

Pre-delay is also a rhythmic element – it determines a span of time between the initial dry sound and the arrival of the early reflections. Anything within the Haas zone (10 ms or less) isn’t going to have much effect on the rhythmic sense of the sound. Once you get up to 20 ms and greater, the slap-back effect becomes distinct and there is a clear rhythmic effect. Find a pre-delay that complements the speed or rhythm of the piece.
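One practical way to pick a value (my own habit, not something the article mandates) is to derive the pre-delay from a musical subdivision of the tempo and then check whether it has left the Haas zone and become a rhythmic event:

```python
# Convert a note subdivision at the song's tempo into a pre-delay time in ms.
# The 1/64-note choice and the 20 ms "rhythmic" boundary are my assumptions.
def pre_delay_ms(bpm, subdivision=1/64):
    quarter_ms = 60000.0 / bpm                 # quarter note, in milliseconds
    return quarter_ms * (subdivision / 0.25)   # scale from quarter note to subdivision

pd = pre_delay_ms(90)                          # a 1/64 note at 90 BPM
print(round(pd, 1), "ms ->", "clearly rhythmic" if pd > 20 else "inside the Haas zone")
# 41.7 ms -> clearly rhythmic
```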

Lastly, some reverbs (particularly room- and hall-style reverbs) have a rhythmic gap between the early reflections and the late reflections. This is not always controllable, but listening for that “bulky” moment in the reverb sound is very important when selecting a reverb. Oftentimes, plates are a good choice for drums partly because there are no distinct “early” or “late” reflections – eliminating that particular rhythmic concern.

Volume

Generally, when I’m mixing, I prefer just enough reverb to add a little life to the elements in the mix. Often, I’m setting my reverb returns 15 or 20 dB lower than my dry elements. However, this isn’t to say reverbs can’t come to the foreground. It’s a very important aesthetic decision. Just remember that whether the reverbs are subtle or prominent, they still need to sound right.

Tone and Texture

This is where we get into the gritty stuff. There are many factors in determining the tone and texture of a reverb.

First comes the style of the algorithm or convolution, then the three “D”s: diffusion, density, and damping….

Conclusion

This is definitely a lot of information to absorb (pun completely intended). Read, re-read, and play with different settings on your reverb units, and note the results.

Be discerning – the rhythmic, tonal, and textural choices are the equivalent of choosing guitar amp settings or drum tunings. If they’re chosen wisely, the mix will be easy; if not, you’re in for an uphill battle.

To read the full detailed article see:  The Importance of Space in a Mix: Part 1

 

 


April 4, 2012

Tips for Mixing Toward Loudness

Filed under: Mixing reviews — audiofanzine @ 7:25 am

To read the full detailed article see: Mixing Toward Loudness

Some people want their music really loud, and there’s nothing wrong with that. If loudness is part of their aesthetic and the audience likes it, then I say let’s go for it. But in order to deliver the most musically effective loudness, that goal has to be addressed in the mixing process – though not as directly as you might think.

It’s important to remember that there are mix masters, and then there are replication or download masters. Your project isn’t finished until it has been mastered, so the relative loudness of a mix does not represent the final level of the project. Comparing the loudness of a mix master with a finished commercial CD is not particularly useful.

However, there are a lot of aspects of mixes that directly contribute to the eventual loudness of a finished master. So what should you be listening for while you’re mixing? Here’s an example scenario:

My client has brought me a set of 5 multi-track recordings to mix. The client is very concerned that her project should fit in with the latest release from Artist X as much as possible, including being equally loud.

Here are some things I would be sure to pay attention to while mixing her project:

The Loudest Instrument

What is the loudest instrument in Artist X’s mixes? 

The answer is probably pretty consistent across the whole CD, and I’ll be sure to use a similar approach with my client’s project.

This may not seem like a pivotal factor, but the relative loudness relationships within a mix establish a lot about the eventual absolute volume of the mix (and the project). If one hip-hop mix has a lot more vocal content than another, comparing the loudness of the two mixes becomes misleading.

If I’m mixing in a drum-heavy genre, I’ll be careful to reference that primary balance benchmark. If my next project is a vocal-driven style, I’ll simply re-establish my benchmark. In either case, I’ve set up the balance relationships within my mixes so that they can be directly compared with other albums in the presumed audience playlist.

Now let’s take a closer look….

Mastering

These types of musically relevant aspects of mix structure will help you create consistent, engaging mixes that fit into a genre in a lot of fundamental ways. The mastering process can then more effectively finish preparing those mixes for their commercial audience, including addressing their market loudness.

To read the full detailed article see: Mixing Toward Loudness

October 6, 2011

Tips for Mixing the Low End

Filed under: Bass, Mixing reviews — audiofanzine @ 5:58 am

Besides vocal mixing, I would say the most common question I read about on the internet is how to manage the low end. The kick and bass – or whatever else might be occupying that area – are the weight and power of a track. In addition, they’re often the rhythmic backbone.

People tend to have a lot of trouble with low end, and I think there are two specific reasons why:

1) Harder to Hear

A lot of speakers and headphones simply don’t reproduce the low end with great detail and accuracy. You really need large cones, preferably 8″ or more, to reproduce the low end correctly. On top of that, rooms need a lot of treatment to handle the low end properly: parallel walls and corners tend to distort bass reproduction, making it hard to gauge what you are hearing. To complicate the issue, the bass range is much smaller, in linear terms, than the higher octaves. You can go from a sub-bass A up a full octave to the next A within a span of just 55 Hz. In the upper ranges, 55 Hz might not even get you to the next note! Mathematically, this means your bass elements overlap much more readily, leaving you less space to make things sound separate and focused.
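A quick calculation with equal-tempered note frequencies makes the point concrete; the specific octaves below are my own illustration:

```python
a1, a2 = 55.0, 110.0            # a bass A and the A one octave above it
a5 = 880.0                      # a treble A
a_sharp5 = a5 * 2 ** (1 / 12)   # just one semitone above A5

print(a2 - a1)                  # 55.0 -> 55 Hz spans an entire low octave
print(round(a_sharp5 - a5, 1))  # 52.3 -> 52 Hz doesn't even reach the next note up top
```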

2) Frequency Perception

If you set a bass signal at equal amplitude with a mid-range signal, you’ll perceive the bass signal as being quieter (see the Fletcher-Munson curves). This means it takes more juice for the low end to come out booming – especially if you’re going head-first into some heavy compression, which is often the case for Dance and Hip Hop music. But hey, how important is having a big low end in Dance and Hip Hop? Oh wait… So how do we get the low end focused and big?

Let’s take a closer look…

Don’t

  • Don’t carve out low frequencies to make room for other low elements. Bass rarely works that way. If you are carving out frequencies, do it because there’s an excessive resonance there, or because there’s sub build-up.
  • Don’t be afraid of narrow boosts. Convention seems to say that you should boost wide. But in the low range – wide becomes very relative. Remember, there’s less space in the low range – so a wide bandwidth is going to be super wide. Also, narrow boosts can help emphasize a good sub. Just be careful when choosing what narrow band – make sure it helps the bass element and fits properly in the context of the track.
  • Don’t freaking side-chain every bass to every kick in every mix. Yes, ducking the bass out of the way of the kick can be a good way to get the kick in the open. However, the bass should be SUPPORTING the kick – and if it’s supporting the kick and you duck it out of the way, there goes your support. Also, remember that ducking has rhythmic consequences. In certain styles, where the kick comes at regular intervals, this can be cool. When the kick doesn’t come at regular intervals, or the hits are very close together, you start losing definition of that rhythm. I actually like to do the opposite: long, sustaining bass lines tend to have very little movement and don’t always aid the rhythm, so I’ll side-chain an expander (or an upward expander) to the kick – so when the kick hits, the bass jumps a little with it (see the sketch after the next paragraph).

In the track below, I actually do both. During the verse, the bass is chained to expand with the kick. In the chorus, the bass is chained to duck under the kick. Oh, and there’s a sine wave gated to the kick drum, tuned to the root of whatever chord the song happens to be on.
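Below is a bare-bones sketch of those two sidechain moves, a minimal Python implementation of my own rather than the plugins used on the track: follow the kick’s envelope, then either pull the bass down under it (ducking) or push it up with it (upward expansion).

```python
import numpy as np

def kick_envelope(kick, sr, release_ms=80.0):
    """Instant-attack, smooth-release envelope of the kick track, normalized to 0..1."""
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(kick)
    e = 0.0
    for n, s in enumerate(np.abs(kick)):
        e = max(s, rel * e)
        env[n] = e
    peak = env.max()
    return env / peak if peak > 0 else env

def sidechain(bass, kick, sr, depth_db=6.0, duck=True):
    """Duck the bass under the kick, or (duck=False) make it jump with the kick.
    Assumes bass and kick are float arrays of the same length."""
    env = kick_envelope(kick, sr)
    sign = -1.0 if duck else 1.0
    return bass * 10.0 ** (sign * depth_db * env / 20.0)
```

Flipping the duck flag between sections mirrors the verse and chorus treatment described above.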

Daylight – Matthew Weiss Mix

To read the full detailed article see:  Tips for mixing the Low End

July 22, 2011

Exclusive Interview with Dave Pensado

Filed under: Mixing reviews — Tags: , , , , , — audiofanzine @ 11:14 am

Dave Pensado is a man who requires minimal introduction. He’s a world-class mix engineer who’s worked on countless hit records. He’s also a teacher and mentor to an entire generation of successful mix engineers (including Jaycen Joshua, Ethan Willoughby, Ariel Chobaz, and more).

Dave was kind enough to take time out of his busy schedule to join us and answer some questions. Enjoy.

—–

 

I went to bed at 3am last night. When did you go to bed? What does your average week look like?


I’m still awake, I didn’t go to bed. I work about 105 hours a week, every day is 14 hours to around the clock. When I get on a roll I don’t like to stop. It’s not unusual after two weeks to slow down for a day or two though.

 

Let’s do the quick bio thing. Did you grow up in a musical family? Start playing early? See yourself as a mixer?


I was involved with music very early on. My mom was a gifted musician, and I learned a lot from her. I don’t know if I was particularly predisposed to mixing – really, I don’t even look at myself as a mixer, I look at myself as a guy who makes records. I just don’t participate in the entire process. I usually come in at the later part. But I don’t separate the different categories of engineering – it’s all just the process of making the record. For me, I enjoy every part of the process, but I tend to find myself at the mixing stage. For a while I thought I’d be playing on the records. Going from playing to engineering is not that big of a step though. A number of engineers started this way. We were broke musicians, we couldn’t hire an engineer.

 

Cool. Let’s talk “Pensado’s Place.” You’re making accomplished individuals very accessible. You’re exposing tons of great information. Why is it that you seem to have no qualms about revealing so many of your techniques?


It’s good to reiterate the point: I’m not selling my engineering, I’m selling my taste.

Even though Jaycen learned some engineering from me, he came to me with incredible taste. Dylan also has taste. I pick them because of their taste. They absorbed their engineering skills over time. The unique thing is that none of my assistants sound like me. We work together so much, and I hear little things in their mixes – but they’re their own people, and should be. If we were painters and we decided to study art at a college, one of the problems is that artists sometimes come out as third-rate copies of their teachers. Some teachers grade from the perspective of what they feel is good. But it’s really about aesthetic.

This is a good time to let the readers know: if you have two hours available, the best use of your time is to listen to as many records as possible instead of just learning techniques. That time comes after immersing yourself in records you enjoy. Create a set of references. There’s an old myth that says whenever you buy an acoustic guitar, you should set it in front of your speakers and play the best music you know, and let the guitar absorb it so the wood retains that sound. Mixers need that same sort of thing. Get your own taste and then study.

 

It really can’t be said enough. So, where do you see the show going? It seems to be gaining popularity – it’s a fantastic show. What’s the goal?


I don’t want every Pensado’s Place episode to be perfect for every human – I want each one to be right for certain things. I want each episode to have a timeless appeal – I don’t want them to be irrelevant in a year. It’s not just about mixing, but everything around the profession. One of the concepts behind the show is the question: once you make a mix, what the heck do you do with it?

 

I’m going to have A&Rs on the show, people on the business side. Even an art professor from UCLA, because the brain has the same components; creativity is creativity, and I want different perspectives. I might have a show on successful mix engineers’ hobbies, and how those hobbies can make you a better engineer. I hope the entertainment makes it accessible to everyone, but not every episode is aimed at everyone.

 

I cook. Little known fact. What’s your hobby?


Photography. I use a lot of visual metaphors for mixing.

 

What is the future of “Pensado’s Place”? Do you have a definite plan, or an indefinite plan?


I see it having a definite future. I may hand it off to someone else, but as long as people care, it’ll still be on. It’s all about hanging with my friends. I’ve always envisioned the show having an importance – it might morph, it might change just like our industry changes and our profession of mixing has changed.

 

Mixing in 2011 is 60% different than mixing in the 90s. I’ll have people on the show to help us feel into the future – it’s how to make a living – it’s how to learn – it’s a broad, almost impossible task, but it’s fun. What people don’t know is that I don’t allow the show to be edited. It’s live because that’s who we [my guests and I] are. The only time there would be an edit is if a guest said something that he later felt uncomfortable about.

 

Pensado’s Place is really much more than Dave Pensado. You have a great team. Herb is fantastic.


I’ve known Herb 20 years, just being in his presence is fun for me. I think if you look at the guests and the interaction with Herb and I – they all start out a little nervous and then settle in. I’m proud of what we’ve accomplished starting out with nothing. Now the show takes 20 people. If you add up all the views on YouTube, and all the episodes everywhere they’re viewed, we’re probably going to hit – well, a lot of views.  I couldn’t put it together without Will and Herb, and Ryan, Ben and Ian. I get the glory but they do the real leg work. My wife filters through the questions.

 

I’m sure you get a lot of emails and comments.


I get about 300 emails – I don’t have time to respond every time someone contacts me. So, to everyone reading this, know that even if I don’t respond, I read every single email.

 

You were once quoted saying that mixing R&B is more challenging than rock. The sound of rock seems to have adopted a lot of pop trends, influenced by hip-hop. Do you feel rock mixing has changed? How so? Is it still easier?


I still stand by that statement. However, when I first made the statement, I assumed that people would print the rest of what I said! To clarify, the difference is that in the rock world, all of the effort to get quality is in the tracking. In the R&B world, everything is left for mixing. Tracking for R&B is just “get it to tape” – it’s a fix-in-the-mix philosophy – but in a lot of Pop, mixing is an integral part of the production. What I mean is, the producer is creating sounds; he’s mixing as he goes. When I get an R&B or Pop record in, the session has plugins on every track; he mixed as he went. Then I have to sort through all of that and pick it apart.

 

On the rock side, it’s rare that I get plugins on the tracks because the information is in the live capture. An incredible skill and talent has gone into getting the tracking right on the way in. I personally think the most intricate skill is required for tracking; a good tracking engineer can rival the best mixing engineer. Having said that, as to which is harder, I’m totally capable of screwing up either; they require different skill sets. The one thing that I’ve always maintained a great mixer should do is find the energy, the emotion, and what makes the song unique. Manny kind of went into that a bit, and I was mesmerized listening to his answers. At the end of the day, mixing is not manipulation of sound – it’s emotion.

 

Very early in my career, I think I’d been engineering for 3 weeks, I did a bagpipe album for the top bagpiper in the world – it sounded like someone stomping through a field of cats. It was difficult to wrap my head around because when I EQ’d it (to smooth out the sound) the whole sound went away. So I accepted it and just turned to the playing. The album was well received – turns out that figuring out the emotion is what made it successful.

 

Our job is to ease the pain a bit in our culture. Even not so esoterically, what people remember is the emotion and the feeling they get from a song. Therein lies the secret to selling records, and perhaps why we’re not selling records now.

 

At what point do you say “I’m done” with a mix? What’s the feel?


I started mixing 35 years ago and I’m still waiting to finish a couple of those mixes. You don’t finish, you just run out of time. In classical and jazz, it may be possible to finish a mix. Currently with the internet, by the time you finish, by the end of the night, it’s obsolete. I enjoy staying ahead of trends, and contributing to the advancement of trends. But these trends always change. And really, you can hear a song a million different ways. I’ve actually recently gone and redone some mixes from a few months ago.

 

What trends have you stayed on the cutting edge of?


Two years ago I was predicting a shift and trend toward euro dance invading hip hop.

Another trend, Rock – just to stir the pot – I don’t think there is any Rock anymore, at least not that’s easily accessible. Rock is now Pop music with turned down guitars and sweet effects. The last great Rock record was Queens of The Stone Age. Rock is now pop with guitars instead of synthesizers. The drums aren’t even live.

 

Do you see more sample replacement or programming in Rock?


What’s the difference? When you change out the drums and make the drum timing so perfect, all you’ve done is create a programmed part. With live drums, you get the drummer, and you don’t dick with it. Maybe a couple of nudges – but perfectly timed drum tracks are anathema to Rock.

 

With R&B you have a steady drum track. We don’t rely on the drums to create the rhythm; we play against the perfect rhythm. You have things that move around it, that make it pocket. In Rock, the drum track should move. With the drums on the Rolling Stones’ music, everybody’s following Keith – and that works. Had you quantized Charlie’s drums, then Keith would have been out of time. The argument is not live versus programmed, it’s perfect versus emotional.

 

I once got the idea that ambiance is about one third of a mix. I have yet to feel otherwise. To me, room, reverb, and delay make or break a mix. Where does it fall along your scale? How long do you spend crafting ambiance?


I spend an inordinate amount of time making ambiances. There are two pan pots: left-to-right and front-to-rear. The front-to-rear one is imaginary – if a person at the other end of a gymnasium yells, the initial sound hits my ear and my brain calculates where they are within 50-100 ms. I get that early reflection, which cues my ear to the location and size of the space. With careful manipulation of reverb, echo, pre-delay, and early reflections, you can place things pretty accurately.

To read the full detailed article see:  Dave Pensado Interview

March 11, 2011

Mixing in a Plug-In World

Filed under: Mixing reviews, Plugin — audiofanzine @ 9:17 am

You gotta love plug-ins, but they’ve changed the rules of mixing. In the hardware days, the issue was whether you had enough hardware to deal with all your tracks. Now that you can insert the same plug-in into multiple tracks, the question is whether your processor can handle all of them.

Does it matter? After all, mixing is about music, balance, and emotional impact—not processing. But it’s also about fidelity, because you want good sound. And that’s where Mr. Practical gets into a fight with Mr. Power.

The Plug-in Problem

Plug-ins require CPU power. CPUs can’t supply infinite amounts of power. Get the picture? Run too many plug-ins, and your CPU will act like an overdrawn bank account. You’ll hear the results: Audio gapping, stuttering, and maybe even a complete audio engine nervous breakdown.

And in a cruel irony, the best-sounding plug-ins often drain the most CPU power. This isn’t an ironclad rule; some poorly-written plug-ins are so inefficient they draw huge amounts of power, while some designers have developed ultra-efficient algorithms that sound great and don’t place too many demands on your CPU. But in general, it holds true.

Bottom line: If you need to use processing in your mix, you want as much available power as possible. Here are the Top Ten tips that’ll help you make it happen.

1. Upgrade Your CPU

Let’s get the most expensive option out of the way first. Because plug-ins eat CPU cycles, the faster your processor can execute commands, the more plug-ins it can handle. Although there are a few other variables, as a rule of thumb, higher clock speeds = more power for plug-ins. Still running in the sub-gigahertz range? Time for an upgrade. Cool bonus: pretty much everything else will happen faster, too.

2. Increase Latency

Fig. 1: Latency setting.

And in the spirit of equal time, here’s the least expensive option: Increase your system latency. When you’re recording, especially if you’re doing real-time processing (e.g., playing guitar through a guitar amp simulation plug-in) or playing soft synths via keyboard, low latency is essential so that there’s minimal delay between playing a note and hearing it. However, that forces your CPU to work a lot harder. Mixing is a different deal: You’ll never really notice 10 or even 25ms of latency. The higher the latency, the more plug-ins you’ll be able to run. Some apps let you adjust latency from a slider, found under something like “Preferences.” Or, you may need to adjust it in an applet that comes with your sound card or audio interface (Fig. 1).
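As a back-of-the-envelope check (the sample numbers are my own, and real interfaces add converter and driver overhead on top), the buffer’s contribution to latency is simply the buffer size divided by the sample rate:

```python
def buffer_latency_ms(buffer_samples, sample_rate=44100):
    return 1000.0 * buffer_samples / sample_rate

for buf in (64, 256, 1024):
    print(buf, "samples ->", round(buffer_latency_ms(buf), 1), "ms per direction")
# 64 -> 1.5 ms, 256 -> 5.8 ms, 1024 -> 23.2 ms: even the big buffer is fine for mixing
```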

Now let’s take a closer look…

9. Use Snapshot Automation

Plug-ins aren’t the only things that stress out your CPU: Complex, real-time automation also chows down on CPU cycles. So, simplifying your automation curves will leave more power available for the CPU to run plugs. Your host may have a “thinning” algorithm; use it, as you generally don’t need that much automation data to do the job (particularly if you did real-time automation with fader moves). But the ultimate CPU saver is using snapshot automation (which in many cases is all you really need anyway) instead of continuous curves. This process basically takes a “snapshot” of all the settings at a particular point on the DAW’s timeline, and when the DAW passes through that time, the settings are recalled and applied.
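As a toy illustration of the difference (a conceptual sketch, not any DAW’s actual automation API), a snapshot system only needs to store and recall a handful of parameter sets instead of streaming a continuous curve:

```python
# Parameter snapshots keyed by timeline position in seconds; the values are made up.
snapshots = {
    0.0:  {"verb_send_db": -18.0, "filter_hz": 800.0},
    32.0: {"verb_send_db": -12.0, "filter_hz": 2500.0},   # lift into the chorus
    64.0: {"verb_send_db": -18.0, "filter_hz": 800.0},    # back down for verse 2
}

def settings_at(seconds):
    """Recall the most recent snapshot at or before the playhead position."""
    times = [t for t in sorted(snapshots) if t <= seconds]
    return snapshots[times[-1]] if times else {}

print(settings_at(40.0))   # -> the 32.0 s snapshot
```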

10. Check Your Plug-in’s Automation Protocol

Our last tip doesn’t relate to saving CPU power, but to preserving sound quality. Many plug-ins and soft synths offer multiple ways to automate: By recording the motion of on-screen controls, driving with MIDI controller data, using host automation (like VST or DXi), etc. However, not all automation methods are created equal. For example, moving panel controls may give higher internal resolution than driving via MIDI, which may be quantized into 128 steps. Bottom line: Using the right automation will make for smoother filter sweeps, less stair-stepping, and other benefits.
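To see why the stair-stepping happens, here is a small numeric example; the frequency range and values are my own. A 7-bit MIDI controller can only land on 128 positions across the whole sweep, so nearby target values collapse onto the same step:

```python
def midi_quantize(value, lo=200.0, hi=12000.0):
    """Snap a target value to the nearest of the 128 positions a 7-bit CC allows."""
    step = round(127 * (value - lo) / (hi - lo))
    return lo + step * (hi - lo) / 127

for cutoff_hz in (1000.0, 1040.0, 1100.0):
    print(cutoff_hz, "->", round(midi_quantize(cutoff_hz), 1))
# 1000.0 and 1040.0 both land on 1036.2 Hz; the sweep then jumps ~93 Hz at once
```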

Okay . . . there are your Top Ten tips, but here’s a bonus one: Any time you go to insert a plug-in, ask yourself if you really need to use it. A lot of people start their mix a track at a time, and optimize the sound for that track by adding EQ, reverb, etc. Then they bring in other tracks and optimize those. Eventually, you end up with an overprocessed, overdone sound that’s just plain annoying. Instead, try setting up a mix first with your instruments more or less “naked.” Only then, start analyzing where any problems might lie, then go about fixing them. Often tracks that may not sound that great in isolation mesh well when played together.

To read the full detailed article see:  Mixing in a Plug-In World

March 2, 2011

Tips for Mixing Rap Vocals

If I had to pick the question I get asked most often, it would have to be “How do I mix rap vocals?” – or some variation thereof. It comes up at least once a week, if not more often.

I mix a new rap vocal four or five times a week – much more if you count different rappers on the same song. I have developed an approach – sort of a formula to create a formula. In truth, we know that all songs, vocals, captures, and performances are different. There can never be one formula to mix all vocals effectively. And there are many approaches to conceptualizing a vocal treatment – mine is one of many.

The Concept

It all starts with the concept. I say this time and time again, and it only gets more true as I say it: in order to mix anything, you need an end game. There has to be some kind of idea of where the vocal is going to go before you start getting it there. That idea can, and probably will, change along the way, but there has to be some direction – or else why do anything at all?

The big problem most people have with mixing rap vocals is that they think of the word “vocals” without considering the word “rap.” Rap is a supremely general term – there are big differences between 1994 NY-style rap vocals and 2010 LA-style rap vocals.

Now let’s have a listen to some mixing samples…

Processing

Now you have the vocals clean (or maybe they came in clean to begin with). It’s time to decide what to do with them. Now, I can’t write how you should or should not process your vocals, but I can give some insight into things to consider and think about.

Balance

Figuring out the relationship between the vocals and the other instruments in the same frequency area is extremely important. Quintessentially, Hip Hop is all about the relationship between the vocals and the drums – and the number one competitor with the voice is the snare. Finding a way to make both the vocals and the snare prominent without stepping on each other will make the rest of the mix fall nicely into place.

In “1nce Again,” you’ll notice that the snare is a little louder than the vocals, and seems to be concentrated into the brighter area of the frequency spectrum, while the vocals are just an inch down, and living more in the mid range. This was a conscious decision made in the mix. But mixes like Loungin’ have the vocals on par with the snare. And Massive Attack has the vocals up – but it’s not really a snare, it’s a percussive instrument holding down the 2 and 4 that lives primarily in the lower mid region.

“Air”

Hip Hop vocals generally do not have much in the way of reverb. There are three primary reasons for this: 1) Rap vocals tend to move faster and hold more of a rhythmic function than sung vocals, and long reverb tails can blur the rhythm and articulation. 2) The idea of Hip Hop is to be “up front and in your face,” and reverb tends to sink things back in the stereo field. 3) Everyone else is mixing their vocals that way. Not a good reason, but kind of true.

 

However, vocals usually do benefit from a sense of 3-D sculpting, or “air” – a sense of space around the vocals that makes them more lively and vivid. Very short, wide, quiet reverb can really do the trick here. Another good thing to try is a delay (echo) pushed way into the background, with a lot of the high end rolled off. This creates the sense of a very deep, three-dimensional space, which by contrast makes the vocal seem even more forward. Lastly, if you are in a good tracking situation, carefully bringing out the natural space of the tracking room can be a good way to get super-dry vocals with a sense of air around them. Compression with a very slow attack and a relatively quick release, plus a boost in the super-treble range, can often bring out that natural air.
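Here is a minimal sketch of that dark, buried delay; the delay time, feedback, cutoff, and return level are placeholder values of my own, not a recipe from the article. It is a feedback delay that rolls the highs off every repeat and returns far below the dry vocal, so it reads as depth rather than as an audible echo.

```python
import numpy as np

def dark_delay(x, sr, delay_ms=110.0, feedback=0.35, cutoff_hz=2500.0,
               return_db=-24.0):
    d = int(sr * delay_ms / 1000.0)
    a = np.exp(-2.0 * np.pi * cutoff_hz / sr)       # one-pole low-pass coefficient
    line = np.zeros(len(x) + d)                     # simple delay line
    wet = np.zeros(len(x))
    lp = 0.0
    for n in range(len(x)):
        lp = (1.0 - a) * line[n] + a * lp           # delayed signal with highs rolled off
        wet[n] = lp
        line[n + d] = x[n] + feedback * lp          # feed input plus filtered echo forward
    return x + wet * 10.0 ** (return_db / 20.0)     # wet return well under the dry vocal
```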

Shape & Consistency

A little compression is often nice on vocals, just to sit them into the mix and add a little tone. In a sparse mix, a little dab’ll do ya. The most common mistake people make when processing vocals for Hip Hop is to over-compress. High levels of compression are really only beneficial to a mix when there is a lot of material fighting for sonic space. When you read about a rapper’s vocals going through four compressors and really getting squeezed, it’s probably because there are tons of things already going on in the mix and the compression is necessary for the vocals to cut through – or because it’s a stylistic choice to really crunch the vocals.

Filtering

What’s going on around the voice is just as important as the vocals themselves. Carefully picking what to get rid of in order to help the vocals along is very important. For example, most engineers hi-pass filter almost everything except the kick and bass, which clears up room for the low-end information. But the importance of low-pass filtering is often overlooked. Synths – even bass synths – can carry a lot of high-end information that just isn’t necessary to the mix and leaves the “air” range around the vocals feeling choked. A couple of well-placed low-passes could very well bring your vocals to life.

 

Also, back to the subject of hi-passing: unless you are doing the heavy-handed Bob Power thing, you really don’t need to be hard hi-passing your vocals at 120 Hz. The human voice, male and female, has chest resonance that goes down to 80 Hz (and sometimes even lower). Try a gentle hi-pass at around 70 or 80 Hz to start with if you’re clearing up the vocals. Or maybe no hi-pass at all…
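If you want to hear the difference, here is a quick sketch of that gentler filter; the library, filter order, and corner frequency are my own choices for illustration. A low-order Butterworth high-pass around 80 Hz trims rumble without carving out the chest resonance the way a steep 120 Hz cut can.

```python
from scipy.signal import butter, sosfilt

def gentle_hipass(x, sr, corner_hz=80.0, order=2):
    """Second-order (~12 dB/oct) high-pass to clean up below the vocal's chest range."""
    sos = butter(order, corner_hz, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, x)

# usage: vocal_hp = gentle_hipass(vocal_samples, 44100)
```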

Presence

Deciding where the vocal lives frequency-wise is important. Mid-heavy, “telephonic” vocals can be cool at times; low-mid, “warm”-sounding vocals certainly have their place. Commonly, the practice is to hype the natural presence of the vocals by getting rid of the “throat” tones and proximity build-up, which generally live in the 250-600 Hz range (but don’t mix by numbers – listen, listen, listen). This in turn exaggerates the chest sound and the head sound – particularly the sounds that form at the front of the mouth, tongue, and teeth. These are the tones we use to pronounce our words, and they generally live in the upper midrange (2-5 kHz; again, no numbers – listen, listen, listen).

I think that about covers the basics of what to listen for when working your vocals.

To read the full detailed article with sound samples visit:  Mixing Rap Vocals
