AF’s Weblog

October 21, 2011

10 Questions About Mastering Your Recordings

Filed under: Mastering — audiofanzine @ 7:43 am

Mastering is a crucial process, but it’s not always all that well understood by the average musician…so let’s deal with some of the basic issues.

Your tunes are done, and you’ve decided it’s time to create a CD — which brings you to the subject of mastering, where all the tunes are assembled and optimized for the best possible sound. You really don’t want to make any mistakes at this crucial stage; indeed, mastering can make or break a record, so there’s a lot of interest in doing it right. Here are the ten most common questions I hear from people who are about to get their work mastered.

Q. What’s the best piece of gear for giving me a professional, “radio-ready” sound?
A. The best piece of gear is a professional mastering engineer who has done this process before for hundreds, if not thousands, of recordings.

Q. So do I just send an audio CD with all the cuts, and the engineer masters them?
A. That’s one option, but certainly not the most desirable. Although you should always check with the engineer for specific requirements, if you recorded your music in high-resolution audio, then it’s best to provide those high-resolution mixes, as WAV or AIFF files. The mastering engineer will likely do some processing, and 24-bit files give more “calculational headroom.”


Steinberg’s Wavelab includes excellent dithering options, but don’t apply these if you plan to hand off your file to a mastering engineer.

Q. Wouldn’t it be better to send a dithered version of the 24-bit files, as the files are going to end up as 16 bits on a CD anyway?
A. No. Dither is always applied as the very last stage of mastering, when the higher resolution signal gets downsized to the 16 bits required by Red Book audio.

Q. I want a couple of cuts to crossfade into each other. Should I do the crossfading myself and send the combined cut?
A. Probably not. Fades can be dicey, and again, the mastering engineer will likely have tools that provide the best possible audio characteristics when creating fades. Also, that will ensure dithering happens to the combined file — you don’t want to dither two files, then crossfade them. Just make sure that you include full documentation on where you want the fade to begin and end for the two cuts.

 

To read the full detailed article see:  10 Questions about Mastering

February 25, 2011

Much Ado About Dithering

Filed under: Mastering — audiofanzine @ 10:31 am

It’s a dirty job to go from high-res audio to 44/16, but someone’s got to do it.

The ultimate form of digital audio used to have a 16-bit word length and 44.1 kHz sampling rate. Early systems even did their internal processing at 16/44.1, which was a problem — every time you did an operation (such as change levels, or apply EQ), the result was always rounded off to 16 bits. If you did enough operations, these roundoff errors would accumulate, creating a sort of “fuzziness” in the sound.
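
To see how those errors pile up, here’s a minimal Python sketch, assuming NumPy, that runs the same chain of gain changes twice: once re-quantized to 16 bits after every operation, the way early systems worked, and once at full float precision.

```python
import numpy as np

def quantize_16bit(x):
    """Round a float signal (-1.0 to 1.0) to the nearest 16-bit step."""
    return np.round(x * 32767.0) / 32767.0

rng = np.random.default_rng(0)
signal = rng.uniform(-0.5, 0.5, 44100)  # one second of test "audio"

quantized = signal.copy()  # early system: 16 bits after every operation
precise = signal.copy()    # modern system: full precision throughout
for i in range(20):
    gain = 0.5 if i % 2 == 0 else 2.0  # alternating level changes
    quantized = quantize_16bit(quantized * gain)
    precise = precise * gain

error = quantized - precise
print(f"accumulated RMS error: {np.sqrt(np.mean(error**2)):.2e}")
# After 20 operations the accumulated error is noticeably larger than
# a single 16-bit quantization step (~3.1e-05): the "fuzziness" above.
```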

The next step forward was increasing the internal resolution of digital audio systems. If a mathematical operation created an “overflow” result that required more than 16 bits, no problem: 24-, 32-, 64-, and even 128-bit internal processing became commonplace. As long as the audio stayed within the system, running out of resolution wasn’t a problem.

 

Nowadays, your hard disk recorder most likely records and plays back at 24, 32, or 64 bits, and the rest of your gear (digital mixer, digital synth, etc.) probably has fairly high internal resolution as well. But currently, although there are some high-resolution audio formats, your mix usually ends up in the world’s most popular delivery medium: a 16-bit, 44.1 kHz CD.

What happens to those “extra” bits? Before the advent of dithering, they were simply discarded (just imagine how those poor bits felt, especially after being called the “least significant bits” all their lives). This meant that, for example, decay tails below the 16-bit limit just stopped abruptly. Maybe you’ve heard a “buzzing” sort of sound at the end of a fade out or reverb tail; that’s the sound of extra bits being ruthlessly “downsized.”

Dithering to the Rescue

Dithering is a concept that, in its most basic form, adds noise to the very lowest-level signals, thus using the data in those least significant bits to influence the sound of the more significant bits. It’s almost as if, even though the least significant bits are gone, their spirit lives on in the sound of the recording.

Cutting off bits is called truncation, and some proponents of dithering believe that dithering somehow sidesteps the truncation process. But that’s a misconception. Dithered or not, when a 24-bit signal ends up on a 16-bit CD, eight bits are truncated and never heard from again. Nonetheless, there’s a difference between flat-out truncation and truncation with dithering.
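
The difference is easy to demonstrate. Here’s a minimal Python sketch, assuming NumPy (the helper function is mine, not from any mastering tool), that reduces a fading tone to 16 bits twice: once straight, and once with triangular (TPDF) dither added first.

```python
import numpy as np

def to_16bit(x, dither=False, rng=None):
    """Reduce a float signal (-1.0 to 1.0) to 16-bit integer samples.

    With dither=True, TPDF noise of +/-1 LSB peak is added before the
    word length is reduced, so low-level detail modulates the last bit
    instead of being chopped off.
    """
    lsb = 1.0 / 32768.0
    if dither:
        rng = rng or np.random.default_rng(0)
        # The sum of two uniform noise sources has a triangular (TPDF)
        # distribution.
        x = x + (rng.uniform(-0.5, 0.5, x.shape) +
                 rng.uniform(-0.5, 0.5, x.shape)) * lsb
    return np.clip(np.round(x / lsb), -32768, 32767).astype(np.int16)

# A 440 Hz tone fading down through the 16-bit floor:
t = np.linspace(0.0, 1.0, 44100)
tail = np.sin(2 * np.pi * 440 * t) * np.exp(-10 * t) * 1e-4

plain = to_16bit(tail)                  # tail distorts, then stops dead
dithered = to_16bit(tail, dither=True)  # tail decays into smooth noise
print("nonzero samples, plain:   ", np.count_nonzero(plain))
print("nonzero samples, dithered:", np.count_nonzero(dithered))
```

The plain version buzzes along at the last couple of quantization levels and then rounds to digital silence; the dithered version trades that for a faint, smooth noise floor.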

Now let’s take a closer look at dithering…

Dithering Rules

The First Law of dithering is don’t dither a signal more than once. Dithering should happen only when converting a high bit-depth source format to its final, 16-bit, mixed-for-CD format (and in the years to come, we’ll probably be dithering our 32- or 64-bit internal processing systems down to 24 bits for whatever high-resolution format finally takes off).

For example, if you are given an already dithered 16-bit file to edit in a high-resolution waveform editor, that 16-bit file already contains dithered data, and the higher-resolution editor should preserve it. When it’s time to bring the edited version back down to 16 bits, simply transfer over the existing file without dithering again.

Another possible problem occurs if you give a mastering or duplication facility two dithered 16-bit files that are meant to be crossfaded. Crossfading the dithered sections could lead to artifacts; you’re better off crossfading the two, then dithering the combination.

Also, check any programs you use to see if dithering is enabled by default, or enabled accidentally and saved as a preference. In general, you want to leave dithering off, and enable it only as needed.

Or consider Cubase SX, which has an Apogee-designed UV22 plug-in. Suppose you add this to the final output, then suppose you add another plug-in, like the Waves L1 Ultramaximizer+. This also includes dithering, which defaults to being enabled when inserted. So, check carefully to make sure you’re not “doubling up” on dithering, and disable dithering in one or the other.

Dithering in Cubase

If you insert dithering in Cubase SX, it defaults to being enabled. So if you use this, make sure that any other master effects plug-ins you add do not have dithering enabled (in this screen shot, the WAVES dithering has been turned off). Or, disable Cubase’s dithering section and use the other plug-in’s dithering instead.

The best way to experience the benefits of dithering is to crank up some really low-level audio and compare different dithering and noise-shaping algorithms. If your music has any natural dynamics in it, proper dithering can indeed give a sweeter, smoother sound free of digital quantization distortion when you downsize to 16 bits.

To read the full detailed article see:  All About Dithering

February 24, 2011

How to Create Wide Open Mixes

Filed under: Mastering, Mixing reviews — audiofanzine @ 11:15 am

Here are some secrets behind getting those wide, spacious, pro-sounding mixes that translate well over any system.

We know them when we hear them: wide, spacious mixes that sound larger than life and higher than fi. A great mix translates well over different systems, and lets you hear each instrument clearly and distinctly. Yet judging by a lot of project studio demos that pass across my desk, achieving the perfect mix is not easy…in fact, it’s very hard. So, here are some tips on how to get that wide open sound whenever you mix.

The Gear: Keep it Clean

Eliminate as many active stages as possible between source and recorder. Many times, devices set to “bypass” may not be adding any effect but are still in the signal path, which can add some slight degradation. How many times do line-level signals go through preamps due to lazy engineering? If possible, send sounds directly into the recorder—bypass the mixer altogether. For mic signals, use an ultra-high-quality outboard preamp and patch it directly into the recorder rather than use a mixer with its onboard preamps.

Although you may not hear much of a difference when monitoring a single instrument if you go directly into the recorder, with multiple tracks the cumulative effect of stripping the signal path to its essentials can make a significant difference in the sound’s clarity.

But what if you’re after a funky, dirty sound? Just remember that if you record with the highest possible fidelity, you can always mess with the signal later on during mixdown.

The Arrangement

Before you even think about turning any knobs, scrutinize the arrangement. Solo project arrangements are particularly prone to “clutter” because as you lay down the early tracks, there’s a tendency to overplay to fill up all that empty space. As the arrangement progresses, there’s not a lot of space left for overdubs.

Here are some suggestions when tracking:

  • Once the arrangement is fleshed out, go back and recut tracks that you cut earlier on. Try to play these tracks as sparsely as possible to leave room for the overdubs you’ve added. Like many others, I write in the studio, and often the song will have a slightly tentative feel because it wasn’t totally solid prior to recording it. Judiciously recutting a few tracks always seems to both simplify and improve the music.
  • Try building a song around the vocalist or other lead instrument instead of completing the rhythm section and then laying down the vocals. I often find it better to record simple “placemarkers” for the drums, bass, and rhythm guitar (or piano, or whatever), then immediately get to work cutting the best possible vocal. When you re-record the rhythm section for real, you’ll be a lot more sensitive to the vocal nuances.
  • As Sun Ra once said, “Space is the place.” The less music you play, the more weight each note has, and the more spaciousness this creates in the overall sound.

Now let’s take a closer look…

Mastering

Mastering is the Supreme Court of audio—if you can’t get a ruling in your favor there, you have nowhere else to go. A pro mastering engineer can often turn muddy, tubby-sounding recordings into something much clearer and more defined. Just don’t expect miracles, because no one can squeeze blood from a stone. But a good mastering job might be just the thing to take your mix to the next level, or at least turn a marginal mix into a solid one.

The main point of this article is that there is no button you can click on that says “press here for wide open mixes.” A good mix is the cumulative result of taking lots of little steps, such as the ones detailed above, until they add up to something that really works. Paying attention to detail does indeed help.

To read the full detailed article see:  How to Create Wide Open Mixes

 

February 9, 2011

DC Offset: The Case of the Missing Headroom

Filed under: Mastering — audiofanzine @ 3:11 pm

It was a dark and stormy night. I was rudely awakened at 3 AM by the ringing of a phone, pounding my brain like a jackhammer that spent way too much time chowing down at Starbucks. The voice on the other end was Pinky the engineer, and he sounded as panicked as a banana slug in a salt mine. “Anderton, some headroom’s missing. Vanished. I can’t master one track as hot as the others on the Kiss of Death CD. Checked out the usual suspects, but they’re all clean. You gotta help.”

Like an escort service at a Las Vegas trade show, my brain went into overdrive. Pinky knew his stuff…how to gain-stage, when not to compress, how to master. If headroom was stolen right out from under his nose, it had to be someone stealthy. Someone you didn’t notice unless you had your waveform Y-axis magnification up. Someone like…DC Offset.

 

Okay, so despite my best efforts to add a little interest, DC offset isn’t a particularly sexy topic. But it can be the culprit behind problems such as lowered headroom, mastering oddities, pops and clicks, effects that don’t process properly, and other gremlins.

DC Offset in the Analog Era

We’ll jump into the DC offset story in the ’70s, when op amps became popular. These analog integrated circuits pack a tremendous amount of gain into a small, inexpensive package with (typically) two inputs and one output. Theoretically, in its quiescent state (no input signal), the inputs and output sit at exactly 0.00000 volts. But due to imperfections within the op amp itself, sometimes there can be several millivolts of DC present at one of the inputs.

Normally this wouldn’t matter, but if the op amp is providing a gain of 1000 (60 dB), a typical 5 mV input offset would get amplified up to 5000 mV (5 volts). If the offset appeared at the inverting (out-of-phase) input, then the output would have a DC offset of –5.0 volts. A 5 mV offset at the non-inverting input would cause a +5.0 volt DC offset.

There are two main reasons why this is a problem.

  • Reduced dynamic range and headroom. An op amp’s power supply is bipolar (i.e., there are positive and negative supply voltages with respect to ground). Suppose the op amp’s maximum undistorted voltage swing is ±15V. If the output is already sitting at, say, +5V, the maximum voltage swing is now +10/-20V. However, as most audio signals are usually symmetrical around ground and you don’t want either side to clip, the maximum voltage swing is really down to ±10V—a 33% loss of available headroom (see the quick check after this list).
  • Problems with DC-coupled circuits. In a DC-coupled circuit (sometimes preferred by audiophiles due to superior low frequency response), any DC gets passed along to the next stage. Suppose the op amp mentioned earlier with a +5V output offset now feeds a DC-coupled circuit with a gain of 5. That +5V offset becomes a +25V offset—definitely not acceptable!
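
Here’s that quick check, a few lines of Python using the figures from the first bullet:

```python
# Headroom lost to DC offset, using the numbers from the text.
supply_swing = 15.0  # op amp clips at +/-15 V
dc_offset = 5.0      # 5 mV input offset x gain of 1000 = 5 V at the output

positive_room = supply_swing - dc_offset  # +10 V of swing left
negative_room = supply_swing + dc_offset  # -20 V of swing left

# A symmetrical audio signal is limited by the smaller side:
usable = min(positive_room, negative_room)
loss = 1.0 - usable / supply_swing
print(f"usable swing: +/-{usable:.0f} V, headroom lost: {loss:.0%}")
# -> usable swing: +/-10 V, headroom lost: 33%
```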

Now let’s take a closer look at some other cases…

Digital Solutions

There are three main ways to solve DC offset problems with software-based digital audio editing programs.

  • Most pro-level digital audio editing software includes a DC offset correction function, generally found under a “processing” menu along with functions like change gain, reverse, flip phase, etc. This function analyzes the signal, and adds or subtracts the required amount of correction to make sure that 0 really is 0. Many sequencing programs also include DC offset correction as part of a set of editing options (see the sketch after this list).
  • Apply a steep high-pass filter that cuts off everything below 20 Hz or so. (Even with a comparatively gentle 12 dB/octave filter, a signal at 0.5 Hz will still be down more than 60 dB.) In practice, it’s not a bad idea anyway to nuke the subsonic part of the spectrum, as some processing can interact with a signal to produce modulation in the below-20 Hz zone. Your speakers can’t reproduce signals this low and they just use up bandwidth, so nuke ’em.
  • Select a 2–10 millisecond or so region at the beginning and end of the file or segment with the offset, and apply a fade-in and fade-out. This creates an envelope that starts and ends at 0. It won’t get rid of the DC offset component within the file (so you still have the restricted headroom problem), but at least you won’t hear a pop at transitions.
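
The first two approaches are easy to sketch in Python, assuming NumPy and SciPy; the function names are mine, and a real editor’s correction function may be implemented differently.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def remove_dc_by_mean(x):
    """Approach 1: measure the average level and subtract it, so the
    waveform is re-centered on true zero."""
    return x - np.mean(x)

def remove_dc_by_highpass(x, sample_rate=44100, cutoff_hz=20.0):
    """Approach 2: a steep high-pass (4th order = 24 dB/octave) that
    removes DC and the subsonic junk below the cutoff."""
    sos = butter(4, cutoff_hz, btype="highpass", fs=sample_rate,
                 output="sos")
    return sosfiltfilt(sos, x)

# Example: a sine wave riding on a DC offset of +0.2.
sr = 44100
t = np.arange(sr) / sr
signal = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.2

print(f"offset before:      {np.mean(signal):+.4f}")
print(f"after mean removal: {np.mean(remove_dc_by_mean(signal)):+.4f}")
print(f"after high-pass:    {np.mean(remove_dc_by_highpass(signal, sr)):+.4f}")
```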

Case Closed

Granted, DC offset usually isn’t a killer problem, like a hard disk crash. In fact, there’s usually not enough of it to worry about. But every now and then, DC offset will rear its ugly head in a way that you do notice. And now, you know what to do about it.

To read the full detailed article please see:  DC Offset

February 2, 2011

Mastering: Curve Analysis and Acquisition Software

Bob Ludwig, Doug Sax, Bernie Grundman – they’re masters of mastering. They produce hit after hit, with nothing at their disposal other than…well, experience, talent, great ears, the right gear, and superb acoustics.

So maybe you’re missing one or more of those elements, and wish that what came out of your studio sounded as good as what comes out of theirs. So, why not just analyze the spectral response curves of well-mastered recordings, and apply those responses to your own tunes?

Why not, indeed – but can you really steal someone’s distinctive spectral balance and get that magic sound?

The answer is no…and yes. No, because it’s highly unlikely that EQ decisions made for one piece of music are going to work with another. So even if you do steal the response, it’s not necessarily going to have the same effect. But the other answer is yes, because curve-stealing processors can really help you understand the way songs are mixed and mastered, and point the way toward improving the quality of your own tunes.

As to the tools that do this sort of thing, we’ll look at Steinberg’s FreeFilter (which was discontinued, but still appears in stores sometimes), Voxengo CurveEQ, and Har-Bal Harmonic Balancer. They’re very similar, yet also very different.

How They Work

FreeFilter and CurveEQ split the spectrum into multiple frequency bands in order to analyze a signal. This creates a spectral response, as from a spectrum analyzer, while a song plays back. During playback, the program builds a curve that shows the average amount of energy at various frequencies. You can apply this analysis (reference) curve to a target file so that the target will have the same spectral response as the analyzed file, as well as edit and save the reference curve.

Har-Bal isn’t curve-stealing software per se. While optionally observing the response of a reference signal, you can open another file, and see its curve superimposed upon the reference. You can edit the opened file’s curve so it matches the reference signal more closely, but this is a manual, not automatic, process.

Fig. 1: The black line is the spectral response for Madonna’s Ray of Light; the red line represents a Fatboy Slim mix. Fatboy’s has a lot more treble, while Ray of Light has a serious low-end peak.

The manual vs. automatic aspect is in some ways a workflow issue. FreeFilter and Voxengo start by creating the reference curve, but give you the tools to adjust this manually because you’ll probably want to make some changes. Har-Bal takes the reverse route: You start out manually, and if you want to, use the tools to create something that resembles the visual reference curve, which was generated automatically when you opened the file. Also remember that curve-stealing is only a part of these programs’ talents; they’re really sophisticated EQs.

So what do some typical curves look like? Check out Fig. 1. The black line is the spectral response for Madonna’s “Ray of Light,” while the red line represents a Fatboy Slim mix. Past about 1 kHz, Fatboy’s curve shows enough high frequency energy to shatter glass. “Ray of Light” has a higher response below about 400 Hz, due mostly to a prominent kick. It has a more thud-heavy, disco kind of vibe, whereas Fatboy Slim leans more toward a techno style of mastering. Apply these curves to your own music, and they’ll take on the characteristics of the reference tunes – but the results may not be what you expect, as we’ll see.
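
None of these products publish their algorithms, but the automatic curve-matching idea can be sketched in a few lines of Python (assuming NumPy and SciPy; the function names are mine, and this is a simplification, not how FreeFilter or CurveEQ actually work): average the spectrum of the reference and the target, then EQ the target by the clamped ratio of the two.

```python
import numpy as np
from scipy.signal import welch, firwin2, lfilter

def average_spectrum(x, sr):
    """Long-term average magnitude spectrum, like the analysis pass
    these programs run while a song plays back."""
    freqs, psd = welch(x, fs=sr, nperseg=4096)
    return freqs, np.sqrt(psd)

def match_curve(target, reference, sr, num_taps=2047):
    """Nudge the target's average spectrum toward the reference's."""
    freqs, ref_mag = average_spectrum(reference, sr)
    _, tgt_mag = average_spectrum(target, sr)

    # The ratio of the two curves is the correction EQ; clamp it to
    # +/-12 dB so one deep notch can't wreck the result.
    gain = np.clip(ref_mag / (tgt_mag + 1e-12),
                   10 ** (-12 / 20), 10 ** (12 / 20))

    # Build a linear-phase FIR filter that follows the correction curve
    # (it delays the output by num_taps // 2 samples), then apply it.
    fir = firwin2(num_taps, freqs / (sr / 2), gain)
    return lfilter(fir, [1.0], target)
```

Run on your mix and a commercial reference, this is essentially the automatic mode described above; Har-Bal’s manual workflow amounts to drawing the correction curve yourself.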

Now let’s take a look at the individual software…

So What Does Work?

Using your ears to compare your work to a well-mastered recording is a tried-and-true technique, but it shortens the learning process when you can actually compare curves visually and see what frequencies exhibit the greatest differences.

I’ve found a few reference comparison curves for Har-Bal that work well for certain types of music: Fatboy Slim for when dance mixes are too dull, “Ray of Light” for a house music-type low-end boost, Cirque du Soleil’s “Alegria” for rock music, and Gloria Estefan’s “Mi Tierra” for acoustic projects. On very rare occasions I use their curves, but when I do, they’re more like “presets” because they end up getting tweaked a lot. Automatic curve-stealing just doesn’t do it for me, but “save me 10 minutes by putting me in the ballpark” does.

But my main use for curve-analyzing software is for stealing from myself. After I mastered a music project for a soundtrack, one tune sounded a little better than the others – everything fell together just right. So, as an experiment, I subtly applied its response to some of the other tunes. The entire collection ended up sounding more consistent, but the differences between tunes remained intact – just as I’d hoped.

Another good use was when German musician Dr. Walker remixed one of my tunes for a compilation CD, but used a loop for which he couldn’t get legal clearance. Rather than give up, I created a similar loop that wasn’t a copy, but had a similar “vibe.” Yet it didn’t really do the job – until I applied the illegal loop’s response curve to my copy. Bingo! The timbral match was actually more important than the particular notes I played in terms of making the loop work with the rest of the tune.

This does produce a weird paradox, though: I used a piece of curve-stealing software to avoid stealing a piece of copyrighted material. I guess it’s all part of living in the 21st century.

To read the full detailed article please see:  Curves of Steal

June 9, 2010

Audio Mastering

Tom Volpicelli of The Mastering House answers the top 10 common questions about mastering.

What is mastering, and what is the role of the mastering engineer?


Mastering is essentially the step of audio production used to prepare mixes for the formats that are used for replication and distribution.  It is the culmination of the combined efforts of the producer, musicians, and engineers to realize the musical vision of the artist.  Each stage of the audio production process, from pre-production to mastering, builds on and depends on the one before it.  Mastering is the last opportunity to make any changes to positively affect the presentation of your music before it moves from a studio environment to the outside world.

It’s worth noting the differences between the roles of mixing and mastering engineers.  While the tools may be similar, the perspectives of mixing and mastering are very different. When mixing, the focus is on the internal balance of individually recorded tracks, and on effects used both sonically and creatively for a single piece of music.

An album cannot be heard in its entirety until the job of a mix engineer is completed. The mastering engineer picks up where the mix engineer leaves off. Mastering is geared toward creating the balance required to make the entire album cohesive. The mastering engineer is most concerned with overall sonic and translation issues.  A mastering engineer works with the client to determine proper spacing between songs and how songs will be ordered on the CD. The flow of an album must appeal to the listener; it should engage them and take them on a musical journey as determined by the artist. Any final edits will be addressed during the mastering process as well.

Finally, the role of the mastering engineer is to provide preparation and quality control of the physical media sent to the plant for replication.  This includes listening to the premaster CD to verify integrity, along with the more technical aspects such as encoding text, UPC/EAN and ISRC codes, checking for errors within the media, and providing any necessary documentation such as a PQ list.

Is mastering always necessary?

A writer’s words are not complete until the editor approves them. A painter’s work is not complete until it has been matted and framed.  A musician’s work requires the same treatment. Audio production should not be rushed, finished haphazardly or completed “just to get it out there”. A finished product should reflect all of the work of the artist, producers and engineers that carry that vision forward.  Even a “perfect” mix needs mastering to a degree. In this case, you want the mastering to be as transparent as possible so that the original sound is maintained while preparing it for the final media.

As mentioned earlier, it is difficult for a mixing engineer to know how an album will sound in its entirety while mixing an individual track. In some cases a given track may be perfect on its own.  However, when that track is placed within the context of an album, slight adjustments in level or frequency balance may be required.  Given the amount of music distributed online, an album needs to stand out from start to finish to be noticed in such a competitive market. If the final goal is to create a product that is ready to be played on the radio, distributed online, or sold as a physical product, it should be mastered.

Mastering helps say something about the professionalism of the artist, from the arrangement of certain styles of songs to the volume of the recording to the pacing of the tracks. If an artist is serious about their music, they should make sure that someone with experience signs off on the finished product.

What kind of improvements can be expected from mastering?

Mastering can help to achieve the correct balance, volume, and depth for a style of music. It can add clarity and punch to music, giving it more vitality.  The idea behind mastering is that a product will sound better after it is treated by the mastering engineer. The degree to which a mastering engineer can achieve this depends on the given mixes. In some cases there may be limitations or compromises that need to be made.

One limitation of mastering is the inability to restore severely distorted material. Distortion in a mix is like corrosion; once present, it cannot easily be removed and has permanently destroyed a part of the material.  While mastering can mask the effect of some types of distortion, it is essentially covering blemishes that should be addressed before the mastering stage. A common misconception is that mixes should be as “hot” as possible. With the advent of 24-bit digital technology there is no reason why mixes have to “go into the red.”

Most mastering engineers recommend a cushion of 6 to 10 dB, i.e., peaks between -6 and -10 dBFS, to help ensure that clipping does not take place and to allow room for processing.  In addition to peak level, the crest factor (peak-to-average ratio) is very important. While dynamic range can always easily be reduced, it is very difficult to undo the effects of over-compression or limiting.
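
Both figures are easy to measure before sending mixes off. Here’s a minimal sketch in Python, assuming NumPy and a float mix scaled to ±1.0:

```python
import numpy as np

def level_check(x):
    """Return peak (dBFS), RMS (dBFS), and crest factor (dB) for a
    float signal scaled to -1.0..1.0."""
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(np.square(x)))
    peak_db = 20 * np.log10(peak)
    rms_db = 20 * np.log10(rms)
    return peak_db, rms_db, peak_db - rms_db

# Example: a sine wave peaking at -6 dBFS has a crest factor of ~3 dB;
# real, uncompressed mixes are usually much "peakier" than that.
t = np.linspace(0.0, 1.0, 44100)
peak_db, rms_db, crest_db = level_check(0.5 * np.sin(2 * np.pi * 440 * t))
print(f"peak {peak_db:.1f} dBFS, RMS {rms_db:.1f} dBFS, crest {crest_db:.1f} dB")
```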

If the internal balance of a stereo mix is off, there may be compromises in the sound of the mastered track that will need to be made. For example, if cymbals or a vocal is very sibilant and bright while other parts of the mix are dark, it can be difficult to balance the overall sound in a way that enhances all elements.

In addition to frequency, levels between tracks may also be an issue. If the mastering engineer is given a stereo mix (as is usually the case), specific individual components of the mix cannot be completely isolated and processed separately.  While there are techniques such as de-essing, mid/side processing, or equalizing or compressing for a specific imbalance, the results will likely not be as good as with a mix that doesn’t have these issues, which lets the mastering engineer address the balance as a whole.

One method of getting around internal balance issues is to provide alternate mixes. Some examples are vocal up/down mixes, or mixes where one EQ is favored over another. Another method is supplying the mastering engineer with “stems,” or sub-mixes of the stereo track.  These might include a separate stereo mix of the vocals or instruments that, when summed together, are the same as the stereo mix minus any stereo bus processing.  In this case the mastering engineer is placed slightly in the role of a mix engineer and can make adjustments that wouldn’t be possible with a stereo mix alone. Another advantage of using stems is that alternate masters, such as radio edits and instrumental or vocal-only masters, can easily be created.

Another area where “fixing it in the mix” is better than “fixing it in mastering” is when dealing with the issue of noise. Mute automation on individual tracks should be used where there are noises during sections of a track that are not contributing to the mix.  Some examples: electric guitar hum/buzz on intros, outros, and breaks; bleed from headphones on the vocal track when the vocalist is not singing; and drummers laying down their sticks after cymbals have faded, but while other instruments are still playing at the end of a track.

Should you choose an engineer based on their “style”?

Ten different mastering engineers working in the same room with the same equipment will create ten totally different masters, each sounding great on its own.  If you ask those same engineers to go back and reproduce any given master, you are likely to get ten almost identical masters back.  While each individual mastering engineer has his own style, it is important that he is able to separate himself from his style when needed.  An engineer should never let his personal taste interfere with the goal of the artist he is working with. Again, this is where communication with the client is a crucial element.

A good mastering engineer should be well versed in a variety of different categories of music. In general, there is no reason why an engineer known for creating great Country albums cannot produce a great Rock album.  While an engineer’s work should be able to transcend musical genres, if a mastering engineer has a certain style that is appealing to you as the artist, you should consider working with him.  It is important that the engineer and the artist can communicate in a way that works well for both.

Which is more important, a technical background or musical one?

A mastering engineer should be well versed both technically and musically. The craft of the engineer is to be able to know good music and know how to make that music sound better.  Still, while a technical background is extremely important in the mastering world, that background should not interfere with the aesthetics.  Likewise, any personal feelings an engineer has about the stylistic choices of the music he is mastering should ultimately be discussed with the musician. It is because of this that an engineer’s musical background should not hinder his craft.

Given a technical background, some mastering engineers are capable of making modifications to equipment to create a more transparent sound, or to provide color according to their taste and needs.  Having a musical background, particularly in the area of pitch, allows an engineer to identify frequency issues relating to musical notes and to speak directly with the musician about those issues in musical terms.

An engineer should take care not to favor either background. While most engineers come from one or the other, their craft is in combining the two.  A mastering engineer should remain as objective as possible while still providing necessary feedback and insight from both a musical and a technical perspective.

To read the full detailed article see:  Audio Mastering
