Wednesday, August 16, 2017

10 Tips for using an audio compressor in the mixdown.




1. The best audio compressor is the one whose presence is not heard at all. (Apart from exceptions such as side-chain compression in EDM).


2. A compressor does not increase the signal level; it reduces peaks. Only the make-up gain raises the level afterwards.

3. The result: the loud sections of the music do not get louder, but the quiet ones do.



4. The compressor reduces the original dynamics.

5. Transients (fast attacks), for example of an acoustic guitar, are problematic. The transients are compressed first because they have a relatively high level. This reduces the peaks, but it also changes the original sound, because transients are very important to the character of an instrument.

6. Voices can be compressed heavily because they have very few transients. 12 dB of gain reduction and more (recommended ratio 4:1) is possible.

7. But with strong vocal compression, breaths and other background noises become much louder.

8. With the side-chain input, the compressor is controlled not by the input signal but by the signal present at the side-chain input.

9. For example, a synthesizer in a mix can automatically get quieter when the voice is applied to the side-chain input. This creates space for the voice and makes a mix lively.


10. In parallel compression, the compressed signal and the uncompressed signal are mixed together phase-locked. This way you can raise the low-level passages and keep the original transients. (Dry control)
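To make tips 2, 6 and 10 concrete, here is a minimal sketch of a static feed-forward compressor with make-up gain and a parallel (dry/wet) mix. It is my own illustration in Python/NumPy under simplified assumptions (a basic one-pole envelope follower; all function names are made up), not the algorithm of any particular unit:

```python
import numpy as np

def simple_compressor(x, sr, threshold_db=-20.0, ratio=4.0,
                      makeup_db=6.0, attack_ms=5.0, release_ms=100.0):
    """Static feed-forward compressor with make-up gain (illustration only)."""
    att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(x)
    level = 1e-9
    # Envelope follower. Feeding it a different signal (e.g. the voice)
    # instead of x turns this into the side-chain ducking of tips 8 and 9.
    for n, s in enumerate(np.abs(x)):
        coeff = att if s > level else rel
        level = coeff * level + (1.0 - coeff) * s
        env[n] = level
    env_db = 20.0 * np.log10(np.maximum(env, 1e-9))
    # Tip 2: above the threshold the gain is reduced; only the make-up
    # gain raises the overall level afterwards.
    over = np.maximum(env_db - threshold_db, 0.0)
    gain_db = makeup_db - over * (1.0 - 1.0 / ratio)
    return x * 10.0 ** (gain_db / 20.0)

def parallel_compress(x, sr, dry=0.7, wet=0.3):
    """Tip 10: mix the untouched signal with the compressed one,
    sample-aligned so no phase offset is introduced."""
    return dry * x + wet * simple_compressor(x, sr)
```

With ratio=4.0, a peak that overshoots the threshold by 12 dB is reduced by 9 dB, which matches the 4:1 behaviour described in tip 6.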

Stefan Noltemeyer  
www.mastering-online.com


Friday, July 28, 2017

Technical analysis of a produced song



Before I start mastering a song, I listen to it and analyze its acoustic and technical problems. I separate the overall sound into its components.

First, I check the low frequencies.
Are the basses generated by the bass, the bass drum, or both?
Are there any other instruments that sound below 100 Hz?
Is the low end OK, or should I use a high-pass filter above 30 Hz? (My spectrum analyzer also shows me this; a minimal filter sketch follows this list.)
Are there any problems between the bass and the bass drum?
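The 30 Hz high-pass mentioned above could look like this in code, a minimal sketch assuming Python with SciPy (the fourth-order Butterworth choice and the function name are my assumptions, not a fixed rule):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def highpass_low_end(x, sr, cutoff_hz=30.0, order=4):
    """Remove sub-bass rumble below ~30 Hz; whether to use it is a
    judgment call made per song, with the spectrum analyzer as a guide."""
    sos = butter(order, cutoff_hz, btype='highpass', fs=sr, output='sos')
    return sosfilt(sos, x)
```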

Next, I check the mid frequencies between 200 Hz and 3000 Hz (where the "music plays").
Is there an instrument or a voice that is too loud?
Is there a single frequency that is too loud?
Is there an instrument or a voice that should be louder?
Are the kick of the bass drum (the attack) and the snare drum powerful enough?

What about the presence range (between 3000 Hz and 8000 Hz)?
Which instruments are still involved in this frequency range?
Is the voice present enough?
Are there problems with "S" sounds in the voice?
Are there instruments and overtones that are too strong here?
Is the hi-hat too loud (a classic error in the mixdown)?
Is the snare drum still there?

Next, I check the high end above 8000 Hz.
Which instruments play "on top"?
(Again) Are the hi-hat, ride, and crash OK?

What about the total sound?
Does anything boom in the bass?
Does the title sound too sharp or too dull?
What happens in the side channel; are there phase cancellations (stereo information)?
What is the total level; has the sum already been compressed?
Do we need more loudness?
If I'm unsure, I listen to the song in comparison with other titles of the same genre.
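To back up the ears with numbers, here is a small sketch that reports the energy share of the frequency bands this checklist walks through (my own Python/SciPy illustration; the band limits follow the text above, everything else is an assumption):

```python
import numpy as np
from scipy.signal import welch

BANDS = {"low end":  (20, 100),
         "mids":     (200, 3000),
         "presence": (3000, 8000),
         "high end": (8000, 20000)}

def band_report(x, sr):
    """Rough energy share per band -- a numeric cross-check, not a verdict."""
    f, psd = welch(x, fs=sr, nperseg=8192)
    total = psd.sum()
    for name, (lo, hi) in BANDS.items():
        m = (f >= lo) & (f < hi)
        print(f"{name:8s} {lo:5d}-{hi:5d} Hz: {psd[m].sum() / total:6.1%}")
```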

Stefan Noltemeyer

Thursday, December 10, 2015

Linear and non-linear audio distortion


Essay from the book "Mastering" (translated by google)

Stefan Noltemeyer: www.mastering-online.com

Distortions

All components of a sound studio can be divided into four groups:
1. transducers (microphones, loudspeakers)
2. amplifiers (preamplifiers, power amplifiers, impedance converters, equalizers, compressors, limiters, effects units)
3. storage (analog and digital tape machines, computer hard disks)
4. cables

Except for digital storage, all of these components produce distortion to varying degrees.
Distortion inevitably arises in any kind of audio transmission.
In audio technology, distortion means the modification or manipulation of an original signal. Distortion may be caused intentionally or unintentionally.

One distinguishes between linear and non-linear distortions.

Linear distortions

Linear distortion can be divided into delay distortion and amplitude distortion. In contrast to non-linear distortion, linear distortion produces no additional frequencies or harmonics.

Delay distortion occurs in every transmission chain. A signal always arrives at the receiver later than it was sent. If the full frequency range is offset equally in time, the problem is mostly irrelevant. An exception is the latency problem when working with computers. The processing time a DSP (Digital Signal Processor) needs for audio data can reach several hundred milliseconds, depending on the processing power and the quality of the audio interface. When a musician listens to his instrument or his voice "through the computer" while recording, he will perceive the delay as disturbing.
In analog audio technology, problems arise when several signals with different delays are mixed together.
Delay distortion in which the delay is shorter than the period of a single oscillation is called phase distortion or phase shift.

If different frequency bands are delayed to very different degrees in a transmission chain, audible effects can result.
I distinguish two different groups of culprits.
On the one hand, individual components in the electronics produce delay distortion; on the other hand, delay distortion arises under certain conditions in acoustics.
In an amplifier stage, high and low frequencies take different amounts of time to pass through, owing to the characteristics of individual components. A well-known example is the so-called RC network, a combination of a capacitor and an electrical resistor. Such an RC network is, for example, the main building block of an equalizer. It shifts the phase angle between high and low tones within an audio signal. This is audible only in extreme cases; it becomes relevant when many small delay shifts add up. Details are described in the chapter "Equalizer".

On closer examination of the acoustic aspects of audio engineering, one often encounters phenomena that cause delay distortion.
Phase cancellations are disturbing especially with low-frequency signals.
As a graphic example, take two sine waves with a frequency of 50 Hz and equal amplitude, shifted by 10 milliseconds relative to one another. This corresponds exactly to a 180-degree phase shift. One wave is placed in the left channel of the stereo image, the other in the right. If you now switch the signal to mono, the two waves are added into one sum. The crest of one sine wave meets the trough of the other; in this case they cancel each other completely and the sum is zero. Silence!

Since practically no music consists of pure sine waves, and since a phase shift of exactly 180 degrees will practically never occur, this is a theoretical example. But with wavelengths of several meters at low frequencies (6.80 m at 50 Hz), you can easily imagine that the phenomenon of phase cancellation changes drastically if, for example, you move one meter further away from the sound source.
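The theoretical example is easy to reproduce. A minimal NumPy sketch of exactly this case, two 50 Hz sines 10 milliseconds apart, summed to mono:

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr                         # one second
left  = np.sin(2 * np.pi * 50 * t)             # 50 Hz sine, left channel
right = np.sin(2 * np.pi * 50 * (t - 0.010))   # same sine 10 ms later = 180 degrees
mono = 0.5 * (left + right)                    # mono sum
print(np.max(np.abs(mono)))                    # ~0.0 -- crest meets trough: silence
```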

When you record a sound source with two or more microphones,
phase cancellation occurs, because the sound needs different travel times to reach the different microphones. Interference arises when you mix the signals from the microphones together. Where the sound waves cancel, it is known as destructive interference; where they add up, it is called constructive interference. In both cases the original sound is distorted. In acoustics this is called a comb filter effect.

The frequency spectrum of an instrument, e.g. an acoustic guitar or a grand piano, picked up by two microphones, is sometimes strongly distorted by such a comb filter effect. The decisive factor is the volume balance of the two microphones relative to each other.
If they are identical, the effect is strongest.
If one microphone is, for example, one meter further away from the sound source than the other,
the result is a delay difference of about 0.003 seconds (1/340 of a second at a speed of sound of roughly 340 m/s).

An almost classic problem occurs when recording a piano or a grand piano. It arises as follows: it is common to use one microphone for the bass strings and one for the treble. Since both microphones pick up not only the range they are intended for but the full range of the grand piano, phase shifts occur due to the different travel times of the two signals. Because sound travels relatively slowly, even small differences in distance become a problem. To illustrate this, imagine a single bass string of a grand piano. When it is struck, the resulting sound is recorded by both microphones. These two microphones are at different distances from the sounding string. Suppose the microphone that is actually intended for the treble is located one meter further away from the string than the microphone for the bass. The sound then needs about 1/340 of a second longer to reach the treble microphone. In the mixdown, the signals from both microphones come together. In character they are almost identical, after all they picked up the same sound, but they are slightly time-shifted against each other. One might assume that panning one microphone to the left and the other to the right eliminates the problem. But if a listener sits exactly in the middle between the speakers reproducing the piano sound, he will at least be able to hear phase cancellation in the low notes. If you now switch this stereo signal to mono, the two slightly offset signals add together, and individual frequencies cancel. Compared with a recording of the same instrument made with only one microphone, there will be less bass. How serious the cancellations are depends on the volume ratio of the two signals: the greater the difference in volume, the smaller the problem. If both signals are approximately equal in level, comb filter effects arise. The human ear perceives such a comb filter effect as a changed tone.
It is known that the human ear is much less sensitive to delay or phase distortion than, for example, to amplitude distortion. But the ear can very well perceive the resulting errors; identifying them just takes a little practice.
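The comb filter itself can also be demonstrated numerically. In this sketch (my own illustration; white noise stands in for the instrument) a signal is mixed with an equal-level copy delayed by one meter of extra path, and the predicted notches appear at odd multiples of half the inverse delay:

```python
import numpy as np

sr = 48000
delay_s = 1.0 / 340.0                  # one meter of extra path at ~340 m/s
d = int(round(delay_s * sr))           # delay in samples (~141)
x = np.random.default_rng(0).standard_normal(sr)
mix = x.copy()
mix[d:] += x[:-d]                      # signal from the second microphone

spec = np.abs(np.fft.rfft(mix))
freqs = np.fft.rfftfreq(len(mix), 1.0 / sr)
for k in range(3):                     # notches at (2k+1)/(2*delay): 170, 510, 850 Hz
    notch = (2 * k + 1) / (2 * delay_s)
    b = np.argmin(np.abs(freqs - notch))
    print(f"{notch:5.0f} Hz: {20 * np.log10(spec[b] / spec.max()):6.1f} dB")
```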

To learn how phase cancellation affects a complex signal, you can experimentally reverse the polarity of one of your loudspeakers. The membrane of a loudspeaker vibrates around a resting point. If the polarity of one box is reversed, the membrane of that box moves outward on a given signal while the membrane of the other box moves inward. As a consequence, one box generates positive sound pressure while the other generates negative sound pressure.
This can be heard: one gets the impression that the bass reproduction has become considerably weaker.



In 1988 I began in a recording studio in Frankfurt as an assistant, coffee maker, pizza fetcher and studio technician. This studio was set up for audiophile sound recording. It had a big recording room with a grand piano and a control room with professionally measured studio acoustics (live end / dead end). We recorded and produced many jazz and rock bands there. When there were no sessions booked, I often used the studio for my own sessions and sound-engineering experiments. Much of my knowledge and experience I acquired there. When recording our grand piano I tried many variations, on the one hand to capture the sound of the instrument as well as possible, and on the other hand to minimize the phase shifts that result from using multiple microphones. Incidentally, this trained my ability to perceive even small phase shifts.
Apart from the reduced bass reproduction, a phase shift leaves an indefinable, bizarre feeling in the head. A friend described it as follows: it feels like someone rubbing a sponge over your scalp.
Recently I discovered a plug-in by the company Universal Audio named "Little Labs". It offers the possibility of rotating the phase of a signal continuously from 0 to 180 degrees, and it can delay the signal continuously from 0 to 4 milliseconds. With this tool you can compensate for the phase problems of a stereo signal effectively. As a check, I use a phase correlation meter: the less the indicator moves to the left of center, the better the result.


Possible technical causes of phase cancellation:
I have often soldered XLR cables. An XLR connector has three numbered terminals: no. 1 is the connection for the shield, no. 2 for the positive signal wire, no. 3 for the negative signal wire. If I swap terminals 2 and 3 at one of the two ends, I create, in combination with a second (properly wired) cable, a 180-degree phase cancellation.
Modulation effects such as chorus produce phase shifts that can cause cancellations of low frequencies. That is why a chorus on a bass is critical.

Amplitude distortion

Amplitude distortion refers to all changes to an original signal in which the amplitudes of individual frequencies are amplified or attenuated. If, for example, the high frequencies are raised, the sound will be more brilliant than the original. If the low frequencies are raised, the sound will feel "more powerful" than the original.
With an equalizer, amplitude distortion is brought about deliberately in order to shape a sound in a specific way.
Translated literally, the word equalizer means "leveler".
Originally an equalizer was used in broadcast technology to compensate for unwanted amplitude distortion in a transmission chain; the transmitted sound was to be "levelled" back to the original by means of an equalizer.
There are a number of sources of amplitude distortion.
Every cable produces amplitude distortion through the electrical resistance of its copper wire. If all frequencies are reduced equally, you will perceive the change only as a difference in volume. If some frequencies are reduced more than others, you perceive a change in tone. The problem is well known with guitar cables: since the output of an electric guitar is high-impedance, the cable acts as a filter that reduces high frequencies.

Amplitude distortion also arises because an amplifier does not amplify all frequencies of the audio spectrum exactly equally. This is relevant especially at very low and very high frequencies. A quality characteristic of an amplifier is its frequency response (20-20,000 Hz, for example).
This means that the amplifier is capable of transferring all frequencies between 20 and 20,000 Hz at (almost) the same volume.
Relatively strong unwanted amplitude distortion arises in loudspeakers. In most transmission chains the loudspeaker is the "weakest" element, because it causes the most massive linear distortion.

Non-linear distortion
If a sound contains non-linear distortion, overtones have been added to the original, intentionally or unintentionally. Usually these are integer multiples of the fundamental frequency. The level ratio of the unwanted overtones to the fundamental is called the distortion factor (THD). The louder the harmonics relative to the fundamental, the higher the THD. These distortions are caused by the non-linear characteristic curves of the active components (transistors, ICs, amplifier tubes) of an amplifier. These are the non-linear distortions that are referred to in common parlance simply as "distortion". Given the high quality of today's electronic components, one can assume that this kind of distortion is only perceived when an amplifier is massively overloaded. The heat resulting from the overload also plays a role, because temperature often influences the characteristic curve of an active component.

The most famous use of non-linear distortion is found in the world of guitarists. What is usually unintentional is exactly what a guitarist is looking for. Originally, this distortion arose only when you set a guitar amplifier to maximum gain and thereby overloaded it. The guitarist not only enjoys the newfound overtones; he also uses the fact that the sustain is extended by the distortion, or in other words, that the note rings longer before it stops. Since this effect could originally be produced only at brutal volume, the industry has been building devices since the 60s that generate this or a similar distortion at low volume. They are called Fuzz, Tube Screamer, Overdrive or Distortion.
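How an overdriven stage adds harmonics can be sketched with a soft-clipping curve. tanh is a common textbook model for soft clipping, not the circuit of any particular amp or pedal, so treat this as an illustration of the principle:

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)        # clean 440 Hz tone

drive = 5.0
y = np.tanh(drive * x)                 # soft clipping adds (mostly odd) harmonics

spec = np.abs(np.fft.rfft(y))          # 1 Hz resolution: bin index == frequency
fund = spec[440]
harmonics = [spec[440 * k] for k in range(2, 8)]
thd = np.sqrt(sum(h * h for h in harmonics)) / fund
print(f"THD: {100 * thd:.1f}%")        # rises with drive, like an overdriven amp
```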

Monday, August 3, 2015

10 Tips for the perfect mix




  1. Even though the mix is almost finished during the process of producing a song, it makes sense to start the mixdown from the very beginning. Switch off all tracks and bypass all effects and inserts such as compressors and equalizers. Start with the bass drum and adjust its level to approx. -10 dBFS.
  2. Continue with the snare drum and the hi-hat; the bass follows... (These are the instruments that generate the highest levels.) Next come the harmony instruments, keyboards and guitars. After that you can open the main voice to check whether any sounds cover the vocals.
  3. Divide the mix into subgroups: one group for drums, one for keyboards, one for the voices, and so on... You can edit the overall sound of each group, and you get better control of the total level.
  4. When working on each single track, insert the equalizer first, followed by the compressor. For insert effects like phaser, flanger or delays, you can switch in a second compressor after them.
  5. When filtering individual tracks, it may happen that an instrument does not sound great by itself but works perfectly in the mixdown. So when setting an equalizer, also listen to the combination of all instruments, not only the solo.
  6. Acoustic instruments, voices and guitars absolutely need compression. It limits the dynamic range. A ratio of 4:1 with a gain reduction of 8-12 dB is useful for the voice. Watch the low frequencies when compressing the bass (it easily gets lost); acoustic guitars tend to pump slightly under compression.
  7. Every instrument, every sound gets its assigned position. Imagine a three-dimensional coordinate system: left and right (panorama), up and down (treble and bass), and front and rear, determined by reverb and delay. The fourth dimension is time. Each instrument gets its place, and automation allows changing the level of instruments in the flow of the song.
  8. Whenever multiple instruments sound in the same or a similar frequency range simultaneously, they mask each other. One classic example: the snare drum and the voice (depending on the pitch); the louder the snare, the quieter the voice sounds.
  9. Effects are important. They are the salt in the soup, but each sound has its moment of performance. Delays can be programmed and switched on and off. It's easy to vary the send level of a reverb effect. Too much of a good thing becomes too salty to enjoy.
  10. Never mix into a limiter inserted on the sum. It falsifies the instrument balance. Maximizing or mastering is a separate session. It's no problem to raise a low-level mixdown, but if the level is too high, you get in trouble (distortion). Never do mixdown and mastering at once. It is useful to take a break of 24 hours before you do the mastering (a small level check like the sketch below helps), or to consult a specialist like...
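As a numeric companion to tips 1 and 10, a minimal sketch for checking levels (my own Python/NumPy illustration; the -3 dBFS warning threshold is an assumption, not a standard):

```python
import numpy as np

def peak_dbfs(x):
    """Peak level of a float signal (1.0 = full scale) in dBFS."""
    return 20.0 * np.log10(np.max(np.abs(x)) + 1e-12)

def check_mix(mix):
    p = peak_dbfs(mix)
    print(f"peak: {p:.1f} dBFS")       # tip 1: start the bass drum around -10 dBFS
    if p > -3.0:                       # tip 10: leave headroom for the mastering
        print("hot mix -- lower the sum instead of hiding it behind a limiter")
```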

Stefan Noltemeyer: www.mastering-online.com
E-Mail: info@mastering-online.com



Thursday, November 25, 2010

MS-editing in the mastering process


In the same way that you can split a stereo signal into a left and a right signal, you can split it into a mid and a side signal. The mid signal is the sum of left and right; the side signal is the difference between left and right.

When we split the signal this way, we have additional editing possibilities in mastering. We can address individual instruments in the stereo panorama, because it is possible to edit the mid signal and the side signal separately. This is useful, for example, for editing the hi-hat or a doubled (left/right) guitar.

M = (L + R) / 2
S = (L - R) / 2
L = M + S
R = M - S
(The factor 1/2 keeps the levels consistent, so decoding returns exactly the original channels.)

M= mid signal
S= side signal
L= left channel
R= right channel
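In code, encoding and decoding look like this, a minimal NumPy sketch of the formulas above (the widening example is my own illustration):

```python
import numpy as np

def lr_to_ms(left, right):
    """Encode: the factor 1/2 keeps levels consistent so decoding is exact."""
    return 0.5 * (left + right), 0.5 * (left - right)

def ms_to_lr(mid, side):
    """Decode back to left/right."""
    return mid + side, mid - side

def widen(left, right, amount=1.2):
    """Example: edit only the side signal, e.g. to widen a doubled guitar."""
    mid, side = lr_to_ms(left, right)
    return ms_to_lr(mid, amount * side)
```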


Stefan Noltemeyer :  www.mastering-online.com
E-Mail: Info@mastering-online.com

Monday, October 5, 2009

The finesse difference with the Tube-Tech SMC 2B and Studer A 80



There are two devices which are significant in creating the sound of www.mastering-online.com: the analogue multiband compressor with tube technology, the Tube-Tech SMC 2B, and the analogue tape recorder Studer A 80.
In the following notes I describe what these machines do and how we use them.
With the multiband compressor it's possible to compress different frequency ranges separately. This is very useful because, in a "normal" full-band compressor, the frequencies with the highest level determine the compression, and these are mostly the frequencies from 40 Hz to 400 Hz. So the compression of the complete spectrum is dictated by these low frequencies.
With the Tube-Tech I have the choice of compressing three different frequency bands. If a song has a strong bass, it's possible to apply the compression only to these lower frequencies; the vocals or guitars remain untouched.
The increase in volume is realised by the Tube-Tech's tube gain stages. There is a separate gain control for each band (bass, middle and treble). This is nothing other than a three-band equalizer. So I use this machine not only to increase loudness; it is a dynamic equalizer as well. The equalization starts when a defined threshold level is reached.
So it is possible to raise the level of separate instrument groups, which makes it possible to influence the balance of the original mix enormously. Naturally, this is only done when necessary. Out of respect for the original sound of the song, we keep its original character. The goal of mastering is to feature what's best in the music.
So I use the Tube-Tech primarily to bring back individual instruments or vocals that have been lost in the mix. As a result you get a "tidy" total sound.
With small amounts of compression of 2-3 dB and a tube-coloured make-up gain, I shape a unique, homogeneously warm sound.
-------------------------------------------------------------------------------------

Using our Studer A 80 tape recorder gives the mastering process a unique direction.
The mastered song is recorded on magnetic tape and played back just 1/10 of a second later.
The specific character is formed by mapping the audio signal into magnetic energy and converting it back.
Subjectively, this creates the impression that the digital bits are joined together into a homogeneous whole. My impression is that it also improves the front-to-back depth of the stereo image.
So this is an additive process that brings the analogue sound into the digital world.

Tape saturation is a popular topic in analog tape recording.
You get a compression effect when you record at a very high level onto magnetic tape.
The magnetic flux has a specific limit. If you cross this level, the magnetic flux no longer rises at the same rate as the input level. This is how you get the famous tape saturation.
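A common textbook model for this behaviour is a soft-saturation curve such as tanh. The sketch below is my own illustration of the principle, not a measurement of the Studer A 80:

```python
import numpy as np

def tape_saturation(x, drive=1.5):
    """Below the flux limit the curve is nearly linear;
    near the limit the output flattens -- the compression effect."""
    return np.tanh(drive * x)

# Doubling the input level no longer doubles the output level:
for level in (0.25, 0.5, 1.0):
    print(f"in {level:4.2f} -> out {tape_saturation(level):4.2f}")
# in 0.25 -> out 0.36, in 0.50 -> out 0.64, in 1.00 -> out 0.91
```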

Stefan Noltemeyer :  www.mastering-online.com
E-Mail: Info@mastering-online.com

Thursday, December 13, 2007


Equalizer test and comparison

After purchasing an Avalon AD 2055, I thought it would be useful to present my observations of this equalizer and the other equalizers that we have at our disposal here at mastering-online.


Software EQs:

Emagic 7.2: Channel EQ, Linear Phase EQ, Match EQ, Fat EQ, Silver EQ, DJ EQ
UAD Plug-Ins: Cambridge EQ, EX-1, Pultec EQ
Izotope: Ozone 3

Digital Hardware:

EMT 248, Drawmer Masterflow, TC Finalizer

Analog Hardware:

Klein Hummel UE 1000, Tube Tech SMC2B, SPL PassEQ, Avalon AD 2055





In keeping with the demands of a modern, state-of-the-art mastering studio, I purchased the Avalon AD 2055 and tested it against the digital equalizers that are predominantly used here at the studio.

The Avalon has a passive shelf EQ filter for the bass (18 Hz-450 Hz) and a passive high shelf EQ filter (1.5 kHz-25 kHz). These filters can be switched to parametric operation, which furthers the Avalon's versatility. In addition there are two fully parametric mid-band EQs, from 35 Hz to 450 Hz and from 160 Hz to 2 kHz, with variable frequency selection that can be switched to a multiple of ten. With this setup one can reach any necessary frequency, again showing the flexibility of this equalizer.

One point of the Avalon AD 2055 that I do not like: there is no stereo link, making it necessary to tweak the left and right channels separately.

I like this equalizer not only for its fantastic filters but also for its user-friendly control knobs, which almost any tech would rather turn than click and drag with a mouse.


A question that I frequently get (and probably will continue getting) is “can I describe the sound of a particular equalizer?” Well, first and foremost the sound of an equalizer depends primarily on the music that is being worked on, and nothing more. The equalizer itself has no sound (unless I let it fall to the floor).

Now if I lift the highs with a wide-band filter, I will be raising the highs that are present in this particular song. What I hear has to do with what is happening in the song, and not necessarily with the quality of the equalizer (unless it is an equalizer of very low quality that distorts the signal and adds hissing noise, something that in the digital realm is as good as nonexistent).
Dwelling on the subject of "different sounding equalizers", I found it necessary to do a comparison test. First I took a piece of music that sounded muffled. I then adjusted all of the equalizers (Cambridge - Universal Audio plug-in, Pultec - Universal Audio plug-in, Izotope Ozone 3, EMT 248, Channel EQ - Emagic 7.2 plug-in, K&H UE 1000, SPL PassEQ, Avalon AD 2055) to the same setting, a 6 dB lift at 4 kHz, and then matched all volume levels.
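For reference, this is roughly the kind of computation a digital peaking filter performs for this test setting (+6 dB at 4 kHz). A minimal sketch using the well-known RBJ audio EQ cookbook biquad; it is my generic illustration, since none of the plug-ins above discloses its exact algorithm:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, sr, f0=4000.0, gain_db=6.0, q=0.7):
    """Biquad peaking filter after the RBJ audio EQ cookbook."""
    a = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return lfilter(b / den[0], den / den[0], x)
```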

For monitors I used my Genelec 1031A with Subwoofer 1091A.

The Avalon stood up to and surpassed all of my expectations, and all of the digital equalizers sounded weak in comparison. Of the digital equalizers, the Izotope fared best and the UAD Cambridge EQ sounded the worst.

To describe what I heard: with the digital filters I could hear that it was merely a mathematical computation that lifted the highs at 4 kHz, but the sound remained muffled.

On the other hand the Avalon is different in that the music begins to live and my ears are exposed to what was once hidden, truly enriching the sound of the music.

The other two analog contenders had their difficulties competing with the Avalon. I had the feeling that the UE 1000's shelf filter is not steep enough, leaving the sound yet again muffled. The SPL has a similar problem: when raising the gain, phasing occurs due to an R-C link.

So now I am looking forward to working daily with my new main squeeze, the Avalon AD 2055 (and no, Avalon is not paying me for the plug).

All this is not to say that before the Avalon we could not master a recording well. I have been using the EMT 248 for over 10 years with outstanding results. Imperative to making a good master is the technician doing the mastering.

If you are trying to achieve the technical maximum, you must really stretch yourself to the limit, and that last 5% will be the toughest. Good equipment helps.

If you have any suggestions or questions please do not hesitate to write me at info@mastering-online.com



Stefan Noltemeyer :  www.mastering-online.com
E-Mail: Info@mastering-online.com