LKFS and Audio Normalization in Premiere Pro



The project is a documentary with dialogue, backing music, narration, and occasional SFX. Here is the issue: I mixed the levels in the sequence so everything sounded well blended and fell within -24 LKFS, then adjusted the master volume with a hard limiter and used Loudness Radar to target -15.0 LKFS. On the visual side, I converted some clips in Premiere Pro into After Effects compositions to add motion graphics.

Audio normalization boosts your audio to a target level by altering the amplitude of the entire recording by the same amount, while at the same time ensuring that the peak won't exceed 0 dB, avoiding clipping and distortion.
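
In code, that "same amount" is a single gain factor computed from the current peak. Here is a minimal numpy sketch, assuming float samples in the -1..1 range; the function name and -3 dB default are illustrative, not taken from any particular editor:

```python
import numpy as np

def peak_normalize(samples: np.ndarray, target_db: float = -3.0) -> np.ndarray:
    """Scale a whole recording so its loudest sample lands on target_db (dBFS)."""
    peak = np.max(np.abs(samples))         # current peak, 0..1 for float audio
    if peak == 0:
        return samples                     # silence: nothing to normalize
    peak_db = 20 * np.log10(peak)          # convert the linear peak to dBFS
    gain_db = target_db - peak_db          # one constant gain for every sample
    return samples * 10 ** (gain_db / 20)  # apply the same gain throughout
```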

Now this might sound abstract. Let's try another way to get the gist.

We measure things in real life from the ground up, like how tall you are or how many floors a building has. In digital audio, however, things are measured from the ceiling down. The ceiling is 0 dB; below it you have -1 dB, -2 dB, and so on.

Most audio editing software comes with built-in normalizing tools, with which you can adjust the volume to a standard level, ensuring that the loudest part won't hit the ceiling.

Here is a screenshot of sound waves before normalizing:

After normalizing, it looks like this:

Abbreviations you will meet in this post:

dB: short for decibel, the unit in which an audio signal's level is measured.

dBFS: Decibels relative to Full Scale. Full Scale = 0 dB; you can think of it as the ceiling of the room. Anything that tries to go above the ceiling gets chopped off, resulting in audio distortion or clipping. dBFS is usually written simply as dB, as in 0 dB, -3 dB, -10 dB, etc.

Rule of thumb: audio levels must not exceed 0 dB before exporting. If normalization is your last step you can technically set it to 0 dB, though -3 dB to -6 dB is recommended.

What Does Normalizing Audio Do

Why do you need to normalize audio, and what does this function actually do? There are two main situations where you want to normalize:

1. Boost volume to the maximum without clipping:

  • This is the most common case when you have a single audio clip. Normalizing increases its volume proportionally, right up to 0 dB (-6 dB to -3 dB is recommended), resulting in a louder sound without changing the dynamic range of the original file.
  • It doesn't have to be the maximum, though. You can boost the volume to any target value, as long as it sounds louder while not exceeding 0 dB, thus avoiding audio clipping.

2. Match volumes of multiple audio files

  • When you merge several audio files into one, you might find that each was recorded at a different loudness. It can be very jarring for the audience when the sound suddenly jumps up or down. With the normalize tool, you can set the peak level of every clip to an identical value, -3 dB for instance.
  • A step further: say your sound design has both voiceovers and intro/outro music. You can normalize the voiceovers to -3 dB and the background music (intros & outros) to another level below -3 dB, as sketched below.
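
A sketch of that two-target setup; the function is mine, and random noise stands in for real recordings:

```python
import numpy as np

def normalize_to(clip: np.ndarray, target_db: float) -> np.ndarray:
    """Scale one clip so its peak lands exactly on target_db (dBFS)."""
    return clip * (10 ** (target_db / 20) / np.max(np.abs(clip)))

# Placeholder clips standing in for real recordings:
voiceover = np.random.uniform(-0.2, 0.2, 48000)
music_bed = np.random.uniform(-0.8, 0.8, 48000)

voiceover = normalize_to(voiceover, -3.0)    # speech peaks at -3 dB
music_bed = normalize_to(music_bed, -12.0)   # intros & outros tucked below the voice
```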

For beginners, this could be the end of the story, as normalizing takes only a few clicks in any audio editor. In reality, though, audio files come with all sorts of problems. For instance, a file might be soft overall except for one sudden spike. If you normalize according to the peak volume, there is little room left to boost the soft material. In that case you need a limiter first, and the normalize tool second. We cover this in more detail later in this post.

How to Normalize Audio in Premiere Pro

As you may already know, the quickest way to adjust volume is to drag the rubber band inside the audio clip up and down. The problem is that this can push a peak above 0 dB, and that's where normalizing comes in.

To normalize audio in Premiere Pro:

Step 1. Select audio clips in Premiere Pro.

Step 2. Right-click > Audio Gain (or simply press G on your keyboard).

Step 3. Select Normalize Max Peak to or Normalize All Peaks to, depending on your situation. See the detailed explanation below.

If you only have one audio clip, there is no difference between Normalize Max Peak and Normalize All Peaks. The two options matter when you have several audio clips in the timeline. Since audio normalization applies a constant amount of gain, it won't change the dynamic range of your audio; the gain adjustment is proportional. If you select:

  • Normalize Max Peak to. This finds the loudest peak among all your clips, boosts it to -3 dB (or whatever value you set), and alters the rest of the clips by the same amount of gain. To illustrate: if your max peak used to be -12.9 dB and the value you set is -3 dB, gain is adjusted by +9.9 dB for all the clips. You don't need to do the math, though; just set the value to -3 dB and Premiere Pro figures out the rest.

As you can see, this method shares the dilemma we discussed earlier. If almost all the clips sound soft, with only a few sudden spikes, there won't be much gain applied to the overall audio level. That's why we have another option:

  • Normalize All Peaks to. This finds the loudest peak in each clip and boosts each clip by a different amount of gain (the same amount within each clip, of course) so that every clip's peak reaches the specified decibel value. Say clip I has its loudest peak at -5 dB, clip II at -6 dB, and clip III at -7 dB; setting Normalize All Peaks to -3 dB alters the gain of clip I by +2 dB, clip II by +3 dB, and clip III by +4 dB, so that all the clips peak at -3 dB.
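
To make the two options concrete, here is a hedged numpy sketch of both behaviors. This is my own illustration of the logic, not Premiere Pro's actual code, and it assumes float clips with non-zero peaks:

```python
import numpy as np

def normalize_max_peak(clips: list[np.ndarray], target_db: float = -3.0):
    """'Normalize Max Peak to' behavior: one shared gain for every clip,
    derived from the loudest peak found across all of them."""
    loudest = max(np.max(np.abs(c)) for c in clips)
    gain = 10 ** (target_db / 20) / loudest       # single shared gain factor
    return [c * gain for c in clips]

def normalize_all_peaks(clips: list[np.ndarray], target_db: float = -3.0):
    """'Normalize All Peaks to' behavior: each clip gets its own gain,
    so every clip's individual peak lands on the target."""
    return [c * (10 ** (target_db / 20) / np.max(np.abs(c))) for c in clips]
```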

How to Normalize Audio in Adobe Audition

The idea is the same as normalizing audio in Premiere Pro. You can create new files directly in Adobe Audition, or use Dynamic Link to open a clip from Premiere Pro in Audition.

Step 1. Launch Adobe Audition and create a new audio file.

Step 2. Record your audio file.

Step 3. Go to Window > Amplitude Statistics.

Here you can read off statistics such as Peak Amplitude, True Peak Amplitude, and RMS Amplitude. Use these as a reference before choosing a value.

Step 4. Click Effects > Amplitude and Compression > Normalize (process)…

Step 5. Enter the value to Normalize To. Tick dB if the percentage option confuses you.

How to Normalize Audio in Audacity

As the most popular free audio editor, Audacity boasts robust features that you would otherwise pay for in commercial tools. You can use Audacity's Normalize effect to boost volume without changing the dynamic range.

When shouldn't you use normalization in Audacity?

If you have multiple tracks whose differences in peak level are deliberate, it is a bad idea to normalize any of them. To keep the proportional balance between them, the better approach is to select all the clips and Amplify them by the same amount of gain.

Assuming you have already imported or recorded audio in Audacity, follow these steps:

Step 1. Go to Effect > Normalize in the menu.

Step 2. Tick "Normalize peak amplitude to" and enter the target value (-3 dB to -6 dB recommended).

Step 3. You can click the Preview option to listen to a 6-second playback.

For instance, if you set the value to -3 dB, the loudest part is guaranteed not to go beyond -3.0 dB. Without such a ceiling, your audio might get chopped off, resulting in unpleasant clipping and distortion.

This is the simplest way to understand normalization. However, things can get more complex, and it has to do with how you measure loudness.

Understanding Normalization: How We Measure Volume

Back to the example of sudden spikes in an audio file: imagine a recording of soft piano with occasional drum hits. With peak normalization, the drum hits leave little headroom to boost toward -3 dB, and the rest of the instruments may stay too soft. If the audio is measured by its average loudness instead, you can boost those soft passages too. That's why we have RMS (root mean square), a mathematical calculation that measures the average level of a signal over a period of time, as opposed to peak measurement.

This example shows that when dealing with normalization, we have to distinguish between two ways of measuring loudness:

  • Peak volume measurement
  • RMS volume measurement
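
A few lines of numpy make the contrast tangible; the spike-in-a-soft-recording numbers below are illustrative:

```python
import numpy as np

def peak_dbfs(x: np.ndarray) -> float:
    return 20 * np.log10(np.max(np.abs(x)))

def rms_dbfs(x: np.ndarray) -> float:
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# A quiet sine with one loud spike: the peak says "loud", the RMS says "soft".
t = np.linspace(0, 1, 48000, endpoint=False)
audio = 0.05 * np.sin(2 * np.pi * 440 * t)   # soft tone peaking around -26 dBFS
audio[24000] = 0.7                           # single drum-hit-like spike, ~ -3 dBFS
print(peak_dbfs(audio))   # ≈ -3.1 dB: dominated by the spike
print(rms_dbfs(audio))    # ≈ -29 dB: reflects the overall softness
```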

Our brain judges loudness in terms of overall levels, not peak levels. How it perceives loudness is complex, and any mathematical calculation is only an approximation of that perception, with its own pitfalls and flaws. Organizations such as the EBU and ITU therefore introduced additional factors into the loudness-analysis algorithms, and today we have LUFS (Loudness Units relative to Full Scale) and LKFS (Loudness, K-weighted, relative to Full Scale).

Also read: What Is LUFS, and Why Should I Care >>

That momentary peak is also why normalization alone can't always achieve the desired result, and why you will end up using compressors and limiters. They are out of the scope of this post, but the basic idea can be illustrated by the following example:

Suppose our audio file is:

  • RMS -19dB
  • Peak -8dB

Now we want to boost its volume. Since the peak cannot exceed -3 dB (TV standard):

  • we can normalize peak to -3dB

And since normalization doesn't change the dynamic range:

  • RMS would be -14 dB (the same +5 dB of gain applied across the board).

With compressors and limiters, you can keep the peak at -3 dB while bringing the RMS up louder than -14 dB. That's what normalization can't do.
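
Spelled out as a quick check (pure arithmetic, no library needed):

```python
peak, rms = -8.0, -19.0          # dB values from the example above
target_peak = -3.0               # TV-standard peak ceiling
gain = target_peak - peak        # +5 dB, applied uniformly to the whole file
print(peak + gain, rms + gain)   # -3.0 -14.0: the dynamic range is unchanged
```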

Normalization Cheat Sheet for Beginners

How your music is balanced has more to do with how you want the production to sound, given your taste and skills. Still, you can follow what most people do in the industry, and that would be:

  • Music RMS = -16dB (peak up to -3dB).
  • Voice/Speech RMS = -12dB (peak up to -3dB).

Here are other value sets for your reference:

  • Classical CDs in the '90s averaged around -21 dB RMS.
  • Most Hollywood movie productions settled on -20 dB or -24 LKFS for the final mix.
  • Audio distortion or clipping can occur when the audio reaches -8 dB on a mobile phone speaker.
  • Spotify streams at -14 LUFS integrated (I), with peaks up to -1 dBTP.
  • Apple Music = -16 LUFS.
  • YouTube = -13 LUFS.
  • TIDAL = -16 LUFS.
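
If you want to hit one of these streaming targets programmatically, the open-source pyloudnorm library implements the BS.1770 measurement. A minimal sketch, assuming `pip install pyloudnorm soundfile` and a placeholder file name:

```python
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("mix.wav")              # float samples from any WAV file
meter = pyln.Meter(rate)                     # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)   # integrated loudness in LUFS
# Turn the whole file up or down to a streaming target, e.g. Spotify's -14 LUFS:
normalized = pyln.normalize.loudness(data, loudness, -14.0)
sf.write("mix_-14LUFS.wav", normalized, rate)
```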

Loudness Myths, Debunked

There are plenty of myths, rumors, and misconceptions flying around when it comes to mixing, mastering and, specifically, loudness. Some of these myths need debunking. Shane Berry provides the facts.

Myth #1: Loudness is Measured Using a Standard Called LKFS, LUFS and R128.

Fact #1: LUFS and LKFS are reference units; R128 is a standard.

As of this writing, the current loudness standards are based on a document called ITU-R BS.1770-4, a recommendation by the International Telecommunication Union on the implementation of a series of algorithms that measure perceived loudness and true peak levels.

The title of the paper is literally “Algorithms to measure audio program loudness and true-peak audio level”.

The EBU R128 is a document (among several others) outlining the European response to that recommendation.

The ATSC A/85 in the US and the TR-B32 in Japan are similar documents/standards, all in close compliance with ITU-R BS.1770-4 with minor differences.

So, EBU R128 is not equivalent to LUFS or LKFS. That would be like saying decibels are the same as the manual explaining them.

LUFS and LKFS are a new reference unit of loudness measurement, but they are not the standard itself.

Myth #2: LU, LKFS & LUFS Measure Different Things.

FACT #2: LUFS and LKFS both mean Loudness Units referenced to digital Full Scale (dBFS), measured with K-weighting. (For more on K-weighting, see Myth #4.) As of 2016, LKFS and LUFS are exactly the same thing.

The Loudness Unit (LU) is equivalent to 1dB—that is, an increase (or decrease) of one LU is the same as raising or lowering by 1dB.

Here's a bonus myth debunked: LUs are NOT louder than dBs!

On an EBU R128 compliant loudness meter, a stereo -18dBFS sine tone at 1kHz measures -18 LUFS.

On an EBU R128-compliant loudness meter, the scale can be absolute or relative, meaning that on the meter itself you can either set a specific target level to equal 0 LU or measure directly in LUFS.

On an "EBU Mode" loudness meter, 0 LU = -23 LUFS (relative scale), or you can set it so that the meter reads -23 LUFS as -23 LUFS (absolute scale).

Here is a stereo -23 dBFS reference sine tone at 1kHz being measured by an EBU Mode meter on a relative scale reading -0.1 LU:

Here is the same stereo -23 dBFS reference sine tone at 1kHz being measured by an EBU Mode meter on an absolute scale reading -23.1 LUFS.

This is not as confusing as it first seems—on VU meters 0 can be calibrated to any desired reference level too, but typically 0VU is equal to +4dBu, which is equal to -20dBFS.
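
The relative/absolute relationship is nothing more than an offset; a tiny sketch (the function name is mine):

```python
def lufs_to_lu(measured_lufs: float, reference_lufs: float = -23.0) -> float:
    """Relative-scale reading: 0 LU sits at the reference (EBU Mode defaults to -23 LUFS)."""
    return measured_lufs - reference_lufs

print(lufs_to_lu(-23.1))   # -0.1 LU, matching the relative-scale meter reading above
print(lufs_to_lu(-18.0))   # +5.0 LU above the reference
```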

Myth #3: The new loudness standards are only for TV and post production.

Fact #3: Well, yes and no. It is true that the documents described above primarily outline broadcast standards (that's what the "BS" in ITU-R BS.1770-4 refers to), but there is growing evidence that YouTube, iTunes, and other major online music streamers are implementing some kind of loudness averaging.

They are not necessarily adhering to any of the broadcast standards, though; YouTube seems to be normalizing audio on some official videos to between -14 and -12 LUFS...

... and iTunes' "Sound Check" feature appears to level audio to around -16 LUFS.

Radio has yet to get on board with the standards, but when it does, the need for any musician, bedroom producer, mix engineer, or mastering engineer to maximize loudness via brickwall limiting, or to mix to arbitrary peak levels, will come to an end. (See Myth #5 and the Conclusion.)

Myth #4: Loudness Normalization will add more processing to my track and change it.

Fact #4: The loudness algorithms measure audio and adjust overall gain accordingly; they don't process it.

Loudness normalization uses an EQ curve (designated K-weighting) that closely resembles how the human ear perceives loudness. It then measures the average level of the entire "program material", ignoring levels below a certain threshold, and calculates a value called the integrated loudness level.

This integrated level is then used to determine the overall loudness of the material, and the level of the whole program is turned up or down to comply with the various loudness standards mentioned above.
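
Once a meter has produced that integrated value, the "normalization" itself is a single static gain change. A sketch of just that final step (the measurement, with K-weighting and gating, is what a BS.1770 meter supplies; the function here is illustrative):

```python
import numpy as np

def loudness_normalize(audio: np.ndarray, integrated_lufs: float,
                       target_lufs: float = -23.0) -> np.ndarray:
    """Apply one constant gain so the measured integrated loudness hits the target.
    No compression, no limiting: the waveform shape is untouched."""
    gain_db = target_lufs - integrated_lufs
    return audio * 10 ** (gain_db / 20)
```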

Furthermore, don’t confuse Loudness Normalization with Peak Normalization.

With peak normalization, an audio file's total gain is raised by a specified amount (usually so the highest peak hits 0 dBFS), based only on the highest measured peak in the audio.

Here's a concrete example: you have an audio file in which two characters are talking, and they are interrupted by a loud gunshot. The gunshot nearly clips at -1 dBFS (i.e., the waveform nearly reaches 0 dBFS), so peak normalizing the track will only raise the whole file by 1 dB to 0 dBFS and leave the gain of the two characters talking perceptually unchanged (remember, raising gain by 1 dB is barely perceptible to the human ear).

With loudness normalization, the whole file is measured using the aforementioned algorithms and noise gates to determine the average, or integrated, loudness of the whole file. The algorithms and gates take the loud and quiet parts of the "program material" into account, ignoring quiet parts (below a threshold of -70 LUFS, as defined in the documents) and allowing for momentarily louder parts, and then spit out an Integrated Loudness value (I) once the whole file has been analyzed.

The Integrated Loudness level is the value that will determine the perceived loudness of the whole track.

When program material needs to be -23 LUFS +/- 0.5, this is the value to check. It is important to understand that during playback, parts of the material (music/dialogue/EFX, etc.) can be louder or softer than the target (I); again, it's the overall average loudness that is taken into account.

Referring back to our example above: as long as the gunshot peaks at or below the permitted momentary loudness level (EBU R128 specifies a maximum short-term loudness, over 3 seconds or less, of +5 LU, i.e. -18 LUFS), the whole file will be raised (or lowered) in volume by "x" LU so that the dialogue (the average loudness) sits at -23 LUFS while the gunshot (outside the average) has heaps of headroom to play into.
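
Putting hypothetical numbers on that example shows which part of the signal drives the gain in each case:

```python
gunshot_peak = -1.0         # dBFS: the loudest sample in the file
dialog_integrated = -20.0   # LUFS: the average loudness, dominated by the dialogue

peak_norm_gain = 0.0 - gunshot_peak              # peak normalizing to 0 dBFS: +1 dB,
                                                 # driven entirely by the gunshot
loudness_norm_gain = -23.0 - dialog_integrated   # loudness normalizing: -3 dB,
                                                 # driven by the dialogue average

print(gunshot_peak + loudness_norm_gain)       # -4.0 dBFS: the gunshot keeps headroom
print(dialog_integrated + loudness_norm_gain)  # -23.0 LUFS: dialogue sits at target
```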

This will have more impact on the audience, because the dialogue is now audible and the gunshot has not been squashed down by a limiter or compressor, which would lessen the dynamics (and drama) between the two sonic elements.

The implications for more dynamics in music are also apparent.

Myth #5: dBFS peaks and RMS are more important to monitor than true peak or LUFS/LU readings.

Fact #5: Peak metering is rapidly becoming unnecessary, and it essentially never gave us useful information to begin with. Intersample peaks are not registered correctly by sample-peak meters. For example, a traditional sample-peak meter that displays a maximum of -0.2 dB could read as high as +3 dB on a true-peak meter.
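
The under-read is easy to reproduce: upsample the signal and take the max again. A simplified scipy sketch, a stand-in for a real dBTP meter (which mandates a specific 4x oversampling filter):

```python
import numpy as np
from scipy.signal import resample_poly

def sample_peak_db(x: np.ndarray) -> float:
    return 20 * np.log10(np.max(np.abs(x)))

def true_peak_db(x: np.ndarray, oversample: int = 4) -> float:
    """Rough true-peak estimate: upsample so intersample excursions
    between the original samples become visible to a simple max()."""
    return sample_peak_db(resample_poly(x, oversample, 1))

# A sine whose crests fall between samples: the sample peak under-reads by ~3 dB.
n = np.arange(48000)
x = np.sin(2 * np.pi * 0.25 * n + np.pi / 4)   # tone at fs/4, phase-offset
print(sample_peak_db(x))   # ≈ -3.0 dB: every sample lands off the crest
print(true_peak_db(x))     # ≈  0.0 dB: the intersample peak the samples miss
```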

With the new Maximum True Peak Level of -1 dBTP, the previous Permitted Maximum Level (PML) of -9 dBFS (as defined in ITU-R BS.645-2) is effectively obsolete, and the new level potentially replaces the previous music mixing convention for CD and online material of peaks no higher than -0.3 to -0.5 dBFS (once mastered).

As of this writing, Logic Pro X 10.2.2 has integrated true-peak measuring into all of its native meters, and it is highly recommended to use true-peak measurements from here on out.

As for RMS: it is much more useful for gauging the actual, longer-term level of a given waveform, but RMS is only a measurement (or display) of signal voltage, so it doesn't really tell us about perceived loudness. Two music tracks measuring the same RMS value may not have the same perceived loudness, because RMS does not take into account the psychoacoustic nature of apparent loudness as heard by the human ear; specifically, that low, mid, and high frequencies of the same level are not perceived as equally loud.

The integrated loudness measurement specifically takes this aspect of human loudness perception into account and adjusts accordingly.

Conclusion

So what does all this mean for music?

The EBU R128 documentation explicitly suggests that no major changes to current mixing styles (as of 2016) are immediately necessary, but it strongly recommends considering the implications.

For music producers and engineers there are two choices:

  1. Mix as you always have, and have your music turned down later by loudness-compliant playback systems.
  2. Mix to the new loudness standard of -23 LUFS / -1 dBTP and utilize the large headroom and dynamic range it affords.

When you mix/master a music track to the current convention of 16-bit 44.1 kHz with peaks between -0.3 and -0.5 dBFS and an average RMS of, say, -12 dB to -6 dB (brickwall-limited and loud), that track, when measured with an EBU-compliant meter, will show levels way above -23 LUFS (and possibly true peaks upwards of +3 dB), and thus will be turned down until it has an integrated loudness of -23 LUFS.

No compression, no further processing, just literally turned down.

What this means is that pushing for high RMS values and squashing out dynamic range will now actually work against your music when your “sausage” is played against music mixed to utilize the dynamic range afforded by the -23 LUFS mix headroom.

"Loud", over-compressed, brickwall-limited music (read: music with no dynamics) really cannot compete sonically with more dynamic material under the new standards.
