Loudness Analysis: How to fix?

(I apologize in advance for being such a newb.)

When I export I see this awesome analysis that shows me parts of the song which need some replay gain adjustments. I’d like to manually adjust which tracks are made quieter to achieve the intended sound. But is there any way to see this analysis actually shown in the editor view? It’s very hard to take a screenshot of the analysis and then try to find the places that were too loud. And by “hard” I mean “nearly impossible.” I tried this method for an hour or two on my last project and never seemed to succeed. What is the “proper” (or better) way to go about this?

PS: Or… Am I vain to think an automated normalize isn’t sufficient?

1 Like

Is your project a single ~~track~~ song, or a collection of ~~tracks~~ songs?

By the way, “Replay gain” is a gain adjustment which is supposed to be made on playback by an external player. It’s done using tags which are attached to each ~~track~~ song which describe the ~~track’s~~ song’s characteristics so that the player can make gain adjustments to fit the ~~track~~ song with other ~~tracks~~ songs that are being played.

Replay gain is not something you specifically adjust as part of the ~~track~~ song when recording or mixing.
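To make that concrete, here is a minimal Python sketch of what a player does with a ReplayGain tag at playback time. (The “-6.50 dB”-style value format follows the common Vorbis-comment convention; the parsing is deliberately simplified, and real players apply this in their audio pipeline rather than on a Python list.)

```python
# Sketch: how a player applies a ReplayGain tag at playback time.
# ReplayGain stores a dB offset (e.g. REPLAYGAIN_TRACK_GAIN=-6.50 dB)
# relative to a reference loudness; the player converts it to a linear
# scale factor and multiplies the samples. The audio file is untouched.

def gain_db_to_scale(gain_db: float) -> float:
    """Convert a dB gain to a linear amplitude multiplier."""
    return 10.0 ** (gain_db / 20.0)

def apply_replaygain(samples, tag_value: str):
    """Parse a tag value like '-6.50 dB' and scale the samples."""
    gain_db = float(tag_value.split()[0])
    scale = gain_db_to_scale(gain_db)
    return [s * scale for s in samples]

# A track tagged -6.02 dB is played back at roughly half amplitude:
out = apply_replaygain([1.0, -0.5], "-6.02 dB")
```

The point is that the adjustment lives in the metadata, not in the mix itself, which is why it isn’t something you set while recording or mixing.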

Cheers,

Keith

1 Like

The project has four to eight tracks playing at any given time, which was why I was thinking manually adjusting which tracks are reduced in volume might be desirable.

(Ahh, I didn’t realize the term replay gain was specifically for inter-track playback.)

Sorry, I think I may have confused things with my poor use of the word “track”.

What I meant to say is, is the project a single song, or a collection of songs, like an album?

Cheers,

Keith

1 Like

This might sound like a stupid question, but are you using any plugins on your tracks?
-Like EQs, compressors, limiters, etc.?

You can adjust track gain via each track’s:

  1. Fader. (-Which you can also automate.)
  2. Trim knob. (-That tiny knob/wheel near the top-left of a ‘track mixer strip’.)
  3. Gain plugins.

These:

[screenshots of the fader and trim knob omitted]

Now, were you saying that you were looking at something like this?:

[screenshot of the export loudness analysis omitted]

Making artistic changes in the mix to satisfy medium criteria is not usually the correct choice.

I’d recommend adding a limiter directly to the master-bus. And likely a compressor to select tracks.

3 Likes

I’m going to use the word “song” from now on, on the assumption it’s music you are producing (it could, of course, be spoken word, foley, or something else, but I’m going to assume music for the sake of discussion).

If you take a single song, you have many ways of adjusting the level of that song. Some are more complex than others.

You can simply adjust the faders on the mix screen to make everything louder. You can adjust the faders for some specific parts of the song, and this can be done using automation so that it gets louder during the chorus but goes back to a different level during the verses. This can be done (and is more typically done) for specific tracks (by which I mean Ardour tracks; individual channels or instruments in Ardour). So, for instance, you may wish to make the guitar track louder during a guitar solo.

You can also apply dynamics plugins. The most common ones here are compressors, which reduce the dynamic range (the difference between the quiet parts and the loud parts) and limiters, which limit how loud transients can get.
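As a toy illustration of the difference (this is not how real plugins are implemented — real compressors and limiters use attack/release envelopes and look-ahead, all omitted here), a fader move scales everything equally, while a limiter only touches the peaks:

```python
# Toy illustration: fader gain vs. hard limiting on a buffer of samples.

def fader(samples, gain_db):
    """A static fader move: scale every sample by the same factor."""
    scale = 10.0 ** (gain_db / 20.0)
    return [s * scale for s in samples]

def hard_limit(samples, ceiling):
    """A brick-wall 'limiter': clamp anything that exceeds the ceiling.
    Real limiters use look-ahead and release curves; this just clips."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

buf = [0.1, 0.9, -1.2, 0.3]
louder = fader(buf, 6.02)      # ~2x: quiet parts and peaks both rise
tamed = hard_limit(buf, 0.8)   # only the loud transients are touched
```

This is why the two are used together: the fader (or compressor) sets the overall balance and density, while the limiter catches the stray transients.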

These are all common mixing techniques that have been used on every commercial recording of the last several decades, regardless of whether analogue, digital, or software tools were used.

If you have a project which has multiple songs, then you would typically perform a general mix across all of them to get things sounding roughly balanced across the songs, before focusing on individual songs to tweak their mix.

Then you can use fader automation on the master bus to make an individual song louder if the level seems particularly low to you. And you can also put compressors and limiters on the master bus, and automate those if necessary.

But, as @x42 suggests, trying to hit a specific “loudness” target isn’t always the best thing to do, depending on the context. Many excellent classic albums have songs which have much lower loudness than other songs on the album, and this was a deliberate choice made by the artists and the mixing and mastering engineers.

It all depends on what your material is and what your artistic intent is.

If your aim is to have a collection of songs which match the loudness of similar songs on (say) Spotify, then you may want to consider separating individual songs into their own projects and mixing/mastering them separately, as this will probably be easier than dealing with multiple songs in one go.

And if you are going to master every song to (say) -14 LUFS, then I don’t see any benefit in keeping the songs together in the same project.

Cheers,

Keith

2 Likes

I’m just an amateur here but I have been using this approach on my last several mixes and have been very satisfied with the results. I have sent mixes out to a professional mastering service in the past and just using the normalization presets in Ardour (I use the one for YouTube/Deezer) yields similar loudness, perhaps missing the “sweetness” that a pro master has. But still good enough for me.

I would encourage you to give it a try, after you have optimized the individual track levels using compression and limiting. I also use a very light compression on the master bus and sometimes add a tape saturation model on the master as well.

1 Like

Thank you everyone for your thoughtful replies. Even the ways in which my questions were misunderstood were highly educational for me. For example, it never occurred to me that several songs might be in the same project at once. And, also, it totally went over my head (at first) that we use the word “track” both for individual files and for the stems making up songs.

I ended up asking ChatGPT for help with understanding a lot of terms and normalizing to -1 dBFS for the true peak.
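For anyone curious what that normalization amounts to, here is a rough Python sketch. (Note: a real true-peak measurement oversamples the signal to catch inter-sample peaks, as Ardour’s export analysis does; this sketch uses the plain sample peak, so it slightly understates the true peak.)

```python
# Sketch: peak normalization to a dBFS target.
# Scales the whole mix so its loudest sample sits at the target level.

def normalize_to_peak(samples, target_dbfs=-1.0):
    """Scale samples so the (sample) peak lands at target_dbfs.
    A real true-peak normalizer oversamples first; this one doesn't."""
    peak = max(abs(s) for s in samples)
    target = 10.0 ** (target_dbfs / 20.0)  # -1 dBFS is about 0.891
    return [s * (target / peak) for s in samples]

out = normalize_to_peak([0.5, -0.25])  # peak moves from 0.5 up to ~0.891
```

Note that this only changes the overall level; it does nothing about which individual tracks are too loud relative to the others.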

@GhostsonAcid I am not using any plug-ins yet. I’m still in a huge learning curve with the base software. I’ve used VegasPro for making videos for years, and for the most part, I could manipulate stems within that software to my contentment for my videos. I’ve used Audacity for even longer, but mostly just for audiobook creation and podcasts. When I tried using Audacity for music I was baffled it didn’t have the basic and critical ripple-and-drag features found in VegasPro.

Anyhow . . . I still feel like there ought to be a feature that shows where the loudness peaks are in real time on the editor screen. Like on the same screen that I’m doing the editing on. It’s not useful for the information to be hidden on an analysis which I can’t interact with.

@x42 I don’t understand why I wouldn’t want to fix loudness spikes by choosing which instruments/voices need to come down a nudge? A friend of mine helped me out a lot with one track simply by pointing out that the vocals were way too loud relative to everything else. I hadn’t noticed because I personally loved the vocals so much, but bringing their volume down a bit really did help the overall piece.

1 Like

@TonyBKDE I think I’m a little lost on what’s “good enough,” really. I feel like I could keep tweaking a song forever, and I’m unsure when I’ve reached the “good enough” place. With visual arts I have over two decades of experience, so I feel confident about where the “good enough” place is; I can stop there, and if others don’t prefer it, I can shrug it off. But I feel so unsure and insecure in the area of music that I’m quite afraid my own satisfaction is truly insufficient.

I guess I’m trying to figure out how to refine my workflow on the album I’m making so that I have consistent standards. @Majik It may make sense for experienced artists to make creative calls about some songs being quieter on their albums, but I doubt this is something I yet have the discernment for? For a beginner making their first album vaguely in the metal genre, I am imagining a consistent standard makes the most sense?

Ah, well that’s a whole different problem than just lowering the level of part of a track that’s too loud. The Loudness analyser really isn’t going to help you much with that, as it only shows the loudness of the combined, mixed track.

What you are talking about here is altering the mix which is about the relative level of each of the tracks. There’s no magic-bullet tool that can tell you the answer to this, as a lot of it is about artistic intent. But there’s also a lot of experience involved.

One useful tool is to use reference songs: songs from other artists in the genre that you like and think have a mix or “vibe” that you would like to emulate. Load one or two of these up in separate tracks in Ardour and use mute/solo to A/B them to your song, and try to consider where your mix differs.

Maybe not. But I would consider that to be an artistic decision too. In which case, I would suggest keeping each song as a separate project whilst you are editing/mixing them. If the genre is metal, then that also drives the artistic intent when it comes to dynamic range and loudness.

If you were doing instrumental Jazz, you would probably take a different approach.

But it is important to consider that there are multiple steps in the production process which normally include “mixing” and “mastering” as separate steps.

Mixing is about getting the relative levels, EQ, stereo panning, and “space”, etc. of your song as you want it. This includes stuff like automation (e.g. to push the guitar louder during the solo), effects (reverb, etc.), and editing to fix things that you can’t easily re-record (e.g. cutting out background noise on the vocal mic when there’s no singing, or replacing part of the bass when a bad note was played).

Mixing can, and usually does, include the application of EQ and compression on each track (or groups of related tracks). This is so fundamental that recording studio consoles have built-in EQ and compression on every channel strip, as well as faders for level and pan controls.

Mastering is about taking the finished mix and getting it to sound good on the target playback medium and environment. In the analogue days where media like tape and vinyl had significant compromises, mastering made sure that the audio worked well within the physical constraints of those media.

In the modern digital streaming world, the same thing applies, but digital media has far fewer constraints.

Mastering primarily involves application of EQ, gain, compression, and limiting to the finished mix, rather than to individual instrument tracks.

If your vocal is too loud, then you have to address that in the mixing stage, not when you are mastering.

I hope that’s somewhat helpful.

Cheers,

Keith

1 Like

I think you need to start looking at plugins. As I said in my previous post, certain audio processing (EQ and compression) are so fundamental to music production that recording studios have them baked into every channel they record.

These are not baked into most DAWs (Ardour included). They are available as plugins, but they need to be added to each track, and you need to choose which EQ or compressor you want.

To give you an idea, this is the mix view of a recent project I was working on:

[mixer screenshot omitted]

As you can see, almost every channel has an EQ and compressor plugin (and some have more than one of each).

Cheers,

Keith

1 Like

That probably depends on the intent of the project. For a typical studio workflow, where songs are recorded individually and you decide later whether they will be grouped into an album and in what order, having a single song per project is often the most convenient.

For a live concert recording, where there is expected to be continuity of sound between the different songs, continuous audience response noise, etc., having one long project would be typical.

When making an album-format release, e.g. a CD, pre-mixed individual songs would often be imported as stereo tracks; then the order of the songs is arranged in the project, level adjustments are made between songs, and often slight equalization and compression are applied to give all the songs a similar sound impression (if desired).

That is more complicated than it may seem. The editor does show amplitude peaks, but loudness as perceived by the ear is not strictly related to amplitude alone: it is determined by a combination of signal amplitude, frequency content, and what was heard previously.
The term “previously” is also variable: if you study the specifications for the loudness standards, there is typically a calculation method for short-term loudness and one for long-term or integrated loudness (which you might think of as a kind of average loudness over the entire song/album/movie etc.).
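For a feel of how the short-term and integrated values relate, here is a simplified Python sketch of the calculation shape used by the BS.1770 family of loudness standards. It omits the K-weighting filter and the gating steps, so it is not a compliant meter, just the structure of the averaging:

```python
import math

def block_loudness_lufs(block):
    """Loudness of one block as -0.691 + 10*log10(mean square),
    the BS.1770 form, WITHOUT the K-weighting filter or gating."""
    ms = sum(s * s for s in block) / len(block)
    return -0.691 + 10.0 * math.log10(ms)

def momentary_vs_integrated(samples, rate, block_s=0.4):
    """Loudness per 400 ms block, plus one value for the whole signal."""
    n = int(rate * block_s)
    blocks = [samples[i:i + n] for i in range(0, len(samples) - n + 1, n)]
    momentary = [block_loudness_lufs(b) for b in blocks]
    integrated = block_loudness_lufs(samples)
    return momentary, integrated
```

The per-block values are what a meter's "momentary/short-term" readout tracks, while the single long average is roughly what the export analysis reports as integrated loudness.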

If you are interested in the background of best practices for mixing to a particular loudness standard these two articles are what I would consider the foundation of modern best practices (and the basis of the K-system meters used by default on the Ardour master bus).
Level Practices part 1
Level Practices part 2
The most succinct summary would be: set your monitoring levels using calibrated gain as specified for movie mixing per Dolby recommendations, possibly with an adjustment to accommodate differences in practice between music and soundtrack expectations, then just mix to what sounds good.

That depends on whether you are trying to fix a balance problem, i.e. one instrument or voice is too loud or too quiet relative to the other instruments at a particular place, or an overall level problem, meaning that the balance is what you want, but the total level is too high or too low.

That is where having another person to provide an opinion can be helpful. It can also be useful to get a song to a point where it is close to what you want, then leave it for a while and come back after resting or working on something else to see if your opinion has changed.

I think you may not give yourself enough credit, assuming you are a reasonably experienced musician, or even music listener. For an album, I think the relevant thing would be to have the songs in the order you expect them played, then actually listen to them all together in that order. It will be somewhat obvious if (to pick an extreme example) a slow ballad with one voice and acoustic guitar is as loud as or louder than a song with a full rock band and harmony vocals. It just would not sound natural (although most albums would not make those types of recordings differ by anything close to the actual natural difference in loudness you would hear in the room).

One advantage of having someone not involved in the music performing or writing listen is that as a performer you may get distracted by particular performance issues when listening, but what is really needed at that time is listening to the overall blend of the music without focusing on whether one particular note was in tune or on time.

1 Like