Exporting, normalizing, and mixing in general

I’m editing a podcast. I normally use Audacity but I really want to move editing to Ardour.

I’ve been playing around in Ardour for about 2 weeks and I’m getting the hang of it.

There are six tracks, each with a different podcast guest. We’re all over the world, not in the same room.

After I apply the different effects via plugins, what is the best way to get a better mix? Some guests are a little quieter than others. I use a compressor and a limiter, but the overall levels still vary widely.

One way I’ve done it is, at the end, to switch to the Mixer view and adjust the gain for each track until the levels hover between -10 and -5 dB. Something feels hacky about that. I feel like after the compressor, the limiter, and the makeup gain, I should already be at those levels. When I used Audacity, once in a while I had to amplify a track to boost the gain, but not often. Is there a better way to get the output into the -10 to -5 dB range?

Also, two guests have pretty hot mics. I try to remind them to turn their mic input down a bit, but sometimes they forget and there’s slight clipping. In Audacity I can use a hard limiter and chop everything off above, say, -2 dB, but I can’t figure out how to do that in Ardour. I’ve been playing around with the LSP Limiter Stereo, but maybe I need to use a different limiter. I have the threshold at -5 dB but I still hear some clipping, and I have to turn the input level down to fix that without getting lots of popping from plosives.

Also, I’ve been hearing about the LUFS standard and that for podcasts you want it to be around -18 to -15. How do I see in advance what it will be? I have exported small sections to check, and I’m getting more like -29 to -31 LUFS, viewable by selecting Analyzing Audio. The problem is that I can only see it after waiting for an export. Is there a way to know in advance what the output will be so I can make adjustments before exporting?

Sorry for all the questions. I’m loving Ardour so far, but there are a couple of things I need to get better at before I can ditch Audacity.

Since I use a limiter, I feel like I shouldn’t need to normalize when I export. When I do export, it doesn’t seem to matter whether I check normalize or not: the sound stays the same, the LUFS is always within that -31 to -29 range, and the tracks still vary in their levels.

Any help is greatly appreciated.

I’d do it the other way round: adjust the gain for each input until the levels are roughly the same, then, if necessary, use compressors after the manual gain stage if there are big variations in level within one channel. One quick way to do that might be to normalize the regions for each track - that brings the peaks to the same level for each track.
Also, if there are sections in the timeline where the gain of a channel needs boosting or cutting overall, consider using automation to do that.
Ardour has many different places to control gain, including

  • setting boost or cut per region (also normalize region)
  • gain trim of +/- 20 dB on each input
  • input gain automation
  • fader
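
To make the normalize-the-regions idea concrete, here’s a rough sketch (Python, outside Ardour, with made-up file names and an example -6 dBFS target) of the dB arithmetic that peak normalization does for each track:

```python
# Rough sketch of the dB math behind peak-normalizing each guest track to a
# common level. File names and the -6 dBFS target are example values only.
import numpy as np
import soundfile as sf

TARGET_PEAK_DBFS = -6.0  # a common starting point that leaves headroom

for path in ["guest1.flac", "guest2.flac"]:  # hypothetical files
    audio, rate = sf.read(path)
    peak_dbfs = 20 * np.log10(np.max(np.abs(audio)))   # current peak level
    gain_db = TARGET_PEAK_DBFS - peak_dbfs             # gain needed to reach target
    print(f"{path}: peak {peak_dbfs:.1f} dBFS -> apply {gain_db:+.1f} dB of gain")
```

Normalizing the regions inside Ardour does that calculation for you; the point is simply that every track starts from the same peak level before you reach for compressors.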

Thanks, that makes sense. I’ll try getting the tracks to a similar level first, then add effects.

I haven’t explored automation yet, but it keeps popping up as a suggestion, so that may be next.

If I normalize the regions for each track, then I imagine I should uncheck normalize when exporting? I’m under the impression one should only normalize once.

Is using the fader in the editor the same as adjusting the levels in the Mixer view? Is that the same fader?

Try the Loudness Assistant, which has been available since Ardour 6.3. It can be reached from the Session menu and then subsequently by clicking on the LAN button on the master bus. It will not only analyze everything coming out of the master bus but also conform it to whatever standard you desire. If you need a certain LUFS level but have stray peaks, use “custom LAN position” (it appears as a LAN amp in the master bus processor box) and place a true-peak limiter after it (set to -1 dBTP, for example). In that case, select only the loudness, and not the peak, in the Loudness Assistant. Either way, when exporting don’t forget to disable any further normalization in the export presets.
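
If you want a reading before committing to a full export, the Loudness Assistant is the in-Ardour way. Purely as an illustration of what it is measuring, here is a small sketch using the pyloudnorm Python package on a short exported test section (the file name and the -16 LUFS target are just examples, and this is not what Ardour runs internally):

```python
# Measure integrated loudness (ITU-R BS.1770) of a short exported section and
# work out how far it is from a target. File name and target are examples.
import soundfile as sf
import pyloudnorm as pyln

audio, rate = sf.read("episode_test_section.wav")
meter = pyln.Meter(rate)
loudness = meter.integrated_loudness(audio)   # e.g. somewhere around -30 LUFS

target = -16.0                                # a common podcast target
print(f"measured {loudness:.1f} LUFS, {target - loudness:+.1f} dB from target")

# pyloudnorm can also apply that gain for a quick offline check:
adjusted = pyln.normalize.loudness(audio, loudness, target)
sf.write("episode_test_section_norm.wav", adjusted, rate)
```

The gain it reports is roughly the kind of adjustment the assistant applies for you when you pick a preset.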


The Loudness Assistant is indeed a great feature, I love it!

I also do some work on podcasts and here are a few other tips to consider:

  1. As Anahata said, you can normalize the regions for each track: I usually normalize them to -6 dBFS and see how that works; it gets everything on a level playing field while still likely giving you more than enough headroom.

  2. To minimize the need for compression you can look through a region’s waveforms for the highest peaks and reduce them by hand, which is fast and easy if you use the “smart” mode in the editor: just hover your mouse in the upper half of a region, select a range around the peak, type d to enter draw mode, and drag down on the horizontal line above the peak. Drag down to where the horizontal line matches the general level on either side of the area you selected. Now type g to go back to normal edit mode. Unlike some other DAWs, Ardour doesn’t redraw the waveforms to show the effect of your gain-lowering, but if you type d again you can see all the places where you’ve lowered gain this way. If you use this approach to reduce all the highest peaks, you won’t need to apply as much compression.

  3. Plosives: there are many valid techniques for attenuating plosives. They are not just about volume but also frequency, so I usually attack them with EQ automation (automating the high pass/low cut filter to a higher cutoff over a plosive will usually attenuate it effectively and realistically; see the rough filter sketch after this list). But you can also attenuate them using volume alone, for example with the same technique I described in item 2 above. Some people even make a split before a plosive and apply a fade-in. I’ve seen at least 6 or 7 different approaches to dealing with plosives.

  4. Breath sounds: There are also different approaches to dealing with this; I don’t mind normal breath sounds but for the louder gasps I’ll attenuate them using the same method described in item 2 above. I suppose a brute-force method would be to apply a gate. Depending on where you apply compression (pre or post fader) and how much, breath sounds can become more evident again when you add compression.
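
On the plosive tip in item 3, here is a hedged illustration (plain Python with scipy, not Ardour’s EQ) of why raising the low cut works: most of a “p” thump sits roughly below 100-150 Hz, so a high-pass around there removes the thump while leaving the voice largely intact. The cutoff and file names are example values:

```python
# Illustration only: high-pass a clip containing a plosive to attenuate the
# low-frequency thump. The 120 Hz cutoff and file names are example values.
import soundfile as sf
from scipy.signal import butter, sosfilt

audio, rate = sf.read("plosive_clip.wav")
sos = butter(2, 120, btype="highpass", fs=rate, output="sos")  # gentle low cut
filtered = sosfilt(sos, audio, axis=0)                         # mono or stereo
sf.write("plosive_clip_lowcut.wav", filtered, rate)
```

In Ardour you’d get the same effect with EQ automation on the track rather than by processing files offline.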

That’s a general rule when mastering: only normalize once, and only as the last step in processing (getting Ardour to normalize on export will take care of that for you). In this case, though, I’m suggesting using it per track as a simple way to choose a gain setting that will get levels in the same ballpark.

Thanks again. I played around with this but couldn’t quite figure it out. At the very least it allowed me to see, before the export, what the loudness would be. I also chose a smaller range, as long as it included all guests, so it didn’t take as long to generate. That really helped show me whether what I was doing was working as intended. I didn’t understand what the Apply button did; I thought the tool was just analyzing the loudness, but when I tried Apply on different snapshots of the session, the result was super loud. In the end, just having this analysis tool while I’m working, instead of only at the end, was super helpful.

If you select one of the loudness presets, “Apply” will adjust the gain to achieve that loudness or peak (whichever comes first). If you run the tool again, you will hopefully find the loudness exactly at the preset level. As I mentioned previously, there’s also the option to apply it at a custom gain position so you can place it before a true-peak limiter and fulfill both loudness and peak requirements (assuming a mix with a large dynamic range).
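
To spell out the “whichever comes first” part with made-up numbers (this is just the decision logic, not Ardour’s code):

```python
# "Loudness or peak, whichever comes first": the applied gain is the smaller of
# the gain needed to hit the LUFS target and the gain the peak ceiling allows.
measured_lufs = -30.0            # example measurement
measured_peak_db = -8.0          # example true peak in dBTP

target_lufs = -16.0
peak_ceiling_db = -1.0

gain_for_loudness = target_lufs - measured_lufs        # +14 dB wanted
gain_peak_allows = peak_ceiling_db - measured_peak_db  # +7 dB allowed

applied = min(gain_for_loudness, gain_peak_allows)
limited_by = "loudness" if applied == gain_for_loudness else "peak"
print(f"apply {applied:+.1f} dB (limited by {limited_by})")
```

That’s why the custom gain position plus a true-peak limiter is useful: the limiter catches the stray peaks, so the full loudness gain can still be applied.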

Instead of normalizing each track, I wound up using a limiter on the master bus, set to -3 dB. That, combined with adjusting the fader for each track to achieve the right mix and avoid peaking, meant the output stayed below -6 dB or so.

This week, in between episodes, I’m going to take one of the tracks with the largest peaks and valleys and try out the smart-mode gain reduction you mention in #2. Is that just reducing the gain for the region you select, or is it something else? I’ve seen some YouTube videos where they do this, but instead of customizing it for each peak, they lower the line across the whole length of the track at the same level, which has the effect of chopping off the peaks. I’m not sure how different that is from doing it for each peak, although it seems more like what a limiter does.

For #3, I thought I saw somewhere that you can set, in Ardour’s preferences, an automatic fade-in when you make a cut. I generally cut right before a plosive or a breath if it’s egregious; smaller ones don’t bother me. It’s not that big of a deal to grab the corner of the clip and make a fade-in, but it would save some time and mouse clicks if I could standardize it.

Thanks again for the tips, I’m trying them all out.

See this video starting around 2:15 (it’s for an old version of Harrison Mixbus, which is built on Ardour; the same procedure works in Ardour): https://youtu.be/DjlBbao5gMQ
You’re changing gain (either raising or lowering it) in the selected region of an item; once you’re done making these region gain changes to a track, you won’t need to use as much compression (I avoid using compression unless I have to, as I think everything sounds more natural without it). It’s probably closer to the leveler than the limiter in terms of what it does.

Thanks, that was a perfect explanation. I also didn’t know Mixbus was built on Ardour. I’ve seen people share Mixbus videos, but have not watched them because I didn’t know how similar it was to Ardour. I’ll take a look at a few more.

I’m still not sure why I wouldn’t just use automation across the whole bus, as opposed to just on the peaks I’m trying to wrangle back?

I guess I’m assuming that if the track is below the green line, nothing happens to it, but it might also be the case that the areas below the green line are reduced in volume too, which would just lower the overall volume of the track rather than solving the peak problem.

Thanks for everything you have been sharing.

The editor in Mixbus is largely identical to the editor in Ardour (there are a few differences, sometimes Mixbus is ahead of Ardour with new features and sometimes Ardour is ahead), but the mixer is completely different.

You can raise or lower the green line to increase or decrease the volume overall (the alternative is to use control-6 or control-7 to raise or lower the volume, respectively), but that’s going to make the whole track louder or softer. The approach I was describing works similarly to compression: you can make quiet parts louder and loud parts quieter, with finer control and more transparency than applying compression gives you. On the other hand, it’s a lot more work to do these adjustments by hand! For a podcast, I’d have no qualms about using compression (although I might tame some of the highest peaks first and bring up any very quiet areas before applying it). For music, at least the kind of music I do, I’d rather take the time to adjust the dynamic range by hand first and then use compression only if I need it.
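
If it helps to picture what those by-hand edits amount to, here’s a conceptual, offline sketch (Python; the file name, times, and -4 dB figure are made up): only the selected range gets attenuated, everything else is untouched:

```python
# Conceptual sketch of a per-range gain edit: attenuate only the samples around
# one peak and leave the rest of the track alone. All values are examples.
import soundfile as sf

audio, rate = sf.read("guest3.flac")
start, end = int(12.4 * rate), int(12.9 * rate)   # half a second around a peak
audio[start:end] *= 10 ** (-4.0 / 20)             # pull that range down by 4 dB
sf.write("guest3_tamed.flac", audio, rate)
```

A compressor would instead react to everything that crosses its threshold, which is exactly the trade-off between control and effort described above.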

Thanks again. Yes, it’ll be useful for quick reductions of peaks without spending too much time on it. I do apply compression on the podcast; we’re trying to go for a studio sound now. We’re 700 episodes in and thought, let’s make it better, although I’ve only been the audio editor for the last 40 or so episodes.

That’s 37 episodes more than I’ve done. :wink: I’m hosting a small sporadic podcast as part of my regular day job; I composed the opening and closing music, which was the most fun part of it all, and I line up and conduct the interviews, edit, and mix. We’ve released three episodes to date…only one was an in-person interview and the others were over Skype so the audio quality isn’t great.

I just get the audio files and make the podcast. I’ve been a guest once, but other than that I don’t really get involved, except for a few New Year’s predictions and feedback about mic technique.

We talk over Mumble during the livestream, but each guest also records a local copy using Audacity so we can get decent sound. They share the FLAC files with me via a Nextcloud instance.

I used to use Audacity, but Ardour is making everything so much easier, and it sounds better too. The podcast comes out weekly, so I appreciate the workflow improvements.


This is the plugin stack I use when recording our podcasts. I’m able to get everything “close enough” with compression. I throw a just-in-case limiter on the end since we’re streaming live while recording.

From there I mix everything down to a mono track and hit it with the Level Speech plugin in Audacity.

It’s not fancy, but it’s quick.


That’s super helpful, thanks.

I had a question about Noise Repellent. I find it not super clear when, or whether, it is working. The waveform isn’t redrawn, from what I can tell. I wind up looking at the level meter for the track I applied it to, to see whether any sound is still coming through in the quiet parts.

I normally loop-play a small section of “quiet” from a track and click on Learn Noise Profile. Then I uncheck it and adjust the reduction amount and the next two options. Then I listen for artifacts in the spoken parts and adjust the reduction amount until there is no more artifacting. Sometimes there is no artifacting. Sometimes I cannot tell if anything is happening as a result of the plugin.

On some tracks there is no audible or visible noise, but I apply it anyway because there is always some ambient noise.

I’m figuring out how the plugin works, but it’s not always obvious whether it’s working or how effective it is. It would be great to have more of a spectral view so I could see the noise reduction in real time.

If you are referring to the waveforms in Ardour, that is by design. Ardour runs all plugins in realtime and is non-destructive. What you have described is a destructive workflow, which is better suited to other tools. The same thing is true when I run iZotope RX or Wave Arts MR Noise. In the case of iZotope, they also ship an entire destructive editor for using their tools in the manner you describe, precisely because that destructive, non-realtime workflow is actually better for this purpose in many cases; it depends on the specific needs.

  Seablade

Thanks, I understand the non-destructive idea; it’s why I switched from Audacity to Ardour.

What is the best way to visually see an effect like Noise Repellent, given that the noise waveform doesn’t change?

I can hear it (sometimes only barely), but I’d like a visual sign too.

If you really need to, I suppose you could use a spectral analyzer before and after Noise Repellent…

As an aside, I have often disabled real-time waveform creation when recording in a DAW as at some point you just trust it is capturing everything (and enjoy the freed-up resources) :wink:
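
To expand on the spectral-analyzer suggestion: one offline way to get a visible before/after is to export the same quiet section twice from Ardour, once with Noise Repellent bypassed and once with it active, and then compare the noise-floor RMS and spectrograms. A sketch in Python (the file names are assumptions you’d substitute with your own exports):

```python
# Compare a quiet section exported with Noise Repellent bypassed vs. active:
# print the noise-floor RMS and show spectrograms. File names are examples.
import numpy as np
import soundfile as sf
import matplotlib.pyplot as plt
from scipy.signal import spectrogram

def rms_dbfs(x):
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

before, rate = sf.read("quiet_bypassed.wav")
after, _ = sf.read("quiet_denoised.wav")
print(f"noise floor: {rms_dbfs(before):.1f} dBFS -> {rms_dbfs(after):.1f} dBFS")

fig, axes = plt.subplots(2, 1, sharex=True)
for ax, (label, x) in zip(axes, [("bypassed", before), ("denoised", after)]):
    mono = x if x.ndim == 1 else x[:, 0]
    f, t, S = spectrogram(mono, fs=rate)
    ax.pcolormesh(t, f, 10 * np.log10(S + 1e-12), shading="auto")
    ax.set_ylabel(f"{label} (Hz)")
axes[-1].set_xlabel("time (s)")
plt.show()
```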