Vocals are defeating me - I can never make them sound right. The meter will hit +5 dB but you can barely hear them in the mix.
My platform is Ubuntu Studio 24.04, going through a spanking new Scarlett 4i4 with an MXL 990 condenser.
My current hack is to put a compressor in the chain after the fader, crank the threshold way down, the ratio way up, and add a ton of makeup gain. This gets me something I can hear, but it doesn’t necessarily sound good, and I can’t believe that’s standard practice.
I mean, my voice is crap, no amount of postprocessing is going to fix it, but I’d at least like to be able to hear it.
I’ve also played with EQs and filters to get some band separation from the guitars, but it doesn’t help much.
So, questions:
What plugins do you use on vocals?
Where do they go in the signal chain? Pre-fader, post-fader?
What order do you put them? Does the compressor come before the filter/eq, after, does something come between them?
How do you keep vocals from sounding “thin” without them sounding muddy?
Waow!!!
Vocals are often what makes the difference between a tune I'll listen to until the end and a tune I skip right after the instrumental intro.
No plug-in nor mic placement can make a vocal sound “good” if it isn't…
Sounding good doesn't mean, though, singing like a classically trained vocalist.
Take time, and maybe find good teachers. Know what you want to convey with your voice, and then develop your own craft in your own style…
Once your voice is there, plug-ins, compression and equalisation will be really simple.
Because you EQ, compress and “produce” (add plug-ins) according to the inner style of your singing…
The genre of music you are working on will have a large bearing on how your untreated vocal sounds, as will the level you recorded it at. For example, in an acoustic-based arrangement, if your vocal is recorded in a good, safe range that takes advantage of the available bit depth without overshooting, you may find it sounds pretty present in the mix as-is. Conversely, in a busy arrangement with drums, bass and electric guitars, an untreated vocal will sound like it's miles away, and as you've noted you'll need compression to artificially bring it up to a level where it finds a place in the mix. You're not on the wrong track, but it's a matter of trying different compressors and different settings to keep the tonal character of your vocal intact.
Some compressors I personally prefer specifically for vocals are:
Applied Computer Music Technology 510X1
U-he Presswerk
AudioThing ‘Voice’
Those are commercial options, but I've found they always just sound good. That doesn't mean there aren't some terrific open-source options; I'll leave it to others to comment on those.
I think plugin placement depends on the situation and is flexible. I almost never use an EQ on vocals, and I recommend a decent-quality condenser mic with a pop screen for the best raw capture. I usually place compression pre-fader, after an EQ (if used). If I use an in-channel reverb or delay, I usually place those pre-fader as well, and if I use a limiter it goes post-fader. Effects are usually better handled with busses, though. There are many ways of doing things; good ears will get good results even if the methodology differs.
If your vocal track peaks at +5 dB, it is definitely not too quiet. I would say the instrument tracks are too loud.
My advice would be:
Take a deep breath. If there is any processing on any of the tracks already, turn it off. Bring all the faders down. Breathe, then bring up the vocal track to a more reasonable level (peaking at -5 dB or lower).
If that sounds too quiet on its own in your monitoring system, turn up the output of your interface.
Now bring up your instrument tracks, one at a time, in whatever order you see fit, until they sit at a decent level in your song's arrangement.
Sounds to me like an arrangement problem more than a technical problem. Make room for the vocals: look at which frequency range the voice mostly occupies and try to keep other instruments out of that range, at least a bit. Clean up the mix! I would use a compressor before the EQ, both before the fader. Compression after the fader is not recommended, because every fader move would change the compression.
My advice:
Before throwing any plugin onto any track, get the “basic mix” right (courtesy of Joe Gilder, who also always says this). That is: just the raw tracks, using only gain, pan and fader controls.
From a level perspective (some people call this “gain staging”): when I look at the master strip on my own songs, every instrument group (drums, guitars, vocals) soloed usually lands inside the light green area (between -20 and -12 dB RMS), and peaks (e.g. on drums) may reach around 0-3 dB (yellow area). This way you should have enough headroom to actually do the “basic mix” without running into clipping issues.
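To make those level targets concrete, here is a minimal Python sketch (the function names and the test signal are mine, not from any Ardour code) showing how digital peak and RMS levels in dBFS are computed from raw samples:

```python
import math

def peak_dbfs(samples):
    """Digital peak level in dBFS (0 dBFS = full scale, sample value 1.0)."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

def rms_dbfs(samples):
    """RMS level in dBFS; for steady material this is much lower than the peak."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# A full-scale 440 Hz sine at 48 kHz: peak close to 0 dBFS, RMS close to -3 dBFS
sine = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
print(round(peak_dbfs(sine), 1))  # close to 0.0
print(round(rms_dbfs(sine), 1))   # close to -3.0
```

The roughly 3 dB gap between peak and RMS on a pure sine is the smallest you will ever see; real vocals and drums have a far larger peak-to-RMS spread, which is why the "-20 to -12 dB RMS" target still leaves headroom for 0-3 dB peaks.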
Usually a parametric EQ, with at least a high-pass filter (cutting off rumble from the floor or mic stand) and, on demand, some bell-curve points to cut nasty frequency ranges that pop out too loud. Usually I also boost the highs with a high-shelf point for more “clarity”. YMMV.
After that, if the vocals pop out too loud in some places and too low in others, I usually split verse/chorus or clean/scream/growl parts into separate tracks so I can adjust the gain/fader on each to fix the biggest loudness differences. A compressor may still be needed to flatten the remaining dynamic range on the individual tracks.
Third, due to compression, I usually need a de-esser to tame the “s” consonants that can pop out too harshly.
I usually prefer pre-fader. On tracks containing wave regions I usually also switch the meter to show the pre-fader level.
Whatever sounds better in the end.
I usually EQ first, with the reasoning that I don't want e.g. low frequencies (which the EQ cuts anyway) to trigger the compressor more than the filtered vocals would.
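That reasoning can be demonstrated with a toy example (made-up signal values, not a real vocal): a sub-100 Hz rumble you barely hear can still push the combined signal over a compressor threshold, so cutting it before the compressor keeps the gain reduction driven by the vocal alone.

```python
import math

fs = 48000
# A modest "vocal" tone and a low-frequency stand rumble, one tenth of a second each
vocal = [0.3 * math.sin(2 * math.pi * 1000 * t / fs) for t in range(fs // 10)]
rumble = [0.4 * math.sin(2 * math.pi * 40 * t / fs) for t in range(fs // 10)]

threshold = 0.5  # hypothetical compressor threshold as a linear sample value

def over_threshold(sig):
    """Would a peak-sensing compressor react to this signal at all?"""
    return any(abs(s) > threshold for s in sig)

print(over_threshold(vocal))                                   # False: vocal alone stays under
print(over_threshold([v + r for v, r in zip(vocal, rumble)]))  # True: rumble pushes it over
```

A high-pass filter before the compressor removes the rumble, and the detector then only ever sees the first, under-threshold case.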
By finding the right frequency balance (EQ). Sometimes even “thin” vocals may sound great in a mix. Whatever you do (or think you might have to do), always listen to the full mix and double-check by toggling the plugin (or the EQ point) you added on/off that it actually solves or at least improves the issue you wanted to fix in the first place.
Some production / arrangement tricks to make vocals sound more “full” (especially in choruses):
vocal doubling: have one “main” track in the center, and put the 2nd and 3rd best takes of exactly the same vocal line on additional tracks, panned left and right and turned down by around 12 dB compared to the main track (play with the relative level difference until it sounds good)
vocal harmonies: have additional tracks where the singer sings a lower/higher line matching the main line. Depending on the song, and whether it's the first or last chorus, there might even be 2 or more of those…
background vocals: have additional tracks where the singer only does some "aaah"s or "oooh"s. You may stack several of those tracks, e.g. 3 or more, to form full chords.
maybe even have some pad synths following the background vocals to add a different “texture” there.
I'm not saying you always need all of these, but it's good to know the tricks. If in doubt, try each one in your next song; if it improves the song, keep it, otherwise try something else…
Which meter, the meter on the audio interface, or the track meter in Ardour, or the bus meter in Ardour?
If you are referring to Ardour's meters: the default track meter is a digital peak meter, where +5 is well into hard clipping, but the master-bus default is K20, where +5 is a fairly low level.
If you are mixing rock or pop music, try switching the master-bus meter to K14 or K12; those scales are arranged so the dynamic range typical of pop and rock styles is easier to read.
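The difference between the meter scales is just a fixed offset: on a K20 meter, 0 on the scale corresponds to -20 dBFS. A minimal sketch (my own helper, not an Ardour API) shows why the same signal reads so differently:

```python
def k_meter_reading(dbfs, k=20):
    """K-system meter reading for a given dBFS level.
    On K20, 0 on the meter = -20 dBFS; on K14, 0 = -14 dBFS."""
    return dbfs + k

# A signal peaking at -15 dBFS:
print(k_meter_reading(-15, k=20))  # 5: looks "hot" on K20 but has 15 dB of headroom
print(k_meter_reading(-15, k=14))  # -1 on K14
# On a digital peak meter (no offset), +5 would mean +5 dBFS: hard clipping.
```

So "+5 dB" on a K20 master bus and "+5 dB" on a peak track meter describe levels 20 dB apart, which is likely the source of the confusion in the original question.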
Not to state the obvious, but if the mix doesn’t sound right to you then change the mix. If you can barely hear the vocals, perhaps you should lower the levels of everything else.
A traditional vocal chain would be EQ, then compressor, early in the channel (i.e. pre-fader). Depending on the singer's skill you may need just a slight amount of compression (low ratio, relatively high threshold), or a somewhat higher ratio to tame the dynamics if there is a lot of variation.
“Look which frequency range the voice uses mostly and try to keep other instruments out of that range”.
This.
Sounds compete for acoustic “space” on the frequency spectrum. Think about whether your vocal frequencies are competing with those of your instruments. If so, try adjusting the tone of one or the other to reduce this competition somewhat.
Another trick to make any track stand out is to duplicate the track and pan each one in an opposite direction by roughly equivalent amounts. 65/35 seems to work well in my experience, but experiment to find what works.
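Pan positions map to per-channel gains via a pan law. Here is a generic constant-power law as a sketch (a common textbook formulation, not necessarily Ardour's exact implementation), which keeps perceived loudness roughly constant as you sweep a source across the stereo field:

```python
import math

def constant_power_pan(pos):
    """Constant-power pan law.
    pos: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    Returns (left_gain, right_gain); the center position sits at -3 dB per side."""
    angle = (pos + 1) * math.pi / 4  # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

# A moderate "65/35"-style offset: one copy a bit left, the other a bit right
left_copy = constant_power_pan(-0.3)
right_copy = constant_power_pan(0.3)
print(left_copy, right_copy)
```

Whatever the exact numbers, the point of the law is that left² + right² stays constant, so moving the two copies apart changes width without changing overall level.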
The process of adjusting levels, frequencies, and panning is intended to recreate the soundscape you might encounter at a concert, with instruments in different positions on the stage. This is a nice video explaining the concept:
Also pay attention to your listening environment, it can really lie to your ears due to unequal sound reflections and resonances within and across the room space. This is why acoustic treatment is often added to mixing and mastering environments.
Best of luck and please consider posting your results when you’re ready to share your work.
Essentially, yes. What is typically done on recordings with professional musicians is to record the part multiple times. A good musician can perform it closely enough that you don't notice the two parts, but the sound of an instrument or voice never starts and stops exactly the same way twice, so you get an effect like an ensemble playing together.
If you do not want to play the part again, then when copying the track you can add a very slight pitch shift of just a few cents, change the eq noticeably, or add some other processing so that the left and right panned copies are no longer exactly the same.
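The "few cents" figure translates to a pitch ratio via the standard equal-tempered formula (a generic formula, not tied to any particular plugin):

```python
def cents_to_ratio(cents):
    """Pitch/playback-rate ratio for a shift in cents.
    100 cents = 1 semitone, 1200 cents = 1 octave."""
    return 2 ** (cents / 1200)

# A few cents of detune is a tiny ratio change -- enough to decorrelate
# the panned copies without sounding out of tune:
print(cents_to_ratio(5))   # a hair above 1.0 (~1.0029)
print(cents_to_ratio(-5))  # a hair below 1.0
```

A shift that small is inaudible as a pitch change on its own, but it is enough to keep the two panned copies from summing back into a plain level boost.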
There is or was a plugin with a name something like autotalent which performed the pitch shifting and doubling for you.
There are also commercial plugins which implement the idea, but the first link describes a way to do it without a plugin, and the next two links are free plugins you can try.
“duplicate a track” to me means doing a “copy & paste” of the one track/take that you have.
Panning those two tracks left and right is pointless, because their sum is mono again and you would get the exact same result by just adding ~6 dB on the gain knob of the original mono track…
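Both claims, the ~6 dB figure and the lack of stereo width, can be checked numerically (the sample values below are made up for illustration):

```python
import math

def gain_db(factor):
    """An amplitude factor expressed in decibels."""
    return 20 * math.log10(factor)

# Two identical copies panned hard L/R collapse to double the amplitude:
print(round(gain_db(2.0), 2))  # 6.02 dB -- same as just turning up the fader

take = [0.5, -0.3, 0.8]            # hypothetical samples of the original take
duplicate = list(take)             # copy & paste of the same region
second_take = [0.52, -0.28, 0.79]  # a real second take differs slightly

# The "side" (L minus R) signal is what carries stereo width:
print([l - r for l, r in zip(take, duplicate)])              # all zeros: pure mono
print([round(l - r, 2) for l, r in zip(take, second_take)])  # nonzero: actual width
```

The copy-and-paste duplicate has a side signal of exactly zero everywhere, while the genuinely different take leaves small residual differences, which is precisely what your ears hear as width.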
“doubling” means (also as Chris pointed out) that you have two (or more) different takes of the same melody on different tracks. Even if those takes may sound “the same” at first, they will (due to human performance) contain little differences in volume, timing, pitch, and tone, which is enough to get an actual stereo image when panned left/right.
=> this is what you should do, maybe even using three tracks/takes in total (main vocal on center, the other two turned down in volume and panned hard left/right). On the other hand, you shouldn’t do this during the whole song, but rather just on the chorus and on to-be-emphasized phrases during the verse…
This is what the chorus effect tries to simulate. Most chorus plugins are stereo-based and have more of a “lushness” sort of sound, where the original and effected signals are blended. But panned hard to one side, with the original on the other, it can double the track.
https://chrisarndt.de/plugins/adt is just the LV2 plugin ID URI; it is not a “real” URL. The homepage of my plugin projects is usually their GitHub repo page. However, I have now set up a redirection from the plugin URI to the respective GitHub page for all my plugin projects.