I saw a post about high sample rates and how unnecessary they are, e.g. 192 kHz. The website posted audio files (of a sine wave, I think) and said the files should have no audible distortion; if you heard sounds, it means something in your hardware is folding high-frequency information back down. I did the test and I heard sounds. Have others noticed small amounts of noise from their hi-res sessions? I’m not an expert on how hi-res audio should be analyzed, but it makes me wonder if I should even bother trying high sample rates.
To cut a long story short: never use sample rates higher than 48 kHz; there is no benefit whatsoever in doing so, only drawbacks.
Long story: a human cannot hear anything beyond 20 kHz. The main selling point of sample rates above 48 kHz is that there are supposedly some kind of magic frequencies up there that give extra clarity / air to the recording. They always fail to mention that to record and play back 96 - 192 kHz, your mic, every amplifier in the signal chain and your speakers need to be able to reproduce these frequencies. Amplifiers that are not designed for this will probably generate noise in the audible range trying to do so. And besides that, 96 - 192 kHz recordings take lots of storage space, and playing them back or processing them with plugins takes extra processing power.
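The fold-down effect described above can be sketched in a few lines: a tone above the Nyquist limit (half the sample rate) produces exactly the same sample values as a mirrored tone inside the audible band. The specific frequencies here are arbitrary picks for illustration.

```python
import math

fs = 48_000             # sample rate in Hz
f_ultra = 30_000        # a tone above the 24 kHz Nyquist limit
f_alias = fs - f_ultra  # 18_000 Hz: where that tone folds down to

# Sampling both tones at fs yields identical sample values: a 30 kHz
# cosine is indistinguishable from an 18 kHz cosine at 48 kHz.
n = 16
ultra = [math.cos(2 * math.pi * f_ultra * i / fs) for i in range(n)]
alias = [math.cos(2 * math.pi * f_alias * i / fs) for i in range(n)]
max_diff = max(abs(a - b) for a, b in zip(ultra, alias))
print(f"max sample difference: {max_diff:.2e}")  # ~0: the tones alias
```

This is why hardware that leaks ultrasonic content can end up producing audible tones: anything at 30 kHz that isn’t filtered out before sampling at 48 kHz lands at 18 kHz.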
Since it happens with two different soundcards (Presonus 1818VSL and Focusrite 18i6), it’s likely the speakers (Behringer Truth B2031A) that can’t handle it and fold the frequency down into the audible range. Then again, I also hear it in the headphones (DT 770 Pro).
I understand the arguments against high sample rates, but I hear many say that EQ and compression sound more detailed, reverbs are clearer or more 3D-sounding, plugins have more character, etc. I’m wondering what people are hearing if high sample rates are useless.
EQs are fine (if they’re decramped). I can see that some reverbs may produce audible artifacts from high-pitched content, but calling that “more 3D sounding” smells like subjective bullshit. And compression should not be affected at all.
In some cases synths can benefit from higher rates to avoid aliasing (in the frequency domain, e.g. harmonic distortion when waveforms don’t fit exactly in a table), but in those cases the synth can oversample internally in its DSP or use similar techniques.
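As a rough illustration of “oversample internally”, here is a deliberately crude sketch (not any real synth’s code; the function names and drive value are made up): a sample-wise tanh() waveshaper, and a variant that linearly interpolates up 4x, shapes at the higher rate, and box-filters back down. Real plugins use proper band-limited resamplers (e.g. polyphase FIRs) rather than this toy interpolation.

```python
import math

def saturate_naive(x, drive=4.0):
    """Sample-wise tanh() waveshaper: cheap, zero latency, but the
    harmonics it creates above Nyquist alias back down."""
    return [math.tanh(drive * s) for s in x]

def saturate_oversampled(x, drive=4.0, factor=4):
    """Crude 4x-oversampled version: linear-interpolate up, shape at
    the higher rate, then box-filter and decimate back to 1x."""
    up = []
    for a, b in zip(x, x[1:] + x[-1:]):
        up.extend(a + (b - a) * i / factor for i in range(factor))
    shaped = [math.tanh(drive * s) for s in up]
    # Averaging each group of `factor` samples is a (very) simple
    # low-pass before decimation.
    return [sum(shaped[i:i + factor]) / factor
            for i in range(0, len(shaped), factor)]
```

The point is only the structure: the nonlinearity runs at 4x the rate, so its harmonics have four times as much headroom before they hit Nyquist and fold back.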
The fun part is that mp3 cuts everything above about 19kHz anyway and most consumer hardware cannot reproduce these frequencies in the first place.
If you have a proper recording and replay system, listeners cannot distinguish it (see Listening Tests in the referenced article).
If I could get a dollar for every stupid, unsubstantiated, false, provably wrong thing that people say about audio, I could stop working on Ardour and write a new DAW.
There’s only one argument in favor of higher sampling rates: the anti-aliasing filter used to prevent signal above the Nyquist value from folding back around and becoming noise can be steeper (more dB/octave) and thus closer to the ideal (which is a vertical brickwall filter: infinite dB/octave). Whether or not the improvement between actual anti-aliasing filters at different rates is worth the extra disk space and extra CPU time is not really clear.
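To put numbers on “steeper”: assuming a (hypothetical) target of 100 dB stop-band attenuation above a 20 kHz passband, the slope the filter needs shrinks dramatically as the Nyquist frequency moves up.

```python
import math

def required_slope(passband_hz, nyquist_hz, attenuation_db=100.0):
    """dB/octave the anti-aliasing filter needs in order to reach the
    given stop-band attenuation between the top of the audible band
    and the Nyquist frequency."""
    octaves = math.log2(nyquist_hz / passband_hz)
    return attenuation_db / octaves

for fs in (44_100, 96_000, 192_000):
    print(f"fs = {fs:6d} Hz -> {required_slope(20_000, fs / 2):6.1f} dB/oct")
```

At 44.1 kHz the filter has only about a seventh of an octave to work with (roughly 700 dB/octave needed); at 96 kHz it has well over a full octave, which is far easier to build without passband side effects.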
There is one other possible argument too: it is theoretically possible that humans can experience frequencies above the traditional hearing cutoff (around 20kHz in young people) via sensory mechanisms that do not involve the auditory system. There is almost no evidence of this at this time, and even if it turns out to be true, most modern audio equipment (analog as well as digital) is not built with this concept in mind. There are very few speakers that would not cause horrible distortion up in the ultrasonic range, certainly not ones you’d find in gear meant for listening to music.
Robin hit the nail on the head with his post on it. There are very specific cases where it can make a difference, depending on the specific processing involved, but those cases are few, far between, and very often misunderstood.
I use 96k on occasion, if for no other reason than to make sure Nyquist doesn’t bite me, especially given I can’t guarantee all DSP manufacturers work to make sure it doesn’t present a problem. But the vast majority of content is still recorded and produced at 48k and 44.1k for good reason. Honestly, I do not use 192k at all.
Also, as Robin mentioned, keep in mind that most sound systems cannot reproduce 20 Hz - 20 kHz evenly. Live systems in particular, which I get paid to design, can be very iffy about this, and larger listening areas make this even more difficult. Combine that with the fact that most live mixing consoles still run at 48k, and honestly, people overthink this way too much :)
No, plugins can’t provide “higher fidelity” simply by increasing the sample-rate.
Check with a spectrum analyzer: you are either feeding silence or noise in the spectrum above 24 kHz into the DSP, and that won’t magically increase quality. You do, however, double the CPU load (more samples need to be processed).
In any case I’d worry a lot more about mic placement and room treatment; those are far more significant. And obviously the composition and performance.
If you are simply recording audio and playing it back, there is little to be gained by using higher sample rates - the limit of human hearing is 20 kHz (and it generally gets worse with age). You can completely reproduce the entire audible spectrum by sampling at 44.1 kHz.
When it comes to plug-ins, “it depends” - a plug-in might need to run at a higher sample rate internally, if for example it does some form of waveshaping - and the mechanism used for the up / down sampling might be different dependent on the host sample rate, which could account for perceived differences - though you might expect them to be slight. It depends on the design of the plug-in.
Upsampling, oversampling, or just running everything at a higher rate does not automatically make a plug-in better; in many cases the plug-in’s sample rate conversion to a higher internal rate can add its own artefacts too. Like most of engineering, it’s a compromise, and different designs / designers choose different trade-offs. It doesn’t automatically mean that the plug-in with the higher sample rate is better.
And be very careful to compare like for like - even a fraction of a dB louder can make things sound subjectively ‘better’, (in fact even a nice fancy GUI can make things sound better…)
Also beware that obsessing over ‘is the sample rate high enough’ is often a distraction from ‘the recording doesn’t sound good, because the recording doesn’t sound good’ - spend more time on capturing the best performance. Some of the best analogue recordings still stand up against their modern digital counterparts, and could well have been recorded on (vintage) analogue multi-track equipment with a dynamic range approximately equivalent to 12-bit digital and a top-end response below 15 kHz. If your recording sounds bad, it’s almost certainly not because you’re using only 24-bit at only 44.1 kHz.
I’m happy recording at 48 kHz / 24-bit, and have been all my life. The only thing that bothers me sometimes is sound design: when I’m looking for experimental sounds I draw from field recordings, with a lot of pitch shifting and layering, and that is the moment I wish I had made a 192 kHz session and recorded at that rate as well. By pitch-shifting everything down an octave I would have more material to “draw” into the audible spectrum (in case the pitch shifter uses the information above 20 kHz - does Ardour use it for the internal pitch shift?), and there is a lot of sound happening above 20 kHz. But for now I just do that occasionally, in a separate session, as it does not justify the overall higher DSP load and disk usage.
It might be no bug but imperfect design. Amp simulation is mostly a saturation effect. Let’s say I want to code a simple tape saturation, so I choose a saturation function f and the plugin just returns S_out = f(S_in) sample-wise. Cheap and no latency. This will produce harmonics both below and beyond(!) the Nyquist frequency; the latter get folded back into the audible spectrum and introduce bad non-harmonic content. With massive saturation this can be quite disturbing (especially for pure high-pitched input sounds such as a synth). One way to work around it is to raise the Nyquist frequency by oversampling / choosing a higher sampling rate.
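To see where those folded harmonics actually land, here is a small sketch. The 7 kHz fundamental is an arbitrary pick, and the choice of odd harmonics assumes a symmetric saturator like tanh(), which mainly adds odd harmonics.

```python
def alias_freq(f, fs):
    """Frequency (Hz) where a partial at f lands after sampling at fs."""
    f = f % fs
    return fs - f if f > fs / 2 else f

f0 = 7_000  # a pure, high-pitched synth note
for fs in (48_000, 192_000):
    folded = {k: alias_freq(k * f0, fs) for k in (3, 5, 7)}
    print(f"fs = {fs}: harmonics land at {folded}")
```

At 48 kHz the 5th and 7th harmonics (35 and 49 kHz) fold down to 13 kHz and 1 kHz, neither of which is harmonically related to 7 kHz; at 192 kHz they stay ultrasonic, where a decimation filter can remove them cleanly.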
Anyway, it is strange to find that behavior in a plugin that costs hard cash.
So I would say this is the second argument for higher sampling rates. (Anyway, I always choose 48 kHz because of the intermodulation effects and other downsides.) @x42 How can a reverb produce that kind of artifact? Convolution reverbs certainly do not (if the IR is anti-aliased properly). Sorry, I’m no expert, just curious. I always thought reverbs were a good example of a digital effect that strictly does not suffer from aliasing.
I was thinking that low frequencies can produce ultrasonic resonances when using higher sample rates.
In a natural space those would just be superpositions and inaudible, while in a digital or digital/analog environment they may cause issues.
I also had algorithmic reverbs in mind when I wrote that. Those may subjectively be more 3D sounding at different rates. Depending on delay-lines, comb-filter, feedback, modulation, internal state, etc the implementation may not be SR invariant.
As for convolution reverbs, you’re right. If you have a band-limited IR, a higher sample rate won’t make any difference whatsoever, since the signal is already band-limited. Depending on the implementation there are perhaps some re-sampling artifacts, though.
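That point can be checked numerically: convolution multiplies spectra bin by bin, so any bin that is zero in the IR’s spectrum stays zero in the output; convolution cannot create new frequencies. A toy-sized sketch with made-up numbers (a naive O(n²) DFT is fine at this size):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real sequence."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def convolve(x, h):
    """Direct linear convolution, output length len(x)+len(h)-1."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

x = [1.0, 0.5, -0.25, 0.0]  # arbitrary input signal
h = [0.6, 0.3, 0.1]         # toy "impulse response"
y = convolve(x, h)
n = len(y)                  # zero-pad both inputs to the output length
X = dft(x + [0.0] * (n - len(x)))
H = dft(h + [0.0] * (n - len(h)))
Y = dft(y)
err = max(abs(Y[k] - X[k] * H[k]) for k in range(n))
print(f"max |Y - X*H| = {err:.2e}")  # ~0: convolution only scales bins
```

So the only way a convolution reverb adds ultrasonic content is if the IR itself contains it, or if the resampling step introduces artifacts.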