What are some steps that I could take on my stock Debian 12 system to improve performance and functionality making music in Ardour 8?
Alternative kernel?
Audio backend settings?
Additional packages?
I’m considering hopping to something like AVlinux but I really like my existing experience on Debian, so I’m exploring how I might be able to improve it first.
Debian audio user here. Generally speaking, Debian is as capable for audio production as other major distributions, but of course it isn’t tuned for audio performance out of the box. Also keep in mind that if you run the stable release, some packages may get old, because Debian’s release cycle is quite long.
Your other questions are difficult to answer without more details, like: Why do you want to improve the performance? What are your goals? What kind of audio hardware do you have? Is low latency critical for you? And so on.
However, if you haven’t done any tuning yet, I would suggest running this script:
It should tell you what improvements need to be made to get your system ready for pro audio.
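For reference, the script is rtcqs (quoted later in this thread). Assuming it is published on PyPI under that name, one way to install and run it is:

```shell
# rtcqs checks a Linux system for common pro-audio configuration
# problems (governor, rtprio/memlock limits, swappiness, etc.).
# Installing via pip is one option; check your distro packages first.
pip install --user rtcqs

# Run the command-line checker; it prints [ OK ] / [ WARNING ]
# lines like the ones quoted further down in this thread.
rtcqs
```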
Hi Krzysztof, that was extremely helpful - thank you. I started making changes based on the output of that and am looking forward to testing any differences in Ardour.
The main problems I have had so far are latency when recording audio, sound glitching during playback (which makes mixing difficult and seems worse with more effects plugins), and difficulty setting up my MIDI keyboard’s transport controls. I suspect that the first two are performance related. I’m not using very powerful hardware, but I want to eliminate weakness in my system’s configuration before spending lots of money on a physical upgrade.
Don’t bother with a realtime kernel unless you really have to, though Debian makes it relatively easy since they ship one.
Some further hints: if you use USB devices, do not use hubs for any of the equipment; always connect directly to the PC. Check with lsusb -t that audio devices do not share the same root hub as other “slow” devices (keyboard, mouse, etc.).
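For example (the tree below is illustrative, not from a real machine):

```shell
# Print the USB topology as a tree; each "/:  Bus ..." line is a root hub.
lsusb -t

# Illustrative output -- here the audio interface has Bus 02 to itself,
# while the mouse and keyboard share Bus 01:
# /:  Bus 01.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/4p, 480M
#     |__ Port 1: Dev 2, If 0, Class=Human Interface Device, Driver=usbhid, 12M
#     |__ Port 2: Dev 3, If 0, Class=Human Interface Device, Driver=usbhid, 1.5M
# /:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/4p, 480M
#     |__ Port 1: Dev 2, If 0, Class=Audio, Driver=snd-usb-audio, 480M
```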
In /etc/default/grub I now have (following this guide) :
GRUB_CMDLINE_LINUX="preempt=full threadirqs cpufreq.default_governor=performance"
Is that right?
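For context, the full sequence I followed after editing the file (using plain ASCII quotes in the actual file) was roughly:

```shell
# The line in /etc/default/grub, with straight quotes:
# GRUB_CMDLINE_LINUX="preempt=full threadirqs cpufreq.default_governor=performance"

# Regenerate the grub configuration so the change takes effect, then reboot:
sudo update-grub
sudo reboot

# After rebooting, confirm the parameters are actually active:
cat /proc/cmdline
```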
jack2 isn’t showing up in my Debian 12 repo… I have jackd2 installed but I don’t remember it prompting me to set up any privileges.
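Here is how I checked the limits manually (the limits.d path is where the Debian jackd2 package puts them, if I understand correctly):

```shell
# Am I in the audio group?
groups

# Realtime priority and locked-memory limits for this session
# (rtcqs wants rtprio >= some high value and 'unlimited' memlock):
ulimit -r
ulimit -l

# The jackd2 package can install limits here (accepting the debconf
# "realtime priority" prompt during installation enables them):
cat /etc/security/limits.d/audio.conf
```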
I think I’ve got 3. and 4. done though.
Below is what rtcqs is currently showing me. Does it confirm that I’m properly set up?
Many thanks
Root User
[ OK ] Not running as root.
Group Limits
[ OK ] User is member of a group that has sufficient rtprio (95) and memlock (unlimited) limits set.
CPU Frequency Scaling
[ OK ] The scaling governor of all CPUs is set to performance.
Kernel Configuration
[ OK ] Valid kernel configuration found.
High Resolution Timers
[ OK ] High resolution timers are enabled.
Tickless Kernel
[ OK ] System is using a tickless kernel.
Preempt RT
[ OK ] Kernel 6.1.0-26-amd64 is using threaded IRQs.
Spectre/Meltdown Mitigations
[ WARNING ] Kernel with Spectre/Meltdown mitigations found. This could have a negative impact on the performance of your system. See also System configuration [Linux-Sound]
RT Priorities
[ OK ] Realtime priorities can be set.
Swappiness
[ OK ] Swappiness is set at 10.
Filesystems
[ OK ] The following mounts can be used for audio purposes: /, /home
[ WARNING ] The following mounts should be avoided for audio purposes: /boot. See also System configuration [Linux-Sound]
IRQs
[ OK ] Soundcard snd_hda_intel:card0 with IRQ 129 does not share its IRQ.
[ OK ] USB port xhci_hcd with IRQ 128 does not share its IRQ.
Power Management
[ OK ] Power management can be controlled from user space. This enables DAWs like Ardour and Reaper to set CPU DMA latency which could help prevent xruns.
This has been so helpful. After making these changes to my budget laptop, I created a session and simultaneously recorded 8 tracks of audio. Then I put an amp sim on each track, created 8 stereo aux busses with ACE Reverb on each, made the room sizes big, and sent each track to each bus. Then I put a saturating limiter on the master. When I run it on loop, the DSP load is about 15%! Amazing!
Thank you
The rtcqs output shows you did the most important tweaks and your OS is configured to handle audio tasks pretty well. Do you see an improvement?
If not, please tell us which audio backend you use with Ardour, and provide your working sample rate/buffer/periods and the type/model of your audio hardware.
Great! Yes, I’ve noticed an improvement; my system is no longer getting overwhelmed during playback of the projects I’ve checked, and latency seems better, especially after experimenting with sample rate and buffer size. The latency is still noticeable but it’s definitely usable. My audio interface supports direct monitoring too, but I find getting the monitored sound nice really helps to inspire a good take, particularly with vocals.
The best results I’ve found so far are from these settings:
ALSA
96 kHz
768 samples (8.0 ms)
2 periods
Hardware Input Latency: 28
Hardware Output Latency: 28
When I run the Latency Measuring Tool with these settings I get:
Round trip latency: 1592 samples (16.583 ms)
Systemic latency: 56 samples (0.583 ms)
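Doing the arithmetic on those numbers myself, just to check I understand where they come from:

```shell
# Buffer latency = buffer_size / sample_rate
awk 'BEGIN { printf "%.3f ms\n", 768 * 1000 / 96000 }'    # 8.000 ms
# Round trip: 1592 samples at 96 kHz
awk 'BEGIN { printf "%.3f ms\n", 1592 * 1000 / 96000 }'   # 16.583 ms
# Systemic: 56 samples at 96 kHz
awk 'BEGIN { printf "%.3f ms\n", 56 * 1000 / 96000 }'     # 0.583 ms
```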
I haven’t experimented with changing the ‘periods’ number and am not sure what that refers to…
My audio interface is a Focusrite Scarlett 2i2. I think it’s 2nd gen but am not 100% sure.
Do you think I can reduce latency any further or increase performance on my current hardware?
This whole conversation has been very valuable and I really appreciate the help.
This thread is EXCELLENT and I wish I had found it about 6 months ago, when I was chasing random xruns. Bookmarking for future reference!
Referencing your last post @WillowMeWunder, there is probably no need for 96 kHz. Modern music is typically recorded at half that, 48 kHz; and CD quality is 44.1 kHz. Reducing your 96 kHz to 48 or 44.1 kHz will help TREMENDOUSLY, and I seriously doubt your ear will hear any difference. What you’re doing is analogous to adding a “dog whistle” to your recording… it’s there, but you can’t hear it and you never will. Dial your sample rate back to a range that is more “human”.
With your sample rate changed, you can adjust your samples/buffer to a lower setting. 256 seems to be a sweet spot for me, yielding 5.8 ms latency. Your mileage may vary, but that’s a good target for you. If you can go even lower, 128 or 64 will halve your latency with each step down. I might be able to go lower, but I got the results I wanted at 256 samples, and latency was okay, so I stopped there. Been busy making music, but I’ll probably tinker and tweak to see what my limits are when I’m not so busy.
I can’t speak to your Hardware I/O latency, as I just leave mine at 0. The latency you are hearing is coming from your round trip latency (16.583ms). Wow! I can’t imagine getting anything done with that kind of latency…that must be VERY distracting. Ideally, you want that round trip latency to be around 10ms or less. The lower the better. If you can use the settings above, yielding 5.8ms latency, then the round trip latency will be 11.6ms. That’s pretty close to 10ms and may be good enough for you. Of course, lower is better, but you’ll be bound to your hardware capabilities. Reducing sample latency will automatically reduce Round Trip Latency, so focus on that and getting Round Trip to 10ms or less and I think you’ll be very pleased with the results!
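For reference, my 5.8 ms figure is just buffer size over sample rate (I’m assuming 44.1 kHz here; at 48 kHz the same 256-sample buffer gives about 5.3 ms):

```shell
# 256 samples at 44.1 kHz:
awk 'BEGIN { printf "%.3f ms\n", 256 * 1000 / 44100 }'       # 5.805 ms
# Round trip is roughly double that:
awk 'BEGIN { printf "%.3f ms\n", 2 * 256 * 1000 / 44100 }'   # 11.610 ms
```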
Yeah me too! It’s been extremely useful and illuminating!
I’ve been tinkering with my settings a bit more and have found that I can pretty much eliminate audible latency by lowering the sample buffer, like you say. Interestingly though, the more I lower it the more it strains my system during playback. So it seems the best way forward for me (on my current hardware at least) is to lower it as much as possible when recording audio and raise it the rest of the time, particularly when mixing larger projects.
Yes, you’re right about high samples rates. I probably won’t be making any music for bats or porpoises.
There is the argument that higher sample rates actually lower the latency. This is because latency is caused, in part, by buffering; specifically waiting for the buffer to fill.
There are two ways to lower this buffering latency: make the buffer smaller, or fill it up quicker. A higher sample rate fills a buffer of the same size more quickly.
Of course, the upshot is your computer then has twice as much data to process, which then contributes to the CPU load.
So (for instance) doubling the sample rate is roughly equivalent to halving the buffer size, but I have no idea which of these is better in practice, in terms of CPU load, xruns, etc.
I have never heard that before. I’m not saying you’re wrong, just that I’ve never come across this advice before. It seems counter-intuitive, but I can follow your logic, and track along to that conclusion. I appreciate the fresh perspective and will keep that in mind when I’m tinkering with my system, trying to get it optimized.
That is like playing guitar at the front of the stage while your amp is at the back of the stage (sound travels about 5.6m in 16.6ms). Plenty of people get stuff done when they are 5m or 6m in front of their amp.
Nothing wrong with lower latency if your system can handle it, but make sure you actually notice a difference and are not just obsessing over numbers.
I’m fine at 5.8 ms (11.6 round trip), but definitely notice a big issue at 11.6 ms (23.2 round trip). 16 ms is right in the middle of that range, so I was interpolating between “no problem” and “impossible to record”. I don’t think I could work with that, so kudos to anyone who can.
And maybe it’s a matter of getting used to it? I’ve never been 15-18 ft from my amp. Maybe 10 at most? So I have no frame of reference for 15-18 ft.
You are correct though… don’t obsess over the numbers. But they are useful as benchmarks for comparison. Everybody’s different.
It’s not as counter-intuitive as it might seem at first glance. The latency is basically buffer_size / sample_rate (not taking into account latency generated by the audio hardware, of course). So lowering the buffer size or raising the sample rate by the same factor will yield exactly the same latency. In contrast to the hardware latency (which must be measured), it is pure math.
And which is better: lowering the buffer or raising the sample rate? I think the former. Less data, less stress on I/O, etc.
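To illustrate the equivalence with concrete numbers (the two settings below are hypothetical examples):

```shell
# Per buffer_size / sample_rate, 256 samples at 48 kHz and
# 512 samples at 96 kHz give the same buffer latency:
awk 'BEGIN { printf "%.3f ms\n", 256 * 1000 / 48000 }'   # 5.333 ms
awk 'BEGIN { printf "%.3f ms\n", 512 * 1000 / 96000 }'   # 5.333 ms
# But the 96 kHz case pushes twice as many samples per second
# through the CPU, the plugins, and the USB bus.
```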
I very much suspect you are right in the general case. But I do wonder if it’s system specific. For instance, it has been my experience that certain interfaces, and certain drivers, do not work well at lower buffer sizes. I wonder whether, in those cases, having a larger buffer with a higher sample rate would yield a more stable setup.
I am speculating here though. I wonder if anyone in the community has ever tested this.
Good point. It can be system dependent as well. There is a myriad of devices and some behave really strangely. We all know the cases where dialing some crazy numbers into the device configuration gives an unexpected performance boost. Some devices don’t like a certain buffer size, some USB devices work better with 2 periods/buffer than the recommended 3, and so on. You never know.
To give some reference based on research from a few years back (going off memory, so take it with some salt): well-trained professional musicians tend to be able to hear latency down to about 10 ms. Some of the best drummers in the world can hear down to 3 ms. Most musicians have a threshold well above 10 ms, but it varies; by about 30 ms I think most musicians would notice something off.