Is it really possible to get down to 0 Xruns?

Hi,

FWIW, the kernel I get the best performance with is this one: 4.0.0-040000-lowlatency #201504121935 SMP PREEMPT. I honestly can’t remember where I got it - some googling might be able to retrieve it. My system is an AMD FX 4300 on an ASUS M5A78L-M motherboard, running Linux Mint with the usual audio tweaks.
I’ve compiled a good few RT kernels and installed pre-compiled RT kernels, and with the same settings in JACK (64 frames/period, 2 periods/buffer, onboard sound card) I always get xruns. For some reason the lowlatency kernel above is the one that performs best (no xruns), and I don’t even have to set the CPU governor to ‘performance’. Also, RT kernels tend to freeze the PC on me at random moments, so I’ve ended up staying away from them.
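(For reference, those settings correspond to starting JACK roughly like this; hw:0 is just a stand-in for whatever ALSA calls the onboard card.)

```bash
# 64 frames/period, 2 periods/buffer, realtime scheduling (-R).
# hw:0 is an assumption -- run `aplay -l` to find the real card name.
jackd -R -d alsa -d hw:0 -r 44100 -p 64 -n 2
```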

I just run Addictive Drums with vsthost to play an electronic drum kit via MIDI, but I think zero xruns with the above settings and with an onboard sound card is pretty impressive.

I have the same feeling… There is something fishy going on regarding low latency and modern Linux distros. Some people suggest avoiding systemd and going for Devuan, which is basically Debian without systemd.
I don’t know if that is a solution, but nonetheless, modern distros do not seem to work well out of the box. I have been using Lenovo/IBM laptops for a long time and used to be able to go down to 32 frames/period without x-runs, but nowadays 256 still produces occasional x-runs, even without any significant load.

To make things even more difficult, information on the web is scattered, full of tips and tricks that are no longer valid.

What we would need is a tool that does what realtimeconfigquickscan does, but up to date and with monitoring capabilities. Preferably a GUI-based, wizard-style, or ncurses tool.

Launch the tool and let it make basic suggestions on how to improve latency: kernel, RT priority, sound card priority, frequency scaling, and so on.

When every suggestion is dealt with, one could launch the tool again and let it monitor what happens during the session; if x-runs occur, it could tell you what else happened at the same time, like lots of interrupts from other devices.

I guess that is a similar approach to DTrace on Solaris systems…
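Nothing like that exists as far as I know, but a crude manual stand-in for the interrupt part is to snapshot /proc/interrupts around a session and diff the counters afterwards:

```bash
# Snapshot the per-device interrupt counters before and after a JACK
# session, then diff to see which devices fired most in between.
cat /proc/interrupts > /tmp/irq_before
sleep 60   # ...play or record during this window...
cat /proc/interrupts > /tmp/irq_after
diff /tmp/irq_before /tmp/irq_after
```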

If I were a developer I would try to make this a reality, since the number one issue with using Linux as a professional DAW platform is, IMO, “glitch-free audio on modern computers using up-to-date distros”.

I used to run Gentoo many years ago, but that should not be necessary in 2019 to make good use of a decent computer.

There are Ubuntu Studio and AV Linux, which come RT-kernel-ready for the end user. Normally the mainstream distributions do not ship an RT kernel, and kernel and sysctl settings need to be tuned for RT work…
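For example, the realtime privileges that realtimeconfigquickscan checks for are usually granted with a limits file along these lines (the file name is conventional, and it assumes your user is in the audio group; log out and back in afterwards):

```bash
# Let the 'audio' group use realtime priority and lock memory --
# the standard tweak realtimeconfigquickscan looks for.
sudo tee /etc/security/limits.d/audio.conf <<'EOF'
@audio - rtprio 95
@audio - memlock unlimited
EOF
```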

Also, allank last posted here in 2015 :slight_smile:

I’m only adding to the conversation because, like the OP, I still use an AMD FX system (an FX-6300). By contrast I have 16 GB of RAM but use 7200 RPM drives. I run antiX 17, essentially vanilla but with the Liquorix low-latency kernel added (not the most recent at the time, as that one interfered with mouse cursor fluidity). I have also run Ardour on MX Linux with the same kernel. In addition, I have installed various Ubuntu-based distros and tried running Ardour for a short time.

My own results are as follows:

Each and every Ubuntu variant presented problems related directly to audio performance. Whether it was overly high CPU usage in an empty Mixbus session or regular x-runs in both Ardour and Mixbus, the systems were unacceptable for my usage.

AV Linux worked out of the box with zero x-runs. The same for antiX and MX Linux. The interface used for testing was a UMC204HD. I now use both an M-Audio 192 and, most recently, an Audient iD44 with zero issues. I will go as far as to say that antiX (and AV Linux) is superior to Win10 ASIO performance at or under 256 frames/period with 3 periods/buffer (2 periods for my non-USB device, though I’m not sure this is critical). The only reason I ended up with antiX as my OS is that I didn’t need the vast amount of software included with AV Linux, so I chose to install a lean distro and add only what was necessary. I align with the advice that real-time kernels are unnecessary in 2019 (I re-posted the reasoning elsewhere on this forum, I believe).

Disclaimer: I might just be lucky with my particular combination of hardware and choice of OS. It works for me, so I thought I’d pass it on, given that I have the same CPU family and have gone through hours of testing and subsequent rejection of Ubuntu-based distros for audio work (at least as of the 2018 releases).

A second disclaimer: I don’t do any projects with huge numbers of tracks. My usage is generally 2-8 audio tracks with a modest number of effects on the tracks and master bus.

I’ve used Manjaro Linux (Xfce) for audio for a couple of years now and there have been zero problems. No xruns either, but I always use 1024 frames with 3 buffers. Manjaro lets you install several kernels (normal and realtime) side by side, and you choose the kernel when booting up.
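(For reference, kernel switching on Manjaro goes through mhwd-kernel; the exact rt package name varies by release, so the one below is only an example.)

```bash
mhwd-kernel -li                   # list the kernels currently installed
sudo mhwd-kernel -i linux419-rt   # install an rt variant (name varies by release)
```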

Thanks. I should clarify that I increase the buffers in Ardour and Mixbus for mixing/mastering. I only use lower settings when I’m recording virtual instruments (via JACK and GrandOrgue). I used to do the same in Samplitude/Sequoia too, switching from the ‘hybrid’ low-latency engine to the ‘economy’ engine. That said, Magix’s audio preferences/settings are now simply mind-boggling, and who knows what the best settings are these days… Thanks to the Ardour developers for keeping things simple with regard to audio settings!

The difference between a low latency and a real time kernel is their respective kernel scheduling latencies. A real time kernel has a lower maximum scheduling latency.

Kernel scheduling latency is the amount of time it takes a thread to wake up – the time between a thread being asked for a result and the kernel starting to process it. This is measured in microseconds, so you’d think it wouldn’t matter. But if you are running at a low latency, 500 us can matter.

To make the numbers easier, say we are using a buffer that must be calculated within 5 ms. If the DSP load is 90%, then the audio process took 4.5 ms to calculate a buffer, leaving 0.5 ms of headroom. 500 us = 0.5 ms, so a kernel scheduling latency of 500 us would eat all of that headroom and cause an xrun.
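A quick sanity check of that arithmetic, using the hypothetical numbers above:

```bash
# Headroom = buffer period minus DSP time. A scheduling spike that
# eats the headroom makes the buffer miss its deadline: an xrun.
awk -v period_ms=5.0 -v dsp_load=0.90 -v spike_ms=0.5 'BEGIN {
    headroom = period_ms * (1 - dsp_load)
    print "headroom:", headroom, "ms ->", (spike_ms >= headroom ? "xrun" : "ok")
}'
```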

Most of the time kernel scheduling latency is low – under 100 us – but it can spike, and these spikes can cause xruns. A real time kernel has fewer spikes, and its maximum latency is lower.

It’s possible to investigate this with cyclictest, part of the rt-tests package. Or you could draw a graph with this script:

http://www.osadl.org/Create-a-latency-plot-from-cyclictest-hi.bash-script-for-latency-plot.0.html
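The measuring part of that script boils down to one long cyclictest run (the plotting is gnuplot on top of the histogram):

```bash
# One SCHED_FIFO priority-90 thread per core (-Sp90), woken every
# 200 us (-i200), memory locked (-m), histogram up to 400 us (-h400),
# quiet until the final summary (-q). Needs root for SCHED_FIFO.
sudo cyclictest -l100000000 -m -Sp90 -i200 -h400 -q > cyclictest.log
```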

This is a graph of a low latency kernel:

[latency plot: kernel …0-arch1-1-ARCH #1 SMP PREEMPT]

This is a graph of a real time kernel:

[latency plot: kernel …3-rt1-1-rt #1 SMP PREEMPT RT]

A real time kernel helps get latency to its minimum without Xruns.

Could you explain why it could spike in a bare-bones system with networking etc. disabled? An honest question… If this is indeed the case, I might consider switching to RT. As I said, I’ve had zero problems so far, but there is, of course, some law of the universe that states that when I most need zero x-runs, there will be an x-run.

With all due respect, I don’t know what you have running on your system, and the kernel numbers don’t match. Are these from your own system? I might be missing something. Have you tried graphing the Liquorix low-latency kernel? If this is like the Windows LatencyMon recommendations, anything below 500 us means you are fine for real-time audio work.

EDIT: In a nutshell, laws of the universe notwithstanding, if I can get 128 and 256 buffers at my 3 periods for a USB interface with a low-latency kernel, why would I need to do any better than this for typical recording and mixing/mastering tasks?

256 frames per period with 3 periods is a noticeable amount of latency. At 44.1 kHz that’s around 17 ms. If you were playing a virtual instrument the latency would be 34 ms + hardware latency. That would be noticeable. (EDIT: the correct figure for jack2 in async mode is 29 ms + hardware latency.)

A low latency would be 5ms round trip – that wouldn’t be perceptible.

For the settings you’re using, a low latency kernel is fine. Computers have been able to handle eight tracks of audio since 1999. :slight_smile: For applications like live virtual instruments, or monitoring through the computer, where low latency is important, a real time kernel could improve performance.

Maybe I’m mistaken about what settings I use for recording virtual instruments. The latency really is imperceptible (and better than I can achieve in ASIO land). Whatever the actual values (lower than 128?), I can record without any distracting latency and, more importantly, without x-runs. I’ll report back with actual settings at some point.

No doubt I am confused about the numbers. I loaded Cadence with my pre-existing settings and I see 44.1 kHz, 128 samples (with 3 periods). Block latency says 2.9 ms. That’s the number I’ve always understood to be the same as the latency figure reported in Windows. Clearly I am incorrect if @merlyn is suggesting that 256 buffers equate to 34 ms. I know what sub-10 ms feels like on Windows (according to the control panel of my interface) and the performance I get on antiX is better.

@anon60445789 if you can show me a link to where I can find that Zen dorky tool, I’d be interested to look more into it. There isn’t any… Maybe that stat is just copy-pasted fake text… :slight_smile: All the statistics comparing MuQSS and Liquorix against stock kernels show no beneficial difference… I offered a link – but I had a little spat with the developers because we’re not allowed to elaborate on the topic… and I was a little upset that I wasn’t allowed to get more feedback on it… but we’re all cool…

:cowboy_hat_face:

Yes, there are a few ‘latencies’.

Cadence reports ‘block latency’, which is (number of frames per period)/(sample rate). In this case 128/44100 = 2.9 ms.

If you were using QjackCtl, latency is reported as (block latency)*(number of periods). In this case 8.7 ms.

When going through the computer there is an input buffer and an output buffer, so the total round-trip latency is 2*(block latency)*(number of periods). In this case 17.4 ms. (EDIT: the correct formula for jack2 in async mode is (block latency)*(2 + number of periods), giving 14.5 ms. See @x42’s post below.)

Then there is hardware latency, which on my soundcard is 30 samples, or 0.7 ms at 44100, on both the input and the output, making a total of 18.8 ms. (EDIT: correct figure 15.9 ms.)
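Putting the corrected formula into numbers (0.68 ms per side is the 30-sample hardware latency at 44.1 kHz):

```bash
# jack2 async round trip = block latency * (2 + periods),
# plus the hardware latency on the way in and on the way out.
awk -v frames=128 -v rate=44100 -v periods=3 -v hw_ms=0.68 'BEGIN {
    block = 1000 * frames / rate                    # 2.9 ms
    printf "round trip: %.1f ms\n", block * (2 + periods) + 2 * hw_ms
}'
```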

I’m so totally and utterly confused by both your long post and the following one. I’m simply repeating how people on the web refer to Liquorix: as low-latency. For example, here: https://forum.mxlinux.org/viewtopic.php?t=45544

Now, the point of this topic was reaching zero x-runs, which I do, comfortably, while recording music that requires zero perception of latency. I do that with a combination of antiX 17 (soon to be 19), a lowly UMC204HD or other similar device, and the Liquorix kernel. I stated I might just be lucky, and it obviously depends on what else is going on in my OS and particular hardware combination.

Thanks. So would this be the same as Windows reporting under 10 ms in the interface’s control panel? If so, 18.8 ms seems absolutely fine for my harpsichord and organ recordings. It feels instantaneous!

You’ll need statistics and benchmarks. When it comes to performance, you need charts to back up the claim that there is a difference; so far, when I look at things on Phoronix, I don’t see any gain with the Liquorix kernel.

Phoronix is a pretty established name in Linux benchmarking; I am kind of surprised you have never heard of or mentioned them…

And this is how you drive good people away from forums.

https://liquorix.net/

" Hard Kernel Preemption : Most aggressive kernel preemption before requiring real-time patches."

Sure.

You’re probably using an RT kernel while telling everybody that you are not using an RT kernel.

Yes, if you don’t notice it, it’s good.

You can measure the actual latency with jack_iodelay by connecting a lead from an output of your soundcard back to an input. That also tells you the hardware latency, which you can put into the fields in Cadence called ‘Input Latency’ and ‘Output Latency’.
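Roughly like this, with the cable looped on the first channel (the port names below are the usual JACK defaults and may differ on your system):

```bash
# jack_iodelay registers a client called jack_delay; wire its signal
# out through the soundcard loop and back in, then read off the delay.
jack_iodelay &
jack_connect jack_delay:out system:playback_1
jack_connect system:capture_1 jack_delay:in
```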

Once again another absurd post to these forums. I’ve deleted it.

It helps no one whatsoever for you to do this sort of “brain dump” about kernels (or whatever). If you want to write up a post about RT kernels, that’s entirely welcome, but it should be structured, accurate, and actually helpful to real people. It should probably have its own thread. It must be kept up to date. Etc. etc.