New user wanting to understand…
In the Audio/Midi Setup window, setting the buffer size gives a delay figure in ms. This figure is different to the I/O latency on the control bar of the main Ardour window. Then when I use the Audio/Midi Setup window button to Calibrate Audio and apply the adjustment, this reports different figures for round trip and systemic latencies, none of which match either the updated or earlier I/O figure on the main screen.
My question is: what do all these different latency labels in Ardour mean, and how do they relate to each other?
- Buffer Size latency (ms)
- Calibrated round trip (ms)
- Calibrated systemic (ms)
- Main screen I/O latency (ms)
- Hardware Input Latency (samples)
- Hardware Output Latency (samples)
I’ve looked in the manual and searched extensively online and am still not sure. Eventually I resorted to looking for clues in Dr Gareus’ 2017 thesis, which may or may not reflect my Ardour version (Ardour 7.3.0~ds0 “Nerve Net” (rev 7.3.0~ds0-1) Intel 64-bit, running on Debian Bookworm, installed from the Debian repos). The thesis seemed to suggest (by my unqualified reading, and admittedly in a more general context than Ardour itself) that round trip means the buffer delay plus all other delays (PC hardware and software, including the USB audio interface) in both directions: from mic, through the computer, and out to the speakers or headphones.
By the way - I’m not using JACK (the round trip diagrams I’ve found for Ardour don’t explicitly include ALSA or Ardour), and for what it’s worth I’m sticking with a standard kernel, not an RT one.
I tried doing some calcs on all the figures that appear in Ardour and they don’t add up for me, at least not by my understanding of what to multiply, divide and add up! But really what I want to know is which of the figures in the GUI actually express the delay that I get in my setup. Like is the main screen I/O latency the same as round trip, and if not, what is it?
Ultimately I want to know so as to make informed decisions on sample rate, buffer size and periods settings, in relation to what kinds of tasks I might be able to use Ardour for successfully on my computer, and where I might need to compromise.
buffer size: number of samples between (effective) cpu interrupts from audio interface … the number of samples processed as a single block by the application. Larger numbers: more latency, less CPU load.
calibrated round trip: measured value for external signal → audio interface → CPU → audio interface → external signal
calibrated systemic: part of previous value that is not caused by buffer size choices (i.e. internal hardware latency in your audio interface)
I/O latency: buffer size * 2 (technically, on Linux where you can have any number of buffers, buffer size * number of buffers). You can also think of it as the time from when an input sample is first accessible to software on the CPU to when the corresponding output sample is delivered back to the audio interface.
hardware (input|output) latency: the two halves of calibrated systemic, normally we can only measure the sum and divide by two on the assumption that they are equal
None have anything specifically to do with JACK.
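To make these relationships concrete, here is a minimal Python sketch of the definitions above. The function names are mine, and the formulas simply restate the reply (buffering latency scales with buffer size and number of buffers; round trip adds the systemic latency on top), so treat this as an illustration rather than Ardour’s actual code:

```python
# Sketch of the latency relationships described above (not Ardour's code).

def buffer_latency_ms(buffer_size, sample_rate):
    """Latency of one period (one buffer) in milliseconds."""
    return buffer_size / sample_rate * 1000.0

def io_latency_ms(buffer_size, num_buffers, sample_rate):
    """Worst-case I/O latency: buffer size * number of buffers (ALSA periods)."""
    return buffer_size * num_buffers / sample_rate * 1000.0

def round_trip_ms(io_ms, systemic_ms):
    """Approximate round trip: buffering plus converter (systemic) latency."""
    return io_ms + systemic_ms

# Example: 96 kHz sample rate, 256-sample buffer, 3 periods, 14.354 ms systemic
sr = 96_000
print(buffer_latency_ms(256, sr))   # ~2.67 ms per period
print(io_latency_ms(256, 3, sr))    # 8.0 ms of buffering
print(round_trip_ms(8.0, 14.354))   # ~22.35 ms round trip
```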
Thanks for the quick reply Paul,
I wonder if you can indulge looking at this worked example, which hopes to show where I’m still struggling with this.
The example uses a 96 kHz sample rate, 3 periods and a 256-sample buffer size (with no claims these are good settings!):
Buffer Size latency (ms) [ pre calibration 2.7 ] [ after calibration still 2.7 ]
Calibrated round trip (ms) [ 22.354 ]
Calibrated systemic (ms) [ 14.354 ]
Hardware Input Latency (samples) [ calibrated 689 samples ]
Hardware Output Latency (samples) [ calibrated 689 samples ]
Main screen I/O latency (ms) [ pre calibration 8.33 ] [ after calibration 22.69 ]
Since you said about I/O latency:
“buffer size * 2 (technically, on Linux where you can have any number of buffers, buffer size * number of buffers)”
…and I am using three in this example, then I figure:
~~(256*3)/69= 11.130434782608695 ms~~
~~I note the main screen figure 22.69/2= 11.345 is close to the above, so guessing it shows I/O “there and back” somehow calculated slightly differently. Is that right?~~
(256*3)/96= 8.0 ms
whereas I note the main screen figure comes out at 22.69/2= 11.345ms
Then looking at the calibrated round trip latency = 22.354
This is said to be the sum of:
- systemic (audio interface and USB) = 14.354
- and the buffers: the remainder, 22.354 - 14.354 = 8.0, pretty much represents the buffers (and matches the figure calculated in the paragraph above)
… since buffers take = 2.7ms (actually 256 samples / 96 samples per ms = 2.66ms)
I note that 2.666*3= 7.99
so it seems stated buffer size latency is “per period”, is that right?
~~Regarding the calibrated adjustment of 689 samples in and out, at 69 samples/ms takes 689/69= 9.986ms~~
~~Whereas to adjust for the measured systemic delays of 14.354ms at 69 samples/ms would seem to require 14.354 * 69= 990.426 samples~~
~~split into i/o that’s 990.426 / 2 = 495.213 samples in each direction~~
~~Why is this different to the 689 reported in the Ardour calculation?~~
I think it is just a typo.
It appears you simply transposed 96 to 69 samples per ms. The same math using 14.354*96 works out as expected.
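A quick check of the corrected arithmetic (assuming the 96 kHz rate from the example, i.e. 96 samples per millisecond, and the even in/out split mentioned earlier in the thread):

```python
# Verify the correction: 96 samples/ms, not 69.
sample_rate_khz = 96          # samples per millisecond at 96 kHz
systemic_ms = 14.354          # measured systemic latency (in + out)

systemic_samples = systemic_ms * sample_rate_khz   # total, both directions
per_direction = systemic_samples / 2               # assume equal halves
print(round(per_direction))   # 689, matching Ardour's reported value
```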
"It appears you simply transposed 96 to 69 samples per ms. "
Thanks so much for checking this through this thoroughly. I’ve struck the paragraph out so your reply will make sense in the thread.
I’m still puzzled by the remaining points, and especially that the I/O latency 22.69 is larger than the whole round trip calibrated measurement 22.354 (this last detail is something I didn’t highlight before).
The I/O latency number (basically, buffer_size * nbuffers) is a worst case number. It is the maximum delay possible between a sample being ready to be read by the CPU and the corresponding processed/generated sample being ready to be read by the audio interface. There will be some jitter in the actual I/O roundtrip, which is of no real-world consequence, because the audio interface handles a continuous stream of samples. The fact that the CPU managed to deliver samples before they were required will not affect when they are actually played. However, if they are ever delivered late, you will get an underrun.
Since the calibration process isn’t subject to worst case scenarios (certainly not constantly), it can be expected to be different. In this case, about 33 samples different, which likely corresponds to the “window” used by the calibration code to detect and sync the loopback signal.
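Expressing the gap between the two figures in samples makes the point concrete (a rough check, assuming the 96 kHz rate from the example; the exact rounding of Ardour’s displayed milliseconds is unknown, so the result lands near, not exactly on, the figure quoted above):

```python
sr_khz = 96                   # samples per millisecond at 96 kHz
io_reported_ms = 22.69        # main screen I/O latency after calibration
round_trip_ms = 22.354        # calibrated round trip measurement

gap_samples = (io_reported_ms - round_trip_ms) * sr_khz
print(round(gap_samples))     # ~32 samples, close to the ~33 mentioned above
```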
I’m obviously out of my depth here, but after making the correction Chris kindly pointed out, the discrepancy between the I/O figure and my calculations seems to be a bit larger than I’d initially sketched out.
Just saying this as your comment may have crossed in time with my correcting edits.
I’m comparing the I/O latency of 22.69 msec to a measured round trip of 22.35 msec, so my numbers are current, but I still may need to rethink my explanation to you.
Thanks again Paul. Are you saying the worst case would be I/O + systemic, so 22.69+14 ish? And more plugins would push the systemic up even higher?
Yes, worst case is I/O + systemic. Plugins cannot affect systemic latency in any way - it is a property of the hardware used in your audio interface. Plugins with their own latency can affect the effective I/O latency, for a given signal path, but they do not affect the “I/O Latency” that Ardour is reporting based on buffer size and number of buffers (periods in ALSA).
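As a quick check of that worst case figure, using the numbers from the example above:

```python
# Worst case per the reply above: reported I/O latency plus systemic latency.
io_ms = 22.69
systemic_ms = 14.354
worst_case_ms = io_ms + systemic_ms
print(worst_case_ms)   # ~37 ms
```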
And worst case round trip would presumably be I/O + systemic + time signal going via Ardour + plugins used by Ardour.
It is really important to differentiate the latency caused by plugins from what we’re talking about here, although in the end, if you’re playing (or mixing) live, the end result can be the same.
Every processing cycle, an audio application reads a buffer of audio from the hardware and writes a buffer back. This is a synchronous process, and cannot be interrupted without audible effect. If the audio application was just doing pass through of audio, the samples written back to the h/w would be identical to those read from it, and in this case, effective latency is systemic + (buffer size * num_buffers). Systemic corresponds primarily to delays in the A/D and D/A converters in the audio signal path. Scheduling/CPU/OS “delays” (e.g. waking up Ardour to process audio) play no role in this.
Certain plugins may add latency to a particular signal path, but only in the sense that “the sample that arrived at time T does not affect the output until time T’”. Plugins do not have any direct impact on the flow through the audio hardware and CPU.
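Putting the last two posts together, the effective latency of one pass-through signal path can be sketched as buffering plus systemic latency plus whatever latency the plugins on that path report. This is my summary of the explanation above, not an Ardour formula, and the 480-sample plugin latency is an invented example value:

```python
def effective_path_latency_ms(buffer_size, num_buffers, sample_rate,
                              systemic_ms, plugin_latency_samples=0):
    """Pass-through latency for one signal path: buffering + converter
    (systemic) latency + any latency reported by plugins on that path.
    Plugins affect only their own path, not the hardware or buffering figures."""
    buffering_ms = buffer_size * num_buffers / sample_rate * 1000.0
    plugin_ms = plugin_latency_samples / sample_rate * 1000.0
    return buffering_ms + systemic_ms + plugin_ms

# Same settings as the worked example, plus a hypothetical plugin
# reporting 480 samples of latency (5 ms at 96 kHz):
print(effective_path_latency_ms(256, 3, 96_000, 14.354, 480))
# 8.0 + 14.354 + 5.0 = ~27.354 ms
```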