Quick question about JACK period size and number of periods

Hi all,

I’ve been running xrun tests in some form more or less continuously, and would like to get something verified.

Am I right in thinking that, in order to separate driver latency considerations (sound card quality, priority of the JACK audio server, factors like USB/internet/video usage) from audio processing considerations (plugin reliability, total processing load of plugins, number of concurrent audio streams), I need to find the minimum reliable latency for each of them separately?

If so, am I also right in thinking that driver latency is basically {period size}*({number of periods}-1)? In other words, any driver-related hiccup that lasts less time than that is of no consequence?

And that {period size} is the only value that apps like LV2 plugins care about?
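
Just to make the arithmetic behind my assumption concrete, here’s the kind of back-of-envelope check I mean (the numbers are only an example, and this ignores converter/USB delays entirely):

```c
#include <stdio.h>

int main(void)
{
    /* Example values only: 2 periods of 64 frames at 48 kHz. */
    const double sample_rate = 48000.0;
    const double period_size = 64.0;   /* frames per period */
    const double n_periods   = 2.0;

    /* My working assumption: driver safety margin = period size * (number of periods - 1). */
    const double margin_frames = period_size * (n_periods - 1.0);
    const double margin_ms     = 1000.0 * margin_frames / sample_rate;

    printf("assumed driver safety margin: %.0f frames = %.2f ms\n",
           margin_frames, margin_ms);
    return 0;
}
```

If my assumption is right, any driver-side stall shorter than that margin (about 1.3 ms in this example) should be absorbed without an xrun.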

(~: Tom


I’m not sure I understand the question, but I would like to comment that the only times you need to get down to low latency are:

  • You have a bad sound card without zero latency monitoring
  • You are using soft synths or realtime audio processing plugins like guitar amp simulation while you record

A sound card with zero latency monitoring lets you use a big latency value, which in turn lets you have many more tracks, plugins and processing in a session. This is because a low latency value gives the computer only a small “time window” to do all the processing in the session, while increasing the latency gives it a bigger window. A small latency setting makes the computer work very hard, and minor hiccups like background processes starting up can cause xruns. With a big latency value you are much less likely to have problems with xruns.
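
To put rough numbers on that “time window” (a back-of-envelope sketch, looking only at period size and sample rate):

```c
#include <stdio.h>

/* Time the CPU has to process one period, in milliseconds. */
static double period_window_ms(double period_size, double sample_rate)
{
    return 1000.0 * period_size / sample_rate;
}

int main(void)
{
    /* Example settings only. */
    printf("  64 frames @ 48 kHz: %.2f ms per period\n", period_window_ms(64.0, 48000.0));
    printf("1024 frames @ 48 kHz: %.2f ms per period\n", period_window_ms(1024.0, 48000.0));
    return 0;
}
```

All the tracks, plugins and routing in the session have to finish inside that window, every single period.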

Zero latency monitoring means you can route audio inside the audio interface itself from the Mic input(s) to the headphone jack.

I hope this is not completely off the topic of your original question :)

There’s no single clear definition of “latency”; there are at least three distinct meanings:

  • Input latency: how long from a signal arriving at the connectors of the audio interface until the corresponding sample value is available to the CPU?
  • Output latency: how long from the CPU delivering a sample value until the corresponding signal is present at the connectors of the audio interface?
  • Through latency: how long from a signal arriving at the (input) connectors, flowing through the CPU (and software) and back to the audio interface, until the corresponding signal is present at the (output) connectors?

  • Input latency is always 1 period.
  • Output latency is N-1 periods (technically the best and worst case values are N-2 and N-1 periods respectively; the worst case is the right value to use).
  • Through latency is N-1 periods.
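
Worked through for one example configuration (purely illustrative numbers; this is just the buffer arithmetic and ignores converter and transport delays):

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative configuration only: 3 periods of 128 frames at 48 kHz. */
    const unsigned rate    = 48000; /* Hz     */
    const unsigned period  = 128;   /* frames */
    const unsigned periods = 3;     /* N      */

    const double frame_ms = 1000.0 / rate;

    printf("input latency:   %u frames = %.2f ms (1 period)\n",
           period, period * frame_ms);
    printf("output latency:  %u frames = %.2f ms (worst case, N-1 periods)\n",
           (periods - 1) * period, (periods - 1) * period * frame_ms);
    printf("through latency: %u frames = %.2f ms (N-1 periods)\n",
           (periods - 1) * period, (periods - 1) * period * frame_ms);
    return 0;
}
```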

Plugins might or might not see the period size at all; that is up to the host application. Using the “native” period size (on Windows or macOS it would typically be called “buffer size”) is the most likely choice, but there are lots of reasons why that might not be the number of samples a plugin is asked to process at any given point in time.
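
As a purely hypothetical sketch of why a plugin might see something other than the JACK period size: a host could, for instance, chop the period it gets from JACK into smaller fixed-size blocks before handing them to a plugin. plugin_run() here is a made-up stand-in for whatever run/process entry point the plugin API actually provides:

```c
#include <stddef.h>

/* Hypothetical plugin entry point -- a stand-in, not a real LV2/VST call. */
void plugin_run(float *buffer, size_t nframes);

/* A host that always feeds the plugin 64-frame blocks, regardless of the
 * JACK period size it was itself given. */
void host_process(float *buffer, size_t nframes)
{
    const size_t block = 64;
    size_t offset = 0;

    while (offset + block <= nframes) {
        plugin_run(buffer + offset, block);
        offset += block;
    }

    /* Leftover frames (if the period size isn't a multiple of the block size);
     * a real host might instead buffer these until the next period. */
    if (offset < nframes)
        plugin_run(buffer + offset, nframes - offset);
}
```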

“Driver related hiccups” reduce the time available for the CPU (software) to process the data. If you’re close to using the full time available (1 period), then any driver related hiccup may result in an xrun. If you’re nowhere near using the full time available, then there’s more scope for “driver related hiccups” to have no effect.
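
The same back-of-envelope arithmetic, applied to that headroom argument (example numbers only):

```c
#include <stdio.h>

int main(void)
{
    /* Example only: a 64-frame period at 48 kHz. */
    const double window_ms = 1000.0 * 64.0 / 48000.0; /* ~1.33 ms per period */

    /* If the session's processing already takes this long each period... */
    const double dsp_ms = 1.1;

    /* ...then any hiccup longer than the leftover headroom means an xrun. */
    printf("headroom per period: %.2f ms\n", window_ms - dsp_ms);
    return 0;
}
```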

@mhartzel

Yep, I want to be able to do realtime monitoring with effects, including on guitar, which needs extra-low latency. 3 periods of 32 samples at 96 kHz seems to be stable for me, as long as JACK isn’t actually processing anything :’)

@paul

I think I’ve mostly got it now, apart from a few uncertainties:

Is that meant to be 1 period for through latency, identical to input latency?

If the plugin doesn’t see the period size, does that mean the host application must see it? In other words, does every JACK app conform to the period size, rather than trying to conform to something else like the output latency?

Thanks for your response :)

JACK clients only know the period size, and unless they explicitly ask for it, they only see it implicitly, via the argument to the process() callback.
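
A minimal sketch of what that looks like on the client side (untested; compile with -ljack; the client name is arbitrary):

```c
#include <jack/jack.h>
#include <stdio.h>
#include <unistd.h>

/* The period size arrives here as "nframes", once per period. */
static int process(jack_nframes_t nframes, void *arg)
{
    (void) nframes;
    (void) arg;
    return 0;
}

/* Called (outside the realtime thread) if the period size changes. */
static int buffer_size_changed(jack_nframes_t nframes, void *arg)
{
    (void) arg;
    fprintf(stderr, "period size is now %u frames\n", (unsigned) nframes);
    return 0;
}

int main(void)
{
    jack_status_t status;
    jack_client_t *client = jack_client_open("period-size-demo", JackNullOption, &status);
    if (client == NULL)
        return 1;

    /* Explicitly asking for the period size and sample rate: */
    fprintf(stderr, "period size: %u frames at %u Hz\n",
            (unsigned) jack_get_buffer_size(client),
            (unsigned) jack_get_sample_rate(client));

    jack_set_process_callback(client, process, NULL);
    jack_set_buffer_size_callback(client, buffer_size_changed, NULL);
    jack_activate(client);
    sleep(10);
    jack_client_close(client);
    return 0;
}
```

Note that nothing here exposes the number of periods; that stays a driver-side setting the client never sees.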

Through latency is the combination of input and output latency, but they overlap in a way that is conceptually hard to explain; the result is that it’s the same as the output latency.

Understood. Thanks again.

I did a hardware stress test with my laptop and a USB card a while ago, and found that with 3 periods I got xruns with a period size of 24, and no xruns with a period size of 32 or higher. That should mean no xruns caused by the hardware/driver alone with only 2 periods of size 64 or higher. Perfect. Let the next round of tests commence.
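
For my own bookkeeping, the arithmetic behind that conclusion, using the {period size}*({number of periods}-1) margin from above (frames only, so it holds at any sample rate):

```c
#include <stdio.h>

/* Assumed driver safety margin in frames: period size * (N - 1). */
static unsigned margin(unsigned period_size, unsigned n_periods)
{
    return period_size * (n_periods - 1);
}

int main(void)
{
    printf("3 periods of 24: %u frames -> xruns in the stress test\n",   margin(24, 3)); /* 48 */
    printf("3 periods of 32: %u frames -> no xruns\n",                   margin(32, 3)); /* 64 */
    printf("2 periods of 64: %u frames -> should be equally safe\n",     margin(64, 2)); /* 64 */
    return 0;
}
```

Since 2 periods of 64 gives the same 64-frame margin that already survived the stress test, it should be just as safe on the hardware side.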