Basis for delay compensation

I am less concerned with playback-related material, and more with I/O and paths specifically. According to Ben, there are Mixbus and non-Mixbus paths, and they are timed differently. Perhaps this is also a legacy comment, but it is not nearly as aged as Ardour 5 or 6.

While I understand the urge to push the “just works” approach, I am more interested in knowing in which situations things will just work, and in which situations things are optimised toward playback vs. signal flow.

That used to be the case before Ardour 6 was merged into Mixbus. IIRC, back then the major versions were aligned.

Mixbus sends always had delay lines so that latent FX in mix busses could be compensated for, before Ardour aux sends supported it.

Proposed rule: if you’re reading about Ardour online, and the date is more than 12 months before the day you’re reading it, be willing to discard everything you read.


That’d invalidate the Ph.D. thesis too :slight_smile: Luckily there is still recent source code available :supervillain:


1 - Ben’s guidance is from January 2025, so not exactly old. While I am willing to disregard it, by virtue of this post I am seeking to gain a better understanding.

2 - Even more recently, Dingo’s guidance (consistent with the current MB user guide) is to stay away from the Audio Connections system and just use MixBus’s GUI knobs for paths.

3 - The basis for this post is to determine to what extent “playback” of audio has a role in compensation VS to what extent overall I/O consistency has a role in compensation. This seems not covered in the manual.

4 - I have encountered, and still encounter, issues with routing/summing in Mixbus that I do not encounter in Ardour.


“On Delay Compensation & Recommendations for Routing” is still a good overview

and yes, that’s also what Dingo echoes:

  • Prefer aux-sends (or Mixbus sends) over direct connections.
  • Avoid one-to-many direct connections (many-to-one is fine).

Yes, a great overview, and part of the basis for my repeated question. As that post is playback-related, I am interested to know whether all delay compensation is playback-related. While my quote from Paul may be quite old, my question remains.

Dingo’s post as recently as December says sending a Mixbus back into the system is problematic. I agree. That doesn’t square with the “just works” claims.

I seek to understand things better with respect to how Ardour is designed to handle signals, as opposed to playback.

I expect this is true. I want to know what the basis is for the compensation.

The latency reported by processors (mostly plugins) and hardware “ports” that exist in the signal routing.

Isn’t that just another way of saying don’t connect outputs back to inputs because it creates a feedback loop?

Can you explain the distinction you are trying to make? Are you trying to make a distinction between audio which originates from a file on disk vs. audio which originates from the audio interface inputs?

There is no distinction; you can change In/Disk monitoring any time regardless of transport state.

While I appreciate the flurry of responses, the query is simple; it is apparently just being overlooked.

Compensation of any kind is targeted to achieve something. Historically, and perhaps even currently, the basis has been to keep everything coming from PLAYBACK tracks in sync with each other. I am curious to know if that remains the basis for Ardour’s timing approach, since another way to approach timing would be to keep inputs and outputs in sync with each other.

As simply as I can put it: under which circumstances will Ardour impose delays upon signals coming in, on their way to output, while being processed along the way?

I don’t even understand what “keeping inputs & outputs in sync with each other” would mean, but I’ll leave it to @x42 to explain the highest level goal(s).

There is a distinction. Delaying a playback track to remain in sync with another playback track involves compensation. Assuming that an incoming signal needs this same delay is entirely distinct from assuming that it needs low-latency throughput. These two assumptions are distinct. I have no idea which of these approaches Ardour takes. I am asking.

The answer is no, as I explained above. Ardour doesn’t care where the signal originates from: live input or disk, it doesn’t matter.

Does this mean that LIVE signals will be delayed?

Think of it this way: an input signal becomes available to Ardour when the playhead is at time T, which hopefully indicates that the user is currently hearing the disk data for time T (or, at worst, that it is being delivered to the audio interface).

There is no way to make the newly-arrived input signal audible “right now”, because that’s not how block-structured audio processing works. So we have to pick a best alternate strategy to align input material with disk material, and I’ll leave that to @x42 to comment on if he chooses to …

@Paul, all of these responses remain relative to playback. Is this the basis for timing? Playback?

Create a new session, add two mono busses A, B.
Both receive live input (say microphones) and pass it on to master for playback on speakers.

Now add a latent effect to Bus A.
Ardour then delays the signal through Bus B by the same amount as the delay introduced by the plugin on Bus A.
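The arithmetic behind that two-bus example can be sketched in a few lines. This is a hypothetical illustration, not Ardour's actual code: each parallel path reports the summed latency of its processors, and every shorter path is padded with an extra delay so all paths arrive at the summing point (the master bus) aligned.

```python
# Hypothetical sketch of graph-wide latency compensation (not Ardour's
# real implementation). Each path reports the total latency, in samples,
# of its processors; shorter paths receive extra delay to match the worst.

def compensation_delays(path_latencies):
    """Map each path name to the extra delay (in samples) it needs."""
    worst = max(path_latencies.values())
    return {name: worst - latency for name, latency in path_latencies.items()}

# Bus A carries a latent effect reporting 256 samples; Bus B has none.
print(compensation_delays({"Bus A": 256, "Bus B": 0}))
# → {'Bus A': 0, 'Bus B': 256}  (Bus B is delayed to match Bus A)
```

Note that nothing here depends on whether the signals feeding Bus A and Bus B come from microphones or from disk; the compensation is a property of the processing graph alone.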

Now for tracks. Just imagine a disk reader and a disk writer on those busses. This introduces absolute alignment with the clock, but is otherwise identical.

Ardour has separate disk-reader (aligned to output) and disk-writer (aligned to input when recording) on each track.

Output alignment is chosen to be able to synchronize with e.g. video-playback or external synthesizers. Capture alignment can be handled by simply moving the recorded data after the fact.
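The “moving the recorded data after the fact” part can be sketched as a one-line adjustment (hypothetical names; this assumes the total capture latency is known in samples, which Ardour learns from the hardware and the graph):

```python
# Hypothetical sketch of capture alignment after the fact. The captured
# buffer lags the timeline by the total capture latency (hardware input
# latency plus any systemic delay), so the recorded region is shifted
# earlier by that amount to land where the performer actually played.

def aligned_capture_position(punch_in_sample, capture_latency_samples):
    """Timeline position where the newly recorded region should be placed."""
    return punch_in_sample - capture_latency_samples

# Recording started at sample 48000 with 512 samples of capture latency:
print(aligned_capture_position(48000, 512))
# → 47488
```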


One of the goals is that when the playhead is shown to be (visually, via a clock widget, or via MTC or LTC or whatever) at time T, the user is, as closely as possible given the data available, hearing data from disk that originates at time T (on the timeline).