While I appreciate the flurry of responses, the query is simple, just being overlooked apparently.
Compensation of any kind is targeted to achieve something. Historically, and perhaps even currently, the basis has been to keep everything coming from PLAYBACK tracks in sync with each other. I am curious to know if that remains the basis for Ardour’s timing approach, since another way to approach timing would be to keep inputs and outputs in sync with each other.
As simply as I can say it: under which circumstances will Ardour impose delays upon signals coming in, on their way to output, and being processed along the way?
I don’t even understand what “keeping inputs & outputs in sync with each other” would mean, but I’ll leave it to @x42 to explain the highest level goal(s).
There is a distinction. Delaying a playback track to remain in sync with another playback track will involve compensation. Assuming that an incoming signal needs this same delay is entirely distinct from assuming that it needs low-latency throughput. These two assumptions are separate things. I have no idea which of these approaches Ardour takes. I am asking.
Think of it this way: an input signal becomes available to Ardour when the playhead is at time T, which indicates, hopefully, that the user is currently hearing the disk data for time T (or at worst, that it is being delivered to the audio interface).
There is no way to make the newly-arrived input signal audible “right now”, because that’s not how block-structured audio processing works. So we have to pick a best alternate strategy to align input material with disk material, and I’ll leave that to @x42 to comment on if he chooses to …
Create a new session, add two mono busses A, B.
Both receive live input (say microphones) and pass it on to master for playback on speakers.
Now add a latent effect to Bus A.
Ardour then delays the signal through Bus B by the same amount as the delay introduced by the plugin on Bus A.
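The two-bus example above can be sketched numerically. This is a minimal model, not Ardour’s internals, and the latency value (256 samples) is a hypothetical figure chosen for illustration:

```python
# Minimal sketch of parallel-path delay compensation (hypothetical values).
# Two busses feed the master; Bus A has a latent plugin, Bus B does not.
path_latency = {
    "Bus A": 256,  # plugin on A reports 256 samples of latency
    "Bus B": 0,    # no latent processing on B
}

# Rule: delay every parallel path so all paths match the slowest one,
# i.e. all signals arrive together at the merge point (the master bus).
worst = max(path_latency.values())
compensation = {bus: worst - lat for bus, lat in path_latency.items()}

print(compensation)  # {'Bus A': 0, 'Bus B': 256}
```

The result matches the description above: Bus B is delayed by exactly the latency the plugin added to Bus A, and Bus A itself needs no extra delay.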
Now for tracks: just think of those busses as having a disk-reader and a disk-writer. This then introduces absolute alignment with the clock, but is otherwise identical.
Ardour has separate disk-reader (aligned to output) and disk-writer (aligned to input when recording) on each track.
Output alignment is chosen to be able to synchronize with e.g. video-playback or external synthesizers. Capture alignment can be handled by simply moving the recorded data after the fact.
One of the goals is that when the playhead is shown to be (visually, or via a clock widget, or MTC or LTC or whatever) at time T, the user is, as close as possible given the data available, hearing data from disk that originates at time T (on the timeline).
If at least one of the outputs reports a non-zero latency, one of the signals will be delayed. We consider all hardware outputs to be in the same “time zone”.
Similarly, if one of the input ports reports a non-zero latency, one of the signals will be delayed. We assume all hardware inputs to be in the same “time zone”.
The reported latency is how an input would indicate that it is, for some reason, not time-aligned with the others. E.g. if you had a network audio input port that is receiving data over a WAN and there is a 100msec delay, we would expect the input port to report that. Its latency would be different from what was reported by (say) the local audio interface. We would delay audio from the local port so that it aligned with the network port.
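The WAN example above can be put in the same terms. The port names and millisecond figures here are hypothetical, chosen only to mirror the scenario described:

```python
# Sketch: putting hardware inputs with different reported latencies
# into the same "time zone" (hypothetical ports and values, in ms).
input_latency_ms = {
    "local_interface": 2,  # local audio interface input
    "wan_port": 100,       # network audio arriving over a WAN
}

# Each faster input is delayed by the difference to the slowest one,
# so material captured at the same wall-clock moment lines up.
slowest = max(input_latency_ms.values())
delay_ms = {port: slowest - lat for port, lat in input_latency_ms.items()}

print(delay_ms)  # {'local_interface': 98, 'wan_port': 0}
```

In other words, the local port gets delayed until it aligns with the network port, exactly as described above.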
Am I correct in assuming there is no way for a user to choose a particular path to have the shortest possible throughput timing while also allowing other inputs/outputs to be compensated with respect to each other or with respect to playback?
That is correct. Full graph latency compensation does not allow for options. There is a unique solution.
You can however indirectly influence this. Ardour allows you to disable PDC (assume all plugins have zero latency even if they do), or manually override the reported latency of select processors (top left in each plugin’s toolbar), or you could add a plugin that reports a virtual latency…
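The “unique solution” point can be illustrated on a toy routing graph. Everything here is hypothetical (node names, edge latencies) and is only a sketch of the principle, not Ardour’s actual implementation: given the graph topology and the reported latencies, the compensation delays are fully determined.

```python
# Toy routing graph: node -> list of (downstream node, latency in samples).
# A latent plugin sits on Bus A; Bus B has no latent processing.
graph = {
    "in":     [("busA", 0), ("busB", 0)],
    "busA":   [("master", 256)],  # latent plugin on Bus A
    "busB":   [("master", 0)],
    "master": [],
}

# Worst-case (longest-path) latency from "in" to every node, found by
# repeated relaxation; the toy graph is acyclic so this terminates.
latency = {"in": 0}
changed = True
while changed:
    changed = False
    for u, edges in graph.items():
        if u not in latency:
            continue
        for v, lat in edges:
            t = latency[u] + lat
            if t > latency.get(v, -1):
                latency[v] = t
                changed = True

# Compensation delay inserted on each edge: pad each branch so every
# path into a merge point matches the slowest path. There is exactly
# one such assignment, hence "a unique solution".
delays = {
    (u, v): latency[v] - (latency[u] + lat)
    for u, edges in graph.items()
    for v, lat in edges
}

print(delays[("busB", "master")])  # 256
```

Disabling PDC in this picture corresponds to forcing every edge latency to zero, and overriding a plugin’s reported latency corresponds to editing one edge weight, after which the (still unique) solution is simply recomputed.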
@paul and @x42 - Thanks for all the exchanges. For the Ardour part of the puzzle, I have a clearer understanding now of what to expect.
There are quite a few conflicts here with respect to today’s comments and those from Ian and Ben as recent as MB 11. I will try to revive those inquiries over there, but have historically found a lack of clarity. Things are certainly not the same with Ardour and Mixbus, as can be confirmed by lots of trial and error that includes @John_E. Some issues are the same today as they were in v5, and Ben has certainly painted a picture of the MixBus part of the flow vs the Ardour part he was given to start with.
This fuzzy language has been echoed by others over on the Harrison forum. Can you maybe explain what advantage I might have using aux-send-style level control to route a signal as opposed to the Audio Connections Manager? I actually prefer on/off actions as opposed to routes with trims. Is the advice purely subjective?
Yes, the language used (by Robin and others) can be confusing at times, and it’s hard to know for sure if everyone means the same thing(s) when they use the same (or similar) wording. So, sorry this thread has been somewhat of a mess…
But yes, as @jean-emmanuel just pointed out, did you read Robin’s “Ardour 6 Delay Compensation & Recommendations for Routing” forum post yet? His graphics and explanations there paint a fairly solid basis for understanding how delay compensation works in Ardour(/Mixbus).
I reviewed that guide when forming my original post, have read it, and understand the points it pursues. For the purpose of this particular request for clarification, the “prefer aux sends” advice is not supported technically, aside from avoiding ambiguity. For my purpose, I want to create summing points, and prefer “assigning” multiple strips to an input port of an aux bus to achieve this, as opposed to creating paths that include redundant level controls. Technically, I can find no support for the “prefer” language, and I am hoping to find out whether there is an audio-flow issue behind the advice, or whether it is simply posited to provide clearer human understanding in situations where a signal actually has multiple possible paths to a destination. My needs do not create multiple paths.