When using standard tracks with plugins that add latency (which is automatically compensated), if I route multiple tracks to a bus and only use plugins on that bus that don’t add latency, would the bus still be in good standing, or does the bus need to be manually compensated when audio is routed to it from tracks that have latency on them?
Yes, this is a good approach, and alignment is correct in Ardour 5.12 when playing back audio.
(live signals are not aligned in Ardour 5)
Sorry, I edited my spelling mistakes. So then I would not need to manually compensate. I’m doing some tests and it seems to be fine. I was always aware of Ardour’s lack of bus compensation and avoided mixing with it because I thought it would be a nightmare, but most plugins I tested add 0 latency, especially the a-eq plugin.
I have a related question then… If bus latency is aligned in Ardour (which always seemed to be the case to me), then why does Harrison Mixbus advertise the “MixBusses” specifically as being latency free? Is it just because of the EQ and Drive processing on those busses? They make it seem as if busses added beyond the 4 or 12 that come with Mixbus will not be latency-compensated.
If you add a latent plugin to a bus (in Ardour 5), then alignment breaks, e.g.
Track -> master
Track -> Bus [latent fx] -> master (signal is late)
This can lead to phasing issues, or simply misalignment of the two tracks.
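To make the misalignment concrete, here is a minimal sketch (hypothetical code, nothing from Ardour itself): the bus path comes out late by the plugin’s reported latency, so when the dry path and the bus return are summed at the master, the same material arrives twice, offset in time.

```cpp
// Hypothetical illustration of the uncompensated bus path (not Ardour code).
#include <cstdio>
#include <vector>

// Stand-in for a latent plugin: a pure delay of `latency` samples,
// which is what its lookahead/buffering amounts to time-wise.
std::vector<float> latent_fx(const std::vector<float>& in, size_t latency)
{
    std::vector<float> out(in.size(), 0.0f);
    for (size_t i = latency; i < in.size(); ++i)
        out[i] = in[i - latency];
    return out;
}

int main()
{
    std::vector<float> track = {1, 0, 0, 0, 1, 0, 0, 0};  // impulses at samples 0 and 4
    const size_t fx_latency = 2;                           // latency reported by the bus plugin

    std::vector<float> via_bus = latent_fx(track, fx_latency);

    // Master = direct path + bus return. Without compensation the bus copy
    // lands 2 samples late, so the impulses no longer line up.
    for (size_t i = 0; i < track.size(); ++i)
        std::printf("%zu: direct=%g bus=%g sum=%g\n",
                    i, track[i], via_bus[i], track[i] + via_bus[i]);
    return 0;
}
```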
However, with Mixbus:
Track -> master
Track -> MixBus [latent fx] -> master
The Track -> master path is delayed to compensate for the FX latency on the bus, so the tracks remain aligned. The delay happens in the MixBus sends. This also works with live signals.
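A rough sketch of that compensation idea (assumed names, not Harrison’s actual code): before summing, the direct Track -> master contribution is pushed through a plain delay equal to the bus FX latency, so both paths reach the master equally late and stay aligned.

```cpp
// Hypothetical sketch of Mixbus-style send compensation (not real Mixbus code).
#include <cstdio>
#include <vector>

// Fixed delay line used to compensate the direct path.
std::vector<float> delay_line(const std::vector<float>& in, size_t samples)
{
    std::vector<float> out(in.size(), 0.0f);
    for (size_t i = samples; i < in.size(); ++i)
        out[i] = in[i - samples];
    return out;
}

// Track A: Track -> master.  Track B: Track -> MixBus [latent fx] -> master.
// Delaying A's direct contribution by the bus FX latency keeps A and B aligned.
std::vector<float> mix_to_master(const std::vector<float>& direct,
                                 const std::vector<float>& bus_return,
                                 size_t bus_fx_latency)
{
    std::vector<float> compensated = delay_line(direct, bus_fx_latency);
    std::vector<float> master(direct.size(), 0.0f);
    for (size_t i = 0; i < master.size(); ++i)
        master[i] = compensated[i] + bus_return[i];
    return master;
}

int main()
{
    std::vector<float> a     = {1, 0, 0, 0};  // direct track
    std::vector<float> b_bus = {0, 0, 1, 0};  // other track, 2 samples late after the bus FX
    for (float s : mix_to_master(a, b_bus, 2))
        std::printf("%g ", s);                // both impulses now land on the same sample
    std::printf("\n");
    return 0;
}
```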
What works in Ardour 5 is the following scenario:
Track 1 [latent fx] -> master
Track 2 -> Bus -> master
It works because Ardour reads future data from disk on Track 1, which, when delayed by the FX, lines up with the data from Track 2.
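Roughly, the read-ahead looks like this (a simplified sketch of the idea with made-up names, not Ardour’s actual disk reader): Track 1’s disk read starts `fx_latency` samples ahead of the playhead, so once its FX has delayed it by that amount it lines up with Track 2, which is read at the nominal position.

```cpp
// Simplified illustration of playback read-ahead (hypothetical, not Ardour code).
#include <vector>

// Read nframes from a "file", starting read_ahead samples past the playhead.
std::vector<float> read_from_disk(const std::vector<float>& file, size_t playhead,
                                  size_t nframes, size_t read_ahead)
{
    std::vector<float> buf(nframes, 0.0f);
    for (size_t i = 0; i < nframes; ++i) {
        size_t pos = playhead + read_ahead + i;
        if (pos < file.size())
            buf[i] = file[pos];
    }
    return buf;
}

int main()
{
    std::vector<float> file1 = {1, 0, 0, 0, 0, 0};
    std::vector<float> file2 = {1, 0, 0, 0, 0, 0};
    const size_t fx_latency = 2, playhead = 0, nframes = 4;

    // Track 1 is read 2 samples early; its latent FX will delay it by 2,
    // so at the master it matches Track 2 read at the nominal playhead.
    std::vector<float> t1 = read_from_disk(file1, playhead, nframes, fx_latency);
    std::vector<float> t2 = read_from_disk(file2, playhead, nframes, 0);
    (void)t1; (void)t2;
    return 0;
}
```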
All this will change soon: the upcoming Ardour 6 will have full graph latency compensation (and will delay Track 2 accordingly).
Yes, a-eq does not add latency.
Latent effects are those that need context:
The classic example would be a lookahead limiter. A limiter needs future data to decide by how much to reduce the signal level without changing the sound characteristics. So the signal is buffered, analyzed, and then processed; because of the buffering, the signal is delayed.
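For illustration, here is a toy lookahead limiter (hypothetical and heavily simplified, no gain smoothing): each output sample can only leave `lookahead` samples after it arrived, and that buffer length is exactly the latency such a plugin reports to the host.

```cpp
// Toy lookahead limiter, for illustration only (no envelope smoothing).
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <deque>
#include <vector>

std::vector<float> lookahead_limit(const std::vector<float>& in,
                                   size_t lookahead, float threshold)
{
    std::deque<float> fifo(lookahead, 0.0f);  // buffered "future" samples
    std::vector<float> out;
    out.reserve(in.size());

    for (float x : in) {
        fifo.push_back(x);

        // The peak over the buffered window tells us how much to reduce the
        // *oldest* sample so that the upcoming peak never exceeds the threshold.
        float peak = 0.0f;
        for (float s : fifo)
            peak = std::max(peak, std::fabs(s));
        float gain = (peak > threshold) ? threshold / peak : 1.0f;

        out.push_back(fifo.front() * gain);   // emit the delayed, gain-reduced sample
        fifo.pop_front();
    }
    return out;                               // output trails the input by `lookahead` samples
}

int main()
{
    std::vector<float> in = {0.1f, 0.9f, 1.5f, 0.3f, 0.1f};
    for (float s : lookahead_limit(in, 2, 1.0f))  // the 1.5 peak comes out as 1.0, 2 samples late
        std::printf("%g ", s);
    std::printf("\n");
    return 0;
}
```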
Other examples would be pitch-shifters or autotune: context is required to detect the pitch.
Some plugins also buffer data in order to process it more efficiently; on modern CPUs it is more efficient to process in large chunks. Convolution is one example where this approach is common practice to optimize performance.
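A minimal sketch of that chunked processing (assumed class and block size, not any real convolution engine): the plugin has to collect a full block before it can run its routine, so the first processed samples can only come out one block late, and that block size is the latency it has to report.

```cpp
// Hypothetical block-based processor, to show where the latency comes from.
#include <cstdio>
#include <vector>

class BlockProcessor {
public:
    explicit BlockProcessor(size_t block) : block_(block) {}

    // Collect input until a full block is available, then process and emit it.
    // (A real plugin would emit silence/older data instead of fewer samples,
    // and report `block_` samples of latency to the host.)
    std::vector<float> push(const std::vector<float>& in)
    {
        std::vector<float> out;
        for (float x : in) {
            pending_.push_back(x);
            if (pending_.size() == block_) {
                process_block(pending_);      // e.g. FFT-based convolution would go here
                out.insert(out.end(), pending_.begin(), pending_.end());
                pending_.clear();
            }
        }
        return out;
    }

private:
    static void process_block(std::vector<float>& blk)
    {
        for (float& s : blk)
            s *= 0.5f;                        // placeholder for the real per-block work
    }

    size_t block_;
    std::vector<float> pending_;
};

int main()
{
    BlockProcessor p(4);
    std::vector<float> first  = p.push({1, 2, 3});  // block not full yet: nothing comes out
    std::vector<float> second = p.push({4, 5});     // block {1,2,3,4} is processed and emitted
    std::printf("first call: %zu samples out, second call: %zu samples out\n",
                first.size(), second.size());
    return 0;
}
```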