It seems there have been a number of bugs/corrupt sessions encountered that trace back to sessions’ time domain being set to musical/beat time. My current template is set to musical/beat time. While I haven’t had any issues, I’m considering changing the template’s time domain to audio/wall-clock time for stability in the future. I’m running 8.12 now and intend to switch to 9.0 at release. I record a mix of MIDI and audio sources. I mostly play along with a click or drum track aligned to the grid, and sometimes use tempo and time signature changes.
Is my impression that a session in audio time is “safer” correct?
What are the implications of choosing one time domain over the other? Are there particular use cases or workflows where one is preferred over the other?
Apologies if I’ve missed this in the manual or the forum.
Ardour 9.0 is still in a pre-alpha state and not released. It’s not even ready for beta-testing yet.
Yes, absolutely.
It pertains to region positions on the timeline.
Say you want to move a MIDI region exactly 1 Bar forward and then 1 Bar backward:
When using Music Time, the region lands back on the exact same spot.
When using Audio Time, the MIDI region is snapped to the closest sample (at the sample rate, usually 1/48000 sec), which, depending on the BPM, may or may not be the same spot where it started.
While the difference is tiny, it can accumulate if you repeatedly duplicate or sequence regions.
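Here is a rough sketch of that arithmetic (Python, purely illustrative; the 121 BPM figure and the truncation rule are my own example, not exactly what Ardour does internally):

```python
SAMPLE_RATE = 48000

def bar_length_samples(bpm, beats_per_bar=4):
    # exact bar length in samples; fractional unless the tempo divides it evenly
    return SAMPLE_RATE * 60 * beats_per_bar / bpm

bar = bar_length_samples(121)   # ~95206.61 samples per 4/4 bar at 121 BPM

# Music time: the position is stored in beats, so moving one bar forward
# and one bar back is exact by construction.

# Audio time: the position is an integer sample count, so each move is
# snapped to a whole sample (truncation here is an assumption, just to
# show the mechanism).
start = 100_000
forward = int(start + bar)   # one bar forward, snapped to a sample
back = int(forward - bar)    # one bar back, snapped again
print(start, forward, back)  # 100000 195206 99999 -> off by one sample
```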
I think a better system is needed. I don’t recall seeing this time domain issue in other DAWs. If I want to work with both audio and MIDI I should be able to without considering this kind of thing. I should just place stuff where I want it and it should magically be correct.
As it is I use Audio Time for everything because Music Time is too rigid.
Other DAWs do indeed do different things. The only one I know about for certain is Reaper, which represents all time in floating-point seconds. I am fairly sure Ableton Live does everything in musical time. If you stick with a certain kind of workflow, this works OK.
I wrote a document a few years ago about time representation. It’s a bit technical, but it outlines the fundamental problem I was trying to solve:
I don’t know of any good ways (or maybe I should say, better ways) to solve the problems we were/are trying to solve. As Robin mentioned, if you use audio time but actually work with a beat/bar-oriented workflow, you will end up with things slipped in time sooner or later. The same applies if you use seconds, as Reaper does, with the additional problem internally (at the code level) that some of those seconds represent times that are tempo (and even meter) dependent, and some do not. Figuring out which ones to update when the tempo is changed and which ones to leave alone is (or appears to me to be) highly non-trivial.
But perhaps there is a brilliant, simple and clear solution out there that I just couldn’t dream up and don’t know about.
One thing that is different between Ardour and Reaper is that Ardour has specific track types (MIDI/audio), so is there a world where each track type (or each track) could have its own time domain?
The issue is providing a UI for changing it, etc. There’s a hierarchy of what are called TimeDomainProviders that starts with the Session and ends with individual data types.
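As a toy illustration of that fall-back idea (Python; this is not Ardour’s actual TimeDomainProvider API, just the concept of an object deferring to its parent unless a domain is set on it directly):

```python
from enum import Enum

class TimeDomain(Enum):
    AUDIO_TIME = 0
    BEAT_TIME = 1

class Provider:
    """Toy model: answer with our own time domain if one was set explicitly,
    otherwise defer up the chain (data item -> track -> session)."""
    def __init__(self, parent=None, domain=None):
        self.parent = parent
        self._domain = domain

    def time_domain(self):
        if self._domain is not None:
            return self._domain
        if self.parent is not None:
            return self.parent.time_domain()
        return TimeDomain.AUDIO_TIME  # arbitrary default for the sketch

session = Provider(domain=TimeDomain.BEAT_TIME)
midi_track = Provider(parent=session)                                 # inherits beat time
audio_track = Provider(parent=session, domain=TimeDomain.AUDIO_TIME)  # per-track override

print(midi_track.time_domain(), audio_track.time_domain())
# TimeDomain.BEAT_TIME TimeDomain.AUDIO_TIME
```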
The replies and the document help a lot, thank you.
Often I will jump-start an idea by multi-duplicating a 4-bar drum pattern for 4 minutes or so. So if the 4-bar pattern is duplicated, e.g., 30 times, the edges of the last region will have drifted at worst by 30 samples (less than a millisecond at 48000 Hz) from the musical-time bar lines. I suppose this makes moving other regions around and snapping a little ambiguous, but the imprecision would be audibly imperceptible, I think, in most cases.
That said, would choosing a tempo like 115.2, 117.188, or 120 BPM (where there is no round-off error converting a bar length between audio time and music time at a 48000 Hz sample rate) eliminate the drift in this particular use case (duplicating regions the size of one bar)? Obviously artistic considerations should motivate the tempo. However, I can’t tell the difference between 120.2 BPM and 120.0 BPM, so I may as well make my life easier and choose 120.0.
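To sanity-check that, a quick sketch (Python; assuming 4/4 bars and a 48 kHz sample rate as above; 117.1875 is the exact value that the rounded 117.188 appears to come from):

```python
from fractions import Fraction

SAMPLE_RATE = 48000
BEATS_PER_BAR = 4

def samples_per_bar(bpm):
    # exact 4/4 bar length in samples at 48 kHz
    return Fraction(SAMPLE_RATE * 60 * BEATS_PER_BAR) / Fraction(bpm)

for bpm in ("115.2", "117.188", "117.1875", "120", "120.2"):
    length = samples_per_bar(bpm)
    kind = "integer" if length.denominator == 1 else "fractional"
    print(bpm, float(length), kind)

# 115.2    -> 100000.0   integer    (no drift when duplicating whole bars)
# 117.188  ->  98303.58  fractional (the rounded value still drifts slightly)
# 117.1875 ->  98304.0   integer    (the exact value behind 117.188)
# 120      ->  96000.0   integer
# 120.2    ->  95840.27  fractional
```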
Internally, we measure audio time in “superclocks”, at 282240000 ticks per second.
We have a long-term to-do list item: quantize all tempos to a rational number that causes 1 tick (1/1920th of a beat) to always be an integer number of samples long.
This is not trivial to do, and obviously has a small downside for people who believe that tenths or hundredths of a BPM in a tempo setting are important (you do not appear to be one of those people).
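For the curious, a rough sketch of the arithmetic (Python; the 282240000 superclock rate and 1920 ticks per beat are the figures above, the rest is illustration):

```python
from fractions import Fraction

SUPERCLOCK_RATE = 282_240_000   # superclocks per second (from the post above)
TICKS_PER_BEAT = 1920           # ticks per beat (from the post above)

# The superclock rate divides evenly by the common audio sample rates,
# so any sample position is exactly representable in superclocks.
for sr in (44100, 48000, 88200, 96000, 176400, 192000):
    print(sr, SUPERCLOCK_RATE % sr == 0, SUPERCLOCK_RATE // sr)

# Exact length of one tick in samples for a given tempo.
def samples_per_tick(bpm, sample_rate=48000):
    return Fraction(sample_rate * 60) / (Fraction(str(bpm)) * TICKS_PER_BEAT)

# At 48 kHz and 120 BPM a tick is 12.5 samples, i.e. not a whole number of
# samples, which hints at why quantizing every tempo this way is non-trivial.
print(samples_per_tick(120))   # 25/2
```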