Managing DSP Load

Hi all,

I’ve been using Ardour for some fairly CPU-intensive projects (lots of sampled instruments, some physical-modeling plugins, plus the usual array of compressors, filters, and such), and DSP load is starting to become a problem for me. I’m not getting constant xruns quite yet, but I know that if I add much of anything (another track, a flanger, whatever), I will be.

The only thing I’ve tried so far is increasing the buffer size. It has helped, but even at the maximum Core Audio buffer size Ardour offers (4096 samples), it hasn’t helped much. And the extra latency is only acceptable for me during mixing.
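(If I’ve done the math right, that buffer alone works out to 4096 / 88200 ≈ 46 ms of latency in each direction, before any plugin latency is added on top.)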

I know that freezing tracks is the go-to approach for greatly reducing DSP load. How does it work in Ardour? Do I just export a FLAC of my MIDI instrument track up until the first send, deactivate the track, and import the file into a new audio track? Is there any easier way?

Another question: is sample rate important for DSP load? I’m running at 88.2kHz. Would running at 44.1kHz or 48kHz instead help? I’ve been reading all the conflicting viewpoints on whether higher sample rates actually do anything, and the consensus seems to be that they don’t, but I’m not quite sure about that either.

Are there any other techniques for managing DSP load?

Thanks,
A.

IIRC you can right-click a track and select “Freeze” and Ardour will do the bounce and deactivate stuff for you.

Half the sample rate means half the numbers to be processed in the same amount of time. A plugin can oversample internally if it needs a higher sample rate, so I’d really consider moving to a lower rate if DSP load is becoming an issue. You might find, though, that it is rather difficult to change sample rates in the middle of an Ardour project.
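To put rough numbers on that (just a back-of-the-envelope sketch in Python; the 1024-frame buffer is only an example, not anything specific to your setup):

```python
# Rough illustration: with the same buffer size, half the sample rate means the
# engine gets twice as long per process cycle and handles half as many samples
# per second overall.
BUFFER_FRAMES = 1024  # example buffer size

for rate in (88_200, 44_100):
    period_ms = 1000.0 * BUFFER_FRAMES / rate   # time available per process cycle
    print(f"{rate} Hz: {rate} samples/s per channel, "
          f"{period_ms:.1f} ms per {BUFFER_FRAMES}-frame cycle")
```

Same number of frames per cycle either way, but at 44.1kHz each cycle comes around half as often, so your plugins get roughly twice the wall-clock time to do their work.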


I’m not seeing that option when I right-click on the track in the main window. It is a MIDI track – does freeze in Ardour work with MIDI tracks? Or is the option just in a different menu? If Ardour can actually do this for me, that makes my life much easier. My main problem with a complicated freezing process is that I rarely have a track that is finished and won’t be modified any time soon, so the easier freezing is, the better.

Unfortunately, I’ve already started all the tracks I’m currently working on at 88.2kHz, and I’m not willing to risk any corruption from trying to change that. The lower latency of the higher rate is nice too, assuming, of course, that I manage to get the buffer size small enough without causing xruns.

Thanks for your help.

Freeze is not available for MIDI tracks. It makes almost no sense there - what you would be looking for is something more like “render MIDI to audio using the MIDI processing in this track, then drop the MIDI data and convert it into an audio track”. Ardour can’t do that at present, and it would be a very different operation from Freeze, though conceptually useful for the same kinds of reasons that Freeze is.

Hi Paul,

Thanks for the response. I understand that “freeze” for MIDI is a very different operation, being, as you said, more of a “bounce to a new track”. Are there any plans for such a feature, or something similar, to become a reality?

Even though this isn’t a feature now, how would you recommend approximating it manually? What’s the best way to bounce a MIDI track? Can I bounce directly to an audio region/track? Can I keep bouncing to the same file and have Ardour keep the audio region up to date with the backing audio file?

Thanks for your time.

What I usually do is just record the MIDI track’s output to the next audio track and mute the MIDI track… in case I want to go back and change things, I can reactivate the MIDI track and do it again… for me this works well, but depending on how complex your MIDI sessions are it might be cumbersome…

@calimerox: That’s a great idea! It will be cumbersome, but nothing like what I was thinking of before (go do a stem export, open the export folder, drag the file back in, etc.).

My one concern with that process is that it might introduce timing problems or delays. I did a little semi-scientific test, though: I recorded a note that starts exactly on bar two, then zoomed in as much as Ardour would let me, and the audio seemed to be perfectly accurate, so I won’t worry too much.

Thanks for the thought!

Drop your sample rate… Seriously, there is no physical reason for higher sample rates if you have good converters that work well at lower sample rates (most modern converters do)… I realise this can’t be done for existing projects, but for your next ones give it a try.
There is so much literature about this, but I’m at work and don’t have time to reference it all.

I'm running at 88.2kHz. Would running at 44.1kHz or 48kHz instead help?
Lower sample rates mean less DSP / CPU usage. You can completely reproduce a bandlimited signal provided it is sampled at more than twice the highest frequency (forget about any analogies with 'stair-steps', that's not how it works). This means that with modern converters even 44.1kHz is more than adequate for the maximum audible frequency of 20kHz. There is an argument for using higher sample rates to improve the (sonic) performance of some plug-ins, but in reality most plug-ins which need higher sample rates (should) upsample to a higher rate internally. Your choice of sample rate may also be dictated by the target format, generally you should avoid unnecessary sample rate conversion if possible.
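If it helps to see the "more than twice the highest frequency" point in practice, here's a small illustrative sketch (numpy/scipy, a pure tone rather than real programme material, and the 18kHz figure is just an example): a tone below 20kHz sampled at 44.1kHz carries the same information as the same tone at 88.2kHz.

```python
import numpy as np
from scipy.signal import resample

fs_hi, fs_lo = 88_200, 44_100
dur = 0.1                          # 0.1 s gives a whole number of cycles, so no FFT leakage
f = 18_000                         # well below both Nyquist limits (44.1 kHz and 22.05 kHz)

t_hi = np.arange(int(fs_hi * dur)) / fs_hi
x_hi = np.sin(2 * np.pi * f * t_hi)            # "ground truth" rendered at 88.2 kHz

x_lo = resample(x_hi, int(fs_lo * dur))        # band-limited resample down to 44.1 kHz
x_back = resample(x_lo, int(fs_hi * dur))      # ...and back up to 88.2 kHz

print("max reconstruction error:", np.max(np.abs(x_back - x_hi)))   # numerical noise only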
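```

The round trip down to 44.1kHz and back reconstructs the original to within numerical noise, which is the Nyquist argument in miniature.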

Hi allank and mike,

Thanks for the info. In the future I’m definitely using 44.1kHz. I started this project when I was only first getting into audio production, and was led to believe that higher sample rates were a tiny-marginal-increase-in-quality in exchange for extra-disk-usage-we-have-so-much-space-it-doesn’t-matter. DSP load (a major problem on a 2011 computer…) never factored into the decision, because I didn’t know what it was.

Currently, I’m solving my DSP load problems by using @calimerox’s method of track freezing, and freezing my really CPU-intensive physically modeled guitar tracks.

I realize extra resampling isn’t a good thing, but will the final mastering conversion from 88.2kHz down to 44.1kHz for distribution introduce any noticeable noise or artifacts? Do I need to worry?

Thanks.

No, you don’t need to worry about a single final SR conversion to your distribution format.
I mostly record at 48kHz (a lot of it is for video) and I don’t worry about converting to 44.1kHz either.

tiny-marginal-increase-in-quality
If it's done right, the only difference is the ability to record and play back frequencies between 20kHz and 40kHz.

Which, if I understand right, are completely useless to have recorded, yes?

Which, if I understand right, are completely useless to have recorded, yes?
You can't hear them. There are those who claim to be able to hear a difference with higher sample rate recordings, but the accuracy of these claims is normally clouded by either a lack of repeatability in a controlled environment, or a vested interest in making such claims (artists with a back catalogue to promote on a new format, engineers with a reputation to enhance, etc).

The best it's fair to say is that those who can reliably hear a difference are probably hearing some artifact of the rest of the signal chain triggered by ultrasonic frequencies. Typically these might be psychoacoustic phenomena due to intermodulation products folding down into the audible range (and which are actually quite undesirable). Contributory factors include amplifier instability (insufficient phase margin), non-linearity, etc. The general wisdom is that it's best to design analogue circuitry capable of processing frequencies in excess of the audible range, but to ensure that the input frequency range is adequately and appropriately constrained. But, in short, no, you shouldn't worry about using lower sample rates.
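If you want to see the fold-down mechanism in a toy example (idealised numbers in numpy, not a model of any real amplifier), this is the kind of thing that happens when ultrasonic content meets a slightly non-linear stage:

```python
import numpy as np

fs = 192_000                   # simulate at a high rate so the distortion products don't alias
t = np.arange(fs) / fs         # one second
f1, f2 = 24_000, 27_000        # two ultrasonic tones, inaudible on their own

x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
y = x + 0.05 * x**2            # mildly non-linear stage (a touch of 2nd-order distortion)

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1 / fs)

# energy now appears at f2 - f1 = 3 kHz, in the middle of the audible range
band = (freqs > 2_900) & (freqs < 3_100)
print("level of the 3 kHz intermodulation product:", spectrum[band].max())
```

Neither 24kHz nor 27kHz is audible on its own, but the second-order term produces a component at the 3kHz difference frequency, squarely in the audible band, which is one reason keeping ultrasonic content out of the chain is usually the safer choice.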