Ardour in live music?

Greetings everyone,

I’m a relative beginner with Ardour. I sometimes do small live gigs with a mid-range digital mixer. Sometimes the capabilities of the mixer are limiting (e.g. not enough EQ filters per channel).

I recently learned that Waves now has an edition (called SuperRack Performer) that runs on notebooks to host plugins for live sound, and this would suit me, as the digital mixer has multitrack USB I/O over ASIO.

However, I was thinking: maybe Ardour could do the same thing for me? Does anyone know whether Ardour would be better, the same, or worse for this purpose?

Thank you in advance.

Ardour can be used for this. Whether it is better or worse depends on your exact needs and workflow, but the gist is: live plugin hosting typically calls for a specific, small set of tools, and Ardour reaches far beyond those tools, so using it for this purpose may not be as simple or quick as a dedicated application.

Yes, Waves SuperRack is one option; Yamaha also has one, as do a few other companies, depending on your preferences.


You are right, I underspecified.

I just want more flexible EQ on the channels than is available in the mixer, perhaps also some reverb or delay, etc., then some cross/matrix mixing (buses/sends) for monitors and the main PA.

I was mostly thinking about latency and reliability. Basically, can Ardour, despite being a much more general tool, achieve the same low latency as dedicated software on the same hardware?

There are only two things that limit the lowest latency you can get with Ardour:

  1. the behavior of your operating system and hardware
  2. the amount and type of DSP running inside Ardour

There are systems where people have run Ardour with 8-sample latency just fine. You’re very, very, very unlikely to be able to do that on generic computer hardware. If you run lots of very DSP-heavy plugins (e.g. some algorithmic reverbs), that will also limit the lowest latency you can get.
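As a rough rule of thumb, the buffer-related part of that latency is just periods × buffer size ÷ sample rate. A minimal sketch (the function name and the two-period default are illustrative, not specific Ardour settings):

```python
# Rough buffer-related latency: periods * buffer_size / sample_rate.
# The two-period default mirrors common double-buffered audio setups;
# converter delay comes on top of this.

def buffer_latency_ms(buffer_size: int, sample_rate: int, periods: int = 2) -> float:
    """Milliseconds of latency contributed by the audio buffers alone."""
    return periods * buffer_size / sample_rate * 1000.0

print(round(buffer_latency_ms(64, 48000), 2))   # 64 samples at 48 kHz -> ~2.67 ms
print(round(buffer_latency_ms(8, 48000), 2))    # the 8-sample case -> ~0.33 ms
```

Halving the buffer size (or the period count) halves this contribution, which is why the DSP load matters: it determines how small a buffer the system can sustain without dropouts.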


I’m using Ardour to spatialize 4 live sources into a 3rd-order ambisonics setup. There are no issues with it that I didn’t already have with Ableton or any other live software, really. If your computer and your sound interface can handle the load at a given latency, you’re good to go.

Before that I used Pure Data to host the plugins and route the audio with [vstplugin~], and I could have used Element, an open-source alternative to Gig Performer / Live Professor: Element - Kushview


I have been using Ardour for many years, mainly as a “live” mixer for my electronic music with Eurorack modular synthesizers.

There are 8 audio inputs via a USB audio interface, assigned to different tracks and groups in Ardour.
Each track is equipped with an x42 EQ and compressor; the buses are used for reverb and delay effects, plus another bus for the sidechain.

Honestly, it’s really very good: stable, great sound, low latency. I can even do live YouTube sessions without problems, with Ardour routed into OBS Studio.

I love Ardour, thank you to everyone who works and contributes to this project.


I do the same thing…almost…and it’s a bit more convoluted. OBS is involved, but so is a separate online meeting. Everything feeds Ardour directly, and everything is fed from Ardour directly - nothing else has any audio processing whatsoever, except for what I can’t disable or set to do nothing, and nothing has any direct connection to anything else. It’s all in Ardour.
I could stream from OBS to YouTube, just by clicking that button in OBS, but I’ve never seen the need to on that rig. I do record in OBS and upload afterwards though.

Anyway, the key here is latency, on a system that is doing lots of other things too.

For a dedicated system that does nothing else - like a digital console - single-sample latency is possible through the DSP: get one sample from every input, process that one sample all the way through, deliver one sample to every output, and repeat. Then you’re left with only the converters’ group delay, which can be in the range of 1ms or so, analog to analog, at 48kHz, and that’s because of the FIR filters in the converters themselves. (Tech note 1)
Higher sample rates can use shorter FIR filters with fewer samples of delay (Tech note 2), in addition to less time per sample, if you really care about that, and this reduced latency is the only real benefit that I see to higher sample rates, live.

Tech note 1:
(A cheap, gentle-slope analog lowpass does the anti-aliasing, outside the ADC chip. Then the chip actually samples at low resolution in the mid-MHz range, plus a small amount of intentional ultrasonic noise to force the LSb of that to wiggle. Then a steep-slope digital FIR lowpass runs at that mid-MHz sample rate, with its cutoff at Nyquist for the desired output rate, which also converts the out-of-band noise into more resolution. Finally it just picks samples to send out at the desired rate and throws the rest away. All of this happens inside the ADC chip itself.)
(similar for the DAC, but in reverse)

Tech note 2:
(This is most of what happens when you tell the converter to use a higher sample rate. Its actual analog rate doesn’t change at all, nor does the analog filter that precedes it. You might think of it like an engine and transmission: the engine drives the analog sampler directly, and it has enough variability to cover 44.1kHz to 48kHz output with a little bit extra on either side, but much different rates like 96kHz need a different “gear” for the same engine speed / given clock, and thus the same actual analog rate and FIR rate. That different “gear” uses a different FIR with the same cutoff but relaxed slope, and it’s that relaxed slope that produces less delay.)
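The FIR delay in both tech notes can be put in numbers: a linear-phase FIR of N taps delays the signal by (N − 1)/2 samples. A small sketch, with purely illustrative tap counts (real converter filters vary):

```python
# Group delay of a linear-phase FIR filter is (taps - 1) / 2 samples.
# The tap counts below are purely illustrative, not from any real converter.

def fir_delay_ms(taps: int, sample_rate: int) -> float:
    """Delay of a linear-phase FIR, in milliseconds at the given rate."""
    return (taps - 1) / 2 / sample_rate * 1000.0

# A steep filter at 48 kHz vs a relaxed-slope (shorter) one at 96 kHz:
print(round(fir_delay_ms(64, 48000), 3))   # ~0.656 ms
print(round(fir_delay_ms(32, 96000), 3))   # ~0.161 ms
```

The higher rate wins twice, as described above: fewer taps, and less time per tap.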

For a non-dedicated system, that has to manage a filesystem, fancy GUI, live video processing, and whatever else you’re doing, all at the same time on the same hardware, you need a buffer that can fill up on the input side while it’s doing other things, and play out on the output side while it’s doing other things. Then it processes the entire buffer at once whenever it gets around to it. Record X samples for each input, process all of those samples at once, deliver them all at once to each output to play out, and repeat only when the input buffer is full again. That’s in addition to the converters’ delay, which is unchanged from above.
And it’s usually double-buffered, so that it has an entire buffer’s worth of time to get around to grabbing the input, or to delivering the output, and doesn’t have to get there exactly between the right pair of samples, which practically never happens. That double buffer also doubles the latency at that step, again in exchange for smoothness on a system that’s doing everything else too, on shared hardware.
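The per-sample vs. buffered strategies above can be sketched like this (illustrative only; a real system processes audio in callbacks, not explicit loops):

```python
# Sketch of the two processing strategies described above (illustrative only;
# real systems process audio in callbacks, not explicit loops like this).

def dsp(sample: float) -> float:
    """Stand-in for the whole plugin chain: a simple gain."""
    return sample * 0.5

# Dedicated console: one sample in, one sample out, every tick.
def per_sample(inputs):
    return [dsp(s) for s in inputs]

# General-purpose computer: collect a whole buffer, then process it at once.
# The output samples are identical; only the timing differs, because the
# first output sample can't play until the first buffer has filled
# (two buffers' worth, with double buffering).
def per_block(inputs, block_size=256):
    out = []
    for i in range(0, len(inputs), block_size):
        out.extend(dsp(s) for s in inputs[i:i + block_size])
    return out

signal = [n / 1000 for n in range(1024)]
assert per_sample(signal) == per_block(signal)
```

The point of the sketch: buffering changes *when* samples come out, never *what* comes out, so the trade is purely latency for scheduling slack.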

The speed of sound in air is about 1125 feet per second, or (very roughly) 1 ft/ms. So a 48kHz dedicated system with 1ms total worth of converter delay (ADC+DAC) - plus about 3 more samples (negligible) for communicating in, processing, and communicating out - typically sounds like the speakers are about 1 foot behind where they actually are, in terms of timing. Very few people are going to notice that.
Now add the buffer that you need for your non-dedicated, PC-based system and convert that to distance. If you (or your listeners) can stand the speakers being that much farther away, you’re good! If not, you need to do something different.
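That latency-to-distance conversion is simple arithmetic; here it is as a sketch, using the ~1125 ft/s figure from above (the 256-sample buffer is just an example value):

```python
# Convert latency to an equivalent extra speaker distance, using the
# ~1125 ft/s speed of sound quoted above (~1.125 ft per millisecond).

SPEED_OF_SOUND_FT_PER_MS = 1.125

def latency_to_feet(latency_ms: float) -> float:
    """Distance that sound travels in the given time, in feet."""
    return latency_ms * SPEED_OF_SOUND_FT_PER_MS

# Example: a double-buffered 256-sample setup at 48 kHz is ~10.67 ms,
# which sounds like the speakers moved about 12 feet further away.
latency_ms = 2 * 256 / 48000 * 1000
print(round(latency_to_feet(latency_ms), 1))   # 12.0
```

Whether 12 feet of apparent distance matters depends entirely on the gig; for monitors it usually does, for a distant PA it may not.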

Of course, if you’re running all of this through even a half-decent video production thing like OBS, then that’ll have a way to adjust its buffers to make things line up again. Video processing generally has more latency than audio processing, mostly because of the MUCH slower “sample rate” (called “frame rate” over there), so you probably need to delay the audio anyway to line back up with it. So if Ardour takes some of that delay but not all of it, then there’s no change at all in the viewers’ experience! And all you have to do is reduce OBS’s Sync Delay for the sources that come from Ardour, by the amount that Ardour takes.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.