Sounds like a cool idea. Ardour has been used for FOH in the past, usually with hardware control surfaces and/or Open Stage Control. Are you aware of that project? It's somewhat related.
I'd not take that on stage at this point in time. We keep finding critical bugs (even just last week: PipeWire's MIDI system for control surfaces is pretty much unusable). PipeWire moves quickly, but I suggest not relying on it at this point in time, especially for ultra-low latency.
In any case, I think you should aim to minimize the stack as much as possible. With less software involved, there are fewer things that can go wrong.
Which brings me to:
Why is that needed? Ardour has a built-in websocket interface and one can already do custom web interfaces. There is also direct support for scenes and snapshots built into Ardour.
This is not usually needed. Especially if you want to run 64 tracks on a system with fewer than 64 cores, you likely waste more resources by excluding some cores. Besides, the Linux scheduler does an amazing job.
Sadly not enough of one for the purposes of a live console. No filtering ability (either on save or, more commonly, on recall), no scene safes, etc. And that is just scratching the surface, really. It is developing, and I keep wanting to look at it myself but never have time.
1. Audio Core:
PipeWire vs. JACK: My choice of PipeWire is driven by the need for automatic, dynamic I/O routing to create a turnkey system, a capability I believe PipeWire's WirePlumber simplifies compared to the scripting JACK would require. I am 90% inclined to test PipeWire first while monitoring xruns aggressively.
Question for you:
For dynamic I/O management in a turnkey system, do you know of a less complex scripting solution for JACK that rivals PipeWire's flexibility?
2. Middleware Daemon
The proprietary middleware is essential because Ardour's native scene/snapshot functionality lacks the filtering and "scene safe" capabilities required for a professional live console (as noted by Seablade). Furthermore, the middleware layer, communicating via the standard OSC protocol and WebSockets, is designed to establish a clear legal boundary between the GPL core and our proprietary add-on marketplace (if I build one in the future), supporting the open-core business model.
3. CPU Pinning System
This advanced pinning system is a design choice driven by the demanding stability goal: <2 xruns/hour at 5-10 ms latency on commodity hardware. While the Linux scheduler is very good from what I understand, using isolcpus provides the hard isolation necessary to prevent background OS processes from causing micro-interruptions on the real-time audio threads. This is the gold standard for guaranteed RT performance. Am I wrong?
Hi @seablade, thank you for your feedback. Your confirmation regarding PipeWire's limitations for live use is duly noted. I answered Robin in detail.
Obviously… I am looking for contributors to develop this advanced scene management… If you are ever interested in contributing to the functional design or development of this component, please know my door is always open.
I'd use Ardour's built-in ALSA backend, which supports multiple soundcards just fine and allows for MIDI device hotplug.
Well, that could be added. Also, if you think the low-level abstraction which Ardour currently provides is sufficient for higher-level external control, I highly recommend looking into Lua scripting to implement the features on Ardour's side.
Especially since you already have a Lua interpreter as a headless wrapper to begin with.
A Lua script can directly access libardour internals, and offer complete control.
Please reconsider and avoid any proprietary components at all costs. As Ardour shows nicely, a free/libre software project can be commercially successful.
Having worked on RT audio systems over the past two decades, I can say: yes, you are. At least in the latency range that you aim for.
Besides, audio threads should not be preempted in the first place, and the main audio I/O thread pretty much does nothing. I suggest focusing on IRQ scheduling and proper hardware (no NMIs, the possibility to disable C1E states in the BIOS, a dedicated hardware IRQ for the soundcard, …). Those factors are much more significant.
One reason to pin threads can be modern CPUs with P/E cores. Then again, you may want to avoid those CPUs to begin with (or disable E-cores along with hyperthreading)…
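And if pinning is needed anyway, it does not require isolcpus; on Linux the affinity syscall is enough. A minimal, illustrative Python sketch (in a real engine you would set affinity on the audio thread itself, e.g. via pthread_setaffinity_np in C):

```python
import os

def pin_to_cpu(cpu: int) -> None:
    # 0 means "the calling thread/process" for the Linux affinity syscalls
    os.sched_setaffinity(0, {cpu})

# Pin to the lowest CPU we are currently allowed to run on, so the
# sketch works regardless of core count or prior restrictions.
target = min(os.sched_getaffinity(0))
pin_to_cpu(target)
print(os.sched_getaffinity(0))  # now a single-CPU set
```

This is Linux-only, and of course says nothing about IRQ affinity, which (as noted above) matters more.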
Thank you so much @x42 for your feedback and precious technical advice. I have revised the entire architecture, moving away from PipeWire and integrating the proprietary logic layer directly into Ardour via Lua scripts, as you suggested.
This new direction, focusing on JACK2/ALSA and a GPL Core, is much more robust and community-friendly. The project can now officially join the FOSS and Free world!
To answer your question: I need the flexibility to dynamically map physical audio inputs (e.g., analog input 10) to Ardour track inputs (e.g., Track 1) at runtime via scripting, to simulate real-world live console patching.
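As a rough sketch of what that runtime patching could look like with JACK, one could shell out to jack_connect. The port names here are hypothetical examples; real names depend on the hardware and the Ardour session:

```python
import subprocess

def patch_cmd(hw_input: int, track: int) -> list[str]:
    """Build a jack_connect command mapping a physical input to an
    Ardour track input.  Port names are hypothetical; check with
    jack_lsp for the actual names exposed by your system/session."""
    src = f"system:capture_{hw_input}"
    dst = f"ardour:Track {track}/audio_in 1"
    return ["jack_connect", src, dst]

def apply_patch(hw_input: int, track: int) -> None:
    # Fails loudly (CalledProcessError) if the connection is rejected
    subprocess.run(patch_cmd(hw_input, track), check=True)

# e.g. route analog input 10 to Track 1:
# apply_patch(10, 1)
```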
I have a follow-up question based on input from @seablade, who pointed out that Ardour's native scene/snapshot system lacks critical live features like filtering and "scene safes" (protections). I recognize that this is essential for a professional live console.
I am grateful for this input. My main doubt is:
Should I proceed with developing a complex, custom Lua script to handle all the scene/snapshot logic, filtering, and safes? (This means bypassing Ardour's native snapshot API.)
Or is the Ardour community planning to address these live-sound features natively in the near future, which might align with my vision?
Any advice on the best path forward (custom script vs. waiting for core development) would be greatly appreciated.
Thank you again,
and welcome me to the FOSS world!
My use case would be less FOH and more submixer and creative tool.
Here is a question though: why Ardour?
You want a mixer, right? Why drag along an entire DAW?
Edit:
Another question about the middleware daemon: Open Stage Control already allows for this kind of scripting. I run it headless on a Pi connected to a synth. I just have to open the website hosted there and I can control all the parameters and preset management from my iPhone.
Just asking those questions, because you are clearly further along the design path than me.
Also, an LLM used for designing and researching will always be a bootlicker; it rarely asks critical questions or provides you with alternative paths to explore.
I compared several approaches and sought opinions and clarifications from a number of skilled individuals. It appears that rebuilding the core is more challenging than utilizing a stable one, such as Ardour. With Ardour, you already have everything you need, including metering.
If I understand correctly, scene safes are used to only load/restore partial state,
i.e. retain fader levels or FX, or … when applying a given scene.
Is that what you refer to?
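If so, the core of it is just a filtered merge of scene state over live state. A rough Python sketch of a safe-aware recall (the data layout is made up for illustration):

```python
def recall_scene(current: dict, scene: dict, safes: set) -> dict:
    """Apply a scene, but keep 'safed' parameters at their live values."""
    return {
        key: current[key] if key in safes else value
        for key, value in scene.items()
    }

live  = {"fader": -3.0, "mute": False, "eq_hi": 2.0}
scene = {"fader": 0.0,  "mute": True,  "eq_hi": 0.0}

# With the fader safed, recall keeps the live fader level:
recall_scene(live, scene, safes={"fader"})
# → {"fader": -3.0, "mute": True, "eq_hi": 0.0}
```

Save-time filtering would be the mirror image: drop the safed keys before storing the scene.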
I'm also wondering if it would perhaps be more interesting to base this project on Harrison LiveTrax 2 (another Ardour derivative, complete source available), which has a similar target audience.
unless… do you plan to have timeline automation? support multitrack recording?
I'm not going to create any derivative, but rather use a stable, updated software as the foundation of my OLMS. Maybe I'm wrong, but it seems to me that Ardour has everything: busses, plugins, and so on… By looking at LiveTrax 2 I see a recorder + faders. Maybe it's just a different UI with fewer parameters? I'm not sure it's the right base for OLMS. Again, am I wrong? If so, may I ask you why?
On the other hand, why should Ardour be the wrong choice to start from? What's the issue?
I have never been able to solve this fundamental problem:
The user must purchase a computer I/O system with enough microphone preamps and enough monitor outputs to handle a reasonably sized band. And you have to spend a lot more if you want recall of the basic elements (mic gain, 48v) that you get with a live mixer.
What you often find is that a small digital mixer is the most practical way to get a lot of mic inputs and monitor outputs from your computer. So you're now faced with carrying a perfectly serviceable live console around, just to support your virtualized one.
That said, there's plenty of room to do something cool in the computer that complements the live mixer.
Harrison LiveTrax and Apple MainStage are 2 products that complement the digital mixer without trying to "be" the mixer. And I see a lot of live rigs running Ableton Live for backing tracks and video playback.
Using AI to vibe-spec and vibe-code is one thing, but copy-pasting AI in a conversation between humans is lame. Sorry if it's not the case, but the AI smell is so strong in your last answer.
Been trying to avoid being a naysayer, but I have to agree with Ben for the most part.
I am no stranger to both the budget end and the high end of mixers in live sound, given my professional history and work. In general I love aspects of some things, and have no problem with using, say, Mixbus as a live mixer for myself, but I would be hard pressed to say I can build a mixer of any size cheaper than the options already out there. Proprietary lock-in isn't so much an issue when dealing with any of these in my experience; it isn't that it doesn't exist, but rather that you are really buying the tool and using the tool, and when the tool dies you buy a new one. Yes, it takes time to rebuild the libraries of presets etc. you may have used to move quickly, but that brings me to my next point:
What has interested me more is controlling mixers in a somewhat standard way. There are two aspects to this. One is using software similar to TheaterMix, for instance, which can help manage aspects of live mixing consoles, including EQ, dynamics, etc., and does an "ok" job of allowing this to potentially be transferred between consoles (I haven't tried that latter part, just the managing of it on a single console). The EQ etc. always sounds different between consoles, but like anything with presets, these should be starting points only.
The second aspect is the reason some people decide to use things like Waves Soundgrid for processing: it allows you to use the same tools on any console. I don't necessarily need or want a specific EQ for mixing live, but what DOES benefit me is having presets I can pull up for any channel quickly and easily and know they will sound the same on any console. When you are running shows quickly this can save quite a lot of time. Sadly I don't think Soundgrid is there yet for this to truly apply, and while there are other options, I haven't spent a lot of time exploring this due to the requirement of really dedicating a computer to it.
Finally, I just don't know how you would make money on such a project to support yourself. There have been similar closed-source projects along these lines before; Software Audio Mixer is one that gained steam for a while, but honestly it just never holds up.
This could also be achieved with a combination of PipeWire, jackmix (or any other headless mixer), and Carla. My idea was to get a Pi, use the trio for the audio layer, and control it from an Open Stage Control interface via MIDI and OSC. Open Stage Control would also be able to implement the logic, like scene safes, presets, etc.
There are a lot of pitfalls in all this, and I do not think it will be suitable as a big console replacement in professional venues, but rather as a creative mixer for dub mixing, electronic music live sets, etc.
The software-defined mixer kind of screams for an open source counterpart. Something that allows the user to either make the best of their junk audio interface and single-board computer collection or, thinking this further, a modular system consisting of open source hardware and software: open source interfaces for I/O, a software layer for routing and controls. Maybe even an analogue insert effects system.
Open source hardware would also allow a market of high-end boutique producers to make money off this. Or just cheap mass-market clones. They could coexist. OK, getting off the rails here a little. But I think @x-radios seems to be driven by a similar idea.
@GenGen I really appreciate the alternative suggestion using PipeWire, jackmix, and Carla! That is definitely a viable approach, especially for lighter setups like a Raspberry Pi.
For OLMS, I'm currently committing to Ardour Headless because:
Integrated Logic: Ardour provides native Lua scripting for complex session logic (like our proposed bank and scene management), which is safer and less complex than building custom logic with an external service.
OSC Protocol: Ardour offers robust, integrated OSC support right out of the box, reducing the need for middleware or translation layers.
Proving Ground: Ardour's core engine is a proven, high-performance platform on Linux RT, giving us a stronger foundation for the professional-grade stability the project aims for.
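On the OSC point: driving Ardour's OSC interface needs no middleware at all, since an OSC message is just a few padded byte strings over UDP. A minimal Python sketch (/transport_play is from Ardour's OSC documentation; 3819 is Ardour's default OSC port, but check your settings):

```python
import socket
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC spec."""
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, *args: float) -> bytes:
    """Encode an OSC message with only float32 arguments."""
    tags = "," + "f" * len(args)
    packet = osc_pad(address.encode()) + osc_pad(tags.encode())
    for value in args:
        packet += struct.pack(">f", value)  # OSC floats are big-endian
    return packet

def send_to_ardour(msg: bytes, host: str = "127.0.0.1", port: int = 3819) -> None:
    # 3819 is Ardour's default OSC listening port (UDP)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(msg, (host, port))

# e.g. start the transport:
# send_to_ardour(osc_message("/transport_play"))
```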
I fully agree that this kind of software-defined mixer needs an open-source counterpart! Thank you for the insightful feedback.
@baptiste I only ever use it for the language and technical barrier. Anyway, it's just an interpreter; I never copy and paste just because I don't have time. I always evaluate every single inbound and outbound sentence. My presence is 100% human, and everything in this topic is always double or triple filtered by my little human mind. Thank you for your open and sincere answer.
You are absolutely right about the cost-effectiveness and the challenges of sustaining a project like this.
At this point, OLMS is less of a serious business venture and more of a passion project and an open architecture experiment. The initial business idea largely dissolved when the extreme difficulty of achieving genuinely professional, ultra-low-latency / zero x-run performance was fully realized.
However, the architecture remains fascinating, and we can't predict what new low-level APIs or hardware tools might emerge in the future that could pair perfectly with this software core.
So, why not keep building it? I'll see where the journey takes me.