Open Live Mixing System - Don't buy a mixer: build one instead

Verona (IT), December 2, 2025. From the desk of Francesco Nano (sound engineer, holistic operator)

Good morning everyone, and a special greeting to @paul, with whom I have already had the opportunity to interact by email.

THE IDEA:

I would like to realize this idea: a real-time digital mixing system built on Linux RT and Ardour headless, leveraging dynamic CPU core allocation for stability and controlled entirely via a custom web UI over OSC.

In practice, the idea is to mix live, for free, starting from a mini PC and a sound card, in a stable system that minimizes latency and xruns on generic hardware without dedicated DSPs. I know it is an ambitious goal, and perhaps not even worth the effort, but I enjoy the challenge, and maybe I will create something truly useful and appreciated in the world of music, especially the open and free one.

TECHNICAL ARCHITECTURE

To make the project concrete, I have defined the following architecture (full documentation available on GitHub: https://github.com/Open-Live-Mixing-System-OLMS/Open-Live-Mixing-System/blob/main/README.md ).

Technology Stack:

  • OS: Linux RT (Arch) with PREEMPT_RT kernel
  • Audio Core: PipeWire + WirePlumber
  • Engine: Ardour 8 Headless
  • Protocol: OSC
  • Interface: Custom Web UI (HTML5/JS/CSS)

3-Layer Structure:

  • Core (GPL): Ardour headless (48ch Template, static routing).
  • Middleware (Proprietary): Node.js/Python Daemon for state, scene, snapshot, and bank manager.
  • Interface (Proprietary): Custom Web UI.

Real-Time (RT) Optimization:

  • CPU pinning system with isolated cores (dedicated to IRQ, PipeWire, Carla, Ardour).
  • Target: latency 5–10 ms @ 128 samples, <2 xruns/hour.

Current Phase: Initial setup completed (Arch Linux + Ardour 8 + PipeWire + XFCE4). Next steps: 16ch template configuration, routing with PipeWire Null-Sink/ALSA loopback, and OSC testing with Open Stage Control.
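
Since OSC is the control protocol, a first smoke test needs no extra libraries: an OSC 1.0 message is just a null-padded address string, a type-tag string, and big-endian arguments. Below is a minimal sketch; the `/strip/fader` path and the default UDP port 3819 are my understanding of Ardour's OSC surface, so verify them against your Ardour version's documentation before relying on them.

```python
import struct

def osc_pad(b: bytes) -> bytes:
    # OSC aligns every field to a 4-byte boundary.
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode a minimal OSC 1.0 message with int32/float32 arguments."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, bool):
            raise TypeError("booleans are not handled in this sketch")
        if isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)
        else:
            raise TypeError(f"unsupported OSC argument: {a!r}")
    return (osc_pad(address.encode() + b"\x00")
            + osc_pad(tags.encode() + b"\x00")
            + payload)

# Set strip 1's fader position to 0.8 (path/port assumed from Ardour's OSC
# docs; check your version). Send it with e.g.:
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, ("127.0.0.1", 3819))
msg = osc_message("/strip/fader", 1, 0.8)
```

Open Stage Control can then be pointed at the same port, which makes it easy to compare what it sends against hand-built packets while debugging.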

BUSINESS SIDE - THE DUAL PURPOSE:

Imagining this project, I see two possible parallel evolutions that do not exclude each other:

A) Creating a semi-professional mixer that is extremely versatile, economical and, above all… free! I would really like to know that talented musicians, lovers of the open world, use this tool in their live performances and promote new and good music through tools born with the intention of freeing rather than enslaving.

B) Adopting a mixed Open-Core system in which the GPL licenses of all the software involved, first among them Ardour, are respected AND, at the same time, evaluating whether it is possible to create a marketplace for plugins and additional components (both free and premium), developing a business model similar to WordPress's, where the base system is free but additional components can be paid.

In turn, this model is designed to finance two real necessities:

  • my personal sustenance (I do not intend to get rich, just to find a way to live decently, and I would not mind if it were also through this project)

  • financing the development of a new type of license for intellectual works, giving the music world the possibility to create an alternative to copyright-based models as we know them today, without challenging them: a different, niche, complementary way. This part is off topic, so I will not go into detail in this discussion, but I want it on record that the business side of this project is TRULY philanthropic.

A COUPLE OF PREMISES:

My past mistakes:

After about two months of reasoning and interaction with various devs, I understood that I sometimes took the wrong approach: contacting people privately (outside this forum), sometimes being too optimistic and stubborn, sometimes insisting by re-contacting people when I did not get responses. I am new to the development community and I am learning a lot. I intend to restart with an approach better suited to the environment I am in.

Human Developers VS A.I. :

I also had the opportunity to get to know the world of programmers more closely, including their intolerance of requests for free work and of exploitation, which is, obviously, more than legitimate and well motivated. Despite this, I often felt a strong aversion, almost a deep resentment (not from Paul, who was always kind), towards topics such as:

  • let’s create a project together and then try to get something out of it
  • vibe coding
  • let’s create specifications together
  • having the sacred fire for music (which apparently counts for less than zero compared to writing code), etc.

In short, I am not a programmer, I am a musician, a sound engineer, a lover of the human race, music, and life in general. And I live my own time. And in the precious time I have been given to live, I find myself with:

  • ideas that push from within to come out
  • love for music
  • live and studio audio skills
  • project management abilities
  • empty wallet for this project
  • sympathy for the world of programming and…
  • A.I.

So I kindly ask, if possible, that you not attack me just because I am trying to put all these things together. Let's talk about technical features and implementation instead. :+1:

For the record:

Before writing this post, I spent months with the AI just to understand what I was trying to do, in order to formalize a collaboration proposal to developers, reldev, and UX designers.

When my ideas were slightly clearer and I tried to interface with the programming world, I did not manage to match my interests and assets with the professionals I contacted.

So, instead of abandoning my romantic dream or turning it into a startup built on fundraising, I chose something that amused me more: I fired up Visual Studio Code, installed Cline (which allows interfacing with the AI) and, through vibe coding, installed an Arch system with Ardour, PipeWire, ALSA, etc. (obviously I had over a year of experience with this kind of approach, its limits, defects and opportunities). The weapon is loaded and I will not hesitate to use it LoL :grinning:

I leave my door open

I would truly be happy if some serious developer joined my adventure, but I have understood that my approach usually creates more intolerance than sympathy among professionals :innocent: . I am like you… but I am not one of you. I understand it, I take note of it, I accept it, and I simply take a different path.

But I willingly leave the door open to collaboration with those who can integrate proactively into my project, with this awareness: if I, who know nothing about programming, manage to do things that were unimaginable for me, then imagine, dear professional dev, how much more you could do, how much better, and in less time!

Anyway, I am aware of the problems of AI-generated code, including:

  • low quality and inaccuracy of the code
  • logical errors
  • potential vulnerabilities

On the first two, after over a year of work with the tool, I have developed my own policies to maintain high quality. On the third point, I do not have the skills to evaluate whether code is secure or not (so it is an area where more expert eyes will certainly be needed).

Then there is the current ethical question on the table: the fact that if you use AI, you don't pay humans. I resolve it this way:

  • I would not have the budget to pay you anyway at this moment
  • we are facing an epochal change, like when machines replaced the horse. Those who had carriages were screwed, but when technology moves forward you can't pretend nothing is happening

I pray that Skynet does not take control and that Matrix is only a good movie and not the description of what will come.

I live my time, aware that with the support of those who know more than me, everything would be easier. So, once again: you are welcome, if you wish to join me with respect and proactivity. I will be happy to share my best practices on the use of AI so that you can use them as a lever.

The hope remains of building a strong core team with which there could also be economic rewards, as mentioned before. Maybe I will meet someone in this forum. Or maybe not. I am open.

CONCLUSIONS

I officially start the work with this post, which I will update simultaneously here and on linuxmusicians.com, with the goal of letting the world of Linux musicians know about this project, finding sympathizers and supporters of my vision, sharing the results of my efforts for free and, perhaps, finding some new friends with whom to do things seriously.

Thank you very much for reading me so far. Francesco Nano


Sounds like a cool idea. Ardour has been used for FOH in the past, usually with hardware control surfaces and/or Open Stage Control. Are you aware of that project? It's somewhat related.

I'd not take that on stage at this point in time. We keep finding critical bugs (even just last week; PipeWire's MIDI system for control surfaces is pretty much unusable). PipeWire moves quickly, but I suggest not relying on it at this point in time, especially for ultra-low latency.

In any case, I think you should aim to minimize the stack as much as possible. With less software involved, there are fewer things that can go wrong.

which brings me to:

Why is that needed? Ardour has a built-in WebSocket interface, and one can already build custom web interfaces. There is also direct support for scenes and snapshots built into Ardour.

This is usually not needed. Especially if you want to run 64 tracks on a system with fewer than 64 cores, you likely waste more resources by excluding some cores. Besides, the Linux scheduler does an amazing job.


Seconded

Sadly not enough of one for the purposes of a live console. No filtering ability (either on save or, more commonly, on recall), no scene safes, etc. And that is just scratching the surface, really. It is developing, and I keep wanting to look at it myself, but never have time.

  Seablade

Hello Robin!

Thank you for your valuable feedback. I’m truly honored to have you join the discussion! :+1: :hugs:

You can find the complete technical specifications and the detailed RT allocation matrix here: https://github.com/Open-Live-Mixing-System-OLMS/Open-Live-Mixing-System/blob/main/PROJECT_SPECS.md

Here are replies based on the OLMS architecture:

1. Audio Core: PipeWire vs. JACK
My choice of PipeWire is driven by the need for automatic, dynamic I/O routing to create a turnkey system, a capability I believe WirePlumber simplifies compared to the scripting JACK would require. I am 90% inclined to test PipeWire first while monitoring xruns aggressively.

Question for you:
For dynamic I/O management in a turnkey system, do you know of a less complex scripting solution for JACK that rivals PipeWire’s flexibility?

2. Middleware Daemon
The proprietary middleware is essential because Ardour's native scene/snapshot functionality lacks the filtering and "scene safe" capabilities required for a professional live console (as noted by Seablade). Furthermore, the middleware layer, communicating via the standard OSC protocol and WebSockets, is designed to establish a clear legal boundary between the GPL core and a proprietary add-on marketplace (if I create one in the future), supporting the Open-Core business model.
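
To make the "scene safe" requirement concrete, here is a minimal sketch of what the recall filter has to do: apply a scene on top of the current mixer state while leaving any safed channel or parameter untouched. The data model (channel/parameter dicts, `"*"` for a whole-channel safe) is hypothetical, for illustration only, not Ardour's API.

```python
def apply_scene(current, scene, safes):
    """Return a new mixer state with `scene` applied on top of `current`,
    leaving any (channel, parameter) pair listed in `safes` untouched.
    `current`/`scene`: {channel: {param: value}}; `safes`: set of
    (channel, param) tuples, or (channel, "*") to safe a whole channel."""
    out = {ch: dict(params) for ch, params in current.items()}
    for ch, params in scene.items():
        if (ch, "*") in safes:
            continue  # whole channel is scene-safed: skip recall entirely
        for param, value in params.items():
            if (ch, param) in safes:
                continue  # this single parameter is safed
            out.setdefault(ch, {})[param] = value
    return out

state = {"ch1": {"fader": 0.8, "mute": False}, "ch2": {"fader": 0.5}}
scene = {"ch1": {"fader": 0.0, "mute": True}, "ch2": {"fader": 1.0}}
# Safe ch1's fader (e.g. the lead vocal) but allow its mute to change.
new = apply_scene(state, scene, safes={("ch1", "fader")})
```

Whether this logic lives in an external daemon or (as suggested later in this thread) in a Lua script inside Ardour, the filtering itself stays this simple; the hard part is mapping it onto real session state.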

3. CPU Pinning System
This advanced pinning system is a design choice driven by the demanding stability goal: <2 xruns/hour at 5–10 ms latency on commodity hardware. While the Linux scheduler is very good, from what I understand, using isolcpus provides the hard isolation needed to prevent background OS processes from causing micro-interruptions on the real-time audio threads. This is the gold standard for guaranteed RT performance. Am I wrong?
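
For what it's worth, the userspace half of such a scheme is small: `isolcpus` is a kernel boot parameter, and processes are then placed onto the isolated cores with the standard Linux affinity call. A minimal sketch (Linux-only; the real core numbers would come from the OLMS allocation matrix):

```python
import os

def pin_current_process(cpus):
    """Pin the calling process (and threads it spawns later) to the given
    CPUs. Linux-only: os.sched_setaffinity wraps sched_setaffinity(2)."""
    os.sched_setaffinity(0, set(cpus))  # 0 = the calling process
    return os.sched_getaffinity(0)

# Illustration only: re-apply the CPU set we already have, which exercises
# the same call you would use with e.g. {2, 3} for cores isolated via isolcpus.
available = os.sched_getaffinity(0)
pinned = pin_current_process(available)
```

Note that affinity alone says nothing about scheduling class; RT threads would additionally need SCHED_FIFO priority (via `chrt`, rtkit, or `os.sched_setscheduler`).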

Thanks again for the input!!!

Best regards,

Francesco Nano

Hi @seablade, thank you for your feedback. Your confirmation regarding PipeWire's limitations for live use is duly noted. I answered Robin in detail.

Obviously… I am looking for contributors to develop this advanced scene management. If you are ever interested in contributing to the functional design or development of this component, know that my door is always open.

Unity is strength!

All the best

What flexibility do you need?

I’d use Ardour’s built in ALSA backend (which supports multiple soundcards just fine) and allows for MIDI device hotplug.

Well, that could be added. Also, if you think the low-level abstraction that Ardour currently provides is insufficient for higher-level external control, I highly recommend looking into Lua scripting to implement the features on Ardour's side.

Especially since you already have a Lua interpreter as headless wrapper to begin with.
A Lua script can directly access libardour internals, and offer complete control.

Please reconsider and avoid any proprietary components at all cost. As Ardour shows nicely a free/libre software project can be commercially successful.

Having worked on RT audio systems over the past two decades, I can say: yes, you are :slight_smile: At least in the latency range that you aim for.

Besides, audio threads should not be preempted in the first place, and the main audio I/O thread does pretty much nothing. I suggest focusing on IRQ scheduling and proper hardware (no NMIs, the possibility to disable C1E states in the BIOS, a dedicated hardware IRQ for the soundcard, …). Those factors are much more significant.

One reason to pin threads can be modern CPUs with P/E cores. Then again you may want to avoid those CPUs to begin with (or disable E-cores along with hyperthreading)…

Thank you so much @x42 for your feedback and precious technical advice. I have revised the entire architecture, moving away from PipeWire and integrating the logic layer (formerly the proprietary middleware) directly into Ardour via Lua scripts, as you suggested.

This new direction, focusing on JACK2/ALSA and a GPL Core, is much more robust and community-friendly. The project can now officially join the FOSS and Free world! :sweat_smile:

To answer your question: I need the flexibility to dynamically map physical audio inputs (e.g., analog input 10) to Ardour track inputs (e.g., Track 1) at runtime via scripting, to replicate real-world live-console patching.
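
As an illustration of that patching model, the runtime state can be as simple as a dictionary that is re-applied on every change. This is a hypothetical sketch, not Ardour's API: in practice each entry would be applied through Ardour's Lua or OSC routing calls, and the port names below are placeholders.

```python
class PatchBay:
    """Minimal runtime patch map from physical inputs to track inputs.
    (Illustrative only; applying an entry to the engine is out of scope.)"""

    def __init__(self):
        self._map = {}  # physical input port -> track input port

    def connect(self, phys_in: str, track_in: str):
        # Re-patching simply overwrites the previous destination.
        self._map[phys_in] = track_in

    def disconnect(self, phys_in: str):
        self._map.pop(phys_in, None)

    def target(self, phys_in: str):
        return self._map.get(phys_in)

bay = PatchBay()
bay.connect("system:capture_10", "Track 1/audio_in 1")
bay.connect("system:capture_10", "Track 2/audio_in 1")  # repatch at runtime
```

Keeping the map as plain data also makes it trivial to store in a scene, which ties patching into the same scene/safe logic discussed above.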

I have a follow-up question based on @seablade 's input, who pointed out that Ardour’s native scene/snapshot system lacks critical live features like filtering and ā€˜scene safes’ (protections). I recognize that this is essential for a professional live console.

I am grateful for this input. My main doubt is:

  1. Should I proceed with developing a complex, custom Lua script to handle all the scene/snapshot logic, filtering, and safes? (This means bypassing Ardour’s native snapshot API.)
  2. Or is the Ardour community planning to address these live-sound features natively in the near future, which might align with my vision?

Any advice on the best path forward (custom script vs. waiting for core development) would be greatly appreciated.

Thank you again,
and welcome me to the FOSS world! :+1: :hugs:


I've been toying with a similar idea for a while. Especially with neat interfaces like this coming out:
https://www.modularaudiotools.com/

My use case would be less FOH, but submixer and creative tool.

Here is a question, though: why Ardour?
You want a mixer, right? Why drag along an entire DAW?

Edit:
Another question, about the middleware daemon: Open Stage Control already allows for this kind of scripting. I run it headless on a Pi connected to a synth. I just open the website hosted there and can control all the parameters and preset management from my iPhone.

I'm just asking these questions because you are clearly further along the design path than me.
Also, an LLM used for designing and researching will always be a bootlicker; it rarely asks critical questions or offers alternative paths to explore.

Hi @GenGen welcome and thank you for your comment

I compared several approaches and sought opinions and clarifications from a number of skilled people. It appears that rebuilding the core is more challenging than using a stable one such as Ardour. With Ardour, you already have everything you need, including metering.


If I understand correctly, scene safes are used to load/restore only partial state,
i.e. to retain fader levels, or FX, or … when applying a given scene.

Is that what you refer to?

I'm also wondering whether it would perhaps be more interesting to base this project on Harrison LiveTrax 2 (another Ardour derivative, complete source available), which has a similar target audience.

unless… do you plan to have timeline automation? support multitrack recording?

Yes it is.

I'm not going to create any derivative, but to use a stable, maintained piece of software as the foundation of my OLMS. Maybe I'm wrong, but it seems to me that Ardour has everything: busses, plugins, and so on. Looking at LiveTrax 2, I see a recorder + faders. Maybe it's just a different UI with fewer parameters? I'm not sure it's the right base for OLMS. Again, am I wrong? If so, may I ask why?

On the other side, why would Ardour be the wrong choice to start from? What's the issue?

I really appreciate your help.
Many thanks.

Francesco Nano

This has been a passion of mine for many years.

I have never been able to solve this fundamental problem:

The user must purchase a computer I/O system with enough microphone preamps and enough monitor outputs to handle a reasonably sized band. And you have to spend a lot more if you want recall of the basic elements (mic gain, 48v) that you get with a live mixer.

What you often find is that a small digital mixer is the most practical way to get a lot of mic inputs and monitor outputs from your computer. So you’re now faced with carrying a perfectly serviceable live console around, just to support your virtualized one.

That said, there’s plenty of room to do something cool in the computer that complements the live mixer.

Harrison LiveTrax and Apple MainStage are 2 products that complement the digital mixer without trying to ā€˜be’ the mixer. And I see a lot of live rigs running Ableton Live for backing tracks and video playback.

-Ben


Never mind me; I only just recalled that you want to run this headless.

(I thought due to the UI layout and dedicated direct i/o toggle mode)


You are absolutely right about budget mixers. OLMS is not a competitor to them; it's an alternative to expensive, limited consoles.

Our focus is on three main advantages:

DSP power: it uses standard x86-64 CPUs for unrestricted plugin scalability, offering the best performance-to-cost ratio.

Open Logic (GPL): The system’s Lua scripts are fully accessible, providing total customization and zero vendor lock-in for complex automations.

Stability: We use Linux PREEMPT_RT for hyper-optimized, low-latency performance essential for live use.

Let’s see if these will be enough…

Using AI to vibe-spec and vibe-code is one thing, but copy-pasting AI output into a conversation between humans is lame. Sorry if that's not the case, but the AI smell in your last answer is very strong.


I've been trying to avoid being a naysayer, but I have to agree with Ben for the most part.

I am no stranger to either the budget end or the high-end range of live-sound mixers, given my professional history and work. In general I love aspects of some things, and have no problem using, say, Mixbus as a live mixer for myself, but I would be hard pressed to say I can build a mixer of any size cheaper than the options already out there. Proprietary lock-in isn't so much an issue with any of these, in my experience. It isn't that it doesn't exist, but rather that you are buying a tool and using the tool, and when the tool dies you buy a new one. Yes, it takes time to rebuild the libraries of presets etc. you may have used to move quickly, but that brings me to my next point:

What has interested me more is controlling mixers in a somewhat standard way. There are two aspects to this. One is using software similar to TheaterMix, for instance, which can help manage aspects of live mixing consoles, including EQ, dynamics, etc., and does an 'ok' job of potentially allowing settings to be transferred between consoles (I haven't tried that part, just the managing of a single console). The EQ etc. always sounds different between consoles, but like anything with presets, these should be starting points only.

The second aspect is the reason some people decide to use things like Waves SoundGrid for processing: it allows you to use the same tools on any console. I don't necessarily need or want a specific EQ for mixing live, but what DOES benefit me is having presets I can pull up for any channel quickly and easily, knowing they will sound the same on any console. When you are turning shows around quickly, this can save quite a lot of time. Sadly, I don't think SoundGrid is there yet for this to truly apply, and while there are other options, I haven't spent much time exploring them, due to the requirement of really dedicating a computer to it.

Finally, I just don't know how you would make enough money from such a project to support yourself. There have been similar closed-source projects along these lines before (Software Audio Mixer is one that gained steam for a while), but honestly it just never holds up.

Seablade


@x42 do you think this should be added in the upcoming release?

This could also be achieved with a combination of PipeWire, jackmix (or any other headless mixer) and Carla. My idea was to get a Pi, use that trio for the audio layer and control it from an Open Stage Control interface via MIDI and OSC. Open Stage Control would also be able to implement the logic, like scene safes, presets, etc.

There are a lot of pitfalls in all this, and I do not think it will be suitable as a big-console replacement in professional venues, but rather as a creative mixer for dub mixing, electronic music live sets, etc.

The software-defined mixer kind of screams for an open-source counterpart: something that allows users either to make the best of their junk audio interface and single-board computer collection or, taking this further, a modular system consisting of open-source hardware and software. Open-source interfaces for I/O, a software layer for routing and controls, maybe even an analogue insert-effect system.
Open-source hardware would also allow a market of high-end boutique producers to make money off this. Or just cheap mass-market clones. They could coexist. OK, I'm getting off the rails here a little. But I think @x-radios seems to be driven by a similar idea.


@GenGen I really appreciate the alternative suggestion using PipeWire, jackmix, and Carla! That is definitely a viable approach, especially for lighter setups like a Raspberry Pi.

For OLMS, I’m currently committing to Ardour Headless because:

Integrated Logic: Ardour provides native Lua scripting for complex session logic (like our proposed bank and scene management), which is safer and less complex than building custom logic with an external service.

OSC Protocol: Ardour offers robust, integrated OSC support right out of the box, reducing the need for middleware or translation layers.

Proven platform: Ardour's core engine is a proven, high-performance platform on Linux RT, giving us a stronger foundation for the professional-grade stability the project aims for.

I fully agree that this kind of software-defined mixer needs an open-source counterpart! Thank you for the insightful feedback.


@baptiste I always use it because of the language and technical barriers. Anyway, it's just an interpreter; I never copy and paste just because I don't have time. I always evaluate every single inbound and outbound sentence. My presence is 100% human, and everything in this topic is double- or triple-filtered by my little human mind. Thank you for your open and sincere answer.

Francesco Nano