Why not merge JACK and ALSA?

Wouldn’t it be nice to have one audio system on Linux that could compete with the excellent, usable CoreAudio, and be even better thanks to the freedom of JACK’s software routing?

Why not merge JACK into ALSA (kernel-level) and make it a JASA or ALCK. :smiley:

Since @paul is also a JACK dev, I hope it’s ok to ask here: is this even possible, or a thing to hope for?

Maybe you could even use the same server for video as well, and have a unified multi-media routing server.
Pipewire multimedia server

I think you will find that most distributions either have already switched to Pipewire, or have plans to switch in the near future. There are a few JACK use cases which Pipewire does not fully cover, but the latest version covers probably 80% or more of the use cases for both PulseAudio and JACK.

Paul has not been involved in JACK development for many years. Falktx is the current JACK maintainer.


The aim and use-cases for ALSA and Jack (and desktop sound servers like PulseAudio) have always been distinct. This is my take on it:

ALSA is the low level sound driver that interfaces to the hardware. The equivalent on Windows, roughly speaking, is ASIO. ALSA does have some advanced features under the hood which extend beyond this, but its primary role is to act as the interface between the hardware and applications. Like ASIO, ALSA normally only allows a single application to access a device at a time. This means only one application at a time can play back music and record.
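You can see this single-client behaviour for yourself by trying to open the same raw hardware device twice. (The device name `hw:0,0` and file name below are examples; `aplay -l` lists what your system actually has.)

```shell
# List the real hardware devices ALSA knows about
aplay -l

# Terminal 1: play a file straight to the raw hardware device,
# bypassing any dmix/PulseAudio plumbing
aplay -D hw:0,0 test.wav

# Terminal 2, while the first is still playing: the second open
# typically fails with "Device or resource busy" (EBUSY)
aplay -D hw:0,0 test.wav
```

This is exactly the exclusive access the rest of the thread is about: whoever opened `hw:0,0` first owns it until they close it.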

Jack is meant to sit on top of a driver layer, like ALSA, and provide advanced features such as routing and tempo distribution. This includes the ability to easily route multiple applications to and from one ALSA device to allow simultaneous use. Jack integrates well with ALSA already and so, arguably, your request has already been met.


Jack is not good for Desktop audio. Jack is designed for an environment where the user is directly controlling and configuring the audio and MIDI routing, with fine-grained control. This is not useful for desktop audio where people expect things to “just work” without complex routing needing to be configured.

That is where sound servers like Pulseaudio come in. Pulseaudio has many features but, basically, it takes an opinionated view of audio routing and sets it up for you automatically. Note that it often gets things wrong, especially on pro-audio setups, but for the majority of users it works well. Pulseaudio is also designed to work with ALSA.

The issue comes when you want to use both pro-audio routing with Jack, and Desktop audio with Pulseaudio as the two are slightly incompatible and you can’t (or couldn’t) use them at the same time as they both want to take exclusive control of the ALSA device. There are solutions to that which work well, but often involve a bit of configuration to work.

So, if I can interpret what you probably actually want, I don’t think it’s anything to do with ALSA, per se. And I can’t see the scope of ALSA ever changing. What I think you actually want is a unified audio environment which seamlessly works as a desktop sound server which “just works”, and can be used with audio production apps in the same way as Jack does, allowing fine-grained control of complex routing.

That system is Pipewire. It, basically, aims to replace Pulseaudio and Jack with a single, integrated system for audio and MIDI (and also video). It is still, IMO, not yet fully mature, but it is getting there and, as @ccaudle points out, it’s increasingly becoming the default on many Desktop Linux distros.



This is out of the scope of either Ardour or Jack.
It’s literally a Linux question.
The primary reason Jack and Pipewire and PulseAudio exist is that ALSA is a single-client audio driver.
When Jaroslav Kysela implemented ALSA as the first full-duplex driver to replace the OSS drivers, he made some architectural choices that will haunt us as long as Linux uses ALSA as its fundamental driver.

Since only ONE program can open the ALSA driver to record and play audio, gaining exclusive access, this limited Linux audio from day one.

Paul wrote Jack, which has the additional capability of routing audio between applications; Jack sits on top of ALSA as an audio server for user-space programs. This happened to fit the Unix philosophy of making pipelines of single-function programs, so many Linux music creators would patch a separate drum-machine program like Hydrogen into an audio recording application like Ardour, or even run the entire Pd virtual universe into a recording or mixing application. Jack also can communicate via MIDI clock and allow for synchronizing different applications across the system.
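That kind of patching is scriptable from the command line with JACK's example clients. The client and port names below are illustrative; run `jack_lsp` on your own system to see the real ones.

```shell
# List every port JACK currently knows about, with connections shown
jack_lsp -c

# Patch Hydrogen's stereo outputs into a pair of Ardour track inputs
# (port names are examples -- copy the exact names from jack_lsp)
jack_connect "Hydrogen:out_L" "ardour:Audio 1/audio_in 1"
jack_connect "Hydrogen:out_R" "ardour:Audio 1/audio_in 2"

# Undo a patch
jack_disconnect "Hydrogen:out_L" "ardour:Audio 1/audio_in 1"
```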

When you refer to the CoreAudio design, I agree with you, but trying to convince the entire Linux universe to switch driver models from ALSA to a more comprehensive one seems like a herculean task, rather like herding cats.
In any case, the scope is the Linux audio driver, much larger than our little community here.

My sense is that PulseAudio is consumer-oriented: system sounds, your browser’s audio, etc. We are accustomed to playing audio from multiple sources and not having to diagnose why no audio is playing in our browser window. Jack was always oriented towards audio production applications.
Pipewire seems like the PulseAudio people discovering Jack for the first time, and NIH syndrome taking over.
Most modern distros have a virtual spaghetti-mess of dependencies, including pulse-pipe, pipe-pulse and who knows what, but the usual culprit stopping Jack from starting is that Pulseaudio already OWNS the audio interface (thanks, ALSA!).
Don’t even start me on ALSA plugins and dmix and the absolutely bonkers architectural excuses ALSA carries with it to this day, let alone channel maps, etc… it’s a proper mess with no adults in the room.

When in doubt, set your system audio device to the built-in “soundcard”, and just directly connect your Ardour or other program to the ALSA driver for your real soundcard, and it WILL WORK.

(Although: you may have to open alsamixer in your terminal and manually turn up your audio interface inputs when you wonder why you get no sound from them. ALSA inconveniently names those inputs with surround-sound nonsense channel-names, which is another annoying thing: it presumes multi-channel audio interfaces are for consumer surround-sound listening and not making music… argghhh.)
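The alsamixer step can also be done non-interactively with amixer. Control names vary per card, so the `'Capture'` and `'Mic'` names below are just examples; list what your interface actually exposes first.

```shell
# Show the simple mixer controls on card 0 -- names differ per interface
amixer -c 0 scontrols

# Unmute and raise the capture side (control names are examples)
amixer -c 0 set 'Capture' 80% cap
amixer -c 0 set 'Mic' 75% unmute
```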

Next level, trying to get Jack started, in which case you can have all the candy you want.


Pipewire is merely another “sound server” sitting on TOP of ALSA.
It doesn’t replace ALSA, and as such it’s the same as Jack or PulseAudio.

The need for sound servers comes down to:

  1. ALSA’s single-client architecture
  2. Novel inter-app audio routing, which you need something like Soundflower for on the Mac, as CoreAudio doesn’t provide this functionality

CoreAudio is cool because you get a single aggregate device for your entire soundcard (unlike ALSA and its weird idea of calling each stereo pair a device) and you get friendly names for the channels that tell you which actual soundcard inputs/outputs you are using. CoreAudio is an audio server in that multiple clients can connect to it simultaneously.

In a nutshell, Pipewire will not FIX anything, it’s just more lipstick on the same pig of ALSA

Some corrections and clarifications:

The underlying kernel-side audio/MIDI implementations on macOS and Windows are also single-client these days.

Apple moved the multi-client handling out of the kernel more than a decade ago (see coreaudiod on any newer macOS system). CoreAudio itself, like ALSA, consists of a kernel side component, and a user-space component, with a variety of different APIs and services depending on how close to the metal you need/want to be.

WASAPI is also single client, as are WaveRT and ASIO.

iOS now does provide inter-application routing, and I expect that macOS will very soon (if it doesn’t already - I have not been following that for a while). macOS has provided inter-application MIDI routing for years.

ergo … ALSA is not a pig, it’s precisely the same sort of creature as the audio/MIDI APIs on other major desktop/laptop platforms.


This isn’t correct. JACK knows nothing about MIDI clock or any other synchronization mechanism. It provides its own API so that clients can share music time information and transport state. That API isn’t bad, but it isn’t great either.


Keep in mind that the main feature of JACK is to reduce context switches and allow zero-copy sharing of float audio buffers between applications. ALSA cannot do either: kernel modules cannot use floating-point values at all (to avoid clobbering FPU state one would have to add kernel_fpu_begin()/kernel_fpu_end() guards), and moving data to and from userspace is expensive.
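A minimal JACK pass-through client shows what zero-copy means in practice: jack_port_get_buffer() hands the process callback pointers directly into JACK's shared-memory buffers, so float audio moves between clients without a round trip through the kernel. This is only a sketch; it assumes libjack is installed and a JACK server is running (compile with `gcc passthru.c -o passthru $(pkg-config --cflags --libs jack)`).

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <jack/jack.h>

static jack_port_t *in_port, *out_port;

/* Runs in JACK's realtime thread once per period. The float buffers
 * obtained here live in shared memory: no copy into or out of the
 * kernel, and no sample-format conversion, is involved. */
static int process(jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *in  = jack_port_get_buffer(in_port,  nframes);
    jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);
    memcpy(out, in, nframes * sizeof(jack_default_audio_sample_t));
    return 0;
}

int main(void)
{
    jack_status_t status;
    jack_client_t *client = jack_client_open("passthru", JackNullOption, &status);
    if (!client) {
        fprintf(stderr, "could not connect to a JACK server\n");
        return 1;
    }
    in_port  = jack_port_register(client, "in",  JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsInput, 0);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);
    jack_set_process_callback(client, process, NULL);
    jack_activate(client);
    sleep(30);              /* pass audio through for 30 seconds */
    jack_client_close(client);
    return 0;
}
```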

The proper solution is to run a userspace daemon that uses the kernel driver. This is how CoreAudio works on macOS, and how jackd and pipewire work on Linux.


Correct. Which is basically what I said.

Au contraire.

It will fix the issue that we had with Pulseaudio where Jack would not start because something else had exclusive access to the audio driver (a situation which exists on other platforms, and existed with OSS before Linux switched to ALSA).

This is an issue that I know has put off a lot of people from using Linux and tools like Ardour.

It has been a genuine problem with Linux for over a decade. Whether you consider it “a pig” or not, having a single audio server which “just works” would be a genuine step forward for Linux, and will undeniably provide a benefit for the vast majority of users who struggle getting Jack working on most distros.

And, yes, telling Ardour to use the ALSA driver directly is a sensible option in a lot of cases, but it has its own user issues, such as not being able to get sound from other applications while you do so.

Regardless of religious arguments about architecture, I look forward to the day when we have a stable, well structured sound server which “just works” for most use cases, which will make it a lot easier for musicians to transition to Linux.

Unlike merging Jack and ALSA, which not only seems architecturally broken, but also would probably not yield any significant benefits to usability.



Actually, this is not the problem that Pipewire will fix. Years ago, the original developer of PulseAudio and I cooked up a protocol that Pulse and JACK (and Ardour, and potentially other apps) could use to “negotiate” access to a piece of audio hardware. Ever since most Linux distros got a new-enough version of PulseAudio (years ago!), JACK will always start, because PulseAudio will yield control of the device.
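If I understand correctly, the protocol in question is the D-Bus device reservation API: whoever holds the well-known session-bus name `org.freedesktop.ReserveDevice1.AudioN` owns ALSA card N, and a higher-priority claimant (such as JACK) can ask the current holder (such as PulseAudio) to release it. Assuming a session bus is available, you can query who currently holds a card (the `Audio0` device name here is an example for card 0):

```shell
# Which bus connection currently reserves ALSA card 0?
dbus-send --session --print-reply \
  --dest=org.freedesktop.DBus /org/freedesktop/DBus \
  org.freedesktop.DBus.GetNameOwner \
  string:org.freedesktop.ReserveDevice1.Audio0
```

If nothing holds the reservation, the call returns a NameHasNoOwner error rather than an owner.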

The actual problem that pipewire will fix is the one you allude to later: whenever one of JACK or PulseAudio has control over the device, it is tricky-to-insanely-hard-to-impossible to get sound from applications using the other sound server to be audible. There are ways to do it - my machine is configured like this - but it is far from trivial and cannot really be offered to newish Linux users as a sane solution.

Pipewire, by virtue of being an audio (and video) server capable of handling the needs of both low latency, realtime audio applications and traditional desktop applications, will solve this issue.

OK, thanks for the clarification.



Pipewire also promises:

  • to easily aggregate multiple soundcards
  • provide seamless support when hotplugging devices
  • have persistent names for sound-devices and ports (no more random order when using multiple audio or MIDI devices)
  • support multiple buffer queues with different quality of service (allow to run low-latency Ardour and a high-latency Desktop sound app simultaneously)
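The persistent port names, and Pipewire's JACK-style patching, can be inspected and driven with pw-link. The port names in the connect example are illustrative; list the real ones first.

```shell
# List output ports, input ports, and current links
pw-link -o
pw-link -i
pw-link -l

# Connect an application output to a hardware playback port
# (names are examples -- copy the exact ones from the listings above)
pw-link "Ardour:Master/audio_out 1" \
        "alsa_output.pci-0000_00_1f.3.analog-stereo:playback_FL"
```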

That being said, I’m currently no longer convinced that the pipewire ecosystem will be the future. It became too complex and too ambitious, with conflicting goals, and repeated the mistakes pulseaudio made. It currently also does not scale as well as JACK and, unlike JACK, it enforces policy.


Core audio is already good, so why not take a model from it and make something similar for Linux?

CoreAudio doesn’t provide inter-application audio routing (not on macOS, so far, at least).

Also, several aspects of Pipewire are in fact modelled on CoreAudio.


If a device manufacturer, for example RME, made Linux drivers, where in the chain would they operate? Would they be like ALSA instead?

Device drivers are the lowest level of ALSA. And ALSA drivers are no better and no worse than the drivers you’ll find on any other platform.


I suspect they would need to create an ALSA driver.

The issue with this, compared with some other platforms, is that the driver needs to be built as a kernel module, which means it needs to be compiled against the user’s current kernel. It also, almost certainly, means it has to be Open Source, which I can imagine many vendors baulking at.



NVidia would say otherwise (their kernel drivers are not open source).

Of course, using such kernel modules causes other problems, but there’s no inherent obstacle to it.

There’s the additional problem that I think the kernel-side ALSA API is not 100% covered by the “GPL exception” that NVidia uses.

Anyway, to date there have been maybe 2 or 3 such instances of an audio interface manufacturer actually writing the Linux driver. In all other cases, it comes down to willing Linux users / developers to do the work, generally aided by information from the manufacturer.

I did consider Nvidia (and also VMWare) in this. I got the impression it was far more problematic to do this with ALSA. I guess not impossible though.