God Save Pipewire

When else do you expect it to release the device?

When I don’t want a recording/mix session 100% focused (where I’d use ALSA) but intend to do multitasking.

Robin summed it up perfectly.

But I need more than playback only, and without relying on JACK for everything. Then I could stop using JACK, or stop having JACK installed at all. Everything would be up to ALSA (exclusive access for heavy tasks, and a virtual/pseudo device for multitasking).

That’s a “when” and a common scenario.
Sometimes I want to check an audio file while I’m mixing or recording a demo, without stopping the audio engine for this ordinary task.

You could use Ardour’s audition feature from the import dialog for that (or import a reference to it).

If you want to share your audio interface between applications then you need some system capable of doing that. JACK is the main example of such a system. Pipewire can be considered an alternative implementation of JACK for this purpose. PulseAudio cannot be considered equivalent, because it is not designed with pro-audio/music creation workflow and applications in mind.

So, if you really need to do this, keep using JACK (or Pipewire-as-JACK).

If you have other reasons for preferring Ardour’s ALSA backend, then you need to accept that this implies exclusive use of the device.

Also, there are sndfile-jackplay, Clementine, VLC and a myriad of other ways to “just play an audio file via JACK”.

but I am not using JACK :slight_smile:

you’re also not that guy :slight_smile:

For sure, I forgot about that feature. That’s why I now think “audio file” was a bad example. Let’s say… WhatsApp Web, Telegram or Skype audio instead. WhatsApp is a real situation for me, because my clients usually just drag and drop everything they want to show me.

I see… For real, the ALSA backend works wonderfully, and I’m only bringing this up for debate because:

  1. I don’t understand very well what PipeWire’s ALSA support is (is it an emulation?)
  2. Why couldn’t PipeWire make this scenario less inflexible, as it is with Mac’s CoreAudio? (I don’t know how CoreAudio is designed to naturally do that)
  3. I hope PipeWire can make things more pragmatic and intuitive in the future.

So I’m not blaming Ardour, especially since Bitwig works exactly like Ardour so far; any difference in this scenario would be pioneering spirit from Ardour.

Yes, when pipewire-as-jack is able to perform like Ardour’s ALSA backend. We don’t have freewheeling implemented yet, we can’t change the buffer size without the command line, and there are mysterious glitches even with rtkit properly set up and working. So… almost there, but not yet.
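For reference, the command-line buffer-size change mentioned here usually goes through pw-metadata; treat the values as examples only, since defaults and behaviour vary between PipeWire versions:

```shell
# Force the PipeWire graph quantum (buffer size) to 256 frames
pw-metadata -n settings 0 clock.force-quantum 256

# Release the forced value and return to the configured/dynamic quantum
pw-metadata -n settings 0 clock.force-quantum 0
```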

CoreAudio imposes a single API on all applications using it (at least, using it directly). This API is semantically similar to JACK. CoreAudio does not permit/provide the sort of APIs used by many ALSA/OSS/PulseAudio applications.

So … PipeWire’s situation is more complex right out of the gate.

But, if Pipewire works out (and it seems likely that it will), you won’t see most of that complexity. Apps that use the ALSA API will just continue to do so, and apps that use JACK will just continue to do so, and they will all be able to share whatever device(s) Pipewire is interacting with.
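A quick way to see the ALSA side of this (assuming the pipewire-alsa compatibility package is installed) is to list the ALSA PCMs; PipeWire shows up as an ordinary device that any ALSA application can open:

```shell
# With pipewire-alsa installed, a "pipewire" PCM appears, and "default"
# is typically routed through PipeWire as well
aplay -L | grep -i -A1 pipewire
```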


I imagine it is Apple’s choice, based on closed/exclusive hardware and proprietary software (macOS etc.). It’s easy on the Apple side because things are more restricted and imposed. So… yes, I was wrong. In fact, CoreAudio is more inflexible than FOSS, right? There are more options, libraries and architectures to support in the penguin universe, and PipeWire tries to make a sort of alchemy of the best in freedom and plurality without starting from scratch.

Yay! :grinning:
I don’t see myself going to Windows or Mac OS as long as Linux (and Wine) exists.

Just as a final note, I feel that I should clarify one thing I wrote here. There are in fact lots of different levels of audio API on macOS (and iOS) that an application developer could choose to use. But they are all built on top of the underlying CoreAudio API. This is also mostly true of the libraries that a developer could choose to use on Linux for audio I/O, in that ultimately most of them end up using ALSA.

But the critical part of the puzzle is the difference between server/client systems and simple libraries. On macOS, there is one server/client system blessed by Apple, and that’s coreaudiod, which makes device sharing possible (but does not allow for inter-application audio on macOS). On Linux, in 2021, we’ve got the pulseaudio server, the JACK server, and now the emerging pipewire server (which is not a new protocol, but a reimplementation of existing ones). The fact that Pulse and JACK do not fulfill the same goals is the root of the issues on Linux (but do note that Pulse fulfills little-used goals that coreaudiod does not even attempt, like networked audio).

I just wanted to correct the impression I may have left that if you looked inside a macOS-native audio-using application you would always find the same API. You might find a wide variety of APIs across different programs, but they are all built on top of CoreAudio, and all audio flows involve the coreaudiod process.


I’m more of a newcomer, but I often play a YouTube video tutorial of how to achieve something in Ardour, while at the same time I use Ardour to do it. That’s where I have had that issue. I sometimes just switch to ALSA, but it’s a pain to toggle back and forth. Sometimes I just watch the video on a tablet and use the desktop just for Ardour. It seems like we can do better than that.


@vivantart1 did you install a low-latency kernel?

Yes, but an rt kernel. Now 5.10.0-4-rt-amd64 is installed.
I’m on the Debian side at the moment, but I used to work with low-latency on Ubuntu.

“Why would you want to watch youtube while recording or mixing or working on a production?”

I have that all the time: I keep my session open in the studio during breaks or when I work on other stuff, and sometimes I also look for inspiration for a mix, a tutorial, etc. I would definitely never do that while recording, but while mixing, yes. The alternative would mean having a separate computer setup just for browsing and YouTube. That is OK in a big, “only sound” studio, but not the reality I know from the productions and media companies I work with…


Window > Audio/MIDI Setup > [Stop]


youtube

Window > Audio/MIDI Setup > [Start]

In the absence of a sound server to serve all, which hopefully pipewire will become, you either run JACK or do the above.


Please can you move the audio / MIDI setup to the preferences menu? Instead of the ‘Window’ menu (I’ve lost a lot of time to this on several occasions… I hesitate to use the word ‘intuitive’ :slight_smile: )

That is already planned for a later version.

Right now it is required because:

  1. Ardour sessions have a fixed sample-rate
  2. Sample-rate needs to be set when creating (or loading) a session
  3. The soundcard needs to support this sample-rate
  4. Ardour needs to know which soundcard is to be used, to query what sample-rates are supported
  5. So the user needs to specify the soundcard (and rate) to be used early on

Once Ardour can resample I/O and has a rate-independent session time representation this is no longer needed.
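To illustrate the rate-independent idea (a hypothetical sketch, not Ardour code): if positions are stored in seconds, sample positions can be derived for whatever rate the current device runs at.

```python
# Hypothetical sketch of rate-independent session time (not Ardour code):
# positions are stored in seconds, and sample positions are derived
# from whatever rate the current device runs at.

def seconds_to_samples(t_seconds, rate):
    """Convert a session position in seconds to a sample count at `rate`."""
    return round(t_seconds * rate)

def samples_to_seconds(n_samples, rate):
    """Convert a sample count at `rate` back to a position in seconds."""
    return n_samples / rate

# A region starting 2.5 s into the session maps cleanly to either rate:
print(seconds_to_samples(2.5, 44100))  # 110250
print(seconds_to_samples(2.5, 48000))  # 120000
```

With such a representation, reopening the session on a device running a different rate only changes the derived sample positions, not the stored ones.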

However, since Ardour exposes internal routing, it may still be relevant to ask a user early on which backend/device to use. Otherwise there is the risk of losing connections when loading some sessions. Preferences are only available at a later point in the application’s life cycle and are not useful here.

Anyway, it won’t help for the case at hand. Preferences are for setup (select device, rate). Having actions (Stop I/O) there is not great interface design.


OK, I’m confused - I understand that all of that is quite deep in ardour’s architecture, but if I can already go to

Window->Audio/MIDI Setup,

then why not just make it

Preferences->Audio/MIDI Setup instead? I just want the thing I have to click on to be in a more ‘intuitive’ location (for any definition of the word intuitive, Window is surely not an intuitive* place for Audio preferences…) or is that what you mean is already planned for a later version?

*"using or based on what one feels to be true even without conscious reasoning; instinctive."

Frankly, this is not something that bothers me. I find the usual way quite functional.
I find it extremely quick and easy to change the settings and I really don’t like the way it works in Reaper, for example.