I am running FreeBSD and I have ardour5 but I would like to avoid using Jack at all costs. OSS has all the capabilities of Jack w/o some of the downsides.
Would it be possible to have Ardour work without JACK, using OSS not only for audio input and output but also for MIDI I/O?
Ardour has been usable without JACK for several years on Linux, MacOS and Windows. If FreeBSD has an ALSA-compatible API available, then Ardour’s ALSA audio/MIDI backend will likely be usable on FreeBSD. There is no Ardour OSS backend available, and I would probably refuse to accept one as a patch.
OSS does NOT have all of the capabilities of JACK and I have written many times over the years about why the OSS API is so bad. Google will find such writings if you’re genuinely interested - I’m not going to repeat them here.
I did search around and I found articles like these: http://ossnext.trueinstruments.com/forum/viewtopic.php?t=5811
If you can share links to your previous writings, I’m always interested in learning.
I’m a bit shocked that you said you’d even go as far as to refuse OSS patches so I am definitely looking forward to seeing your opinion on OSS.
Are you really not interested in OSS being a part of Ardour?
blubee: if what I get from this is correct: https://linuxplumbersconf.org/2009/slides/Paul-Davis-lpc2009.pdf
I do not see how anyone could use OSS for real-time audio with low latency and expect bit-perfect throughput. I can see why BSD would want to use it, but it appears BSD is more interested in using a “unixy” approach rather than what is best for audio. OSS looks like it would work well for piped utilities, maybe not so much for pro audio. Do note that Ardour does not support PulseAudio even though more people use PA than those who use OSS, JACK and direct-to-ALSA combined. PA is not suited to use with a DAW; perhaps OSS has the same restriction.
That PDF of my slides from the Linux Plumbers Conference is probably the best summary of my thoughts on this stuff. Thanks Len.
That was an interesting read and while it might have been true, try listening to these guys at this Linux conference this year: https://www.youtube.com/watch?v=6oQF2TzCYtQ
Audio on the biggest Linux platform, Android, is in serious trouble; that’s why LG, Samsung and almost every other hardware manufacturer creates their own sound system, trying to avoid what’s already in Linux.
One reason why I try to avoid JACK is that if you set up a series of instruments and plugins and save it to JACK’s patchbay or something similar, then close those programs and try to reload the session at a later time, there are always issues.
I’d really recommend that you guys take a look at the video and then reply, because to me those slides seem like they’re just railing against OSS and don’t even present any benefits. How is running a sound server on top of another system going to reduce latency? The sound server will impose its own latency in the mix, won’t it?
Is the option really to keep on ignoring the lessons of the past?
Properly designed sound servers don’t add latency. JACK adds no latency whatsoever.
It isn’t true that android manufacturers keep creating their own sound system to avoid what’s already in Linux. Android didn’t use ALSA at all in any meaningful way. They’re not trying to avoid “what’s in Linux”, they’re trying to get their hands on a usable sound system. Google has done some work on this, and they have an audio stack that performs sort of OK, but has hardware requirements that many (most?) android devices fail to meet.
The lesson to be learnt here is the lesson of CoreAudio, not OSS.
Your use of JACK suggests that you’re actually using separate programs. You can’t do that with OSS or CoreAudio. If you’re not doing that, you don’t need to use JACK anyway, and can just use native platform audio/midi support.
The issue appears to be that the *BSDs adopted OSS as their API for audio/MIDI I/O, which is a bad idea for the reasons I outlined in that talk.
blubee: language like “just railing against OSS” has just lost your case (and shows you didn’t really read it). Looking at android and using the words “trying to avoid what’s already in Linux” tells me you don’t know what android uses for sound service, which is not ALSA the application sees but some other google server on top. It would appear that OSS is (to quote you) “ignoring the lessons of the past”.
“I’d really recommend that you” decide if you wish to run pro audio applications or a network server on your computer… maybe try one computer for network kinds of things and a Linux computer for sound. Trying to shoehorn a real-time application into whatever you happen to already have is like trying to record an orchestra with the built-in laptop mic… OSS became a non-thing a long time ago; I wish people would stop trying to dredge up the past. There is nothing stopping BSD from using ALSA or an ALSA API. That was a decision made by people who are used to dealing with networks and security (good people to have), not people who want the very best audio out of a box. Linux suffers from a lot of the same thing; most people do not need anything more than PulseAudio, and it does not matter that Pulse loses samples now and then (and it does, BTW) so long as audio keeps flowing and people can use Skype. Those who want to do audio production need something more than Pulse (or OSS) can provide.
I can see why Paul just says “we will never do OSS” rather than going over the same ground time after time. It appears that those who want OSS are of the “this is what I have, make it work” type.
Anyway, no OSS.
On page 24/68 the title is “OSS API MUST DIE!”
[blubee: language like “just railing against OSS” has just lost your case (and shows you didn’t really read it).]
I am not here to win any battles or even start a war. I was just asking questions, after reading it I can definitely see that Paul is personally invested in Jack since he was one of the developers; that alone says a lot.
Samsung Professional Audio SDK for android: http://developer.samsung.com/galaxy/professional-audio
Sony Hi-Res Audio API
Google trying to do Pro Audio on Android: https://developer.android.com/ndk/guides/audio/index.html
There are way too many examples to go over, but when you take the time to read those APIs, they have more in common with OSS and less with ALSA or JACK; let’s not mention Pulse.
So unless you’re telling me that you guys over here have it all figured out and do a better job at Audio than all of these teams of people, well that’s saying something.
Android didn’t use ALSA because it wasn’t a good fit at the time.
Android didn’t use Linux graphics for a long time either, for that same reason, but unlike with audio, Linux graphics continued to improve, implementing KMS/DRM to get atomic updates and a lot of nice features into the Linux kernel.
Then Google switched from its custom graphics stack to using mainline Linux graphics, and in their own words were able to dump a lot of redundant code.
This whole push-pull architecture that Paul goes on about in his talk:
push as in open an fd, set the position in the file, and write to it?
pull as in open an fd, set the position in the file, and read from it?
This idea that “Pro Audio” can only be achieved with JACK is a bit laughable, since most people who actually produce music use Mac or, for that matter, Windows…
FreeBSD doesn’t even come with OSS installed; it’s in the ports, so you can install it if you choose. FreeBSD uses ALSA as the default with options for Pulse, JACK or OSS. It’s definitely not a “this is all I have” situation.
Anyways, I appreciate the feedback. My goal is to find the best software that’s easy to maintain and provides great quality, even if that means the developers doing more work.
The attempts at creating shortcuts will eventually lead to you repeating yourself over and over again, which I would personally like to avoid.
Android not using ALSA was because the Android team didn’t understand ALSA. It had nothing to do with ALSA’s actual design or implementation.
I explain on the other slides precisely why the OSS API must die. It isn’t about a “hatred” of OSS or where it came from, but it is about the nature of the API that OSS presents (also explained in the slides).
I have no personal investment in JACK. I ceased my involvement with JACK over a year ago, and have actively worked to encourage Ardour users to consider ceasing their use of JACK. I also encouraged the development of the non-JACK backends for Ardour, as well as designing and implementing the abstract audio/MIDI engine in Ardour precisely so that we could move away from JACK itself. JACK is pretty cool (if I say so myself), but it is unnecessary and inappropriate for most DAW users.
I know the people involved with the Google Android audio stack development. We have had discussions. I believe (strongly) that they have made a fundamental design error, and have explained to them why I think this. There is an audio/MIDI API that is used in a consistent form across mobile and desktop platforms, and that is CoreAudio (that said, the iOS use of it is frequently VERY VERY different from the MacOS use). That API (both versions) looks absolutely nothing like OSS, and everything like a pull-model (callback) API such as ASIO, JACK etc. I know why the Google team did what they did; I think it is technically wrong, and it continues to explain (in part) why Android is so far behind iOS for pro/prosumer/creation music apps. (The bigger reason is related to Android audio hardware, which is incredibly non-standard and generally incapable of low-latency operation at the hardware level.)
You apparently don’t understand the difference between the push and pull models I’ve described at all. Let me try one more time: in a push model, the application decides how much audio data it wants to write/read to/from the device, and when, and the audio stack has to make that happen. In a pull model, the device decides how much data it needs to be read/written, and the application(s) have to deal with that. The pull model is used by all pro-audio/music creation APIs on every platform. No exceptions. ASIO, CoreAudio, WinRT, JACK, and several more.
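To make the contrast concrete, here is a toy sketch of the two models. The class and function names (`PushDevice`, `PullDevice`, `run_cycle`) are invented for illustration; this does not model any real audio API (OSS, ALSA, JACK or CoreAudio):

```python
# Toy contrast between the push and pull audio models.
# All names here are made up for illustration only.

class PushDevice:
    """Push model: the application decides how much to write, and when."""
    def __init__(self):
        self.buffer = []

    def write(self, samples):
        # The app picks an arbitrary chunk size; the audio stack has to
        # absorb whatever arrives, whenever it arrives.
        self.buffer.extend(samples)


class PullDevice:
    """Pull model: the device decides how much data it needs, and when."""
    def __init__(self, process_callback, block_size):
        self.process = process_callback   # the app registers a callback
        self.block_size = block_size      # fixed by the hardware cycle

    def run_cycle(self):
        # Each hardware interrupt asks the app for exactly block_size
        # samples; the app must deliver them on the device's schedule.
        return self.process(self.block_size)


# Push: the app writes 300 samples whenever it feels like it.
push = PushDevice()
push.write([0.0] * 300)

# Pull: the device demands exactly 128 samples per cycle.
pull = PullDevice(lambda nframes: [0.0] * nframes, block_size=128)
out = pull.run_cycle()
```

The key difference is who holds the clock: in the pull model the device's interrupt schedule drives the application, which is why every pro-audio API listed above is callback-based.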
I have never claimed that JACK is required for pro-audio. My point has been about the requirement for a pull model, and this is borne out by all such software and frameworks and platforms.
Oh, also, regarding your comparison of the audio and video stack in Linux: the reason why the “audio stack” in the kernel hasn’t “continued to improve” is that the analogous improvements in the audio world to the type of thing you cite in the video world were already done. What’s missing in the linux kernel stack is the PLL/DLL-based approach to timing that I described in the slides, which would allow us to do the sort of things CoreAudio can do. OSS doesn’t have this either, and because of its firm attachment to the Unix open/read/write/ioctl/close API, adding it would be a little harder/more complex.
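For the curious, the DLL idea can be sketched as a second-order delay-locked loop that filters jittery interrupt timestamps into a smooth timeline. This is a generic textbook-style formulation of the concept, not code from the slides or from any kernel; all names here (`TimeDLL`, the gain choices) are assumptions made for illustration:

```python
import math

# Illustrative second-order delay-locked loop (DLL) for filtering jittery
# audio-interrupt timestamps into a smooth timeline.

class TimeDLL:
    def __init__(self, period, bandwidth=1.0):
        # Loop gains derived from the desired loop bandwidth.
        omega = 2.0 * math.pi * bandwidth * period
        self.b = math.sqrt(2.0) * omega   # proportional gain
        self.c = omega * omega            # integral gain
        self.t1 = None                    # predicted time of the next period
        self.dt = period                  # filtered period estimate

    def update(self, t):
        """Feed one raw (jittery) timestamp; return a filtered timestamp."""
        if self.t1 is None:               # first sample: just initialise
            self.t1 = t + self.dt
            return t
        e = t - self.t1                   # error between raw and predicted
        t0 = self.t1                      # filtered time for this period
        self.t1 += self.dt + self.b * e   # nudge the next prediction
        self.dt += self.c * e             # nudge the period estimate
        return t0

# Feed timestamps for a nominal 10 ms period with alternating +/-1 ms jitter.
dll = TimeDLL(period=0.010)
raw = [k * 0.010 + (0.001 if k % 2 else -0.001) for k in range(1, 50)]
smoothed = [dll.update(t) for t in raw]
# The filtered period estimate stays close to the true 10 ms despite jitter.
assert abs(dll.dt - 0.010) < 0.001
```

The point of the filter is that applications (or the kernel) can extrapolate "what time does this sample correspond to?" from a smooth clock rather than from raw, jittery interrupt arrival times, which is the kind of timing service CoreAudio provides.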
What is stopping you from contributing these [PLL/DLL based approach to timing that I described in the slides] pieces to the Linux kernel?
Those slides are from 2009, what caused things to stall for almost 10 years?
There are only a handful of people with the interest, knowledge and skill to do this work. It would also require a significant time investment. What do you think I’ve been doing for the last 10 years?
I appreciate your time to have this discussion with me.
Thanks for the link to the slides.
Good luck with your project.
@blubee: I’m not sure why you are bringing up OSS, this was in the notes for 4.15rc1 from the ALSA maintainer:
“The biggest change from diffstat POV is the removal of the legacy OSS driver codes that have been already disabled for a long time.”
Seems OSS on Linux is very dead. I don’t see how OSS on Solaris or FreeBSD can maintain enough momentum to keep it going, or why you would want to. Especially since ALSA is available on FreeBSD and Ardour works with ALSA, why would you try to keep OSS alive?