I have published a new Linux mixer UI, written in HTML5, to control Focusrite Scarlett music-oriented sound boards.
A video explaining how to use the UI with Ardour is available from http://breizhme.net/alsajson and the code is available on GitHub from the demo page.
If a few others are interested, I would be happy to collaborate on a project to build a good generic mixer UI targeting music-oriented boards for Linux.
Well, Soundcraft has a nice-looking mixer for their hardware, built with HTML5 canvas. You can take a look: http://www.soundcraft.com/ui-demo/mixer.html
I think HTML5 is a nice idea. This way you can control your levels from a mobile phone.
Very interesting! Are you in contact with alsa developers for this?
@lucianodato, the ALSA dev mailing list is very driver-centric and has too much traffic for me to keep up with. Furthermore, dealing with the ALSA mixer for music-oriented boards only matters to musicians and does not seem to be the main preoccupation of alsa-dev. If you have any idea of a good place to start a new ALSA mixer project, please let me know.
Yes the alsa-dev list has a lot of traffic, but there is no requirement to read it all. It’s there as a communication mechanism for developers and to developers around ALSA. That does focus on drivers, because that is the bulk of the work, but mixer GUIs are also a key part of their remit, such as the Envy24 and RME Hammerfall mixer GUIs which are part of the alsa-tools-gui codebase.
I would disagree that music or supporting musicians isn’t a preoccupation for alsa-dev, given that one of their primary aims is to enhance Linux support for music-focussed audio interfaces (like Roland, Focusrite, RME, Yamaha, etc.).
Regardless, I do think you are correct in saying that what you are developing doesn’t fit into alsa-dev purely because the approach you are taking seems to be so different from what they are doing and I can’t see the ALSA folks being particularly interested in embracing that approach. If you were developing a native GTK GUI app, then alsa-dev would definitely be the right place.
I am intrigued what you think the use case for this is compared to the ALSA approach of developing lightweight, GTK based native mixer GUIs.
@Majik, I agree with you that a native mixer UI could do the same or even more. Before this I was using QasMixer, and I could have chosen Qt rather than HTML5, so why didn't I?
Before everything else I wanted something as portable as possible. Creating and maintaining an ALSA/JSON gateway is simple, and the code is 100% independent of the sound card driver. It has so few dependencies that supporting multiple Linux distributions remains simple. I checked the NodeJS ALSA interface and was not impressed at all; the Python ALSA interface looks a little better but in my opinion still carries far too many dependencies. I now provide binary packages for the AlsaJsonGateway on most Linux distributions, leveraging the openSUSE Build Service and Ubuntu PPAs, and I'm deeply convinced that maintaining such a gateway is going to be far simpler than supporting a native plugin in Node or Python.
Independently of the gateway-versus-direct-interface question, I could also have used Qt to talk to the AlsaJsonGateway. I chose HTML5 for multiple reasons:
- Partly out of curiosity, I wanted to check how far the new generation of HTML5 applications could go.
- HTML5 does not require compilation; a simple browser is enough [everyone has a browser and it works out of the box on any distribution].
- You can control your board remotely, which can be useful and simplifies studio/stage cabling.
- Many new applications in the embedded space were using Qt but are now moving to HTML5.
- HTML5, with frameworks like Angular or Foundation 5 for rendering, is now mature enough to build applications that adapt dynamically to different sound board and screen configurations.
In my opinion, the next generation of music boards will expose APIs through a small REST httpd server, and a simple HTML5 application will be enough to drive the board from any browser. From my experience implementing the Focusrite mixer, I'm now convinced that it is possible to implement a generic mixer that could map onto any specific board from a simple JSON configuration file. The information provided by the ALSA driver is not enough to really take advantage of board-specific functionality, but a simple JSON config file would allow building a mixer UI dynamically adapted to most music boards.
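As a rough illustration of that idea (the file format, control names and field keys below are entirely invented for the sketch, not the actual AlsaJsonGateway format), a per-board JSON description could be turned into widget specs with a few lines of Python:

```python
import json

# Hypothetical per-board configuration file: board name, control names
# and the "type"/"min"/"max" keys are all illustrative assumptions.
BOARD_CONFIG = """
{
  "board": "Scarlett 18i8",
  "controls": [
    {"name": "Mic 1 Gain",  "type": "slider", "min": 0, "max": 127},
    {"name": "Mic 1 Pad",   "type": "switch"},
    {"name": "Monitor Mix", "type": "knob",   "min": 0, "max": 100}
  ]
}
"""

def build_widgets(config_text):
    """Turn a board description into a flat list of UI widget specs."""
    config = json.loads(config_text)
    widgets = []
    for ctl in config["controls"]:
        widget = {"label": ctl["name"], "kind": ctl["type"]}
        if ctl["type"] in ("slider", "knob"):
            # Ranged controls carry their bounds so the UI can scale them.
            widget["range"] = (ctl["min"], ctl["max"])
        widgets.append(widget)
    return widgets

widgets = build_widgets(BOARD_CONFIG)
for w in widgets:
    print(w)
```

A generic UI would then only need to iterate over such widget specs and instantiate the matching HTML5 (or native) controls for whatever board the config file describes.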
Conclusion: I really think that having a gateway independent of the UI is the only option for long-run maintenance. HTML5 has small advantages but they are not fundamental, and Qt would be a valid option. If QasMixer were still maintained and made better use of each sound board's specific features, I would have continued to use it. As I wrote in a different post, my dream would have been for Ardour to control the ALSA board through a gateway that handles the specificities of the sound board, as it does for JACK. Unfortunately this does not look like an option the Ardour team is willing to take.
Interesting. Your approach certainly follows where most AoIP audio interfaces are now, with a web page as the interface. For my card I use mudita24 (ice1712 AI) and, quite honestly, I use very little of the UI. All of my monitoring is done externally anyway.
I would have the same problem with this interface: the GUI is just too big. I would like a skinny window (3/4 inch high?) that I could put above the Ardour mixer, with one level knob per preamp (ADC level) with some metering, and one knob per DAC. Generally, everything else is set and forget. Another option might be a plugin that bypasses audio but has controls that set the preamp that channel uses. The problem with all of these (and probably the reason the Ardour team is not interested) is that there is no standard way of doing this; each card would have to be supported on its own. It really needs a separate project such as your own to work.
One of the downsides of using a browser for a UI is that most browsers are not used for one thing. In fact another tab might open even from selecting help in whatever application one is using. So setting a mixer interface up so it just fits nicely above the mixer does not work for these other things… Uh oh… the mouse scroll wheel does not seem to move faders… And getting rid of the toolbars seems to get rid of them on the main window too… Basically, what I am saying is that the user has to put up with a UI that is designed for other things and uses much more in resources than it needs. We would also need a stripped-down HTML5 application without the baggage FF, Chrom* or whatever has.
I could not use this for FOH mixing, that is for sure. The faders/knobs are too hard to control. My mouse has to land on exactly the right spot to move the control. For faders I should be able to put my mouse anywhere on the slider and move up or down. Knobs I should be able to click anywhere on the knob and then move up or right to increase and down or left to decrease. Is this a limitation of the HTML5 UI? If so, HTML5 has a long way to go for audio use.
From my personal POV, a HTML5 control is better than nothing, but not my first choice. I would end up installing a second browser just for the AI mixer so I could keep it out of my main browser.
My first choice or not, I fear you have hit the wave of the future.
Bear in mind a “Simple HTML5 browser” is about 1000 times more heavyweight than a standalone GTK application.
And unless the JSON/HTML server is embedded into the audio interface hardware, the software server itself is likely to be of a similar size (or bigger) and have just as many dependencies (or more) than a standalone GTK based GUI.
An interesting way forward would be to have a GTK based mixer which also exposed the controls via an API. The JSON API would need to be standardised to make it widely useful. This would give a simple lightweight GUI for the 99% of Linux desktop users who just want a simple tool to use, and an API for those who want to do something like develop automation, or to launch and run in a browser (perhaps for network-based control, or for Chromebooks).
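To sketch that hybrid idea (purely illustrative: the `/controls` endpoint, the control names and the JSON shape are all made up, and a real mixer would read and write ALSA state instead of a dict), a native app could expose its control state over HTTP with nothing beyond the Python standard library:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Illustrative in-memory control state; a real mixer would map this to ALSA.
CONTROLS = {"master": {"value": 80, "min": 0, "max": 127}}

class ControlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/controls":
            body = json.dumps(CONTROLS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to an ephemeral port and serve from a background thread.
server = ThreadingHTTPServer(("127.0.0.1", 0), ControlHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/controls") as resp:
    data = json.loads(resp.read())
print(data["master"]["value"])
server.shutdown()
```

The native GTK front end and a browser (or automation script) could then share the same state through that one endpoint, which is the decoupling being discussed.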
For desktop Linux users though, I don’t see native GUI apps going away any time soon, especially for this sort of hardware control application.
I agree that HTML5 is far from perfect, and it's obvious that a native UI would require fewer resources and provide a better user experience. I'm convinced that a native plugin for Ardour would be the best option: Ardour already has the right fader widgets, options to open/close controls, and all the other tools that are needed to write a nice mixer. Rewriting a 100% independent UI in Qt is always an option, but not an easy task, even restarting from existing tools like QasMixer.
The AlsaJson gateway by itself is very simple, and normalizing its API should not be a complex task. If it were possible to normalize a MixerJSON protocol, as has been done with MusicXML to exchange scores, we could push manufacturers to embed a gateway with a standard protocol inside music equipment. Such a standard would make it very simple to write an HTML5 or native UI for any board.
Technically, decoupling the UI from the ALSA API has only advantages. The extra resources needed to support the HTTP and JSON protocols are negligible compared to any UI's resource requirements. A "pmap -x" on AlsaJsonGateway shows that libjson, libasound and libmicrohttpd each require around 2M, while for a simple UI like pavucontrol, libGTK alone takes 20M and GLib around 10M.
Technically the gateway only exposes the raw controls as provided by the ALSA driver. It is nothing more than a new version of the amixer.c command line transformed to support REST/JSON. What would require some normalization effort is a common configuration file to enable dynamic configuration of the UI, supporting each sound card's specific features. The other main API the current gateway is missing is WebSocket support for real-time monitoring, but that would not be a big job either.
If the community could agree on a shared MixerJSON specification, it would open many doors. Manufacturers could start opening their equipment's APIs, and people could write HTML5, Qt, GTK, … UIs independently of the board or operating system. I'm not sure where the best place to drive such an effort would be, but in my opinion it would be worth trying.
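As a concrete sketch of what a normalized exchange might look like (every field name here is hypothetical; no MixerJSON standard exists, so this is only one possible shape), a set/get request and its reply could be as small as:

```python
import json

# Hypothetical "MixerJSON" request: "api", "verb", "control" and "value"
# are invented field names for illustration only.
request_text = json.dumps({
    "api": "mixerjson/1.0",
    "verb": "set",
    "control": "Mic 1 Gain",
    "value": 96
})

def handle(request_text, state):
    """Apply a set/get verb to a simple control table and build a reply."""
    req = json.loads(request_text)
    if req["verb"] == "set":
        state[req["control"]] = req["value"]
    return json.dumps({
        "api": req["api"],
        "control": req["control"],
        "value": state[req["control"]],
        "status": "ok"
    })

state = {}               # stand-in for the gateway's view of the board
reply = handle(request_text, state)
print(reply)
```

The point of such a spec would be that any UI (HTML5, Qt, GTK) and any vendor gateway could speak the same few verbs, leaving the board-specific details to the configuration file discussed above.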
Pavucontrol on my system only uses 17k of memory resources plus about 1M of additional disk resources.
Yes libGTK and GLIB are large, but on any Desktop Linux system they are already there, and the incremental requirements are negligible (about 1/20th of the JSON approach).
So if you are talking about a solution for Desktop Linux, which I am, a native GTK application is not only the most useful and least hassle for users, but also has the lowest number of dependencies (practically zero, or close to it) and the lowest overall additional resource consumption.
Now if you were talking about a different environment, such as Chromebooks, where the libGTK and GLIB natives libraries are not present or available, then I would agree that something like what you are proposing may be the best way forward.
But for Desktop Linux, I’m really struggling to see a benefit.
Oh, and I agree that decoupling is good, but that doesn't necessarily mean presenting and operating only through an API, especially one as heavyweight as JSON. MVC techniques do exist for GTK and, of course, Qt, although I don't see Qt as a sensible option for this sort of GUI: GTK is pretty standard on all Desktop Linux distros, whilst Qt/KDE tends to be a separate dependency with additional resource requirements. I should point out that my desktop of preference is KDE and I've done a bit of Qt development myself, so I'm certainly not against Qt in general.
So decoupling the GUI logic from the presentation is logical and sensible, but also achievable entirely within the structure of the application.
Exposing the functions as an API is a step beyond that, and I would suggest exposing it as a remote API like JSON is a step even further. There are, of course, very good reasons to do that in many cases. I work a lot with JSON and SOAP APIs (and used to work with CORBA) so I’m very familiar with those environments. But usually those environments are driven from a specific use case, such as the need to remotely access that API across a network, or the ability to support extensibility and discovery, or a common framework for inter-app communication.
Again, I’m struggling to see where any of these apply here and, therefore, what the use case for exposing an API is, especially if a native GUI is provided as well. I’m not completely against the idea (after all, many KDE applications have the ability to be exposed and remotely called via dbus or, in the old days, DCOP which can be useful for scripting amongst other things) but I just don’t think most people will have a use for a network-accessible API if a native GUI is available.
And if you do want to support such a thing, maybe OSC would be more appropriate?
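For reference, OSC is a very lightweight binary format compared to HTTP/JSON: a message is just a padded address string, a type-tag string, and the raw arguments. A minimal encoder for a single-float fader message (the `/mixer/fader/1` address pattern is only an example, not any established namespace) fits in a few lines:

```python
import struct

def osc_string(s):
    """Encode an OSC string: UTF-8 bytes, NUL-terminated, padded to 4."""
    b = s.encode() + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_fader(address, level):
    """Build an OSC message with one float32 argument (type tag ',f')."""
    return osc_string(address) + osc_string(",f") + struct.pack(">f", level)

msg = osc_fader("/mixer/fader/1", 0.8)
print(len(msg))  # message length is always a multiple of 4
```

The whole fader update is 24 bytes on the wire here, which illustrates why OSC is often preferred over HTTP/JSON for real-time control surfaces.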
But making an environment which requires and depends on such an API for something as low-level as a GUI for a mixer, which requires the associated ALSA drivers anyway? It seems like an interesting project, but it feels a bit "over-engineered" for the sake of it to me. Somehow I don't feel it's what users are crying out for.
Users rarely cry out for a technical solution; they usually search for a working solution.
While I agree that Qt/GTK is already loaded in most configurations, the same is true of an HTML5 browser. In fact, independently of the chosen technical architecture, I doubt that resource limitations and RAM usage could be an issue for a mixer, compared to the amount of resources needed for Ardour, Firefox, GNOME, etc.
The main advantage of JSON over the native ALSA C library is simplicity. It is far less complex to support a scripting language through JSON than through a native C mapping. If you check the ALSA/NodeJS and ALSA/Python mappings, it is obvious that an ALSA/JSON/REST gateway is simpler. Furthermore, as such a gateway is 100% independent of the sound board, the cost of maintenance is minimal. Checking the amixer.c git history, we find 2 modifications in 2008, 3 in 2010, 1 in 2011 and nothing since then, and there is no reason why an ALSA gateway would require more work.
D-Bus could probably be a viable alternative for Linux. Nevertheless, I'm convinced that in the near future every smart sound card will offer a JSON/REST API. This is why I think Linux musicians should work on normalizing a JSON API; otherwise each vendor will come up with a new proprietary protocol that will be supported on M$+Mac but not on Linux.
Note: I checked the OSC web page, but it seems that nothing has moved since 2011.
I need a GUI to use Ardour and other tools, as you pointed out.
I don’t need a browser running for any of these other tools. I don’t even need a browser installed to be able to use pretty much every music production tool on Linux.
I also need the GUI running in order to run a browser. However you try to spin the facts, on a Linux desktop machine a web browser consumes significant incremental resources over running native GUI-based apps like envy24control, etc.
In fact I generally don't have a browser running when using Ardour, and the general advice I have seen is specifically to avoid having one running because of the additional resource consumption (including additional processing requirements when rendering some content).
You are right that users don’t care about technical solutions. But they already have a working solution: I understand that a native GUI Scarlett mixer control has been available for a few months now ( https://community.ardour.org/node/8821 ) which is the default solution users will get on their Linux desktop. It works and (presumably) it works well. Moreover, being the default, means it’s easy to use without any additional setup, and doesn’t require them to consume additional resources by running a web browser.
So whilst I applaud what you are doing and agree there may be some valid use-cases in other environments, I can only conclude that for Linux Desktop users what you are pushing here is the technical solution that users aren’t crying for.
@Majik, it is true that I consider a JSON API through a gateway to be far better than a C/C++ native interface. On the other hand, I would agree with you that HTML5 might not be the best option for a sound mixer UI. HTML5 was a simple way for me to solve my problem, and at the end of the day it does the job.
As of today, on Linux, most music-oriented boards do not have a mixer equivalent to what is provided on Mac/Windows; the Scarlett 18i8/18i20 are just one example. Even if alsamixer works, it is not really usable with 20 inputs/outputs. envy24control, which you mentioned, is a valid mixer, but it targets the Ice1712, which is not a chipset used by the new generation of sound cards.
I doubt that reverse engineering hardware can be a long-run option for quickly supporting new sound cards on Linux. I really think that a JSON/REST API is the way forward, but obviously anyone is free to disagree with me.
If you can get the vendors to support this natively, then I agree it would be interesting.
However, USB/Firewire/PCI devices are not independently and directly addressable by IP (which is what JSON was designed to use) and probably never will be in their current forms, so they would still need some sort of native driver on the connected host to expose such an API. That driver will be OS specific and there will still be the need to reverse-engineer the actual protocol to the audio interface in cases where the vendor doesn’t provide a native Linux driver.
Of course if some new USB standard was created which formalised a set of standards for JSON and encapsulated it on the interface in a similar way to how MIDI (for instance) is encapsulated then this would make everything more straightforward. Realistically, I don’t see this happening: none of the vendors are talking about this, it would be a significant standardisation effort, an expensive development for them, and I struggle to see that much of a benefit to them.
I would also suggest that, if they do come up with a standard for mixer control protocols, JSON isn’t necessarily the obvious choice. JSON is quite a heavyweight protocol which is designed to work in IP networking environments, specifically using http for transport and transaction control. All of that is pretty alien to low-level host interfaces like USB, and very alien to bus technologies like PCIe. IMO it would be highly artificial and not particularly beneficial to implement an IP/HTTP/JSON stack onto these interfaces when much more appropriate and efficient possibilities exist.
Having a standardised JSON mixer API doesn’t reduce the amount of code that needs to be put in place to make a mixer GUI. In fact it probably increases it. What it may do if it is adopted as a standard and becomes widespread, is make the coding more standardised and modular, so that standard libraries can be developed to accelerate development. Of course it also gives the option to develop for different GUI environments. But I would argue any standardised API will do that (e.g. ALSA already has one) and a network-centric API like JSON possibly isn’t the best choice for a host-based development API.
FYI, the Ui mixers were purchased from a different company that developed them first and was then bought by Soundcraft. They do not have the same quality or reliability you might expect from Soundcraft; there are numerous issues reported around the web about them.
Also, the Soundcraft Ui series HTML mixer is built into the unit: the mixers actually have a small onboard web server and native networking capability (i.e. they have an IP address).
That’s not really the same as what is being discussed here.
This thread is more about using web API standards (specifically JSON) as the way to control non-networked audio devices, without IP addresses, like standard USB sound cards. More specifically, the suggestion seems to be that JSON APIs should be used as the de facto control API and implemented as part of the Linux audio drivers (part of or alongside ALSA) to be used by any desktop applications instead of using the current ALSA APIs.
In a nutshell, this thread is about Linux audio driver and API internals which really doesn’t have much to do with being able to use your standalone mixer with an iPad or mobile phone.
Of course, in the future many more devices will go this route and I foresee a day when the majority of audio devices use ethernet and IP protocols as their primary interface to host systems. There are already high-end devices using streaming protocols like Ravenna/AES67. With such devices, an “on-the-wire” control API using IP/Web API standards makes a lot of sense, as does the device presenting an HTML control interface.
But IMO, whilst we are still mainly talking about non-IP based device control, adding JSON (or html) to the mix is really just adding a lot of bloat without adding a lot of value.