The first "Universal Sound Language"...?

I thought it might be interesting to know what the Ardour devs (and others) make of this - from the developers behind JUCE / Tracktion etc …

Domain-specific audio languages have been around for a long time. Hopefully this one has better luck establishing a standard than prior attempts such as MPEG-4/csound (SAOL) or OpenAL; it might be the right time for it now, too.

It’d almost be too good to be true, ideally coupled with some system-agnostic UI (HTML/CSS, perhaps WebGL).

While SOUL mentions the possibility of also running a complete DAW graph, I see its application more in the instrument and performance space, and not so much in a production environment.

In the future there’ll be a lot more CPU cores available to run parallel graphs, while I don’t see hardware vendors selling cheap dedicated DSP chips to consumers (graphics/video hardware vendors have a huge gaming industry behind them; audio doesn’t). The most likely outcome is perhaps some ARM-based solution, with DSP running in driver/kernel space IFF you can find a way to do float processing there. While many prosumer soundcards already have FPGAs and SoCs in their hardware, I don’t expect to see any standardization there soon. All those vendors still like to sell their own DSP.

SOUL-LANG is still very light on details, so it’s too early to judge. While some of the ideas presented at ADC sound like a pipe dream, all in all, hardware-based “audio shaders” would be a great asset.

It’s a pity that there are no recordings from http://faust.grame.fr/paw/

Yes - one of my initial reservations about it was this: it seems to be moving the processing closer to the driver / soundcard, and the logical conclusion* of that is (as he says) the audio driver running JIT-compiled DSP code. If that is agnostic to whether there is actually a hardware DSP in the system, then on a machine without one, wouldn’t it mean running the DSP code natively, effectively in the kernel - where, on Linux at least, floating-point processing is not recommended (though not impossible, assuming the kernel is running on hardware with an FPU)? There’s a rough sketch of what that involves below.

*The logical conclusion might actually be running the entire DAW in the soundcard driver… :slight_smile:
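
To make the in-kernel floating-point concern concrete, here is a minimal, heavily simplified sketch (my own illustration, nothing from the SOUL talk) of what that would look like as an x86 Linux kernel module. Any FP/SIMD use has to be bracketed with kernel_fpu_begin()/kernel_fpu_end(), which saves the FPU state and disables preemption, and the kernel build normally disables FP code generation for its objects anyway - which is exactly why this is possible but discouraged.

```c
/* Hypothetical sketch: trivial float DSP inside an x86 Linux kernel module.
 * The kernel build normally disables FP/SIMD code generation, so this object
 * would also need special per-file compiler flags - the point is only to show
 * why in-kernel float processing is possible but discouraged. */
#include <linux/module.h>
#include <linux/kernel.h>
#include <asm/fpu/api.h>        /* kernel_fpu_begin() / kernel_fpu_end(), x86 */

static void process_block(float *buf, int n)
{
    int i;

    kernel_fpu_begin();         /* save FPU/SIMD state, disable preemption */
    for (i = 0; i < n; i++)
        buf[i] *= 0.5f;         /* stand-in for JIT-compiled DSP graph code */
    kernel_fpu_end();           /* restore state, re-enable preemption */
}

static float silence[64];

static int __init dsp_demo_init(void)
{
    process_block(silence, ARRAY_SIZE(silence));
    return 0;
}

static void __exit dsp_demo_exit(void)
{
}

module_init(dsp_demo_init);
module_exit(dsp_demo_exit);
MODULE_LICENSE("GPL");
```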

Or even on the soundcard itself. Conceptually that isn’t much different from what happens now: the DAW’s process graph is triggered by the soundcard’s callback, and control or audio data still needs to cross the kernel/userspace boundary.

The main difference would be that the DAW’s graph would be compiled, so process-nodes could be expanded and perhaps even optimized across node boundaries (see the sketch below). That is kinda cool.
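
To illustrate what “optimized across boundaries” could buy, here is a hedged sketch in plain C (my own illustration - SOUL has its own syntax): a conventional graph calls each node as an opaque callback and makes one pass over the buffer per node, whereas a compiler that can see into the nodes could fuse them into a single per-sample loop with no intermediate buffer traffic.

```c
/* Hypothetical sketch in plain C (not SOUL syntax): two nodes of a DAW
 * graph, first as opaque callbacks, then as the fused loop a graph
 * compiler could produce once it can see across node boundaries. */
#include <stddef.h>

/* conventional graph: each node is a black box, one buffer pass per node */
static void gain_node(float *buf, size_t n, float gain)
{
    for (size_t i = 0; i < n; i++)
        buf[i] *= gain;
}

static void clip_node(float *buf, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (buf[i] >  1.0f) buf[i] =  1.0f;
        if (buf[i] < -1.0f) buf[i] = -1.0f;
    }
}

static void process_graph(float *buf, size_t n)
{
    gain_node(buf, n, 0.5f);   /* pass 1 over the buffer */
    clip_node(buf, n);         /* pass 2 over the buffer */
}

/* what a compiled graph could look like after expanding the nodes:
 * a single fused per-sample loop, no per-node buffer traffic */
static void process_graph_fused(float *buf, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        float s = buf[i] * 0.5f;
        if (s >  1.0f) s =  1.0f;
        if (s < -1.0f) s = -1.0f;
        buf[i] = s;
    }
}
```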

I can see that working well as long as there’s only sparse control information and basic automation; but with complex DAW use-cases - e.g. recording while playing back, generating timecode to sync a video, throwing in a few latent plugins and some signal visualizers, etc. - it becomes challenging. Next comes a user asking “How do I use my USB microphones with it?” and it falls apart.

Anyway, if there ends up being hardware compatibility for DSP code (much like OpenGL used to provide for graphics), that would greatly improve portability and consistency!

I can envisage SOUL ruling the mobile market, touch-screen or augmented instruments, and perhaps the embedded world. It may indirectly change how music is produced in the long run: building an “internet of music things”. A great asset for performers, but I don’t think it will have any effect on the classical record/edit/mix workflow or the broadcast industry.

Now… what’s your opinion as a plugin maker?

It’s a good concept, in theory - but there are some bits which I think would create challenges. I’ve yet to find this mythical untapped DSP lurking in any of my current systems or soundcards, though. I would be instinctively cautious around anything that proposes a new language as a miracle cure (a lot of languages get invented purely to scratch a particular programmer’s itch, and in many cases that’s why we now have so many different languages that mostly do the same thing, just with different brackets… :slight_smile: )

https://xkcd.com/927/

I’ll qualify this by saying I’m firmly in the demographic of (over) 40-something programmers mentioned in the presentation, so I’ve spent most of my career learning all the quirks of getting audio / plug-ins to work in the traditional way, and have grown older and more reluctant to change. I’m quite happy to use C/C++, as God and nature intended… - so perhaps I’m not the target market for this. I also note the irony in the use of the “teach a man to fish” metaphor in the presentation - since by abstracting away all the ‘difficult’ stuff about DSP programming, that’s exactly what he is not doing.

Exactly - but I understood that (one of) the proposed advantages of moving more processing closer to the driver / hardware is that audio data wouldn’t have to cross the userspace boundary. That might work well if you have e.g. a realtime instrument which generates its own output in response to a relatively simple trigger, but I can foresee some challenges in implementing all the audio processing that way - and logically, if you don’t implement it all that way, don’t you lose most of the benefits of doing any of it that way? The principal one being low latency, which in my opinion can be something of a straw man, especially as a metric for ‘quality’.

Anyway - I wish them success with it, and it will be interesting to see how it pans out. Let’s just hope that when V2.0 comes out it has better backwards compatibility than GLSL… (having to maintain five different versions of the same shader code because some versions use ‘varying’, some use ‘out’, and some work with some drivers on some hardware while others don’t with the same version on different hardware, is not time well spent…)
