Idea discussion : AI integration

Hi everyone,

For some time now I’ve been seeing more and more AI-driven tools popping up.
Setting aside the buzzwordy, snake-oily, marketing-focused obvious ones, I do believe that some of these tools are genuinely useful.
A few open-source ones have caught my attention, but they mostly seem to be offered as web services, whereas, in my view, inside a DAW would be the obvious place for them.

Spleeter (stem separation) is one that I use a lot. I managed to install it on my computer and I use it for sampling, but also to quickly draft over clients’ demos. I’ve also used it to isolate and clean up takes with a lot of bleed.
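For the curious, the core mechanism behind Spleeter-style separation is a soft mask applied to the mix’s spectrogram before resynthesis. Here is a toy, pure-stdlib sketch of just that masking step (this is not Spleeter’s actual code; the hand-coded low-pass mask stands in for what the neural network would predict):

```python
import cmath
import math

def dft(x):
    """Naive DFT (O(N^2)) -- fine for a toy example."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

N = 64
# Toy "stems": a low-frequency tone (the bass) plus a high-frequency tone.
bass = [math.sin(2 * math.pi * 2 * n / N) for n in range(N)]
treble = [0.5 * math.sin(2 * math.pi * 20 * n / N) for n in range(N)]
mix = [b + t for b, t in zip(bass, treble)]

# A real separator *predicts* a mask per time-frequency bin;
# here we hard-code one that keeps only the low bins (and their mirror).
X = dft(mix)
mask = [1.0 if (k < 10 or k > N - 10) else 0.0 for k in range(N)]
separated = idft([Xk * m for Xk, m in zip(X, mask)])

# The masked resynthesis recovers the bass tone almost exactly.
error = max(abs(s - b) for s, b in zip(separated, bass))
```

In Spleeter the mask comes from a network trained on multitrack data and is applied per STFT frame; the idea above is the same thing collapsed to a single frame.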

Matchering (automated mastering) is another one that I’ve unfortunately not been able to test, as the installation is a bit beyond my technical abilities. Instead I use a similar (in aim if not in design) service called Landr. My interest in these tools is for when I’m sending pre-productions to my clients. I do have a preset for my master channel, but I’ve found Landr to do a better job at this admittedly specific use case. I would probably not consider using it for a final master, but it does help to have that little bit of polish when showing your progress.

A more recent one is Tone Transfer by the Magenta project (it changes the timbre of an audio input into a different one; there is also a web tool). I’ve only tried it with a few samples, but I can see real potential for sound and texture exploration, and it feels like this could be really handy when producing.

What are your thoughts about this? I understand that it’s not as simple as dropping any code into Ardour, but beyond the technicalities, I wonder how this community feels about the topic in general, or whether anyone here uses similar tools?

I’m honestly not a fan.

I could see potential for some things, like tone transfer. I’ve occasionally used AI tools in photo and video editing and color grading, and sometimes they produce surprisingly fantastic results. The main thing I don’t like is that these tools tend to take a black-box approach: if you have to go back and adjust things afterward, it’s hard to know what to fix, because the adjustments they make under the hood are typically not revealed. I agree that they can save time for some tasks, and they can also reveal possibilities that you may never have considered yourself.


How can we tell if you are real or just another AI designed to promote the use of AI in everything… :slight_smile:

If you look at some of the examples given, the only extensively trained neural network is Magenta’s, and currently they use a lot of Bach.

In the long run that could be interesting, however, but perhaps as a plugin, not as a built-in feature.

As for source separation, I’m not sure how that fits into a general DAW workflow. Perhaps on the import side (e.g. import the guitar from a file).

Matchering is mainly driven by an FIR filter: the same could be done by a plugin that learns an EQ pattern (not unlike plugins that learn noise patterns). There are probably already a couple of VSTs out there.
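That “learn an EQ pattern” idea can be sketched in a dozen lines: take the bin-wise magnitude ratio between a reference spectrum and the target’s, then realize it as zero-phase FIR taps via the inverse DFT. A toy, pure-stdlib sketch (the flat/shelved spectra below are made up for illustration; this is not matchering’s actual code):

```python
import cmath
import math

def dft(x):
    """Naive DFT (O(N^2)) -- fine for a toy example."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT (complex output)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

N = 32
# Toy magnitude spectra: the reference is flat, the target has its
# mids/highs roughly 6 dB down.
ref_mag = [1.0] * N
tgt_mag = [1.0 if (k < 8 or k > N - 8) else 0.5 for k in range(N)]

# The "learned" EQ curve is just the bin-wise ratio reference / target.
match = [r / t for r, t in zip(ref_mag, tgt_mag)]

# Zero-phase FIR taps whose frequency response is that curve.
taps = idft([complex(m, 0) for m in match])

# Sanity check: the taps' frequency response reproduces the matching curve.
response = [abs(v) for v in dft(taps)]
```

Convolving the target with those taps (circularly, or after windowing them into a causal filter) would pull its spectrum back toward the reference; a real tool also matches loudness and stereo width, not just the EQ curve.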


Regarding the black-box aspect, while I agree it is a bit annoying, I guess in some ways it wouldn’t be much different from having an intern do the job: if it’s good, keep it; if not, fix it or do it yourself.

Tone transfer is probably the most impressive one out of these examples, in my opinion, but I agree it’s still far from mature.

In my experience, Spleeter is very effective.
Where I would see it fitting is probably as an option you could apply to a region, but that is based solely on my own experience and workflow.

As for matchering, now that you mention it, there is a matching EQ plugin from the Guitarix project, I believe, so it would be possible to route MS to a bus and use that, I guess.

Do you think I should put the suggestion for stem separation through the official channels, or would it be a waste of the devs’ precious time?

Regarding the question of my humanity, I hope this is proof enough: 1÷0=42


Crap, now we gotta reboot Paul again…


Neural networks and related technologies have been known for a long time, and they actually have nothing to do with artificial intelligence. These techniques may be useful in some areas, but I don’t think they will change anything for music. There are more scientific articles on the subject, but I’ll share this one if you are interested -

This topic was automatically closed 91 days after the last reply. New replies are no longer allowed.