Parameter Linking for computational plugins (ambisonics)

Hello all,

I’m new to Ardour and couldn’t find a solution to the following problem:

For our free audio dramas, we want to position and move sound sources within the soundfield.
The IEM Plugin Suite does a good job with 3D audio and works fine in Ardour. But there is one useful plugin called “CoordinateConverter”, which converts between spherical and Cartesian coordinate systems and requires its calculated values to be used as automation for another plugin’s parameters. They call this “parameter linking” and describe the setup using REAPER in the documentation.
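(For context, the math the plugin performs is straightforward; a rough Python sketch follows, where the angle conventions are my assumption and not taken from the IEM documentation. The problem is not the math, but routing the results into another plugin’s parameters.)

```python
# Sketch of a spherical -> Cartesian conversion like the one
# CoordinateConverter performs. The angle conventions (degrees,
# azimuth in the horizontal plane, elevation above it) are my
# assumption; the IEM docs define the exact ones.
import math

def spherical_to_cartesian(radius, azimuth_deg, elevation_deg):
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = radius * math.cos(el) * math.cos(az)
    y = radius * math.cos(el) * math.sin(az)
    z = radius * math.sin(el)
    return x, y, z

print(spherical_to_cartesian(1.0, 45.0, 30.0))
```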

Is it possible to do this kind of “parameter linking” in ardour?

Best,
Dominik

So my first thought was no, but then I got to thinking and wondering if this could be done in one of two ways:

  1. Using Pin Connections. This would depend on whether the plugin exposes the control data as a MIDI output; if it does, it might be possible I suppose.
  2. Using Carla as a separate plugin host within Ardour. Again, this depends on how the plugin exposes the data, but it is the more likely solution I can see.

Sadly, when I went to test this I realized that their plugin downloads are x86 only, and I am currently on an M1 Mac running native builds, so I can’t easily test at the moment. I honestly don’t remember whether this is possible using pin connections with MIDI as I described, but I believe it is. That being said, Carla would likely be the better option and a more visual workflow for this, so long as the plugin exposes the control data as a port.

  Seablade

Thanks for your suggestions, Seablade.

Unfortunately, I couldn’t get it to work this way.

There is no MIDI out in the plugin’s pin configuration view. There are only 64 audio in/out pins, which in this case do nothing and just pass the signal through.

I didn’t know Carla before. But when I try to add any of the ambisonic plugins from IEM within the Carla plugin, they are not shown in the list. I double-checked that the paths are set correctly. Does anyone have a suggestion for how to get them loaded into Carla?

Also, there is this documentation about OSC control of the IEM plugins, and I wonder if that could be useful?
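If OSC works, a small external script might be able to drive plugin parameters directly. Here is a minimal sketch using the python-osc package; the port number and the address pattern are placeholders, and the real values would come from the plugin’s OSC settings and the IEM OSC documentation:

```python
# Hypothetical sketch: drive an IEM plugin parameter over OSC.
# The port and the address pattern below are placeholders, not
# the documented IEM values; check the plugin's OSC settings.
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 50051)  # plugin's OSC receive port

# Sweep a (hypothetical) source-position parameter from -1 to +1.
for step in range(100):
    x = -1.0 + 2.0 * step / 99
    client.send_message("/RoomEncoder/sourceX", x)
    time.sleep(0.02)
```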

Make sure you tell Carla to refresh your list of plugins, and that you tell it to look for VST (VST2) plugins. Also confirm that the path where it is looking for those plugins is correct.

I did see them in Carla here, but I have not tested my theory. That being said, if there are no MIDI outputs in Ardour for that plugin, I suspect it won’t work, but I’m not 100% sure.

  Seablade

I just wanted to say that the project is now successfully finished, without using that particular “CoordinateConverter” plugin. Instead, I linearized the automated motion around the listening position using the “RoomEncoder” plugin.

So overall, I can recommend the IEM Plugin Suite for this kind of job (an audio drama with dynamic motion in the scene).

BUT: I experienced problems with computation speed. I had to limit the RoomEncoder to only a few sampled audio reflection rays (e.g. ~70) to keep playback real-time. I noticed that Ardour never used more than 100% CPU (i.e. only one core), even though I tried “all but one core” and “6 cores” in the settings. My PC has a Ryzen 3600X CPU, so 6 physical cores plus SMT.
Does somebody know why it does not use more than one core (Ardour 6.6 on Ubuntu 20.04)?

I’m not familiar with that particular plugin, but a single plugin instance is not shared amongst multiple cores; only multiple plugins on parallel signal paths can be spread across multiple cores. From your post it seems this is the case here, and that this plugin is particularly heavyweight.

The other thing I will say is that even when the meter reads below 100%, that doesn’t mean the load isn’t spread across multiple CPUs. The reason Ardour shows DSP instead of CPU is that more goes into that calculation: it reflects the time spent processing versus the time available to process each buffer, which is a more useful measurement because it gives a more real-world expectation of when dropouts will occur.
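As a rough illustration of that calculation (the numbers here are made up):

```python
# How a DSP-load figure relates processing time to the time budget
# of one buffer. Example numbers only, not Ardour's actual internals.
buffer_size = 256         # samples per processing cycle
sample_rate = 48000       # Hz
processing_time_ms = 4.0  # hypothetical time one cycle took to compute

available_ms = buffer_size / sample_rate * 1000  # ~5.33 ms per buffer
dsp_load = processing_time_ms / available_ms     # ~0.75 -> "75% DSP"

print(f"time available per buffer: {available_ms:.2f} ms")
print(f"DSP load: {dsp_load:.0%}")  # approaching 100% means dropouts
```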

  Seablade

Which means you can often use the CPU more efficiently for a given DSP utilization by increasing the buffer size. That increases latency, so you need a realistic understanding of the latency requirements of your situation. As a point of reference, one film frame is around 41 ms, and most people aren’t bothered by a one-frame offset; EBU recommendations are for audio synchronized within 15 ms of the video image.
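For a sense of scale, here is the simple per-buffer latency arithmetic, assuming a 48 kHz session:

```python
# Per-buffer latency for common buffer sizes at 48 kHz, to compare
# against the ~41 ms film-frame and 15 ms EBU references above.
sample_rate = 48000  # Hz, assumed for this example

for buffer_size in (64, 128, 256, 512, 1024, 2048):
    latency_ms = buffer_size / sample_rate * 1000
    print(f"{buffer_size:5d} samples -> {latency_ms:6.2f} ms per buffer")
```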

I did not see you mention your buffer settings anywhere, but if you are running a really low setting, you might check whether increasing the buffer size allows more reflection rays (if that is what you want), or gives more headroom so that extra system activity doesn’t risk an underrun.
