Distributed Audio Workstation

I’ve been searching a lot, but never really found anything except Logic 7 that enables the use of more than one computer to do the DAW work.
And even Logic does this only half-heartedly: it is not possible, for example, to record audio on several computers (synchronized) and still end up with a common project state on all of them afterwards.

I don’t really know if my ideas are feasible, or even wanted by people who use DAW software like ardour, but I wanted to present them anyway and get a little feedback, as I think they’re quite interesting, and not only from a technical point of view.

I’d like to introduce the idea of distributed recording, processing and playback of audio over a network: being able to record simultaneously in several places, perhaps with a different monitoring setup for each recording spot, and still being able to walk over to any machine and be in total control of everything, everywhere.
So to speak: I might have two fast machines with lots of storage and ten machines with considerably less horsepower; wouldn’t it be nice if I could use them all in the production process?
If I let each of them record one stereo track, that would make 12 tracks of parallel recording. Or I could use the fast machines for effect calculations on tracks during playback, have two machines play back two different monitor mixes (for a singer and a keyboard player, say), and still have eight recording channels for two mics and six MIDI-driven tone generators.
I could stand up, walk to another machine in another room and continue working in exactly the same way and environment I had on the machine I was sitting at half a minute ago.
I could assign each connected machine’s resources to different tasks, maybe with multiple views depending on what I need to do right now, and it still wouldn’t matter which machine I’m physically working at.

The question is, would someone want this, and what pain would one have to bear to accomplish this?
I don’t know the code of either JACK or ardour, so I don’t know if there might be architectural barriers preventing such a use.
Network traffic might also be a problem, though that and the related problem of syncing a whole network are solvable, e.g. by transferring data asynchronously after recording, so that changes are distributed after the fact rather than while they happen.

Who would want such a system, is it possible to make ardour the center of such an effort, and if so, what would it take to do that?

Most of what you are describing is already feasible with netjack. It’s basically JACK over Ethernet, so you can have transport control, timecode sync, and inter-process audio routing between multiple machines. A little X-server magic would also let you interact with them all from one terminal.

I’m not sure, but its functionality may already have been merged into the mainline version of JACK; I remember people talking about it.



Thanks for the reply.
As I said, I don’t know anything about the innards of JACK or ardour. I had already heard of netjack, but I wasn’t too sure how well it would work.
Second, netjack would put quite a heavy load on the network with a certain number of machines running, so it might create bandwidth problems.
X-server magic has the same problem: too much traffic.
In an ideal case, IMO, the only information going over the net while recording, playing back or calculating effects etc. should be clocking; after the work is done, data can be transferred and mirrored to where it is needed without taking bandwidth away from vital tasks.
So I guess (and that’s something I’ve suspected from the start) it wouldn’t be as easy as using what’s already out there; that would render the whole thing unusable in no time.
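To make the "only clocking during recording, mirror data afterwards" idea concrete, here is a minimal sketch in Python. All paths and function names are hypothetical, not anything Ardour or JACK actually provide: each machine writes takes to a fast local spool directory while recording, and only after the transport stops are the files pushed to shared storage, when bandwidth no longer competes with the recording itself.

```python
# Hypothetical sketch of "record locally, mirror afterwards".
# Nothing here reflects Ardour's real session layout.

import shutil
from pathlib import Path

def record_take(spool: Path, name: str, data: bytes) -> Path:
    """While recording: write only to the fast local disk; nothing
    but clock/sync traffic would cross the network at this point."""
    spool.mkdir(parents=True, exist_ok=True)
    take = spool / name
    take.write_bytes(data)
    return take

def mirror_spool(spool: Path, shared: Path) -> list[Path]:
    """After the transport stops: copy all local takes to the shared
    project store, preserving timestamps via copy2."""
    shared.mkdir(parents=True, exist_ok=True)
    copied = []
    for take in sorted(spool.iterdir()):
        copied.append(Path(shutil.copy2(take, shared / take.name)))
    return copied
```

In practice the mirroring step could be throttled or scheduled, since by then nothing time-critical depends on it.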

I think it would be interesting, though, to hear from someone who knows the code whether what I described is possible and how hard it would be to achieve. If it could be done with little pain, I think this could make ardour really stand out from the crowd: distributed computing over the net might be a cost-effective option in a broad range of applications if done well, and a nice feature that really hasn’t been anywhere before, AFAIK.

Well, as long as you don’t route the audio between the different computers, then netjack would be doing what you just described.

But if you want to route audio back and forth between computers, it does start to eat up bandwidth. Multiple (and balanced) gigabit Ethernet adapters are recommended. :)

Unfortunately, due to the nature of Ethernet/IP, it is my belief that it wouldn’t be possible to sync two computers using just a network cable. Someone correct me if I’m wrong, but I think you’d need something with a steady clock, such as S/PDIF or ADAT.

There are people who transmit audio over Ethernet to and from dedicated DSP computers, using Linux (and more specifically netjack). The DSP box does not run any audio interface, so the “sync” issue is totally different from that of analog devices.

As there is no “time” to keep, just ordered samples, the wire can catch up after any “hiccups” in the stream, as long as the wire is not at full capacity and the hiccup isn’t too long for the master running the real audio interface. Also, it’s good to remember that a 100 Mbit/s Ethernet wire can carry a lot of audio.
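The "ordered samples, no shared clock" argument can be pictured as a jitter buffer: the receiver pre-fills a queue, and a sender hiccup is absorbed as long as the prefill outlasts it, because the wire's spare capacity lets the sender burst to catch up afterwards. A toy simulation of that idea (all names and the tick model are invented for illustration):

```python
from collections import deque

def simulate(prefill, hiccup_at, hiccup_len, total=20):
    """Toy jitter buffer: the sender delivers 1 block per tick except
    during a hiccup; the receiver consumes 1 block per tick once
    `prefill` blocks are queued. Returns True if it never underruns."""
    buf = deque()
    sent = 0
    started = False
    for tick in range(total):
        stalled = hiccup_at <= tick < hiccup_at + hiccup_len
        if not stalled:
            # After a stall the sender bursts to catch up, since the
            # wire is not at full capacity.
            for _ in range(tick + 1 - sent):
                buf.append(sent)
                sent += 1
        if not started and len(buf) >= prefill:
            started = True
        if started:
            if not buf:
                return False   # underrun: an audible dropout
            buf.popleft()
    return True

print(simulate(prefill=3, hiccup_at=5, hiccup_len=2))  # buffer rides it out
print(simulate(prefill=1, hiccup_at=5, hiccup_len=3))  # hiccup outlasts prefill
```

The trade-off is exactly the one the post describes: a deeper prefill survives longer hiccups, at the cost of latency on the DSP round trip.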

2 streams (both to and from the DSP box) × 44,100 samples per second × 4 bytes per sample (IEEE floats) × 16 channels ≈ 43 Mbit/s

That leaves a lot of headroom on a 100 Mbit/s wire. If you go for a gigabit interface (whose wire format is much better suited to audio work anyway), you have a practically unlimited number of channels.
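The calculation above can be checked and varied in a couple of lines. This helper reproduces the ~43 Mbit figure (which works out if "Mbit" is read as 2^20 bits); the function name and defaults are just for illustration:

```python
# Back-of-the-envelope bandwidth check for raw float audio over Ethernet,
# mirroring the figure quoted above: 2 directions of 16 channels of
# 44.1 kHz 32-bit float samples.

def audio_bandwidth_mbit(streams=2, sample_rate=44100,
                         bytes_per_sample=4, channels=16):
    """Required bandwidth in Mbit/s, using 1 Mbit = 2**20 bits
    (which is how the ~43 figure in the post comes out)."""
    bits_per_second = streams * sample_rate * bytes_per_sample * channels * 8
    return bits_per_second / 2**20

needed = audio_bandwidth_mbit()
print(f"{needed:.1f} Mbit/s needed on a 100 Mbit/s link")
print(f"{audio_bandwidth_mbit(channels=64):.1f} Mbit/s for 64 channels")
```

Doubling the channel count or sample rate scales the figure linearly, which is why the gigabit recommendation above gives so much room.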

You’re absolutely right, doing external DSP is definitely possible; it’s essentially what the Muse Receptor does: it can use an Ethernet cable to offload VST processing from your DAW. I don’t see why you couldn’t do the same thing with a couple of boxes via netjack. However, if I wanted to use the audio ins and outs on the Receptor for recording, I’d have to clock it to the computer via S/PDIF or ADAT or it wouldn’t stay in sync. Also, there’s the latency to deal with when sending audio back and forth for processing, which sometimes is and sometimes is not a problem, depending on what I’m doing. I imagine you’d run into these same issues with distributed recording.

So does this mean that, in theory, you could create a networked DAW environment?

Have the vocals and their dedicated processors over here, the guitars, drums and bass on a different computer, etc.

You hit play on one and the samples get ordered via the network accordingly?

It’s not theory. I know for a fact of at least one commercial studio which uses netjack to run a distributed Ardour / jack based DAW.

It would be a sweet electronic music setup: one guy doing solely the drums in one room, another guy doing programming in another.

We’re talking about two completely different problems here.
One is offloading DSP to a cluster; the other is a multi-user interface.
Offloading DSP to a bunch of networked boxes is possible with JACK.
A multi-user environment would cause all kinds of concurrency problems.
A DAW is modelled after a mixing console and transport controller, and the transport is really a single-user control; each user needs his own transport, monitoring etc.
There could be some kind of workflow element that lets multiple Ardour sessions collaborate on a single project. You could share audio between sessions via centralized storage, but it would be like CVS for audio: two people could not commit changes to the same file without merging the changes. This is totally possible, but would it really be worth the work to create a concurrent versioning system for audio? Just sharing audio between projects would be much simpler. If the drummer has a drum track he likes, let him put it on the master storage device so the guitarist can pick it up and throw it into his mix.
I think it sounds cooler than it would really be…
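The "CVS for audio" objection can be illustrated with a toy revision check, in the style of an optimistic-locking version control system (all class and method names here are invented, not anything Ardour implements): a commit is accepted only if it was based on the current head revision, so two users who checked out the same state cannot both commit without a merge.

```python
# Toy illustration of why concurrent edits to one file need merging.
# Hypothetical names; this is not an Ardour or CVS API.

class RegionStore:
    def __init__(self):
        self.head = 0        # current revision number
        self.history = []    # accepted edits, in order

    def checkout(self):
        """Revision a client starts editing from."""
        return self.head

    def commit(self, base_rev, edit):
        """Accept the edit only if it was based on the latest head."""
        if base_rev != self.head:
            raise RuntimeError("stale base revision: merge required")
        self.history.append(edit)
        self.head += 1
        return self.head

store = RegionStore()
drummer = store.checkout()      # both users check out revision 0
guitarist = store.checkout()
store.commit(drummer, "tighten kick drum edit")    # accepted, head -> 1
try:
    store.commit(guitarist, "trim guitar region")  # based on 0: rejected
except RuntimeError as err:
    print(err)
```

Which is the point made above: for opaque audio data there is no sensible automatic merge, so simply sharing finished takes through central storage is the easier workflow.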

What Logic 7 does is pretty cool: it basically lets you do with audio what most supercomputer applications do with similar data: take many floating-point operations, distribute them to a bunch of processors and reassemble the results into a single data stream. There is some latency in synchronizing the results and putting them back together into something you can listen to, but it allows for much more processing headroom. I haven’t actually had the need for this, though. Maybe if you are doing huge symphonic movie scores with soft synthesizers.

As a Logic user (and Ardour user/supporter) I will add that Logic has some limitations: it can only offload DSP processing done with Logic’s own effects/plugins. That means that if you are using third-party plugins, you currently cannot take advantage of the Logic Node architecture. Also, it isn’t terribly reliable.

JACK actually seems to be a step up, although I’ve not tried any netjack-related work on either my Linux or Mac workstations.


When it comes to a lack of resources, maybe the best approach is to use the tools we already have.
For I/O bandwidth, you could use external storage media (disks or RAID arrays) mounted over the network via NFS.
For computing power: distributed computing. I forgot the name, but it can be done with Linux kernels, although doing that with low-latency (“realtime”) kernels might not be so straightforward.

I’ve been thinking lately about how cool it would be to have a Linux live sound solution. This would compete with Digidesign’s Venue and Midas’ XL8 and others.

This would, of course, require extensive development to improve netjack, qjackctl and other applications. However, I think it wouldn’t be too difficult to get corporate sponsorship for it, if we could show that the idea was feasible.

If I’m not mistaken, and am reading these posts correctly, it seems we are talking about linking Ardour between multiple PCs, and the idea of linking via the Internet.

I’m shy to post my thoughts here because my knowledge of what you guys do is so limited and so respected. That said . . .

I find this to be a very exciting discussion/opportunity!

I’ve often thought that a very simplified online version of Ardour (a controller) would be quite useful. Multiple artists could connect through it, data would be saved on the website’s server and then downloaded after the session by the respective participants to their personal hard drives. Any post-production changes could then be uploaded and stored on the server for the next session.

Perhaps the bulk of the work that Ardour might do online would be in the processing of musical data (and providing a clock?), thus leaving the mixing etc. to the locally based PCs.

Each player could enter their setup into the visual online controller so that others would know the basic info about them, i.e. name, type of instrument, and input levels, visible via individual meter bridges per track so everyone could see the activity. Perhaps one person is the designated “engineer” who has ultimate control (saving files). Would the data created online be compiled and processed by Java and then submitted to the server the way forms are submitted? I don’t know these answers but am dying to find out.

Am I correct to assume that a signal from a (musical) keyboard to the internet would transmit as quickly as a camera signal does?

I know that we sometimes suggest things we make sound so simple. I’m sure it’s not, and I respect the work everyone has put into this project. As someone who doesn’t know anything about the programming side of Ardour, it seems to me that the most difficult part would be the clocking, and all players having a stable, fast Internet connection.

Keep up the great work!

pwillis - Not so much the internet but over a studio (or stage) LAN.

JordanN - A feedback eliminator plugin would be nice.

FWIW, there was discussion about this on the LAD list in 2004.
While the yakking was happening, somebody went and implemented it in JACK.

pwillis: when the Internet is involved, latency becomes a huge issue if you mean live collaboration. 100-millisecond delays might be fine for gaming, but if you are trying to play and keep in time, it is virtually unusable.
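The 100 ms figure is easy to put in musical terms. At a given tempo, a beat has a fixed duration, and the fraction of it eaten by the delay shows why Internet latency kills live playing while LAN latencies do not (the function name here is just for illustration):

```python
# How much of a beat does a given latency consume?
# At 120 BPM a beat lasts 60000 / 120 = 500 ms.

def latency_in_beats(latency_ms, bpm):
    """Fraction of one beat consumed by the given latency."""
    beat_ms = 60_000 / bpm
    return latency_ms / beat_ms

print(latency_in_beats(100, 120))  # a fifth of a beat: unplayable
print(latency_in_beats(2, 120))   # typical LAN scale: negligible
```

A 100 ms delay is a fifth of a beat at 120 BPM, which is like standing some 30 metres from the drummer; millisecond-scale LAN latency is like standing right next to him.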

I stumbled across this site and read lucem’s comments and questions about a multiple-computer DAW. I have not found any comments referencing M$ Server 03. I have been interested in this concept strongly enough to take a Network Administration degree course. Before anyone freaks out, consider this:
XP Pro by design will not recognize more than 3.2 GB of RAM, but Server Enterprise Edition will recognize 32 GB and 8 processors.

I accidentally posted this comment in the wrong thread and to the wrong person. Sorry