How to link Ardour with Blender

@fathomstory

Last I looked into this, I thought it was a plugin that had to be loaded into Blender to get it to sync to Jack. You would then sync both Blender and Ardour to Jack.

    Seablade

Based on these responses I suppose I had better contact the Blender community as well. In school I was trained in Adobe Premiere Pro and Adobe Audition. What is cool about those programs is that they link to one another when you want/need to do granular audio editing. I was thinking that Premiere Pro and Blender could work in much the same way, as the latter has a video editing mode that looks quite promising.

Oops, I meant Blender and Ardour might work in much the same way as Adobe Premiere Pro and Audition.

@fathomstory

They definitely don’t work as well in the same way, sadly. Part of the reason has to do with interchange formats: there isn’t a well-developed open option, and even the proprietary ones can be rough once you leave the ecosystem that wrote them (so Avid, etc.). If a good solution could be found, it might open the door to such a thing. Adobe, since it controls both Audition and Premiere, can implement its own end-to-end solution; that is a bit more difficult when the software is controlled by two different entities.

   Seablade

There is no special plugin needed. Provided that the build of Blender being used has been built with JACK support enabled, it’s simply a matter of setting JACK as the audio device within Blender’s preferences. Once that is done, the video editor / compositor will lock to the JACK transport.

@Reuben Thanks, can you please tell me how you do that, step by step? If you can do a blog post and/or video, that would be fantastic.

First make sure JACK is active, either via Ardour or something like QjackCtl.

In Blender:
File -> User Preferences -> System Tab -> Audio Dev dropdown -> “Jack”

Go to the screen layout preset for “Video Editing”. Hit the play button of the timeline. You should be able to observe the JACK transport slaving to Blender’s timeline.
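For anyone who prefers to script it, the same preference can be flipped from Blender’s own Python console. A minimal sketch, assuming the Blender 2.8+ API path (`bpy.context.preferences.system.audio_device`; older 2.7x builds used `bpy.context.user_preferences.system.audio_device` instead):

```python
# Hedged sketch: set Blender's audio device to JACK via the Python API.
# The attribute path assumes Blender 2.8+; 2.7x builds used
# bpy.context.user_preferences.system.audio_device instead.

def set_audio_device_to_jack():
    """Switch Blender's audio device to JACK; return True on success."""
    try:
        import bpy  # only available inside Blender's embedded Python
    except ImportError:
        return False  # not running inside Blender
    bpy.context.preferences.system.audio_device = 'JACK'
    return True

print(set_audio_device_to_jack())  # False outside Blender, True inside
```

Run outside Blender this just returns False; inside Blender it is the scripted equivalent of the menu path above, and the JACK option only appears at all if the build was compiled with JACK support.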

So it’s 2019 and I took a look at this thread.

It doesn’t seem to be relevant anymore. How would we do it now, can I ask?

Thanks Ad

Nothing has changed. Works the same as it did 3 years ago.

Hi mate, that’s good to know. I don’t want to admit this, but it seems I am not as savvy as everyone else around here.

Any chance you can point to an extremely clear walkthrough on getting this up and running? Maybe using something like Lightshot or OBS and getting it on YouTube.

Links:

  1. Lightshot: https://app.prntscr.com/en/index.html
  2. OBS: https://obsproject.com/download

It’s always very humbling learning something new from scratch.

Thanks Ad :star_struck::star_struck::star_struck:

From what I’ve seen in another thread, you’re trying to do this on Windows, and I doubt you will have much luck. This is very much a Linux feature.

And even if you installed Linux, I don’t really see what would be gained by it. Just because you can do something doesn’t mean you should. Scoring and sound design is further down the production pipeline than editing, compositing, or the zillion other things within the scope of Blender.

The only utility I ever found in linking them is that using Blender’s video sequencer display in place of xjadeo offers more flexibility, in that I can easily change which video track is displayed during playback. But trying to edit in both at the same time would be very tedious.


I understand what you’re saying and can appreciate it. Thanks for the comment about xjadeo.

Fluidity would be nice, wouldn’t it?

I work in an area with fast turnaround times, so a seamless, hassle-free, quick-and-dirty workflow that leaves time to do a reasonable mix would be great! So give me all the time I can get.

It would be interesting to hear what you have to say, not so much about how things have been done for years (that’s kinda obvious), but about how they could look in the future.

Creative workflow under pressure right?

Ad

Hi,

There is a video editing project I have with multiple audio sources. I have the interview/dialog recorded with multiple mics. There are also sound effects/field recordings and the music soundtrack. So I did connect Blender to Jack, but am not sure where Ardour fits into this.

Here is the workflow. First, I need to sync the dialog from multiple mics and adjust. Then I edit the video, and bit by bit, add my sound effects and soundtracks.

To sync dialog parts, can this be done by connecting Ardour and Blender via JACK? I have audio playback in Blender from the camera, but I want to include my other mics. What would be the best way to do this? I would prefer to do the audio syncing and adjustment in Ardour.

What I hope happens is that once I edit the video in Blender, the changes will be reflected in Ardour. Is that a realistic expectation?

No, I don’t think so, if I understand what you are asking. Typically you would edit the video, then export a project edit list, and use a tool to convert that to an Ardour project in some way, so that the audio corresponding to the sections of video you kept is placed onto the Ardour timeline in the correct location.
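To make the “correct location” step concrete: an edit list records positions in video frames, while a DAW timeline works in audio samples, so the conversion is just frames ÷ fps × sample rate. A minimal sketch, where the function name and the 25 fps / 48 kHz figures are illustrative assumptions rather than values from any particular EDL tool:

```python
# Hedged sketch: map a video-frame position from an edit list onto a
# sample position for a DAW timeline. The 25 fps / 48 kHz defaults are
# illustrative assumptions, not from any specific tool.

def frames_to_samples(frame, fps=25, sample_rate=48000):
    """Convert a video frame index to an audio sample position."""
    return round(frame / fps * sample_rate)

# A clip starting at frame 250 of a 25 fps timeline sits 10 seconds in,
# i.e. sample 480000 for 48 kHz audio.
print(frames_to_samples(250))  # 480000
print(frames_to_samples(0))    # 0
```

Any real converter also has to carry over the source in-point and clip duration per edit, but the frame-to-sample arithmetic is the same.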

Pretty much this.

My preferred workflow: edit the video with the dialog audio, then take it into Ardour to process the dialog and add SFX and music. I haven’t done this out of Blender in a long while, so I don’t know if there is a good tool for exporting audio sessions yet, similar to the goal of AAF/OMF. When I did it, I always just took stem outputs, and in Blender they were intentionally edited with handles on either end as much as possible, so that all the actual audio editing was cleaned up in Ardour.

Note that none of this process requires tying Blender or Ardour to JACK, honestly.

And yes, even if you do tie them both to JACK to lock their transports together in sync, there is no way for Blender to communicate video tracks to Ardour natively at this time (and it would require a re-export from Blender anyway, taking up lots of time for every video edit).

   Seablade

That sort of defeats the whole point of linking Blender with Ardour. When you have a dialog piece recorded from different audio sources, you do need a DAW-like tool to merge them properly. The audio recorder runs continuously, say for an hour, whereas the camera has the piece recorded in chunks that don’t quite add up to an hour. That is why you ought to be able to move the audio sources and video along a timeline to sync and adjust. Once the audio is synced, it still needs adjusting: perhaps channel 1 is louder than channel 2, or vice versa, depending on what is going on during a recording due to other noises. For example, maybe one mic source is preferable for a few minutes, then another. Sometimes there may be three or more audio sources. Blender does not seem to be a good tool to sync and adjust multiple audio sources. I am not sure if you can even group audio streams in Blender (I asked on the Blender IRC and received no answer).

Now you are getting into the gist of it, is Blender the best tool for the job?

Don’t get me wrong, the VSE in Blender has improved significantly, but it is still best when used to edit together sequences created in Blender, and most of the time for animation you are animating to the dialog, which makes this a moot point for the most part. I would edit the dialog, give it to the animators, they would animate to the dialog to match lip sync etc., and then I would mix in music and SFX.

Up until now you had just mentioned Blender and Ardour, not that you were only editing externally created video. In that case, yes, I would edit the video with the recorded audio and then import both into Ardour to resync the audio. Ideally you would import the audio and sync in a video editor using something like Pluraleyes, but there isn’t a great solution for that in open source or on Linux that I know of yet (though in the back of my head I thought I remembered seeing a project to solve that at one point). But honestly I wouldn’t have both open at the same time: edit the video, tell the story, export, and then edit the audio in Ardour or similar.
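For the curious, the core idea behind Pluraleyes-style waveform sync is simple: slide the short camera-audio chunk along the continuous recorder track and keep the offset where the two line up best. A toy sketch of that idea; real tools correlate envelopes or spectra of actual audio, and the tiny integer “signals” here are purely illustrative:

```python
# Toy sketch of waveform-alignment sync (the idea behind tools like
# Pluraleyes): slide the short chunk along the long reference and pick
# the offset with the highest correlation score.

def best_offset(reference, chunk):
    """Return the offset in reference where chunk correlates best."""
    best, best_score = 0, float("-inf")
    for off in range(len(reference) - len(chunk) + 1):
        score = sum(r * c for r, c in zip(reference[off:], chunk))
        if score > best_score:
            best, best_score = off, score
    return best

recorder = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0]  # continuous recorder track
camera = [5, 9, 5]                          # short camera-audio chunk
print(best_offset(recorder, camera))        # 3
```

The returned offset is where the camera clip gets dropped on the recorder’s timeline; with real audio you would run this on amplitude envelopes rather than raw samples to keep it cheap and robust.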

For the record, this is what I did for years with fast turnaround times: working with others using Final Cut Pro/Pluraleyes (the latter of which had a nasty habit of downmixing to mono for my editor at the time, but that was years ago) and other solutions for editing video, and taking it into Ardour and Mixbus to edit the audio.

Seablade

By the way, don’t get me wrong, the VSE in Blender is capable if you are willing to work with it, but it may not always be the best option. If that is all you are willing to work with, though, see the workflow I suggested above.

   Seablade

Erm, “There is a video editing project I have with multiple audio sources” — that was what I posted a day ago. Before I edit the video, it is best to sync. It would be a bloody task to select my master sources at the end and look for 5 seconds here, three seconds there, and 20 seconds yonder to match for the final mix. It would be looking for needles in a haystack. My main editing tool is Lightworks, but I normally use it for small projects. I have a large project and Lightworks dies a horrible death when I load too much footage. But Blender can handle loading large amounts of source footage. I will try your ‘Pluraleyes’ via WINE and see if that helps. If not, I will try to sync in Blender and then see what happens. If I had a one-hour chunk of video and a one-hour chunk of audio, I could marry them in Ardour/Mixbus or Reaper and export the result. But my video sources are in shrapnel, so using those tools is cumbersome in this case.

Ahh, I skimmed back to the start of the thread before I answered and missed that. In which case see my point about sync. Many audio recorders these days, even on the cheap end, are starting to get basic sync involved as well; even if not TC-locked, they can use HDMI triggers to at least start and stop with video.

Yep it is, and that is why notes help. I had to do that for years before affordable tools to assist with this were available. To my knowledge you would have to do that at some step of the process no matter what; Pluraleyes and similar will help, but how well depends on exactly what the source material is. A quick Google around suggests that this may have progressed farther on Linux than I was aware, as I haven’t looked into it in some time, but I’m not sure, so I would suggest researching it if possible.

A thought as I am typing: for your situation, I would probably still do the initial sync in the NLE (or VSE in Blender terms) by importing all the audio tracks and the video, and instead of cutting the audio to match the video, work the other way and lay the video clips on top of the audio in a single timeline. This is easiest in video editors that allow you to import a timeline as a clip in a second timeline, letting you then cut up the timeline to select clips as needed. I believe Resolve will do this, IIRC; I don’t believe Lightworks does (which is intended for higher-end production that would be TC-locked anyway), and I’m not sure about the VSE in Blender. Other options like Final Cut obviously will. At any rate, you are then effectively doing your video editing in two steps: one to sync up your individual clips to audio in the NLE, and then editing the now-full video down into the clips you actually want and dropping them on the timeline.

This is similar to how I would work for interviews and some documentary-style shooting anyway: I might have a camera up recording the full time (usually this is where I am also recording audio), use a second cam as a B cam to capture B-roll and alternate angles if I am not sitting down to do the interview, line up the multiple cameras on top of the one consistent timeline, and switch between them (again, easier in programs designed to handle this workflow).

How much footage are you loading that Lightworks dies so horribly on you, by the way, and what machine are you editing on? It has been a few years since I have done any serious video editing, but I don’t remember running into that with Lightworks when I used it.