How to link Ardour with Blender

Nothing has changed. Works the same as it did 3 years ago.

Hi mate, that’s good to know. I don’t want to admit this, but it seems I am not as savvy as everyone else around here.

Any chance you can point me to an extremely clear walkthrough on getting this up and running? Maybe record it with something like Lightshot or OBS and put it on YouTube.

Links:

  1. Lightshot: https://app.prntscr.com/en/index.html
  2. OBS: https://obsproject.com/download

It’s always very humbling learning something new from scratch.

Thanks Ad :star_struck::star_struck::star_struck:

From what I’ve seen in another thread, you’re trying to do this on Windows, and I doubt you will have much luck. This is very much a Linux feature.

And even if you installed Linux, I don’t really see what would be gained by it. Just because you can do something doesn’t mean you should. Scoring and sound design sit further down the production pipeline than editing, compositing, or the zillion other things within the scope of Blender.

The only utility I ever found in linking them is that using Blender’s video sequencer display in place of xjadeo offers more flexibility: it is easy to change which video track is displayed during playback. But trying to edit in both at the same time would be very tedious.


I understand what you’re saying and can appreciate it. Thanks for the comment about xjadeo.

Fluidity would be nice, wouldn’t it?

I work in an area with fast turnaround times, so a seamless, hassle-free, quick-and-dirty workflow that leaves time for a reasonable mix would be great! Give me all the time I can get.

It would be interesting to hear what you have to say, not so much about how things have been done for years (that’s kind of obvious), but about how they could look in the future.

Creative workflow under pressure, right?

Ad

Hi,

There is a video editing project I have with multiple audio sources. The interview/dialog is recorded with multiple mics, and there are also sound effects/field recordings and the music soundtrack. I did connect Blender to JACK, but I’m not sure where Ardour fits into this.

Here is the workflow. First, I need to sync the dialog from multiple mics and adjust. Then I edit the video, and bit by bit, add my sound effects and soundtracks.

To sync the dialog parts, can this be done by connecting Ardour and Blender via JACK? I have audio playback in Blender from the camera, but I want to include my other mics. What would be the best way to do this? I would prefer to do the audio syncing and adjustment in Ardour.

What I hope happens is that once I edit the video in Blender, the changes will be reflected in Ardour. Is that a realistic expectation?

No, I don’t think so, if I understand what you are asking. Typically you would edit the video, then export a project edit list, and use a tool to convert that to an Ardour project, so that the audio corresponding to the sections of video you kept is placed on the Ardour timeline in the correct location.

Pretty much this.

My preferred workflow: edit the video with the dialog audio, then take it into Ardour to process the dialog and add SFX and music. I haven’t done this out of Blender in a long while, so I don’t know if there is a good tool for exporting audio sessions yet (similar to the goal of AAF/OMF). When I did it, I always just took stem outputs, and in Blender the clips were intentionally edited with handles on either end as much as possible, so that all the actual audio editing was cleaned up in Ardour.

Note that none of this process requires tying Blender or Ardour to JACK, honestly.

And even if you do tie them both to JACK to lock their transports together in sync, there is no way for Blender to communicate video tracks to Ardour natively at this time (and it would require a re-export from Blender anyway, costing a lot of time for every video edit).
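For anyone who does want to try the transport lock anyway, the rough setup looks something like this. Treat it as a configuration sketch, not a recipe: the jackd options are illustrative, and the exact menu locations in Blender and Ardour vary between versions.

```shell
# Start a JACK server (device, sample rate and period size are examples only;
# adjust to your hardware)
jackd -d alsa -d hw:0 -r 48000 -p 256 &

# Blender: in the user preferences, set the audio device to "JACK"
#          (only available if your build was compiled with JACK support),
#          and enable "AV-sync" so playback follows the shared transport.
# Ardour:  start it on the JACK backend and set the sync source to
#          JACK Transport (menu location differs between versions).

# Once both are attached, pressing play/stop in either application
# drives the other via the shared JACK transport.
```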

   Seablade

That sort of defeats the whole point of linking Blender with Ardour. When you have a dialog piece recorded from different audio sources, you need a DAW-like tool to merge them properly. The audio recorder runs continuously, say for an hour, whereas the camera captures the piece in chunks that don’t quite add up to an hour. That is why you need to be able to move the audio sources and the video along a timeline to sync and adjust. Once your audio is synced, you adjust it: perhaps channel 1 is louder than channel 2, or vice versa, depending on what is going on during the recording due to other noises. For example, maybe one mic source is preferable for a few minutes, then another. Sometimes there may be three or more audio sources. Blender does not seem to be a good tool for syncing and adjusting multiple audio sources; I am not sure you can even group audio streams in Blender (I asked on the Blender IRC and received no answer).

Now you are getting into the gist of it, is Blender the best tool for the job?

Don’t get me wrong, the VSE in Blender has improved significantly, but it is still best used to edit sequences created in Blender, and most of the time for animation you are animating to the dialog, which makes this a moot point for the most part. I would edit the dialog, give it to the animators, they would animate to the dialog to match lip sync etc., and then I would mix in music and SFX.

Up until now you had just mentioned Blender and Ardour, not that you were only editing externally created video. In that case, yes, I would edit the video with the recorded audio and then import both into Ardour to resync the audio. Ideally you would import the audio and sync in a video editor using something like Pluraleyes, but there isn’t a great solution for that in open source or on Linux that I know of yet (though in the back of my head I remember seeing a project to solve that at one point). Honestly, though, I wouldn’t have both open at the same time: edit the video, tell the story, export, and then edit the audio in Ardour or similar.

For the record, this is what I did for years with fast turnaround times, working with others from Final Cut Pro/Pluraleyes (the latter of which had a nasty habit of downmixing to mono for my editor at the time, but that was years ago) and other solutions for editing video, then taking it into Ardour and Mixbus to edit the audio.

Seablade

By the way, don’t get me wrong: the VSE in Blender is capable if you are willing to work with it, but it may not always be the best option. If it is all you are willing to work with, though, see the workflow I suggested above.

   Seablade

Erm, “There is a video editing project I have with multiple audio sources”: that was what I posted a day ago. Before I edit the video, it is best to sync. It would be a bloody task to select my master sources at the end and look for 5 seconds here, three seconds there, and 20 seconds yonder to match for the final mix. It would be looking for needles in a haystack. My main editing tool is Lightworks, but I normally use it for small projects. I have a large project, and Lightworks dies a horrible death when I load too much footage, whereas Blender can handle loading large amounts of source footage. I will try your ‘Pluraleyes’ via WINE and see if that helps. If not, I will try to sync in Blender and see what happens. If I had a one-hour chunk of video and a one-hour chunk of audio, I could marry them in Ardour/Mixbus or Reaper and export the result. But my video sources are in shrapnel, so those tools are cumbersome in this case.

Ahh, I skimmed back to the start of the thread before I answered and missed that. In which case, see my point about sync. Many audio recorders these days, even on the cheap end, are starting to get basic sync as well; even if not TC-locked, they use HDMI triggers to at least start and stop with the video.

Yep it is, and that is why notes help. I had to do that for years before affordable tools to assist with this were available. To my knowledge you would have to do it at some step of the process no matter what; Pluraleyes and similar will help, but how well they work depends on what exactly the source material is. A quick Google around suggests that this may have progressed farther on Linux than I was aware, as I haven’t looked into it in some time, but I’m not sure, so I would suggest researching it if possible.

A thought as I am typing: for your situation, I would probably still do the initial sync in the NLE (or VSE, in Blender terms) by importing all the audio tracks and the video, and instead of cutting the audio to match the video, work the other way and lay the video clips on top of the audio in a single timeline. This is easiest in video editors that allow you to import a timeline as a clip in a second timeline, letting you then cut up that timeline to select clips as needed. I believe Resolve will do this IIRC; I don’t believe Lightworks will (it is intended for higher-end productions that would be TC-locked anyway), and I’m not sure about the VSE in Blender. Other options like Final Cut obviously will. At any rate, you are then effectively doing your video editing in two steps: one to sync your individual clips to the audio in the NLE, and a second to edit the now-complete video down into the clips you actually want and drop them on the timeline.

This is similar to how I would work for interviews and some documentary-style shooting anyway: I might have one camera up recording the full time (usually this is also where I am recording audio), use a second camera as a B cam to capture B-roll and alternate angles if I am not sitting down to do the interview, then line up the multiple cameras on top of the one consistent timeline and switch between them (again, easier in programs designed to handle this workflow).

How much footage are you loading that Lightworks dies so horribly, by the way, and what machine are you editing on? It has been a few years since I have done any serious video editing, but I don’t remember running into that in Lightworks when I used it.

By the way, this is also why a clapboard or similar at the start and end of shots can be a good thing: it gives you an easy way to line up clips as needed. But again, it depends on what exactly you are shooting.

Hi Seablade, that is why syncing audio at the start is better, while the files are still in larger chunks. But syncing also requires some adjustment of levels. Right now the best way to deal with it in Blender is to have the two sources on the timeline, export the audio tracks separately, work on them, and then re-import the result as audio track three. (The audio recorded is lav plus shotgun mic, on separate devices.)
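As a side note, if you would rather do that adjustment pass in Ardour, each source can first be reduced to a plain WAV with ffmpeg. The sketch below is self-contained: the first command just fabricates a stand-in “camera” clip (in practice this would be your real H264 footage), and all filenames are placeholders.

```shell
# Fabricate a short stand-in camera file with embedded stereo audio
# (substitute your real camera footage here)
ffmpeg -y -f lavfi -i testsrc=duration=1:size=320x240:rate=25 \
       -f lavfi -i sine=frequency=440:duration=1:sample_rate=48000 \
       -ac 2 -c:v mpeg4 -c:a aac camera.mp4

# Extract the camera's scratch audio as a WAV for syncing in Ardour
ffmpeg -y -i camera.mp4 -vn -c:a pcm_s16le camera_scratch.wav

# If lav and shotgun had instead landed on the two channels of one
# stereo file, they could be split into separate mono WAVs like this
ffmpeg -y -i camera_scratch.wav \
       -filter_complex "[0:a]channelsplit=channel_layout=stereo[L][R]" \
       -map "[L]" lav.wav -map "[R]" shotgun.wav
```

Once cleaned up in Ardour, the exported WAV can be dropped back into the Blender VSE as the third audio track, as described above.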

My master files are in H264, so DaVinci Resolve will not work (on Linux) unless I pay for the program, and I don’t run Windows or Mac for video editing. Lightworks is paid and generally runs fine, but once I put two hours’ worth of footage on the timeline, the program crashes. The Lightworks forums said that my recording format, H264, is to blame. So you are thinking, ‘change the format’, right? Well, my hard drives have no space to reformat all my source material, and the camera records in the format it does.

The machine I edit on is a desktop with an AMD Athlon X4 880K CPU, an RX 560 (4 GB version) graphics card, 16 GB of RAM, and SSD drives, running Ubuntu 18.04.

Yes, it all depends on your particular needs, and how you are working.

Now this you will have to explain; it shouldn’t be a problem from a standpoint of just sync. From a standpoint of usability for editing it can be more of an issue, which is why a true stem export from an NLE can be very good. Few NLEs do that well; many of the commercial options look at AAF or OMF instead.

So the end result is that this is a problem. Editing proxies is standard practice, for multiple reasons, but it does require HD space. Even if you had the Studio version of Resolve, I am not sure whether it supports H.264 on Linux, but I would still recommend editing proxies, or at the least using a less heavily compressed format, honestly; space can be a large issue there. There is a reason that RAID arrays and the like are still popular for video editing.
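For what it’s worth, the proxy step itself is a one-liner with ffmpeg. The sketch below fabricates a tiny stand-in source first so it can be run as-is, and uses the ProRes proxy profile as the intraframe codec purely as an example (DNxHR, MJPEG, etc. would serve equally well); filenames are placeholders.

```shell
# Fabricate a short stand-in source clip
# (your real input would be the camera's H264 footage)
ffmpeg -y -f lavfi -i testsrc=duration=1:size=320x240:rate=25 \
       -f lavfi -i sine=frequency=440:duration=1 \
       -c:v mpeg4 -c:a aac source.mp4

# Transcode to an intraframe, edit-friendly proxy: ProRes proxy profile
# for the video, uncompressed PCM audio, scaled down to save space
ffmpeg -y -i source.mp4 -vf scale=160:120 \
       -c:v prores_ks -profile:v 0 -c:a pcm_s16le proxy.mov
```

Intraframe codecs trade disk space for scrubbing performance, since every frame can be decoded on its own; that is exactly why long-GOP H264 timelines strain editors.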

Seablade

Sure. Say you have the lav mic (audio A) and the shotgun mic (audio B); there are variances. For instance, because the lav is wireless, you get occasional signal interference or a pulse, and you cannot always control the recording environs. Audio B might have too much reverb or pick up other things as well. Ideally, if both sources are clean, you marry them. However, given the circumstances, you raise or lower channel A or B accordingly. Maybe channel B is not that good, but better than A, so you have to use a de-reverb plugin or run it through noise reduction as well. Maybe the last ten minutes of audio on one of the channels gets wonky, so you toggle the channels… so many variables.

Yes, DaVinci Resolve Studio can import H264, but I do not have the budget for it because my clients do not grant me such a budget, yet. Resolve has other issues too. I tried to sync audio in Resolve when I ran Windows (where the free version does handle H264), but the audio/video sync got messed up and the program ran like someone on a bottle of vodka. Resolve runs better on Linux, but H264 is the rub.

Sure, I can edit proxies in Lightworks, but if the program crashes just from importing footage past a certain threshold, then you cannot even reach that step. Since Blender ‘holds the line’, I can export each channel as WAV, merge and edit in Ardour, and then re-import into Blender.

Interesting to see this thread revitalized, and to see how different the workflows and experiences are.

For me it was pretty much the opposite of Fathom story’s experience: Blender is a great program but not the most convenient to work with as an NLE. For me the best way is the “classic” film workflow: edit in an NLE (Lightworks, in my case, works best), export the audio as stems/OMF/AAF to mix in a DAW (Ardour/Mixbus…), export the final mix, and render the deliverables with the NLE. Syncing up NLEs and DAWs in my opinion doesn’t make much sense: if you edit something in the video, the edit will not be reflected in the audio, and vice versa. If you don’t want to jump back and forth, an in-program workflow can be achieved with Resolve, or now also with Reaper (for very basic video editing).

@calimerox If you have two channels of a single audio source (dialog), it would be nice to line them up in a VSE/NLE and export. However, if your source audio has issues (because we do not always record in an ideal, pristine studio world), you sometimes need to adjust the source audio. It is an extra, cumbersome step, but necessary. If I record dialog in a studio or get a clean take, your way makes sense. Out in the field, things happen, so you need to adjust the source as best you can, edit, and then export for the final mixdown. Sure, Reaper is fine for the final mixdown. When I used Adobe CC, I loved that with Premiere and Audition open and linked, any adjustment in the former was automatically reflected in the latter. That was pretty cool.

Ahh, see, all of that I would do (and have done) AFTER editing the video, once I have taken it into a DAW. Those are things the DAW is going to be more precise and quicker at, and in most cases better. For instance, with what I can tell of your workflow: when I have done similar work, I edit (or often someone else will edit) the video and hand it off to me with the synced audio (in fact, I tell them not to process the audio at all if I am not the one editing the video). I will then process the audio in a DAW, which is much better suited to the task (though to be honest I haven’t used the Fairlight tools in Resolve yet).

Is this true on Linux? I know it is on Windows and Mac, just didn’t remember on Linux.

This is true of more than just Resolve and is why on any platform it is not recommended to use h.264 as your editing format but instead to convert to something more appropriate and possibly use proxy clips on top of that.

Yes, but it seems like you are skipping the question ‘Why does it crash to start with?’ and whether something specific is causing it, honestly. Note I am not saying ‘you have to use Lightworks’, but rather that you should understand why this is an issue for you.

Everything except the last step was my normal workflow for this type of material. For the last step I would not take it back into the NLE, but rather mux the exported video and rendered audio together using FFMPEG; that just worked better for me in particular, but I understand it is not ideal for everyone. Robin and I discussed, back when I was doing these types of things weekly, making it possible to do this within Ardour; it wouldn’t be too difficult to implement, but it may add a level of complexity that would not be ideal for most users.
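That mux step is a single ffmpeg invocation: copy the picture stream from the NLE’s export untouched and marry it to the DAW’s rendered mix. The first two commands below just fabricate stand-in inputs so the example is self-contained; all filenames are hypothetical.

```shell
# Stand-ins for the NLE's video export and the DAW's rendered mix
# (substitute your real exported files)
ffmpeg -y -f lavfi -i testsrc=duration=1:size=320x240:rate=25 \
       -c:v mpeg4 -an edit.mov
ffmpeg -y -f lavfi -i sine=frequency=440:duration=1 \
       -c:a pcm_s16le mix.wav

# Mux: video stream copied bit-for-bit (no re-encode, so no quality loss
# and no render time), audio taken from the rendered mix
ffmpeg -y -i edit.mov -i mix.wav -map 0:v:0 -map 1:a:0 \
       -c:v copy -c:a pcm_s16le final.mov
```

Because `-c:v copy` avoids re-encoding the video, this finishes in seconds even on long programs, which is presumably why it beat a round-trip through the NLE.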

See, I still don’t understand why you feel you have to adjust in the NLE first, honestly. I can understand if you simply cannot hear it (in which case simple amplification is all that is needed, though I would remove it before exporting to the DAW myself), but beyond that I am missing something.

  Seablade
  1. Yes, audio is processed after editing, as a rule. But when you have two sources and both are problematic, you need to make a new (nicer) source. If I am wrong on that, let me know.
  2. Yes, DaVinci Resolve can import and edit H264 on Linux IF you pay.
  3. The Panasonic GH series camera I have records only in H264. Alas.
  4. Lightworks crashes because it is too much H264 (for a big project). If I have a little bit of H264, it says ‘okay’. But past a threshold, it dies. Two hours of raw footage seems to be the threshold. H264 is the problem because the Lightworks forum says so. They say, “H264 BAD! MAKE LIGHTWORKS CRASH! IT DIE!”
  5. If I had the hard drive space and power and a nicer camera, I would work in better formats. Alas, I have what I have and endeavor to make do. Converting the sources for the ‘big project’ would require more money, more processing power, more SSDs, etc. etc.
    The moral of the story: more money, problems go away! Hooray!