How to link Ardour with Blender

By the way, don't get me wrong, the VSE in Blender is capable if you are willing to work with it, but it may not always be the best option. If that is all you are willing to work with, though, see the workflow I suggested above.

   Seablade

Erm, “There is a video editing project I have with multiple audio sources”, that was what I posted a day ago. Before I edit the video, it is best to sync. It would be a bloody task to select my master sources at the end and look for 5 seconds here, three seconds there, and 20 seconds yonder to match for the final mix. It would be looking for needles in the haystack. My main editing tool is Lightworks, but I normally use it for small projects. I have a large project and Lightworks dies a horrible death when I load too much footage. But Blender can handle the loading of large amounts of source footage. I will try your ‘pluraleyes’ via WINE and see if that helps. If not, I will try to sync in Blender and then see what happens. If I had a one hour chunk of video and a one hour chunk of audio, then I could marry them in Ardour/Mixbus or Reaper and export the result. But my video sources are in shrapnel, so using those tools is cumbersome in this case.

Ahh, I skimmed back to the start of the thread before I answered and missed that. In which case see my point about sync. Many audio recorders these days, even on the cheap end, are starting to include basic sync as well; even if not TC locked, they can use HDMI triggers to at least start and stop with the video.

Yep it is, that is why notes help. I had to do that for years before affordable tools to assist with this were available. To my knowledge you would have to do that at some step of the process no matter what; PluralEyes and similar will help, but how well they work depends on exactly what the source material is. A quick google around suggests that this may have progressed farther on Linux than I was aware, as I haven’t looked into it in some time, but I am not sure, so I would suggest researching it if possible.

A thought as I am typing: for your situation, I would probably still do the initial sync in the NLE (or the VSE, in Blender terms) by importing all the audio tracks and importing the video, and instead of cutting the audio to match the video, work the other way and lay the video clips on top of the audio in a single timeline. This is easiest in video editors that allow you to import a timeline as a clip in a second timeline, letting you then cut up that timeline to select clips as needed. I believe Resolve will do this IIRC; I don’t believe Lightworks will (it is intended for higher-end productions that would be TC locked anyways), and I am not sure about the VSE in Blender. Other options like Final Cut obviously will. At any rate, you are then effectively doing your video editing in two steps: one to sync up your individual clips to audio in the NLE, and then editing the now full video down into the clips you actually want and dropping them on the timeline.

This is similar to how I would work for interviews and some documentary-style shooting anyways: I might have a camera up recording the full time (usually this is where I am also recording audio) and use a second cam as a B cam to capture B-roll and alternate angles if I am not sitting down to do the interview, then line up the multiple cameras on top of the one consistent timeline and switch between them (again, easier in programs designed to handle this workflow).

How much footage are you loading that Lightworks dies so horribly on you, by the way, and what machine are you editing on? It has been a few years since I have done any serious video editing, but I don’t remember running into that on Lightworks when I used it.

By the way, this is also why a clapboard or similar at the start and end of shots can be a good thing, as it gives you an easy way to line up clips as needed, but again it depends on what exactly you are shooting.

Hi Seablade, that is why syncing audio at the start is better, while the files are still in larger chunks. But syncing also requires some adjustment of levels. Right now the best way to deal with it in Blender is to have the two sources on the timeline, export the audio tracks separately, work on them, and then re-import the result as audio track three. (The audio recorded is lav plus shotgun mic, on separate devices.)

My master files are in H264, so DaVinci Resolve will not work (on Linux) unless I pay for the program. I don’t run Windows or Mac for video editing. Lightworks is paid and generally runs fine, but once I put two hours’ worth of footage on the timeline, the program crashes. The Lightworks forums said that my recording format, H264, is to blame. So you are thinking, ‘change the format’, right? Well, my hard drives have no space to re-encode all my source material, and the camera records in the format it does.

The machine I edit on is a desktop with an AMD Athlon X4 880K CPU, an RX 560 (4 GB version) graphics card, 16 GB of RAM and SSD drives, with Ubuntu 18.04.

Yes, it all depends on your particular needs, and how you are working.

Now this you will have to explain; it shouldn’t, from the standpoint of just sync. From the standpoint of usability for editing it can be more of an issue, which is why a true stem export from an NLE can be very good, something few do well, though many of the commercial options use AAF or OMF instead.

So the end result is that this is a problem. Editing proxies is a standard practice, for instance, for multiple reasons, but it does require HD space. Even if you had the Studio version of Resolve I am not sure whether it supports h.264 on Linux or not, but I would still recommend editing proxies, or at the least using a less heavily compressed format for the most part honestly, though space can be a large issue there. There is a reason that RAID arrays and similar are still popular for video editing.
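
If it helps, a rough sketch of what I mean by a proxy workflow, using ffmpeg and purely hypothetical filenames (the exact settings are a matter of taste): transcode a small, intra-frame copy of each clip that is cheap to decode, edit with those, then relink to the originals for the final render.

```
# Hypothetical filenames; makes a small MJPEG proxy with uncompressed audio,
# scaled down to 960px wide, that most NLEs will scrub through easily.
ffmpeg -i clip_from_camera.mp4 \
       -vf scale=960:-2 \
       -c:v mjpeg -q:v 3 \
       -c:a pcm_s16le \
       clip_from_camera_proxy.mov
```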

Seablade

Sure, say you have the lav mic (audio A) and the shotgun mic (audio B); there are variances. For instance, because the lav is wireless, you get occasional signal interference or a pulse, and you cannot always control the recording environs. Audio B might have too much reverb or record other things as well. Ideally, if both sources are clean, you marry them. However, given circumstances, you raise or lower channel A or B accordingly. Maybe channel B is not that good, but better than A, so you have to use a ‘deverb’ plugin or run it through noise reduction as well. Maybe the last ten minutes of audio for one of the channels get wonky, so you toggle the channels…so many variables.

Yes, DaVinci Resolve Studio can import H264, but I do not have a budget for it because my clients do not grant me such a budget, yet. But Resolve has other issues. I tried to sync audio in Resolve when I ran Windows (where the free version does handle H264), but the audio/video sync got messed up and the program runs like someone on a bottle of vodka. Resolve runs better on Linux, but H264 is the rub.

Sure, I can edit proxies on Lightworks, but if your program crashes just from importing footage past a certain threshold, then you cannot even reach that part. Since Blender ‘holds the line’, I can export each channel in WAV, merge and edit in Ardour, and then reimport into Blender.

Interesting to see this thread vitalized again and how different workflows and experiences are.

For me it was pretty much the opposite of Fathom Story’s experience: Blender is a great program but not the most convenient to work with as an NLE… For me the best way is the “classic” film workflow: edit in an NLE (Lightworks, in my case, works the best), export the audio as stems / OMF / AAF to mix in a DAW (Ardour/Mixbus…), export the final mix and render the deliveries with the NLE. Syncing up NLEs and DAWs in my opinion doesn’t make much sense, as you will want to edit something in the video and the edit will not get reflected in the audio, and vice versa. In that case an in-program workflow is something you can achieve with Resolve, or now also with Reaper (for very basic video editing), if you don’t want to jump back and forth…

@calimerox If you have two channels of a single audio source (dialog), it would be nice to line them up in a VSE/NLE and export. However, if your source audio has issues, because we do not always record in an ideal, pristine studio world, you do sometimes need to adjust your source audio. It is an extra, cumbersome step, but necessary. If I record dialog in a studio or get a clean take, your way makes sense. Out in the field, things happen. So you need to adjust the source as best you can, edit, and then export for the final mixdown. Sure, Reaper is fine for final mixdown. When I used to use Adobe CC, I loved that I could have Premiere and Audition open and linked; any adjustment in the former was automatically reflected in the latter. That was pretty cool.

Ahh, see, all of that I would do (and have done) AFTER editing the video, once I have taken it into a DAW. Those are things the DAW is going to be more precise and quicker at, and in most cases better. For instance, with what I can tell of your workflow, when I have done similar I edit, or often someone else will edit the video and hand it off to me with the sync’d audio (in fact I tell them not to process the audio at all if I am not the one editing the video). I will then process the audio in a DAW much better suited for the task (though to be honest I haven’t used the Fairlight page in Resolve yet).

Is this true on Linux? I know it is on Windows and Mac, just didn’t remember on Linux.

This is true of more than just Resolve, and is why on any platform it is not recommended to use h.264 as your editing format, but instead to convert to something more appropriate and possibly use proxy clips on top of that.

Yes, but it seems like you are missing a ‘Why does it crash to start with?’ step, to see if there is something causing it, honestly. Note I am not saying ‘You have to use Lightworks’ but rather saying you should understand why this is an issue for you.

Everything except the last step was my normal workflow for this type of material; for the last step I would not take it back into the NLE but rather mux the exported video and rendered audio together using FFMPEG. That just worked better for me in particular, but I understand it is not ideal for everyone. Robin and I in fact discussed, back when I was doing these types of things weekly, making it possible to do this within Ardour; it wouldn’t be too difficult to implement but may add a level of complexity that would not be ideal for most users.
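
For the muxing step, something along these lines is all I mean; the filenames are hypothetical, with the video exported from the NLE and the final mix rendered from Ardour:

```
# Hypothetical filenames: take the video stream from the NLE export and the
# audio stream from the Ardour render, and copy both without re-encoding.
ffmpeg -i edited_video.mov -i final_mix.wav \
       -map 0:v:0 -map 1:a:0 \
       -c:v copy -c:a copy \
       -shortest delivery.mov
```

For an MP4 delivery you would typically encode the audio to AAC (-c:a aac) rather than copying the WAV stream as-is.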

See, I still don’t understand why you feel you have to adjust in the NLE first, honestly. I can understand if you simply cannot hear it (in which case simple amplification is all that is needed, though I would remove that before exporting to the DAW myself), but beyond that I am missing something.

  Seablade
  1. Yes, audio is processed after editing, as a rule. But when you have two sources and both are problematic, you need to make a new (nicer) source. If I am wrong on that, let me know.
  2. Yes, DaVinci Resolve can import and edit H264 in Linux IF you pay.
  3. The Panasonic GH series camera I have records only in H264. Alas.
  4. Lightworks crashes because it is too much H264 (for the big project). If I have a little bit of H264, it says ‘okay’. But past a threshold, it dies. Two hours of raw footage seems to be the threshold. H264 is the problem because the Lightworks forum says so. They say, “H264 BAD! MAKE LIGHTWORKS CRASH! IT DIE!”
  5. If I had the hard drive space and power and a nicer camera, I would work in better formats. Alas, I have what I have and endeavor to make do. Converting the sources for the ‘big project’ would require more money, more processing power, more SSDs, etc. etc.
    The moral of the story: more money, problems go away! Hooray!

Why not send both sources in sync from the NLE to the DAW to edit and make the better sound there after you have edited video (And audio both) in the NLE?

Well, I won’t disagree that Lightworks occasionally does go “LIGHTWORKS CRASH! IT DIE!” I wonder if it would do that if you weren’t using h.264, honestly; that may be why I never ran into it.

Honestly I am taking a guess that you are on a GH4 or GH5? Those are pretty nice cameras, if I am honest; I spent years shooting on a GH4 and currently shoot a step down from it actually, on a G7. Any of these are quite capable of generating good shots obviously. I would just suggest an external drive to allow you to edit less compressed clips and use a proxy workflow as needed; not even sure you will need an SSD for this, though it will speed things up obviously.

Well, the production triangle of money, quality, and time will always apply.

Seablade

@seablade “Why not send both sources in sync from the NLE to the DAW to edit and make the better sound there after you have edited video (And audio both) in the NLE?”

How do you do that? I got Ardour and Blender linked via Jack, but how about the other stuff?

Hi Fathom Story,

Still, even with messy on-location sound, maybe with a boom, a lavalier and a bleeding mic, it is better to edit all that later in the DAW. Cleaning up sound in the NLE is never good. And the second problem is: it will be destructive, meaning that later, when you import everything into your DAW for further editing (where you have better plugins, better meters, better everything for sound), if you want to alter things, often you cannot…

I understand that you need a “usable” sound for editing to work well on your video edit. In this case I would just go with the best source for the edit, and mute the others. I would not recommend applying noise reduction and other surgical, destructive audio processing in the NLE…

Then in your DAW you can still decide: do I use the lav, or the boom, etc… These are technical questions but also aesthetic choices. And these choices I would not want to make beforehand in an NLE, with the image not ready, with no ambient sound underneath and no sound design, etc. In a shared workflow, with an editor, sound editor, designer etc… I would sometimes refuse to take a pre-mixed dialog mix from the editor, because the mistakes made in that mix will be hard to fix later…

All this I say in case you want to use a DAW at all for sound editing. Of course very basic stuff, sometimes sufficient depending on the project, you can do in your NLE.

@seablade I agree, muxing the sound into the video would be best in a lot of cases and would avoid problems that could come with the rendering. On the other hand, when you need different renders to different codecs (like one render for YouTube, one for screening, a lossless version for storage, etc.) then muxing will be more trouble, I guess…

h.264: on Lightworks this just works fine with either a proxy workflow, or converting the sources to an editing codec like Apple ProRes with EyeFrame Converter or WinFF. This is kind of true for all editing software, as h264 is a delivery codec and very heavy on the CPU for editing. Performance of video editing will improve drastically when you do not edit in h264. The same goes for Blender, Kdenlive, Shotcut, etc…
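
If you prefer the command line to EyeFrame Converter or WinFF, an ffmpeg call roughly like this (the filename is just an example) should give you a ProRes 422 file, though expect it to be several times larger than the h264 source:

```
# Hypothetical filename; prores_ks is FFmpeg's ProRes encoder,
# -profile:v 2 selects standard ProRes 422.
ffmpeg -i clip_from_camera.mp4 \
       -c:v prores_ks -profile:v 2 \
       -c:a pcm_s16le \
       clip_from_camera_prores.mov
```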

Edit: fathomstory, now I see your points 4 and 5, so you’re aware of the h264 problem. In case you have the GH4: I’m sure you can record with another codec, like QuickTime, AVCHD, maybe some kind of intra codec that is nicer for editing. I don’t have the camera, but you should check these options and test…

@calimerox My understanding is that QuickTime and AVCHD are ‘containers’ and under the hood it is still H264. I have gone on the FFMPEG IRC and they suggested an opensource codec that is an industry standard, but so large that my machine cannot handle it. Any conversion tends to mean significantly larger file sizes. It is stupid that I have a camera and cannot edit a video because manufacturers and editing software developers do not communicate with one another. Now I am stuck with something that ought to be a breeze to edit, but is not. I hate them all. It seems better and wiser, based on all this input I am getting, to simply give up. What really happened is the major players just dropped support for H264 and I am left with legacy crap.

I am not cleaning up sound in an NLE (though that would be nice), rather adjusting levels and deciding which bits on which track are more useful than the other. The ‘real’ editing is still in the final mix.

My understanding is that QuickTime and AVCHD are ‘containers’ and under the hood it is still H264

I think you are right, but nonetheless NLEs sometimes behave strangely with some of them and like others more… it’s worth a try…

And yes, it can be pretty frustrating, all these codecs etc. That’s why I love sound!! :wink:

I’m not sure that dropping support for h264 is the reason; it’s generally problematic to edit with h264. To compare it with sound, it would be like editing MP3s in a DAW directly. Therefore AFAIK all workflows using h264 as an input format involve proxies (or some internal conversion like Final Cut does, which is “the same”).

BTW, which version of Lightworks are you using? With which distro? In my experience Lightworks got super stable with version 14.5 and is the best NLE out there.

Only a quick moment so I will come back later to answer more:

Not quite. MOV (commonly called ‘QuickTime’) is a container; AVCHD is a container too, yes, but it also tends to refer to a specific h.264 encoding in my experience. MOV can contain multiple different encodings: h.264 is one, and common for consumer and delivery video, but it can also contain ProRes for instance, which is much better for editing. There are lots of options out there for editing, but generally the less compression the better, which is why they take up a lot of room (though there are ways to bend that rule as well).
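
If you want to check what is actually inside one of your files, an ffprobe call roughly like this (hypothetical filename) will print the codec the container is holding:

```
# Hypothetical filename; shows the codec and profile of the first video stream.
ffprobe -v error -select_streams v:0 \
        -show_entries stream=codec_name,profile \
        -of default=noprint_wrappers=1 clip_from_camera.mov
```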

When Calimerox mentioned ProRes he was not referring to h.264.

There are many reasons why h.264 is not commonly used as an editing format. One is that it can be fairly heavy to decode; another has to do with I-frames and the fact that h.264 only writes out a full frame every X frames (for instance every 60 frames) and just records changes in between, so to decode and edit at the frame level you may have to decode all 60 of those frames, etc. It isn’t a matter of camera manufacturers and editing software developers not getting together, but a matter of compromises made when creating the cameras. There are cameras that will record in an edit-ready format, but they are much more expensive, and generally consumers don’t want to deal with the size of disks you would need to record in that format for their home movies. h.264 is a compromise for cameras like the GH4 etc. that can provide good enough results for editing without needing so much space, but that doesn’t mean it is a good codec to edit in, just that it can provide a good enough visual quality to edit.
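
If you are curious how this looks in your own footage, something roughly like this (hypothetical filename again) will list the frame types, so you can see how far apart the full I-frames actually are:

```
# Hypothetical filename; prints I/P/B for each of the first ~120 video frames.
ffprobe -v error -select_streams v:0 -show_frames \
        -show_entries frame=pict_type -of csv clip_from_camera.mp4 | head -n 120
```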

I’ll be back later, just wrapping up my dinner break now at work, sorry.

Fine, I will smash my camera to pieces. And the computer. Free at last! Sweet relief!