I recorded audio with Ardour and video with a separate camera. I used the old-fashioned clapboard method of making sure video and audio were lined up, since the audio recording started a second or two before the video recording, and I was using consumer-style equipment which did not have timecode.
So now I have a video file, which I imported into the Ardour timeline, and an audio file recorded in Ardour, which I then trimmed and moved slightly to line up with the video.
What I would like to do is trim the beginning and end of the video to get rid of some dead space (including the clapboard). Since I lose the clapboard when trimming off the beginning of the video, is there any good trick to make sure the audio stays lined up? I will probably be using kdenlive or similar to trim the video, so do I just have to keep track of how much I trim off, then trim the same amount from the Ardour timeline?
This seems like it would be pretty trivial with timecode; is there any way to retroactively add timecode to the video file and to the Ardour timeline? I think I can generate timecode with some of the x42 tools, mux that with the original video file, and then when I trim in the editor, hopefully the timecode will still be readable. Seems cumbersome; is there a better way that I'm just not seeing?
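Even without embedded timecode, the "keep track of how much I trim" bookkeeping boils down to simple timecode arithmetic. A minimal sketch, assuming non-drop-frame SMPTE-style timecode at an assumed frame rate (the function names and the 25 fps rate are my own, not anything from Ardour or the x42 tools):

```python
# Convert between "HH:MM:SS:FF" timecode and seconds, so an amount
# trimmed in the video editor can be applied to the Ardour timeline.

FPS = 25  # assumed non-drop-frame rate; use your camera's actual rate

def tc_to_seconds(tc: str, fps: int = FPS) -> float:
    """Parse 'HH:MM:SS:FF' into seconds."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return hh * 3600 + mm * 60 + ss + ff / fps

def seconds_to_tc(seconds: float, fps: int = FPS) -> str:
    """Format seconds as 'HH:MM:SS:FF' (rounding to whole frames)."""
    total_frames = round(seconds * fps)
    ff = total_frames % fps
    ss = (total_frames // fps) % 60
    mm = (total_frames // (fps * 60)) % 60
    hh = total_frames // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

# If 3 seconds and 10 frames were trimmed from the head of the video,
# trim the same amount from the start of the Ardour session:
trimmed = tc_to_seconds("00:00:03:10")  # 3.4 seconds at 25 fps
```

The same conversion works in the other direction when the editor reports the trim in seconds and you want a frame-accurate number to write down.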
Another approach is to set the Session Start and End markers.
When you export the video from Ardour via Session > Export > Video, there is an option to cut the video on export: Range: from session start to session end marker.
What is ardour doing cutting video? That is not its job. Do the union representatives for kdenlive and openshot know about this?
Well, Ardour delegates this to ffmpeg, which is on the board of directors of said union.
You can also print the command that is used, in case you want to tweak it and run it manually.
It’s also possible to add black frames in case the session is longer than the video, but that’s as far as it goes: no editing, only a simple start/end trim.
Otherwise, your approach to “keep track of how much I trim off” is good.
Here is a common workflow for video + audio: always record audio with the camera as well. This camera audio will later be the timing reference for the full-quality audio recorded with another device. Edit the video with the camera-recorded audio in a video editor. Only after you have the final version of the video, import it and the audio edited with it into a DAW. Then import your full-quality audio into the DAW; this audio will be out of sync.
Now use the waveforms of the edited and the full-quality audio to visually line up your full-quality audio. This is easier than it seems. First you roughly line up the bumps in the waveforms while zoomed out, then zoom in and adjust more and more precisely. In the end you will find that you can line up individual waves quite accurately. Then cut the full-quality audio at the next video edit point and repeat the process.
This is how it was done at the company I worked at before we got field recorders supporting external sync / timecode. It is still the workflow for syncing audio recorded without timecode. And again, this is far easier and quicker than it seems at first.
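The visual lining-up described above can also be cross-checked numerically: the lag that maximizes the cross-correlation between the camera track and the full-quality track is the offset to apply. A small pure-Python sketch under toy assumptions (real sessions would use decimated waveforms at matching sample rates; `best_lag` is my own hypothetical helper, not a DAW feature):

```python
# Estimate the offset between a reference (camera) track and a
# full-quality track by brute-force cross-correlation.
def best_lag(reference, other, max_lag):
    """Return the lag (in samples) of `other` relative to `reference`
    that maximizes their correlation. A positive lag means `other`
    is delayed relative to `reference`."""
    best, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i, r in enumerate(reference):
            j = i + lag
            if 0 <= j < len(other):
                score += r * other[j]
        if score > best_score:
            best, best_score = lag, score
    return best

# Toy example: `other` is the same take delayed by 3 samples.
sig = [1, 3, 2, -1]
reference = [0, 0] + sig + [0, 0, 0, 0]
other = [0] * 5 + sig + [0]
offset = best_lag(reference, other, 5)  # expected: 3
```

In practice the same idea (done with FFT-based correlation for speed) is what automatic audio-sync features in NLEs are built on.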
That seems like unnecessary additional work, though. Nowadays in pretty much all NLEs you can sync your high-quality audio to the scratch in-camera-recorded audio within the NLE and work directly with the high-quality audio as you edit the video. If you want/need to work with the audio in a DAW after you cut the video, you can export the video and audio tracks from the NLE and do additional mixing, EQ, mastering, etc. in a DAW. And of course some NLEs now have DAW capabilities built in, such as DaVinci Resolve.
That is true if you use a video editor that supports syncing audio tracks. Remember to define long handles for the exported audio if you edit the full-quality audio in a video editor. This lets you move the edge of an audio region to expose audio before / after the cut point, up to the handle length.