I saw Ardour a year or two ago and I’d like to explore it some more. Just had a few quick questions please. =)
Sometimes I might want to use a “rasterized”/rendered approach (e.g. applying a filter and permanently creating a new audio fragment) rather than a live/applied-on-the-go filter or automation-based approach.
– Is this possible? And if so, will my 4 cores all light up to 100% when applying most filters for which parallelization is simple? (e.g. echo, reverb…)?
– Do I have to wait in realtime to render/compile some effects to a new track, or will it go as fast as my computer can handle?
Is realtime required?
Are there binary builds available for Suse and/or Debian of the 3.0/latest branch?
If I like to use convolution in my work, does Ardour have a built-in convolution plugin, or is jack_convolve the way to go? Or other suggestions?
If I like to use samplers in my work, is there a nice way to integrate a sampler with Ardour? Can Ardour 3.0 with its MIDI support be used as a sampler at this point, or is MIDI currently best suited for automation (can one select “MIDI frequency” as an automation input to a filter parameter?). In the latter case, might someone please recommend a sampler that integrates well with Ardour?
I saw an interesting section in the manual about tapping into an online free music-clip library with tagging. I’ve always wanted something like this!–and would like to try it (and the per-user clip tagging sounds amazing in theory too). However it says Ardour must be compiled with “FREESOUND=yes”. Is this a default option?
Thanks very much.
to create on-disk versions of treated regions, you want “bounce” (right click on a region). this will re-write the region to disk with any FX from the track, and add it to the region list.
all processing in ardour is done in realtime. if you ask for too much, JACK will eject ardour for being too slow. the JACK DSP load provides an indication of how much work is going on - it’s a percentage of the available time (and it is the number for all JACK clients, not just ardour).
ardour’s DSP execution is not multithreaded at present. the program itself has many threads, but all audio currently happens in 1 thread. if you only have 2 cores, i still believe that this is good design - it leaves the 2nd core free to handle the GUI, disk i/o, MIDI i/o and so on. but as 4+ cores start to become more common, ardour will get parallel DSP threads. not quite sure when.
“realtime” means two things. running JACK with -R is almost always the right thing to do. however, this does not need a “realtime kernel” or “RT-enabled kernel” - using one simply makes behaviour more reliable and low latencies possible. recent (2.6.26+) kernels are nearly as good as an RT-enabled one anyway.
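As a concrete illustration of running JACK with -R: the device name, sample rate, and buffer size below are example values, not recommendations - adjust them for your hardware.

```shell
# start JACK with realtime scheduling (-R), ALSA backend;
# hw:0, 48 kHz and a 256-frame period are example values
jackd -R -d alsa -d hw:0 -r 48000 -p 256 -n 2
```

Smaller -p values lower latency but increase the chance of xruns on a non-RT kernel.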
i do not recommend any testing or evaluation of ardour 3 at this point. it is still very raw, and subject to a lot of deep work.
Answers to your questions:
I believe so.
It can go as fast as your pc can handle. If your machine can’t keep up during live playback, bouncing might even take longer than realtime, but the result will sound right.
RT kernel is not required, but a very good recommendation. Fewer or no xruns as a result.
No binaries of the development branch, as it is not deemed stable enough yet. You will find binaries after an official release though.
LinuxSampler. That is the main idea behind the midi support in Ardour as far as I can tell.
No idea about the freesound… check the scons build file, or compile with that option.
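If you do build from source, the option the manual mentions would be passed on the scons command line, something like this (assuming the flag name quoted in the manual is current):

```shell
# from the top of the ardour source tree:
scons FREESOUND=yes
```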
Thank you both very much for responding. (Was very insightful and helpful.)
However I’m still confused about…:
“all processing in ardour is done in realtime”
Would this mean, unfortunately, that bouncing cannot be done faster than realtime?
bouncing will probably be faster than realtime - there are exceptions, but most of the time it will be faster.