I recently tried to run synergy/qsynergy on my Linux DAW while running JACK and Ardour. I got a few xruns. When I quit qsynergy, the xruns ceased to be a problem. There was little difference in my system load with or without qsynergy.
I have a hardware KVM switch, but it would be nice to use synergy to move my mouse to/from my DAW and another Linux machine. Is there something about using a network service that causes JACK to hiccup? I read something once about killing network services to streamline a system for audio, but I figured that was old news and wouldn't apply to a recent setup.
As long as you have properly set up realtime permissions, you shouldn't have to worry about it. I've done that, and in fact I still have the synergy server running on my Linux DAW so I can use it with my laptop when needed, and I haven't had much of a problem.
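For reference, on a typical PAM-based distro "realtime permissions" means granting your audio group realtime priority and memory-locking rights via a limits drop-in. A common sketch (the exact path, group name, and limits vary by distro, so check yours):

```
# /etc/security/limits.d/audio.conf  (path and group name vary by distro)
@audio - rtprio 95
@audio - memlock unlimited
```

Log out and back in after adding yourself to the group for the limits to take effect.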
I wonder if the xruns I experienced were because of running JACK at 96 kHz and using qsynergy. I never had problems with 48 kHz, although I have not yet tried running JACK at 48 with qsynergy running because my current project is in 96. But I rarely get xruns at 96 without running qsynergy. I got one earlier today with qsynergy out of the picture - not sure why.
Running at 96, I have a latency of 64 ms. Is that good? I vaguely remember having a 64 ms latency with the same hardware under Windows, but that was at 48 kHz. At 96, I increased the number of periods because I was getting xruns whether or not qsynergy was running.
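For a sanity check on those numbers: JACK front-ends like QjackCtl report buffer latency as frames-per-period × number of periods ÷ sample rate. A quick sketch (the 2048 frames / 3 periods figures below are my assumption; plug in whatever your setup actually uses):

```python
def jack_latency_ms(frames_per_period: int, periods: int, sample_rate: int) -> float:
    """Buffer latency as JACK front-ends (e.g. QjackCtl) report it."""
    return 1000.0 * frames_per_period * periods / sample_rate

# Assumed settings: 2048 frames/period, 3 periods, 96 kHz -> 64 ms
print(jack_latency_ms(2048, 3, 96000))   # 64.0
```

So halving the sample rate (or halving frames/period) halves the reported latency, which fits the 48 kHz figure you remember.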
Would JACK versus jackdbus make any difference? I’m not really familiar with jackdbus.
No, it really wouldn't make a difference (jackd vs. jackdbus). I think I tested at 96k, but typically I work at 44.1 or 48, not 96. I will probably test out 96 when I work on phase 2 of audio for Tube, though.
Latency, and low latency in particular, is overrated for the vast majority of work. You really only need to worry about low latency in two cases: (1) you are playing MIDI instruments from the computer, or (2) you need to mix live with the computer (say, for headphones fed from other live inputs, or for a live show). Other than that, latency compensation should be able to address most needs.
So what does that mean for monitoring the mix while adding new tracks (guitar, vocals, etc.)? Is that where I need low latency, or will latency compensation do?
Latency compensation will do what you want there, from your description. Essentially it says: I know it takes this long for me to play a sound and this long for me to record a sound (you have to provide the numbers to JACK, IIRC), so I am going to offset the recording to line them up so that they are in time again.
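A toy illustration of that offsetting idea (the frame counts are made up, and real DAWs do this internally per-port, but the arithmetic is the same):

```python
def compensate(recorded, capture_latency, playback_latency):
    """Align a fresh take with the backing track by dropping the frames
    that correspond to the round-trip delay (toy model, mono samples)."""
    offset = capture_latency + playback_latency  # total frames the take lags by
    return recorded[offset:]

# Hypothetical take that lags the backing track by 2 + 3 = 5 frames:
take = [0, 0, 0, 0, 0, 1, 2, 3]   # delay padding, then the actual audio
print(compensate(take, 2, 3))      # [1, 2, 3]
```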
Makes sense to me. Thanks!
If you're adding tracks and the singer or player is monitoring their own instrument or vocal input through Ardour, then sure, latency does matter, in the sense that there's a real delay between making a sound and hearing it in your phones. Latency compensation will make the new track line up with the previous recording, but too much delay in monitoring their own sound may be distracting for the musician.
If you can get round this with hardware monitoring (e.g. my Delta 1010 lets me do this with envy24control) then I agree latency stops mattering very much. I’ve never got into the habit of doing monitoring that way: I suppose I should try it sometime!
I just set the frames/period to 4096 because I was getting an occasional xrun while Ardour was sitting around doing nothing. So far, no more xruns. Latency is 128 ms, but as you all have said, this really doesn’t mean much if you have a good setup.
I think I get it now about hardware vs. Ardour monitoring. I always use HW monitoring; I've never monitored anything through Ardour. And while I really appreciate all the plugin effects available, my goal is to move away from those, which means even less load on my system.