ALSA vs JACK MIDI

I’m curious to understand the performance difference between ALSA and JACK MIDI. At the bottom of https://github.com/jackaudio/jackaudio.github.com/wiki/FAQ_and_Myths it says (I think) that ALSA MIDI may be affected by jitter in one of the hardware clocks on a computer (hence I assume that ALSA MIDI uses one of these clocks for its timebase), but that JACK MIDI is not affected by this. Naively, I thought JACK uses ALSA, so how could JACK MIDI be better than ALSA? Also, what might be the typical size of the ALSA jitter? It sounds like the amount of ALSA MIDI jitter may depend on the hardware used, since the jitter is due to the realtime clocks.

JACK MIDI is internal to JACK, is entirely synchronous (messages generated by client A are given to client B during the same process callback) and delay-free unless you have signal feedback loops. It is much less like any native MIDI system (ALSA on Linux, CoreMIDI on OS X, etc.), and more like a highly customized messaging system designed specifically for low-latency, real-time music creation … which of course is precisely what it is.
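The same-cycle delivery described here can be sketched as a toy model. This is not the real JACK API (which is C); the port/client names are made up purely to illustrate how a synchronous process graph differs from a queued MIDI system, where A's message would only reach B in the next cycle:

```python
# Toy model of JACK's synchronous process cycle (NOT the real C API).
# The server calls every client's process callback once per cycle, in
# dependency order, so a message written by client A is visible to
# client B within the same cycle.

class Port:
    """A MIDI port: holds the events written during the current cycle."""
    def __init__(self):
        self.events = []

def run_cycle(nframes, ports, clients):
    """One process cycle: clear all ports, then run callbacks in graph order."""
    for p in ports:
        p.events.clear()
    for process in clients:          # already sorted topologically by the server
        process(nframes)

# Hypothetical clients: A generates a note-on, B consumes it.
a_out = Port()
received = []

def client_a(nframes):
    a_out.events.append((0, bytes([0x90, 60, 100])))  # note-on at frame 0

def client_b(nframes):
    received.extend(a_out.events)    # sees A's event in the SAME cycle

run_cycle(256, [a_out], [client_a, client_b])
print(received)  # the note-on generated earlier in this same cycle
```

The event timestamps are frame offsets within the cycle's buffer, which is why there is no clock-derived jitter between clients inside the graph.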

ALSA jitter is definitely hardware dependent, but it is not significantly affected by the system clock on any modern Linux kernel.
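As a rough way to put a number on jitter: log the arrival timestamps of a steady event stream and look at the spread of the inter-event intervals around their nominal value. A small sketch, using made-up timestamps (a 10 ms nominal interval) rather than a measurement of any real driver:

```python
import statistics

# Hypothetical arrival timestamps (ms) of events sent every 10 ms;
# the deviations stand in for driver/clock jitter.
timestamps = [0.0, 10.3, 19.8, 30.1, 40.4, 49.7, 60.2]

intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
jitter_ms = statistics.pstdev(intervals)          # spread around the mean interval
peak_ms = max(abs(i - 10.0) for i in intervals)   # worst-case deviation

print(f"mean interval: {statistics.mean(intervals):.2f} ms")
print(f"jitter (std dev): {jitter_ms:.2f} ms, peak deviation: {peak_ms:.2f} ms")
```

Since ALSA jitter is hardware dependent, a measurement like this is only meaningful for your own interface and kernel configuration.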

Of course, when JACK MIDI involves hardware I/O, the weakest link in the chain is the one that matters, which will again be the ALSA driver and the hardware.

Thanks for all the details Paul.

To add to what Paul is saying: if I use my AKAI LPD8 pad controller (http://www.akaipro.com/product/lpd8) and JACK’s MIDI system, there is visibly less latency than using ALSA’s MIDI sequencer.
I strongly prefer JACK MIDI because of this.

TW: JACK cannot reduce the latency of the ALSA sequencer because it uses the ALSA sequencer (unless you are using JACK2 with its “rawalsamidi” MIDI bridge). If you use a2jmidid or the built-in support of JACK1, you are still using the ALSA sequencer layer.

Of course I am using jack2. On a modern multi-core system, why anyone would not want to run jack2 is beyond me. Especially seeing how most distros don’t even package jack1 any more.
Of course I am using raw and not seq.

JACK2’s parallelism is of no benefit in many common workflows. This is widely misunderstood. See https://github.com/jackaudio/jackaudio.github.com/wiki/Q_difference_jack1_jack2

SMP support is not always as valuable as you would think. If your applications are chained INPUT --> A --> B --> C --> OUTPUT, then JACK will not be able to utilize multiple processors. However, if your applications are independently generating audio to the OUTPUT, that is, when "parallel" sub-graphs exist in the global graph, then they can be run in parallel.
This is exactly the functionality I want.
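The point about sub-graphs can be made concrete with a toy scheduler: group the clients into "waves" whose inputs are already satisfied; only a wave with more than one client can occupy a second core. This is illustrative only and has nothing to do with JACK2's actual scheduler; the client names are invented:

```python
# Toy leveling of a client graph into parallel "waves". Each wave holds
# clients whose dependencies all completed in earlier waves, so a wave's
# clients could run on separate cores. (NOT JACK2's real scheduler.)

def waves(deps):
    """deps: client -> set of upstream clients. Returns waves in run order."""
    done, result = set(), []
    while len(done) < len(deps):
        ready = sorted(c for c in deps if c not in done and deps[c] <= done)
        result.append(ready)
        done.update(ready)
    return result

# Serial chain A -> B -> C: every wave has one client, a second core is idle.
chain = {"A": set(), "B": {"A"}, "C": {"B"}}
print(waves(chain))        # [['A'], ['B'], ['C']]

# Two independent synths feeding a mixer: they land in the same wave.
parallel = {"synth1": set(), "synth2": set(), "mixer": {"synth1", "synth2"}}
print(waves(parallel))     # [['synth1', 'synth2'], ['mixer']]
```

In the first graph every wave has exactly one client, so SMP buys nothing; in the second, synth1 and synth2 can process simultaneously.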

In relation to https://github.com/jackaudio/jackaudio.github.com/wiki/Q_difference_jack1_jack2 :
Is there a metadata API for jack2 now? The comparison page lists “Supports metadata API” for jack2: is that a yes or a no?
And have the pulseaudio and dbus patches for jack1 been accepted? Because dbus is a pretty good reason to prefer jack2 if not.

Last time I tried it, jack2 with dbus support wouldn’t start from a text console because it needed xorg. Making a GUI-less command line program like jackd rely on X is nuts IMO. In theory it’s possible to make it work, but it’s a PITA. I use jack2 without dbus for this reason.

There will never be D-Bus support built into JACK1’s server. The patch was never accepted because I don’t agree that interactions with D-Bus are the job of the server. There is work that happens on a very intermittent basis to enable JACK1 to interact with PulseAudio via D-Bus. I have no idea when it will land. It is somewhat functional already. I don’t consider it very important because I don’t consider device sharing with PulseAudio to be a sensible way to work when doing the sort of stuff I’m interested in.

JACK2 does not support the metadata API. It has headers for it, but no implementation.

Last time I tried it, jack2 with dbus support wouldn't start from a text console because it needed xorg.
I came across this when creating a systemd service for jack2. It was a matter of downloading the source rpm, stripping out the dbus options and rebuilding the rpm. It wasn't too bad.
I don't consider device sharing with PulseAudio to be a sensible way to work when doing the sort of stuff I'm interested in.
Unfortunately, controlling multiple devices appears to be more important than low-latency audio to the main desktop distributions. They have (erroneously, in my opinion) gone with PulseAudio to do so. I personally am looking to Android to make a move towards low-latency audio which the Linux desktop can transition to. Low-latency audio is one of the areas where Android is miles behind Apple's iOS handsets, and it makes sense for Android to improve their audio stack in order to compete. I read that Samsung have added JACK support to their new handsets, and while that's great, I don't think it is the answer for consumer-grade hardware.

There is essentially no difference in jitter/latency between ALSA and JACK MIDI. ALSA provides the driver for your USB interface, and JACK runs on top of that. JACK does not increase the latency, and it adds the capability of interconnecting audio/MIDI programs within your computer (actually even on other computers if you use netjack). See:
http://libremusicproduction.com/articles/demystifying-jack-–-beginners-guide-getting-started-jack