Android's decision not to use ALSA came about because the Android team didn't understand ALSA. It had nothing to do with ALSA's actual design or implementation.
I explain on the other slides precisely why the OSS API must die. It isn't about a "hatred" of OSS or where it came from; it is about the nature of the API that OSS presents (also explained in the slides).
I have no personal investment in JACK. I ceased my involvement with JACK over a year ago, and have actively worked to encourage Ardour users to consider ceasing their use of JACK. I also encouraged the development of the non-JACK backends for Ardour, as well as designing and implementing the abstract audio/MIDI engine in Ardour precisely so that we could move away from JACK itself. JACK is pretty cool (if I say so myself), but it is unnecessary and inappropriate for most DAW users.
I know the people involved with the Google Android audio stack development. We have had discussions. I believe (strongly) that they have made a fundamental design error, and have explained to them why I think this. There is one audio/MIDI API that is used in a consistent form across mobile and desktop platforms, and that is CoreAudio (that said, the iOS use of it is frequently VERY VERY different from the macOS use). That API (both versions) looks absolutely nothing like OSS, and everything like a pull-model (callback) API such as ASIO, JACK, etc. I know why the Google team did what they did, and I think it is technically wrong; it explains (in part) why Android is so far behind iOS for pro/prosumer/creation music apps. (The bigger reason is related to Android audio hardware, which is incredibly non-standard and generally incapable of low-latency operation at the hardware level.)
You apparently don't understand the difference between the push and pull models I've described at all. Let me try one more time: in a push model, the application decides how much audio data it wants to write/read to/from the device, and when, and the audio stack has to make that happen. In a pull model, the device decides how much data needs to be read/written, and the application(s) have to deal with that. The pull model is used by all pro-audio/music creation APIs on every platform. No exceptions: ASIO, CoreAudio, WASAPI, JACK, and several more.
I have never claimed that JACK is required for pro-audio. My point has been about the requirement for a pull model, and this is borne out by all such software and frameworks and platforms.