Please consider making Ardour accessible to blind people.
I am aware that this topic has already been discussed, but accessibility is a very important topic in itself, and increasingly so.
Note that I have heard the argument that Ardour's interface has evolved in a way that makes its UI highly visual in nature and impossible to make accessible; however, I believe this argument is invalid at its core.
The reason is that any UI is visual in nature in many ways. What a screen reader user perceives is very much disconnected from what a sighted user sees, both in how data is presented and in how controls/UI elements are operated. If something is not a freeform graphic but has a meaning, that meaning can (by definition) be exposed, often without building a completely separate user interface that only blind people would use.
Sometimes that exposure is not ideal, but non-ideal accessibility is far better than no accessibility.
I am fairly sure ATK interfaces can be implemented for completely custom interface elements, fully disconnected from GTK. It is actually you who fully manages the representation of those elements as exposed to screen readers, and that representation might have nothing to do with their visual appearance or behavior. We do not have to be able to imagine things the same way you, sighted people, do; it just needs to work for us so that we can use the software.
Ardour only uses GTK for layout (and some checkboxes in the preferences dialog). Everything else in the Editor (audio regions, timeline), the Mixer (meters, faders, etc.) or the toolbar (bindable buttons) does not use GTK at all; those are just items drawn on a canvas.
The closest you could probably get is to pass tooltip information to a screen reader, but that rules out most of the Editor, too. Not to mention plugins: VST, LV2, etc. are not accessible either.
It is probably not what you want to hear, but I suggest looking into Ecasound or NAMA, which are commonly used by visually impaired users with a braille display, or alternatively running Reaper on Windows (where it is accessible).
Ardour's GUI has indeed evolved in that direction. It is not an argument that can be discussed; it is already history, just read the source code.
Other UIs, however, are more accessible. You can control much of Ardour with physical control surfaces, or remotely (OSC, websocket), and there is also an interactive Lua command-line interface.
I am pretty sure that controls drawn on a canvas can also be made accessible. I would have to verify that, but I am fairly certain there does not have to be any connection between the visual tree and the accessible tree, if that is desired.
Well, that is already history and it evolved in that direction. However, although this raises the bar because some things need to be built from scratch, it does not preclude accessibility. And I mean the case where the visual interface itself would not be redesigned.
Clarification: by "from scratch" I did not mean redesigning the interface to use GTK again; I meant implementing accessibility itself from scratch on these canvas-drawn elements, without help from GTK.
Clarification request: do you mean that accessibility for Ardour cannot be implemented because, instead of using GTK widgets, it draws items on a canvas that is not visible to a screen reader, or because of the nature of the items themselves?
I have created a test program in GTK3 to prove that accessibility for canvas items is possible in general. The experiment succeeded.
Basically, as I am blind I can't meaningfully draw on a canvas, so I made a GTK program with a window containing an empty canvas. It would not differ from a situation where the canvas contained items, except that normally you would support clicking on those items, and some other interactions; this program does not even have that.
But when it runs, the screen reader sees the window as having two buttons labelled "quit" and "test": it sees their positions, can move between them with the Tab key, can press them with Space or Enter, and even via the screen reader's built-in facilities. It presents them as lying next to each other horizontally. All that despite the fact that they do not exist at all from the GTK perspective, nor from the X/Wayland perspective (the canvas has an X window, but the buttons do not). I handle keyboard activation by registering key events on the canvas itself and routing the click.
That proves you can implement accessibility for Ardour with the current interface, including doing it in a way that does not necessarily agree with a component's visual representation or meaning, considering that it can expose things which do not really exist at all.
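The routing described above can be sketched roughly like this. This is only a Python stand-in for the GTK3/C experiment; all class names are hypothetical, and only the key names (`ISO_Left_Tab` is the GDK keysym for Shift+Tab) come from GTK:

```python
# Sketch: a focusable canvas routes key events to virtual (non-toolkit)
# button objects that exist only in our own data structures.

class VirtualButton:
    """A button that does not exist as a toolkit widget."""
    def __init__(self, label, on_activate):
        self.label = label
        self.on_activate = on_activate

class Canvas:
    """Stands in for a focusable drawing area with a key-press handler."""
    def __init__(self, items):
        self.items = items
        self.focus = 0  # index of the virtual item that "has focus"

    def handle_key(self, key):
        # Tab / Shift+Tab move the fake focus between virtual items;
        # Return / space "click" the focused one.
        if key == "Tab":
            self.focus = (self.focus + 1) % len(self.items)
        elif key == "ISO_Left_Tab":  # Shift+Tab keysym
            self.focus = (self.focus - 1) % len(self.items)
        elif key in ("Return", "space"):
            self.items[self.focus].on_activate()

pressed = []
canvas = Canvas([
    VirtualButton("quit", lambda: pressed.append("quit")),
    VirtualButton("test", lambda: pressed.append("test")),
])
canvas.handle_key("Return")  # activates "quit"
canvas.handle_key("Tab")     # focus moves to "test"
canvas.handle_key("space")   # activates "test"
```

The point is that focus and activation are handled entirely by the canvas owner, which is why the screen reader side can be kept in sync with it.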
Oh, I should have elaborated. Ardour does not use GTK's canvas because it was not sufficiently performant with a couple hundred thousand items (many MIDI notes on many regions).
Ardour has a custom canvas library: https://github.com/Ardour/ardour/tree/master/libs/canvas
There is no theoretical issue with accessibility, but rather a practical one.
Thanks very much for the reply. The fact that you have your own custom canvas does not change the relevance of my experiment, especially not if the canvas itself is also a GTK widget, and a quick look at the code told me it is; am I right? I saw something inheriting from GtkEventBox. My experiment could just as well have used a direct GtkWidget derivative, but because I have never done any real GObject/GTK work and have not written C for years, I had a reason to be lazy. If I am wrong and it is not a GTK widget, how do you hook into the GTK widget hierarchy that you have above the canvas?
I will describe the experiment further below. I can upload it somewhere, but the code is a bit chaotic and does not even have proper error handling/memory management, so it might need a bit of tidying up.
So, here is what I’ve done:
- A GTK window with a GtkDrawingArea; the drawing area is focusable and reacts to keyboard events. I have two non-GTK objects representing buttons, and the key handler routes events to them, so Tab/Shift+Tab move focus between the buttons inside the drawing area, Space and Enter/Return activate them, and they have fake click handlers. Nothing is displayed on the canvas and there is no mouse-click reaction, only because it would not make sense for me to do things I cannot even see, and it would require a lot of help from other people.
- The canvas is actually a subclass of GtkDrawingArea, made so that it can carry its own GtkWidgetAccessible subclass, just as every widget carries some subclass of AtkObject. However, because I wanted a screen reader to see it as a layout box with two buttons, I made the accessibility object swallow any information about the canvas being focusable, and pretend it has two child widgets. Note that what the screen reader sees, what a sighted user sees, and what things really are is already desynchronized here, which proves a point by itself.
- The child widgets are the two button objects mentioned above. They also have an accessibility object, but one descending directly from AtkObject. The child handling of the pseudo-container canvas exposes them to the screen reader, and GTK is not involved from that point on at all, except for information descending from the parent, such as focus and key handling.
- The pseudo-button's accessibility object exposes the button label, the fact that it is a button, and generally everything, including its fake position on the canvas and the fact that you can click it to cause an action. It also exposes focus, or the lack of it, when you Tab around or switch windows. Because the buttons have no real identity in GTK, I hook into the parent widget when needed for things like focus information.
- It goes without saying that if I could do buttons without buttons, I could also draw things differently from what the screen reader sees; I am not limited to representing the visual widget tree. Of course some relation to real widgets or items is needed for things like focus, which is not itself an accessibility concern, but crafting special accessibility modes or views is probably also possible if wanted or needed, although that would rather be a last resort.
Note I am aware that if your canvas contained normal buttons, you probably would not need the canvas at all, and the actual items on it are probably more exotic and harder to expose to a screen reader. However, I think it is better to tackle one problem at a time. The first one is: is it technically possible to expose these items to a screen reader in the same or a similar way GTK widgets are exposed, without changing the nature of those items? The answer is yes. Also, you mentioned some toolbar buttons which are also on a canvas, so there are probably lots of things using the canvas that can be exposed without inventing anything. And there are some things where exposing them as buttons/checkboxes, even where they are not such on screen, would make perfect sense. We really do not care about the visual look and feel, which we cannot even experience. It is just that this might need lots of custom code and custom behavior which might not always be self-explanatory.
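To make the shape of the pseudo-container idea concrete, here is a rough Python sketch with plain classes standing in for AtkObject/GtkWidgetAccessible. All names, roles, and layout values here are made up for illustration, not taken from the actual experiment:

```python
# Sketch: the canvas's accessible peer reports virtual children that
# the toolkit knows nothing about; a screen reader walking this tree
# would see two buttons even though no toolkit widgets exist.

class Accessible:
    """Minimal stand-in for AtkObject: role, name, position, children."""
    def __init__(self, role, name, extents=(0, 0, 0, 0)):
        self.role = role
        self.name = name
        self.extents = extents  # fake (x, y, width, height) on screen
        self.children = []

    # The child-handling API is what lets the pseudo-container expose
    # objects with no widget identity behind them.
    def get_n_children(self):
        return len(self.children)

    def ref_child(self, i):
        return self.children[i]

def make_canvas_accessible():
    # The canvas is exposed as a plain container, not as a focusable
    # widget; its children are the pseudo-buttons, reported as lying
    # next to each other horizontally.
    canvas = Accessible("filler", "canvas")
    canvas.children = [
        Accessible("push button", "quit", extents=(0, 0, 80, 30)),
        Accessible("push button", "test", extents=(90, 0, 80, 30)),
    ]
    return canvas
```

The accessible tree here intentionally diverges from both the widget tree (one drawing area) and the visual content (an empty canvas), which is the desynchronization described above.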
Just in case, I have put this, with comments, into a GitHub repository. I have not tidied the code up, but, well…
While ArdourCanvas::GtkCanvas does inherit from EventBox, nothing else about the canvas (other than the event data structures) is related to Gtk. The items on the canvas are not Gtk objects in any way.
In order for accessibility to be useful, there must be some way for the a11y layer to get various information about, e.g., the type of thing on the screen that the mouse pointer is currently inside.
The design and API of the canvas items does not provide this.
Does that mean it would be impossible to do? Obviously, no. Does it mean that this is a major undertaking? Yes.
My impression from reading some answers, including answers from IRC (some previous conversations), is that adding accessibility to Ardour is believed to be either impossible or to require something as drastic as a complete or partial interface rework; at best, so hard as to be impractical. The argument that Ardour's interface evolved in a way incompatible with accessibility reads that way. I never really heard arguments about the required time and effort, more like "it makes no sense because it is not possible to do it properly at all with the current design".
My point is that yes, it is a major undertaking, probably requiring much work and changes throughout the codebase, made more difficult because such things were not added from the start, but not necessarily difficult to the point of being impractical.
The fact that you use a canvas makes it more difficult for the reason you have stated: you have to expose a11y information from scratch, for both the canvas and the items inside it. But it does not necessarily require changing the whole design. They can still be non-GTK items on a canvas, even if you need API changes, possibly new subclasses, depending on the case. Or at least such a direction can be researched, because it is possible to expose these items, as my example above shows. I am not even sure how much of the API would have to change beyond mostly additive changes that allow items to expose this information without implementing it from scratch again and again, except in cases where the accessibility implementation would substantially diverge from the visual representation, which can also be done if needed. Keyboard focus is also needed, and that is not an a11y-layer concern.
I asked what the canvas derives from to check whether my theory is right. Because you are in GTK land, to attach an accessibility tree related to non-GTK items you need some GTK widget ancestor to anchor it onto; you cannot just make it magically plug itself into anything, and the root object is a GTK widget. If you were not using GTK, you would implement everything from scratch. In the GTK case you would use ATK, but you do not have to turn anything which is currently not a GTK widget into one, nor change the nature of custom GTK widgets. In the non-GTK case, you could use ATK or interface with the a11y system directly by exposing D-Bus objects; I have to try that, by the way, considering I just made an ATK example. That seems to be the newly recommended way to add accessibility when you are not anchored onto an existing toolkit which already has such support.
It's not so much that we believe it to be impossible or to require a complete or partial interface rework. Rather, it means adding a11y to a minimalistic (canvas-based) toolkit that has no concept of it at present.
I have no objections to someone working on this and offering us patches etc., but the level of work required is not something that any of the existing developers could sensibly allocate much if any time to.
Whether or not we would accept such a patch would depend greatly on the technical design and the degree to which it impinges upon and/or constrains the current and future evolution of the canvas.
I continue to believe that this is a fool’s errand, but I am not sight-impaired and have zero experience with interacting with computers using the tools familiar to those who are, so ultimately I leave the judgement about the wisdom of attempting such a thing to those who are.
The level of work required is huge if you consider making Ardour fully accessible; that is probably right.
My intention is to show that it might be a bit less involved than it is believed to be, at least on the toolkit side. It is more involved in other areas, like making sure the accessibility design itself makes sense: how to represent elements of the interface, and the actual keyboard interaction for specific elements, matters more than how to design the canvas/item API so that exposure is technically possible, especially if your toolkit is minimalistic (does that mean fewer moving parts?). Exposing a widget to accessibility technologies generally just requires implementing ATK interfaces: basically exposing a tree synchronized with the physical widgets/items (unless you want it desynchronized) and sending events, done in a way that lets other items do the same. Side note: we mostly don't use a mouse.
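As a sketch of that "tree plus events" contract: when an item's exposed state changes, its accessible peer notifies listeners, which in ATK would be signals such as "state-change" or "focus-event". The Python names below are hypothetical stand-ins:

```python
# Sketch: an accessible peer that emits a notification when the state
# it exposes actually changes, mirroring how ATK implementations emit
# state-change events for assistive technologies.

class AccessiblePeer:
    def __init__(self, name):
        self.name = name
        self.focused = False
        self.listeners = []

    def connect(self, callback):
        self.listeners.append(callback)

    def set_focused(self, focused):
        # Notify only on an actual change; redundant updates are dropped.
        if focused != self.focused:
            self.focused = focused
            for cb in self.listeners:
                cb(self.name, "focused", focused)

events = []
peer = AccessiblePeer("fader 1")  # hypothetical item name
peer.connect(lambda *args: events.append(args))
peer.set_focused(True)
peer.set_focused(True)  # no duplicate event is emitted
```

Keeping this event plumbing in one shared place is what would let many different canvas item types expose dynamic state without reimplementing it each time.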
This work can be split into smaller items and done gradually. Some things are easier to do than others, like the above-mentioned toolbars, for example?
I believe this can at least be considered. It is hard to expect such a huge effort to be an item for a single release, but split over some period of time it is more realistic, in my opinion. At least then there is time to send feedback before you go too far in the wrong direction, assuming anyone would be testing. I am generally not involved in any kind of sound production, so I am not the best test target.
OK, substitute "cursor" for "mouse pointer".
Not totally sure that's true. The current canvas is not expected to evolve much more, certainly not in ways related to a11y. There are already 23 item types; we may occasionally add more, but that is the core set. Buttons and dropdowns are currently not typically part of the canvas, but that is a todo item (albeit with low priority).
I didn't really get how that relates to the specific quote in this message; maybe I misread. Anyway, this is still an open question: would the canvas need to evolve that much to add accessibility? It is far more about things like how to represent specific sections of the interface, which is not always obvious. So do you have a few types of canvas, or a few types of items? Would just representing all possible items in the accessibility tree help here, or would it require looking more at the specific usage site?
It would likely need to evolve in significant ways to add a11y, but if that happens (someone adds a11y stuff) and our luck holds, those ways would be largely orthogonal to the evolution that has already happened and might happen in the future.
I am unsure about that significance. It definitely needs added API, but if most of what those items contain is visual in nature, then I doubt many changes to existing APIs would be needed. But it is hard to tell; I do not know the codebase. I know how it looks in GTK: widgets do not have any accessibility API except one method to get a peer AtkObject, and the AtkObject is the only thing that actually matters here. In fact, in GTK2 times, the accessible implementation for all widgets was a separate library, called GAIL, by the way. Usually, in other toolkits, widgets implement some special interfaces and expose more information themselves; GTK4 has one interface implemented by widgets. More work is done by internal code, such as emitting events when some exposed information is dynamic, or for things like visibility or focus changes.
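A rough sketch of that peer pattern, with hypothetical Python names in place of gtk_widget_get_accessible() and the AtkObject peer; the point is that the visual item needs only one hook, and everything a11y-related lives on the peer:

```python
# Sketch: a canvas item whose only accessibility API is a method
# returning a lazily created peer object that mirrors its state.

class ItemAccessible:
    """The peer: pulls everything it exposes from the item it mirrors."""
    def __init__(self, item):
        self.item = item

    def get_name(self):
        # Derived from the item's own data, so it stays in sync.
        return self.item.label

class CanvasItem:
    """A visual item; knows nothing about a11y except its peer hook."""
    def __init__(self, label):
        self.label = label
        self._accessible = None

    def get_accessible(self):
        # Created on demand and cached, so assistive technology always
        # talks to one stable object per item.
        if self._accessible is None:
            self._accessible = ItemAccessible(self)
        return self._accessible

item = CanvasItem("mute")  # hypothetical item label
assert item.get_accessible() is item.get_accessible()  # one stable peer
```

Under this kind of design, the mostly additive change to an existing canvas API would be the single `get_accessible`-style hook, with the per-item-type work concentrated in the peer classes.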
That applies to adding generic accessibility support; the specific realizations for specific kinds of items are the trickier part, in my opinion. And you might be working with assumptions different from those of toolkits like GTK.