# compose
z
Compose doesn’t differentiate between touch mode vs not-touch mode focus, right? Does Compose’s focus model just assume keyboard navigation isn’t a thing? I don’t see anything in the implementation of Material `Button` that interacts with the focus system.
j
Does keyboard navigation not work? For some reason I thought it did already. Differentiating between touch and mouse input is very much a work in progress, delayed slightly by some other internal work that ended up taking priority. Can you elaborate on what focus mode differences you're thinking about for touch/mouse modes? That will help us as we plan further work in this area.
I suppose by focus model, you're referring to questions like "is button focusable"?
☝️ 1
z
yea
I haven’t actually tried this (haven’t bothered hooking up a hardware keyboard to a physical phone yet), but I don’t know how you’d control this just from looking at the APIs. I was just reviewing some code for a `Button` in our internal design system and there was a `Modifier.focusable` on it, which made sense to me, but that also doesn’t seem to happen in the Material library.
j
cc @Ralston Da Silva
z
Yep, I just connected a Bluetooth keyboard and tried tabbing and arrow-keying around on some Compose apps, and nothing happens. Compare to non-Compose apps, which start showing focus state as soon as the keyboard is connected and let you move focus around with the keyboard.
We could start adding focus modifiers to things like buttons manually, but then they’d be touch-focusable too, which seems undesirable.
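For concreteness, a minimal sketch of the manual approach described above, assuming the public `Modifier.focusable` and interaction-source APIs in `androidx.compose.foundation` (the wrapper name and highlight styling here are hypothetical, not from the thread):

```kotlin
import androidx.compose.foundation.border
import androidx.compose.foundation.focusable
import androidx.compose.foundation.interaction.MutableInteractionSource
import androidx.compose.foundation.interaction.collectIsFocusedAsState
import androidx.compose.material.Button
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.remember
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.unit.dp

// Hypothetical wrapper that makes a Material Button focusable by hand.
// Drawback noted in the thread: this makes it touch-focusable too.
@Composable
fun FocusableButton(onClick: () -> Unit) {
    val interactionSource = remember { MutableInteractionSource() }
    // Observe focus state via the interaction source so we can draw a highlight.
    val isFocused by interactionSource.collectIsFocusedAsState()
    Button(
        onClick = onClick,
        modifier = Modifier
            .border(2.dp, if (isFocused) Color.Blue else Color.Transparent)
            .focusable(interactionSource = interactionSource)
    ) {
        Text("OK")
    }
}
```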
l
Yes, we haven’t implemented support for differentiating between these yet, but at some point I imagine we will need it or something similar. At a base level, touching a button shouldn’t show a focus highlight when in touch mode, but keyboard navigation should.
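The behavior described above could be sketched with an input-mode query. This assumes something like `LocalInputModeManager` from `androidx.compose.ui.platform`, which exists in later Compose UI releases but may not have been available at the time of this thread; the helper function name is hypothetical:

```kotlin
import androidx.compose.runtime.Composable
import androidx.compose.ui.input.InputMode
import androidx.compose.ui.platform.LocalInputModeManager

// Hypothetical helper: only show a focus highlight outside touch mode,
// so tapping a button doesn't draw a focus ring but keyboard focus does.
@Composable
fun shouldShowFocusHighlight(isFocused: Boolean): Boolean {
    val inputModeManager = LocalInputModeManager.current
    // In touch mode, suppress the focus ring; in keyboard mode, show it.
    return isFocused && inputModeManager.inputMode != InputMode.Touch
}
```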
z
Good to know, thanks. I don’t think this is a blocker for us, so for now we’ll just avoid adding focus modifiers to things that shouldn’t be touch-focusable, and wait to see what happens.
Actually, this might be more of a blocker for us than I thought. If that’s the case, it wouldn’t be feasible to implement this ourselves on top of foundation, would it? Maybe we could piggyback off the semantics infra that’s used for accessibility focus, but I don’t know if those APIs are public.