# compose
a
Hi everybody, I have a philosophical question: how would you design your system if it were a large enterprise data entry system (say, a legacy desktop application) with many dialogs, forms, list grids, etc. that must support keyboard-only navigation (no mouse): keyboard shortcuts and proper keyboard focus for navigating through the components on screen?

I know Compose is currently aimed at Android, but looking into the far future, let's suppose it already supports browsers and desktops. As I understand it, Compose is just a View layer, a kind of MVI architecture: the model passes information into a "Composable function", you get a visual GUI, and it returns intents/events back. How should keyboard focus and navigation be implemented? Does Compose emit some kind of "focused element" event? Or how do you render a GUI component with a specific input field focused? If Compose is just a "state renderer", then everything must be embedded into parameters (disabled/enabled field, focused/not-focused field). What about dialogs? How should dialogs be encoded? And how do you handle navigation if there are two or more non-modal dialogs on screen?

Going further, it seems these questions don't relate directly to Compose; there should be a separate library/framework which handles and implements all the GUI logic and then uses Compose to render it. Do such libraries already exist? Maybe some of the current GUI frameworks could be used, but it looks like everything is tightly integrated with its own GUI rendering engine and cannot be used separately. I know all this Compose stuff is state of the art and no best practices exist yet, but what are your thoughts on how this could be implemented in the "Compose" way?
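A minimal sketch of that "state renderer" idea, assuming the focus APIs that later shipped in stable Compose (`FocusRequester`, `Modifier.focusRequester`); `LoginFormState` and its fields are hypothetical names. Enabled/disabled really is just a parameter, while focus is requested as a side effect driven by the state:

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.material.TextField
import androidx.compose.runtime.Composable
import androidx.compose.runtime.LaunchedEffect
import androidx.compose.runtime.remember
import androidx.compose.ui.Modifier
import androidx.compose.ui.focus.FocusRequester
import androidx.compose.ui.focus.focusRequester

// Hypothetical form state: enabled/disabled is a plain field, and "this field
// should own the focus" is also just data.
data class LoginFormState(
    val userName: String = "",
    val password: String = "",
    val passwordEnabled: Boolean = false,
    val focusPassword: Boolean = false,
)

@Composable
fun LoginForm(
    state: LoginFormState,
    onUserNameChange: (String) -> Unit,
    onPasswordChange: (String) -> Unit,
) {
    val passwordFocus = remember { FocusRequester() }
    Column {
        TextField(value = state.userName, onValueChange = onUserNameChange)
        TextField(
            value = state.password,
            onValueChange = onPasswordChange,
            enabled = state.passwordEnabled,                  // enabled/disabled is a parameter
            modifier = Modifier.focusRequester(passwordFocus)
        )
    }
    // Focus is not a constructor parameter of the field; it is requested as a
    // side effect whenever the state says this field should be focused.
    LaunchedEffect(state.focusPassword) {
        if (state.focusPassword) passwordFocus.requestFocus()
    }
}
```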
l
I’m not sure you have clearly in mind the difference between
just a View layer
and
MVI architecture
since you just said that they're the same thing 😅
k
In case you are looking for a rendering engine: the Android system, Compose, Flutter and Chrome all use the Skia library.
f
Compose is separated (or supposed to be separated) into
Compose runtime + compiler plugin
- not related to UI at all
UI
- a couple of artifacts: one does primitive stuff like Dp that isn't UI itself but is needed for it, one builds the UI tree, one draws to a generic canvas interface, one has Android-specific bindings, and a few more (IIRC)
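As a rough illustration of that split (artifact coordinates follow the later `androidx.compose.*` layout, and the versions are placeholders):

```kotlin
// build.gradle.kts - illustrative only
dependencies {
    implementation("androidx.compose.runtime:runtime:<version>")       // runtime + compiler plugin target, no UI at all
    implementation("androidx.compose.ui:ui-unit:<version>")            // primitives such as Dp
    implementation("androidx.compose.ui:ui:<version>")                 // builds the UI tree, input and focus handling
    implementation("androidx.compose.ui:ui-graphics:<version>")        // drawing against a generic canvas abstraction
    implementation("androidx.compose.foundation:foundation:<version>") // layouts and basic components on top of ui
}
```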
a
@Luca Nicoletti I just wanted to say that Compose is just the View layer of an MVI architecture.
I'm looking for a library or framework which can be used to manage all the GUI state: keyboard focus, dialogs (modality, navigation, open/close), etc.
Something that can be used with Compose or React-like view libraries.
Currently I can only find full-fledged GUI frameworks which have that, but they also have their own integrated GUI rendering.
So I don't get how to use React, Compose or other "View layer"-only libraries to build complex applications. Do you do all the necessary programming for UI-specific state control and management manually on every project?
Maybe my mind is affected by long use of the MVC/MVP patterns, but I can't see right now how to use Compose or React-like libraries to build GUI systems 🙂
I know there are libraries for UI state management, such as Redux
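For reference, the Redux-style version of "GUI logic outside the view" is just plain state plus a reducer; a minimal sketch (all names hypothetical, not tied to any particular library):

```kotlin
// The "GUI logic" (which dialogs are open, which field owns focus) lives in
// plain state and a reducer; the view layer only renders the resulting state.

sealed interface UiAction
data class OpenDialog(val id: String) : UiAction
object CloseTopDialog : UiAction
data class MoveFocus(val fieldId: String) : UiAction

data class UiState(
    val openDialogs: List<String> = emptyList(),  // last element is the top-most dialog
    val focusedFieldId: String? = null,
)

fun reduce(state: UiState, action: UiAction): UiState = when (action) {
    is OpenDialog -> state.copy(openDialogs = state.openDialogs + action.id)
    CloseTopDialog -> state.copy(openDialogs = state.openDialogs.dropLast(1))
    is MoveFocus -> state.copy(focusedFieldId = action.fieldId)
}
```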
l
The thing is that Compose is not a
View layer of MVI architecture
a
but what libraries are used for managing basic (general) application UI?
l
Compose is a UI toolkit that will allow you to declare your UI in a declarative way, as opposed to the current imperative way
That’s it
a
so can't I say that it will fit in (or be used as) the "View layer"?
l
Yes, but not only in MVI
You can build your UI in Compose with MVP
MVVM, MV Whatever
Or even without following any architectural design pattern
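Concretely, the pattern-agnostic part usually ends up looking something like the sketch below (`CustomerFormState` and `CustomerFormEvent` are hypothetical names): state comes in as plain data, user interactions go out as events, and whether a ViewModel, a Presenter or an MVI loop sits on the other side makes no difference to the composable.

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.material.Button
import androidx.compose.material.Text
import androidx.compose.material.TextField
import androidx.compose.runtime.Composable

data class CustomerFormState(val name: String, val saveEnabled: Boolean)

sealed interface CustomerFormEvent {
    data class NameChanged(val value: String) : CustomerFormEvent
    object SaveClicked : CustomerFormEvent
}

// The composable only renders state and forwards events; it does not know
// which architecture produced the state or consumes the events.
@Composable
fun CustomerForm(state: CustomerFormState, onEvent: (CustomerFormEvent) -> Unit) {
    Column {
        TextField(
            value = state.name,
            onValueChange = { onEvent(CustomerFormEvent.NameChanged(it)) }
        )
        Button(
            onClick = { onEvent(CustomerFormEvent.SaveClicked) },
            enabled = state.saveEnabled
        ) { Text("Save") }
    }
}
```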
a
OK, my question is: if the system is architected in a declarative UI way using the Compose UI toolkit,
what is the approach to controlling that UI? (keyboard focus, dialog states, show/close, modality)
l
That’s up to you
Or your company; you should choose which pattern applies best to your application
a
That's why I'm asking what to use, and what you are using for that
l
I’m using MVI, but many others are using MVVM, others MVP
a
That's architecture, i.e. how to lay out the code
l
🤷🏼‍♂️
a
but not how to control and manage UI interactions
e.g. if there are a couple of input forms on screen, the keyboard "TAB" key should move through the first form's items and then through the second form's items
l
That’s something that the framework provides
You just need to define it.
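For the Tab-traversal case above, a sketch assuming the key and focus APIs of post-1.0 Compose (`LocalFocusManager`, `Modifier.onPreviewKeyEvent`); the two text fields stand in for fields from two different forms sharing one traversal order:

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.material.TextField
import androidx.compose.runtime.Composable
import androidx.compose.ui.ExperimentalComposeUiApi
import androidx.compose.ui.Modifier
import androidx.compose.ui.focus.FocusDirection
import androidx.compose.ui.input.key.*
import androidx.compose.ui.platform.LocalFocusManager

// Tab moves focus to the next focusable, Shift+Tab to the previous one.
@OptIn(ExperimentalComposeUiApi::class)
@Composable
fun TwoFieldForm() {
    val focusManager = LocalFocusManager.current
    Column(
        modifier = Modifier.onPreviewKeyEvent { event ->
            if (event.type == KeyEventType.KeyDown && event.key == Key.Tab) {
                focusManager.moveFocus(
                    if (event.isShiftPressed) FocusDirection.Previous else FocusDirection.Next
                )
                true  // consume the event so it is not handled again downstream
            } else {
                false
            }
        }
    ) {
        TextField(value = "", onValueChange = {})
        TextField(value = "", onValueChange = {})
    }
}
```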
a
if a modal dialog is open, then focus navigation with the TAB key should apply to the dialog's forms and elements only
l
Again, handled by the system
Are you asking
how to build my own operating system?
🧌
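In Compose terms, whether the dialog exists is just state, and (on Android) the `Dialog` composable creates a separate platform window, so keyboard focus traversal stays within the dialog's content while it is shown. A sketch, with `showEditDialog` and `onClose` as hypothetical names:

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.material.Button
import androidx.compose.material.Text
import androidx.compose.material.TextField
import androidx.compose.runtime.Composable
import androidx.compose.ui.window.Dialog

@Composable
fun EditScreen(showEditDialog: Boolean, onClose: () -> Unit) {
    // ... the underlying forms of the screen would be composed here ...
    if (showEditDialog) {
        // While composed, the dialog window owns keyboard focus; dismissing it
        // is again just a state change driven by onClose.
        Dialog(onDismissRequest = onClose) {
            Column {
                TextField(value = "", onValueChange = {})
                Button(onClick = onClose) { Text("Close") }
            }
        }
    }
}
```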
a
Actually I'm asking: when using a declarative UI toolkit, how do you define and pass a UI component specification to the system, and what that system is
e.g. an enterprise application in a browser
cannot be described properly in HTML/CSS alone to maintain correct keyboard navigation through different forms, or when a modal dialog is open
z
We've yet to see if it's actually useful in a fully compose-based app, but the library Square is using for its POS apps, Workflow, integrates very nicely with Compose for the view layer. Some people have been playing with using it with SwiftUI too and it's been working pretty well. I've got some simple examples up on a branch: https://github.com/square/workflow/pull/703 Once Compose is more stable we intend to ship Compose support for real.
👍 4
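As a rough shape of that Workflow-style integration (not the actual square/workflow API; `LoginRendering` is a hypothetical type), the workflow or any other state machine emits an immutable "rendering" carrying both data and event callbacks, and a composable simply displays it:

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.material.Button
import androidx.compose.material.Text
import androidx.compose.material.TextField
import androidx.compose.runtime.Composable

// The rendering bundles the data to show with the callbacks to invoke,
// so the composable stays a thin display of whatever the workflow produced.
data class LoginRendering(
    val userName: String,
    val onUserNameChanged: (String) -> Unit,
    val onLogin: () -> Unit,
)

@Composable
fun LoginScreen(rendering: LoginRendering) {
    Column {
        TextField(value = rendering.userName, onValueChange = rendering.onUserNameChanged)
        Button(onClick = rendering.onLogin) { Text("Log in") }
    }
}
```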