molikto01/31/2020, 5:10 AM
Chuck Jazdzewski [G]01/31/2020, 5:30 PM
[…] is to record the transaction state of […] objects. The only public part is the […], which is a unique (monotonically increasing) value used to select the current state of a model object given the frame. […] objects use an MVCC-like algorithm to support snapshot isolation between threads; the […] serves as the clock in the MVCC algorithm. The […] also tracks the list of objects that have been written to, and allows observers to track modifications. This is used by […] to track when […] objects change. The observers can be set when a new […] is open, which is currently controlled by the […]. Layout uses a nested observation, but that will be replaced by nested […] once we have them.
[…] synchronously because it was designed to allow composition to occur on a separate thread (or even multiple separate threads) from mutation. We don't take advantage of that now, so it seems like a waste of time. If we end up not supporting multi-threaded or off-main-thread composition, we will revisit the complexity caused by supporting this separation.
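[Editorial note: the MVCC-style record selection described above can be sketched roughly as follows. This is an illustrative toy, not the actual Compose implementation; the class names `Record` and `MvccObject` and the `invalid` set are hypothetical stand-ins for the mechanism being described.]

```kotlin
// Toy sketch of MVCC-style state selection: each write appends a record
// stamped with the writer's id; each reader sees only records visible
// to its own snapshot, giving snapshot isolation between threads.
class Record(val snapshotId: Int, val value: Any?)

class MvccObject {
    private val records = mutableListOf<Record>()

    // Writing appends a new record stamped with the writing snapshot's id.
    fun write(snapshotId: Int, value: Any?) {
        records.add(Record(snapshotId, value))
    }

    // Reading selects the newest record visible to the reader: the record
    // with the largest id <= the reader's id that is not in the reader's
    // set of invalid (concurrently open, not yet applied) snapshot ids.
    fun read(snapshotId: Int, invalid: Set<Int>): Any? =
        records
            .filter { it.snapshotId <= snapshotId && it.snapshotId !in invalid }
            .maxByOrNull { it.snapshotId }
            ?.value
}

fun main() {
    val obj = MvccObject()
    obj.write(1, "a") // committed at id 1
    obj.write(3, "b") // committed at id 3
    // A reader opened at id 2 still sees "a": the write at id 3 is invisible.
    println(obj.read(2, emptySet())) // a
    println(obj.read(3, emptySet())) // b
}
```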
mcpiroman03/25/2022, 10:52 AM
Chuck Jazdzewski [G]03/29/2022, 12:18 AM
mcpiroman04/06/2022, 12:12 PM
> The current slot table is fine as it is read-only during composition

So aren't changes made in place when […]? I was wondering whether it would be easier and faster if all changes to the slot table were made in place during (re)composition, even when multithreaded; after all, no group should be (re)composed simultaneously, hence no (high-level) synchronization is needed. It's just that a flat-array gap buffer seems infeasible for this: it could require having multiple gaps (one for each thread), or even more complicated handling of indices.

Another thing is whether there is still a need for fast iteration over the tree, which the slot table was optimized for. I don't know, but scope-based recomposition and skipping seem to reduce much of the need for iteration. On the other hand, profiling my application (CfD), which is quite data-heavy, shows that very much of the CPU time boils down to manipulating (mostly copying) arrays within the slot table.

I'm not sure if this can be corrected within the current approach or is something inherent to it, but I want to ask anyway: how about using not 2 flat arrays, but something more tree-like? I assume flat arrays were chosen to pack the data for memory usage and data locality, but I also think that using a tree of Array<Any?> should actually reduce the amount of data to be stored, and because of the above point, locality should not be an issue. I think it should be possible to choose the branch factor so that the memory overhead (8 bytes per reference instead of the current 4 per index, and afaik 16 bytes per array on the JVM) and the indirection are not too high (so probably not an array per group), but so that there is no need for a gap buffer and each thread can work independently on a given array instance. That should enable fast, in-place, and multithreaded handling of the slot table. Because of the indirection it would be slower to iterate over the entire tree, but again, that should not be an issue.
Chuck Jazdzewski [G]04/08/2022, 6:34 PM
> very much of the CPU time boils down to manipulating (mostly copying) arrays within the slot table.

This is not what I am seeing in any trace of Compose running on Android. The amount of time spent copying is relatively trivial compared to the amount of time taken running the composition functions (which don't do any copying except when creating new content).