# compose
b
👋 We’re trying to sync up loading animations on multiple screens based on the `monotonicFrameClock`, but see some stuttering at the beginning of the animation. Any idea why this stuttering might be happening?
```kotlin
// Duration is the animation period in ms, defined elsewhere.
var progress by remember { mutableStateOf(0f) }
LaunchedEffect(Unit) {
    while (true) {
        val frameTime = withInfiniteAnimationFrameMillis { it }
        progress = (frameTime % Duration).toFloat() / Duration
    }
}
```
My guess is that it’s because the initial progress is set to 0? How can we sync that (from the start) with the frame time? Another possibility is that because `LaunchedEffect` is launched after the first frame, we start one frame behind. Don’t think I can use `DisposableEffect` (which doesn’t have this issue) as `withInfiniteAnimationFrameMillis` is a `suspend` function.
t
Maybe you should do something like:
```kotlin
// (replacing the body of the LaunchedEffect above)
val start = withInfiniteAnimationFrameMillis { it }
while (true) {
    val frameTime = withInfiniteAnimationFrameMillis { it } - start
    progress = (frameTime % Duration).toFloat() / Duration
}
```
b
I’m pretty sure that won’t work. Using the time from the start would mean they’re not synchronized, as each animation would be based on the time it was first composed rather than on a shared time. Worse yet, we’d be suspending twice before we update our progress, making the jank worse. Here’s a video with your suggestion implemented.
t
Ah ok, I see. Sorry, but I don’t have a solution for this.
Maybe you could use a global Progress class which holds the current frame time and gets updated by the composables. Then you can start at 0.
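For illustration, a minimal sketch of that idea, assuming a hypothetical `SharedFrameTime` holder (the name and shape are made up, not an existing API) that one place updates and every indicator reads:
```kotlin
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.setValue

// Hypothetical global holder: a single frame time that all screens read,
// so every indicator derives its progress from the same value.
object SharedFrameTime {
    var millis by mutableStateOf(0L)
        private set

    fun update(frameTimeMillis: Long) {
        millis = frameTimeMillis
    }
}
```
Each indicator would then compute `progress = (SharedFrameTime.millis % Duration).toFloat() / Duration`, while whichever screen is looping over the frame clock calls `SharedFrameTime.update(...)` once per frame.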
b
Maybe that could work, but we already have a `monotonicFrameClock`, which is a great resource for syncing these kinds of animations. I just need to know how to get access to it!
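For reference, the frame clock that `LaunchedEffect` uses is an element of its coroutine context, so a minimal sketch of reaching it directly (assuming the standard Compose runtime APIs) looks like:
```kotlin
import androidx.compose.runtime.MonotonicFrameClock
import kotlin.coroutines.coroutineContext

// Sketch: inside any suspend function launched from composition (e.g. the body
// of LaunchedEffect), the MonotonicFrameClock driving recomposition is available
// as a coroutine context element.
suspend fun currentFrameClock(): MonotonicFrameClock? =
    coroutineContext[MonotonicFrameClock]
```
In practice `withFrameNanos` / `withFrameMillis` / `withInfiniteAnimationFrameMillis` already suspend on that same clock, so the question is less about access and more about when the first frame time becomes available.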
d
If the design goal is to always keep the progress in sync, why not hoist the progress instead of each progress indicator owning its own progress?
b
These screens are completely separate; in the video they’re all on the same screen, but in actuality the screens can be from any other fragment/activity. We’ve got no shared context really, so I’m not sure how we would hoist state.
d
Do all the screens share a custom MonotonicFrameClock or not? If there's no way to share data across these fragments/activities, you are left with another option: delay the progress rendering by initializing the progress value to null or something else not meaningful, and only start drawing the indicator once the progress value has been updated by the clock.
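A rough sketch of that option (the composable name and the `Duration` constant here are illustrative), where nothing is drawn until the clock has produced its first value:
```kotlin
import androidx.compose.foundation.layout.fillMaxWidth
import androidx.compose.material.LinearProgressIndicator
import androidx.compose.runtime.Composable
import androidx.compose.runtime.LaunchedEffect
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.runtime.withInfiniteAnimationFrameMillis
import androidx.compose.ui.Modifier

private const val Duration = 1_000L // illustrative animation period in ms

@Composable
fun SyncedLoadingIndicator() {
    // null = "no frame time seen yet", so we never draw a misleading 0% first frame.
    var progress by remember { mutableStateOf<Float?>(null) }
    LaunchedEffect(Unit) {
        while (true) {
            val frameTime = withInfiniteAnimationFrameMillis { it }
            progress = (frameTime % Duration).toFloat() / Duration
        }
    }
    // Only render once the shared clock has been observed; every screen computing
    // progress from the same frame time stays in sync.
    progress?.let {
        LinearProgressIndicator(progress = it, modifier = Modifier.fillMaxWidth())
    }
}
```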
b
Yes! I think that’s what we decided to do in the end 🙂 There’s a bit of a stutter as we miss the first frame, especially when a new screen comes up, but it’ll have to do for now. Thanks for the help!
j
This hits a fundamental issue with animations: at any frame, you can only know the play time if you know both the current time and the start time. Therefore it is impossible to (correctly) render an animation before you know the start time. In your example (well, most animations, really), we only know the start time on the first `withInfiniteAnimationFrameMillis`. So, why can't you know the "current time" before that invocation of `withInfiniteAnimationFrameMillis`? Well, the default implementation of Android's MonotonicFrameClock doesn't actually keep track of time. It just passes on the time it gets from the Choreographer to anyone who is suspended in `withFrameNanos`. So it's kind of like a voice-triggered clock without a face: it will tell you the time when you ask for it, but you can't see it. (Note that a MonotonicFrameClock doesn't always run on the Choreographer's time. In particular, the clock used in tests uses the time from a TestCoroutineDispatcher.) Hopefully this will give some insight into why things work the way they do.
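To make the "no face" picture concrete, here is a hedged sketch of what a Choreographer-backed frame clock amounts to (not the actual AndroidUiFrameClock source, just the shape of it): the clock stores no time at all; it posts a callback and hands the Choreographer's frame time to whoever happens to be suspended.
```kotlin
import android.view.Choreographer
import androidx.compose.runtime.MonotonicFrameClock
import kotlinx.coroutines.suspendCancellableCoroutine

// Sketch of a Choreographer-backed clock: no stored "current time", just a relay
// of the next frame's timestamp to the suspended caller.
// (Choreographer.getInstance() must be called on a thread with a Looper.)
class ChoreographerFrameClock(
    private val choreographer: Choreographer = Choreographer.getInstance()
) : MonotonicFrameClock {
    override suspend fun <R> withFrameNanos(onFrame: (frameTimeNanos: Long) -> R): R =
        suspendCancellableCoroutine { continuation ->
            val callback = Choreographer.FrameCallback { frameTimeNanos ->
                continuation.resumeWith(runCatching { onFrame(frameTimeNanos) })
            }
            choreographer.postFrameCallback(callback)
            continuation.invokeOnCancellation { choreographer.removeFrameCallback(callback) }
        }
}
```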
❤️ 1
p
Is there a reason `MonotonicFrameClock` holding on to & exposing the last frame time would be wrong/broken, or is it more a “this is just how it currently works” kind of thing? There are quite a few uses for these globally synchronized types of animation (shimmer animations are another example, or Slack’s animated reactions!). At least in my mind the frame time is the natural way of implementing something like that, and there are already great test hooks in Compose for controlling it.
j
The last known frame time would only be accurate (up to frame duration ms) if you'd keep posting frame callbacks to the Choreographer, which wouldn't be great for performance. If you really want to keep track of the time, you could start a coroutine that just keeps looping over `withFrameNanos`, but I would advise against that from a performance / battery life viewpoint.
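For completeness, the loop being described would look roughly like this (a sketch; `LastFrameTime` is a made-up holder, and as noted, the endless frame requests are exactly the performance/battery cost to avoid):
```kotlin
import androidx.compose.runtime.withFrameNanos
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Job
import kotlinx.coroutines.launch

// Hypothetical holder for the most recently observed frame time.
object LastFrameTime {
    @Volatile
    var nanos: Long = 0L
        private set

    // The scope must have a MonotonicFrameClock in its context (e.g. one obtained
    // via rememberCoroutineScope in composition). Looping like this keeps asking
    // for frames, so the device can never skip idle frames.
    fun startTracking(scope: CoroutineScope): Job = scope.launch {
        while (true) {
            nanos = withFrameNanos { it }
        }
    }
}
```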
p
> The last known frame time would only be accurate (up to frame duration ms) if you’d keep posting frame callbacks to the Choreographer,
A frame callback gives you the “time in nanoseconds when the frame started being rendered”. My (probably incorrect) mental model is that a layout/draw pass would always be preceded by a frame callback, so you’d always have a valid frame time during layout/draw (but not necessarily during regular composition?). What am I missing?
> which wouldn’t be great for performance. If you really want to keep track of the time, you could start a coroutine that just keeps looping over `withFrameNanos`, but I would advise against that from a performance / battery life viewpoint.
Oh, so Choreographer callbacks aren’t just passively observing frames, they have side effects? Would constantly posting them prevent the device skipping frames where there’s no work to do (e.g. no animations running, no invalidations)?
j
> Would constantly posting them prevent the device skipping frames where there’s no work to do
Correct
👍 1
> so you’d always have a valid frame time during layout/draw (but not necessarily during regular composition?)
Not necessarily for layout, as layout can happen during other stages as well. For draw, I'm not quite sure how the RenderThread interacts with that assumption.
😅 1