I'm curious about the differences in implementation. Briefly, I've been using GraphicsLayer to capture the current Compose layout and render it onto a surface-backed canvas, which records everything into an OpenGL texture. I then perform post-processing on that texture and finally render the processed texture onto my SurfaceView, using the same GraphicsLayer to draw it as the content.
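For reference, this is roughly what my capture step looks like. It's a simplified sketch, not my exact code: `captureLayerToGlTexture`, `glTextureId`, and the width/height parameters are placeholders for my actual setup, and I recreate the SurfaceTexture here only to keep the example self-contained.

```kotlin
import android.graphics.SurfaceTexture
import android.view.Surface
import androidx.compose.ui.geometry.Size
import androidx.compose.ui.graphics.Canvas
import androidx.compose.ui.graphics.drawscope.CanvasDrawScope
import androidx.compose.ui.graphics.layer.GraphicsLayer
import androidx.compose.ui.graphics.layer.drawLayer
import androidx.compose.ui.unit.Density
import androidx.compose.ui.unit.LayoutDirection

// Replays an already-recorded GraphicsLayer into a Surface that is backed by
// a GL texture (glTextureId was created on my GL thread). Placeholder names.
fun captureLayerToGlTexture(
    layer: GraphicsLayer,
    glTextureId: Int,
    width: Int,
    height: Int,
    density: Density,
) {
    val surfaceTexture = SurfaceTexture(glTextureId).apply {
        setDefaultBufferSize(width, height)
    }
    val surface = Surface(surfaceTexture)
    val hwCanvas = surface.lockHardwareCanvas()
    try {
        // Draw the layer's recorded display list into the Surface's canvas.
        CanvasDrawScope().draw(
            density = density,
            layoutDirection = LayoutDirection.Ltr,
            canvas = Canvas(hwCanvas),
            size = Size(width.toFloat(), height.toFloat()),
        ) {
            drawLayer(layer)
        }
    } finally {
        surface.unlockCanvasAndPost(hwCanvas)
    }
    // On the GL thread I then call surfaceTexture.updateTexImage() before
    // sampling, run the post-processing pass, and draw onto the SurfaceView.
}
```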
This implementation seems a bit different. From what I understand, the TextureModifierNode modifier adds a ComposeTextureView to the root of the current ComposeView when it is attached. ComposeTextureView extends a standard TextureView to provide enhanced capabilities for rendering the Compose canvas as a texture. That part is quite similar to my approach; however, instead of using a TextureView and a Canvas, I create a SurfaceTexture and a Surface directly, skipping the TextureView step (see the sketch below for how I picture the attach step).
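To check my understanding of that attach step, here is how I picture the mechanism. This is my own guess at the structure, not the library's actual code; `TextureHostNode`, `TextureHostElement`, and `textureHost()` are hypothetical names.

```kotlin
import android.view.TextureView
import android.view.ViewGroup
import androidx.compose.ui.Modifier
import androidx.compose.ui.node.CompositionLocalConsumerModifierNode
import androidx.compose.ui.node.ModifierNodeElement
import androidx.compose.ui.node.currentValueOf
import androidx.compose.ui.platform.LocalView

// On attach, grab the hosting Compose view and add a TextureView child that
// will receive the rendered content; remove it again on detach.
private class TextureHostNode : Modifier.Node(), CompositionLocalConsumerModifierNode {
    private var textureView: TextureView? = null

    override fun onAttach() {
        val root = currentValueOf(LocalView) as? ViewGroup ?: return
        textureView = TextureView(root.context).also { root.addView(it) }
    }

    override fun onDetach() {
        textureView?.let { (it.parent as? ViewGroup)?.removeView(it) }
        textureView = null
    }
}

private class TextureHostElement : ModifierNodeElement<TextureHostNode>() {
    override fun create() = TextureHostNode()
    override fun update(node: TextureHostNode) = Unit
    override fun equals(other: Any?) = other is TextureHostElement
    override fun hashCode() = javaClass.hashCode()
}

fun Modifier.textureHost(): Modifier = this.then(TextureHostElement())
```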
What puzzles me is that the content of this TextureView is then rendered again into the canvas provided by the modifier's draw scope. I'm not sure why that's necessary; perhaps it's even a better approach than my GraphicsLayer method.
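To make sure I'm reading that step correctly, this is roughly what I understand it to be doing, in my own paraphrase. `getBitmap()` is just standing in for whatever the library actually uses to pull the frame back; I know it implies a costly readback, which is part of why the step puzzles me.

```kotlin
import android.view.TextureView
import androidx.compose.ui.Modifier
import androidx.compose.ui.draw.drawBehind
import androidx.compose.ui.graphics.asImageBitmap

// My interpretation of the "draw the TextureView back into the modifier's
// canvas" step: grab the latest frame and paint it inside the draw scope.
fun Modifier.drawTextureContent(textureView: TextureView): Modifier = drawBehind {
    textureView.getBitmap(size.width.toInt(), size.height.toInt())
        ?.let { frame -> drawImage(frame.asImageBitmap()) }
}
```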
I've also run into one strange bug with GraphicsLayer: sometimes the Compose UI stops updating even though my OpenGL texture continues to render correctly. It's strange because I can see the texture updating, but the Compose layout itself doesn't refresh.