# compose
s
I've been experimenting with the new GraphicsLayer API and found it quite helpful. It allowed me to implement a rendering optimization trick. However, I've encountered some challenges that I'm hoping to get some advice on. Specifically, I'm looking for a way to create an android.graphics.Picture from a graphics layer. For all the details, please check the full thread 🧵.
`GraphicsLayerPicture(val graphicsLayer: GraphicsLayer) : Picture()`
Solved ✅
I tried copying the `class GraphicsLayerPicture(val graphicsLayer: GraphicsLayer) : Picture()` from the Compose sources. Unfortunately, the method `graphicsLayer.draw(androidx.compose.ui.graphics.Canvas(canvas), null)` is an internal API, and I can't call it from my code. Dealing with a baked hardware Bitmap is somewhat inconvenient for my needs. I'm looking to obtain it as a GL_TEXTURE_EXTERNAL_OES texture target to perform further postprocessing in my fragment shader. Sadly, accessing the bitmap's HardwareBuffer (bitmap.hardwareBuffer) is only possible from API 31 onwards, and on the NDK side from API 30, even though hardware bitmaps were introduced as early as API 26:
```
// Only possible on API 31+, since Bitmap#hardwareBuffer was introduced there.
val image: EGLImageKHR? = androidx.opengl.EGLExt.Companion.eglCreateImageFromHardwareBuffer(display, bitmap.hardwareBuffer)
EGLExt.glEGLImageTargetTexture2DOES(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, image!!)
```
On the other hand, working with android.graphics.Picture offers much broader API availability, and I can relatively easily turn it into a GL_TEXTURE_EXTERNAL_OES texture target (pseudocode):
```
val picture: Picture = ...
val surfaceTexture = SurfaceTexture(texId)
surfaceTexture.setDefaultBufferSize(viewportWidth, viewportHeight)
val surface = Surface(surfaceTexture)

// Replay the Picture into the Surface via a hardware-accelerated canvas.
val canvas = surface.lockHardwareCanvas()
canvas.drawPicture(picture)
surface.unlockCanvasAndPost(canvas)

// Latch the posted buffer onto the GL_TEXTURE_EXTERNAL_OES texture bound to texId.
surfaceTexture.updateTexImage()

surface.release()
surfaceTexture.release()
```
I'm aware of the documentation describing how to write the contents of a composable to a picture, but it's not as convenient as using the GraphicsLayer API. Maybe I'm looking in the wrong direction? Could someone hint at something I might be missing when working with a hardware bitmap to access it from a shader without copying data to the CPU on lower APIs?
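For reference, the documented composable-to-Picture pattern looks roughly like the sketch below. It's paraphrased from the official docs; `picture` is assumed to be an android.graphics.Picture owned by the caller, and the exact DrawScope.draw overload may vary by Compose version.
```
Modifier.drawWithCache {
    val width = this.size.width.toInt()
    val height = this.size.height.toInt()
    onDrawWithContent {
        // Record the composable's draw commands into the caller-owned Picture.
        val pictureCanvas =
            androidx.compose.ui.graphics.Canvas(picture.beginRecording(width, height))
        draw(this, this.layoutDirection, pictureCanvas, this.size) {
            this@onDrawWithContent.drawContent()
        }
        picture.endRecording()
        // Keep drawing the content on screen as usual.
        drawIntoCanvas { canvas -> canvas.nativeCanvas.drawPicture(picture) }
    }
}
```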
BTW, I've managed to load a rendered GraphicsLayer into a bitmap as a texture and access it from my fragment shader, all without copying it onto the CPU. In the screenshot provided, the bottom half of the screen is rendered by OpenGL, while the top part is a regular Compose UI. However, as I described earlier, working with a baked Bitmap feels like taking one step forward and then one step back.
a
cc @Nader Jawad
πŸ‘ 1
r
What do you mean by baked bitmap? Your example of going through a picture is no different (assuming you avoided a CPU copy with the bitmap)
n
I'm also not following why there is a need to go from a GraphicsLayer to a picture. You can always draw the GraphicsLayer itself into a canvas obtained from a Surface backed by a SurfaceTexture as well without the need for a Picture. HardwareBuffer instances can be imported into GL as a zero-copy operation as well. So on newer platform versions (Android Q+) you can skip additional copies of bitmaps.
s
> So on newer platform versions (Android Q+) you can skip additional copies of bitmaps.
I would like to skip the additional copies before Q as well, by rendering the Picture into a Surface or similar. Is that possible?
bitmap.getHardwareBuffer() is only available on API 31+.
> You can always draw the GraphicsLayer itself into a canvas obtained from a Surface backed by a SurfaceTexture as well without the need for a Picture
Are there any examples of this available? What I saw in the docs was to use a Picture for recording canvas commands.
n
No picture is necessary though so I'm not quite following. You can draw the GraphicsLayer into the canvas provided by the SurfaceTexture's Surface
s
How do I provide a canvas, obtained from a Surface backed by a SurfaceTexture, to the Compose UI? I might be forgetting the exact APIs for this, please correct me if I'm wrong.
I mean, in a way that's similar to how I can provide the GraphicsLayer. Just trying to figure out the best approach here.
Perhaps I'm missing something 🤔
n
Here's an example:
```
fun drawGraphicsLayerToSurface(
    layer: GraphicsLayer,
    texId: Int,
    density: Density,
    layoutDirection: LayoutDirection
) {
    val size = layer.size
    val surface = Surface(SurfaceTexture(texId).apply {
        setDefaultBufferSize(size.width, size.height)
    })
    // Draw the layer through a DrawScope backed by the Surface's hardware canvas.
    val androidCanvas = surface.lockHardwareCanvas()
    val hardwareCanvas = androidx.compose.ui.graphics.Canvas(androidCanvas)
    CanvasDrawScope().draw(density, layoutDirection, hardwareCanvas, size.toSize()) {
        drawLayer(layer)
    }
    // Post the frame so the SurfaceTexture can latch it with updateTexImage().
    surface.unlockCanvasAndPost(androidCanvas)
}
```
s
```
CanvasDrawScope().draw(density, layoutDirection, hardwareCanvas, size.toSize()) {
    drawLayer(layer)
}
```
n
You use CanvasDrawScope to create a DrawScope from the canvas returned from lockHardwareCanvas and draw the GraphicsLayer within this scope
s
Oh, I skipped over that because it didn't seem to make any sense to me at first glance: a CanvasDrawScope that's empty and attached to nothing.
Now I see why it will work
Really appreciate the help, thanks so much! 🙂
n
No problem! Happy to help 🙂
Also, just FYI, you can use the FrameBuffer API to handle the GL logic of importing a HardwareBuffer into GL and making it current for rendering: https://developer.android.com/reference/androidx/graphics/opengl/FrameBuffer I noticed in your code snippet it looked like you were already using the graphics-core AndroidX library, so you would have access to this API as well. There's nothing wrong with handling the EGLClientBuffer logic yourself, but it's already handled within FrameBuffer too
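A minimal sketch of what that could look like, assuming an EGL context is already current on the calling thread; the helper name is made up, and the FrameBuffer constructor and makeCurrent() usage reflect my reading of graphics-core, so treat the exact signatures as an assumption.
```
import android.hardware.HardwareBuffer
import androidx.graphics.opengl.FrameBuffer
import androidx.graphics.opengl.egl.EGLSpec

// Assumption: an EGL context is already current on this thread.
fun renderIntoHardwareBuffer(width: Int, height: Int, drawCommands: () -> Unit): HardwareBuffer {
    // Allocate a buffer that can be rendered into and later sampled from a shader.
    val hardwareBuffer = HardwareBuffer.create(
        width, height, HardwareBuffer.RGBA_8888, /* layers = */ 1,
        HardwareBuffer.USAGE_GPU_COLOR_OUTPUT or HardwareBuffer.USAGE_GPU_SAMPLED_IMAGE
    )
    // FrameBuffer takes care of the EGLImage import and FBO setup for the buffer.
    FrameBuffer(EGLSpec.V14, hardwareBuffer).use { frameBuffer ->
        frameBuffer.makeCurrent() // subsequent GL draw calls render into the buffer
        drawCommands()
    }
    return hardwareBuffer
}
```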
For Android versions prior to when Bitmap#getHardwareBuffer was introduced you can leverage a SurfaceTexture; however, for newer API levels, rendering directly with HardwareBuffer instances is the current recommendation
s
Actually this
https://files.slack.com/files-pri/T09229ZC6-F06T92EE6NA/khr_texture_load.png
is rendering into my FBO. However, I still need to find a way to access a HardwareBuffer from a Bitmap before API 31, aside from the Picture solution we figured out.
> For Android versions prior to when Bitmap#getHardwareBuffer was introduced you can leverage a SurfaceTexture
I mean, for another use case, where my only input is a Bitmap with a HW config. So a SurfaceTexture would not help much
n
For prior versions you can just stick to SurfaceTexture usage
You can still draw that hardware bitmap into a hardware accelerated canvas that you would obtain from Surface#lockHardwareCanvas
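A minimal sketch of that pre-API-31 path; it assumes `surfaceTexture` was created from a GL_TEXTURE_EXTERNAL_OES texture on the GL thread, and the helper name is just for illustration.
```
import android.graphics.Bitmap
import android.graphics.SurfaceTexture
import android.view.Surface

// Draws a (possibly HARDWARE-config) Bitmap into the SurfaceTexture's buffer
// via a hardware-accelerated canvas, so no pixels are read back on the CPU.
fun drawBitmapToSurfaceTexture(bitmap: Bitmap, surfaceTexture: SurfaceTexture) {
    surfaceTexture.setDefaultBufferSize(bitmap.width, bitmap.height)
    val surface = Surface(surfaceTexture)
    val canvas = surface.lockHardwareCanvas()
    canvas.drawBitmap(bitmap, 0f, 0f, null)
    surface.unlockCanvasAndPost(canvas)
    // Latch the posted frame onto the EXTERNAL_OES texture; call on the GL thread.
    surfaceTexture.updateTexImage()
    surface.release()
}
```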
s
oh, I hadn't realized it could work that way, you're right
n
You would just need to switch the texture import logic between HardwareBuffer and SurfaceTexture sources, but that's just a few lines of GL
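Roughly, the switch could look like the sketch below; `bindSourceTexture`, `display`, and `texId` are illustrative names for an already-initialized EGLDisplay and GL texture.
```
import android.hardware.HardwareBuffer
import android.opengl.EGLDisplay
import android.opengl.GLES11Ext
import android.opengl.GLES20
import androidx.opengl.EGLExt

fun bindSourceTexture(texId: Int, display: EGLDisplay, hardwareBuffer: HardwareBuffer?) {
    if (hardwareBuffer == null) {
        // SurfaceTexture source: it already owns the EXTERNAL_OES texture,
        // so binding it (after updateTexImage()) is all that's needed.
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, texId)
    } else {
        // HardwareBuffer source (e.g. Bitmap#hardwareBuffer on API 31+):
        // import it as an EGLImage and attach it to the same texture target.
        val image = EGLExt.eglCreateImageFromHardwareBuffer(display, hardwareBuffer)
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, texId)
        EGLExt.glEGLImageTargetTexture2DOES(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, image!!)
    }
}
```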
s
yeah, I see now, it sounds so obvious
and it doesn't perform any copies of the bitmap data, right? Because it's already in VRAM
n
If you are rendering to HardwareBuffers then you do not, but you may require a copy to get the SurfaceTexture content back to a bitmap for older API levels
It just depends on what your end destination is, either on screen directly or to a bitmap to be saved to disk/shared etc.
s
it's purely an in-memory thing, for visualization
n
If you are rendering to a HardwareBuffer to be presented on screen you can use SurfaceControlCompat + SurfaceControlCompat.Transaction#setBuffer to show it on screen
I guess it depends on how you are visualizing it 🙂
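A rough sketch of that presentation step, assuming a SurfaceControlCompat is already built and attached to the window; the transaction calls are from memory of the androidx.graphics.surface API, so treat the exact signatures as an assumption.
```
import android.hardware.HardwareBuffer
import androidx.graphics.surface.SurfaceControlCompat

// Hypothetical presentation step: `surfaceControl` is assumed to be attached to the
// window already, and `buffer` holds the rendered frame.
fun present(surfaceControl: SurfaceControlCompat, buffer: HardwareBuffer) {
    SurfaceControlCompat.Transaction()
        .setBuffer(surfaceControl, buffer)
        .setVisibility(surfaceControl, true)
        .commit()
}
```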
s
Ideally as a regular hw bitmap within an Image, not involving SurfaceView/TextureView, etc.
n
Yup, so you can call Bitmap.wrapHardwareBuffer; otherwise you would need to do a copy of your destination surface back to a Bitmap for older API levels
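A minimal sketch of that Bitmap path, assuming the rendered frame lives in a HardwareBuffer allocated with USAGE_GPU_SAMPLED_IMAGE (which Bitmap.wrapHardwareBuffer requires); the composable name is just for illustration.
```
import android.graphics.Bitmap
import android.graphics.ColorSpace
import android.hardware.HardwareBuffer
import androidx.compose.foundation.Image
import androidx.compose.runtime.Composable
import androidx.compose.ui.graphics.asImageBitmap

// Wraps the GPU-resident buffer as a HARDWARE Bitmap (API 29+); no CPU pixel copy.
fun HardwareBuffer.toDisplayBitmap(): Bitmap =
    requireNotNull(Bitmap.wrapHardwareBuffer(this, ColorSpace.get(ColorSpace.Named.SRGB)))

@Composable
fun PostprocessedFrame(buffer: HardwareBuffer) {
    // The wrapped hardware Bitmap can then be shown with a plain Image composable.
    Image(bitmap = buffer.toDisplayBitmap().asImageBitmap(), contentDescription = null)
}
```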
s
yes, I'm using it now, along with androidx.graphics.Renderer
Anyway. Thank you so much for your time; it really unlocked a new perspective for me. I hadn't considered this approach before!
n
No problem! Happy to help! Also glad to see folks combining the different technologies here. It was one of the design goals of the GraphicsLayer API to help keep everything in a hardware accelerated code path all the time 🙂
❤️ 2
s
> GraphicsLayer API to help keep everything in a hardware accelerated code path all the time
Yes, that's exactly my goal here with what I'm trying to implement. It's crucial to keep everything on the hardware path as much as possible for speed. I'm truly amazed by the flexibility of Compose UI and Android as a whole. I've already accomplished so much without even needing to dive into the NDK. Incredible! 🤗
πŸ‘ 1
πŸ‘πŸΎ 1
πŸ‘πŸΌ 1
n
The graphics-core library does a lot of this NDK handling on your behalf as well. There is quite a bit of consolidation work done here to align the NDK and SDK into a unified API surface that can be accessed through Kotlin/Java APIs directly
❤️ 2
s
That's great that it isn't limited to only the newest Android versions. Android 9 is my minimum target, and the library already seems to offer a rich set of graphics features.
n
cc @Siyamed
t
Appreciated this discussion, even though the details are a bit fuzzy to me. I've only done simple things with the graphicsLayer modifier. Is there a decent 10,000-foot overview of the Compose rendering pipeline somewhere? It's a little hard (for me) to keep track of pictures, layers, buffers, bitmaps, etc., and which are "old frameworks" vs "do it this way now".
s
I can't provide more insight into Compose UI rendering than what the framework's developers have shared, but from what I understand and have observed in the sources, Compose employs RenderNodes to record commands and perform drawing tasks. This approach, which has been a part of Android's architecture since API 23, represents a lower-level abstraction compared to the Canvas we're accustomed to.

Speaking from my perspective, I embarked on an intriguing experiment. My goal was to render a UI (or a part of it) directly into a GPU texture, minimizing the copying of pixel data as much as possible. Following this, I plan to perform some postprocessing on these textures before drawing them back onto the UI. Here's where hardware bitmaps come into play. A GraphicsLayer rasterizes itself into such a bitmap. Essentially, a hardware bitmap acts as a handle to a texture that resides exclusively in GPU memory, avoiding the conventional storage areas for bitmap pixel data like the heap or shared spaces.

With the help of the androidx.graphics library, it becomes relatively straightforward to interact with this texture from an OpenGL fragment shader, allowing for various manipulations. Since the data isn't transferred back and forth between the CPU and GPU and you always stay on a hardware code path, the performance benefits are significant. Despite some minor challenges with HardwareBuffers, as discussed in the thread, Nader introduced a more effective method for achieving my objectives. 🙂
n
@Travis Griggs the existing documentation for Android's Graphics Architecture is very much still relevant even for Compose: https://source.android.com/docs/core/graphics Additionally there is documentation about Compose's graphics APIs here as well: https://developer.android.com/develop/ui/compose/graphics/draw/overview

The Compose graphics APIs unify the existing framework APIs to provide a more opinionated approach for rendering content along with providing a more explicit API surface. For example, the GraphicsLayer API is built on top of RenderNode + View for compatible API levels and represents a displaylist of drawing commands along with metadata around how to render this list (e.g. affine transformations, alpha, etc.). Picture was also a displaylist-like API similar to RenderNode, but it had software-rendering limitations across various platform versions and is superseded by RenderNode.
πŸ™ 1