# compose
l
Is there a good way to get a reference to a compose Image that I could pass to C (or Kotlin/Native) code to write to? Currently, I have an IntArray on the JVM that I pass through JNI and write to from my K/N code. I then use the IntArray to create a compose Bitmap and draw that to a compose Canvas. Presumably, there’s a way to back an image with a Skia canvas and pass the Skia canvas around, like with Android Surfaces?
I’d like to avoid all the steps needed with the approach I mentioned, since they seem to take more time than I’d like.
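A minimal sketch of the pipeline being described, assuming a hypothetical `nativeFillFrame` JNI entry point and ARGB_8888 frames (the names and class are placeholders, not from the original):

```kotlin
import android.graphics.Bitmap
import androidx.compose.ui.graphics.asImageBitmap
import androidx.compose.ui.graphics.drawscope.DrawScope

// Hypothetical JNI entry point: the Kotlin/Native (or C) side fills `pixels`.
external fun nativeFillFrame(pixels: IntArray)

class FramePipeline(private val width: Int, private val height: Int) {
    private val pixels = IntArray(width * height)
    private val bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)

    // One frame: native code fills the IntArray, then it is copied into the Bitmap.
    fun produceFrame(): Bitmap {
        nativeFillFrame(pixels)
        bitmap.setPixels(pixels, 0, width, 0, 0, width, height)
        return bitmap
    }
}

// Drawing the frame inside a Compose DrawScope.
fun DrawScope.drawFrame(bitmap: Bitmap) {
    drawImage(bitmap.asImageBitmap())
}
```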
k
On Android or desktop?
l
Ideally cross-platform, but even if just Android, I’d be happy.
Performance seems to be more of a problem on Android.
Since Android compose is backed by Bitmaps, I’m less confident in that. I’d assume there’s something that skiko provides, but not sure.
r
A `Canvas` isn’t a backing store, it’s an interface to write into a bitmap
l
I figured the canvas wouldn’t be the best route. Is there a way to do this with an Image?
k
Skiko would only be relevant if you use Jetbrains’s version of Compose on Android
r
What are you worried about exactly on the Android side? You can create a `Bitmap` from an `IntArray` just fine (or even `lock/unlockPixels` in JNI to write into the bitmap directly)
So you have two choices: the int array you mentioned, or two slightly separate code paths
l
Mainly performance. Passing the IntArray through JNI for filling, then using copyPixelsFromBuffer seems to have a non-trivial performance hit. I’ll have to look into lock/unlockPixels.
r
Passing the int array through JNI isn’t costly, no, but it depends on what you do with it then
You could pass the NIO buffer directly too
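A sketch of that direct-buffer variant, again with a hypothetical `nativeFillFrame`; on the C side, `GetDirectBufferAddress` returns a raw pointer into the same memory, so no array copy happens at the JNI boundary:

```kotlin
import android.graphics.Bitmap
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Hypothetical native function; the C side calls
// env->GetDirectBufferAddress(buffer) to write pixels straight into it.
external fun nativeFillFrame(buffer: ByteBuffer)

class DirectBufferPipeline(width: Int, height: Int) {
    // One direct buffer, allocated once and reused every frame.
    private val buffer: ByteBuffer =
        ByteBuffer.allocateDirect(width * height * 4).order(ByteOrder.nativeOrder())
    private val bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)

    fun produceFrame(): Bitmap {
        nativeFillFrame(buffer)
        buffer.rewind()
        // One copy remains: buffer -> bitmap. The IntArray staging step is gone.
        bitmap.copyPixelsFromBuffer(buffer)
        return bitmap
    }
}
```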
l
Passing the array through JNI for filling wasn’t as much of a hit as I’d initially assumed it would be (less than a ms if I remember correctly). Would just be nice to avoid writing to the IntArray, then copying data to a Bitmap, then whatever runtime cost it is to draw the bitmap to the compose canvas (the last part is likely the least performance cost).
r
Passing an array is just passing a reference. It’s what you do with it in JNI that matters
> then whatever runtime cost it is to draw the bitmap to the compose canvas (the last part is likely the least performance cost).
Well… actually… doing what you are doing will cause a texture upload which isn’t free
l
Where would be the best place to learn more about the texture upload?
r
What do you want to learn?
And out of curiosity: what do you do to the bitmap in your JNI?
l
I’d mainly like to learn how to optimize this rendering process. Our performance on Android is lagging behind our iOS performance (my Tab S8+ struggles to keep up with an older iPad). I’m looking to optimize it as much as possible. I reduced recompositions as much as possible last week. Looking to do more.
r
Also: do you do this every frame?
l
We needed a video player, but ExoPlayer couldn’t fill everything we needed. We’re currently using ffmpeg to decode frames, then send them for rendering. We used to use Surfaces, but looking to switch to full compose, especially for the iOS support. I already have a branch that uses this same method of rendering on iOS, with an expect/actual for drawing the IntArray to the DrawScope.
r
Oh yeah ouch, bitmaps on Android aren’t meant for this. You should use a `Surface`
If you don’t already, you can at least keep reusing the same `Buffer` to do the `copyTo/FromBuffer`, and not use an `IntArray`
Or switch to `lockPixels`
If you make sure to use a direct `Buffer`, your JNI can just get a pointer to the data to skip at least one copy
And `lockPixels` will be copy-less
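For the `lockPixels` route, the locking happens on the native side via the NDK’s `AndroidBitmap_lockPixels`/`AndroidBitmap_unlockPixels` from `<android/bitmap.h>`; a sketch with a hypothetical `nativeRenderFrame`, the native contract described in comments:

```kotlin
import android.graphics.Bitmap

// Hypothetical JNI function. The native (C) side would do roughly:
//   #include <android/bitmap.h>
//   void* addr;
//   AndroidBitmap_lockPixels(env, bitmap, &addr);
//   /* decode/write one frame of pixels directly into addr */
//   AndroidBitmap_unlockPixels(env, bitmap);
external fun nativeRenderFrame(bitmap: Bitmap)

class LockedBitmapPipeline(width: Int, height: Int) {
    private val bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)

    // Copy-less: native code writes straight into the Bitmap's own storage,
    // unlocking before returning so the Bitmap can be drawn.
    fun produceFrame(): Bitmap {
        nativeRenderFrame(bitmap)
        return bitmap
    }
}
```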
l
One requirement is the ability to ‘screen record’ parts of the screen, and surfaces didn’t show up when having a view draw to a bitmap. I had a method that would splice together the surface and the produced bitmap (also had to handle things above the surface).
r
`SurfaceView` shows its content in a separate window
`TextureView` is part of the UI hierarchy
But you can also use `PixelCopy` on Android instead
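For the screen-recording requirement, a sketch of reading a `SurfaceView`’s content back with `PixelCopy` (API 24+; the function name here is a placeholder):

```kotlin
import android.graphics.Bitmap
import android.os.Handler
import android.os.Looper
import android.view.PixelCopy
import android.view.SurfaceView

// Asynchronously copies what the SurfaceView currently shows into a Bitmap,
// even though the surface content lives in a separate window.
fun captureSurface(surfaceView: SurfaceView, onCaptured: (Bitmap) -> Unit) {
    val bitmap = Bitmap.createBitmap(
        surfaceView.width, surfaceView.height, Bitmap.Config.ARGB_8888
    )
    PixelCopy.request(surfaceView, bitmap, { result ->
        if (result == PixelCopy.SUCCESS) onCaptured(bitmap)
    }, Handler(Looper.getMainLooper()))
}
```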
l
I remember TextureView having a major blocker for us at the time (2 years ago now) we looked at it. Can’t remember for the life of me what it was. I’ll have to look at PixelCopy for sure.
I’ll also take a look at Buffer. That could be quite helpful.
r
The only difference between `TextureView` and `SurfaceView` is that `TextureView` requires an extra GPU copy at draw time, but should otherwise be the same
l
There doesn’t happen to be a good TextureView equivalent for Compose, does there?
I remember that there used to be a lot of overhead for AndroidView in Compose, but not sure where that is now.
r
There isn’t, you’d use a `TextureView`
The overhead depends on how many you use/the type of View, I wouldn’t worry about it in this case
And it’s going to be nothing compared to doing your own video decoding 😨
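A minimal sketch of wrapping a `TextureView` in Compose via `AndroidView`; the `onSurfaceReady` callback is a placeholder for handing the `Surface` to the decoder:

```kotlin
import android.graphics.SurfaceTexture
import android.view.Surface
import android.view.TextureView
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.viewinterop.AndroidView

@Composable
fun VideoSurface(modifier: Modifier = Modifier, onSurfaceReady: (Surface) -> Unit) {
    AndroidView(
        modifier = modifier,
        factory = { context ->
            TextureView(context).apply {
                surfaceTextureListener = object : TextureView.SurfaceTextureListener {
                    override fun onSurfaceTextureAvailable(
                        texture: SurfaceTexture, width: Int, height: Int
                    ) {
                        // Hand the Surface to the decoder/renderer.
                        onSurfaceReady(Surface(texture))
                    }
                    override fun onSurfaceTextureSizeChanged(
                        texture: SurfaceTexture, width: Int, height: Int
                    ) = Unit
                    override fun onSurfaceTextureDestroyed(texture: SurfaceTexture) = true
                    override fun onSurfaceTextureUpdated(texture: SurfaceTexture) = Unit
                }
            }
        }
    )
}
```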
l
FFmpeg 6.0 claims MediaCodec support, so hopefully that helps on the decoding side. The time we spend in ffmpeg calls right now is quite significant.
r
Use a `Surface` or `lockPixels`, it’s going to be your best solution if you are dead set on doing the decoding yourself
l
Should I unlock the pixels as soon as I finish writing the new data, or is it fine to lock it once, then unlock it when it’s no longer used? I’d imagine it’s not pinning the object, but using one of the Get*Critical JNI methods, which would make it unsafe to leave locked, is this correct?
r
You’ll need to unlock before you can draw
And it’s not using JNI methods
l
I see. So even for the same bitmap, I need to assume that addrPtr can change between draws.
r
It probably won’t (although it could, there’s no guarantee) but it’s not about that, it’s about synchronizing the data for the rendering pipeline
Locking the pixels isn’t expensive
l
I’ll take a look at passing the bitmap through to JNI to avoid the IntArray copy. This will diverge the behavior a bit from iOS, so I’ll have to see what code I can share in this process. Hopefully it will be more when #compose-ios is ready to use.