# compose-ios
j
i’m trying to use CoreML with the camera preview, but the inference is REALLY slow, causing a lot of frames to be dropped. If I use the same model in a sample iOS app, it runs perfectly fine. Is this an issue with the Kotlin/ObjC interop or with Compose?
a
That depends. In general, it looks like iOS devices have enough power to transfer screen-size images between Swift/Kotlin/Metal, so maybe something is wrong with the optimisations or the interop casts. You can use iOS Instruments to profile your app and see exactly which operation causes the lag. It would also be interesting to see a reproducer and try the app on my machine.
j
Here is a barebones implementation. Basically I translated the iOS Vision demo into KMP: https://developer.apple.com/documentation/vision/recognizing_objects_in_live_capture You’ll need to run it on an actual iOS device. In the repro, only the camera preview gets displayed; the rest happens in the log output. You can see from the preview how janky the camera feed is, and the logs will tell you about all the dropped frames. I’m hoping I’m just doing something wrong 😅
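For context, the Kotlin side of such a repro presumably boils down to an `AVCaptureVideoDataOutput` sample buffer delegate implemented in Kotlin/Native; a minimal sketch is below. The class name `FrameAnalyzer` and the `onFrame` callback are illustrative placeholders for whatever the actual project does with each frame, not names from the repro.

```kotlin
import kotlinx.cinterop.ExperimentalForeignApi
import platform.AVFoundation.AVCaptureConnection
import platform.AVFoundation.AVCaptureOutput
import platform.AVFoundation.AVCaptureVideoDataOutputSampleBufferDelegateProtocol
import platform.CoreMedia.CMSampleBufferRef
import platform.darwin.NSObject

// Receives every frame delivered by AVCaptureVideoDataOutput and hands it to
// the CoreML/Vision pipeline. `onFrame` stands in for the actual inference call.
@OptIn(ExperimentalForeignApi::class) // required on recent Kotlin versions for CMSampleBufferRef
class FrameAnalyzer(
    private val onFrame: (CMSampleBufferRef?) -> Unit
) : NSObject(), AVCaptureVideoDataOutputSampleBufferDelegateProtocol {

    override fun captureOutput(
        output: AVCaptureOutput,
        didOutputSampleBuffer: CMSampleBufferRef?,
        fromConnection: AVCaptureConnection
    ) {
        // Run the Vision/CoreML request against this sample buffer.
        onFrame(didOutputSampleBuffer)
    }
}
```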
a
Well... That's an interesting example of the way Kotlin captures strong references. Your app drops too many frames. Just guessing, but your app is probably getting a `kCMSampleBufferDroppedFrameReason_OutOfBuffers` error. You can verify it as described here. The trick is the following: Kotlin captures the `CMSampleBufferRef` and releases it only when the garbage collector is triggered, so the app uses up all available buffers and then freezes until the next GC cycle frees them. Simply adding `GC.collect()` somewhere in the `captureOutput` function will bring your app back to life, but it's not the best solution. What you really need is to tell (or trick) Kotlin somehow not to take a GC-managed reference to the `CMSampleBufferRef`. I will write back if I find how to do it properly.
👍 1
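For reference, the `GC.collect()` workaround described above would look roughly like this inside such a delegate (a sketch only; `onFrame` is still a placeholder for the real inference call, and depending on the Kotlin version `GC` lives in `kotlin.native.internal` or `kotlin.native.runtime`):

```kotlin
import kotlin.native.internal.GC // kotlin.native.runtime.GC on newer Kotlin versions
import kotlinx.cinterop.ExperimentalForeignApi
import platform.AVFoundation.AVCaptureConnection
import platform.AVFoundation.AVCaptureOutput
import platform.AVFoundation.AVCaptureVideoDataOutputSampleBufferDelegateProtocol
import platform.CoreMedia.CMSampleBufferRef
import platform.darwin.NSObject

@OptIn(ExperimentalForeignApi::class)
class FrameAnalyzer(
    private val onFrame: (CMSampleBufferRef?) -> Unit
) : NSObject(), AVCaptureVideoDataOutputSampleBufferDelegateProtocol {

    override fun captureOutput(
        output: AVCaptureOutput,
        didOutputSampleBuffer: CMSampleBufferRef?,
        fromConnection: AVCaptureConnection
    ) {
        onFrame(didOutputSampleBuffer)

        // Workaround: force a GC cycle so the strong reference Kotlin took to the
        // CMSampleBufferRef is released before AVFoundation exhausts its fixed
        // pool of buffers (kCMSampleBufferDroppedFrameReason_OutOfBuffers).
        GC.collect()
    }
}
```

This trades throughput for responsiveness: collecting on every frame is expensive, so it is only a stopgap until the sample buffer can be kept out of Kotlin's GC-managed references entirely.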
j
Awesome, thanks for the workaround! After many hours it’s now at least functioning 😅