# kotlindl
Hi, getting an OutOfMemoryError FATAL EXCEPTION for bigger ONNX/ORT models on Android
I’m following the ONNX Runtime Mobile Examples on Android and all of them work well. I’m using the same implementation to run my custom ONNX/ORT model on Android, but I get the following exception:

E/AndroidRuntime: FATAL EXCEPTION: main
Process: ai.onnxruntime.example.imageclassifier, PID: 21599
java.lang.OutOfMemoryError: Failed to allocate a 361332168 byte allocation with 8388608 free bytes and 164MB until OOM, target footprint 372920392, growth limit 536870912
    at java.util.Arrays.copyOf(Arrays.java:3670)
    at java.io.ByteArrayOutputStream.toByteArray(ByteArrayOutputStream.java:211)
    at kotlin.io.ByteStreamsKt.readBytes(IOStreams.kt:137)
    at ai.onnxruntime.example.imageclassifier.MainActivity$readModel$2.invokeSuspend(MainActivity.kt:154)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
    at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:56)
    at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:571)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:738)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:678)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:665)

Development environment and model specifications:
1. Custom model: model size 340 MB

Is the larger model size what is causing the problem? This is how I load the model:
private val onnxModel: OnnxInferenceModel by lazy {
    // readBytes() buffers the whole 340 MB model into a ByteArray via a
    // ByteArrayOutputStream; toByteArray() then copies that buffer (see the
    // stack trace above), so peak heap usage can be roughly twice the model size.
    val modelBytes = context.resources
        .openRawResource(R.raw.segmentation_license)
        .readBytes()
    OnnxInferenceModel.load(modelBytes)
}
A few related issues have been reported on the ONNX Runtime GitHub page. In one of them it was suggested to load the model from a file instead of from bytes (https://github.com/microsoft/onnxruntime/issues/19514#issuecomment-1944414727). However, reading from a file may also result in out-of-memory errors, as mentioned in https://github.com/microsoft/onnxruntime/issues/19599. You can try loading the model from a file path with OnnxInferenceModel.load(strPath). If that does not work, further investigation may be required to resolve the issue, potentially on onnxruntime's side.
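For example, on Android you could stream the raw resource to a file on disk and then pass that path to OnnxInferenceModel.load. This is only a minimal sketch: the copyModelToCache helper, the cache-file name, and the choice of context.cacheDir are assumptions for illustration, not part of the original example.

import android.content.Context
import java.io.File

// Hypothetical helper: copy the raw resource to disk in small chunks so the
// full model never has to live in a ByteArray on the JVM heap.
private fun copyModelToCache(context: Context): File {
    val modelFile = File(context.cacheDir, "segmentation_license.onnx") // assumed file name
    if (!modelFile.exists()) {
        context.resources.openRawResource(R.raw.segmentation_license).use { input ->
            modelFile.outputStream().use { output ->
                input.copyTo(output) // streams with a fixed-size buffer, constant memory
            }
        }
    }
    return modelFile
}

private val onnxModel: OnnxInferenceModel by lazy {
    // Load from a file path instead of from bytes, as suggested above.
    OnnxInferenceModel.load(copyModelToCache(context).absolutePath)
}

Whether this avoids the OOM depends on how the runtime itself maps or reads the file, so it is worth testing on a device with the actual 340 MB model.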