# kotlindl
s
Is there a way to load my ONNX model from local resources instead of the model hub? UltraFace320 ist just 1 MB and I don't want to rely on the hub if I can just bundle it with my app.
👍 1
n
As I understand it, the ModelHub exactly matches your scenario. Unlike the ModelHub for Desktop, its counterpart for Android loads models from local resources. So the desired models from the ModelHub should be loaded into the resources before the build; the recommended way to do that is the Gradle plugin.
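Roughly, the Android side then looks like this (a sketch only; the `ONNXModelHub(context)` constructor, the `ONNXModels.FaceDetection.UltraFace320` entry, and the package paths are assumptions based on the KotlinDL ONNX module and may differ between releases):

```kotlin
import android.content.Context
import org.jetbrains.kotlinx.dl.onnx.inference.ONNXModelHub
import org.jetbrains.kotlinx.dl.onnx.inference.ONNXModels

// Sketch only: class and package names are assumptions and may differ
// between KotlinDL releases.
fun loadBundledUltraFace(context: Context) =
    ONNXModels.FaceDetection.UltraFace320
        // The hub resolves the model files that the Gradle plugin copied into
        // the app resources before the build, so no network access is needed
        // at runtime.
        .pretrainedModel(ONNXModelHub(context))
```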
s
Is this planned for Desktop, too?
n
It is a good point, we missed this use case :) We implemented it this way for Android because we don't want to deliver model weights to end-user devices, but it is also reasonable for end-user desktop apps.
j
You can download the model file and write something like this:
```kotlin
val inferenceModel = OnnxInferenceModel {
    loadModelBytes() // your function to load bytes from the model file
}
val faceDetectionModel = FaceDetectionModel(inferenceModel, "UltraFace320")
```
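For example, on Desktop `loadModelBytes()` could simply read the bundled file from the classpath (the resource path below is just an illustration):

```kotlin
// One possible implementation of loadModelBytes(): read the bundled .onnx
// file from the JVM classpath. The resource path is only an example.
fun loadModelBytes(): ByteArray =
    object {}.javaClass.getResourceAsStream("/models/ultraface_320.onnx")
        ?.use { it.readBytes() }
        ?: error("ONNX model resource not found")
```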
You'll also need to either initialize `inferenceModel` itself by calling `initializeWith()`, or you can use extensions like `inferUsing` with the `faceDetectionModel` to do the initialization.
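For example, continuing the snippet above (a sketch; the `ExecutionProvider.CPU` import path and the exact `inferUsing` signature are assumptions based on the KotlinDL ONNX runtime module):

```kotlin
// Assumed import paths; check the artifact version you depend on.
import org.jetbrains.kotlinx.dl.onnx.inference.executionproviders.ExecutionProvider.CPU
import org.jetbrains.kotlinx.dl.onnx.inference.inferUsing

// Option 1: initialize the wrapped OnnxInferenceModel explicitly with an
// execution provider before running inference.
inferenceModel.initializeWith(CPU())

// Option 2: let inferUsing initialize the model for the given execution
// provider and run the inference inside the block.
faceDetectionModel.inferUsing(CPU()) { model ->
    // run face detection with the initialized model here
}
```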
s
Thank you, that works! Nice that you already have this ability. I suggest adding this to the Getting Started docs :)