# kotlindl
a
Hello, is KotlinDL multiplatform friendly? We are planning to create a multiplatform library and would love to use KotlinDL for preprocessing/postprocessing of images before/after inference. We have a JVM and a native target. PS: if it's planned for the future, when should we expect it? Thanks
👀 1
z
At this moment it works on the JVM only. I see your request for a multiplatform library for image preprocessing. It could be a good separate library outside of KotlinDL, and I hope this part can become multiplatform in the future. What if, for now, you use KotlinDL for the JVM and OpenCV for the native target? Do you need a JS part in the future?
There are no strict plans for this; all of the DS libraries currently work on the JVM only and are in early alphas and betas.
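To make the idea of a standalone multiplatform preprocessing library concrete, here is a minimal sketch of the kind of dependency-free Kotlin that could live in its common source set. The function name, signature and interleaved (HWC) layout are assumptions for illustration, not an existing KotlinDL API.

```kotlin
// Hypothetical common-code helper for a multiplatform preprocessing library.
// Pure Kotlin stdlib, so it compiles for JVM, Native and JS alike.

/** Channel-wise normalization of an interleaved (HWC) float image: (x - mean) / std. */
fun normalize(pixels: FloatArray, mean: FloatArray, std: FloatArray): FloatArray {
    require(mean.size == std.size) { "mean and std must have the same channel count" }
    require(pixels.size % mean.size == 0) { "pixel count must be a multiple of the channel count" }
    val channels = mean.size
    return FloatArray(pixels.size) { i ->
        val c = i % channels
        (pixels[i] - mean[c]) / std[c]
    }
}
```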
a
Yes, I totally agree, we could have a KotlinDL-processing lib or something that is purely pre/post-processing of data and multiplatform. Not certain if JS support is or will be needed for now, but native is definitely needed. Your idea of using OpenCV could work (although I have found that lib to be not very portable and hard to install). Do you have a roadmap for multiplatform?
z
So, what framework are you going to use for inference in the JVM and native parts, ONNX? Does it work well in native code?
No, we have no roadmap for this. I know about an early prototype for Multik, but that's all. Also, I've been experimenting with the DL core for native/js/jvm; the TensorFlow integration is a lot of pain at this moment.
a
Let me give you more context: we will have a common multiplatform module, a multiplatform library that uses ONNX Runtime (or maybe KInference) for offline users, and a JVM online backend (Ktor web service) for online users. The common multiplatform module will have shared business rules, search algorithms and pre/post-processing. The library will use the common module to access the preprocessing code and process the data (e.g. images) before sending it to ONNX/KInference (it assumes the model is downloaded locally). By preprocess I mean resize, crop, normalization (for now that's all we need). The JVM backend will use the common multiplatform module for the same reasons: to access the shared pre/post-processing code before sending the images/tensors to an online model server (Kubeflow). There the model is never downloaded; we simply send a JSON with the tensors of the preprocessed images. Once we get the response we might apply post-processing to it, e.g. argmax.
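As an illustration of the online path described here, the shared module could serialize the preprocessed tensor to JSON and apply argmax to the response. The payload shape and names below are invented for the example; the real schema is defined by the model server.

```kotlin
import kotlinx.serialization.Serializable
import kotlinx.serialization.encodeToString
import kotlinx.serialization.json.Json

// Invented payload shape for illustration; a real deployment would follow
// the model server's own request/response schema.
@Serializable
data class TensorPayload(val shape: List<Int>, val data: List<Float>)

/** Build the JSON body sent to the online model server. */
fun toJsonPayload(tensor: FloatArray, shape: List<Int>): String =
    Json.encodeToString(TensorPayload(shape, tensor.toList()))

/** Shared post-processing: index of the highest score in the model output. */
fun argmax(scores: FloatArray): Int =
    scores.indices.maxByOrNull { scores[it] } ?: -1
```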
Our offline users will use the library from a C++ desktop app, so I think a native target will be required. Our online users will interact with the JVM backend via HTTP, so the backend will require a JVM target for the common multiplatform module.
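Concretely, the split between targets could sit behind a small facade: the interface would belong to the shared module, the JVM implementation below uses plain AWT, and a native implementation could wrap OpenCV as suggested above. The names are hypothetical, and in a real KMP module the image parameter would be a platform-agnostic type (e.g. raw pixel bytes); BufferedImage is used here only to keep the JVM sketch short.

```kotlin
import java.awt.Image
import java.awt.image.BufferedImage

// Hypothetical facade: one implementation per target
// (AWT or KotlinDL on the JVM, OpenCV on the native side).
interface ImagePreprocessor {
    /** Resize to targetW x targetH and return interleaved RGB floats in [0, 1]. */
    fun resizeAndScale(image: BufferedImage, targetW: Int, targetH: Int): FloatArray
}

class AwtImagePreprocessor : ImagePreprocessor {
    override fun resizeAndScale(image: BufferedImage, targetW: Int, targetH: Int): FloatArray {
        // Resize with AWT's smooth scaling.
        val resized = BufferedImage(targetW, targetH, BufferedImage.TYPE_INT_RGB)
        val g = resized.createGraphics()
        g.drawImage(image.getScaledInstance(targetW, targetH, Image.SCALE_SMOOTH), 0, 0, null)
        g.dispose()

        // Unpack packed RGB ints into [0, 1] floats, channel order R, G, B.
        val out = FloatArray(targetW * targetH * 3)
        var i = 0
        for (y in 0 until targetH) {
            for (x in 0 until targetW) {
                val rgb = resized.getRGB(x, y)
                out[i++] = ((rgb shr 16) and 0xFF) / 255f
                out[i++] = ((rgb shr 8) and 0xFF) / 255f
                out[i++] = (rgb and 0xFF) / 255f
            }
        }
        return out
    }
}
```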
z
Yeah, I see. That's very important context. It could be solved more easily if the offline users could send requests to Ktor from their desktop app :) But as I understand it, you are going to embed the model. Are you going to use the native API for ONNX?
a
Indeed, this would be simpler, but some of our users don't have internet access, and in some cases the library might be used directly on a drone, so it is not realistic to only have a Ktor backend.
🔥 1
For the library, I think I will have to use the ONNX C++ API for native (unless I find that KInference is multiplatform).
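For reference, the embedded-model path on the JVM could look like this with the official ONNX Runtime Java API (the native target would go through the ONNX Runtime C/C++ API instead). The model path, input name and shape are placeholders that depend on the exported model.

```kotlin
import ai.onnxruntime.OnnxTensor
import ai.onnxruntime.OrtEnvironment
import java.nio.FloatBuffer

fun main() {
    val env = OrtEnvironment.getEnvironment()
    // "model.onnx", the input name "input" and the 1x3x224x224 shape are
    // placeholders; substitute the values of your own model.
    env.createSession("model.onnx").use { session ->
        val pixels = FloatArray(1 * 3 * 224 * 224)   // preprocessed image data
        val shape = longArrayOf(1, 3, 224, 224)
        OnnxTensor.createTensor(env, FloatBuffer.wrap(pixels), shape).use { input ->
            session.run(mapOf("input" to input)).use { result ->
                @Suppress("UNCHECKED_CAST")
                val scores = (result.get(0).value as Array<FloatArray>)[0]
                val best = scores.indices.maxByOrNull { scores[it] }
                println("predicted class = $best")
            }
        }
    }
}
```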
z
Do you need help reaching the KInference maintainers?
👌 1
a
That's why I took an interest in KInference: seeing that it is pure vanilla Kotlin means it might be multiplatform in the future (or maybe it already is?).
I would love that, @zaleslaw. If I could ask them whether KInference is multiplatform, and if not, whether it will be and when, that would be useful.
z
I'll try to help you with this next week; I hope it will be possible to arrange a call. I have a few questions for the KInference authors about their plans too.
👍 1
🙏 1