# kotlindl
a
I was doing a demonstration and noticed that continuous prediction with the object detection model is (relatively speaking) slow. My first attempt was to capture an image, save it to storage, load it back, and predict with the `detectObjects(file: File)` function. That was slow, as you would expect, but it got me thinking: instead of saving and reloading the image, I fed `predictRaw(f: FloatArray)` directly with a `FloatArray` produced from a rescaled and recolored `BufferedImage`. At that point the input rate was about 23 `BufferedImage`s per second (from my webcam), while the predictor was roughly 111x slower, taking 4.4 - 6.02 s per single input (`FloatArray`). I also wasn't sure about the rescaler and color swapper I wrote, but when I benchmarked them they barely made any difference; the whole conversion from `BufferedImage` to `FloatArray` only cost about 1 frame. So the only thing I can think of is that the predictor, instead of being called from memory, is somehow being fetched from storage on every call. If that is the case, is there anything I can do to prevent it? Here's the code in case you are wondering:
```kotlin
while (cam.isOpen) {
    model.predictRaw(cam.image.toFloatArray())
    ...
```
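A minimal sketch of how the conversion and the prediction can be timed separately (assuming the same `cam`, `toFloatArray()`, and `model` as above; this is illustrative, not the exact code that was run):

```kotlin
import kotlin.time.measureTimedValue
// measureTimedValue may need @OptIn(kotlin.time.ExperimentalTime::class) on older Kotlin versions.

while (cam.isOpen) {
    // Time the BufferedImage -> FloatArray conversion and the inference separately.
    val (input, conversionTime) = measureTimedValue { cam.image.toFloatArray() }
    val (_, predictionTime) = measureTimedValue { model.predictRaw(input) }
    println("conversion: $conversionTime, prediction: $predictionTime")
}
```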
n
Could you provide a minimal reproducible example so we can investigate further what's going on?
a
@Nikita Ermolenko
```kotlin
val model = modelHub.loadPretrainedModel(ONNXModels.ObjectDetection.SSD)
var x = 0
while (x < 100) {
    x++
    measureTime {
        model.predictRaw(image.toFloatArray())
    }.apply(::println)
}
```
n
Sorry for the late reply. I guess the inference time could be different because you skipped the preprocessing step where resizing is applied. Could you try adding preprocessing before the `predictRaw` call? You can find the preprocessing code here.
a
@Nikita Ermolenko sorry for the late reply. Preprocessing isn't available for `BufferedImage` in the current Kotlin DL (there is already an issue about this in the repo), so I used custom code to do the job:
```kotlin
import java.awt.RenderingHints
import java.awt.image.BufferedImage

// InterpolationType and RenderingSpeed are the enums from KotlinDL's image preprocessing API.
fun BufferedImage.resize(
    outputHeight: Int,
    outputWidth: Int,
    interpolation: InterpolationType = InterpolationType.BILINEAR,
    renderingSpeed: RenderingSpeed = RenderingSpeed.MEDIUM,
    enableAntialiasing: Boolean = true
): BufferedImage {
    val resizedImage = BufferedImage(outputWidth, outputHeight, this.type)
    val graphics2D = resizedImage.createGraphics()

    val renderingHint = when (interpolation) {
        InterpolationType.BILINEAR -> RenderingHints.VALUE_INTERPOLATION_BILINEAR
        InterpolationType.NEAREST -> RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR
        InterpolationType.BICUBIC -> RenderingHints.VALUE_INTERPOLATION_BICUBIC
    }

    val renderingSpeedHint = when (renderingSpeed) {
        RenderingSpeed.FAST -> RenderingHints.VALUE_RENDER_SPEED
        RenderingSpeed.SLOW -> RenderingHints.VALUE_RENDER_QUALITY
        RenderingSpeed.MEDIUM -> RenderingHints.VALUE_RENDER_DEFAULT
    }

    graphics2D.setRenderingHint(RenderingHints.KEY_INTERPOLATION, renderingHint)
    graphics2D.setRenderingHint(RenderingHints.KEY_RENDERING, renderingSpeedHint)

    if (enableAntialiasing) {
        graphics2D.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON)
    }

    // Draw the source image scaled into the target buffer.
    graphics2D.drawImage(this, 0, 0, outputWidth, outputHeight, null)
    graphics2D.dispose()

    return resizedImage
}
```
There wasn't any issue with it when I used it with the SSD MobileNet and PoseDetectionModel at reasonable frame rates (around 1/0.137 Hz), so maybe that wasn't the cause.
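For reference, a hedged usage sketch of how this resize extension could be wired into the webcam loop before `predictRaw` (300x300 is just an assumed example size, not necessarily what the loaded SSD model expects; `cam` and `toFloatArray()` are the same placeholders as above):

```kotlin
// Illustrative only: resize each captured frame before prediction.
// Adjust outputHeight/outputWidth to the input size the loaded model actually expects.
while (cam.isOpen) {
    val resized = cam.image.resize(outputHeight = 300, outputWidth = 300)
    model.predictRaw(resized.toFloatArray())
}
```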