Moritz Lindner
01/18/2023, 8:20 PM
1.png in the /images/train/ directory corresponds to 1.txt in the labels/train directory.
A txt file is either empty (if no object is in the corresponding image) or contains rows like 0 0.403260 0.570903 0.090604 0.160194. Each row represents an object labeled in the corresponding image, where the first element is the class and the other 4 elements describe the bounding box's top-left and bottom-right coordinates.
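A row of that format can be parsed in a few lines of Kotlin. A minimal sketch, assuming exactly the five whitespace-separated values described above (the type and function names here are illustrative, not KotlinDL API):

```kotlin
// Illustrative parser for one label row: "<class> <x1> <y1> <x2> <y2>".
data class LabeledBox(val classId: Int, val x1: Float, val y1: Float, val x2: Float, val y2: Float)

fun parseRow(row: String): LabeledBox {
    val p = row.trim().split(Regex("\\s+")).map { it.toFloat() }
    require(p.size == 5) { "expected 5 values per row, got ${p.size}" }
    return LabeledBox(p[0].toInt(), p[1], p[2], p[3], p[4])
}
```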
I want to use the OnHeapDataset. I was a bit confused by the usage of FloatArray everywhere, but I think I got the featuresExtractor working:
featuresExtractor = { basePath ->
    println("extracting features in $basePath")
    val folder = File(basePath)
    val res = folder.listFiles().map { image ->
        println("converting ${image.absolutePath}")
        val imageWithAlphaChannel = ImageIO.read(image)
        val imageWithoutAlphaChannel = BufferedImage(
            imageWithAlphaChannel.width,
            imageWithAlphaChannel.height,
            TYPE_INT_RGB
        ).apply {
            for (x in 0 until imageWithAlphaChannel.width) {
                for (y in 0 until imageWithAlphaChannel.height) {
                    val rgb = imageWithAlphaChannel.getRGB(x, y)
                    setRGB(x, y, rgb)
                }
            }
        }
        pipeline.apply(imageWithoutAlphaChannel).first
    }.toTypedArray()
    println("finished extracting features in $basePath")
    res
}
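As an aside, the per-pixel copy above can be collapsed into a single drawImage call. A sketch (note that, unlike the bitwise copy, this composites translucent pixels over the target's background instead of discarding the alpha bits, which only matters for semi-transparent sources):

```kotlin
import java.awt.image.BufferedImage

// Drop the alpha channel by drawing into a TYPE_INT_RGB target.
fun stripAlpha(src: BufferedImage): BufferedImage {
    val dst = BufferedImage(src.width, src.height, BufferedImage.TYPE_INT_RGB)
    val g = dst.createGraphics()
    g.drawImage(src, 0, 0, null)
    g.dispose()
    return dst
}
```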
However, I am unsure how to build the labelExtractor, since it expects a (String, Int) -> FloatArray. I am unable to encode the described row structure (containing class, x1, y1, x2, y2) into a FloatArray, as I would need an Array<FloatArray> to encode the information for the bounding boxes.
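One common workaround for a fixed-size FloatArray label is to cap the number of boxes per image and flatten with padding. A minimal sketch under that assumption (MAX_BOXES and the -1f sentinel are arbitrary choices of mine, not KotlinDL API):

```kotlin
// Flatten up to MAX_BOXES rows of (class, x1, y1, x2, y2) into one
// fixed-length FloatArray, padding unused slots with -1f as a sentinel.
val MAX_BOXES = 10
val VALUES_PER_BOX = 5

fun encodeLabels(rows: List<FloatArray>): FloatArray {
    val flat = FloatArray(MAX_BOXES * VALUES_PER_BOX) { -1f }
    rows.take(MAX_BOXES).forEachIndexed { i, row ->
        row.copyInto(flat, destinationOffset = i * VALUES_PER_BOX)
    }
    return flat
}
```

The decoding side (for example in a custom loss) would then slice the flat array back into VALUES_PER_BOX-sized chunks and skip the -1f padding.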
I hope my questions make sense and that I am not completely on the wrong track 😄
Paul Woitaschek
01/22/2023, 11:33 AM
Bastien Leveque
01/22/2023, 5:52 PM
oday
02/10/2023, 9:58 AM
Alex
02/12/2023, 10:33 AM
Alex
02/16/2023, 8:15 AM
import org.jetbrains.kotlinx.dl.api.core.activation.Activations
import org.jetbrains.kotlinx.dl.api.core.loss.Losses
import org.jetbrains.kotlinx.dl.api.core.metric.Metrics
import org.jetbrains.kotlinx.dl.api.core.optimizer.Adam
import org.jetbrains.kotlinx.dl.dataset.mnist
Hope someone would guide me, thank you
Alex
02/16/2023, 10:55 AM
Alex
02/18/2023, 1:50 AM
Alex
02/19/2023, 3:16 AM
kotlin-deeplearning-visualization
Alexandre Brown
02/22/2023, 1:46 AMzt
02/23/2023, 10:57 PMAlex
02/24/2023, 6:25 AMpackage <http://org.jetbrains.kotlinx.dataframe.examples.titanic.ml|org.jetbrains.kotlinx.dataframe.examples.titanic.ml>
Luc Girardin
03/03/2023, 10:54 PM
darkmoon_uk
03/22/2023, 11:06 AM
spierce7
03/27/2023, 6:49 PM
Giraudon
04/03/2023, 2:19 PM
*Julia Beliaeva* [21:06]
Hi, no, it's not possible right now. You'll need to train your model elsewhere and convert it to onnx format to use with KotlinDL in Android.
This answer was provided 3 months ago ... is it still current? I mean: will there be a version of KotlinDL allowing you to train a model from scratch, within an Android phone, using local data? Because in fact, I believe this is the "grail" most of us are waiting for! Then the next question is: can we write code using the current KotlinDL that will run the calculations used in a training process? Thanks a lot in advance for your help, Vinz
zaleslaw
04/13/2023, 7:52 AM
Stefan Oltmann
04/14/2023, 4:58 AM
zaleslaw
05/22/2023, 2:31 PM
zaleslaw
05/22/2023, 2:33 PM
spierce7
08/26/2023, 2:45 PM
Marcus Cvjeticanin
08/31/2023, 12:32 PM
James Yox
09/05/2023, 1:00 AM
(1, 52, 52, 3, 85)
then parse the bounding boxes... I can create a tensor of that shape from the data, but when I try to pull the info out, it doesn't seem to be a valid representation of bounding boxes. I could be missing many things, though, since this is my first foray into anything beyond a super-high-level ML library, so it could be something very obvious.
Model Documentation: https://github.com/onnx/models/tree/main/vision/object_detection_segmentation/yolov4
Peter
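For orientation, the trailing 85 in that shape follows the COCO YOLO convention: 4 box values, 1 objectness score, and 80 class scores, one such vector per 52×52×3 grid cell. A hedged sketch of walking a flattened copy of that tensor (the layout assumption is mine, and raw YOLOv4 heads typically also need sigmoid/anchor decoding per the model docs):

```kotlin
// Scan a flattened (52, 52, 3, 85) output and keep the 85-value
// vectors whose objectness (index 4) clears a threshold.
val GRID = 52
val ANCHORS = 3
val STRIDE = 85  // x, y, w, h, objectness, then 80 class scores

fun candidateBoxes(output: FloatArray, threshold: Float): List<FloatArray> {
    require(output.size == GRID * GRID * ANCHORS * STRIDE)
    val kept = mutableListOf<FloatArray>()
    for (cell in 0 until GRID * GRID * ANCHORS) {
        val base = cell * STRIDE
        if (output[base + 4] > threshold) {
            kept += output.copyOfRange(base, base + STRIDE)
        }
    }
    return kept
}
```

If every vector read this way looks like garbage, the usual culprit is a row-ordering mismatch between the tensor's memory layout and the index math.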
10/02/2023, 6:20 AM
Saher Al-Sous
10/18/2023, 3:08 PM
I'm working with Spring Boot. I loaded the model successfully and created the needed tensor, but I can't make a call to get the result from the model because of this error:
Caused by: org.tensorflow.exceptions.TFInvalidArgumentException: Expects arg[0] to be float but uint8 is provided
I checked the model signature and it was like this:
Signature for "serving_default":
    Method: "tensorflow/serving/predict"
    Inputs:
        "input_1": dtype=DT_FLOAT, shape=(-1, 299, 299, 3)
    Outputs:
        "dense_3": dtype=DT_FLOAT, shape=(-1, 41)
Signature for "__saved_model_init_op":
    Outputs:
        "__saved_model_init_op": dtype=DT_INVALID, shape=()
My tensor is a DT_UINT8 tensor with shape [299, 299, 3].
When I changed my tensor's data type into float like this:
val imageShape = TFloat32.tensorOf(runner.fetch(decodeImage).run()[0].shape())
val reshape = tf.reshape(
    decodeImage,
    tf.array(
        -1.0f,
        imageShape[0].getFloat(),
        imageShape[1].getFloat(),
        imageShape[2].getFloat()
    )
)
I got this error:
org.tensorflow.exceptions.TFInvalidArgumentException: Value for attr 'Tshape' of float is not in the list of allowed values: int32, int64
If someone is curious how I loaded the model, created the tensor and called it, here is the code below. Loading the model in `TFServices`:
fun model(): SavedModelBundle {
    return SavedModelBundle
        .loader("/home/***/src/main/resources/pd/")
        .withRunOptions(RunOptions.getDefaultInstance())
        .load()
}
Building the tensor and calling the model:
val graph = Graph()
val session = Session(graph)
val tf = Ops.create(graph)
val fileName = tf.constant("/home/***/src/main/resources/keyframe_1294.jpg")
val readFile = tf.io.readFile(fileName)
val runner = session.runner()
val decodingOptions = DecodeJpeg.channels(3)
val decodeImage = tf.image.decodeJpeg(readFile.contents(), decodingOptions)
val imageShape = runner.fetch(decodeImage).run()[0].shape()
val reshape = tf.reshape(
    decodeImage,
    tf.array(
        -1,
        imageShape.asArray()[0],
        imageShape.asArray()[1],
        imageShape.asArray()[2]
    )
)
val tensor = runner.fetch(reshape).run()[0]
val inputMap = mutableMapOf("input_tensor" to tensor)
println(tensor.shape())
println(tensor.dataType())
println(tensor.asRawTensor())
val result = tfService.model().function("serving_default").call(inputMap)
and I used this dependency:
implementation("org.tensorflow:tensorflow-core-platform:0.5.0")
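The two errors above point in one direction: the graph expects DT_FLOAT input, and tf.reshape's shape argument must be integer-typed, not float. One option is to cast inside the graph (TF Java offers tf.dtypes.cast) before reshaping with integer dimensions; another is to convert the pixels to floats in plain Kotlin before building the tensor at all. A minimal sketch of the latter (the function name and the [0, 1] scaling are my assumptions; the real preprocessing has to match how the model was trained):

```kotlin
// Convert packed 8-bit RGB pixels (one Int per pixel) into the
// DT_FLOAT range the model expects, here scaled to [0, 1].
fun toFloatPixels(rgb: IntArray): FloatArray {
    val out = FloatArray(rgb.size * 3)
    rgb.forEachIndexed { i, p ->
        out[i * 3] = ((p shr 16) and 0xFF) / 255f      // red
        out[i * 3 + 1] = ((p shr 8) and 0xFF) / 255f   // green
        out[i * 3 + 2] = (p and 0xFF) / 255f           // blue
    }
    return out
}
```

The resulting FloatArray can then be wrapped in a float tensor of shape (1, 299, 299, 3) instead of the uint8 tensor the decode op produced.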
Then I changed the whole code and used the Kotlin TensorFlow dependencies:
implementation("org.jetbrains.kotlinx:kotlin-deeplearning-api:0.5.2")
implementation("org.jetbrains.kotlinx:kotlin-deeplearning-tensorflow:0.5.2")
I loaded the model:
fun myModel(): SavedModel {
    return SavedModel.load("/home/***/src/main/resources/pd/")
}
and called for the prediction:
val file = File("/home/***/src/main/resources/keyframe_1294.jpg")
val byteArray = ImageIO.read(file)
val floatArray = ImageConverter.toRawFloatArray(byteArray)
val myResult = tfService.myModel().predictSoftly(floatArray, "dense_3")
println(myResult)
but I got this error:
Caused by: org.tensorflow.TensorFlowException: Op type not registered 'DisableCopyOnRead' in binary running on My Computer. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
Is there a fix for this, or a ready example I can learn from? All I want to do is use the model I generated with TensorFlow 2 in a Spring Boot application. Thank you
Akram Bensalem
11/08/2023, 11:48 AM
%use dataframe
To something similar to this:
%use dataframe as df
Aaron Vontell
11/26/2023, 3:58 PM
Javokhir Savriev
12/11/2023, 3:27 AM
Stefan Oltmann
01/10/2024, 8:37 PM
Javokhir Savriev
03/15/2024, 5:23 AM