# multiplatform
a
Hi, we are moving our REST endpoints from the Android codebase to KMP in order to reuse the network layer on both native platforms. After we moved everything to KMP (on a separate branch), we noticed significant performance degradation: garbage collections were triggered more frequently, and the app heap is 2-3x bigger than before, when we used the implementation from the Android codebase. Because of our business requirements, we are making 10 REST calls in parallel, and each of them has a large response.

Here are the differences:
- Android codebase implementation: Retrofit + kotlinx.serialization.json `Json`
- KMP implementation: Ktor + kotlinx.serialization.json `Json`

We noticed that performance is way better when we deserialise the response body as a stream (JVM) in the Android codebase. Is there a way to deserialise the response body as a stream in KMP, ideally without the JVM?
j
Retrofit's kotlinx.serialization converter is not deserializing as a stream, as there is no general-purpose streaming API for that library. The only streaming API available is JSON-only, and we currently do not implement it.
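The JSON-only streaming API referred to here is `Json.decodeFromStream` from kotlinx.serialization, which is JVM-only because it takes a `java.io.InputStream`. A minimal sketch, with a made-up `User` type for illustration:

```
import kotlinx.serialization.ExperimentalSerializationApi
import kotlinx.serialization.Serializable
import kotlinx.serialization.json.Json
import kotlinx.serialization.json.decodeFromStream
import java.io.InputStream

@Serializable
data class User(val id: Long, val name: String) // hypothetical payload type

// Decodes directly from the stream, avoiding one full String copy of the body.
@OptIn(ExperimentalSerializationApi::class)
fun decodeUsers(input: InputStream): List<User> =
    Json.decodeFromStream(input)
```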
a
Do you have a suggestion on which approach to take in KMP in order to efficiently handle large JSON responses from REST endpoints? Is Ktor + kotlinx.serialization.json `Json` the best option memory-wise?
j
Ktor has its own buffering layer, and then you'll need to UTF-8-decode those bytes into a giant string, which then gets fed to kotlinx.serialization, where the string is parsed and creates a bunch of allocations for string keys and values.
a
Something like this?
```
suspend inline fun <reified T : Any> decodeApiResponseFromString(
    json: Json,
    response: HttpResponse,
): ApiResponseResult<T> {
    return try {
        val bodyAsString = response.bodyAsText()
        val responseData = json.decodeFromString<T>(bodyAsString)
        ApiResponseResult.Success(responseData)
    } catch (e: Exception) {
        ApiResponseResult.Failure(ApiFailure.ApiParsingError())
    }
}
```
Or?
```
suspend inline fun <reified T : Any> decodeApiResponseFromChannel(
    json: Json,
    response: HttpResponse,
): ApiResponseResult<T> {
    return try {
        val source = response.bodyAsChannel().asSource().buffered()
        val responseData = json.decodeFromSource<T>(source)
        ApiResponseResult.Success(responseData)
    } catch (e: Exception) {
        ApiResponseResult.Failure(ApiFailure.ApiParsingError())
    }
}
```
m
I'm not using Ktor, but instead have my own wrapper around OkHttp and NSURLSession. For most of my decoding I'm loading the body as a String and then using kotlinx serialization. But for the large responses, I'm using Moshi's streaming API to decode the stream from the OkHttp response, and manually parsing the object instead of using the JSON-to-object mapping capabilities of Moshi or kotlinx serialization. On iOS I saved the response to a file, opened the file as an `NSInputStream`, used `NSJSONSerialization.JSONObjectWithStream` to decode, and then deleted the file when done. I also had to move the file after the download, because `NSURLSession` would delete it from under me.
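The Moshi streaming approach described above can be sketched roughly like this; the `Item` type and field names are hypothetical, and with OkHttp you would pass the response body's `source()` in directly so bytes are parsed as they arrive:

```
import com.squareup.moshi.JsonReader
import okio.BufferedSource

// Hypothetical payload type for illustration.
data class Item(val id: Long, val name: String)

// Reads a top-level JSON array item by item, so only one Item's worth of
// strings is alive at a time instead of the whole response body.
fun readItems(source: BufferedSource): List<Item> {
    val items = mutableListOf<Item>()
    JsonReader.of(source).use { reader ->
        reader.beginArray()
        while (reader.hasNext()) {
            var id = 0L
            var name = ""
            reader.beginObject()
            while (reader.hasNext()) {
                when (reader.nextName()) {
                    "id" -> id = reader.nextLong()
                    "name" -> name = reader.nextString()
                    else -> reader.skipValue() // ignore fields we don't map
                }
            }
            reader.endObject()
            items += Item(id, name)
        }
        reader.endArray()
    }
    return items
}
```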
a
Is there something which Retrofit is doing more optimally compared to Ktor when handling large JSON responses (from a memory and garbage collection perspective)? I'm trying to understand what is different if both of them are using the same `Json` from kotlinx.serialization via a `Converter.Factory`, but we see a difference in performance (heap size spike).
u
BTW I thought Ktor + kotlinx.serialization does stream JSON; there is a streaming API in kotlinx.serialization.
a
Maybe it's just not available in KMP?
u
KMP on Android is just Android, so not sure what you mean; maybe on native (iOS)? Although I'd still not expect it to be different.
a
Is it possible that Ktor + kotlinx.serialization is less performant compared to Retrofit + kotlinx.serialization because it creates a bunch of short-lived objects, which puts pressure on memory?
```
suspend inline fun <reified T : Any> decodeApiResponse(
    response: HttpResponse,
): T? {
    return try {
        response.body<T>()
    } catch (e: CancellationException) {
        throw e
    } catch (e: Exception) {
        Logger.e("decodeApiResponse", e)
        null
    }
}
```
We have a use case where 10 REST calls are executed in parallel and each of them is returning 1-2MB of JSON data.
u
Hard to say. Technically, not using streaming should mean that you'd allocate 10-20MB of RAM, but in practice who knows. BTW there is a Ktor channel, so maybe ask there, but I'd assume you'd need to do some profiling.