# apollo-kotlin
s
Apollo parsing on a low-end device has lots of non-running blocks. Is there any known explanation for this? The parsing also takes more time than the actual network operation. What is the best practice for investigating something like this?
m
> The parsing also takes more time than the actual network operation.
Double-checking: are you measuring parsing and network completely separately? By default, `fromJson` reads from a `BufferedSource`, so it will do network I/O.
s
I can share the rest of the systrace. This part starts after the interceptors are done with the network requests. Btw, I didn't expect to find any performance issue with Apollo parsing, but here I am.
I started by investigating why there are non-running parsing operations. There are no locks or anything like that.
m
The `NetworkEngine` only does the "handshake" part of the HTTP request; reading the body happens at the same time as the parsing.
๐Ÿ‘๐Ÿป 1
If you have no fragments, parsing should be amortized. With fragments, the situation is more complicated because we need to buffer some of the data (because we need to "rewind" the stream).
s
I think there are fragments. Lots of them. I am new to this part of the app. What should I look for?
m
Is it even possible to have "non-running" blocks? I understand the white blocks above to be time spent in `HomePage$fromJson` (where exactly, I have no idea).
> What should I look for
I would first try to isolate the network from the parsing (see the sketch below this list):
1. Measure the network time with a plain OkHttp call; make sure to read the body to the last byte.
2. Dump that body somewhere in memory and call `fromJson` on it.
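A minimal sketch of step 1, assuming an OkHttp 4.x client and an already-built GraphQL `Request`; the `measureNetworkOnly` helper name is illustrative, not part of Apollo's API:
```kotlin
import okhttp3.OkHttpClient
import okhttp3.Request
import kotlin.system.measureTimeMillis

// Illustrative helper: times a plain OkHttp call and drains the whole body,
// so the measurement covers the full transfer, not just the handshake.
fun measureNetworkOnly(client: OkHttpClient, request: Request): ByteArray {
    lateinit var body: ByteArray
    val elapsedMs = measureTimeMillis {
        client.newCall(request).execute().use { response ->
            // bytes() reads the response body to the last byte before returning.
            body = response.body!!.bytes()
        }
    }
    println("Network-only time: $elapsedMs ms for ${body.size} bytes")
    return body
}
```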
s
The systrace block also says it was running only 50% of the time; 40% of the time we're not running.
m
"Not running" means "blocked on I/O", I think?
s
If it is buffered, is it possible that it waits for the network response?
I will check that as much as I can. Since they run in blocks, I don't have a clear answer as to why it can't move on to the next bit of parsing.
m
> If it is buffered, is it possible that it waits for the network response?
Could be, the "bufferization" is decided per fragment:
```graphql
query {
  user {
    name
    bio
    # buffering happens here
    ... on Admin {
    }
  }
}
```
๐Ÿ‘๐Ÿป 1
I would dump a response into an okio `Buffer` in RAM, create a `JsonReader` from it, and measure the parsing time with `operation.parseJsonResponse(jsonReader)`.
I expect that to use 100% of one core
๐Ÿ‘๐Ÿป 1
(or something close)
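A rough sketch of that measurement, assuming Apollo Kotlin 4.x package names (3.x uses `com.apollographql.apollo3.*`) and an assumed generated `HomeFeedQuery` operation; `measureParseTime` is an illustrative helper, not part of the library:
```kotlin
import com.apollographql.apollo.api.json.BufferedSourceJsonReader
import com.apollographql.apollo.api.parseJsonResponse
import okio.Buffer
import kotlin.system.measureTimeMillis

// Assumed generated operation; substitute the real query and its variables.
fun measureParseTime(jsonBody: ByteArray) {
    val query = HomeFeedQuery()

    // Copy the raw response into an in-memory okio Buffer so that parsing
    // never touches the network or disk.
    val buffer = Buffer().write(jsonBody)

    val elapsedMs = measureTimeMillis {
        // parseJsonResponse walks the JSON with the generated adapters;
        // with I/O out of the picture it should keep one core close to 100% busy.
        val response = query.parseJsonResponse(BufferedSourceJsonReader(buffer))
        checkNotNull(response.data)
    }
    println("Parsed ${jsonBody.size} bytes in $elapsedMs ms")
}
```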
s
Yes, every bit of those blocks is a fragment.
I will do that. Just to set expectations right, the buffer approach is supposed to be faster, right?
m
Yeah, it's not doing any networking, so it should be faster.
s
Let me experiment with that.
๐Ÿ‘ 1
I am a bit confused about the buffer. If the response is written into a buffer, why is the request process long? Shouldn't the request itself finish after the first byte, with the rest of the logic executing in the parsing part? If it is a really dumb question, it's because I am confused about what happens when.
m
Depends where you're putting your probes and what you're calling the request process.
I would have expected to see more of Apollo in your trace stack though. It looks like `apollo_fetch_HomeFeed` is calling `ResponseAdapter$Data.fromJson()` directly?
s
This is systrace instrumentation added by codegen, only for our own code. I will include Apollo classes in the next run.
Also, our Apollo version is 4.1.1, quite old.
m
This part hasn't moved for some time, so I wouldn't expect much difference on newer versions (but still worth a try!)
๐Ÿ‘๐Ÿป 1
s
Those red parts are the empty pieces from the first run, so it is not sleeping. That's good news.
๐Ÿ‘ 1
It looks like 4-5 layers of fragments.
m
Can you share some rough numbers? Size of the JSON, time to parse it? FWIW, we are tracking JSON parsing time here
s
The network response is around 50k bytes. Let me check the JSON size on Monday. Tbh, at this moment I think it's the way we use Apollo and how inefficient low-end devices are.
m
Something else that would be interesting is to compare with something like Moshi. It should typically be in the same range, although Apollo will be slightly slower due to fragments.
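A possible baseline for that comparison, assuming Moshi is on the classpath; the `measureMoshiWalk` helper is illustrative and only walks the raw JSON into maps/lists rather than typed models, so it's a lower bound on Moshi's parsing cost:
```kotlin
import com.squareup.moshi.JsonReader
import okio.Buffer
import kotlin.system.measureTimeMillis

// Illustrative baseline: walk the same JSON with Moshi's streaming reader.
fun measureMoshiWalk(jsonBody: ByteArray) {
    val reader = JsonReader.of(Buffer().write(jsonBody))
    val elapsedMs = measureTimeMillis {
        // readJsonValue() traverses the whole document into Maps/Lists.
        reader.readJsonValue()
    }
    println("Moshi walked ${jsonBody.size} bytes in $elapsedMs ms")
}
```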
s
I can also experiment with that. Is kotlinx.serialization an option?
m
Sure. I said Moshi because the JsonReader code was taken from Moshi, but kotlinx.serialization should be relatively similar.