# apollo-kotlin
n
We had a strange, transient issue (I have no evidence that it was transient) with a mutation yesterday: it seemed to be cached and had overrun the maximum field length of the cache. We cleared the app data and the issue went away. The documentation in the code says that mutations use a fetch policy of `NETWORK_ONLY`, so it seems strange that the SQLite cache would be used at all for any of this. I'll update to the latest beta and see if it happens again.
m
Mutations do always reach the network, but they update the cache with the returned data.
🥇 1
w
> mutations use a fetch policy of `NETWORK_ONLY`

If you return something from a mutation, it will get updated in the cache, so it's not like mutations never use it.
🥈 1
💯 1
😄 1
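For context, a minimal sketch of what's described above, assuming a generated `UpdateNoteMutation` class and the SQLite normalized cache artifact (the mutation name, field names, and URL here are illustrative, not from this thread):

```kotlin
// Illustrative sketch only: UpdateNoteMutation is a hypothetical generated class.
import com.apollographql.apollo3.ApolloClient
import com.apollographql.apollo3.cache.normalized.normalizedCache
import com.apollographql.apollo3.cache.normalized.sql.SqlNormalizedCacheFactory

suspend fun runMutation() {
    val client = ApolloClient.Builder()
        .serverUrl("https://example.com/graphql")
        // Responses are normalized into this SQLite-backed cache.
        // (SqlNormalizedCacheFactory constructor arguments vary by platform.)
        .normalizedCache(SqlNormalizedCacheFactory("apollo.db"))
        .build()

    // The mutation always goes to the network (fetch policies apply to queries),
    // but whatever the server returns is still written into the normalized cache,
    // which is how an oversized field can end up in SQLite.
    client.mutation(UpdateNoteMutation(id = "42", body = "…")).execute()
}
```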
m
I'm curious what the maximum field length is
Do you remember the exception you had?
The SQLite limit is something like 2^31-1 bytes, so it seems pretty unlikely to be reached 🤔
n
The object being returned was pretty long, but no, sorry, the actual exception was lost forever. The guys think it said something related to this: https://stackoverflow.com/questions/45677685/sqlite-cursorwindow-limit-how-to-avoid-crash
👀 1
m
A 2MB limit doesn't sound unreachable indeed...
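To make the numbers concrete: the crash in the Stack Overflow link above comes from Android's `CursorWindow`, which on many versions defaults to roughly 2 MiB per window, so a single cached field whose UTF-8 encoding exceeds that can blow up when read back. A self-contained way to sanity-check a payload (the threshold constant below is ours, not something read from the platform):

```kotlin
// Default CursorWindow size on many Android versions is about 2 MiB.
// This constant is our own assumption; it is not queried from the platform.
const val CURSOR_WINDOW_BYTES = 2 * 1024 * 1024

// True when the string's UTF-8 encoding fits under the assumed window size.
fun fitsInCursorWindow(json: String): Boolean =
    json.toByteArray(Charsets.UTF_8).size < CURSOR_WINDOW_BYTES

fun main() {
    val small = """{"id":1,"history":[]}"""
    val big = "x".repeat(3 * 1024 * 1024) // ~3 MiB of data
    println(fitsInCursorWindow(small)) // true
    println(fitsInCursorWindow(big))   // false
}
```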
n
This was, for now, an isolated incident during development, and we're still not sure how we got into this state. We have ways to reduce the size of our data (for example, not adding history records when the data is the same as it was), which will reduce the size of the queries. I just wanted to let you know, as it seemed significant and strange.
🙏 1
We may be abusing GraphQL a little in our use case. We are storing a JSON object in a field that the clients decode, instead of decomposing it into a hierarchy of GraphQL objects, mainly because we don't know exactly what we might need in the future, so we set up a generic record structure.
m
I see... 2MB doesn't sound huge. I think I remember people putting images in SQLite BLOBs. I wonder why that doesn't crash.
Or maybe it does...
TBH, SQLite isn't really the best fit for what we're doing, since our usage is really more key/value.
But SQLite is so battle-tested that it's a natural fit. The transactional aspect of it is also reassuring.
If the 2MB limit ever becomes a bigger issue, adding a different backend is something to explore.
n
We'll keep it in mind now that we've hit it. The only reason our records get large is that we keep some historical values and weren't filtering duplicates, so filtering them will reduce the chances we exceed 2MB.
👍 1
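The deduplication idea above, skipping a history record when its data is the same as the previous one, can be sketched in plain Kotlin (the record shape is invented for illustration; the real schema isn't shown in this thread):

```kotlin
// Hypothetical record shape; the actual schema isn't shown in the thread.
data class HistoryRecord(val timestamp: Long, val payload: String)

// Keep a record only when its payload differs from the previous kept record,
// so runs of identical values collapse to a single entry.
fun dedupeHistory(records: List<HistoryRecord>): List<HistoryRecord> =
    records.fold(mutableListOf<HistoryRecord>()) { acc, r ->
        if (acc.lastOrNull()?.payload != r.payload) acc.add(r)
        acc
    }

fun main() {
    val history = listOf(
        HistoryRecord(1, "a"),
        HistoryRecord(2, "a"), // same payload as previous, dropped
        HistoryRecord(3, "b"),
    )
    println(dedupeHistory(history).size) // 2
}
```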