# apollo-kotlin
f
👋 hi! we have a significant number of Apollo internal errors happening constantly across all our apps, and we're currently trying to understand why they're happening and what we can do about that. Full original context is in this thread, but the summary is that writing to the cache seems to fail consistently for some of our users, and the main culprit seems to be `SQLiteBlobTooBigException`s.

We're currently planning to experiment with different cursor sizes to see if that helps, but we would also love to hear from you on a few points:
• can you think of anything else we could look into or investigate?
• have you seen any similar cases before that we could try to extract a learning or insight from?
• would it be possible for the apolloExceptionHandler to expose more than just the exception thrown, so we can maybe learn which queries are causing the failure, or anything else that might help narrow things down?
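For illustration, here's a minimal Kotlin sketch of the kind of diagnostics a settable exception handler can collect. The name `cacheExceptionHandler` is a hypothetical stand-in, not the Apollo API; the point is only that a handler can record the exception class and message for crash reporting:

```kotlin
// Hypothetical stand-in for a global cache exception handler such as
// apolloExceptionHandler ((Exception) -> Unit); names here are illustrative.
var cacheExceptionHandler: (Throwable) -> Unit = { it.printStackTrace() }

// Collected diagnostics, so failures can be grouped by type and message.
val seen = mutableListOf<String>()

fun installDiagnosticHandler() {
    cacheExceptionHandler = { e ->
        seen += "${e::class.simpleName}: ${e.message}"
    }
}

fun main() {
    installDiagnosticHandler()
    // Simulate a cache-write failure. SQLiteBlobTooBigException is
    // Android-only, so a plain RuntimeException stands in for it here.
    cacheExceptionHandler(RuntimeException("Row too big to fit into CursorWindow"))
    println(seen.first())
}
```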
b
The ideal would be to have access to the .db file when that happens, to investigate; of course that's not easy 🙂 We could certainly add more diagnostics to the exception surfaced through `apolloExceptionHandler`.
👍
(I think that cursor size link is incorrect)
Do you store any "big" structures in your cache? Things like images or large lists?
Looking at this comment on a past occurrence of the same exception, the culprit was a base64-encoded image used as a mutation argument (arguments are stored as part of the cache keys). Do you have anything like this, by any chance?
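To see why an argument like that hurts, here's a hedged Kotlin sketch. The key format below is a simplification for illustration, not Apollo's exact encoding, but it shows how a base64 argument balloons a cache key:

```kotlin
import java.util.Base64

// Simplified cache-key builder: field arguments become part of the key.
// (Illustrative format only, not Apollo's actual key encoding.)
fun fieldKey(name: String, args: Map<String, String>): String =
    name + args.entries.joinToString(",", "(", ")") { "${it.key}:${it.value}" }

fun main() {
    val fakeImage = ByteArray(1_000_000) // ~1 MB of raw image bytes
    val b64 = Base64.getEncoder().encodeToString(fakeImage)
    val key = fieldKey("uploadAvatar", mapOf("image" to b64))
    // Base64 inflates size by ~4/3, so the key alone is over 1.3 MB,
    // well on the way to SQLite's default ~2 MB CursorWindow limit.
    println(key.length)
}
```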
f
(I think that cursor size link is incorrect)
fixed it, thanks! I don't think we have that, but we certainly have queries that are large. We're currently working to split them up, but it requires quite some effort and it's something we'll have to address gradually. Do you have a rough estimate of what "big" is? It would definitely help to get more confidence that they're the culprit, so we can prioritize splitting them up.
b
It’s actually the size of records (or their keys) that’s likely causing the issue, not the queries per se, although they are related. A record may have a lot of fields, or some fields with large values (including lists). Do you notice anything when looking at the cache with the IDE plugin? It's difficult to quantify this, though, especially since I don’t exactly understand what causes the exception. I guess the 2 MB window size is the max size that one record should be?
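A rough way to reason about this is to estimate a record's serialized size against the ~2 MB default CursorWindow limit that `SQLiteBlobTooBigException` points at. This is a back-of-the-envelope Kotlin sketch with a made-up estimator, not Apollo's actual serialization:

```kotlin
// Default Android CursorWindow size that the exception message refers to.
const val CURSOR_WINDOW_BYTES = 2 * 1024 * 1024

// Very rough size estimate for a record: key lengths plus stringified
// field values plus a small per-field overhead. Illustrative only.
fun estimateRecordBytes(record: Map<String, Any?>): Int =
    record.entries.sumOf { (k, v) -> k.length + v.toString().length + 8 }

fun main() {
    // A single record holding one very large list field.
    val bigList = List(300_000) { "item-$it" }
    val record = mapOf("id" to "User:1", "items" to bigList)
    val bytes = estimateRecordBytes(record)
    println("~$bytes bytes, over window limit: ${bytes > CURSOR_WINDOW_BYTES}")
}
```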
f
so what you're saying is that even if we split our queries, if we still end up with records that are too large, we would potentially still have the same problem? I'll spend some time with the IDE plugin to get a better idea of how large our keys and records can be, thanks
b
If your cache keys are well configured, yes, that's what's expected, since the fields will end up in the same records thanks to the normalization process. And yes, having a good look at what's inside the cache is often insightful and may reveal some issues.
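The merging behavior described above can be sketched in a few lines of Kotlin. This is a toy model of normalization, not Apollo's implementation: two separate queries that both resolve to the same cache id write into one shared record, so splitting the queries doesn't shrink the record itself:

```kotlin
// Toy normalized cache: records keyed by cache id, fields merged on write.
val cache = mutableMapOf<String, MutableMap<String, Any?>>()

fun writeRecord(key: String, fields: Map<String, Any?>) {
    cache.getOrPut(key) { mutableMapOf() }.putAll(fields)
}

fun main() {
    // Query A selects only the name; query B selects a large friends list.
    writeRecord("User:1", mapOf("name" to "Ada"))
    writeRecord("User:1", mapOf("friends" to List(3) { "User:${it + 2}" }))
    // Both queries' fields end up merged into the single User:1 record.
    println(cache["User:1"])
}
```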