# apollo-kotlin
v
Hey guys. I have a strange problem that I need an opinion on. We have an active subscription and a mutation that return the same data. The problem is that the data from the subscription comes back faster than the data from the mutation and is in a more recent state, but we cache the last data we get from the mutation, so we end up with an outdated state in the cache. I'm wondering if there is a way around this other than simply ignoring all data from the mutation. Thanks in advance
m
Hi 👋 Good question! Ignoring the data from the mutation sounds like the way to go
You can use ApolloCacheHeaders.DO_NOT_STORE to avoid storing anything in the cache
v
I saw that. How would I go about specifying it for mutation but not subscriptions?
m
let me check
Something like this should do it:
apolloClient
    .mutate(Mutation(email = Input.fromNullable(email)))
    .toBuilder()
    .cacheHeaders(
        CacheHeaders.builder()
            .addHeader(ApolloCacheHeaders.DO_NOT_STORE, "true")
            .build()
    )
    .build()
    .await()
v
Oh great. I'll give it a try
👍 1
m
I think you can put whatever instead of "true" and it'll work 🙈
Let us know how it goes
v
So the solution would work, but there are still scenarios where the websocket response comes back slower than the mutation, resulting in longer wait times. Our backend provides a field 'generatedAt', which is a 'Long' representing a timestamp in microseconds. Is there a good way to utilize this field so that I can simply ignore a fragment if its 'generatedAt' is less than what is in the cache?
m
You might be able to do something with an ApolloInterceptor
Mmmm no sorry that won't work, all the application interceptors are registered before the cache ones so I don't think you'll be able to bypass the cache there...
One thing you could try is using a NO_CACHE policy and saving to the cache manually using apolloClient.apolloStore.write(operation, data). That way you have full control over what ends up in the cache
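Roughly like this (just a sketch assuming the 2.x coroutines API: SaveEmailMutation, the saveEmail.generatedAt accessor and the lastAppliedGeneratedAt bookkeeping are placeholders for your own operation and state, so double-check the exact calls against your version):

import com.apollographql.apollo.ApolloClient
import com.apollographql.apollo.api.Input
import com.apollographql.apollo.cache.ApolloCacheHeaders
import com.apollographql.apollo.cache.CacheHeaders
import com.apollographql.apollo.cache.normalized.ApolloStore
import com.apollographql.apollo.coroutines.await

class EmailRepository(
    private val apolloClient: ApolloClient,
    private val apolloStore: ApolloStore // i.e. apolloClient.apolloStore
) {

    // Most recent generatedAt we have applied to the cache, updated from both the
    // subscription handler and successful mutations (placeholder bookkeeping).
    @Volatile
    private var lastAppliedGeneratedAt: Long = 0L

    suspend fun saveEmail(email: String) {
        val mutation = SaveEmailMutation(email = Input.fromNullable(email))

        // 1) Run the mutation without letting it touch the cache.
        val response = apolloClient
            .mutate(mutation)
            .toBuilder()
            .cacheHeaders(
                CacheHeaders.builder()
                    .addHeader(ApolloCacheHeaders.DO_NOT_STORE, "true")
                    .build()
            )
            .build()
            .await()

        val data = response.data ?: return
        // Placeholder accessor: wherever generatedAt lives in your mutation payload.
        val generatedAt = data.saveEmail.generatedAt

        // 2) Only write the response to the cache if it is newer than anything we
        //    have already applied (e.g. from the subscription).
        if (generatedAt > lastAppliedGeneratedAt) {
            lastAppliedGeneratedAt = generatedAt
            // There is also a writeAndPublish variant if query watchers should be notified.
            apolloStore.write(mutation, data).execute()
        }
    }
}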
v
Yeah, I thought about that too. It's a bit messy in my opinion. I was looking into using the cache itself, but I'm not sure it makes sense. It would look something like this if LruNormalizedCache were an open class:
class CustomLruNormalizedCache(evictionPolicy: EvictionPolicy) :
    LruNormalizedCache(evictionPolicy) {

    companion object {
        private const val GENERATED_AT_FIELD = "generatedAt"
    }

    @SuppressWarnings("NestedBlockDepth")
    override fun merge(recordSet: Collection<Record>, cacheHeaders: CacheHeaders): Set<String> {

        recordSet.recordWithGeneratedAt()?.let { record ->
            record.generatedAt()?.let { newGeneratedAt ->

                loadRecord(record.key, CacheHeaders.NONE).generatedAt()?.let { lastGeneratedAt ->

                    Timber.d("Last: $lastGeneratedAt | New: $newGeneratedAt")
                    // Incoming data is older than what is already cached: skip the merge.
                    if (newGeneratedAt < lastGeneratedAt) {
                        return emptySet()
                    }
                }
            }
        }

        return super.merge(recordSet, cacheHeaders)
    }

    private fun Collection<Record>.recordWithGeneratedAt(): Record? =
        firstOrNull { it.hasField(GENERATED_AT_FIELD) }

    private fun Record?.generatedAt(): Long? = try {
        (this?.fields?.get(GENERATED_AT_FIELD) as? String)?.toLong()
    } catch (e: Throwable) {
        null
    }
}
m
Yep, that could work
Just copy/paste the whole class, it's not that big (<100 lines)
v
LruNormalizedCache?
m
Yep
I don't see any reason why you shouldn't be able to pass your own CustomLruNormalizedCacheFactory
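When building the client it would look roughly like this (a sketch: CustomLruNormalizedCacheFactory is the thin wrapper you'd write to return your CustomLruNormalizedCache from its create() override, mirroring LruNormalizedCacheFactory, and CacheKeyResolver.DEFAULT stands in for whatever resolver you already use):

import com.apollographql.apollo.ApolloClient
import com.apollographql.apollo.cache.normalized.CacheKeyResolver
import com.apollographql.apollo.cache.normalized.lru.EvictionPolicy

val apolloClient = ApolloClient.builder()
    .serverUrl("https://your.server/graphql")
    .normalizedCache(
        // Hypothetical factory around the CustomLruNormalizedCache shown above.
        CustomLruNormalizedCacheFactory(
            EvictionPolicy.builder()
                .maxSizeBytes(10L * 1024 * 1024)
                .build()
        ),
        CacheKeyResolver.DEFAULT // or your own resolver
    )
    .build()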
v
That requires a dependency on the NYTimes cache. And I would not want my code to diverge by copy-pasting something. I guess I could also write my own
m
Yep, writing a lock-free cache like the Guava one (which the NYTimes cache uses under the hood) is not an easy thing though, so I'd recommend reusing that at least
Or if you ever end up writing your own Kotlin lock-free cache, please contribute it back, we'll need something like this for proper multiplatform support 🙂
v
I see, did not realize it would not be as simple as I thought