# announcements
d
I'm running into a concurrency issue I don't understand how to fix. I have a function in my repository called `getTile`, which may need to call `renewAccessToken`. If I make ten parallel calls to `getTile`, and I need to renew the token, one call should do the renewing, and the rest should block until it is done. I'm wondering if I want something that emits a stream of (mostly-identical) access tokens, which `getTile` pulls the latest access token from. The stream then chooses whether to renew or present a stale value.
n
maybe not the best solution, but why not just make `getTile` synchronized?
d
Thanks. I ended up trying that, but I saw advice to use a mutex instead, so I'm trying that right now. This is what I have so far; it seems to work:
```kotlin
class AccessTokenCache(private val renewAsync: suspend () -> Deferred<AccessTokenResponse>) {
    private var cache: AccessToken? = null
    private val mutex = Mutex()
    private var currentRequest: Deferred<AccessTokenResponse>? = null

    suspend fun get(): AccessTokenResponse {
        val cached = cache
        return if (cached != null) {
            Either.right(cached)
        } else {
            mutex.withLock {
                Timber.i("Acquired lock")
                val request = currentRequest
                if (request != null) {
                    request.await()
                } else {
                    val request = renewAsync()
                    currentRequest = request
                    request.await()
                        .map {
                            cache = it
                            it
                        }
                }
            }
        }
    }
}
```
n
but obviously Guava is a large enough library that it's not worth bringing in just for the cache.
d
Thanks
m
I just had the same problem a few days ago for OAuth access tokens that occasionally need refreshing 🙂
d
Seems we came up with the exact same solution
👍 1
m
I think yours may have a race condition
d
Can you explain?
m
No, nevermind. You never unset `currentRequest` 🙂
It’s just odd to make an async request within a locked mutex.
I have the same issue though.
Needs some redesign 🙂
oh, I have no deferred though
Our implementations are different
d
How do you get your token then? Doesn't that require a network request?
What would happen if I did unset `currentRequest`? Failing to do so was a bug.
m
Yeah, that’s why I said I do the same - an async request inside a locked mutex. But because we do that, there’s no need for a deferred.
d
The deferred is there because I send off a bunch of api requests in parallel, each of which needs a token
m
Let’s say call 1 starts the request and sets `currentRequest`. Call 2 comes in but is blocked at the beginning of the function because the mutex is locked. Call 1 finishes, unsets `currentRequest`, and returns the result. Call 2 resumes because the mutex is available. It checks `currentRequest`, which is no longer set, and thus starts a new request. `await()` will likely never be called.
d
So `get` will be called in the uncached case multiple times, and should make only one request for a new token, but give that token to every caller.
m
Yes, but your code wouldn’t do that if you unset `currentRequest`.
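For illustration only, a deferred-based version that does unset it without hitting that race could look roughly like this (stand-in String token, invented names, not anyone's actual code): decide under the lock who starts the renewal, let every caller await the same Deferred, and clear the field only after the result is cached.

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Deferred
import kotlinx.coroutines.async
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

// Sketch only: the lock decides who starts the renewal, everyone awaits the
// same Deferred outside the lock, and inFlight is cleared only after the
// result has been written to the cache.
class SingleFlightTokenCache(
    private val scope: CoroutineScope,
    private val renew: suspend () -> String,
) {
    private var cache: String? = null
    private var inFlight: Deferred<String>? = null
    private val mutex = Mutex()

    suspend fun get(): String {
        cache?.let { return it }
        val request = mutex.withLock {
            cache?.let { return it }              // another caller finished meanwhile
            inFlight ?: scope.async {
                val token = renew()
                mutex.withLock {
                    cache = token                 // publish the token first...
                    inFlight = null               // ...then clear the in-flight marker
                }
                token
            }.also { inFlight = it }
        }
        return request.await()                    // all concurrent callers share this
    }
}
```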
d
Thanks for the advice, I think I have a better implementation now:
```kotlin
class AccessTokenCache(private val renew: suspend () -> AccessTokenResponse) {
    private var cache: AccessToken? = null
    private val mutex = Mutex()

    suspend fun get(): AccessTokenResponse {
        val cached = cache
        if (cached != null) {
            Timber.i("Got cached without lock")
            return Either.right(cached)
        }

        mutex.withLock {
            val cached = cache
            return if (cached != null) {
                Timber.i("Acquired lock, got cached")
                Either.right(cached)
            } else {
                Timber.i("Acquired lock, renewing")
                renew().map {
                    cache = it
                    it
                }
            }
        }
    }
}
```
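A quick way to sanity-check the behaviour, with the Arrow/Either and Timber bits replaced by plain types so it runs standalone (ten parallel calls, expecting exactly one renewal):

```kotlin
import java.util.concurrent.atomic.AtomicInteger
import kotlinx.coroutines.async
import kotlinx.coroutines.awaitAll
import kotlinx.coroutines.delay
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

// Stand-in cache with the same shape as above, minus Either and logging.
class DemoTokenCache(private val renew: suspend () -> String) {
    private var cache: String? = null
    private val mutex = Mutex()

    suspend fun get(): String {
        cache?.let { return it }                  // fast path, no lock
        mutex.withLock {
            cache?.let { return it }              // re-check under the lock
            return renew().also { cache = it }
        }
    }
}

fun main() = runBlocking {
    val renewals = AtomicInteger(0)
    val cache = DemoTokenCache {
        delay(100)                                // pretend network call
        "token-${renewals.incrementAndGet()}"
    }
    val tokens = (1..10).map { async { cache.get() } }.awaitAll()
    println(tokens.toSet())                       // a single distinct token
    println("renewals = ${renewals.get()}")       // expected: 1
}
```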
m
No idea what `map` does but looks better now.
d
The thing with `Either` and `map` is from Arrow; it's how I do error handling in this function. The `map` sets the cache to the token value if the call to get the token succeeded.
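For illustration (toy values, not my actual error type), `map` only touches the success side of an `Either`:

```kotlin
import arrow.core.Either

// Either.map transforms the Right (success) value and leaves a Left (failure)
// untouched, which is why the cache only gets written on a successful renewal.
fun demo() {
    val ok: Either<Throwable, Int> = Either.Right(41)
    val failed: Either<Throwable, Int> = Either.Left(IllegalStateException("renewal failed"))

    println(ok.map { it + 1 })       // success: now carries 42
    println(failed.map { it + 1 })   // failure: passes through unchanged
}
```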
Thanks a lot for all your help
p
You should also mark the cache with `@Volatile` to ensure that the field is safely published, I’d say (otherwise you’ll be getting unnecessary attempts at acquiring the lock). Also, do you ever clear the cached entry? (If you do, then without volatile that may fail, and threads other than the clearing one might see the old value.) You’re effectively doing this: https://en.wikipedia.org/wiki/Double-checked_locking#Usage_in_Java
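Applied to the snippet above, that might look roughly like this (simplified types again, plus a hypothetical clear() for the "do you ever clear the cached entry" case):

```kotlin
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

// Double-checked locking with a @Volatile field: the unlocked fast path sees a
// token published by another thread, so callers stop taking the mutex once a
// token exists, and clearing the entry is visible to all threads as well.
class VolatileTokenCache(private val renew: suspend () -> String) {
    @Volatile
    private var cache: String? = null
    private val mutex = Mutex()

    suspend fun get(): String {
        cache?.let { return it }                  // fast path, no lock
        mutex.withLock {
            cache?.let { return it }              // re-check under the lock
            return renew().also { cache = it }
        }
    }

    fun clear() {                                 // e.g. after the server rejects the token
        cache = null
    }
}
```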