# coroutines
a
Hi, we are currently rolling out an updated version of our Android app, which contains an update from Kotlin 1.2.71 -> 1.3.0 and coroutines 0.26.0 -> 1.0.1. We observe unexpected crashes that seem to somehow escape the try{}catch{} block. I described the problem and what I figured out so far in a bit more detail here: https://github.com/Kotlin/kotlinx.coroutines/issues/873 . I'm quite puzzled that some of the exceptions seem to be forwarded to the uncaught exception handler, and I'm unsure whether our general approach is wrong or this is a bug.
I'm considering starting the actor in a custom coroutine scope that just logs such uncaught exceptions instead of crashing the app:
```kotlin
val LoggingExceptionHandler = CoroutineExceptionHandler { _, t ->
    println("coroutine exception handler: $t")
}
val customScope = GlobalScope + LoggingExceptionHandler
customScope.actor { ... }
```
What do you guys think, would this be a workable temporary hotfix, or would it bring more harm than help?
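A slightly fuller sketch of that hotfix, assuming kotlinx.coroutines 1.0.x (where `actor` is marked obsolete but available). The actor body, the `AtomicReference` used to make the handler observable, and the simulated failure are all illustrative, not from the real app:

```kotlin
import java.util.concurrent.atomic.AtomicReference
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.actor

// Records the last uncaught exception so the sketch is observable.
val lastHandled = AtomicReference<Throwable?>(null)

val LoggingExceptionHandler = CoroutineExceptionHandler { _, t ->
    lastHandled.set(t)
    println("coroutine exception handler: $t")
}
val customScope = GlobalScope + LoggingExceptionHandler

fun main() = runBlocking {
    val mailbox = customScope.actor<Int> {
        for (msg in channel) {
            if (msg < 0) error("negative message: $msg")  // simulated failure
        }
    }
    mailbox.send(1)
    mailbox.send(-1)  // actor fails; the handler logs instead of crashing
    delay(200)        // give the handler a moment to run
}
```

Because the actor is a root coroutine here (GlobalScope has no parent job), its failure goes to the `CoroutineExceptionHandler` rather than the default uncaught handler. A common variation is `CoroutineScope(SupervisorJob() + LoggingExceptionHandler)` if you also want an explicit job to cancel later.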
t
Be sure to read https://github.com/Kotlin/kotlinx.coroutines/issues/830 . I've also faced some random exceptions being rethrown. The only approach that worked 100% was to stop relying on exceptions and instead return a sealed class result that says success or error, with the exception as a param. I'm pretty sure there's another race condition somewhere but could not get any attention on it 😞
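A minimal sketch of that sealed-class result pattern: the function never lets an exception cross the coroutine boundary, and callers pattern-match on the result instead. `FetchResult` and `fetchSafely` are made-up names, and the body stands in for the real OkHttp call:

```kotlin
// Sealed result: callers handle success and failure explicitly,
// so no exception ever propagates through the coroutine machinery.
sealed class FetchResult {
    data class Success(val body: String) : FetchResult()
    data class Failure(val cause: Throwable) : FetchResult()
}

fun fetchSafely(url: String): FetchResult =
    try {
        // a real implementation would run the OkHttp call here
        if (url.startsWith("https://")) FetchResult.Success("ok")
        else throw IllegalArgumentException("unsupported scheme: $url")
    } catch (t: Throwable) {
        FetchResult.Failure(t)
    }

fun main() {
    when (val r = fetchSafely("https://example.com")) {
        is FetchResult.Success -> println("body: ${r.body}")
        is FetchResult.Failure -> println("failed: ${r.cause}")
    }
}
```

The `when` is exhaustive over the sealed hierarchy, so the compiler forces every call site to handle the error case.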
a
thanks for the link! The failing function in our app is actually mostly doing network calls via OkHttp, exactly as explained in ticket #830. So I'll definitely give the mentioned workarounds a try. However, my test cases don't use any `suspendCancellableCoroutine`, just `coroutineScope` and `async`/`await`. So the cause here should be a different one?
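For reference, a minimal sketch of the pattern in question, where a failure inside `async` is rethrown by `coroutineScope` and should, in principle, be caught by the surrounding try/catch; the function names and the simulated failure are illustrative:

```kotlin
import kotlinx.coroutines.*

// A child failure inside coroutineScope cancels only that scope and is
// rethrown to the caller, so the surrounding try/catch should see it.
suspend fun load(): String = coroutineScope {
    val deferred = async { error("network failure") }  // simulated network call
    deferred.await()
}

fun loadOrMessage(): String = runBlocking {
    try {
        load()
    } catch (t: Throwable) {
        "caught: ${t.message}"
    }
}

fun main() {
    println(loadOrMessage())
}
```

The reported bug is precisely that, in some cases, the exception is additionally forwarded to the uncaught exception handler despite being caught here.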
t
I'm not sure; none of the workarounds there worked 100% when I went to prod. So I'm pretty sure there's another issue somewhere. Let's hope someone sees your new issue and connects the dots to find the real root cause.
a
I can confirm: even with the workaround mentioned in GitHub ticket #830, we are still seeing too many crashes.