# coroutines
r
If I want to make a child coroutine scope for my Repository, so that the coroutines it launches are tied to the application lifecycle rather than that of a view model or something more narrow, what is the correct way to do that? Something like this?
```kotlin
// MyRepository.kt
class MyRepository<T>(coroutineScope: CoroutineScope) {
    private val scope = coroutineScope + Dispatchers.Default
    private val dataSource: DataSource<T> = DataSource()

    fun add(newValue: T) {
        scope.launch {
            dataSource.new(newValue)
        }
    }
}

// Example usage in a Compose Desktop app
// Main.kt
@Composable
@Preview
fun App() {
    val appScope = rememberCoroutineScope { Dispatchers.Main }
    val repository = MyRepository(appScope)
    MainScreen(repository)
}
```
c
You've got the basic idea right, but you'll also want to give the Repository's coroutine scope a `SupervisorJob()` so one failing task doesn't cancel the entire Repo (and thus also the parent scope passed into the Repo). And when you add that job, make sure to set the parent job, so cancellation of the application scope also flows down and cancels the Repository.
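Concretely, something like this, just as a sketch of the shape (the `dataSource` call is commented out since `DataSource` isn't shown here):

```kotlin
import kotlinx.coroutines.*

// Sketch only: the same MyRepository shape as your snippet, with a SupervisorJob
// that is explicitly parented to the scope passed in.
class MyRepository<T>(coroutineScope: CoroutineScope) {
    private val scope = coroutineScope +
        Dispatchers.Default +
        SupervisorJob(parent = coroutineScope.coroutineContext.job)

    fun add(newValue: T) {
        scope.launch {
            // dataSource.new(newValue) would go here, as in your example
        }
    }
}
```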
r
Ahhh okay, so instead of

```kotlin
private val scope = coroutineScope + Dispatchers.Default
```

I probably want

```kotlin
private val scope = coroutineScope + SupervisorJob()
```

and in my repo functions, if I want the default dispatcher, I can specify that there.

If I'm understanding this https://elizarov.medium.com/coroutine-context-and-scope-c8b255d59055 correctly, then the joining of my supervisor job and the parent job is handled by using the `+` operator, is that right?
c
Not quite. You need to explicitly join a "child job" to its parent job for cancellation to work properly: `SupervisorJob(parent = coroutineScope.coroutineContext.job)`. From the documentation of the `SupervisorJob` function:

> If [parent] job is specified, then this supervisor job becomes a child job of its parent and is cancelled when its parent fails or is cancelled. All this supervisor's children are cancelled in this case, too. The invocation of [cancel][Job.cancel] with exception (other than [CancellationException]) on this supervisor job also cancels parent.

If you don't set the `parent` property, then what ends up happening is that the child scope you create is isolated from the scope passed to the repo. If the coroutineScope passed in is cancelled, it doesn't know about the child scope you created, and so the coroutines launched within the repo will not get cancelled.
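If you want to see that in isolation, here's a small runnable sketch (plain kotlinx.coroutines, nothing from your project; the names are made up for the demo):

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    val parentScope = CoroutineScope(Job())

    // Unlinked: the SupervisorJob has no parent, so parentScope knows nothing about it.
    val unlinked = parentScope + SupervisorJob()
    // Linked: the SupervisorJob is registered as a child of parentScope's Job.
    val linked = parentScope + SupervisorJob(parent = parentScope.coroutineContext.job)

    val a = unlinked.launch { delay(10_000) }
    val b = linked.launch { delay(10_000) }

    parentScope.cancel()

    println("unlinked coroutine cancelled: ${a.isCancelled}") // false -- untouched by the cancellation
    println("linked coroutine cancelled: ${b.isCancelled}")   // true  -- cancellation flowed down
}
```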
When using the coroutine builder functions (launch, async, etc.), the child job is created and linked for you. In that case, the builder creates a child scope with the `+` operator, then creates a third scope with a new `Job` that is linked to the original scope's `Job`.

If you create a new scope without specifying a Job (for example, `val childScope = parentScope + Dispatchers.Default`), then the `childScope`'s `Job` is the same instance as `parentScope`'s. This is basically the same thing as using `withContext()`. But if you provide a `Job` with the `+` operator and don't link it to its parent (for example, `val childScope = parentScope + Dispatchers.Default + SupervisorJob()`), you're overriding the job for the child scope so that you can manage the scope yourself, but the `parentScope` does not know about `childScope`, so it cannot tell the child scope to cancel itself once the parent is cancelled. And likewise, the `childScope` doesn't know about its parent, so it cannot tell the `parentScope` when the `childScope` has failed. This breaks cooperative cancellation.

So when you create a new scope with the `+` operator, a good pattern is to either: 1) not override the job, or 2) make sure the overriding job is explicitly linked to the parent (for example, `val childScope = parentScope + Dispatchers.Default + SupervisorJob(parent = parentScope.coroutineContext.job)`). It's a bit weird because you're having to reference the `parentScope` twice, but `+` is pretty low-level and doesn't attempt to do this linking for you.
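If it helps, here's a tiny standalone sketch (again nothing Compose-specific, just kotlinx.coroutines) that makes the difference between those shapes visible:

```kotlin
import kotlinx.coroutines.*

fun main() {
    val parentScope = CoroutineScope(Job())

    // No Job override: the "child" scope shares the parent's Job instance,
    // so it's really just a view of parentScope with a different dispatcher.
    val view = parentScope + Dispatchers.Default
    println(view.coroutineContext.job === parentScope.coroutineContext.job) // true

    // Job override without a parent: a separate job tree that parentScope knows nothing about.
    val detached = parentScope + Dispatchers.Default + SupervisorJob()
    println(detached.coroutineContext.job === parentScope.coroutineContext.job) // false

    // Job override linked to the parent: a separate Job, but still in parentScope's tree.
    val linked = parentScope + Dispatchers.Default +
        SupervisorJob(parent = parentScope.coroutineContext.job)
    println(parentScope.coroutineContext.job.children.contains(linked.coroutineContext.job)) // true
}
```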
r
Thank you for such a thorough explanation! I think all of this makes sense. W.r.t. the patterns you mentioned, is one "better" than the other? What would be the benefit of creating a new scope and not overriding the job? Maybe to signal the intent that my repository has a "narrower" scope than the application scope, but without having to manage the job linkage myself?
c
If you're creating a "child scope", you almost certainly want to provide your own Job to that scope. By creating a new child scope, you're basically stating that you are launching things into it in such a way that they are "owned" by that child scope, not by the one passed in. The tasks launched in the child scope are owned by the child scope, so providing your own `Job` allows you to control all those child tasks.

The alternative of not providing a `Job` at all would be to consider the scope to simply be a "view" into a parent scope. In other words, the Repository itself is not meaningful: it's not really doing anything, doesn't have its own lifetime, isn't handling errors, etc. I honestly can't think of a use-case where you would create a child scope without a custom `Job`, since any situation where you would do this kind of thing would probably be done using `withContext` instead.

In the case of using a Repository pattern, the Repository is its own thing and launches its own tasks, so it should have its own `Job`, because it might have a lifetime that's different from the parent scope, handle errors differently, etc. It's linked to the parent scope only in the sense that when the parent is cancelled the Repository is also cancelled; beyond that, it should not expect the parent scope to handle errors thrown inside tasks launched by the Repository. For example, think of how unintuitive it would be if RepositoryA threw an error and then both RepositoryA and RepositoryB got cancelled, despite RepositoryB not being related to RepositoryA at all. This is the kind of situation you might run into by not providing your own `Job` to each Repository's child scope.
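To make the RepositoryA/RepositoryB point concrete, here's a runnable sketch (the repositories are invented for the demo, and the `CoroutineExceptionHandler` is only there so the failure doesn't print a stack trace):

```kotlin
import kotlinx.coroutines.*

class RepositoryA(parentScope: CoroutineScope) {
    private val scope = parentScope + Dispatchers.Default +
        SupervisorJob(parent = parentScope.coroutineContext.job) +
        CoroutineExceptionHandler { _, e -> println("RepositoryA task failed: ${e.message}") }

    fun breakSomething(): Job = scope.launch { error("boom") }
}

class RepositoryB(parentScope: CoroutineScope) {
    private val scope = parentScope + Dispatchers.Default +
        SupervisorJob(parent = parentScope.coroutineContext.job)

    fun isAlive(): Boolean = scope.coroutineContext.job.isActive
}

fun main() = runBlocking {
    val appScope = CoroutineScope(Job())
    val repoA = RepositoryA(appScope)
    val repoB = RepositoryB(appScope)

    // A failure inside RepositoryA stays inside RepositoryA.
    repoA.breakSomething().join()
    println("RepositoryB alive after RepositoryA failure: ${repoB.isAlive()}") // true

    // Cancelling the app scope still tears down both repositories.
    appScope.cancel()
    println("RepositoryB alive after app scope cancel: ${repoB.isAlive()}") // false
}
```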
r
Yeah, that makes a lot of sense. To help convince myself I'm doing something more correct now, I did some ol' trusty println debugging:

```kotlin
class Repository(parentScope: CoroutineScope) {
    private val repositoryScope = run {
        val job = SupervisorJob(parent = parentScope.coroutineContext[Job]).apply {
            invokeOnCompletion { println("Repository scope is being cancelled.") }
        }
        parentScope + Dispatchers.Default + job
    }
}

// Main.kt
@Composable
fun App() {
    val applicationScope = rememberCoroutineScope { Dispatchers.Main }
    val repository = Repository(applicationScope)
    // etc etc
}
```
and when I close the application I see exactly what I expected, which is "Repository scope is being cancelled." Also, to demonstrate to myself what you said about overriding the job without linking it to the parent, I tried leaving out `parent = parentScope.coroutineContext[Job]` in the above code and watched as closing the app did not kill the associated process. Thanks for all your help here, I appreciate the discussion!