# coroutines
l
I have a problem: I need a dispatcher that'll only dispatch on a single thread, but I still want to allow, say, `Dispatchers.Default` to use that thread. Is that possible? From my understanding, `newSingleThreadContext` would not allow this.
s
Depending on what you're trying to achieve, `Dispatchers.Default.limitedParallelism(1)` might be what you want. It won't guarantee to always use the same thread, though.
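For illustration, a minimal sketch of that suggestion (the `serial` name is just a placeholder):

```kotlin
import kotlinx.coroutines.*

// At most one coroutine runs at a time on this view of Dispatchers.Default,
// but each resumption may land on any thread of the underlying pool.
val serial = Dispatchers.Default.limitedParallelism(1)

fun main() = runBlocking {
    repeat(3) { i ->
        launch(serial) {
            println("task $i on ${Thread.currentThread().name}")
        }
    }
}
```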
l
@Sam exactly, it won't. I have to use the same thread for GLFW reasons, but I don't really need that thread to be exclusive to that dispatcher
s
You could look at it the other way, and just run all your coroutines on the single-threaded dispatcher unless there's a specific reason to send them elsewhere. That's what we do with `Dispatchers.Main` in Android and Swing.
Is there a particular scenario where you're worried about unnecessary thread switching?
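For illustration, a rough sketch of that pattern with hypothetical names; `mainLoopDispatcher` stands in for whatever single-threaded dispatcher owns the GLFW thread:

```kotlin
import kotlinx.coroutines.*

// Coroutines default to the main-loop dispatcher and explicitly hop off
// to Dispatchers.Default for heavy, thread-agnostic work.
suspend fun updateAndRender(mainLoopDispatcher: CoroutineDispatcher) =
    withContext(mainLoopDispatcher) {
        // OS/windowing calls stay on the main thread
        val simulated = async(Dispatchers.Default) {
            // CPU-heavy simulation runs elsewhere
            42
        }
        val result = simulated.await()
        println("back on the main thread with result $result")
    }
```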
l
@Sam I'm gonna be using ECS, so almost all logic will be concurrent anyway. The "main thread" will almost never run anything: all it does is loop, call stuff, and suspend till they're done. I'm not worried about thread switching, simply because the hierarchy of my project will be so inherently concurrent and thread-switchy anyway.
k
You'll have to create your own dispatcher for this behavior, I think.
l
ic...
How would I approach this?
s
Well, there is prior art, with the IO dispatcher and the default dispatcher using a shared thread pool. But can you describe a scenario where you think it would be helpful for this single thread to be available for use by more than one dispatcher? I still don't quite understand the use case.
l
Alright, so: I'm making a game engine with ECS (Entity Component System). Communicating with the OS for windowing purposes has to be done on the same thread, especially on macOS. Systems (basically functions that iterate over entities that have specific components) run in parallel; they receive the current state of the world (the entities, and the components - the data - each has) and append modifications to some kind of channel (this is all simplified).

Now, the problem: say system A depends on the modifications of system B. For example, the physics system depends on the system that animates a gate opening, and the lighting system depends on all systems that modify the positioning of things. This creates a situation where the animation system runs in the first frame, the physics system runs the frame after, and the lighting system runs in the third frame, so we get a delay of three frames between the gate's movement and the lighting updating. That might not seem like a lot, but it is, especially with longer chains. How do I prevent this? By suspending systems until their dependencies have finished (I could calculate the dependency chain in the main loop, but considering systems can be added and removed dynamically, that'd create a bottleneck). In reality this isn't tied to frames; things simply execute whenever possible.

TL;DR for this block of text: the project is designed to be incredibly concurrent in the first place, so the move-off-`Dispatchers.Main`-when-needed approach simply wouldn't change anything: the main loop barely has any logic in it. Now, I don't absolutely need that extra thread, but in practice it is wasted: it is only used once per frame to poll input from the system. And on low-end machines, that can have a significant impact on performance.
@Sam here's the use case
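For illustration, a hypothetical sketch of the "suspend until dependencies finish" idea in Kotlin (none of these names come from the actual engine):

```kotlin
import kotlinx.coroutines.*

// Each system exposes a per-frame completion signal; dependents suspend on it
// instead of waiting for the next frame to see the results.
class SystemHandle {
    val done = CompletableDeferred<Unit>()
}

suspend fun runSystem(
    self: SystemHandle,
    dependencies: List<SystemHandle>,
    body: suspend () -> Unit,
) {
    dependencies.forEach { it.done.await() } // suspend until prerequisites finish
    try {
        body()                               // e.g. the physics or lighting pass
    } finally {
        self.done.complete(Unit)             // unblock systems that depend on us
    }
}
```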
s
Thanks for the explanation! It's outside my wheelhouse, so I'll leave this for someone with more knowledge in this area. The problem you described is at least theoretically solvable, with the right arrangement of workers and work queues. I just think the complexity of the solution might outweigh any benefits. I wouldn't expect a single idle thread to be a cause for concern—in fact, I'd much rather have an idle main thread than a busy one—but my experience only comes from GUIs, not games.
l
> I wouldn't expect a single idle thread to be a cause for concern
It is not, 99% of the time. I'm only worried about situations where only 2 or 3 threads are available, say, on WasmJS on low-powered devices. Theoretically, the load will be split evenly, so this concern of mine is about allowing usage, not forcing it.
d
There's no out-of-the-box solution to this exact problem statement, and I don't know of a single work-stealing implementation that provides this. The reason is: if you have a specialized thread that can do tasks no one else can, but can also be used for other needs, then the tasks that only this thread can do can get delayed and crowded out by tasks that could have been done by someone else. If this thread is busy with some nonsense, who is going to access GLFW? This is called starvation in multithreading lingo.

I think there can be an alternative, though. https://www.glfw.org/docs/3.3/context_guide.html mentions that the context can be transferred between threads: "When moving a context between threads, you must make it non-current on the old thread before making it current on the new one." This is not possible to express in the Kotlin/Native version of `kotlinx.coroutines` at the moment, but with https://github.com/Kotlin/kotlinx.coroutines/pull/4208 (expected to land in the next major version), it should be possible to have a `GlfwContextElement` that automatically registers and unregisters the thread currently running the coroutine as the one where the context is current.
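As a rough sketch of what such an element could look like on the JVM today, assuming LWJGL's GLFW bindings (this is illustrative, not the implementation from the PR):

```kotlin
import kotlinx.coroutines.ThreadContextElement
import org.lwjgl.glfw.GLFW.glfwMakeContextCurrent
import org.lwjgl.system.MemoryUtil.NULL
import kotlin.coroutines.CoroutineContext

// Makes the GL context current on whichever thread resumes the coroutine and
// releases it again on suspension, so the context can move between threads.
class GlfwContextElement(private val window: Long) : ThreadContextElement<Unit> {
    companion object Key : CoroutineContext.Key<GlfwContextElement>
    override val key: CoroutineContext.Key<*> get() = Key

    override fun updateThreadContext(context: CoroutineContext) {
        glfwMakeContextCurrent(window) // claim the context on this thread
    }

    override fun restoreThreadContext(context: CoroutineContext, oldState: Unit) {
        glfwMakeContextCurrent(NULL)   // make it non-current before leaving
    }
}
```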
l
Oh wow @Dmitry Khalanskiy [JB], that's amazing! So, it'll be released in 1.11? Or do you mean 2.0?
And what will it look like?
(and will it be supported by WasmJS?)
Ok wait I see a problem
Couldn't two threads use that coroutine context concurrently?
d
Hopefully, yes, it will be available in 1.11. The PR contains the full implementation, and the API is described in this file: https://github.com/Kotlin/kotlinx.coroutines/blob/7182e4b734919db6e4312df8725cc0650937b8fa/kotlinx-coroutines-core/common/src/ThreadContextElement.common.kt
It is already available on the JVM, so you can take a look. And yes, it will be available on all platforms. I don't quite understand the question about two coroutines using the same context. They can, if you want. If you don't, then don't run two coroutines with this context element in parallel.
l
Ok, I see, so it's my responsibility to only have one coroutine with this context at a time
Thank you!
d
There can be more than one coroutine, as long as they don't run in parallel. For example, you could use `Dispatchers.IO.limitedParallelism(1)` to create a dispatcher that will run at most one coroutine at a time. As long as all coroutines with your context element use the same limited dispatcher, they won't be in conflict.
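A sketch of that arrangement, reusing the hypothetical `GlfwContextElement` from the earlier sketch:

```kotlin
import kotlinx.coroutines.*

// One serial "lane" shared by every coroutine that carries the context element,
// so no two of them ever run in parallel and fight over the GL context.
val glContextLane = Dispatchers.IO.limitedParallelism(1)

suspend fun <T> withGlContext(window: Long, block: suspend CoroutineScope.() -> T): T =
    withContext(glContextLane + GlfwContextElement(window), block)
```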
l
@Dmitry Khalanskiy [JB] right! I forgot about `limitedParallelism` (been some time since I messed with this). Thank you so much!
Wait, @Dmitry Khalanskiy [JB], I thought this context only applies to projects that use OpenGL?
d
The context that I linked to does, yes. Which other operations are you calling that need to be done on the same OS thread?
l
@Dmitry Khalanskiy [JB] Every single GLFW-related operation..
d
Looking into this more closely, I see in the "Thread safety" section of https://www.glfw.org/docs/3.3/intro_guide.html that there are GLFW operations that not only have to be called from one thread, but in fact must use the exact thread that called the `main` function of the program. So, `newSingleThreadContext` wouldn't help you, as it spawns a new thread, and similarly, you are not allowed to use `Dispatchers.Default` or `Dispatchers.IO`. The only thing you can do to use coroutines in this scenario at all is something like this:
```kotlin
import kotlinx.coroutines.*
import kotlin.time.Duration

lateinit var glfwDispatcher: CoroutineDispatcher

fun main() {
  runBlocking {
    // Capture the dispatcher backing this runBlocking event loop:
    // it dispatches onto the thread that called main().
    glfwDispatcher = coroutineContext[CoroutineDispatcher.Key]!!
    delay(Duration.INFINITE) // do not exit the main thread
  }
}
```
Then, using `glfwDispatcher` to run some tasks will make sure they happen on the thread calling `main()`.
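For example, a hypothetical helper built on that captured dispatcher (assuming LWJGL's GLFW bindings):

```kotlin
import kotlinx.coroutines.withContext
import org.lwjgl.glfw.GLFW.glfwPollEvents

// Runs on the thread that called main(), because glfwDispatcher was captured
// from the runBlocking event loop running there.
suspend fun pollInput() = withContext(glfwDispatcher) {
    glfwPollEvents()
}
```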
l
That's roughly what I originally intended on doing, except with channels instead of this dispatcher; I hadn't thought of doing it this way, thank you
@Dmitry Khalanskiy [JB] it seems like that throws an NPE?
d
Doesn't seem like it does: https://pl.kotl.in/YL5LiuUbF
l
@Dmitry Khalanskiy [JB] weird, it did crash for me, I'll send the error in a tad