# coroutines
t
```
scope.launch(Dispatchers.IO) {
    doSomeWork() // takes 1 second
}
```
Does using a coroutine create a memory footprint? Because it has to use a `Thread` at some point, right? If I just launch one coroutine at the beginning and my app stays alive for 3 hours, does that `Thread` stay in memory for 3 hours?
d
Coroutines use a thread pool which, as far as I know, is created when the application starts so it is prepared for the tasks it has to do.
t
@Dennis Schröder
> Coroutines use a thread pool
Can we have a reference on this, from the source code maybe?
d
You'll find something in the documentation of `Dispatchers.IO`.
t
So the shared pool of threads does not go away once it has been created?
p
Unless your Application process dies.
g
@Tuan Kiet Threads are part of the JVM's execution strategy. To run anything on the JVM, you need an associated Thread object. Thread pools define behaviour around "if we have work, create needed threads; if we have no work, release as many threads to the GC as possible". Coroutines can run for hours (or days) and not be currently executing on any thread. Thus, you can have many coroutines, all of which are in flight (read: they were previously running something, and they will run something in the future), but not presently executing anything. In such a circumstance your system doesn't need to have any threads and will likely eventually release almost all of its current threads to the GC. The way they do this is by finding threads when they need to do work. The device they use to do that is `Dispatchers`.
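A rough sketch of that idea: thousands of coroutines can be in flight at once while only a handful of pool threads exist, because a suspended coroutine does not occupy any thread.
```
import kotlinx.coroutines.*

fun main() = runBlocking {
    // Launch 10,000 coroutines that are all "in flight" but suspended.
    val jobs = List(10_000) {
        launch(Dispatchers.Default) {
            delay(60_000) // suspended; no thread is held while waiting
        }
    }
    delay(1_000) // let the scheduler settle
    // Roughly the live thread count in this JVM: typically close to the
    // number of CPU cores, nowhere near 10,000.
    println("Live threads: ${Thread.activeCount()}")
    jobs.forEach { it.cancel() }
}
```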
t
@groostav
> Thread pools define behaviour around "if we have work, create needed threads; if we have no work, release as many threads to the GC as possible"
Can we get a reference on this?
After some investigation: `Dispatchers.IO` -> `DefaultScheduler.IO` -> `ExperimentalCoroutineDispatcher`, which has `idleWorkerKeepAliveNs`, which is:
```
@JvmField
internal val IDLE_WORKER_KEEP_ALIVE_NS = TimeUnit.SECONDS.toNanos(
    systemProp("kotlinx.coroutines.scheduler.keep.alive.sec", 5L)
)
```
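As an aside, that suggests the idle keep-alive can be tuned through the same system property. A minimal sketch, assuming the property is set before the scheduler creates its first worker (the 30-second value here is arbitrary):
```
fun main() {
    // IDLE_WORKER_KEEP_ALIVE_NS is read once when the scheduler is initialised,
    // so this has to run before any coroutine is dispatched.
    System.setProperty("kotlinx.coroutines.scheduler.keep.alive.sec", "30")
    // ... launch coroutines as usual; idle workers should now linger ~30s instead of 5s.
}
```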
while `Worker` is a `Thread`:
```
internal inner class Worker private constructor() : Thread()
```
and
```
private val workers: Array<Worker?> = arrayOfNulls(maxPoolSize + 1)
```
and
```
/*
 * 5) It is safe to clear reference from workers array now.
 */
workers[lastIndex] = null
```
I suspect this is where the coroutine implementation tries to release unused threads so the system can GC them away. Am I correct? And does simply setting the worker reference to `null` make it eligible for GC?
d
Dropping all references to an object makes it eligible for GC. Setting a reference to null effectively drops that reference to the object. So determining whether that line alone makes it eligible for GC is somewhat impossible.
TL;DR Not necessarily.
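To illustrate that point with a tiny example mirroring the `workers` array above: nulling one reference only matters if it was the last live reference.
```
fun main() {
    val worker = Any()                  // stands in for a Worker (a Thread subclass)
    val workers = arrayOfNulls<Any>(2)
    workers[0] = worker                 // the pool's reference
    val alsoHeldHere: Any? = worker     // some other live reference

    workers[0] = null                   // clears the pool's reference...
    // ...but the object only becomes eligible for GC once *no* live reference
    // remains; here `alsoHeldHere` (and the local `worker`) still point to it.
    println(alsoHeldHere === worker)    // true
}
```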
k
> `Dispatchers.IO` – uses a shared pool of on-demand created threads and is designed for offloading of IO-intensive blocking operations (like file I/O and blocking socket I/O).
This means the thread pool isn't a constant size, right?
t
The pool size is configurable.
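For reference, the `Dispatchers.IO` documentation describes a `kotlinx.coroutines.io.parallelism` system property that caps the number of threads used for IO tasks (defaulting to 64 or the number of cores, whichever is larger). A minimal sketch, again assuming the property is set before the dispatcher is first used:
```
import kotlinx.coroutines.*

fun main() {
    // Raise the IO thread cap from the default (max(64, cores)) to 128.
    System.setProperty("kotlinx.coroutines.io.parallelism", "128")

    runBlocking {
        repeat(200) {
            launch(Dispatchers.IO) {
                Thread.sleep(100) // simulate blocking IO on a pool thread
            }
        }
    }
}
```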
g
> Thread pools define behaviour around "if we have work, create needed threads; if we have no work, release as many threads to the GC as possible"
> Can we get a reference on this?
I mean, this is more a JVM thing than a Kotlin thing. I think a good place to start would be https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/Executors.html. Reading the source code will likely be difficult because the fork-join pool, for example, implements a pretty sophisticated work-stealing system. @Tuan Kiet are you new to both Java threading and coroutines, or just coroutines? I think it's fair to say that kotlinx.coroutines uses a pretty safe, direct, and intuitive mapping of the existing conventions around threads from the JVM, with a few caveats.
This is an important question, because the last 5 years of my development have been tortured by one bad form of concurrency after another. Having a firm grasp on this stuff before starting a project that involves it is important.
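To make the JVM side of that concrete: the `Executors` documentation linked above describes a cached thread pool that creates threads on demand and terminates ones that have been idle for 60 seconds, which is the same keep-alive pattern the coroutine scheduler follows.
```
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

fun main() {
    // Threads are created only when tasks arrive, reused while busy,
    // and terminated after ~60 seconds of idleness (per the Executors javadoc).
    val pool = Executors.newCachedThreadPool()

    repeat(10) { i ->
        pool.submit {
            println("task $i on ${Thread.currentThread().name}")
        }
    }

    pool.shutdown()
    pool.awaitTermination(5, TimeUnit.SECONDS)
}
```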
t
I'm new to both. Sure, there is a lot to learn.