# coroutines
b
Question for the coroutine experts here: If I have a program that doesn't really do much blocking - like let's say, a single run of the program just does a bunch of math and local data-collection transformations, and I run that program again and again as fast as possible, would there be much call for coroutines? I am thinking not - rather, that just having a fixed set of threads (one per core) should be just as effective. I'm thinking that coroutines really shine when there's significant blocking - is that correct?
t
For most things, ideally you want to run as many threads as you have CPUs (maybe -1). Maybe more true for CPU-intensive things, and maybe less true for other things, but it's a general rule of thumb.
coroutines can be helpful in dispatching work to threads/CPUs but if you already have a simple means to do that... then yeah, not sure it's buying you much.
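A rough sketch of that "fixed set of threads, one per core" setup, assuming purely CPU-bound work (crunch() is a made-up stand-in for the actual math):

```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// One worker thread per core; nothing submitted here ever blocks or waits.
fun main() {
    val cores = Runtime.getRuntime().availableProcessors()
    val pool = Executors.newFixedThreadPool(cores)

    repeat(1_000) { i ->
        pool.execute { crunch(i) }   // purely CPU-bound work
    }

    pool.shutdown()
    pool.awaitTermination(1, TimeUnit.HOURS)
}

// Placeholder for the real math / data transformations.
private fun crunch(seed: Int): Long {
    var acc = seed.toLong()
    repeat(10_000_000) { acc = acc * 31 + it }
    return acc
}
```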
b
Helpful, because lots of blocking actions give us an opportunity to be doing something rather than waiting? And without that, coroutines don't buy much?
t
For me the major bang-for-the-buck in coroutines is the ability to write asynchronous code that looks synchronous, which has nothing to do with threads really... everything else (launchers, dispatchers, channels, etc.) is a bunch of syntactic sugar on top of threads. Maybe kinda useful, I suppose... but not to me 😉
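To make that concrete, a hedged sketch - the fetch functions are invented, with delay() standing in for whatever is actually asynchronous; the point is that loadReport() reads top to bottom with no callbacks:

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.runBlocking

suspend fun fetchUser(id: Long): String {
    delay(100)                       // suspends the coroutine, doesn't block a thread
    return "user-$id"
}

suspend fun fetchOrders(user: String): List<String> {
    delay(100)
    return listOf("$user/order-1", "$user/order-2")
}

suspend fun loadReport(id: Long): String {
    val user = fetchUser(id)         // reads like a plain blocking call
    val orders = fetchOrders(user)   // same
    return "$user has ${orders.size} orders"
}

fun main() = runBlocking {
    println(loadReport(42))
}
```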
b
But it all gets back to blocking code, doesn't it? You write async because of blocking code, don't you?
err, I mean code that blocks and incurs delays
t
your choices for blocking code are 1) block the thread, or 2) have a callback.
coroutines are a compiler trick to make #2 look like #1.
coroutines != threads. It's just a way to hide callbacks.
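A minimal sketch of that "hide the callback" trick - fetchAsync is an invented callback-style API (choice #2), and fetchSuspending wraps it so the call site looks like choice #1:

```kotlin
import kotlin.concurrent.thread
import kotlin.coroutines.resume
import kotlin.coroutines.suspendCoroutine
import kotlinx.coroutines.runBlocking

// Pretend this is some async library that only offers a callback.
fun fetchAsync(id: Int, onDone: (String) -> Unit) {
    thread {
        Thread.sleep(50)
        onDone("result-$id")
    }
}

suspend fun fetchSuspending(id: Int): String =
    suspendCoroutine { cont ->
        fetchAsync(id) { cont.resume(it) }   // the callback is hidden in here
    }

fun main() = runBlocking {
    println(fetchSuspending(7))       // reads as if it blocks; it actually suspends
}
```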
b
That helps, thanks @TwoClocks
By the way, why do you say to run the number of processors minus one?
t
you can add threading on top of that... but you can add threading on top of any function call.
it really depends on what you're doing exactly, but sometimes you leave a CPU free for the OS/JVM to do its work.
on modern CPUs context switches are incredibly slow. like... really, really, really slow. It takes less time to send a packet to another machine than for a CPU to context switch.
So if the JVM/OS decides it's time to do some work and context switches your code out... the results are bad.
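In code terms that sizing is just something like the snippet below (whether the -1 actually pays off is workload-dependent, as discussed later):

```kotlin
import java.util.concurrent.Executors

fun main() {
    // leave one core free for the OS/JVM (GC, JIT, and friends)
    val workers = (Runtime.getRuntime().availableProcessors() - 1).coerceAtLeast(1)
    val pool = Executors.newFixedThreadPool(workers)
    // ... submit the same CPU-bound work as before ...
    pool.shutdown()
}
```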
b
You've seen empirical evidence of that?
t
so you leave them a free CPU... so they don't destroy the nice warm cache you have going in your CPU
what? The context switch? oh god yeah. it's a really easy test. Just get two machines and do a ping-pong test vs a thread-switching test. I do it on every new CPU rev, and it's just gotten slower over the past 10 years.
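The thread-switching half of that test can be sketched roughly like this (the network ping-pong half would be a separate program; the handoff mechanism and iteration count are illustrative):

```kotlin
import java.util.concurrent.SynchronousQueue

// Two threads hand a token back and forth; each round trip forces at least
// two handoffs between threads. Compare the per-round-trip time against a
// network ping-pong measurement between two machines.
fun main() {
    val ping = SynchronousQueue<Int>()
    val pong = SynchronousQueue<Int>()
    val rounds = 100_000

    val partner = Thread {
        repeat(rounds) { pong.put(ping.take()) }   // receive, hand straight back
    }
    partner.start()

    val start = System.nanoTime()
    repeat(rounds) {
        ping.put(it)
        pong.take()
    }
    val elapsedNanos = System.nanoTime() - start
    partner.join()

    println("avg round trip: ${elapsedNanos / rounds} ns")
}
```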
b
I ask just because when I run with #cores == #threads, my performance graph looks very even for a long while. If what you're saying is true, wouldn't I see spikes in the graph?
t
it would depend on how cache-effective your code is (e.g. what the damage is when the cache is blown out) and how long-running your code is, etc. that kind of stuff is very code-dependent.
b
I see. Cool, good insight.
t
it also depends on how you measure it. are you measuring total runtime, or some inner-loop cycle time?
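For example, the difference between the two measurements might look something like this (doWork() is a placeholder for the inner loop):

```kotlin
// Total runtime averages everything out; per-iteration timing is where an
// occasional context switch would show up as a spike.
fun main() {
    val iterations = 1_000_000
    val perIteration = LongArray(iterations)

    val start = System.nanoTime()
    for (i in 0 until iterations) {
        val t0 = System.nanoTime()
        doWork(i)
        perIteration[i] = System.nanoTime() - t0
    }
    val total = System.nanoTime() - start

    perIteration.sort()
    println("total runtime:     ${total / 1_000_000} ms")
    println("median iteration:  ${perIteration[iterations / 2]} ns")
    println("worst iteration:   ${perIteration[iterations - 1]} ns")
}

// Placeholder CPU-bound work.
private fun doWork(i: Int): Long {
    var acc = i.toLong()
    repeat(200) { acc = acc * 31 + it }
    return acc
}
```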