groostav
04/19/2018, 11:35 PM
I wanted a SequentialPoolExecutor, so I wrote one and tested it. The idea is that every submit or execute call is sequentialized, such that no submitted job will run before another submitted job completes. This is not necessarily single-threaded: one very simple optimization is to back this SequentialExecutor with a pool, where the first available thread is selected to run the job.
My goal for such a component would be to allow me to write code like this:
class SomeStatefulComponent {
    private val data = MutableDataStructureThatIsntThreadSafe()

    suspend fun doMutation(args: Args): Double = run(SequentialExecutor) {
        data += transform(args)
        data.moreComputation()  // last expression is the result; no non-local return needed
    }
}
and not worry about using nasty @Volatile or Unsafe or AtomicReference, or more generally CAS/locking strategies.
Instead such an executor would elegantly serialize everything for me.
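A rough sketch of the shape I have in mind (illustrative only, not necessarily what I actually wrote; it assumes kotlinx.coroutines, and the names are my own placeholders): jobs are funnelled through a channel and consumed one at a time by a single worker coroutine dispatched on a shared pool, so each job may land on a different pool thread but no two jobs ever overlap.

import kotlinx.coroutines.CompletableDeferred
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.launch

class SequentialExecutor(scope: CoroutineScope) {
    private val jobs = Channel<suspend () -> Unit>(Channel.UNLIMITED)

    init {
        scope.launch(Dispatchers.Default) {
            for (job in jobs) job()   // strictly one job at a time
        }
    }

    suspend fun <T> run(block: suspend () -> T): T {
        val result = CompletableDeferred<T>()
        jobs.send {
            try { result.complete(block()) }
            catch (e: Throwable) { result.completeExceptionally(e) }
        }
        return result.await()
    }
}

The run(SequentialExecutor) { ... } call above would then just delegate to something like executor.run { ... }.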
But the problem, thinking back to Java Concurrency in Practice, is the one that @Volatile was originally designed to solve: if some mutable state on data is put into a thread-local cache, then even sequentially run jobs might have their correctness ruined by that cache.
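For reference, this is the classic no-visibility hazard (as in the JCIP example), sketched here in Kotlin; without some happens-before edge the reader thread is allowed to never observe the writer's update.

object NoVisibility {
    private var ready = false   // deliberately NOT @Volatile
    private var number = 0

    @JvmStatic
    fun main(args: Array<String>) {
        Thread {
            while (!ready) { /* may spin forever: 'ready' can sit in a thread-local cache */ }
            println(number)     // under the JMM this may even legally print 0
        }.start()

        number = 42
        ready = true
    }
}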
Effectively this boils down to an apocalyptic assumption: is it really the case that fields not explicitly marked for thread sharing can never be safely shared between threads?
Does somebody have a clever way to turn this non-functional problem into a functional one, via fuzz testing or some other concurrency-testing strategy?

Vsevolod Tolstopyatov [JB]
04/20/2018, 8:19 AM
> Effectively this boils down to an apocalyptic assumption: is it really the case that fields not explicitly marked for thread sharing can never be safely shared between threads?
It’s not. This is a classic problem which arises with advanced concurrency and the JMM (Java Memory Model) 🙂 Even though it’s useful to know what lies behind volatile and what a memory fence is, that is not how you usually want to reason about or prove the correctness of your concurrent code.
In your particular executor case, if execution of the first task happens-before (in terms of the JMM) execution of the second task, everything will work as expected. Any sane executor implementation will guarantee it (it’s pretty hard to construct an executor that is sequential but doesn’t provide a formal happens-before between tasks).
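For example, with a plain single-threaded executor from java.util.concurrent (just an illustration; a pooled sequential executor relies on the same submission/handoff edge):

import java.util.concurrent.Executors

fun main() {
    val executor = Executors.newSingleThreadExecutor()
    val data = mutableListOf<Int>()           // plain field: no @Volatile, not thread-safe

    executor.execute { data += 1 }            // task A writes
    executor.execute { println(data.sum()) }  // task B is guaranteed to observe A's write

    executor.shutdown()
}

Here both tasks happen to run on the same worker thread, but even when they don't, it is the queue handoff between tasks that carries the happens-before edge.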
I’d encourage you to read this awesome article https://shipilev.net/blog/2016/close-encounters-of-jmm-kind/, which explains why it’s hard to rely on low-level mechanics and why formal JMM reasoning should be used instead.

groostav
… volatile or fences, right?

Vsevolod Tolstopyatov [JB]
One thing a mutex (synchronized, j.u.c.Lock, w/e) guarantees is mutual exclusion, and another is happens-before between monitor acquisitions and releases. It is the happens-before that guarantees proper visibility and the “thread cache machinery”.
In your example, the executor is indistinguishable from such a mutex in terms of correctness and visibility.
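Concretely, guarding your component with an explicit lock would give the same guarantees. Here is a rough equivalent sketched with kotlinx.coroutines’ Mutex (the body is a placeholder, since I don’t know your real data structure): withLock provides mutual exclusion plus a happens-before edge between one caller’s unlock and the next caller’s lock, so the plain field stays visible and consistent.

import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

class SomeStatefulComponent {
    private val mutex = Mutex()
    private val data = mutableListOf<Double>()   // stand-in for the non-thread-safe structure

    suspend fun doMutation(x: Double): Double = mutex.withLock {
        data += x * 2        // stand-in for transform(args)
        data.sum()           // stand-in for data.moreComputation()
    }
}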