# coroutines
m
Unfortunately, especially on x86, hammering on stuff hoping for rare failures tends to only produce results that imply that x86 has a very strong memory model. 😉 If you can't convince yourself with sufficient perusal of JCIP that there's a happens-before chain, it's probably wrong.
👍 1
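(A hedged sketch of the point above, in Kotlin; the `Publisher` class and field names are made up for illustration. Publishing a value through a plain field has no happens-before edge under the JMM, yet a stress test on x86 will almost certainly still "pass" because of its strong store ordering. Marking the flag `@Volatile` is what actually creates the chain.)

```kotlin
import kotlin.concurrent.thread

// Hypothetical example: publishing `payload` to another thread via `ready`.
class Publisher {
    var payload: Int = 0

    // Without @Volatile there is no happens-before edge between the writer's
    // stores and the reader's loads -- hammering on this on x86 would still
    // look fine, which is exactly why such tests prove very little.
    @Volatile
    var ready: Boolean = false
}

fun main() {
    val p = Publisher()

    thread {
        p.payload = 42   // (1) ordinary write
        p.ready = true   // (2) volatile write: publishes everything before it
    }

    thread {
        while (!p.ready) { /* spin: the volatile read pairs with (2) */ }
        println(p.payload) // guaranteed to see 42 via the happens-before chain (1) -> (2) -> read
    }
}
```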
g
thanks for your thoughts on this stuff. One other thing, and this is a key difference between using a `SequentialExecutor` and a `mutex`: I could have the SequentialExecutor preserve threads, so that it adds a guarantee that any job submitted within `X` of `T_lastJobFinish` is run on the same thread as lastJob, where `X` is a runtime-chosen delay (say 2 seconds). In this way, "hot" sequential executors would have all submitted jobs run on the same thread, yielding strong isolation guarantees. "Cold" ones would run on separate threads but should still get isolation guarantees so long as `X` is sufficiently large. Thinking about bell curves: if you consider the average time it takes for a thread-local cached value to get flushed, I suspect that 5 seconds is well outside 5 deviations from the average, which I imagine is on the order of nanoseconds, with "nasty" values hanging around for as long as microseconds. Perhaps this is just a loser's strategy? Generally speaking we try to make asynchronous things have simple parameters and return values, which makes them very clean, referentially transparent, and easy to parallelize, but with MVC and a bit of mutable view state you can't help but start thinking about some things in terms of shared state.
m
Without barriers, though, it's not just about "the hardware will probably have flushed its cache" by then. The compiler is allowed to do things like elide writes if it thinks it can get away with it, so the CPU will never even see the write if you're not telling the compiler "no really, this has to get written because some other thread might care eventually".
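(The classic illustration of that, again a hedged Kotlin sketch with a made-up `StopFlag` object: with a plain field the JIT is free to hoist the read out of the loop, so as written the reader can spin forever no matter how long you wait. The hardware never gets a chance to "flush" anything, because the re-read was optimized away.)

```kotlin
import kotlin.concurrent.thread

object StopFlag {
    // Plain field: the JIT may hoist the read of `stop` out of the loop below,
    // effectively turning it into `while (true)`. Waiting for caches to flush
    // doesn't help, because the load simply never happens again.
    var stop: Boolean = false

    // Fix: declare it as `@Volatile var stop: Boolean = false` instead, which
    // tells the compiler another thread may change it, so every iteration
    // must actually re-read it.
}

fun main() {
    val worker = thread {
        while (!StopFlag.stop) {
            // busy work
        }
        println("worker observed stop")
    }

    Thread.sleep(1000)
    StopFlag.stop = true   // without @Volatile this write may never be observed
    worker.join()          // with the plain field, this join can hang forever
}
```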