So I have been playing with a more or less working ktor on coroutines, and most importantly benchmarking it. Long story short, local benchmarking of a web service is a lie. By varying the parallelism (number of threads in the pool), the load (number of threads making connections) and the buffer size (number of bytes in the buffers used for transferring files), we get anywhere from 10 to 30 ops/ms for a simple “OK” response, 10 to 22 ops/ms for small files, and up to 1.5 ops/ms for large (~1 MB) files. Here an “op” is a single request-response round trip with a full transfer of the data and connection recycling. I don’t really know whether that is good or bad, but various internet sources report about 20K ops per second on comparable machines, so I assume it’s fine for now.
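For context, the “OK” case is about the smallest handler ktor can serve. Here is a minimal sketch of that kind of endpoint, written against the present-day embeddedServer/Netty API; the package names and signatures are taken from current ktor releases and may not match the coroutine branch discussed here:

```kotlin
import io.ktor.server.application.*
import io.ktor.server.engine.*
import io.ktor.server.netty.*
import io.ktor.server.response.*
import io.ktor.server.routing.*

fun main() {
    // Serve a plain "OK" response; the benchmark counts one "op" as a full
    // request-response round trip including data transfer and connection recycling.
    embeddedServer(Netty, port = 8080) {
        routing {
            get("/") {
                call.respondText("OK")
            }
        }
    }.start(wait = true)
}
```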
Jetty coroutine support is not fully working yet; @cy is on it. WebSocket and HTTP/2 support are in progress as well. After that we will start moving the non-coroutine changes to master first (there were various optimisations and improvements that don’t depend on the underlying machinery). Once the difference between the branches contains only the coroutine changes, we will create a critical-maintenance branch for the current version and switch master to 1.1.
If you haven’t migrated to 1.1 yet, it’s time to start 🙂