# http4k
That's probably a better question to ask the other library authors! 🙃 From our side, we do no magic or reflection, so the JVM can optimise well, and http4k is actually a tiny library, so it adds very little overhead to the underlying platform. Possible reasons for this state of affairs:

- Maybe the Apache backend is so fast because they've had years and years to optimise it?
- Or maybe thread context switching is actually more expensive than is generally perceived?
- Or maybe these benchmarks aren't realistic usage?
- Or maybe the other web libraries are just coded to be complex and inefficient?
- Or maybe the DB driver used doesn't work well with async models?
- Or maybe everyone has just been misled about how great the latest shiny async models are when you're not dealing with Google-sized traffic? 😀

Principally, we've built http4k with developer experience and testability in mind, because developer time is much more expensive than k8s spinning up another instance. If it also performs well, that's great, but it's not something we concentrate on.
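To illustrate the "no magic" point, here is a minimal sketch of http4k's server-as-a-function model. The `Request`/`Response` types below are simplified stand-ins for illustration only (the real ones live in `org.http4k.core`); the point is the shape of the model, not the exact API.

```kotlin
// Simplified stand-ins for http4k's Request/Response types — illustrative only.
data class Request(val method: String, val uri: String)
data class Response(val status: Int, val body: String = "")

// "Server as a function": an application is just a function from Request to Response.
typealias HttpHandler = (Request) -> Response

val app: HttpHandler = { request ->
    if (request.uri == "/ping") Response(200, "pong") else Response(404)
}

fun main() {
    // Because the app is a plain function, it can be invoked directly in a test —
    // no server, no container, no annotations, no reflection for the JVM to fight.
    val response = app(Request("GET", "/ping"))
    println("${response.status} ${response.body}")
}
```

Because nothing here is framework-managed, the JIT sees ordinary function calls it can inline and optimise, and testing the app is just calling it.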