Hi folks, I've been building a web application from the ground up, prioritizing simplicity and the fewest possible dependencies. That means I also built the server myself, using Socket and ServerSocket.
It's the ordinary blocking kind, and I'm pretty happy with it. But not complacent. Right now I'm doing some research to see if there's any low-hanging fruit in terms of massive speed gains.
The business context is that the application does timekeeping for a company. If I call the code that adds a time entry below the server layer, I can add a million time entries per second (thread-safely! The shared mutable state is atomic indexes and a ConcurrentHashMap). However, if I do the equivalent through the server layer, it drops to eleven thousand per second. Yuck. Well, it's OK and all, but I want the million if there's some easy way to have it.
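For context, the data layer is roughly this shape (a simplified sketch; the class and field names are made up for illustration, not my real code):

```kotlin
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.AtomicInteger

// Hypothetical entry type for illustration.
data class TimeEntry(val id: Int, val employeeId: Int, val date: String, val minutes: Int)

class TimeEntryStore {
    // Atomic index generator plus a concurrent map for the shared mutable state.
    private val nextId = AtomicInteger(0)
    private val entries = ConcurrentHashMap<Int, TimeEntry>()

    // Thread-safe add: incrementAndGet never hands out the same id twice,
    // and ConcurrentHashMap handles concurrent puts without external locking.
    fun add(employeeId: Int, date: String, minutes: Int): TimeEntry {
        val id = nextId.incrementAndGet()
        val entry = TimeEntry(id, employeeId, date, minutes)
        entries[id] = entry
        return entry
    }

    fun count(): Int = entries.size
}
```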
One of the big bottlenecks is the back-and-forth of the HTTP protocol: the server examines the request line to see what it is (GET? POST? etc.), then reads the headers (Is there a Content-Length? Who is this, per the cookie?), and then assembles a response in kind. Handling keep-alive complicates things further, since the client may decide to stay on the socket for the next exchange.
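Here's roughly what I mean by that per-connection back-and-forth (a simplified sketch assuming a BufferedReader over a blocking socket; the parsing and the canned response are illustrative, not my actual code):

```kotlin
import java.io.BufferedReader
import java.io.InputStreamReader
import java.net.Socket

fun handleConnection(socket: Socket) {
    val reader = BufferedReader(InputStreamReader(socket.getInputStream()))
    val output = socket.getOutputStream()
    var keepAlive = true
    while (keepAlive) {
        // Request line, e.g. "POST /timeentry HTTP/1.1" - blocks until the client sends it.
        val requestLine = reader.readLine() ?: break

        // Headers until the blank line: Content-Length, Cookie, Connection, etc.
        val headers = mutableMapOf<String, String>()
        while (true) {
            val line = reader.readLine()
            if (line.isNullOrEmpty()) break
            val idx = line.indexOf(':')
            if (idx > 0) headers[line.substring(0, idx).trim().lowercase()] = line.substring(idx + 1).trim()
        }
        // HTTP/1.1 defaults to keep-alive unless the client says "Connection: close".
        keepAlive = !headers["connection"].equals("close", ignoreCase = true)

        // Body, if a Content-Length was given (e.g. a POST carrying a new time entry).
        // Reading bytes through a Reader is a simplification that works for ASCII bodies.
        val contentLength = headers["content-length"]?.toIntOrNull() ?: 0
        if (contentLength > 0) {
            val body = CharArray(contentLength)
            reader.read(body, 0, contentLength)
        }

        // Assemble a response in kind and write it back on the same socket.
        val responseBody = "ok"
        output.write(
            ("HTTP/1.1 200 OK\r\nContent-Length: ${responseBody.length}\r\n\r\n$responseBody").toByteArray()
        )
        output.flush()
    }
    socket.close()
}
```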
Which leads me to the idea of non-blocking I/O. If I do that, sure, each individual request/response should perform about the same, but instead of all that waiting tying up a thread per connection the way it does in a blocking server, I would imagine I could parallelize this tremendously and get my million requests per second.
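For anyone who hasn't played with it, this is the shape of the non-blocking model I'm picturing: a minimal java.nio Selector loop where one thread multiplexes many sockets instead of blocking on each one. This sketch just echoes bytes back; the HTTP parsing and response would go where the read happens:

```kotlin
import java.net.InetSocketAddress
import java.nio.ByteBuffer
import java.nio.channels.SelectionKey
import java.nio.channels.Selector
import java.nio.channels.ServerSocketChannel
import java.nio.channels.SocketChannel

fun main() {
    val selector = Selector.open()
    ServerSocketChannel.open().apply {
        bind(InetSocketAddress(8080))
        configureBlocking(false)
        register(selector, SelectionKey.OP_ACCEPT)
    }

    while (true) {
        selector.select() // blocks only until *some* socket is ready
        val keys = selector.selectedKeys().iterator()
        while (keys.hasNext()) {
            val key = keys.next()
            keys.remove()
            when {
                key.isAcceptable -> {
                    // New connection: make it non-blocking and register it for reads.
                    val client: SocketChannel = (key.channel() as ServerSocketChannel).accept()
                    client.configureBlocking(false)
                    client.register(selector, SelectionKey.OP_READ)
                }
                key.isReadable -> {
                    val client = key.channel() as SocketChannel
                    val buffer = ByteBuffer.allocate(4096)
                    val read = client.read(buffer)
                    if (read == -1) {
                        client.close()
                    } else {
                        buffer.flip()
                        client.write(buffer) // echo back; a real server would parse and respond here
                    }
                }
            }
        }
    }
}
```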
Questions:
1. Am I insane?
2. Is non-blocking the answer to this?
3. I've looked at Ktor's code. Is there other code that covers similar ground, written in pure Kotlin, quality- and test-oriented, and fast?