# ktor
Hi, all! I've got an issue with sockets leaking in the CLOSE_WAIT state. I have a Ktor server that mobile clients connect to over WebSockets. The WebSocket handling loop is copied straight from the chat demo application for Ktor:

```kotlin
private suspend fun DefaultWebSocketServerSession.handleWebSocketSession() {
    val session = WebSocketSession(this, sessionService)
    try {
        for (frame in incoming) {
            if (frame is Frame.Binary) session.handleFrame(frame, endpoints)
            else if (frame is Frame.Text) close(CloseReason(CloseReason.Codes.PROTOCOL_ERROR, "Text frames are not supported"))
        }
    } finally {
        session.close()
    }
}
```

My server is behind an AWS Application Load Balancer. There are some descriptions of what to do about a growing number of CLOSE_WAIT sockets for Netty, but I don't see how they can be applied to this loop. Has anybody hit this issue, or does anyone know where to look?
👀 2
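For context, here is a minimal sketch of how a handler with this shape is typically wired into a Ktor 1.x server. The Netty engine, port, `/ws` path, and the echo placeholder are assumptions for illustration; only the frame-loop structure mirrors the snippet above.

```kotlin
import io.ktor.application.*
import io.ktor.http.cio.websocket.*
import io.ktor.routing.*
import io.ktor.server.engine.*
import io.ktor.server.netty.*
import io.ktor.websocket.*

fun main() {
    // Hypothetical standalone setup: Netty engine, port 8080, and the /ws path are assumptions.
    embeddedServer(Netty, port = 8080) {
        install(WebSockets)
        routing {
            // The webSocket block's receiver is DefaultWebSocketServerSession,
            // the same receiver type as handleWebSocketSession() above.
            webSocket("/ws") {
                try {
                    for (frame in incoming) {
                        // Placeholder: echo binary frames instead of session.handleFrame(frame, endpoints).
                        if (frame is Frame.Binary) send(Frame.Binary(true, frame.readBytes()))
                        else if (frame is Frame.Text) close(CloseReason(CloseReason.Codes.PROTOCOL_ERROR, "Text frames are not supported"))
                    }
                } finally {
                    // Per-connection cleanup (session.close() in the original handler) would go here.
                }
            }
        }
    }.start(wait = true)
}
```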
WebSocket options are:

```kotlin
pingPeriod = Duration.ofSeconds(60)
timeout = Duration.ofSeconds(15)
maxFrameSize = 1 * 1024 * 1024
masking = false
```
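For reference, a sketch of how those options plug into the WebSockets feature installation in Ktor 1.x; the `Application.module` wrapper is an assumption, the values are the ones quoted above, and `maxFrameSize` is written with a Long literal so the assignment compiles.

```kotlin
import io.ktor.application.*
import io.ktor.websocket.*
import java.time.Duration

// Hypothetical module function; the option values mirror the ones listed above.
fun Application.module() {
    install(WebSockets) {
        pingPeriod = Duration.ofSeconds(60)  // server-initiated ping every 60 s
        timeout = Duration.ofSeconds(15)     // give up on the peer after 15 s without a response
        maxFrameSize = 1L * 1024 * 1024      // 1 MiB frame limit
        masking = false
    }
}
```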
The issue can be reproduced inside a Docker container with Traefik as the load balancer, but not on Windows...
It seems the issue was fixed in Ktor 1.2.3.