# ktor
m
From this issue: https://youtrack.jetbrains.com/issue/KTOR-2187/How-to-detect-if-a-request-was-cancelled-from-client-on-Ktor-server this API used to exist, the `configure = ...` block, used in this case to detect when the client dropped the channel, but it no longer exists in Ktor 3. Is there an equivalent? Was it moved?
```kotlin
embeddedServer(Netty, port = 8080, host = "0.0.0.0", configure = {
    channelPipelineConfig = {
        addLast("cancellationDetector", AbortableRequestHandler())
    }
}) {
    configureRouting()
}.start(wait = true)
```
It was an overload issue; it still exists, just with different parameters.
👌 1
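(A rough sketch of the original snippet moved onto the Ktor 3 overload, assuming the `serverConfig`, `connector`, and `channelPipelineConfig` names used later in the thread; imports omitted.)

```kotlin
// Sketch only: in Ktor 3 the engine options go in the trailing lambda of embeddedServer.
embeddedServer(Netty, serverConfig {
    module { configureRouting() }
}) {
    connector {
        host = "0.0.0.0"
        port = 8080
    }
    channelPipelineConfig = {
        addLast("cancellationDetector", AbortableRequestHandler())
    }
}.start(wait = true)
```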
After toying with this a bit, it's clear that the implementation in that Ktor issue is broken, since the handler is reused between requests. Is there a proper way to detect the client dropping the connection?
Right now I am relying on (pseudocode):
```kotlin
call.respondText {
    // background ping that fails fast once the client has gone away
    GlobalScope.launch { sendPingOrCloseScope() }
    someFlow.collect { sendStuff() }
}
```
It's not very elegant to depend on a `ChannelClosedException` from the ping when the flow is busy, but there seems to be no way to actually check the call for "liveness".
👀 1
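(A self-contained sketch of that band-aid with the SSE plugin: the keep-alive sender and the slow producer share one scope, so a failed ping cancels the sibling collector via structured concurrency. `sseModule`, `someFlow`, and the ping period are placeholders, and the import paths assume Ktor 3.x.)

```kotlin
import io.ktor.server.application.Application
import io.ktor.server.application.install
import io.ktor.server.routing.routing
import io.ktor.server.sse.SSE
import io.ktor.server.sse.sse
import io.ktor.sse.ServerSentEvent
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.flow
import kotlinx.coroutines.launch
import kotlin.time.Duration.Companion.seconds

// Stand-in for the real slow producer from the thread.
private val someFlow = flow {
    while (true) {
        delay(30.seconds) // simulate long, busy work between emissions
        emit("payload")
    }
}

fun Application.sseModule() {
    install(SSE)
    routing {
        sse {
            coroutineScope {
                // Band-aid keep-alive: once the client has dropped the connection,
                // this send fails, and structured concurrency then cancels the
                // sibling collector below instead of letting it run to completion.
                launch {
                    while (true) {
                        send(ServerSentEvent("ping"))
                        delay(5.seconds)
                    }
                }
                someFlow.collect { send(ServerSentEvent(it)) }
            }
        }
    }
}
```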
a
Can you please explain what you mean by the client dropping the channel?
m
Just the frontend closing the connection, i.e. the website closing an event stream (https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#closing_event_streams).
The callback happens in the Netty engine at `override fun channelInactive(ctx: ChannelHandlerContext?)`, but from that ChannelHandlerContext it's not possible (AFAIK, maybe it is) to get back to the original request / the scope handling the request in order to cancel it.
So currently what happens is:
-> the frontend connects
<- a slow-emitting job starts emitting events on the server
-> the client disconnects while the job is suspended doing busy work
<- the job keeps going regardless of the client having disconnected; the last emission fails (channelClosed) and only then is the job cancelled
That's why a keep-alive job worked as a band-aid in this case: it hits the "channel was closed" error without having to wait for the busy job to finish. Something I expected would be:
```kotlin
call.respondText {
    onClientDisconnected {
        // cancel the job / whatever is required
    }

    // ... do some slow work ...
}
```
a
Can you please explain why the solution from KTOR-2187 doesn't work for you?
```kotlin
embeddedServer(Netty, serverConfig {
    module {
        install(SSE)

        routing {
            sse {
                while (true) {
                    send(ServerSentEvent("Sending"))
                    delay(1.seconds)
                }
            }
        }
    }
}) {
    connector {
        port = 8060
    }
    channelPipelineConfig = {
        addLast("cancellationDetector", object : ChannelInboundHandlerAdapter() {
            override fun channelInactive(ctx: ChannelHandlerContext?) {
                super.channelInactive(ctx)
            }
        })
    }
}.start(wait = true)
```
m
The implementation was broken for concurrent requests; this seems to work:
```kotlin
import io.ktor.server.application.ApplicationCall
import io.ktor.util.AttributeKey
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.ChannelInboundHandlerAdapter
import java.util.concurrent.ConcurrentHashMap

// Callback stored on the call, invoked when its Netty channel goes inactive.
private val ChannelInactiveKey = AttributeKey<() -> Unit>("OnChannelInactiveCallback")

fun ApplicationCall.onChannelInactive(fn: () -> Unit) {
    attributes.put(ChannelInactiveKey, fn)
}

class AbortableRequestHandler : ChannelInboundHandlerAdapter() {
    // Tracks which call is currently being handled on which channel context.
    private val activeChannels = ConcurrentHashMap<ChannelHandlerContext, ApplicationCall>()

    override fun channelRead(ctx: ChannelHandlerContext, msg: Any) {
        super.channelRead(ctx, msg)
        if (msg is ApplicationCall) {
            activeChannels[ctx] = msg
        }
    }

    override fun channelInactive(ctx: ChannelHandlerContext) {
        super.channelInactive(ctx)
        // The client dropped the connection: notify the call, if it registered a callback.
        val call = activeChannels.remove(ctx)
        if (call != null) {
            call.attributes.getOrNull(ChannelInactiveKey)?.invoke()
        }
    }
}
```
Assuming it's 1:1, one ChannelHandlerContext per call, which seems to be the case.
👌 1
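(For completeness, a sketch of how this could be wired up end to end, using the `onChannelInactive` extension and `AbortableRequestHandler` from the snippet above together with the Ktor 3 `embeddedServer(Netty, serverConfig { ... }) { ... }` overload shown earlier; `doSlowWork` and the route are placeholders, and the import paths assume Ktor 3.x.)

```kotlin
import io.ktor.server.engine.connector
import io.ktor.server.engine.embeddedServer
import io.ktor.server.engine.serverConfig
import io.ktor.server.netty.Netty
import io.ktor.server.response.respondText
import io.ktor.server.routing.get
import io.ktor.server.routing.routing
import kotlinx.coroutines.delay
import kotlinx.coroutines.job
import kotlin.coroutines.coroutineContext

// Placeholder for the slow, busy work from the thread.
private suspend fun doSlowWork(): String {
    delay(60_000)
    return "done"
}

fun main() {
    embeddedServer(Netty, serverConfig {
        module {
            routing {
                get("/slow") {
                    // Cancel this request's coroutine as soon as Netty reports the
                    // channel inactive, instead of waiting for the next write to fail.
                    val requestJob = coroutineContext.job
                    call.onChannelInactive { requestJob.cancel() }
                    call.respondText(doSlowWork())
                }
            }
        }
    }) {
        connector { port = 8080 }
        channelPipelineConfig = {
            addLast("cancellationDetector", AbortableRequestHandler())
        }
    }.start(wait = true)
}
```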