# ktor
x
We're using the `MicrometerMetrics` plugin, and seeing some pretty weird results from the `ktor.http.server.requests.active` metric. Looking at the code, I believe it's supposed to be a gauge that shows the current number of requests being handled. So if the service is idle, it'll go down to 0, and generally it'll always be a non-negative integer moving up and down. However, what we're seeing is a graph that just increases until the server is restarted, at which point it drops back to 0. I'm wondering if this could be a bug in the implementation of `ktor.http.server.requests.active`, or if we've got some kind of configuration error causing this, or if I'm just misunderstanding what this metric is supposed to be. Does anyone else use this metric, and if so, does it look like this?
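(For readers unfamiliar with the metric: an active-requests gauge of this kind is typically backed by a counter that is incremented when a request starts and decremented when it finishes, so a graph that only ever climbs usually means the decrement path is being skipped somewhere. The sketch below is a generic illustration, not Ktor's actual implementation; `ActiveRequestsTracker` and its methods are invented names.)
```kotlin
import io.micrometer.core.instrument.MeterRegistry
import java.util.concurrent.atomic.AtomicInteger

// Generic illustration of how an active-requests gauge is usually wired up.
// This is NOT Ktor's code; the class and method names are hypothetical.
class ActiveRequestsTracker(registry: MeterRegistry) {
    private val active = AtomicInteger(0)

    init {
        // The gauge samples the AtomicInteger each time the registry is read/published.
        registry.gauge("ktor.http.server.requests.active", active)
    }

    fun onRequestStarted() {
        active.incrementAndGet()
    }

    // If this is ever skipped (e.g. on an exceptional completion path),
    // the reported value only ever climbs.
    fun onRequestFinished() {
        active.decrementAndGet()
    }
}
```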
h
I have noticed inconsistency with those metrics as well. I even reported it a couple of weeks ago, but got no response. https://kotlinlang.slack.com/archives/C0A974TJ9/p1701058942797339?thread_ts=1701058942.797339&cid=C0A974TJ9
a
Unfortunately, I cannot reproduce the problem with the following code:
```kotlin
// Imports assume Ktor 2.x with the Netty engine; run as a script or inside fun main().
import io.ktor.server.application.install
import io.ktor.server.engine.embeddedServer
import io.ktor.server.metrics.micrometer.MicrometerMetrics
import io.ktor.server.netty.Netty
import io.ktor.server.response.respondText
import io.ktor.server.routing.get
import io.ktor.server.routing.routing
import io.micrometer.core.instrument.simple.SimpleMeterRegistry

val reg = SimpleMeterRegistry()
embeddedServer(Netty, port = 5555) {
    install(MicrometerMetrics) {
        registry = reg
    }
    routing {
        get {
            call.respondText { reg.get("ktor.http.server.requests.active").gauge().value().toString() }
        }
    }
}.start(wait = true)
```
I did the test with `ab`, and none of the readings showed a negative value:
```
ab -n 5000 -c 32 -v 4 http://localhost:5555/
```
Can you tell me how to reproduce the bug?
x
Wow, that's really strange, @Helio! It's like the issues we're running into are almost opposites. Our code is similar to yours: we also just set the registry and a list of binders:
```kotlin
val metricsBinders = listOf(
    ClassLoaderMetrics(),
    JvmGcMetrics(),
    JvmInfoMetrics(),
    JvmMemoryMetrics(),
    JvmThreadMetrics(),
    FileDescriptorMetrics(),
    ProcessorMetrics()
)

...

install(MicrometerMetrics) {
    registry = metricsRegistry
    meterBinders = metricsBinders
}
```
We used to be using a `DatadogMeterRegistry` and now we're using a `StatsdMeterRegistry`, but we see the same odd behavior with either: the `ktor.http.server.requests.active` metric continually increases until the server restarts, at which point it drops to zero before rising again. We have other gauge metrics (like those created by `JvmMemoryMetrics` and `JvmThreadMetrics`), and they all behave as expected, with the values they report rising and falling over time:
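(One way to separate the behavior of the Ktor gauge itself from the StatsD/Datadog publishing path is to mirror the meters into an in-memory registry and print the raw value locally. The sketch below is illustrative only, not code from this thread: the port 8080, the 5-second print interval, and the commented-out `statsdRegistry` are assumptions, and imports assume Ktor 2.x.)
```kotlin
import io.ktor.server.application.install
import io.ktor.server.engine.embeddedServer
import io.ktor.server.metrics.micrometer.MicrometerMetrics
import io.ktor.server.netty.Netty
import io.ktor.server.response.respondText
import io.ktor.server.routing.get
import io.ktor.server.routing.routing
import io.micrometer.core.instrument.composite.CompositeMeterRegistry
import io.micrometer.core.instrument.simple.SimpleMeterRegistry
import kotlin.concurrent.fixedRateTimer

fun main() {
    // In-memory registry we can inspect directly, alongside the real backend.
    val local = SimpleMeterRegistry()
    val composite = CompositeMeterRegistry().apply {
        add(local)
        // add(statsdRegistry) // hypothetical: the production StatsD/Datadog registry
    }

    // Print the raw gauge value every 5 seconds. If this returns to 0 while the
    // dashboard keeps climbing, the issue is in the publishing pipeline rather
    // than in the Ktor plugin.
    fixedRateTimer(period = 5_000L, initialDelay = 5_000L) {
        val value = local.find("ktor.http.server.requests.active").gauge()?.value()
        println("active requests gauge = $value")
    }

    embeddedServer(Netty, port = 8080) {
        install(MicrometerMetrics) {
            registry = composite
        }
        routing {
            get("/") {
                call.respondText("ok")
            }
        }
    }.start(wait = true)
}
```
If the locally printed value does drop back to 0 when the service is idle, that would point away from the plugin and toward how the gauge is aggregated or flushed by the StatsD/Datadog side.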