zed — 03/15/2024, 9:45 AM
• JVM options: “-XX:+UseParallelGC -XX:ActiveProcessorCount=2 -XX:MaxRAMPercentage=65.0 -XX:MinRAMPercentage=60.0”
• Http4k: 5.14.0.0
• Server: Undertow (but also Netty)
• ~30 requests / second
• AWS, Kubernetes:
◦ requests: { memory: 1720Mi, cpu: 500m }
◦ limits: { memory: 1720Mi, cpu: 2000m }
Observations:
• When overall CPU usage starts to increase, minor GC activity stops completely.
• Self-healing effect (over ~12 hours).
• Response time increases massively in the “explosion” phase (currently we have a timeout of 10s on client side).
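A quick way to see the effect of those flags is to print what the JVM itself reports inside the container. The following is only an illustrative sketch; the expected values in the comments assume the limits quoted above.

```kotlin
// Sketch only: prints what the JVM reports inside the container.
// With -XX:ActiveProcessorCount=2 and -XX:MaxRAMPercentage=65.0 against the
// 1720Mi memory limit above, expect 2 processors and roughly 1.1 GiB of max heap.
fun main() {
    val rt = Runtime.getRuntime()
    println("availableProcessors = ${rt.availableProcessors()}")
    println("maxMemory           = ${rt.maxMemory() / (1024 * 1024)} MiB")
}
```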
zed — 03/15/2024, 9:47 AM

Rob Elliot — 03/15/2024, 10:07 AM

zed — 03/15/2024, 10:18 AM

Rob Elliot — 03/15/2024, 10:23 AM

zed — 03/15/2024, 10:27 AM

dave — 03/15/2024, 10:32 AM
zed — 03/15/2024, 10:40 AM
…http4k-format-jackson). We basically use forkhandles Result4k, in one case http4k-connect-redis, in one case the official mongo client and for other service calls the okhttp client.
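For context, a rough sketch of how those pieces commonly fit together in http4k; the Health type and the URL below are made up purely for illustration.

```kotlin
import org.http4k.client.OkHttp
import org.http4k.core.Body
import org.http4k.core.Method.GET
import org.http4k.core.Request
import org.http4k.format.Jackson.auto

// Hypothetical payload type, for illustration only.
data class Health(val status: String)

// Auto-marshalling body lens from http4k-format-jackson.
val healthLens = Body.auto<Health>().toLens()

fun main() {
    // OkHttp-backed http4k client (http4k-client-okhttp): just an HttpHandler.
    val client = OkHttp()
    val response = client(Request(GET, "http://localhost:8080/health"))
    println(healthLens(response))
}
```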
dave — 03/15/2024, 10:41 AM

zed — 03/15/2024, 10:45 AM

James Richardson — 03/15/2024, 12:19 PM

zed — 03/15/2024, 1:35 PM
(replying to “Your memory graph shows heap, but can you also show other pools?”)
You mean something like that?
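“Other pools” here means the non-heap pools (Metaspace, Compressed Class Space, CodeCache, and so on). As an illustrative aside, they can be dumped from inside the JVM via the standard JMX beans:

```kotlin
import java.lang.management.ManagementFactory

// Prints every memory pool the JVM exposes (heap and non-heap alike),
// e.g. Eden/Survivor/Old Gen, Metaspace, Compressed Class Space, CodeCache.
fun main() {
    for (pool in ManagementFactory.getMemoryPoolMXBeans()) {
        val usage = pool.usage
        println("${pool.name} (${pool.type}): used=${usage.used / (1024 * 1024)} MiB, max=${usage.max}")
    }
}
```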
zed — 03/15/2024, 2:01 PM
(replying to “we’ve run much heavier workloads with undertow in K8S in the past and haven’t ever found problems related to http4k”)
Our guess is that we have some kind of misconfiguration - that somehow the interaction of k8s resources, the chosen server (Undertow, Netty, etc.) and the JVM is not optimal. Are there any good practices when it comes to resources, Undertow config and JVM setup? There are so many degrees of freedom (which JVM version, which distribution, additional JAVA_OPTS configuration, k8s resources, etc.) ... 😳
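For orientation only, a minimal sketch of an http4k app served on Undertow with nothing tuned. Undertow derives its default IO/worker thread pool sizes from the processor count the JVM reports, which is where -XX:ActiveProcessorCount and the k8s CPU limit come into play.

```kotlin
import org.http4k.core.Method.GET
import org.http4k.core.Response
import org.http4k.core.Status.OK
import org.http4k.routing.bind
import org.http4k.routing.routes
import org.http4k.server.Undertow
import org.http4k.server.asServer

fun main() {
    val app = routes(
        "/ping" bind GET to { Response(OK).body("pong") }
    )
    // Undertow(8080) uses Undertow's defaults: IO threads roughly equal to the
    // reported processor count (minimum 2) and worker threads 8x that, so the
    // container CPU settings directly shape these pools.
    app.asServer(Undertow(8080)).start()
}
```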
James Richardson — 03/15/2024, 2:08 PM

zed — 03/15/2024, 2:12 PM

James Richardson — 03/15/2024, 2:13 PM

James Richardson — 03/15/2024, 2:32 PM