# ktor
s
Does anyone have any tips for decreasing app start times on the JVM? Ktor is fast for a JVM framework, but the JVM is pretty slow.
j
What's the goal? Start fast but be slower over the lifetime? Or start fast and stay fast? Or start fast and quit after 1 request? Something else?
d
You might be interested in GraalVM: https://ktor.io/docs/graalvm.html
j
You can use epsilon GC if you're short lived. You can use AppCDS to preload classes. You can use CIO instead of Jetty/Netty which doesn't take multiple seconds to start. You can disable C2 if you're short-lived.
s
@jw Fast start and stay fast - I'm running a JVM server in a serverless environment.
@Dominik Sandjaja I looked closely at GraalVM, but I've found maintaining it and getting everything working with it (especially Netty) a HUGE pain. Almost worse than ProGuard.
d
[...] stay fast - Running [...] in serverless environment
Isn't this a bit of a contradiction? Isn't the whole idea to only handle a single request and then quickly shut down again?
s
@Dominik Sandjaja Take a look at https://cloud.google.com/run It automatically scales with your traffic. The goal is to start up and then handle more concurrent requests until the traffic spike subsides.
You can use CIO instead of Jetty/Netty which doesn't take multiple seconds to start.
@jw This is interesting. I've been hesitant to use CIO, and stayed away from it because I assumed no one else was using it. Plus why would I use it when I could use something like Netty 😛 Maybe Netty is the problem though.
j
I use CIO but not for anything heavy. It's much easier to make a minimal JRE with jlink using CIO than Jetty/Netty.
s
I guess the reason I'm not using GraalVM is because of Netty now that I think about it
CIO looks to have worse performance than Netty, but I guess that's the tradeoff.
d
I guess it makes sense to evaluate whether you actually benefit from the performance of a longer-running JVM vs. an amazingly quickly starting application. Depending on your use case you could maybe even create two application variants: one for the longer-running "baseline" and a GraalVM-based version that handles peak traffic. But this is now more an architectural discussion than a #ktor one.
s
For now I'll give CIO a shot and also look into AppCDS.
c
JB started working on a ktor-compatible serverless toolkit, Kotless, but it looks like it hasn’t been updated in a while. It does mention (among other features) AWS Lambda auto-warming.
j
that name physically hurts
p
Surely can't be any worse than the (circa 2016) “Gogland” that thankfully was renamed
a
Here are some tips to keep serverless start times low:
1. Minimize JAR size. Eliminate bloated libraries like Jackson, Log4j, Spring, Guava, the official AWS SDK, and kotlin-reflect.
2. Minimize config fetching. If your app needs to make external calls to get secrets and other config params, embed them as env variables instead.
3. Optimize JVM JIT compilation for cold starts. Having fast start and fast run would be having your cake and eating it too. Set this env variable:
`JAVA_TOOL_OPTIONS: -XX:+TieredCompilation -XX:TieredStopAtLevel=1`
4. I have no idea why and how you're running Netty in a Lambda to serve your traffic. You would ideally use an adapter to convert Ktor to the serverless programming model, without running an embedded server. This can apparently be done with Kotless (as shown above), but http4k's smaller footprint and modular architecture gives it an edge in serverless environments. https://www.http4k.org/guide/reference/serverless/
5. Eliminate reflection on app init. No AOP or DI tools (unless done at compile time).
j
Optimize JVM AoT compilation for cold-starts.
This does not do anything for AOT. This is disabling C2. Both C1 and C2 are JIT compilers since they run at runtime, not ahead-of-time.
Both aspect-oriented and DI libraries can be used provided they run at compile-time and not runtime.
a
This does not do anything for AOT
Yes. I meant JiT. Corrected.
Both aspect-oriented and DI libraries can be used provided they run at compile-time and not runtime.
Good point
s
I have no idea why and how you're running Netty in a Lambda to serve your traffic.
lol! I'm not, hahaha. I'm using Cloud Run, which lets you run a Docker container, and the container stays running. Cloud Run scales at the container level. IMO it's much better than something like Cloud Functions or Lambda.
Having fast start and fast run would be having your cake and eating it too.
It'd be nice if I could focus on fast starts, and then transition to fast run.
V8 does a pretty nice job of starting way faster than Java, and then having similar performance.
j
yeah, it's unfortunate you can't stop at tier 1 for a while and then enable C2 (tier 4) after some period of time (say, 60 seconds). You could set the tier 2 and 3 thresholds to be equal to tier 4, and then set tier 4 at some level that approximates the amount of time your container would be running at 100% load.
s
It looks like you can change VM parameters at runtime -
com.sun.management.HotSpotDiagnosticMXBean#setVMOption(String, String)
I don't know if it would actually have an effect
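Whether a given flag can be changed that way is itself queryable: HotSpot only marks a small set of "manageable" flags as writeable at runtime. A minimal Java probe (a sketch, not from the thread) checks the tier 4 threshold:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import com.sun.management.VMOption;
import java.lang.management.ManagementFactory;

public class VmOptionProbe {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // setVMOption only succeeds for flags HotSpot reports as writeable;
        // for anything else it throws IllegalArgumentException.
        VMOption opt = bean.getVMOption("Tier4CompileThreshold");
        System.out.println(opt.getName() + " writeable=" + opt.isWriteable());
    }
}
```

The compile thresholds are ordinary product flags rather than manageable ones, so the probe reports them as read-only, which suggests `setVMOption` would reject them.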
You could set the tier 2 and 3 thresholds to be equal to tier 4 and then set tier 4
@jw What thresholds?
j
it's basically the number of invocations before the next level of compilation kicks in
s
how do I set those?
j
if you set tier 1 to 0 (immediately JIT compile all methods with no profiling data) and tiers 2, 3, and 4 to, say, 10,000 (or some number), then C2, which is the most advanced JIT compiler, only kicks in for genuinely hot methods
uhhhhh -Xint:Tier2Compilation=10000?
i have to google
s
I'll google for it
j
s
interesting, I remember reading about this a while back
I've been reading about this for a while. I'm really surprised I can't find actual documentation for this. I'm trying to understand the difference between `CompileThreshold` and `Tier2CompileThreshold`. For some reason `CompileThreshold` defaults to 10k, but `Tier2CompileThreshold` defaults to 0, with `Tier3CompileThreshold` at 2k. `Tier1CompileThreshold` doesn't exist, and based on the defaults, I'd assume that `CompileThreshold` isn't Tier1.
@jw So I could do:
`Tier2CompileThreshold=10000`
`Tier3CompileThreshold=10000`
`Tier4CompileThreshold=10000`
But how would I set Tier1 to 0? Based on the defaults, it seems like `CompileThreshold` is not Tier1.
j
I'm not sure. Maybe it can't be controlled? Or maybe it kicks in right away anyway?
s
It looks like Netty finally fixed their graalvm issue yesterday - https://github.com/netty/netty/pull/13158