# http4k
d
Hi http4kers! Quick update for those of you running on Serverless AWS Lambda. As well as the existing support for the various Lambda runtimes (standard AWS (x86 and Graviton), custom http4k-JVM and GraalVM), we've just pushed support for running the http4k-JVM runtime on Graviton2 via the official http4k Docker Images. We've also upgraded the underlying infra, so we're on the latest versions of Amazon Linux, Java (11, 17, 19) and Graal πŸ™ƒ This means you can take advantage of both the cost savings of Graviton and the http4k runtime to get better performance and a lower memory footprint for less πŸ’΅, even if you don't want to go all the way to compiling to native with GraalVM. There's a fully working example in the examples repo which uses Pulumi to set up the infrastructure, or you can go straight to the amd64 docker image repo to see how it works from the command line πŸ™‚
πŸ‘Œ 2
πŸŽ‰ 3
a
What is the advantage of running on the custom runtime? Does it just strip jackson from the classpath? Is this a worthwhile tradeoff against the cold-start hit you presumably get from using a docker image instead of an official runtime?
d
From our (admittedly fairly unscientific πŸ˜‰) experiments, it starts up significantly faster from cold than the official runtime. And it's not just the embedded Jackson initialisation that's bloating out the standard release, both in terms of size and launch speed; my memory is hazy, but I'm pretty sure I also saw log4j in there when I was investigating, which didn't help. The custom adapter layer for the AWS events that we have built into the http4k runtime is based on Moshi, so it's much smaller/faster, and there is no reflection at all required. There is definitely an experiment to be done to compare all these things - I have a half-finished repo where we compare across the AWS/http4k/Graal runtimes, various serverless implementations for http4k, Spring, Micronaut, Quarkus, and the AWS SDK vs http4k-connect. SnapStart will add yet another variant to compare - or it might just make the entire comparison moot! πŸ˜‚ There's also a trade-off between lambda size, memory, cold-start time, build time (aka cycle time), and performance. I think it's all about options TBH - what your use-case is and how that all fits into the cost. As always, YMMV, so for us it's all about giving people options! BTW - the docker images above don't produce an image - they output a self-contained function ZIP file which runs on Amazon Linux 2; whether that ends up in a docker image inside Lambda, I don't know πŸ™‚.
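To make the "adapter layer" idea concrete, here's a simplified, library-free Kotlin sketch of what such a layer does: translate an (already-parsed) API Gateway V2 event payload into a plain HTTP request value, with a hand-written, reflection-free mapping. The names below are illustrative only and are NOT http4k's actual API.

```kotlin
// Illustration only: a minimal, reflection-free event adapter.
// SimpleRequest and adaptApiGatewayV2Event are made-up names, not http4k's API.

data class SimpleRequest(
    val method: String,
    val path: String,
    val headers: Map<String, String>,
    val body: String
)

// Hand-rolled mapping from the parsed API Gateway V2 event JSON.
// No reflection, no annotation scanning: just explicit field lookups.
fun adaptApiGatewayV2Event(event: Map<String, Any?>): SimpleRequest {
    @Suppress("UNCHECKED_CAST")
    val context = event["requestContext"] as? Map<String, Any?> ?: emptyMap()
    @Suppress("UNCHECKED_CAST")
    val http = context["http"] as? Map<String, Any?> ?: emptyMap()
    @Suppress("UNCHECKED_CAST")
    val headers = (event["headers"] as? Map<String, String>).orEmpty()
    return SimpleRequest(
        method = http["method"] as? String ?: "GET",
        path = event["rawPath"] as? String ?: "/",
        headers = headers,
        body = event["body"] as? String ?: ""
    )
}
```

An explicit mapping like this is why a Moshi-based (or hand-written) adapter can avoid the classpath scanning and reflective construction that bloat cold starts.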
a
Well, I don't even have SnapStart in ca-central-1, so I'll give it a shot! Any particular reason you stuck to Java 11? I've been itching to upgrade my lambdas to 17.
d
No reason. There are images for 17 and 19 as well πŸ™ƒ
Will be interesting to hear your experiences, so please report back! πŸ™ƒ
a
Oh, I think I see how it works. The docker image isn't the lambda runtime; it's just a script that packages the jar to run with a bundled jvm on an AL2 runtime. I'll have to see if I can get this to work with SAM. Maybe tomorrow.
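For anyone else wanting to wire this up with SAM: a hypothetical template fragment (the resource name, ZIP path and handler value are placeholders, and I'm assuming the bundled JVM honours `JAVA_TOOL_OPTIONS`) deploying the generated function ZIP onto the `provided.al2` custom runtime might look roughly like this:

```yaml
# Sketch only - adjust CodeUri to wherever the image drops the ZIP.
Resources:
  MyHttp4kFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: provided.al2
      CodeUri: build/function.zip   # the self-contained ZIP from the docker image
      Handler: bootstrap            # required by SAM; ignored by custom runtimes
      MemorySize: 2048
      Architectures: [arm64]        # Graviton2; use x86_64 otherwise
      Environment:
        Variables:
          # JIT tuned for cold start (C1 only)
          JAVA_TOOL_OPTIONS: "-XX:+TieredCompilation -XX:TieredStopAtLevel=1"
```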
So, here are the results of my tests. My testbed was a small but fully featured REST API with Nimbus JWT authorization and a Swagger UI. There is a 7 MB reflectionless variant (with kotlinx-serialization), and a 10 MB variant with moshi-kotlin. All functions were running with 2048 MB memory and the JIT optimized for cold start (`-XX:+TieredCompilation -XX:TieredStopAtLevel=1`).

- Kotlinx-serialization on java11 runtime: java11 runtime init: 350 ms, app init: 1100 ms, request: 50 ms
- Kotlinx-serialization on custom runtime: provided.al2 runtime init: 380 ms, custom runtime init: 500 ms, app init: 550 ms, request: 60 ms
- Moshi-Kotlin on java11 runtime: java11 runtime init: 250 ms, app init: 1500 ms, request: 60 ms
- Moshi-Kotlin on custom runtime: provided.al2 runtime init: 350 ms, custom runtime init: 500 ms, app init: 1000 ms, request: 60 ms

While David has demonstrated some truly impressive (125 ms) cold-start times, I haven't been able to get the same improvement for my function. I would be very interested to see what results others can produce.

See updated results below. I was mistaken about the example's impressive 125 ms cold-start time; it was actually 1100 ms.
d
Thanks @Andrew O'Hara! Those super-fast times I demonstrated have admittedly been with a super-lightweight function (no http4k-contract or outbound HTTP calls involved), but we did use Moshi with reflection in there. Further investigation is needed to see what's going on, because the disparity is weird πŸ€”
a
Yeah, I should try working from the other way around: with a minimal function and slowly adding more components.
So here are the (updated) results of my tests, following some further optimizations I discovered with David. The built-in java11 runtime is still hard to beat if you have optimized dependencies. I'm still very interested to see if people can get better results with the custom runtime. An interesting thing to note is that the `Java8HttpClient` has a much better cold-start time than the (usually recommended) `JavaHttpClient`. And of course, eliminating reflection will always result in better performance. I will, however, note that if you need features from newer JREs (beyond 11), the custom runtime is a great way to make them available on Lambda. https://docs.google.com/spreadsheets/d/1bMM6YygmCZUeCHeB326bUIZcoaeQWNbU1x78DGX6hYM/edit?usp=sharing
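To illustrate the "eliminating reflection" point with a toy, library-free Kotlin sketch (not http4k code): constructing the same value directly vs. via `java.lang.reflect`, which is the kind of work reflection-based serializers do per class (on top of classpath scanning) during a cold start.

```kotlin
// Toy demonstration of reflection overhead: same result, different cost.
// The class and function names here are made up for illustration.

data class User(val name: String)

// Direct construction: resolved at compile time, trivially JIT-friendly.
fun direct(): User = User("alice")

// Reflective construction: constructor lookup + boxing + access checks
// at runtime, which is what reflection-based serializers pay per field/class.
fun reflective(): User =
    User::class.java.getDeclaredConstructor(String::class.java).newInstance("alice")
```

Both produce an equal `User`, but only the reflective path incurs runtime lookup costs, which is why reflectionless variants tend to show better app-init times in the numbers above.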