asad.awadia (03/20/2019, 3:03 AM)
If I have a simple REST API on a server, what's the best way to test how many requests per second it can handle? I'm trying to load test it via bench/vegeta/JMeter etc., but I think my client machine runs out of resources before I reach the peak.

Xavier Hanin (03/20/2019, 6:05 AM)
I suggest having a look at gatling.io, which is asynchronous and doesn't use threads for the virtual users, so it's very cheap on resources: https://gatling.io/docs/current/
For very simple tests I've also found ab useful in the past: http://httpd.apache.org/docs/2.2/en/programs/ab.html
👍 1

asad.awadia (03/20/2019, 11:12 AM)
Hmm, I've tried both.
I don't think it's the tooling; I think it's the server/client machine tuning that isn't right.

Shawn A (03/20/2019, 3:24 PM)
What does your environment look like? Are you able to spin up additional compute resources in "the cloud"? I've used locust on top of docker swarm to generate 5k TPS no problem.

asad.awadia (03/20/2019, 3:26 PM)
Right now everything is just on local laptops.
Do you have an example?

Shawn A (03/20/2019, 3:26 PM)
That's not a very good way to test the throughput of your app, unfortunately.

asad.awadia (03/20/2019, 3:26 PM)
Yeah, I know lol
That's why I'm posting here.

Shawn A (03/20/2019, 3:27 PM)
You should deploy onto whatever hardware you plan to host your app on in production

asad.awadia (03/20/2019, 3:27 PM)
My load tester runs out of resources before I reach maximum throughput on the server.

Shawn A (03/20/2019, 3:28 PM)
Is your server running locally too? Or is it deployed somewhere?
You'll want to move the load testing tool off your local machine and onto a server or cluster of servers wherever you host your stuff.

asad.awadia (03/20/2019, 3:31 PM)
I have tried running the server on a simple GCP instance
And then used Kubernetes to scale up a cluster to spam it
But the cluster still ran out of resources before the server started failing
The instance is a tiny micro instance
And the server is just one endpoint that returns 200
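For context, the whole server under test is roughly this; a minimal Ktor sketch, assuming a current Ktor version with the Netty engine (port and route here are arbitrary):
```kotlin
import io.ktor.http.*
import io.ktor.server.application.*
import io.ktor.server.engine.*
import io.ktor.server.netty.*
import io.ktor.server.response.*
import io.ktor.server.routing.*

// Single endpoint that does no work and just answers 200 OK.
fun main() {
    embeddedServer(Netty, port = 8080) {
        routing {
            get("/") { call.respond(HttpStatusCode.OK) }
        }
    }.start(wait = true)
}
```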

Shawn A (03/20/2019, 3:48 PM)
It returns a static hard coded 200? It does no other processing?
Why would you want to load test that?

asad.awadia (03/20/2019, 3:48 PM)
Because I want to compare two frameworks.

Shawn A (03/20/2019, 3:50 PM)
Just pick one based on the features you need. You'll be fine.
👍🏾 2

earroyoron (03/20/2019, 7:15 PM)
If you run out of resources, it just sounds like you need a non-blocking client to attack the service… some Node.js tool maybe?
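The same idea works on the JVM; a rough Kotlin sketch of a non-blocking client firing a batch of async requests, assuming JDK 11+ and a local endpoint (not a substitute for gatling/vegeta, just to show the client doesn't need a thread per request):
```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.util.concurrent.CompletableFuture
import java.util.concurrent.atomic.AtomicLong

fun main() {
    val client = HttpClient.newHttpClient()
    val request = HttpRequest.newBuilder(URI.create("http://localhost:8080/")).GET().build()
    val ok = AtomicLong()

    // Fire 10k requests asynchronously; completions run on a small internal pool,
    // so the client side stays cheap compared to thread-per-request tools.
    val futures = (1..10_000).map {
        client.sendAsync(request, HttpResponse.BodyHandlers.discarding())
            .thenAccept { resp -> if (resp.statusCode() == 200) ok.incrementAndGet() }
    }
    CompletableFuture.allOf(*futures.toTypedArray()).join()
    println("2xx responses: ${ok.get()}")
}
```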

dave (03/21/2019, 2:12 PM)
@asad.awadia are these frameworks already covered by TechEmpower? Might save you some effort 🙂

asad.awadia (03/21/2019, 2:34 PM)
@dave they are
But I can't even come close to those results
And I also want to do my own verification

dave (03/21/2019, 2:37 PM)
Well, they do have a customised setup (which has taken years to hone into the state it's in now). Since all frameworks are run on the same hardware under the same* conditions, the results are pretty much relative, so maybe you've already got a pretty good answer as to which one is fastest?
They also do a variety of tests, which you'd also have to replicate for real-world usage (much more comprehensive than simply returning a 200 🙂).

asad.awadia (03/21/2019, 2:57 PM)
I honestly just want to have a number for reference: a simple Vert.x or Ktor or http4k server with one endpoint returning 200 on a normal AWS/GCP instance can handle X requests per second.
That's all I want, regardless of whether that's useful or not :/
😂 1
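For comparison, the equivalent single-endpoint server in Vert.x is about as small; a rough sketch, assuming vertx-core on the classpath (the port is arbitrary):
```kotlin
import io.vertx.core.Vertx

fun main() {
    // One HTTP server, one handler, always 200 with no body.
    Vertx.vertx()
        .createHttpServer()
        .requestHandler { req -> req.response().setStatusCode(200).end() }
        .listen(8080)
}
```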