# coroutines
c
how can i add throttling to a code like this:
(0 until entries).map {
    async {
        repo.create(connection, user)
    }
}.forEach { it.await() }
t
delay
?
c
a delay would work here, but I’m asking what’s the coroutines way of doing throttling. For example, I want to create 10,000 entries but make sure that the async create invocations are limited to 100
t
fixedThreadPool maybe then? Still not the coroutines way, but should work
c
Could use .chunked(), but tbh throttling comes out of the box anyway, as the underlying implementation uses finite threads.
c
i want to throttle the user create calls and those are non blocking
c
What does repo.create return?
c
it creates a row in a database and returns a user object. I’ve now ended up with this:
test("bulk inserts") {
    val channel = Channel<Deferred<User>>(capacity = 40)
    val entries = 1000
    launch {
        connection.transaction {
            repeat(entries) {
                channel.send(async {
                    repo.create(connection, user)
                })
            }
        }
    }
    repeat(entries) { channel.receive().await() }
}
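[Editor’s note: the coroutines-native answer to the original question is kotlinx.coroutines.sync.Semaphore, which caps how many coroutines are inside a section at once. A minimal sketch, with the database call replaced by a stand-in since `repo.create` isn’t available here:]

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Semaphore
import kotlinx.coroutines.sync.withPermit

fun main() = runBlocking {
    // At most 100 create calls in flight at any moment.
    val limit = Semaphore(permits = 100)
    val entries = 10_000
    val results = (0 until entries).map { i ->
        async {
            limit.withPermit {
                // Stand-in for repo.create(connection, user).
                i
            }
        }
    }.awaitAll()
    println(results.size)
}
```

[Unlike chunking, a semaphore keeps exactly `permits` calls running: as soon as one finishes, the next coroutine acquires the freed permit.]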
c
I presume you’re worried about opening too many sockets with the db. If you just want to limit that, you could probably do something like this.
someList.chunked(maxSize).flatMap { it.map { async { repo.create(connection, user) } }.awaitAll() }
Although you seem to want to use transactions as well. Something tells me parallelising requests within a transaction isn’t going to be feasible. Wouldn’t the transaction be happening on a single db connection?
c
it’s a non-blocking R2DBC client and I just wanted to check if it’s any faster with a bit of concurrency over one connection. And I need to throttle it because the R2DBC PostgreSQL driver has an internal queue of 256 pending requests per connection and throws when it reaches that limit.
c
Ah. In general, NIO isn’t going to be faster in terms of request time as there’s overhead involved. It’s going to allow you to scale vertically though and handle more connections as you’re no longer handling each call on a separate thread.