# announcements
l
you can use the `async {}` coroutine builder and then create a list (of, say, max 100 elements) from the resulting `Deferred`s, then `.awaitAll()` and process them. do this for as many entries as you need, maybe by chunking your entries into blocks of max length 100 beforehand and then just
`map { async { api.fetchEntry(it) } }.awaitAll()`
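A minimal sketch of that chunked pattern, assuming a suspending `api.fetchEntry(id)` call as in the snippet above (the `Api` and `Entry` types here are hypothetical stand-ins):

```kotlin
import kotlinx.coroutines.async
import kotlinx.coroutines.awaitAll
import kotlinx.coroutines.coroutineScope

// Hypothetical API surface matching the snippet above.
interface Api {
    suspend fun fetchEntry(id: String): Entry
}
data class Entry(val id: String, val payload: String)

// Fetch everything in chunks of at most 100: each chunk is launched with
// async and fully awaited before the next chunk starts.
suspend fun fetchAll(api: Api, ids: List<String>): List<Entry> = coroutineScope {
    ids.chunked(100).flatMap { chunk ->
        chunk.map { id -> async { api.fetchEntry(id) } }.awaitAll()
    }
}
```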
f
Ok, so you would basically chunk the work to keep the memory footprint low. Would work. It doesn't achieve maximum performance (if e.g. only a single entry is slow, the whole chunk waits for it), but it's already a good start! Thanks
l
You could use channels or something to determine how much memory is used dynamically and then dynamically change the number of requests you do or process in parallel. This would get pretty complex though
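A rough sketch of what such a worker pool could look like with channels, reusing the hypothetical `Api`/`Entry` types from the earlier sketch; here a fixed `workers` count bounds the in-flight requests (making that count adapt to memory usage at runtime would be the complex part):

```kotlin
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.launch

// Rough sketch: a fixed number of worker coroutines pull ids from a channel,
// so at most `workers` requests are in flight at once. Note that results
// arrive in completion order, not input order.
suspend fun fetchAllWithWorkers(api: Api, ids: List<String>, workers: Int = 100): List<Entry> =
    coroutineScope {
        val input = Channel<String>()
        val output = Channel<Entry>()
        // Feed all ids into the input channel, then close it so the workers terminate.
        launch {
            ids.forEach { input.send(it) }
            input.close()
        }
        // Each worker fetches entries until the input channel is exhausted.
        repeat(workers) {
            launch {
                for (id in input) output.send(api.fetchEntry(id))
            }
        }
        // Collect exactly as many results as there were ids.
        List(ids.size) { output.receive() }
    }
```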
f
I don't think it needs to be so sophisticated. A max number of parallel requests should work. In multi-threading you would probably limit the executor size / have a semaphore. Maybe `buffer()` in the flow API works similarly.
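For reference, a sketch of the semaphore idea with kotlinx.coroutines' `Semaphore` (again using the hypothetical `Api`/`Entry` types): everything is launched up front, but only `maxConcurrent` fetches run at a time, so a single slow entry only holds one permit instead of stalling a whole chunk:

```kotlin
import kotlinx.coroutines.async
import kotlinx.coroutines.awaitAll
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.sync.Semaphore
import kotlinx.coroutines.sync.withPermit

// Sketch of the semaphore variant: all requests are created up front, but the
// semaphore caps how many fetches actually run concurrently. Results keep the
// input order.
suspend fun fetchAllWithSemaphore(api: Api, ids: List<String>, maxConcurrent: Int = 100): List<Entry> =
    coroutineScope {
        val semaphore = Semaphore(maxConcurrent)
        ids.map { id ->
            async { semaphore.withPermit { api.fetchEntry(id) } }
        }.awaitAll()
    }
```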