# coroutines
b
I've got a big JSON file that I want to read. For each entry in an array that is read lazily, I want to make a request to an external system. However, I don't want to send all the requests at once but use a maximum of x requests at a time, and I also don't want to evaluate more objects than I can currently send. Is there an easy way to do that?
j
You can send all values to a channel and consume the channel from a set of x coroutines. If the channel is a rendezvous channel (zero buffer), the producer will be suspended while the consuming coroutines are busy, meaning you won't generate more values than you should.
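A minimal sketch of that channel approach. The `readNextElement()` and `executeRequest()` functions come from the question; the `Element` type and the stub bodies are hypothetical stand-ins, not a real parser or client:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel

// Hypothetical stand-ins for the pieces described in the question.
class Element
fun readNextElement(): Element? = TODO("lazily parse the next array entry; null at the end")
suspend fun executeRequest(element: Element): Unit = TODO("call the external system")

suspend fun processAll(maxConcurrency: Int) = coroutineScope {
    // Rendezvous channel (no buffer): send() suspends until a worker is
    // ready, so no element is parsed before it can actually be handed off.
    val elements = Channel<Element>(Channel.RENDEZVOUS)

    // Producer: reads the JSON array lazily, one entry at a time.
    launch {
        while (true) {
            val element = readNextElement() ?: break
            elements.send(element) // suspends while all workers are busy
        }
        elements.close() // lets the workers' for-loops terminate
    }

    // Exactly maxConcurrency workers, so at most that many requests in flight.
    repeat(maxConcurrency) {
        launch {
            for (element in elements) executeRequest(element)
        }
    }
}
```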
b
I would have expected an API something like this:
```kotlin
generateSequence { readNextElement() }
    .filterNotNull()
    .parallel(8)
    .forEach {
        executeRequest(it)
    }
```
is that Flow?
j
Yes, but there is no built-in operator yet for concurrent processing of elements: https://github.com/Kotlin/kotlinx.coroutines/issues/1147
b
thank you
j
There are many examples in this issue about how you could go about it, though, if you want to express it concisely with a flow. It should be doable if you know your constraints precisely.
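One workaround along those lines is `flatMapMerge` with an explicit concurrency limit (note it is still marked `@FlowPreview`, and it buffers a few elements internally, so the upstream may run slightly ahead of the in-flight requests). A sketch reusing the `Element` / `readNextElement` / `executeRequest` stubs from the channel example above:

```kotlin
import kotlinx.coroutines.FlowPreview
import kotlinx.coroutines.flow.*

@OptIn(FlowPreview::class)
suspend fun processAllWithFlow() {
    flow {
        while (true) {
            val element = readNextElement() ?: break
            emit(element) // lazily parse one entry at a time
        }
    }
        .flatMapMerge(concurrency = 8) { element ->
            // Wrap each request in a one-element flow; flatMapMerge runs at
            // most `concurrency` of them at once and suspends the upstream
            // while it is at that limit.
            flow { emit(executeRequest(element)) }
        }
        .collect()
}
```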
c
My library Pedestal has an operator to do this: https://pedestal.opensavvy.dev/api-docs/cache/opensavvy.cache/batching-cache.html (internally, it uses the Channel solution mentioned above)
b
thank you
loads everything into memory though, correct?
c
It depends on the way you initialize it. In the example shown in the page I linked, only the elements in the in-progress batch are loaded into memory. You can use
```kotlin
batchingCache { … }
    .storedInMemory()
```
if you want to store all requested elements in memory.
b
ah, great, thank you
c
I'm planning on reworking that library in the future (with backwards-compatibility), so if you do try it out and have any feedback, please create an issue about it or mention it in #C078Z1QRHL3 🙏
b
will do, I'm working on a PoC right now so I'm just adding FIXMEs for now