# coroutines
e
@littlelightcz You’ll need to offload your blocking ops to a thread-pool. First, define a pool:
val ioPool = newFixedThreadPoolContext(n, "ioPool")
Then you can wrap your IO operations into run(ioPool) { ... }. This will be a suspending operation from the invoker's standpoint; the actual execution happens in the thread pool you've defined, so the invoker thread (the UI thread, or some other event-loop thread like Netty or Vert.x) is not blocked.
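Putting the two snippets together, here is a minimal sketch of that pattern, assuming a recent kotlinx.coroutines build (where run(context) { ... } is spelled withContext) and a hypothetical blocking call loadUserBlocking():
```kotlin
import kotlinx.coroutines.*

// Dedicated pool for blocking IO; size it for your workload (close it when done in real code).
val ioPool = newFixedThreadPoolContext(4, "ioPool")

// Hypothetical blocking call, e.g. JDBC or a classic HttpURLConnection request.
fun loadUserBlocking(id: Int): String {
    Thread.sleep(500) // stands in for real blocking IO
    return "user-$id"
}

// Suspending wrapper: the caller's thread is freed while the pool does the blocking work.
suspend fun loadUser(id: Int): String = withContext(ioPool) {
    loadUserBlocking(id)
}

fun main() = runBlocking {
    // runBlocking only stands in for a real caller (UI handler, server request, ...).
    println(loadUser(42))
}
```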
l
Thanks, I was just wondering whether it would be possible to bundle e.g. 5 parallel HTTP requests into just 2 threads at once. People in the #general channel suggested using the java.nio package, which should be non-blocking, so I plan to experiment with it this weekend and see what happens 🙂
e
You can do it if you use an async HTTP client library
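As a sketch of the "5 requests on 2 threads" case, assuming Java 11's HttpClient as the async client, the await() extension from kotlinx-coroutines-jdk8, and placeholder URLs:
```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import kotlinx.coroutines.*
import kotlinx.coroutines.future.await

// Two threads are plenty: sendAsync hands the IO to the client's own non-blocking
// machinery, so each coroutine only occupies a thread for a brief moment.
val twoThreads = newFixedThreadPoolContext(2, "http")

val client: HttpClient = HttpClient.newHttpClient()

suspend fun fetch(url: String): Int {
    val request = HttpRequest.newBuilder(URI.create(url)).build()
    // await() suspends (without blocking a thread) until the CompletableFuture completes.
    val response = client.sendAsync(request, HttpResponse.BodyHandlers.ofString()).await()
    return response.statusCode()
}

fun main() = runBlocking {
    val urls = List(5) { "https://example.com/item/$it" } // placeholder URLs
    val codes = withContext(twoThreads) {
        urls.map { async { fetch(it) } }.awaitAll() // all 5 requests in flight at once
    }
    println(codes)
}
```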
s
Check out OkHttp!
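If OkHttp is the client, one common pattern (a sketch, not OkHttp's own API; assuming OkHttp 4's Kotlin artifact) is to bridge its callback-based enqueue into a suspending call with suspendCancellableCoroutine:
```kotlin
import java.io.IOException
import kotlin.coroutines.resume
import kotlin.coroutines.resumeWithException
import kotlinx.coroutines.suspendCancellableCoroutine
import okhttp3.*

// Bridges OkHttp's callback API to coroutines: no thread is blocked while waiting.
suspend fun Call.await(): Response = suspendCancellableCoroutine { cont ->
    enqueue(object : Callback {
        override fun onResponse(call: Call, response: Response) = cont.resume(response)
        override fun onFailure(call: Call, e: IOException) = cont.resumeWithException(e)
    })
    // If the coroutine is cancelled, cancel the underlying HTTP call as well.
    cont.invokeOnCancellation { cancel() }
}

suspend fun fetchBody(client: OkHttpClient, url: String): String? {
    val request = Request.Builder().url(url).build()
    return client.newCall(request).await().use { it.body?.string() }
}
```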
l
Thanks for the tips. I already tried an async NIO-based HTTP client from Apache, but I was rather unsuccessful at achieving the results I was expecting (maybe because it's my first time with NIO and I don't know how to use it properly 😀). I will give it one more try with OkHttp and see 🙂. Anyway, maybe it would be a good idea if one day e.g. Roman could write a blog post with an example showing how to do it properly. It is nice that you can launch a million coroutines at once without killing your RAM, but that use case feels more fictional to me than something I would want to do in reality. Mostly what I want to parallelize are IO operations (especially network ones). So, given a limited thread pool size, it would be great if such an article could demonstrate what benefits we gain by using coroutines compared to threads. Can we launch more IOs simultaneously? Will it take less RAM? (And how much less, if it does?) Etc. The only noticeable benefit I've seen so far is that a coroutine launches faster, so I can imagine that if I had a lot of tasks that each take only a little time to complete, using coroutines could improve throughput significantly. But is that it, or can we get much more out of it? 🙂 That's the question that interests me