# squarelibraries
d
Right, that was one thing that I probably got wrong the first time, thanks. How about the concurrent download in pieces? Do you have any recommendations?
g
I updated sample with concurrent downloading.
But not in pieces. You're right, to start downloading from a particular part you can use
`source.skip(offset)`
(though in practice that doesn't make much sense, see my comment below). You'll probably also need `appendingSink` if you want to append data to the same file
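For illustration, here is what reading from an offset via skipping looks like, written with plain `java.io` as a self-contained stand-in for Okio's `source.skip(offset)` (`readFromOffset` is a hypothetical helper, not a library API). The important behavior is the same in both: on a response-body stream, skipping still downloads and discards those bytes rather than seeking.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: read from an offset by skipping bytes first. On a network
// stream, skip() still consumes (downloads and discards) those bytes;
// it does not seek. readFromOffset is a hypothetical helper.
public class SkipDemo {
    static byte[] readFromOffset(InputStream in, long offset) throws IOException {
        long remaining = offset;
        while (remaining > 0) {
            long skipped = in.skip(remaining); // discards up to `remaining` bytes
            if (skipped <= 0) break;           // stream ended or cannot skip further
            remaining -= skipped;
        }
        return in.readAllBytes();
    }

    public static void main(String[] args) throws IOException {
        InputStream src = new ByteArrayInputStream("0123456789".getBytes());
        System.out.println(new String(readFromOffset(src, 4))); // prints "456789"
    }
}
```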
d
Is that better than, or equivalent to, specifying the Range header in the request and doing multiple requests? Also, in one request can I have multiple buffers reading from different parts of the same source?
g
Oh, I see what you mean about downloading different parts in parallel
You can write such code of course. It's a pretty advanced feature, but I don't see any problems with it, just do multiple requests under the hood. I've never done it, but I don't see why it wouldn't be possible. It might be more efficient to download to multiple files and then concatenate them
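The download-to-parts-then-concatenate step could be sketched like this with plain `java.io` (`concatParts` is a hypothetical helper, not an existing API; each part file would have been written by one of the concurrent requests):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

// Sketch: merge part files (downloaded concurrently) into the final
// file, in order. concatParts is a hypothetical helper, not a library API.
public class ConcatDemo {
    static void concatParts(File[] parts, File dest) throws IOException {
        try (FileOutputStream out = new FileOutputStream(dest)) {
            for (File part : parts) {
                try (FileInputStream in = new FileInputStream(part)) {
                    in.transferTo(out); // append this part's bytes to dest
                }
            }
        }
    }
}
```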
What do you mean by "better or equivalent to specifying the Range header in the request and doing multiple requests"? You need support from your server if you want to do that efficiently, not just skip bytes
> Also in one request I can have multiple buffers reading from different parts of the same source?
I don't think so. You need performance, as I understand, so in this case I would make multiple requests with a Range header. You can write some convenient API for that (with automatic part splitting, automatic concatenation, progress merging and so on), but it's not something you can do within one request, that's just a stream of bytes
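The "automatic split parts" piece could look like this (a minimal sketch; `splitRanges` is a hypothetical helper, not a library API). Each range maps to one request's `Range: bytes=start-end` header, where both ends are inclusive:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: split a known content length into byte ranges for HTTP Range
// headers ("bytes=start-end", both ends inclusive). splitRanges is a
// hypothetical helper, not a library API.
public class RangeSplit {
    static List<long[]> splitRanges(long contentLength, int parts) {
        long chunk = contentLength / parts;
        List<long[]> ranges = new ArrayList<>();
        for (int i = 0; i < parts; i++) {
            long start = i * chunk;
            // The last part absorbs any remainder bytes.
            long end = (i == parts - 1) ? contentLength - 1 : start + chunk - 1;
            ranges.add(new long[] { start, end });
        }
        return ranges;
    }

    public static void main(String[] args) {
        // e.g. a 10-byte file in 3 parts
        for (long[] r : splitRanges(10, 3)) {
            System.out.println("Range: bytes=" + r[0] + "-" + r[1]);
        }
        // prints:
        // Range: bytes=0-2
        // Range: bytes=3-5
        // Range: bytes=6-9
    }
}
```

The server signals support by answering `206 Partial Content`; the total size can come from a prior HEAD request's `Content-Length`.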
d
I'm using S3 from DreamObjects and DigitalOcean Spaces, which both support the Range header
Oh, so skip still reads the bytes...
So Range it'll be, I guess, thanks!
g
Yes, skip reads and discards bytes