# coroutines

camdenorrb

07/30/2020, 12:07 AM
I'm trying to use Channels to make a DirectByteBufferPool, but I have been experiencing a lot of issues.
ByteBufferPool: https://github.com/camdenorrb/Netlius/blob/master/src/main/kotlin/me/camdenorrb/netlius/net/DirectByteBufferPool.kt
Testing file: https://github.com/camdenorrb/Netlius/blob/master/src/main/kotlin/me/camdenorrb/netlius/Main.kt
It refuses to go higher than the specified BUFFER_COUNT, when I would ideally like to support thousands of clients at once. For some reason Channel doesn't seem to be queuing up all the receivers as expected. I'm not fully sure what exactly is wrong, and I have been struggling to do anything about it. Any help is much appreciated.

octylFractal

07/30/2020, 12:10 AM
not a solution, but a recommendation -- the ByteBufferPool should ideally have its own scope rather than using the global scope. That would allow it to take on a proper lifecycle.
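A minimal sketch of that suggestion, assuming a Channel-backed free list (the class shape is hypothetical, not the actual Netlius code):

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.SupervisorJob
import kotlinx.coroutines.cancel
import kotlinx.coroutines.channels.Channel
import java.nio.ByteBuffer

// Hypothetical sketch, not the actual Netlius code: the pool owns its own
// CoroutineScope instead of relying on GlobalScope, giving it a lifecycle.
class DirectByteBufferPool(bufferCount: Int, bufferSize: Int) {

    // Any maintenance coroutines (e.g. a periodic clear) belong in this
    // scope, so close() can cancel them all at once.
    private val scope = CoroutineScope(SupervisorJob())

    // Bounded channel acting as the free list of pre-allocated buffers.
    private val buffers = Channel<ByteBuffer>(bufferCount).apply {
        repeat(bufferCount) { trySend(ByteBuffer.allocateDirect(bufferSize)) }
    }

    // Suspends until a buffer is returned to the pool.
    suspend fun take(): ByteBuffer = buffers.receive()

    suspend fun give(buffer: ByteBuffer) {
        buffer.clear()
        buffers.send(buffer)
    }

    // Ends the pool's lifecycle: cancels its coroutines and closes the channel.
    fun close() {
        scope.cancel()
        buffers.close()
    }
}
```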

camdenorrb

07/30/2020, 12:12 AM
Alright done @octylFractal, no change in result of course

octylFractal

07/30/2020, 12:25 AM
personally I don't have a clue right now, will take another look later if it's still unsolved, as everything seems "right"

camdenorrb

07/30/2020, 12:25 AM
Alright

julian

07/30/2020, 12:37 AM
@camdenorrb It would be easier to assist you if you provided code that isolated the coroutine behavior that's causing problems, without the Netlius or NIO dependencies. If it's not possible to reproduce the issue that way, then you know the cause has nothing to do with how you're using coroutines.
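In that spirit, a standalone reproduction might look like the following (an assumed shape of the problem, with no Netlius or NIO involved): a bounded Channel used as a pool, more consumers than buffers, and no returns.

```kotlin
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking
import java.nio.ByteBuffer

// Assumed shape of the problem: a bounded Channel used as a buffer pool,
// drained by consumers that never give the buffers back.
fun main() = runBlocking {
    val bufferCount = 4
    val pool = Channel<ByteBuffer>(bufferCount)
    repeat(bufferCount) { pool.send(ByteBuffer.allocateDirect(64)) }

    repeat(bufferCount * 2) { i ->
        launch {
            // The first four consumers succeed; the rest suspend forever,
            // so runBlocking never completes and the program hangs.
            val buffer = pool.receive()
            println("consumer $i got $buffer")
            // Bug under test: buffer is never sent back to the pool.
        }
    }
}
```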

camdenorrb

07/30/2020, 12:54 AM
@julian While I wasn't able to isolate the problem in a separate project, I just found out that replacing the receive part with direct allocations fixes it, so it has to be a coroutine issue in some regard. Not to mention that the result changes based on the BUFFER_SIZE.
Right now I'm kinda brain dead, but if I need to I'll try again later
Isolated it down to this one
Here is a full thread dump: https://hastebin.com/cibitulapo.pl I was going through the debugger and it appears to be stuck on Unsafe.park
Looks like there are more DirectByteBuffers than expected

octylFractal

07/30/2020, 2:51 AM
do note that some protocols for debugging also use DirectByteBuffers; you would need to see if they are being held by your GC roots and not by something else
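One way to check the overall count is the JDK's own BufferPoolMXBean, which reports every direct buffer in the process, not just the pool's:

```kotlin
import java.lang.management.BufferPoolMXBean
import java.lang.management.ManagementFactory

// Standard JDK API: reports how many direct buffers the JVM holds overall,
// including buffers created by NIO internals or debug tooling.
fun printDirectBufferStats() {
    val direct = ManagementFactory.getPlatformMXBeans(BufferPoolMXBean::class.java)
        .first { it.name == "direct" }
    println("direct buffers: count=${direct.count}, used=${direct.memoryUsed} bytes")
}
```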

camdenorrb

07/30/2020, 2:52 AM
True

octylFractal

07/30/2020, 2:53 AM
interesting thread dump, it seems you've deadlocked your coroutines somehow 🤔

camdenorrb

07/30/2020, 2:54 AM
Yeah, not sure how tho
Maybe I should just make an issue for the coroutine library

octylFractal

07/30/2020, 2:55 AM
this is too general for an issue imo
ah, got it figured out finally 😛

camdenorrb

07/30/2020, 4:57 AM
Oh?

octylFractal

07/30/2020, 4:57 AM
you're opening a new client every single time you communicate, and not closing any of them
so there are more and more pending readString() operations

camdenorrb

07/30/2020, 4:57 AM
Yeah

octylFractal

07/30/2020, 4:57 AM
so eventually, it starves your buffer pool
because there's nothing to read
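In code, the leak being described is roughly this (close() is an assumption about the Netlius client API; the other calls appear in the snippet that follows):

```kotlin
// Leaky pattern: a fresh client per message, never closed, so every
// iteration parks another readString() and pins another pool buffer.
repeat(100) {
    val client = Netlius.clientSuspended("127.0.0.1", 25565)
    client.queueAndFlush(Packet().string("Meow${count++}"))
    // Missing: client.close() (assumed API) once the exchange is done.
}
```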

camdenorrb

07/30/2020, 4:57 AM
Oh..
Lmao
I wonder what's a better way to do that hm

octylFractal

07/30/2020, 4:58 AM
if you just do:
```kotlin
val client = Netlius.clientSuspended("127.0.0.1", 25565)
repeat(100) {
    client.queueAndFlush(Packet().string("Meow${count++}"))
}
```
it's fine, re-using the same client

camdenorrb

07/30/2020, 4:59 AM
Yeah, but this is supposed to be a simulation of my future server implementation, which may have thousands of connections

octylFractal

07/30/2020, 4:59 AM
I believe that would be an indication that you should add read timeouts 🙂
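kotlinx.coroutines' withTimeout is one way to do that. A sketch with a stand-in Client interface (readString() is named in this thread; close() and the 30-second value are assumptions):

```kotlin
import kotlinx.coroutines.TimeoutCancellationException
import kotlinx.coroutines.withTimeout

// Minimal stand-in for the client surface used here; readString() is named
// in the thread, close() is an assumption about the Netlius API.
interface Client {
    suspend fun readString(): String
    fun close()
}

// Sketch: bound each read so an idle connection can't hold a pool buffer
// forever. The 30-second value is an arbitrary example.
suspend fun readWithTimeout(client: Client): String = try {
    withTimeout(30_000) { client.readString() }
} catch (e: TimeoutCancellationException) {
    client.close() // free the buffer by tearing down the dead connection
    throw e
}
```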

camdenorrb

07/30/2020, 5:00 AM
Fair enough, thank you for looking so far into it!
What are your thoughts on my API btw?

octylFractal

07/30/2020, 5:03 AM
I think it's okay, I might be tempted to serve the onX functions as a Flow<Event> instead though

camdenorrb

07/30/2020, 5:03 AM
Ah true, that might be better

octylFractal

07/30/2020, 5:05 AM
usage for e.g. onConnect:
```kotlin
server.events()
    .filterIsInstance<ConnectEvent>()
    .onEach { evt -> newClientScope().launch { handleClient(evt.connection) } }
    .launchIn(serverScope)
```
it's certainly more verbose, but I prefer to be as correct as possible in managing scopes and really binding them to a lifecycle where possible
also, you may want to look into using Ktor rather than reworking from the ground up -- not a requirement, but just a suggestion

camdenorrb

07/30/2020, 5:06 AM
Hm alright, I might have to do something like that in my EventBus sometime
My eventbus is so outdated

octylFractal

07/30/2020, 5:06 AM
don't recall if Ktor exposes the raw TCP socket though....

camdenorrb

07/30/2020, 5:06 AM
Eh I think Ktor is a bit too bloated for my needs
But I'm curious about their ByteBufferPool implementation
it's quite complex

camdenorrb

07/30/2020, 5:22 AM
Hm interesting

octylFractal

07/30/2020, 5:22 AM
an interesting note is that I think they are unbounded pools, i.e. if it needs an instance it will create one, and if it exceeds the capacity on return it will discard the extra instance
so it will never deadlock like in your case
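A sketch of that discard-on-overflow strategy (the approach described above, not Ktor's actual code):

```kotlin
import kotlinx.coroutines.channels.Channel
import java.nio.ByteBuffer

// Sketch of the strategy described above (not Ktor's actual implementation):
// take() never suspends on an empty pool, give() never suspends on a full one.
class LenientBufferPool(capacity: Int, private val bufferSize: Int) {

    private val free = Channel<ByteBuffer>(capacity)

    // Reuse a pooled buffer if one is available, otherwise allocate fresh,
    // so callers can never block or deadlock on the pool.
    fun take(): ByteBuffer =
        free.tryReceive().getOrNull() ?: ByteBuffer.allocateDirect(bufferSize)

    // Return the buffer; if the pool is already at capacity, trySend fails
    // and the extra instance is simply left for the GC to reclaim.
    fun give(buffer: ByteBuffer) {
        buffer.clear()
        free.trySend(buffer)
    }
}
```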

camdenorrb

07/30/2020, 5:24 AM
And..... Fixed lmao
Altho I should make an auto clear
Oh that actually didn't fix it hm
Ah, it's null only if the channel is closed