# coroutines
c
I'm trying to use Channels to make a DirectByteBufferPool, but I have been experiencing a lot of issues.
ByteBufferPool: https://github.com/camdenorrb/Netlius/blob/master/src/main/kotlin/me/camdenorrb/netlius/net/DirectByteBufferPool.kt
Testing file: https://github.com/camdenorrb/Netlius/blob/master/src/main/kotlin/me/camdenorrb/netlius/Main.kt
It refuses to go higher than the specified BUFFER_COUNT, when I would ideally like to support thousands of clients at once. For some reason the Channel doesn't seem to be queuing up all the receivers as expected. I'm not fully sure what exactly is wrong, and I have been struggling to do anything about it. Any help is much appreciated.
o
not a solution, but a recommendation -- the ByteBufferPool should ideally have its own scope, rather than using the global scope. it'd allow for it to take on a proper lifecycle
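A rough sketch of that suggestion, using illustrative names rather than the actual Netlius API: the pool owns its own CoroutineScope and cancels it when the pool is closed, instead of launching work into GlobalScope.
```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.SupervisorJob
import kotlinx.coroutines.cancel

// Illustrative sketch, not the Netlius DirectByteBufferPool: the pool owns its
// scope, so anything it launches dies with the pool instead of outliving it.
class ScopedDirectByteBufferPool : AutoCloseable {
    private val scope = CoroutineScope(SupervisorJob() + Dispatchers.Default)

    // ...refill/cleanup coroutines would be launched in `scope` rather than GlobalScope...

    override fun close() {
        scope.cancel() // tears down every coroutine the pool started
    }
}
```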
c
Alright done @octylFractal, no change in result of course
o
personally I don't have a clue right now, will take another look later if it's still unsolved, as everything seems "right"
c
Alright
j
@camdenorrb It would be easier to assist you if you provided code that isolated the coroutine behavior that's causing problems, without the netlius or nio dependencies. If it's not possible to reproduce the issue that way, then you know the cause has nothing to do with how you're using coroutines.
c
@julian While I wasn't able to isolate the problem in a separate project, I just found out that replacing the receive part with direct allocations fixes it, so it has to be a coroutine issue in some regard. Not to mention that the result changes based on the BUFFER_SIZE.
Right now I'm kinda brain dead, but if I need to I'll try again later
Isolated it down to this one
https://hastebin.com/cibitulapo.pl Here is a full Thread Dump, I was going through the debugger and it appears to be stuck on Unsafe.park
Looks like there are more DirectByteBuffers than expected
o
do note that some protocols for debugging also use DBBs, you would need to see if they are being held by your GC roots and not something else
c
True
o
interesting thread dump, it seems you've deadlocked your coroutines somehow 🤔
c
Yeah, not sure how tho
Maybe I should just make an issue for the coroutine library
o
this is too general for an issue imo
ah, got it figured out finally 😛
c
Oh?
o
you're opening a new client every single time you communicate, and not closing any of them
so there are more and more pending readString() operations
c
Yeah
o
so eventually, it starves your buffer pool
because there's nothing to read
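To make that failure mode concrete, here is a minimal, self-contained sketch (assumed names, not Netlius code) of how a bounded Channel-backed pool starves once every buffer is checked out by a read that never completes:
```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel
import java.nio.ByteBuffer

// Bounded pool: exactly BUFFER_COUNT direct buffers, handed out via a Channel.
object TinyPool {
    private const val BUFFER_COUNT = 2
    private val buffers = Channel<ByteBuffer>(BUFFER_COUNT).apply {
        repeat(BUFFER_COUNT) { trySend(ByteBuffer.allocateDirect(16)) }
    }

    suspend fun take(): ByteBuffer = buffers.receive()
    suspend fun give(buffer: ByteBuffer) = buffers.send(buffer)
}

fun main() = runBlocking {
    // Each "client" takes a buffer for a read that never finishes and never calls give().
    val jobs = List(3) { i ->
        launch {
            TinyPool.take() // the third launch suspends here forever
            println("client $i got a buffer")
            awaitCancellation() // stands in for a readString() that never receives data
        }
    }
    // Show the starvation without hanging the demo itself.
    withTimeoutOrNull(1_000) { jobs.joinAll() } ?: println("starved: pool exhausted")
    jobs.forEach { it.cancel() }
}
```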
c
Oh..
Lmao
I wonder what's a better way to do that hm
o
if you just do:
```kotlin
val client = Netlius.clientSuspended("127.0.0.1", 25565)
repeat(100) {
    client.queueAndFlush(Packet().string("Meow${count++}"))
}
```
it's fine
re-using the same client
c
Yeah but this is supposed to be a simulation for my future server implementation which may have thousands of connections
o
I believe that would be an indication that you should add read timeouts 🙂
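One way to do that, sketched against a stand-in interface rather than the real Netlius Client (the timeout value is arbitrary): wrap the suspending read in withTimeout so an idle connection gets closed and its buffer can flow back to the pool instead of parking forever.
```kotlin
import kotlinx.coroutines.TimeoutCancellationException
import kotlinx.coroutines.withTimeout

// Stand-in for the Netlius client; only the members used below are assumed.
interface SuspendingClient {
    suspend fun readString(): String
    fun close()
}

// Read with a deadline: a connection that never sends anything gets closed,
// which lets whatever buffer it was holding return to the pool.
suspend fun SuspendingClient.readStringOrNull(timeoutMillis: Long = 30_000): String? =
    try {
        withTimeout(timeoutMillis) { readString() }
    } catch (e: TimeoutCancellationException) {
        close()
        null
    }
```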
c
Fair enough, thank you for looking so far into it!
What are your thoughts on my api btw?
o
I think it's okay, I might be tempted to serve the onX functions as a Flow<Event> instead though
c
Ah true, that might be better
o
usage for e.g. onConnect:
```kotlin
server.events()
    .filterIsInstance<ConnectEvent>()
    .onEach { evt -> newClientScope().launch { handleClient(evt.connection) } }
    .launchIn(serverScope)
```
it's certainly more verbose, but I prefer to be as correct as possible in managing scope and really binding them to a lifecycle where possible
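For completeness, here is one shape the producing side of that Flow<Event> could take (hypothetical types, not the existing Netlius API), using a SharedFlow so every subscriber sees connection events:
```kotlin
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.MutableSharedFlow
import kotlinx.coroutines.flow.asSharedFlow

// Hypothetical event hierarchy and server surface for the Flow-based API sketch.
sealed interface Event
data class ConnectEvent(val connection: Any) : Event

class EventfulServer {
    private val _events = MutableSharedFlow<Event>(extraBufferCapacity = 64)

    // One Flow replaces the individual onConnect/onRead/onX callbacks.
    fun events(): Flow<Event> = _events.asSharedFlow()

    // The accept loop would emit into the flow as clients arrive.
    internal suspend fun publish(event: Event) = _events.emit(event)
}
```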
also, you may want to look into using Ktor rather than reworking from the ground up -- not a requirement, but just a suggestion
c
Hm alright, I might have to do something like that in my EventBus sometime
My eventbus is so outdated
o
don't recall if Ktor exposes the raw TCP socket though....
c
Eh I think Ktor is a bit too bloated for my needs
But curious on their ByteBufferPool implementation
o
it's quite complex
c
Hm interesting
o
an interesting note is that I think they are unbounded pools, i.e. if it needs an instance it will create one, and if it exceeds the capacity on return it will discard the extra instance
so it will never deadlock like in your case
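A minimal sketch of that shape (not Ktor's actual implementation): borrowing falls back to a fresh allocation when the pool is empty, and returning silently drops the buffer when the pool is already full, so nothing ever suspends on the pool.
```kotlin
import kotlinx.coroutines.channels.Channel
import java.nio.ByteBuffer

// Sketch of a pool that can never starve its callers: misses allocate,
// and overflow on return is discarded and left to the GC.
class NonSuspendingDirectBufferPool(capacity: Int = 64, private val bufferSize: Int = 4096) {
    private val free = Channel<ByteBuffer>(capacity)

    fun borrow(): ByteBuffer =
        free.tryReceive().getOrNull() ?: ByteBuffer.allocateDirect(bufferSize)

    fun recycle(buffer: ByteBuffer) {
        buffer.clear()
        free.trySend(buffer) // result ignored: extra buffers are simply dropped
    }
}
```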
c
And..... Fixed lmao
Altho I should make an auto clear
Oh that actually didn't fix it hm
Ah, it's null only if the channel is closed
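For reference, here is a small self-contained illustration of that distinction using the current kotlinx.coroutines channel APIs (receiveCatching/tryReceive; the call used in the linked project may be the older receiveOrNull):
```kotlin
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.runBlocking

// receiveCatching() only fails once the channel is closed (and drained),
// while tryReceive() also fails when the channel is merely empty.
fun main() = runBlocking {
    val channel = Channel<Int>(1)

    println(channel.tryReceive().getOrNull())      // null: open but currently empty
    channel.send(1)
    channel.close()
    println(channel.receiveCatching().getOrNull()) // 1: buffered element still delivered
    println(channel.receiveCatching().getOrNull()) // null: closed and empty
}
```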