# komapper
s
How do I drop all connections from my R2dbc client?
```
io.r2dbc.postgresql.ExceptionFactory$PostgresqlNonTransientResourceException: [53300] sorry, too many clients already
```
I have a query that sometimes gets called very often in a short timeframe, and for some reason it sometimes doesn't reuse existing connections but creates new ones without releasing them, until it hits my db's max connections.
So, is there a way to release a bunch of connections from the client? Because if it hits the max and gets stuck, I won't even be able to do
```sql
select pg_terminate_backend(pid) from pg_stat_activity where datname = 'name';
```
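(As an aside on the literal question of releasing connections from the client side: if the ConnectionFactory is an r2dbc-pool ConnectionPool, as is set up later in this thread, disposing the pool closes every connection it holds. A minimal sketch, assuming kotlinx-coroutines-reactive is on the classpath; `dropAllConnections` is a hypothetical helper, not an existing API.)
```kotlin
import io.r2dbc.pool.ConnectionPool
import io.r2dbc.spi.ConnectionFactory
import kotlinx.coroutines.reactive.awaitFirstOrNull

// Hypothetical helper: if the factory is actually an r2dbc-pool
// ConnectionPool, dispose it, which closes all pooled connections.
suspend fun dropAllConnections(factory: ConnectionFactory) {
    (factory as? ConnectionPool)?.disposeLater()?.awaitFirstOrNull()
}
```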
t
Can you show me your code? I wonder why those connections are not released. What framework are you using, Spring WebFlux, Ktor, or something else?
s
sure, I'll add the code. And it's Ktor. It's a rather unfortunate nested GraphQL query that does this: first I fetch a list of ~200 items, and for each of those it runs a separate db query. Sometimes it works, sometimes it doesn't. I haven't quite figured it out yet, so my plan was to just surround it with a try/catch, drop all connections manually, and try again to circumvent it.
https://pastebin.com/4yNjNEi4 (admittedly weird with the two caches, but that's the exact code that causes this issue)
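(One thing worth noting about the ~200-item fan-out: if each item's query runs in its own coroutine, nothing bounds how many connections are requested at once. A minimal sketch of capping that with a coroutine semaphore; `fanOut`, `dbPermits`, and the permit count of 10 are made up for illustration.)
```kotlin
import kotlinx.coroutines.async
import kotlinx.coroutines.awaitAll
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.sync.Semaphore
import kotlinx.coroutines.sync.withPermit

// Cap concurrent DB queries; 10 is an arbitrary placeholder,
// kept well below the server's max_connections.
val dbPermits = Semaphore(permits = 10)

// Hypothetical helper: run one query per item, at most 10 at a time.
suspend fun <T, R> fanOut(items: List<T>, query: suspend (T) -> R): List<R> =
    coroutineScope {
        items.map { item ->
            async { dbPermits.withPermit { query(item) } }
        }.awaitAll()
    }
```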
t
Thanks for sharing your code with me. Please try to change the code from lines 34 to 38 as follows.
```kotlin
conn.runQuery {
    val w1 = where { wordLevels.userId eq userId }
    val w2 = where { wordLevels.wordId eq wordId }
    QueryDsl.from(wordLevels).where(w1.and(w2))
}.firstOrNull()
```
Do you see any improvement with this change?
s
so just moving the `firstOrNull()`? doesn't seem to change anything sadly
but something is really odd about all of this. Not even the cache tied to the get/fetchWordLevel in the repo class is working properly, while it's working for all other functions in there.
I removed the cache in the repo file and extended the one in the service class. Now it still uses up all connections and throws the exception, but it releases them again afterwards.
t
Thanks for trying it. Are there too many concurrent accesses? Try r2dbc-pool if you are not using it. https://github.com/r2dbc/r2dbc-pool
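(For reference, a pooled factory configuration might look like the following. A sketch only: the host, credentials, and sizing values are placeholders, though the `PoolingConnectionFactoryProvider` options shown are r2dbc-pool's standard sizing knobs.)
```kotlin
import io.r2dbc.pool.PoolingConnectionFactoryProvider
import io.r2dbc.spi.ConnectionFactories
import io.r2dbc.spi.ConnectionFactoryOptions
import java.time.Duration

// A pooled ConnectionFactory: driver=pool wraps the postgresql driver.
val connectionFactory = ConnectionFactories.get(
    ConnectionFactoryOptions.builder()
        .option(ConnectionFactoryOptions.DRIVER, "pool")
        .option(ConnectionFactoryOptions.PROTOCOL, "postgresql")
        .option(ConnectionFactoryOptions.HOST, "localhost")
        .option(ConnectionFactoryOptions.PORT, 5432)
        .option(ConnectionFactoryOptions.USER, "user")
        .option(ConnectionFactoryOptions.PASSWORD, "secret")
        .option(ConnectionFactoryOptions.DATABASE, "name")
        // Pool sizing: keep MAX_SIZE below the server's max_connections.
        .option(PoolingConnectionFactoryProvider.INITIAL_SIZE, 2)
        .option(PoolingConnectionFactoryProvider.MAX_SIZE, 10)
        .option(PoolingConnectionFactoryProvider.MAX_IDLE_TIME, Duration.ofMinutes(5))
        .build()
)
```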
s
ah, sounds like a good idea. But do I have to change anything else than this?
```kotlin
.option(ConnectionFactoryOptions.DRIVER, "pool")
.option(ConnectionFactoryOptions.PROTOCOL, "postgresql")
```
plus the dependency in build.gradle.kts:
```kotlin
implementation("io.r2dbc:r2dbc-pool:0.9.2.RELEASE")
```
seems to work locally, but when running it in docker I'm getting
```
Unable to create a ConnectionFactory for 'ConnectionFactoryOptions{options={database=, host=, driver=pool, password=REDACTED, protocol=postgresql, user=}}'. Available drivers: [ postgresql ]
```
t
The DRIVER and PROTOCOL options appear to be correct. However, please check other options, such as HOST and PORT.
s
those options are all correct. The problem is likely somewhere else then, if I'm not missing any komapper settings. And I can also connect to the production db when running the app on my machine, just not from the container on the server. It's like the docker image is missing the r2dbc-pool dependency.
yeah, it was indeed the dockerfile I had to rewrite. I got some
```
Caused by: java.lang.IllegalArgumentException: Could not find delegating driver [postgresql]
```
in between, but now everything works. And I guess the connection pool should prevent the issue to begin with, thanks for the help!
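(For anyone hitting the same `Could not find delegating driver` error: R2DBC locates drivers through `META-INF/services` entries, and fat-jar builds can drop those registrations unless service files are merged. A sketch of one common fix, assuming the Shadow plugin builds the jar; the project here fixed it in the Dockerfile instead.)
```kotlin
// build.gradle.kts, assuming the Shadow plugin is applied.
// R2DBC discovers ConnectionFactoryProviders via ServiceLoader;
// merging service files keeps both the pool and postgresql
// registrations in the fat jar.
tasks.shadowJar {
    mergeServiceFiles()
}
```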