# exposed
r
Hello! I have to run an import that consists of 6 million SQL statements. I wanted to use coroutines to speed things up, but it didn't help because the DB is still using a single core (I checked docker stats to be sure). As far as I know, a single connection to the DB will only use one core, so I tried connecting through a HikariCP connection pool:
```kotlin
import com.zaxxer.hikari.HikariConfig
import com.zaxxer.hikari.HikariDataSource
import org.jetbrains.exposed.sql.Database

with(databaseConnection) {
    val config = HikariConfig().apply {
        jdbcUrl = databaseUrl
        driverClassName = driver.driverClass
        username = user.userName
        password = user.password
        // Allow up to 20 physical connections to Postgres
        maximumPoolSize = 20
    }
    // Hand the pooled DataSource to Exposed
    val dataSource = HikariDataSource(config)
    Database.connect(dataSource)
}
```
But still nothing changed; Postgres is using only one core. I think I have to create multiple transactions to force new connections? For context, here is a simplified version of the SQL execution code:
```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.ExperimentalCoroutinesApi
import org.jetbrains.exposed.sql.transactions.experimental.newSuspendedTransaction

@OptIn(ExperimentalCoroutinesApi::class)
suspend fun executeSqls(sqlList: List<String>) {
    newSuspendedTransaction {
        val transaction = this
        val coroutineContext = Dispatchers.IO.limitedParallelism(20)
        // forEachParallel is a custom extension (sketched below);
        // every statement here shares this one transaction
        sqlList.forEachParallel(coroutineContext) { sql -> transaction.exec(sql) }
    }
}
```
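For completeness, forEachParallel is not a standard library or Exposed function; a minimal sketch of what such a helper presumably looks like, launching one coroutine per element on the given context and suspending until all of them finish:

```kotlin
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.launch
import kotlin.coroutines.CoroutineContext

// Hypothetical helper: runs the action for each element concurrently
// on the given context and suspends until all of them complete.
suspend fun <T> List<T>.forEachParallel(
    context: CoroutineContext,
    action: suspend (T) -> Unit
) = coroutineScope {
    forEach { item -> launch(context) { action(item) } }
}
```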
Probably executing newSuspendedTransaction for every SQL statement is a bad idea? Should I split my list of SQLs into chunks and then use a single transaction per chunk?
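For reference, a minimal sketch of that chunked approach, assuming Exposed's newSuspendedTransaction and an arbitrary batch size of 1000: each chunk gets its own transaction, and therefore its own pooled connection.

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.ExperimentalCoroutinesApi
import kotlinx.coroutines.async
import kotlinx.coroutines.awaitAll
import kotlinx.coroutines.coroutineScope
import org.jetbrains.exposed.sql.transactions.experimental.newSuspendedTransaction

@OptIn(ExperimentalCoroutinesApi::class)
suspend fun executeSqlsChunked(sqlList: List<String>) = coroutineScope {
    val dispatcher = Dispatchers.IO.limitedParallelism(20)
    sqlList.chunked(1_000).map { chunk ->
        async {
            // One transaction, and thus one pooled connection, per chunk
            newSuspendedTransaction(dispatcher) {
                chunk.forEach { sql -> exec(sql) }
            }
        }
    }.awaitAll()
}
```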
Update: I did try calling newSuspendedTransaction for each SQL statement, and now Postgres is using all cores. Is this an issue from Exposed's perspective? Any memory problems?
a
Yes, running everything in a single transaction like that means only one connection will ever be used from the pool. A connection is bound to a `TransactionInterface`, so to obtain connections concurrently you would make multiple calls to one of the transaction builders 👍
Just make sure you pass the correct context to `newSuspendedTransaction`, either implicitly using something like `withContext` or explicitly by passing it as a parameter
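A minimal sketch of both variants, assuming a dispatcher sized to match the pool:

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.ExperimentalCoroutinesApi
import kotlinx.coroutines.withContext
import org.jetbrains.exposed.sql.transactions.experimental.newSuspendedTransaction

@OptIn(ExperimentalCoroutinesApi::class)
suspend fun runStatement(sql: String) {
    val dispatcher = Dispatchers.IO.limitedParallelism(20)

    // Explicit: pass the context as a parameter
    newSuspendedTransaction(dispatcher) {
        exec(sql)
    }

    // Implicit: the transaction inherits the caller's context
    withContext(dispatcher) {
        newSuspendedTransaction {
            exec(sql)
        }
    }
}
```

Keeping the dispatcher's parallelism in line with maximumPoolSize should also avoid coroutines piling up waiting for a free connection.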