r2dbc write performance is really poor compared with JDBC performance (number of parallel requests + database CPU utilisation)
arjunbalussery opened this issue
Discussed in #624
Originally posted by arjunbalussery January 1, 2024
I was comparing r2dbc and jdbc performance in a Spring Boot application.
Spring Boot 3.2 + jOOQ 3.18 + r2dbc + r2dbc-pool + Postgres 16.1
- implementation("io.r2dbc:r2dbc-spi:1.0.0.RELEASE")
- implementation("io.r2dbc:r2dbc-pool:1.0.1.RELEASE")
- implementation("org.postgresql:r2dbc-postgresql:1.0.2.RELEASE")

I have created an end-to-end non-blocking Spring Boot application, which uses suspend functions from the controller down to the repository.
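As an illustration, the call chain looks roughly like this (a sketch with hypothetical controller and endpoint names; the actual code is in the linked repository):

```kotlin
package com.example.kotlin.controller

import com.example.kotlin.model.Product
import com.example.kotlin.repository.ProductRepository
import org.springframework.web.bind.annotation.PostMapping
import org.springframework.web.bind.annotation.RequestBody
import org.springframework.web.bind.annotation.RestController
import java.util.UUID

// Hypothetical sketch of the non-blocking call chain: the suspend handler
// suspends (rather than blocks a thread) while the repository awaits the
// R2DBC driver.
@RestController
class ProductController(private val repository: ProductRepository) {

    @PostMapping("/products")
    suspend fun create(@RequestBody product: Product): UUID =
        repository.save(product)
}
```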
r2dbc Configuration
package com.example.kotlin.config
import io.r2dbc.pool.PoolingConnectionFactoryProvider.*
import io.r2dbc.postgresql.PostgresqlConnectionFactoryProvider.*
import io.r2dbc.spi.ConnectionFactories
import io.r2dbc.spi.ConnectionFactory
import io.r2dbc.spi.ConnectionFactoryOptions
import io.r2dbc.spi.ConnectionFactoryOptions.*
import org.springframework.boot.autoconfigure.r2dbc.R2dbcProperties
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.context.annotation.Primary
import org.springframework.r2dbc.connection.R2dbcTransactionManager
import org.springframework.r2dbc.core.DatabaseClient
import org.springframework.transaction.ReactiveTransactionManager
import org.springframework.transaction.annotation.EnableTransactionManagement
import java.time.Duration
@Configuration
class R2DBCConfiguration(
    private val properties: R2dbcProperties
) {

    @Bean
    fun databaseClient(connectionFactory: ConnectionFactory): DatabaseClient =
        DatabaseClient.builder()
            .connectionFactory(connectionFactory)
            .build()

    @Bean
    @Primary
    fun connectionFactory(properties: R2dbcProperties): ConnectionFactory =
        ConnectionFactories.get(
            ConnectionFactoryOptions.builder()
                .option(PROTOCOL, "postgresql")
                .option(DRIVER, "pool")
                .option(HOST, "postgres")
                .option(PORT, 5432)
                .option(USER, "postgres")
                .option(PASSWORD, "pass123")
                .option(DATABASE, "postgres")
                .option(MAX_SIZE, 20)
                .option(INITIAL_SIZE, 20)
                .option(SCHEMA, "public")
                .option(MAX_ACQUIRE_TIME, Duration.ofSeconds(30))
                .option(MAX_IDLE_TIME, Duration.ofSeconds(30))
                .build()
        )

    @Bean
    fun transactionManager(connectionFactory: ConnectionFactory): ReactiveTransactionManager =
        R2dbcTransactionManager(connectionFactory)
}
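For reference, the same pooled factory can usually be obtained from a single R2DBC connection URL instead of the options builder (a sketch, assuming the same host and credentials as above; pool options are passed as query parameters):

```kotlin
import io.r2dbc.spi.ConnectionFactories

// Sketch: "pool" is the outer driver, "postgresql" the delegated protocol;
// durations use ISO-8601 form (PT30S = 30 seconds).
val connectionFactory = ConnectionFactories.get(
    "r2dbc:pool:postgresql://postgres:pass123@postgres:5432/postgres" +
        "?maxSize=20&initialSize=20&maxIdleTime=PT30S&maxAcquireTime=PT30S"
)
```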
Repository
package com.example.kotlin.repository
import com.example.generated.Tables.PRODUCT
import com.example.kotlin.model.Product
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.map
import kotlinx.coroutines.reactive.asFlow
import kotlinx.coroutines.reactive.awaitFirst
import kotlinx.coroutines.reactive.awaitFirstOrNull
import org.jooq.DSLContext
import org.springframework.cache.annotation.Cacheable
import org.springframework.context.annotation.ComponentScan
import org.springframework.stereotype.Repository
import org.springframework.transaction.reactive.TransactionalOperator
import org.springframework.transaction.reactive.executeAndAwait
import java.util.*
@Repository
@ComponentScan("com.example.generated.tables.daos")
class ProductRepository(
    private val jooq: DSLContext,
    private val transactionalOperator: TransactionalOperator
) {

    suspend fun save(product: Product): UUID =
        transactionalOperator.executeAndAwait { reactiveTransaction ->
            runCatching {
                println("******reached here********")
                jooq
                    .insertInto(PRODUCT)
                    .columns(PRODUCT.ID, PRODUCT.NAME)
                    .values(product.id, product.name)
                    .awaitFirst()
            }.onFailure {
                // Roll back the transaction in case of an exception
                reactiveTransaction.setRollbackOnly()
            }.getOrThrow().also {
                println("******completed********")
            }
            product.id
        }
}
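For completeness, a read-side method inside the same class could expose the jOOQ publisher as a Flow in the same style (a sketch using the imports already present in this file, and assuming the generated record exposes `id` and `name` properties):

```kotlin
// Sketch: stream rows without collecting them into a list; jOOQ's
// ResultQuery is a Reactive Streams Publisher, so asFlow() bridges it
// to coroutines.
fun findAll(): Flow<Product> =
    jooq.selectFrom(PRODUCT)
        .asFlow()
        .map { record -> Product(record.id, record.name) }
```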
Analysis
I have done load testing using k6 to see how many parallel requests the application can process.
R2dbc analysis
- With MaxSize=20 and InitialSize=10 configured:
- The application is only able to process 20 parallel requests at a time.
- Above twenty, the success rate started to decline.
- Database CPU utilisation spiked sharply (screenshot attached).
JDBC with a Hikari pool, by contrast, was able to handle 120+ parallel requests smoothly with the same configuration.
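One way to sanity-check the 20-request ceiling: if each request holds a pooled connection for its entire transaction, the pool size caps the number of in-flight requests, and by Little's law sustainable throughput is roughly pool size divided by per-request connection hold time. The numbers below are illustrative assumptions, not measurements from this issue:

```kotlin
// Little's law sketch: concurrency = throughput * latency, so a fixed
// pool bounds throughput at poolSize / holdTimeSeconds.
fun maxThroughputPerSec(poolSize: Int, holdTimeMs: Double): Double =
    poolSize / (holdTimeMs / 1000.0)

fun main() {
    // 20 connections, each held ~100 ms per transaction (illustrative):
    println(maxThroughputPerSec(20, 100.0))  // 200.0 requests/sec at best
}
```

Under this model, raising offered load above the pool size cannot increase completed requests; it only queues acquisitions (up to MAX_ACQUIRE_TIME) and inflates latency.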
repository: https://github.com/arjunbalussery/kotlin-async
Run the setup.sh bash file to start the application:
./setup.sh
You can use the k6 run k6/loadtest.js command to run the load test:
k6 run k6/loadtest.js
Run the stop.sh bash file to stop the application:
./stop.sh
A blocking repository was used for testing JDBC with Hikari.
Closing as duplicate of #624