m
When I run my tests I have this configuration when defining tasks for tests ->
```
withType<Test>().configureEach {
    useJUnitPlatform()

    testLogging {
        showExceptions = true
        showStackTraces = true
        exceptionFormat = FULL
        events = setOf(TestLogEvent.PASSED, TestLogEvent.SKIPPED, TestLogEvent.FAILED)
    }

    // To speed up the build we can use these methods
    maxParallelForks = Runtime.getRuntime().availableProcessors() / 2
    reports.forEach { report -> report.required.value(false) }
}
```
I also tried to speed up the build by setting these configurations in `gradle.properties` ->
```
org.gradle.parallel=true
org.gradle.caching=true
org.gradle.unsafe.configuration-cache=true
```
My question is: are this config `org.gradle.parallel=true` and this config `maxParallelForks = Runtime.getRuntime().availableProcessors() / 2` the same configuration? Or is the first one for the build and the second one for tests? Please explain; I tried to understand it from the documentation, but that did not make it clear.
a
`org.gradle.parallel=true` means multiple subprojects will run their tasks in parallel. So if you have 10 subprojects, multiple subprojects can do work at the same time. There's a good visualization in the docs (Figure 1 and Figure 2): https://docs.gradle.org/current/userguide/performance.html#parallel_execution
m
So if my project does not contain subprojects then the config is unnecessary?
a
Yeah.
`maxParallelForks` affects how Gradle launches separate Java processes for your tests. If you enable multiple forks, then run the tests and look at your task manager, you'll probably see lots of JVM processes launching and dying. https://docs.gradle.org/8.1.1/userguide/performance.html#execute_tests_in_parallel
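For illustration, a minimal sketch of what forking means in the build script (the fork count is an arbitrary example, and `forkEvery` is optional):
```
tasks.withType<Test>().configureEach {
    // each fork is a separate JVM process with its own heap and class loaders,
    // so up to 4 test JVMs can run at the same time
    maxParallelForks = 4
    // optionally recycle a fork after 100 test classes to bound memory use
    forkEvery = 100L
}
```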
m
And what is the recommended or best practice for setting `maxParallelForks`? I have seen multiple options here. Some have
```
maxParallelForks = (Runtime.getRuntime().availableProcessors() / 2).takeIf { it > 0 } ?: 1
```
and some only have
```
maxParallelForks = Runtime.getRuntime().availableProcessors()
```
a
The Gradle docs recommend the first one, which is good for personal computers because your machine won't freeze when running tests. Maybe use the second on dedicated CI/CD build machines, because they can spend all their CPU on tests.
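For example, a sketch that combines both, switching on an environment variable (the `CI` variable is an assumption; GitHub Actions sets one, but other build servers may differ):
```
tasks.withType<Test>().configureEach {
    maxParallelForks = if (System.getenv("CI") != null) {
        // dedicated build machine: use every core
        Runtime.getRuntime().availableProcessors()
    } else {
        // local machine: leave half the cores free, but never drop below 1
        (Runtime.getRuntime().availableProcessors() / 2).coerceAtLeast(1)
    }
}
```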
m
Thank you so much 🙂
a
no problem :)
v
I prefer not setting it at all, but letting the test framework do the parallelization. Jupiter can do it too, but I prefer Spock. Both can run tests in parallel, and more efficiently than this setting ever could.
But your tests have to be compatible of course.
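For reference, a minimal sketch of enabling Jupiter's in-process parallel execution from the Kotlin DSL (these are standard JUnit 5 configuration parameters; the values are examples):
```
tasks.withType<Test>().configureEach {
    useJUnitPlatform()
    // run test classes/methods concurrently inside one JVM (threads, not forks)
    systemProperty("junit.jupiter.execution.parallel.enabled", "true")
    systemProperty("junit.jupiter.execution.parallel.mode.default", "concurrent")
}
```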
m
I am using Kotest. So you recommend not using the `maxParallelForks` setting, @Vampire?
v
I have no idea, I don't use Kotest. I said that if the framework supports parallel running, like Jupiter or Spock, I would use that, as it is better and more efficient.
a
Kotest uses Kotlinx Coroutines pretty heavily to run tests in parallel. `maxParallelForks` will probably help, but it's best to experiment with the different options to see what works (as with all questions about optimisation!)
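If you want in-process parallelism with Kotest specifically, there is a project-level setting; a sketch, assuming Kotest 5's `AbstractProjectConfig` exposes `parallelism` (check the Kotest docs for your version):
```
import io.kotest.core.config.AbstractProjectConfig

// picked up automatically from the test classpath
object ProjectConfig : AbstractProjectConfig() {
    // run up to 4 specs concurrently inside one JVM (the value is an example)
    override val parallelism = 4
}
```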
m
Thanks for the reply 🙂
When testing the difference, what should I look for in the generated Gradle report?
Sorry about the stupid questions. Trying to learn Gradle 🙂
a
So for a basic check you can just run `./gradlew check` and see how long the build takes! It's a good idea to re-run that a few times, in case the tests need to 'warm up' (by downloading dependencies or re-compiling). If you have Build Cache enabled, then add `--no-build-cache` and/or `--rerun-tasks` to make the playing field fair, so each run has a fresh start. BUT it's also worth testing with a more 'real life' scenario. Usually you're running tests locally on your machine, so try and optimise for a fully-cached project. An easier solution is to use Gradle Build Scan to send build metrics to Gradle and get a fancy HTML report. That will help with visualising how a build performs and with digging into any problems. You can also add the `--profile` flag, which will generate a high-level performance report in `$buildDir/reports/profile`. It's more basic than Build Scan, but it's faster to open and it should give a good indication.
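If you want scans published on every build, a sketch of the settings plugin (the plugin version is an example; newer Gradle versions ship this as the Develocity plugin):
```
// settings.gradle.kts
plugins {
    id("com.gradle.enterprise") version "3.13.4"
}

gradleEnterprise {
    buildScan {
        // agreement is required before scans can be published to scans.gradle.com
        termsOfServiceUrl = "https://gradle.com/terms-of-service"
        termsOfServiceAgree = "yes"
    }
}
```
For a one-off scan you can instead just run `./gradlew check --scan`.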
they’re not stupid questions :)
Here are a couple of other tips to help with performance:
• you mentioned you only had one subproject, so think about whether you can split it up into smaller modules. Then not only will you be able to use `org.gradle.parallel=true`, but Gradle will also be able to cache the outputs of projects, so it will only re-run tests in subprojects that have changes (see the sketch below)
• try enabling Build Cache. This will also help on your build server. For example, if you make a change to the README, then Gradle will realise "there are no code changes, therefore I can assume the output of compiling `src/main` is the same, and because the tests passed last time, I can assume they'll pass again." There's a blog post with more info: https://blog.gradle.org/stop-rerunning-tests
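A minimal sketch of what such a split could look like (the module names are hypothetical):
```
// settings.gradle.kts
rootProject.name = "my-app"

// with more than one subproject, org.gradle.parallel=true lets
// :core and :web build and test at the same time
include("core", "web")
```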
m
I am using the Build Cache in GitHub Actions 🙂
a
Great! Try adding the reproducible archives config too; it will help make the outputs stable so Build Cache can cache more outputs.
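That configuration is documented in the Gradle user guide; a sketch:
```
// make archive tasks (jar, zip, ...) produce byte-identical output
// regardless of timestamps and file order
tasks.withType<AbstractArchiveTask>().configureEach {
    isPreserveFileTimestamps = false
    isReproducibleFileOrder = true
}
```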
v
You don't need reproducible archives for the build cache, do you? Where would that improve the cacheability and not just lose information? Reproducible archives are good if your build needs to be able to produce the byte-wise same result from scratch when run repeatedly. But for cacheability it should, AFAIR, only be relevant if those archives are input to other cacheable tasks that are not checking the actual contents for inputs.
And for repeated running, consider using `--rerun` instead of `--rerun-tasks` if your Gradle version is new enough and you are only interested in rerunning that one task.
And for profiling, there is also https://github.com/gradle/gradle-profiler which might help.
m
When using the config `org.gradle.parallel=true`, if I change my test so that it should fail, I still get a build success. How can I configure it so that when I change my tests or code, the tests are rerun?
v
Unless some cache is corrupted, tests are of course rerun if any input changes. Does it show as UP-TO-DATE even though you changed something?
m
Strange. When I change my tests I still get the same result. It does not get UP-TO-DATE and reruns the tests 😕
My `build.gradle.kts` file:
```
import com.github.jengelman.gradle.plugins.shadow.tasks.ShadowJar
import org.gradle.api.tasks.testing.logging.TestExceptionFormat.FULL
import org.gradle.api.tasks.testing.logging.TestLogEvent
import org.jetbrains.kotlin.gradle.tasks.KotlinCompile
import org.openapitools.generator.gradle.plugin.tasks.GenerateTask

plugins {
    kotlin("jvm") version "1.8.21"
    kotlin("plugin.serialization") version "1.8.21"
    id("org.openapi.generator") version "6.6.0"
    id("com.github.johnrengelman.shadow") version "8.1.1"

    application
}

group = "no.nav.sokos"

repositories {
    mavenCentral()
    maven { url = uri("<https://maven.pkg.jetbrains.space/public/p/ktor/eap>") }
}

val ktorVersion = "2.3.0"
val logbackVersion = "1.4.7"
val logstashVersion = "7.3"
val jacksonVersion = "2.15.1"
val prometheusVersion = "1.11.0"
val kotlinLoggingVersion = "3.0.5"
val janionVersion = "3.1.9"
val natpryceVersion = "1.6.10.0"
val kotestVersion = "5.6.2"

dependencies {

    // Ktor server
    implementation("io.ktor:ktor-server-core-jvm:$ktorVersion")
    implementation("io.ktor:ktor-server-call-logging-jvm:$ktorVersion")
    implementation("io.ktor:ktor-server-call-id-jvm:$ktorVersion")
    implementation("io.ktor:ktor-server-netty-jvm:$ktorVersion")
    implementation("io.ktor:ktor-server-content-negotiation-jvm:$ktorVersion")
    implementation("io.ktor:ktor-server-swagger:$ktorVersion")

    // Ktor client
    implementation("io.ktor:ktor-client-content-negotiation:$ktorVersion")
    implementation("io.ktor:ktor-client-core-jvm:$ktorVersion")
    implementation("io.ktor:ktor-client-apache-jvm:$ktorVersion")

    implementation("io.ktor:ktor-serialization-jackson-jvm:$ktorVersion")

    // Security
    implementation("io.ktor:ktor-server-auth-jvm:$ktorVersion")
    implementation("io.ktor:ktor-server-auth-jwt-jvm:$ktorVersion")

    // Jackson
    implementation("io.ktor:ktor-serialization-jackson:$ktorVersion")
    implementation("com.fasterxml.jackson.core:jackson-databind:$jacksonVersion")
    implementation("com.fasterxml.jackson.module:jackson-module-kotlin:$jacksonVersion")
    implementation("com.fasterxml.jackson.datatype:jackson-datatype-jsr310:$jacksonVersion")

    // Monitorering
    implementation("io.ktor:ktor-server-metrics-micrometer-jvm:$ktorVersion")
    implementation("io.micrometer:micrometer-registry-prometheus:$prometheusVersion")

    // Logging
    implementation("io.github.microutils:kotlin-logging-jvm:$kotlinLoggingVersion")
    runtimeOnly("org.codehaus.janino:janino:$janionVersion")
    runtimeOnly("ch.qos.logback:logback-classic:$logbackVersion")
    runtimeOnly("net.logstash.logback:logstash-logback-encoder:$logstashVersion")

    // Config
    implementation("com.natpryce:konfig:$natpryceVersion")

    // Test
    testImplementation("io.kotest:kotest-assertions-core-jvm:$kotestVersion")
    testImplementation("io.kotest:kotest-runner-junit5:$kotestVersion")
    testImplementation("io.kotest:kotest-assertions-core:$kotestVersion")
}

application {
    mainClass.set("no.nav.sokos.prosjektnavn.ApplicationKt")
}

sourceSets {
    main {
        java {
            srcDirs("$buildDir/generated/src/main/kotlin")
        }
    }
}

kotlin {
    jvmToolchain {
        languageVersion.set(JavaLanguageVersion.of(17))
    }
}

tasks {

    withType<KotlinCompile>().configureEach {
        dependsOn("openApiGenerate")
    }

    withType<GenerateTask>().configureEach {
        generatorName.set("kotlin")
        generateModelDocumentation.set(false)
        inputSpec.set("$rootDir/src/main/resources/openapi/pets.json")
        outputDir.set("$buildDir/generated")
        globalProperties.set(
            mapOf(
                "models" to ""
            )
        )
        configOptions.set(
            mapOf(
                "library" to "jvm-ktor",
                "serializationLibrary" to "jackson"
            )
        )
    }

    withType<ShadowJar>().configureEach {
        enabled = true
        archiveFileName.set("app.jar")
        manifest {
            attributes["Main-Class"] = "no.nav.sokos.prosjektnavn.ApplicationKt"
        }
    }

    ("jar") {
        enabled = false
    }


    withType<Test>().configureEach {

        testLogging {
            showExceptions = true
            showStackTraces = true
            exceptionFormat = FULL
            events = setOf(TestLogEvent.PASSED, TestLogEvent.SKIPPED, TestLogEvent.FAILED)
        }

        reports.forEach { report -> report.required.value(false) }
    }
}
```
And my `gradle.properties` file:
```
kotlin.code.style=official
org.gradle.caching=true

# If you get an error that points you to this
# https://docs.gradle.org/8.0/userguide/configuration_cache.html#config_cache:requirements:disallowed_types
# then comment out the line below
org.gradle.unsafe.configuration-cache=true
```
v
> It does not get UP-TO-DATE and reruns the tests
`UP-TO-DATE` would be strange, as you throw away any previous result using `clean` and thus void some of the biggest strengths of Gradle. But I hope you only do that for your experiments and didn't bring it over from Maven, where you needed it for correct results.
> When I change my tests I still get the same result
Well, debug your tests then. Also, never trust a test you did not see fail for the correct reason.
m
Ok. So I should only run without clean: `./gradlew build shadowJar`.
Ah, it looks like I can't run my tests. Strange... I tried to run my tests manually, but it gives me the error "No tests found for given result"...
I found the issue 🙂
v
I guess `useJUnitPlatform()`, or when using the new test suites DSL, `useKotest(...)` 🙂
m
Correct @Vampire 🙂 I can use `useJUnitPlatform()`, but no option gives me `useKotest()`. https://kotest.io/docs/quickstart/
v
As I said, `useKotest()` would not be for the test task itself, but for when you use the new JVM test suites DSL. There, no `useJUnitPlatform()` exists, but framework-specific methods like `useKotest()`, where you can also specify a version and do not need to add Kotest as a test dependency manually, but get it automatically.
m
Do you have any examples or doc?
v
Oh, no, sorry. I thought it also had `useKotest()`, but it hasn't. There is only JUnit 4, Jupiter, Spock and TestNG.
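For reference, a minimal sketch of what that DSL looks like with Jupiter (available since Gradle 7.3; the version number is an example):
```
testing {
    suites {
        // the built-in "test" suite registered by the java/kotlin plugin
        val test by getting(JvmTestSuite::class) {
            // pulls in the JUnit Jupiter dependency automatically
            useJUnitJupiter("5.9.3")
        }
    }
}
```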
m
https://kotlinlang.slack.com/archives/C19FD9681/p1685737301549569?thread_ts=1685732449.564469&cid=C19FD9681 @Vampire what would it mean for the tests to be compatible? We are using the mockk library, and use mockkStatic from time to time for the legacy code that is not easy to mock or fake otherwise. With parallel test execution we see errors which I think indicate that something is mocked when it shouldn't be, or vice versa. This made me think the static mocks are leaking between the test suites. But I would assume that is not possible, given that they run in separate processes with their own class loaders etc.
v
If you are talking about tests that run in separate processes, then I don't understand your question, because what I wrote that you linked to was about running tests in parallel within the same process, using the parallel test execution capability of the test framework, not the multi-process approach that Gradle provides.