# gradle
h
Hi all, I have a dependency to a jar file that contains an openapi-spec JSON file which I want to use in my project. My project is being built with Java 11 and the dependent JAR was built with Java 17. I just need the dependency on my module to extract the JSON to use it to generate classes. When I add the dependency, Gradle knows that it's not compatible with my project (Java 11). What is the best way to approach this issue without needing to upgrade my project to Java 17?
v
Gradle uses the attribute `org.gradle.jvm.version` (`TargetJvmVersion`) to define with which Java version a library is compatible and which version you need. You can either use a component metadata rule to overwrite the attribute of the library so that it is defined as Java 11 compatible, or you set the attribute to 17 on your dependency to get the version compatible with Java 17, as in this case you know better than Gradle that it is compatible with your intended usage. I'd probably prefer the latter in your case.
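For the second option, setting the attribute directly on the dependency would look roughly like this (a sketch; the coordinates are placeholders):
```kotlin
dependencies {
    implementation("com.example:service-api:1.0") { // placeholder coordinates
        attributes {
            // declare that the Java 17 variant is acceptable for this usage
            attribute(TargetJvmVersion.TARGET_JVM_VERSION_ATTRIBUTE, 17)
        }
    }
}
```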
h
Okay, so what you are saying (the latter version) is I should set `org.gradle.jvm.version` to 17 in my gradle.properties? Will this affect all dependencies then? I just want this single dependency to be added without errors. (Maybe an attribute on the dependency itself?)
v
> in my gradle.properties

I did not say anything like that. I said set an attribute on your dependency.
And pages around
Btw. this question has nothing to do with Kotlin and thus is actually off-topic here. Please always consider channel topics in communities. 😉
h
I'll take a look at that documentation.
a
How are you adding the dependency at the moment? Something like `implementation("...")`? But then you don't use the compiled JVM library, just a JSON config file? If so, it would make sense to create a new Configuration specifically for this dependency - then you wouldn't be dependent on an existing Configuration which specifies it needs JVM 17 code
☝️ 2
something like this...
👀 1
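A sketch of that idea - a dedicated resolvable configuration plus a `Sync` task that unpacks the JAR (coordinates and task name are placeholders):
```kotlin
// a configuration used only for fetching the spec JAR; since it requests no
// JVM-version attribute, the Java 17 JAR resolves without complaint
val openApiSpec by configurations.creating {
    isCanBeResolved = true
    isCanBeConsumed = false
}

dependencies {
    openApiSpec("com.example:service-api:1.0") // placeholder coordinates
}

tasks.register<Sync>("extractOpenApiSpec") {
    // resolve lazily, then unzip each resolved JAR into the destination
    from(provider { openApiSpec.map { zipTree(it) } })
    into(layout.buildDirectory.dir("openApiSpecs"))
}
```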
h
Thanks Adam, I'll try your solution today; seems that's what I'm looking for.
Adam, you used `Sync::class` here - how will this work when extracting multiple jar files? Won't it clear the directory every time, and how will it add those to the classpath if they are cleared for the next extracted jar file?
a
Sync is usually better, because it will gather all input files, and then copy them into the destination, and remove any old files. If you used Copy and then removed a dependency from the `openApiSpec` configuration, if you re-ran the task then the old files would still be in the destination dir.
h
Yes, but extracting multiple files into the same output directory might overwrite files which have the same name?
a
the Sync and Copy tasks are the same in that regard
I can't find a good doc page for it, but basically both Copy and Sync have a duplicates strategy. By default both will throw an error if they notice there are duplicate files.
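if you need different behaviour, it's configurable on the task via `duplicatesStrategy`, e.g. (using the hypothetical task name from the sketch above):
```kotlin
tasks.named<Sync>("extractOpenApiSpec") {
    // default behaviour is to fail on duplicates;
    // INCLUDE, EXCLUDE and WARN are the alternatives
    duplicatesStrategy = DuplicatesStrategy.WARN
}
```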
I can see it might be a problem if you have 2 or more API specs in different files. It'd be nice if the files were renamed based on the dependency names... I'll see if I can cook something up.
it's a bit tough to rename the files and unzip them in the same task, so what's probably best is to split up the downloading and extracting/renaming into two tasks.
This task will fetch the files. I've refactored it a bit based on your comments in the other thread where you wanted to filter by a specific name. In this version it will fetch the Maven coordinates by using `ResolvedArtifactResult.id` and trying to cast it to `ModuleComponentIdentifier`. This way it doesn't matter what the file is called - it only matters what the Maven coordinates are. But this isn't critical, and using the file name also works.
h
It's getting more complicated each time..
a
welcome to Gradle :)
h
Question: do I need `openApi.incoming.artifactView{}.artifacts`, or can I use the artifacts directly without the view: `openApi.incoming.artifacts`? What is the difference? (Gradle 8.3 and Kotlin 1.9.10)
And I'm not able to do `artifacts.filter {}`. Seems that is not an option?
a
I think both are the same, but the documentation isn't clear
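FWIW, with no extra configuration I'd expect them to resolve the same artifacts; the view earns its keep once you configure it, e.g. to tolerate broken artifacts (a sketch):
```kotlin
val specArtifacts = openApiSpec.incoming.artifactView {
    // don't fail the whole resolution because of one broken artifact
    lenient(true)
}.artifacts
```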
h
@Adam S this part doesn't work for me:
```kotlin
.map { artifacts ->
        artifacts
          .filter {
            val id = (it.id as? ModuleComponentIdentifier) ?: return@filter false
            id.module.endsWith("spec")
          }
      }
  )
}
```
in the lambda (it should be artifact instead of artifacts, right?) - still it's not happy with the filter on it.
Sorry, the above does work; it was only that the IDE was not giving me the completion options while typing.
@Adam S how do I combine those two tasks? I've now got:
```kotlin
configurations.create("openApiSpec") {
    isCanBeResolved = true
    isCanBeConsumed = false
}
tasks {
    register("resolveOpenApiSpecDep", Sync::class) {
        from(configurations.getByName("openApiSpec")
            .incoming
            .artifactView { }
            .artifacts
            .resolvedArtifacts
            .map { x -> x.filter {
                val id = (it.id as? ModuleComponentIdentifier) ?: return@filter false
                id.module.endsWith("specs") || id.module.endsWith("api")
            } }
        )
        into(layout.buildDirectory.dir("extracted"))
    }
}
```
But no file is generated there.
a
so when you run `resolveOpenApiSpecDep` it doesn't download any files, and `build/extracted` is empty?
h
Yes
a
hmmmmm
oh - I missed a bit. Map each artifact to a file, after the filter
```kotlin
openApiSpec.incoming
      .artifactView {}
      .artifacts
      .resolvedArtifacts
      .map { artifacts ->
        artifacts
          .filter {
            val id = (it.id as? ModuleComponentIdentifier) ?: return@filter false
            id.module.endsWith("spec")
          }.map { 
            it.file
          }
      }
```
h
It's still empty - is there a way to debug it?
a
hmmm that sucks
try putting some `println("...")`s in the `map {}`
h
```kotlin
.map { x ->
                x.filter {
                    println("filter: ${it.id}")
                    val id = (it.id as? ModuleComponentIdentifier) ?: return@filter false
                    println("filter2: $id")
I only get output for the first println ("filter") and not for the second ("filter2")
a
ahh okay. What sort of dependencies are you adding to the Configuration? Are they from a Maven repo, or from another subproject?
h
an internal project (via Nexus)
maybe the cast of `it.id` to `ModuleComponentIdentifier` fails, so the filter returns false?
a
definitely
two options: either try and get the exact class of `it.id` and cast to it (which will probably be complicated!), or use `it.id.displayName` and try doing some string matching on that.
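one more idea, untested: `ResolvedArtifactResult.id` is a `ComponentArtifactIdentifier`, i.e. the id of the artifact, while the module coordinates sit one level deeper on `id.componentIdentifier`. So casting that instead might be what's missing (a sketch):
```kotlin
.map { artifacts ->
    artifacts
        .filter {
            // cast the *component* identifier, not the artifact identifier
            val id = it.id.componentIdentifier as? ModuleComponentIdentifier
                ?: return@filter false
            id.module.endsWith("spec")
        }
        .map { it.file }
}
```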
h
I'll see what I can get out of that; I have a meeting now so I'll continue with it in between (will be slower).
👍 1
What should come out of the task after the resolvedArtifacts method?
if I do the following:
```kotlin
from(configurations.getByName("openApiSpec")
            .incoming
            .artifacts
            .resolvedArtifacts
            .map {
                x -> x.map{ it.file }
            }
```
I do get the downloaded jar file in the extracted directory..
@Adam S the first part is working - getting the downloaded jar file into the downloaded folder:
```kotlin
tasks {
    register("resolveOpenApiSpecDep", Sync::class) {
        group = "openapi tools"
        from(configurations.getByName("openApiSpec")
            .incoming
            .artifacts
            .resolvedArtifacts
            .map { resolvedArtifactResults ->
                resolvedArtifactResults.map { it.file }
            }
        )
        into(layout.buildDirectory.dir("downloaded"))
    }
}
```
🎉 1
I do assume that only files that need to be extracted will be added to this configuration, so I don't see a reason to filter the jars.
a
that's a good assumption to make
h
I'm still struggling with a way to extract those zip/jar files into a separate/own directory using the Copy or Sync class.
```kotlin
sourceSets {
    main {
        resources {
            srcDirs(tasks.getByName("resolveOpenApiSpecDep", Sync::class).destinationDir)
        }
    }
}
```
I think something like this is a way to get the destination folder onto the classpath, but the task may produce more than one folder, so I'd need to loop over them.
a
yes, I was struggling with that too. I couldn't see a 'one-liner' way to both unzip a JAR and rename the files using the name of the JAR. Instead, either write a task that will loop over each downloaded JAR, unzip each, and then rename the files, OR rename the JSON files by reading the contents of each file
for option 1, a task for unzipping & renaming each file, try this
for option 2, something like this (so no additional task is needed)
h
Working on option 1, but I seem to be missing how to set the sourceSets
a
you can do that using a task reference
```kotlin
kotlin {
  sourceSets {
    main {
      resources.srcDir(prepareOpenApiSpecs)
    }
  }
}
```
That way Gradle will know it needs to run the task, and then use the output directory of the task as a resource directory
h
Now it does extract the jar file into `$build/openApiSpecs/<jarfilename>`, which is very good. (I still need to apply that regex to strip the version and extension.) But it works.
a
nice!
h
It's a very complicated way to get this done. Let's see what my colleague makes of it tomorrow.
v
Another way to unzip / rename / transform / whatever the incoming files of dependencies is to use an artifact transform
h
Any example on how to do that?
v
The docs have an example where things are minified and an example where things are unzipped: https://docs.gradle.org/current/userguide/artifact_transforms.html
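Condensed from the unzip example on that page, it looks roughly like this (a sketch; the `artifactType` values follow the docs' convention, and `openApiSpec` is your configuration from above):
```kotlin
import java.util.zip.ZipFile

// unpack each JAR into a directory named after the JAR file
abstract class Unzip : TransformAction<TransformParameters.None> {
    @get:InputArtifact
    abstract val inputArtifact: Provider<FileSystemLocation>

    override fun transform(outputs: TransformOutputs) {
        val input = inputArtifact.get().asFile
        val unzipDir = outputs.dir(input.nameWithoutExtension)
        ZipFile(input).use { zip ->
            for (entry in zip.entries()) {
                if (entry.isDirectory) continue
                val target = unzipDir.resolve(entry.name)
                target.parentFile.mkdirs()
                zip.getInputStream(entry).use { content ->
                    target.outputStream().use { content.copyTo(it) }
                }
            }
        }
    }
}

val artifactType = Attribute.of("artifactType", String::class.java)

dependencies {
    registerTransform(Unzip::class) {
        from.attribute(artifactType, "jar")
        to.attribute(artifactType, "unzipped")
    }
}

// request the unzipped variant when resolving the configuration
val unzippedSpecs = openApiSpec.incoming.artifactView {
    attributes.attribute(artifactType, "unzipped")
}.files
```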
h
Tried that out, but the extracted files are placed in my Gradle cache instead of the project build dir. If I point it at the project build dir, it still does something in the cache directory; on the next run, after I clean my project, it does not extract again, because it is somehow marked in the cache as already done. Maybe it's me not knowing how to do advanced Gradle stuff the right way.
v
Well, it depends on how you need to use those files. If you just need them as input for some other task, you can just use the configuration and have the files in the cache directory. If you need them in the project directory for some reason, you could still use a `Sync` task, but it will then be as trivial as `from(configuration.foo); into(whatever)`.
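With the transform registered, that would be roughly (task name is hypothetical; `unzippedSpecs` is the artifact view from the transform sketch above):
```kotlin
tasks.register<Sync>("copyOpenApiSpecs") {
    from(unzippedSpecs) // the transform outputs, wherever the cache keeps them
    into(layout.buildDirectory.dir("openApiSpecs"))
}
```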
h
Hello @Adam S and @Vampire, could you take a look and give your opinion on the following approach? (I tried to combine the better pieces of what Adam made into a simpler solution.)
```kotlin
val openApiSpec by configurations.creating {
    isCanBeResolved = true
    isCanBeConsumed = false
 }

tasks {
    register("prepareOpenApiSpecs") {
        group = "openapi tools"
        val archives = serviceOf<ArchiveOperations>()
        val fileOps = serviceOf<FileSystemOperations>()

        outputs.dir(layout.buildDirectory.dir("openApiSpecs"))

        doLast {
            fileOps.sync {
                val deps = openApiSpec.incoming.files
                for (dep in deps) {
                    from(archives.zipTree(dep)) {
                        val name = Regex("^.+(?=-)").find(dep.name)!!.value
                        into(name)
                    }
                }
                into(layout.buildDirectory.dir("openApiSpecs"))
            }
        }
    }
    getByName(openApiGenerate.name).dependsOn("prepareOpenApiSpecs")
}

openApiGenerate {}
```
• This seems to do everything I wanted in one task
• Not sure if the outputs is needed and can be omitted?
v
Besides that you use various bad practices, ...
h
Can you clarify what the bad practices are? Is using `doLast` a bad practice?
v
Too much text for mobile, I'll write them together later maybe.
And no, `doLast` per se is fine
You asked for it, so don't complain. 😄 Here are some:
• `serviceOf` is not public API
• You use `ArchiveOperations` and `FileSystemOperations` to avoid using the `Project` alternatives, but you do not do the same for `ProjectLayout`
• You do not declare your inputs; you should always declare all inputs and outputs of a task properly. Only then can the up-to-date check work properly, together with safety checks like detecting tasks that use outputs of other tasks without being properly wired, and so on
• You have the reference to the `openApiGenerate` task, then you get its name, just to get the task again by name. This is an unnecessary round-trip, and you also disturb task-configuration avoidance by forcing its realisation. If you wanted to do that - which you shouldn't - do it like `openApiGenerate { dependsOn(...) }`
• Don't depend on a task by `String` when you already have a reference to it or a task provider for it (the return value of `register`); depend on that instance directly, if you need to do it at all, which you shouldn't
• Don't use explicit `dependsOn` unless the left-hand side is a lifecycle task; instead wire task outputs to task inputs. If you properly declare the outputs of your task, and the `openApiGenerate` task properly declares its inputs, then wiring the task inputs and outputs together automatically gives you the necessary task dependency, without declaring it explicitly
• The task is not configuration cache safe
• I still think an artifact transform would be more appropriate
h
I've tried the artifact transform and it was unzipping somewhere in the Gradle cache directory. If only I could achieve the functionality above in the artifact transform it would be nice, but I spent a lot of hours trying it out and did not get the result in my local project.
• Should I use the unzip functionality instead of serviceOf()? I thought this was better because Adam came up with it.
• How do I use the Project alternatives (ArchiveOperations and FileSystemOperations)?
• What should I declare as inputs and outputs in this case? The openApiGenerate task comes with the plugin `id("org.openapi.generator")` and I can't declare inputs for that task.
I've made some changes based on your comments above:
```kotlin
tasks {
    register("prepareOpenApiSpecs") {
        group = "openapi tools"

        inputs.files(openApiSpec.incoming.files)
        outputs.dir(layout.buildDirectory.dir("openApiSpecs"))

        doLast {
            sync {
                for (dep in openApiSpec.incoming.files) {
                    from(zipTree(dep)) {
                        val name = Regex("^.+(?=-)").find(dep.name)!!.value
                        into(name)
                    }
                }
                into(layout.buildDirectory.dir("openApiSpecs"))
            }
        }
    }
    openApiGenerate.get().dependsOn("prepareOpenApiSpecs")
}
```
v
> I've tried the artifact transform and it was unzipping somewhere in the Gradle cache directory

Yes, as I said, that's expected and usually not a problem at all. If you really need the files in your project, you could still have a simple `Sync` task that copies them where you want them, but most often this is simply not necessary; you can just use the files from where they are. You should not configure paths to those files anyway, but wire things together instead, and it will then take the files from wherever they are.
> Should I use the unzip functionality instead of serviceOf()? I thought this was better because Adam came up with it.

Well, @Adam S unfortunately is not right here; `serviceOf`, as I said, is an internal utility that you should not use in your build. If you use a proper task class instead, you can just `@Inject` those services.
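As a sketch (class and property names are illustrative):
```kotlin
import javax.inject.Inject // at the top of the build script

abstract class PrepareOpenApiSpecs @Inject constructor(
    private val archives: ArchiveOperations,
    private val fileOps: FileSystemOperations,
) : DefaultTask() {

    // declared inputs/outputs make up-to-date checks and wiring possible
    @get:InputFiles
    abstract val specArchives: ConfigurableFileCollection

    @get:OutputDirectory
    abstract val outputDir: DirectoryProperty

    @TaskAction
    fun prepare() {
        fileOps.sync {
            for (archive in specArchives) {
                from(archives.zipTree(archive)) {
                    // strip "-<version>.jar", like the regex in your script
                    into(archive.name.substringBeforeLast("-"))
                }
            }
            into(outputDir)
        }
    }
}

val prepareOpenApiSpecs by tasks.registering(PrepareOpenApiSpecs::class) {
    group = "openapi tools"
    specArchives.from(openApiSpec)
    outputDir.set(layout.buildDirectory.dir("openApiSpecs"))
}
```
Then, instead of an explicit `dependsOn`, wire `prepareOpenApiSpecs.flatMap { it.outputDir }` into whatever input property the consuming task exposes; the task dependency comes along automatically.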
> How do I use the Project alternatives (ArchiveOperations and FileSystemOperations)?

Just using `sync { ... }` or `zipTree(...)` does it, but then again that makes your task even less configuration cache compatible.
> What should I declare as inputs and outputs in this case?

Everything the task uses as inputs and produces as outputs. So in your case, `inputs.files(openApiSpec).withPropertyName("inputFiles")` (the latter part is only for better log and error output).
> and I can't declare inputs for that task.

You can, the same way you do it for your ad-hoc task, but you shouldn't need to. The task should already declare which of its properties are inputs and outputs, and you just need to set the values properly by wiring your task outputs to their inputs.
> I've made some changes based on your comments above:

Partly. You are still CC incompatible if that is a concern, your input is not optimal, you still disturb task-configuration avoidance by using `openApiGenerate.get()`, and you still have the explicit task dependency.
a
> could you take a look and give your opinion on the following approach? (I tried to combine the better pieces of what Adam made into a simpler solution.)

Good job! Generally looks good, but yeah, register the input files as a file input. The rest is just tidying so that all the task logic is associated together:
```kotlin
val openApiGenerate = tasks.register("openApiGenerate") {
  // Register the task dependency. 
  // (if you use the top-level tasks.register then you can 
  // access other top-level variables out-of-order)
  dependsOn(prepareOpenApiSpecs)
  
  // (rest of task config...)
}

val prepareOpenApiSpecs = tasks.register("prepareOpenApiSpecs") {
  group = "openapi tools"
  val archives = serviceOf<ArchiveOperations>()
  val fileOps = serviceOf<FileSystemOperations>()

  outputs.dir(layout.buildDirectory.dir("openApiSpecs"))

  // register the files as an input so that Gradle knows
  // if it needs to re-run the task when the files change
  val openApiSpecFiles = openApiSpec.incoming.files
  inputs.files(openApiSpecFiles).withPropertyName("openApiSpecFiles")

  doLast {
    fileOps.sync {
      for (dep in openApiSpecFiles) {
        from(archives.zipTree(dep)) {
          val name = Regex("^.+(?=-)").find(dep.name)!!.value
          into(name)
        }
      }
      into(layout.buildDirectory.dir("openApiSpecs"))
    }
  }
}
```
> Not sure if the outputs is needed and can be omitted?

From the point of view of running the task, no, it's not strictly necessary; Gradle will run tasks that are not configured correctly. However, it's best to set up the inputs/outputs for a few reasons: Gradle will check that other tasks don't have conflicting outputs, which helps avoid mistakes; and thanks to the Build Cache and Task Avoidance, Gradle can tell when tasks are up-to-date, which will make your builds way quicker.
h
Isn’t it better to create a plugin instead of chaining tasks?
v
I don't get the question. A plugin most often does nothing else: registering several tasks and wiring them together.
h
From a user perspective, isn't applying a plugin better than having a bunch of tasks defined somewhere? Then the plugin could come with its own configuration and transparently do all the things we want: downloading jars, extracting them, setting the classpath, etc.
Finally, creating a jar and publishing it for reuse within the organization.
v
Yes, of course. If you have downstream users, giving them a plugin vs. telling them to create X tasks and wire them together - the plugin is clearly preferable; that's exactly what plugins are for. 🙂
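A minimal skeleton of such a plugin, as a sketch (plugin and class names are up to you; `PrepareOpenApiSpecs` is the task class sketched earlier):
```kotlin
class OpenApiSpecPlugin : Plugin<Project> {
    override fun apply(project: Project) {
        // the configuration users add their spec JARs to
        val openApiSpec = project.configurations.create("openApiSpec") {
            isCanBeResolved = true
            isCanBeConsumed = false
        }

        // the task that downloads and unpacks them
        project.tasks.register("prepareOpenApiSpecs", PrepareOpenApiSpecs::class.java) {
            group = "openapi tools"
            specArchives.from(openApiSpec)
            outputDir.set(project.layout.buildDirectory.dir("openApiSpecs"))
        }
    }
}
```
Published (e.g. with the `java-gradle-plugin`), downstream builds then just apply the plugin and add their dependencies to `openApiSpec`.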