In a Gradle plugin, how do I replace the Maven plugin's 'uploadArchives' with the Artifactory plugin's 'artifactoryPublish'?

Yes, I do need to apply both the 'maven' and 'artifactory' plugins. However, there are many, many existing Jenkins jobs which explicitly call 'uploadArchives' (supplied by the maven plugin). I would like to override that in my own plugin to force it to call 'artifactoryPublish' when 'uploadArchives' is used.
I want to do this programmatically in my plugin, rather than changing hundreds of build.gradle files, and I have tried every combination of calls I can think of with no luck so far.
The last caveat is that this has to work back to Gradle 2.4 at a minimum, as the Gradle wrapper is being used and several different versions of Gradle are invoked over a set of builds.
Anyone have an idea on how to accomplish this?
Here's the latest thing I tried:
project.getTasks().replace('uploadArchives', MyTask)
Although the constructor is called (verified by a println message), the task itself is never called (verified by adding an explicit 'throw' statement and never seeing the exception).

Just to clarify...
I want to override uploadArchives for three main reasons:
1. Many existing jobs (as well as programmers' fingers) have uploadArchives programmed in.
2. We WANT the extra info artifactoryPublish creates and uploads, especially if done via the Jenkins plugin, so you get the Jenkins job number, etc.
3. We do NOT want anyone using plain uploadArchives and publishing WITHOUT the extra build info.
I finally found the answer myself... it's not exactly straightforward, but does work as we wished, except for the extra output line about uploadArchives during a build.
project.plugins.apply('com.jfrog.artifactory')
project.getTasks().create([
    name     : 'uploadArchives',
    type     : MyOverrideTask,
    overwrite: true,
    dependsOn: ['artifactoryPublish'] as String[],
    group    : 'Upload'
])
Yes, I could have just disabled uploadArchives after making it depend on artifactoryPublish, but then instead of just printing 'uploadArchives' during the build, it prints 'uploadArchives [Skipped]', and we didn't want there to be any confusion.

Related

How to re-use compiled sources on different machines

To speed up our development workflow we split the tests and run each part on multiple agents in parallel. However, compiling the test sources seems to take most of the time in the testing steps.
To avoid this, we pre-compile the tests using sbt test:compile and build a docker image with the compiled targets.
Later, this image is used on each agent to run the tests. However, it seems to recompile the tests and application sources even though the compiled classes exist.
Is there a way to make sbt use existing compiled targets?
Update: To give more context
The question strictly relates to scala and sbt (hence the sbt tag).
Our CI process is broken down into multiple phases. It's roughly something like this.
Stage 1: Use sbt to compile the Scala project into Java bytecode using sbt compile. We compile the test sources in the same stage using sbt test:compile. The targets are bundled into a docker image and pushed to the remote repository.
Stage 2: We use multiple agents to split and run the tests in parallel. The tests run from the built docker image, so the environment is the same. However, running sbt test causes the project to recompile even though the compiled bytecode exists.
To make this clear, I basically want to compile on one machine and run the compiled test sources on another without re-compiling.
Update
I don't think https://stackoverflow.com/a/37440714/8261 is the same problem because, unlike it, I don't mount volumes or build on the host machine. Everything is compiled and run within docker, but in two build stages. Because of this, the file modification times and paths should stay the same.
The debug output has something like this
Initial source changes:
removed:Set()
added: Set()
modified: Set()
Invalidated products: Set(/app/target/scala-2.12/classes/Class1.class, /app/target/scala-2.12/classes/graph/Class2.class, ...)
External API changes: API Changes: Set()
Modified binary dependencies: Set()
Initial directly invalidated classes: Set()
Sources indirectly invalidated by:
product: Set(/app/Class4.scala, /app/Class5.scala, ...)
binary dep: Set()
external source: Set()
All initially invalidated classes: Set()
All initially invalidated sources:Set(/app/Class4.scala, /app/Class5.scala, ...)
Recompiling all 304 sources: invalidated sources (266) exceeded 50.0% of all sources
Compiling 302 Scala sources and 2 Java sources to /app/target/scala-2.12/classes ...
It has no Initial source changes, but products are invalidated.
Update: Minimal project to reproduce
I created a minimal sbt project to reproduce the issue.
https://github.com/pulasthibandara/sbt-docker-recomplile
As you can see, nothing changes between the build stages, other than the second stage running in a new step (a new container).
While https://stackoverflow.com/a/37440714/8261 pointed at the right direction, the underlying issue and the solution for this was different.
Issue
SBT seems to recompile everything when it is run in different stages of a docker build. This is because docker compresses the images created in each stage, which strips out the millisecond portion of the lastModifiedDate of the sources.
SBT depends on lastModifiedDate when determining whether sources have changed, and since it is different (in the milliseconds part), the build triggers a full recompilation.
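To illustrate the mismatch (a hypothetical sketch with made-up numbers, not from the sbt source): once a file's timestamp loses its millisecond part, it no longer matches the value sbt recorded when it compiled, so the source is treated as changed.

// Hypothetical illustration of the timestamp mismatch; the numbers are made up.
val recordedAtCompileTime = 1530000000123L                           // lastModified seen in stage 1, with milliseconds
val afterImageFlattening  = (recordedAtCompileTime / 1000L) * 1000L  // docker keeps only whole seconds
println(recordedAtCompileTime == afterImageFlattening)               // false, so sbt considers the source modified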
Solution
Java 8:
Set -Dsbt.io.jdktimestamps=true when running sbt, as recommended in https://github.com/sbt/sbt/issues/4168#issuecomment-417655678, to work around this issue.
Newer versions:
Follow the recommendation in https://github.com/sbt/sbt/issues/4168#issuecomment-417658294
I solved the issue by setting the SBT_OPTS environment variable in the Dockerfile like this:
ENV SBT_OPTS="${SBT_OPTS} -Dsbt.io.jdktimestamps=true"
The test project has been updated with this workaround.
Using SBT:
I think there is already an answer to this here: https://stackoverflow.com/a/37440714/8261
It looks tricky to get exactly right. Good luck!
Avoiding SBT:
If the above approach is too difficult (i.e. getting sbt test to consider that your test classes do not need re-compiling), you could avoid sbt altogether and instead run your test suite using java directly.
If you can get sbt to log the java command that it is using to run your test suite (e.g. using debug logging), then you could run that command on your test runner agents directly, which would completely preclude sbt re-compiling things.
(You might need to write the java command into a script file, if the classpath is too long to pass as a command-line argument in your shell. I have previously had to do that for a large project.)
This would be a much hackier approach than the one above, but might be quicker to get working.
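One way to assemble that java command is to have sbt print the fully resolved test classpath on the build machine (where compiling is fine) and paste it into the script file. Below is a minimal sketch for an sbt 1.x build.sbt; the task name printTestClasspath is hypothetical.

// build.sbt (sketch): a hypothetical helper task that prints the resolved test
// classpath, so you can assemble the `java -cp ... <TestRunner>` command by hand.
lazy val printTestClasspath = taskKey[Unit]("Print the fully resolved test classpath")

printTestClasspath := {
  val cp = (Test / fullClasspath).value.map(_.data.getAbsolutePath)
  // Joined with the platform path separator, ready to paste after `java -cp`.
  println(cp.mkString(java.io.File.pathSeparator))
}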
A possible solution might be defining your own sbt task without dependencies, or trying to change the test task. For example, you could create a task that runs a JUnit runner, if that is your testing framework. To define a task, see the documentation on Implementing Tasks.
You could even go as far as compiling, sending the code, and running it on the remote agents from the same task, since a task can run any Scala code you want. From the sbt reference manual:
You could be defining your own task, or you could be planning to redefine an existing task. Either way looks the same; use := to associate some code with the task key
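As a rough sketch of that idea (sbt 1.x syntax, JUnit 4 assumed; the task name, test class name, and directory layout are placeholders you would need to adapt), a task can fork a plain java process against the already-built class directories instead of going through test and its compile step:

// build.sbt (sketch): run already-compiled tests without depending on `compile`.
lazy val runPrecompiledTests = taskKey[Unit]("Run pre-compiled tests via JUnitCore")

runPrecompiledTests := {
  import scala.sys.process._
  // Class directories produced in the earlier build stage; referencing them
  // directly avoids the tasks that would trigger recompilation.
  val mainClasses = (Compile / classDirectory).value.getAbsolutePath
  val testClasses = (Test / classDirectory).value.getAbsolutePath
  // Library JARs only; unlike fullClasspath this does not force a compile.
  val libs = (Test / externalDependencyClasspath).value.map(_.data.getAbsolutePath)
  val cp   = (Seq(mainClasses, testClasses) ++ libs).mkString(java.io.File.pathSeparator)
  // com.example.MyTest is a placeholder; list your real test classes here.
  val exit = Seq("java", "-cp", cp, "org.junit.runner.JUnitCore", "com.example.MyTest").!
  if (exit != 0) sys.error(s"Tests failed with exit code $exit")
}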

Is it necessary to "clean" SBT before "stage" for production builds?

We are using Jenkins and the SBT plugin to build, test and deploy our application. I see everywhere that people use the "clean" command before "test" or "stage", which leads to very long compile times.
Is it necessary to clean everything for each build? What is the risk of using the incremental compiler for production builds?
I think it's a matter of taste. For continuous integration builds, it's ok to rely on the incremental compiler; if something fails I can investigate quite quickly.
But if, for whatever reason, the incremental compiler decides to ignore some changes (and it already happened to me in development), you may get some difficult bugs to resolve.
So, if you want something that's reproducible from a fresh code checkout, just run "clean" before everything else.
Most of the time, I configure Jenkins with two build tasks: test builds, triggered by code changes, and packaging builds targeted for deployment and triggered periodically (or manually in some cases, with automatic deployment).
If the build script and/or the command line passed to sbt have not changed, then it should not be necessary to do a clean.
I have been using Spark for several years, and it has one of the most complex sbt builds you can find. It works just fine to avoid doing clean as long as the build process itself (as embodied in the project/sbt build artifacts and the sbt command line) is the same as in previous runs.

A hard way to get rid of everything generated by sbt

"An easy way to get rid of *everything* generated by SBT?" asks for an easy way to clean up all files generated by sbt, and didn't find one. I'll ask for a hard one. How do I get a 'make cleanall' with sbt?
UPDATE: For a definition of everything, you have to know why you ever need to clean all. There are two common reasons:
To distribute something: You need to confirm that someone can take a fresh machine, download your code, and build it without problem. (Or, someone can't, and you want to debug).
When something has gone horribly wrong. Some funny version of some jar somewhere is breaking something somehow. Sometimes it's easiest to just start fresh, rather than try to debug... (See Jim Gray's Why Computers Stop)
Here are the usual suspects of things you need to clean:
~/.sbt/boot - Safe cache of sbt itself to avoid disappearing JAR oddities
~/.sbt/**/target - Compiled global plugin/build definitions.
~/.ivy2/cache - Ivy cache of resolved dependencies
~/.ivy2/local - publishLocal files
<cwd>/project/**/target - Compiled build/plugin definitions
<cwd>/**/target - Artifacts from building your project, including compiler caches, classfiles, etc.
sbt is unable to provide a "clean everything" task directly, because deleting the JARs/classes sbt is actively using to run your build leads to super odd and evil behavior. But you could write a simple BASH script which can accomplish this if desired.
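A sketch of such a script follows, written in Scala rather than bash for consistency with the rest of this thread; the paths mirror the list above, and you would run it outside of sbt so that nothing it deletes is in use.

// cleanall.scala (sketch): delete the usual sbt/Ivy caches and target directories.
// Run it as a standalone script (`scala cleanall.scala`), never from inside sbt.
import java.nio.file._
import java.util.Comparator
import java.util.stream.Collectors

def rmTree(p: Path): Unit =
  if (Files.exists(p))
    Files.walk(p).sorted(Comparator.reverseOrder[Path]()).forEach(f => Files.delete(f))

val home = Paths.get(sys.props("user.home"))

// Global caches from the list above; ~/.sbt/**/target can be handled the same
// way as the project-local targets below.
Seq(".sbt/boot", ".ivy2/cache", ".ivy2/local").foreach(dir => rmTree(home.resolve(dir)))

// Every `target` directory under the current project (covers project/**/target too).
val targets = Files.walk(Paths.get("."))
  .filter(p => Files.isDirectory(p) && p.getFileName.toString == "target")
  .collect(Collectors.toList[Path]())
targets.forEach(p => rmTree(p))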

How to use Jenkins Multi-Configuration (Matrix) type Projects?

The official Jenkins wiki page for Matrix projects isn't really helping me, so I have a few questions.
We're trying to build a couple of projects that are all essentially the same, just some are being branded differently for our customers. In other words, the software / tests / etc. are all identical, except for some tweaks to turn BrandA into BrandB (or BrandC, etc.)
I figure I should be using a Matrix project to create builds for BrandA, BrandB, etc. While I haven't figured out all my steps yet (including how to rename executables after they're built), I know that I will need to pass the brand name to many of my Jenkins PowerShell scripts during the build process, and then use that brand name in the script.
How do I get these variables into my scripts? Are they automatically passed in to every build step in Jenkins? What is the variable name to use?
Finally, is there a good resource on building these multi-configuration projects in Jenkins? I can't seem to find anything comprehensive online.
If you usually build the job for BrandA and only occasionally for BrandB and BrandC, a matrix project may not be what you want. I recommend, instead, using a parameterized job where the brand is a parameter whose default value is BrandA. If the parameter is named BRAND, it is accessible in all of the build and publish steps as ${BRAND}, and as an environment variable as %BRAND%.
I refer you to the parameterized build wiki for more details.
Yes, ${BRAND} and %BRAND% should work fine.
If you're using Maven, ${env.BRAND} does this too.
There's a plugin that lets you see all the environment variables available to your job/build.
https://wiki.jenkins-ci.org/display/JENKINS/EnvInject+Plugin
I'm not aware of that kind of process, but I suggest you use the Copy project functionality:
New Job
Copy from existing job
You will have a copy of your job and you'll be able to easily set up all the specific fields.

How to parameterize Bamboo builds?

Please note, although my specific example here involves Java/Grails, it really applies to any type of task available in Bamboo.
I have a task that is a part of a Bamboo build where I run a Java/Grails app like so:
grails run-app -Dgrails.env=<ENV>
Where "<ENV>" can be one of several values (dev, prod, staging, etc.). It would be nice to "parameterize" the plan so that, sometimes, it runs like so:
grails run-app -Dgrails.env=dev
And other times, it runs like so:
grails run-app -Dgrails.env=staging
etc. Is this possible, and if so, how? And does the REST API allow me to specify parameter info so I can kick off differently parameterized builds using cURL or wget?
This seems to be a workaround, but I believe it can help resolve your issue. Atlassian has a free plugin called the Bamboo Inject Variables Plugin. Basically, with this plugin, you can create an "Inject Bamboo Variables from file" task to read a variable from a file.
So the idea here is to have your script write the variable to a specific file and then kick off the build; the build itself will read that variable from the file and use it in the grails task.
UPDATE
After a search, I found that you can use the REST API to change plan variables (NOT global ones). This would make your task simpler: just define a plan variable (in Plan Configuration -> Variables tab), then change it whenever you need to. Information on how to change it is available in the Bamboo Knowledge Base.