Jib Maven Plugin: How to always pull base image before image build? - jib

I'm using the Jib Maven plugin to build a container image, and have the following issue: sometimes the locally available base image (e.g. foo/bar/base/parent-image:latest) becomes older than the one available in the remote repository.
Manually invoking docker pull parent-image:latest pulls the most current :latest version, but I'd like this to happen automatically every time my image is built, similar to the --pull flag of the Docker CLI: docker build --pull my-image:latest.
Can this (always pulling the base image before the image build) be achieved with the current version (3.2.1) of the Jib Maven plugin?
I tried prefixing the base image with registry:// as described in "Setting the base image", but it makes no difference:
<from>
  <image>registry://foo/bar/base/parent-image:latest</image>
</from>
I could try to set the base image cache directory, described in "System Properties", to a random temp directory, but I don't want such an overkill workaround.
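For reference, that cache-directory workaround would look roughly like this; jib.baseImageCache is the system property listed under "System Properties", while the mktemp call is just one way to get a throwaway directory, not something prescribed by Jib:

# Point Jib's base-image layer cache at a fresh temp directory so the base image is re-fetched on every build
mvn compile jib:build -Djib.baseImageCache=$(mktemp -d)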

Related

SBT always downloads the packages/scala libraries on Docker, docker-compose

I have recently installed SBT on a Docker Ubuntu machine to get started with Scala. When I started Docker initially, it grabbed all the Java and sbt JARs from remote locations (https://repo.scala-sbt.org/scalasbt/debian/sbt-0.13.17.deb).
But whenever I run the sbt command, it starts downloading the sbt JARs again. Is there a way of maintaining a global cache so that artifacts are only downloaded once and not every time I connect to the Docker container?
My solution to this was a multi-stage build.
Have a "base" Docker image.
Copy in only build.sbt, projects.sbt and the file which sets the sbt version from your project.
That defines the required dependencies. The last line in that base image is sbt update, i.e. fetch them. That "base image" has the dependencies in it and is reusable. Just remember to rebuild it when you change library versions etc.
In the "build" image, copy over the project and proceed as normal. Make sure sbt is resolving from maven-local, and it should use the "cache" which is already in place from the base image above.
I'd be interested to hear other answers, but that's my solution. YMMV :-).
That works for me on a cloud / Kube pipeline.
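A rough Dockerfile sketch of that layout, assuming an image with sbt preinstalled is used as the starting point (the sbtscala/scala-sbt tag and the file names below are placeholders, not the answerer's exact setup):

# "base" stage: copy only the build definition and resolve dependencies once
FROM sbtscala/scala-sbt:eclipse-temurin-17.0.4_1.7.1_3.2.0 AS deps
WORKDIR /app
COPY build.sbt projects.sbt ./
COPY project/build.properties project/
RUN sbt update

# "build" stage: bring in the full project; sbt reuses the dependency cache baked in above
FROM deps AS build
COPY . .
RUN sbt compile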

Installing Jenkins plugin (mercurial) in Docker shows in plugins folder but not in Jenkins itself

Problem: I can't seem to successfully install the Mercurial plugin in Jenkins using the Dockerfile and plugins.txt combination.
What I've done so far:
I have a Dockerfile that's loading Jenkins. It has the following lines:
FROM jenkins:1.642.1
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
My plugins.txt has this line:
mercurial:1.54
When I build the image and run the container, everything seems to work, there are no errors or complaints. But the Mercurial plugin isn't marked as installed when I go to Manage Plugins, and if I try to make a build, Mercurial isn't an option under Source Code Management.
I've tried going to:
<jenkins ip address>:8080/reload
As well as the "Reload Configuration from Disk" option in Manage Jenkins. Mercurial still isn't visibly installed after either of these.
I've also done this on the command line:
docker exec -i -t container bash
ls /var/jenkins_home/plugins/
And at this point I'm totally confused, because there's mercurial, mercurial.jpi and mercurial.jpi.pinned right there in the list. Does anyone have any ideas on this? I would like to have Mercurial installed on Jenkins as soon as it's loaded from the Dockerfile without having to do it manually...
Also, I tried doing this with git-changelog as well to see if another plugin would work better, and had the same result.
As you can see on the Mercurial Plugin wiki page, the plugin currently has four mandatory dependencies and one optional one:
credentials
matrix-project
multiple-scms (optional)
ssh-credentials
scm-api
The plugin installation mechanism that you're using with the Jenkins Docker image does not automatically install dependent plugins for you, as mentioned in the documentation for the jenkins image:
All plugins need to be listed as there is no transitive dependency resolution.
Therefore you need to additionally list those plugins, and any of their transitive dependencies, in your plugins.txt file.
At the moment, the simplest way to get the full list is to start your container (potentially without plugins.txt) and install the Mercurial plugin via the Plugin Manager, which installs it along with all of its dependencies. Then you can see the list of required plugins by looking at $JENKINS_HOME/plugins.
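For example, a plugins.txt along those lines might look like the following. The Mercurial version is the one from the question; the dependency versions are placeholders I've assumed, so check the Plugin Manager or update site for versions matching your Jenkins core (the old plugins.sh format has no comment syntax, hence no inline notes):

mercurial:1.54
credentials:1.24
ssh-credentials:1.11
matrix-project:1.6
scm-api:1.0
multiple-scms:0.6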

Deployable JAR file from JB Plugin Repo does not contain my files, but the plugin runs correctly locally

Background
I am working on a simple plugin, and have already deployed to the Plugin Repository once before (successfully).
Since my last successful deployment, I found that I had a lot of issues with the IDE. After completely upgrading and modifying my plugin's directory structure, I have been able to get the plugin to Run again.
Issue
tl;dr - I have an updated plugin in the JetBrains Plugin Repository that does not work as intended, and I cannot update it correctly!
When I run the plugin, a second instance of the IDE comes up with my plugin working correctly. I edit my code and run the plugin again - the plugin runs smoothly and the updates are applied!!
With all of this, I decided to deploy my updated plugin to the Repository again. Once that was done, I decided to download the plugin and try it out myself; just to make sure things worked.
The issue is that nothing can be found in the plugin file! Just the updated plugin.xml file and Manifest.mf file. The total size of the archive is around 500 bytes. I know a correct archive would have more files in it, and in my case the file size should be around 6 KB (based on my first successful archive file).
So how can my local IDE instance find the files correctly, but the deployment feature cannot? How does the deployment feature actually work? I get the feeling I have the structure wrong, even though the new IDE instance works perfectly.
Plugin
GitHub
JetBrains Plugin Repository
When you install the plugin, the version is shown as v1.1; however, that is not true in reality. One of the easiest ways to determine the actual version of the plugin is the Folded Text foreground color.
v1.0 - RED
v1.1 - YELLOW
Deployment
Preparing Plugin Module for Deployment + resulting plugin.jar file
Contents of plugin.jar
It seems possible that, because of the restructuring, an old ChroMATERIAL.xml file was left somewhere in the build output and somehow ended up in the plugin jar. An invocation of Build > Rebuild Project should fix this problem.
There could also be problems in the project or module configuration, but the project files are not included in the GitHub repository, so that cannot be checked.

How to change sbt-docker settings to choose a specific path for artifacts

Right now, whenever I execute the sbt docker command of the sbt-docker plugin within my project, it generates the artifacts (Dockerfile and jars) under the [app-route]/target/docker/ folder.
Is there a way to change that default path, so it can generate the artifacts elsewhere? Let's say, in [app-route]/docker instead?
You can change the staging directory by setting target in docker, for example target in docker := baseDirectory.value / "docker".
Thanks to the creator of the sbt-docker plugin, who answered the question here.
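In build.sbt that would look roughly like this (a sketch assuming sbt-docker's DockerPlugin is enabled and its target in docker setting controls the staging directory; the directory name is just the one from the question):

enablePlugins(DockerPlugin)

// Stage the generated Dockerfile and jars under <project>/docker instead of target/docker
target in docker := baseDirectory.value / "docker"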

adding my own jar file as javaagent on bluemix

I want to make a custom buildpack on Bluemix; as part of it I am trying to add my own jar file as a javaagent. I used to work with Tomcat, where I just added the extra agent to the catalina.sh script.
On Bluemix these are the steps I took:
I created a new project and uploaded my code.
I cloned the default Java buildpack to my own git repository.
In the repository I added the .jar file to the /lib/java_buildpack folder.
Now comes the step I have trouble with. I located the:
java_opts.add_javaagent(@droplet.sandbox + 'javaagent.jar')
function call, which according to the comments should do exactly what I am looking for.
The issue is that when I check the function, I see that it calls the following function:
qualify_path(path, root = @droplet_root)
"$PWD/#{path.relative_path_from(root)}"
I can't figure out where this @droplet_root location is; if I could find it, I could upload my jar file there.
I tried adding the relative position like this:
java_opts << "java_buildpack/myAgent.jar"
But it didn't work.
Any suggestions on how this might be achieved? Where should I place the file, or is there any other way?
Forking the buildpack is one way to achieve this. You can implement it as a "framework" in the Java buildpack. Here are a few samples you can refer to which also add an agent jar (a rough sketch of such a framework follows the links):
https://github.com/cloudfoundry/java-buildpack/blob/master/lib/java_buildpack/framework/new_relic_agent.rb
https://github.com/cloudfoundry/java-buildpack/blob/master/lib/java_buildpack/framework/jrebel_agent.rb
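For illustration, a stripped-down framework along the lines of those samples might look roughly like this; the class name, file path and the always-true supports? check are placeholders of mine, not part of the real java-buildpack:

# lib/java_buildpack/framework/my_agent.rb - hypothetical framework component
require 'java_buildpack/component/versioned_dependency_component'

module JavaBuildpack
  module Framework
    class MyAgent < JavaBuildpack::Component::VersionedDependencyComponent
      # Download the agent jar into this component's sandbox during staging
      def compile
        download_jar
      end

      # Add -javaagent:<sandbox>/<jar> to the JVM options at release time
      def release
        @droplet.java_opts.add_javaagent(@droplet.sandbox + jar_name)
      end

      protected

      # A real framework would detect a bound service or config; always-on here for brevity
      def supports?
        true
      end
    end
  end
end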
Another slightly hacky way is to simply add the agent jar to your application package, and then add a Java option to enable the agent using the JAVA_OPTS environment variable. That requires you to find out the path where the agent jar ends up in the running application container; you can browse to it using "cf files". This depends on the internal structure of the droplet, so it may break if the buildpack changes the droplet structure.
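A rough sketch of that second approach, assuming the agent jar is packaged at the root of the application; the app name and the /home/vcap/app path are assumptions, so confirm the actual in-container location with cf files before relying on it:

# Enable the packaged agent via JAVA_OPTS (path is an assumed droplet location)
cf set-env my-app JAVA_OPTS "-javaagent:/home/vcap/app/myAgent.jar"
# Restage so the new environment variable is applied to the droplet
cf restage my-app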