How to decide Quarkus application arguments in Kubernetes at run-time?

I've built a Quarkus 2.7.1 console application using picocli that includes several subcommands. I'd like to be able to run this application within a Kubernetes cluster and decide its arguments at run-time. This is so that I can use the same container image to run the application in different modes within the cluster.
To get things started I added the JIB extension and tried setting the arguments using a configuration value quarkus.jib.jvm-arguments. Unfortunately it seems like this configuration value is locked at build-time so I'm unable to update this at run-time.
Next I tried setting quarkus.args while using the default settings for JIB. The documentation for that configuration value makes it sound general enough for the job, but it doesn't seem to have an effect when the application is run in the container. Since most references to this configuration value in the documentation are in the context of Dev Mode, I'm wondering whether it is ignored outside of that.
How can I get this application running in a container image with its arguments decided at run-time?

You can set quarkus.jib.jvm-entrypoint to any container entrypoint command you want, including scripts. An example in the docs is quarkus.jib.jvm-entrypoint=/deployments/run-java.sh. You could make use of $CLI_ARGUMENTS in such a script. Even something like quarkus.jib.jvm-entrypoint=/bin/sh,-c,'/deployments/run-java.sh $CLI_ARGUMENTS' should work too, as long as you place the script run-java.sh at /deployments in the image. The possibilities are limitless.
Also see this SO answer if you run into an issue. (The OP in that link put a custom script at src/main/jib/docker/run-java.sh, where src/main/jib is Jib's default "extra files directory", so that Jib places the script in the image at /docker/run-java.sh.)
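For illustration, here is a minimal sketch of such a wrapper; the script name, its location, and the CLI_ARGUMENTS variable are assumptions you would adapt, and the jar path depends on your base image. Placed at src/main/jib/deployments/start.sh, Jib copies it into the image at /deployments/start.sh:
#!/bin/sh
# Hypothetical wrapper: forwards the CLI_ARGUMENTS environment variable
# (plus any container arguments) to the Quarkus application.
exec java $JAVA_OPTS -jar /deployments/quarkus-run.jar $CLI_ARGUMENTS "$@"
You would then point the entrypoint at it, e.g. quarkus.jib.jvm-entrypoint=/bin/sh,/deployments/start.sh, and set CLI_ARGUMENTS through the container's environment at run-time.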

I was able to find a solution to the problem with a bit of experimenting this morning.
With the quarkus-container-image-docker extension (instead of JIB and quarkus.jib.jvm-arguments) I was able to take the template Dockerfile.jvm and extend it to pass arguments through to the CLI. The only line that needed changing was the ENTRYPOINT (details included in the snippet below): I changed the ENTRYPOINT from exec form to shell form and added an environment variable whose value is passed through as the program arguments.
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.3
ARG JAVA_PACKAGE=java-11-openjdk-headless
ARG RUN_JAVA_VERSION=1.3.8
ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en'
# Install java and the run-java script
# Also set up permissions for user `1001`
RUN microdnf install curl ca-certificates ${JAVA_PACKAGE} \
&& microdnf update \
&& microdnf clean all \
&& mkdir /deployments \
&& chown 1001 /deployments \
&& chmod "g+rwX" /deployments \
&& chown 1001:root /deployments \
&& curl https://repo1.maven.org/maven2/io/fabric8/run-java-sh/${RUN_JAVA_VERSION}/run-java-sh-${RUN_JAVA_VERSION}-sh.sh -o /deployments/run-java.sh \
&& chown 1001 /deployments/run-java.sh \
&& chmod 540 /deployments/run-java.sh \
&& echo "securerandom.source=file:/dev/urandom" >> /etc/alternatives/jre/lib/security/java.security
# Configure the JAVA_OPTIONS, you can add -XshowSettings:vm to also display the heap size.
ENV JAVA_OPTIONS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager"
# We make four distinct layers so if there are application changes the library layers can be re-used
COPY --chown=1001 target/quarkus-app/lib/ /deployments/lib/
COPY --chown=1001 target/quarkus-app/*.jar /deployments/
COPY --chown=1001 target/quarkus-app/app/ /deployments/app/
COPY --chown=1001 target/quarkus-app/quarkus/ /deployments/quarkus/
EXPOSE 8080
USER 1001
# [== BEFORE ==]
# ENTRYPOINT [ "/deployments/run-java.sh" ]
# [== AFTER ==]
ENTRYPOINT "/deployments/run-java.sh" $CLI_ARGUMENTS

I have tried the above approaches, but they didn't work with Quarkus JIB's default ubi8/openjdk-17-runtime base image. This is because that base image doesn't use /work as the WORKDIR, but /home/jboss instead.
Therefore, I created a custom start-up script and referenced it in the properties file as follows. This approach works better if there's a need to set application parameters using environment variables:
File: application.properties
quarkus.jib.jvm-entrypoint=/bin/sh,run-java.sh
File: src/main/jib/home/jboss/run-java.sh
java \
-Djavax.net.ssl.trustStore=/deployments/truststore \
-Djavax.net.ssl.trustStorePassword="$TRUST_STORE_PASSWORD" \
-jar quarkus-run.jar
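Note that this script passes only JVM system properties; if the program arguments also need to be decided at run-time, the same trick as in the Dockerfile approach above applies, i.e. appending an environment variable (CLI_ARGUMENTS is again just a placeholder you define yourself):
java \
-Djavax.net.ssl.trustStore=/deployments/truststore \
-Djavax.net.ssl.trustStorePassword="$TRUST_STORE_PASSWORD" \
-jar quarkus-run.jar $CLI_ARGUMENTS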

Related

Yocto: Data file clashes build error while enabling libvirt

While enabling libvirt in Yocto, I am seeing the data file clash issue below while building the image.
These are the packages I am trying to add (via IMAGE_INSTALL_append) to my Yocto image:
IMAGE_INSTALL_append = " \
packagegroup-core-boot \
qemu \
libvirt \
libvirt-libvirtd \
libvirt-virsh \
kernel-module-kvm \
kernel-module-kvm-intel \
"
But I see the issue below when I build the image with the above packages enabled:
`Collected errors:
check_data_file_clashes: Package iptables wants to install file /-/-/-/rootfs/etc/ethertypes
But that file is already provided by package * ebtables`
FYI: I see that libvirt depends on both iptables and ebtables.
Can someone help me understand this and how to resolve it?
I tried removing ebtables with PACKAGECONFIG_remove = "ebtables" and the image builds, but when starting the libvirtd service it is always in dead state and I see an issue related to its socket.
Actually, this is a problem that has no solution except:
Remove one of the packages (ebtables or iptables)
Remove the file ethertypes from one of the recipes
libvirt depends on iptables at compile time only, so I do not know why iptables is present in the image.
Anyway, libvirt has a PACKAGECONFIG entry for ebtables, and from your comment, removing it stopped libvirtd from working. So:
I suggest checking whether iptables is required by another package at run time; if not, remove it.
If both are required in your case, then go for the second solution, which is removing the file from one of the recipes using a bbappend file for one of them:
The block that you may need to add is:
do_install_append() {
rm ${D}/etc/ethertypes
}
either to:
meta-custom/recipes-filter/ebtables/ebtables_%.bbappend
or to:
meta-custom/recipes-extended/iptables/iptables_%.bbappend
NOTE
If you go for the second solution, you need to make sure that the file is not present in the FILES variable of the recipe that you will remove the file from, FILES_ebtables or FILES_iptables.
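Putting both points together, a minimal bbappend sketch could look like the following; the layer path is the hypothetical one from above, and the FILES adjustment is only needed if the recipe lists the file explicitly:
# meta-custom/recipes-extended/iptables/iptables_%.bbappend
do_install_append() {
    # Drop the clashing file; ${D}${sysconfdir} is the same as ${D}/etc here
    rm -f ${D}${sysconfdir}/ethertypes
}
# Only needed if ${sysconfdir}/ethertypes appears explicitly in FILES_iptables
FILES_${PN}_remove = "${sysconfdir}/ethertypes"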

Yocto: Nothing provides python-re-native

I'm running into an issue including python pyparted as a native dependency in one of my image creation bbclasses.
There is a Python script that runs to create a partitioned image file; normally I run sudo apt install python-pyparted to have pyparted available in the Ubuntu environment. But I'm not sure what I did (an update??), and the Ubuntu environment is completely ignored now. I tried to figure out how to declare the dependencies correctly in my sdimage bbclass.
do_image_sdimage[depends] = "parted-native:do_populate_sysroot \
dosfstools-native:do_populate_sysroot \
mtools-native:do_populate_sysroot \
virtual/kernel:do_deploy \
splash-images:do_deploy \
python3-native:do_populate_sysroot \
python3-pyparted-native:do_populate_sysroot \
${@d.getVar('IMAGE_BOOTLOADER', True) and d.getVar('IMAGE_BOOTLOADER', True) + ':do_deploy' or ''}"
I get an error showing
ERROR: Nothing PROVIDES 'python3-re-native' (but virtual:native:/home/dev/app/OS/sources/meta-openembedded/meta-python/recipes-extended/python-pyparted/python3-pyparted_3.10.7.bb DEPENDS on or otherwise requires it). Close matches:
python3-rpm-native
python3-native
python3-nose-native
python3-native RPROVIDES python3-re-native
ERROR: Required build target 'my-image-default' has no buildable providers.
Missing or unbuildable dependency chain was: ['my-image-default', 'python3-pyparted-native', 'python3-re-native']
Based on this it looks like I should be able to do this, but the dependency chain ignores python3-native's RPROVIDES?

How to pass credentials to sbt build cli?

I'm trying to Dockerize an old sbt build that needs access to a private Nexus repo. I've previously been using a local credentials file referenced from build.sbt, but that doesn't really fit my current use case since I want to bootstrap everything from a Dockerfile build. I'd rather not have to write credentials out to a file and then copy it into my Docker build container, but instead just pass them as Docker ARGs.
Under
https://www.scala-sbt.org/1.0/docs/Publishing.html
I read I can pass it like so:
credentials += Credentials("Some Nexus Repository Manager",
"my.artifact.repo.net", "admin", "admin123")
So I figured I could do something like the following in my Dockerfile:
ARG REPO_USER
ARG REPO_PWD
RUN sbt ";credentials += Credentials(\"Some Nexus Repository Manager\", \"repo.host.com\", ${REPO_USER}, ${REPO_PWD}) ;package"
and then run
docker build . --build-arg REPO_USER=foobar --build-arg REPO_PWD=*****
but that didn't work. I still get:
Unable to find credentials for [Sonatype Nexus Repository Manager @ repo.host.com]
Is there any nice way to pass repo credentials to sbt from cli?
Update:
I tried a file-based approach, but that didn't solve the problem, so I guess I might be on the wrong track about what's actually wrong here.
RUN echo "realm=Sonatype Nexus Repository Manager" >> .credentials && \
echo "host=repo.host.se" >> .credentials && \
echo "user=$REPO_USER" >> .credentials && \
echo "password=$REPO_PWD" >> .credentials && \
export SBT_CREDENTIALS=.credentials && \
sbt package
Update 2
I think this is no longer a Docker question at all. Having debugged it inside the Docker container, sbt simply wouldn't pick up my credentials no matter how I passed them according to the sbt docs.
I'll answer my own question.
You can use environment variables. You can set them in the Dockerfile, either directly or from build args. Something like this:
ARG REPO_USR
ARG REPO_PWD
ENV REPO_USR=${REPO_USR}
ENV REPO_PWD=${REPO_PWD}
Then you can use the environment variables in sbt:
val repoUser = sys.env.get("REPO_USR").getOrElse("")
val repoPass = sys.env.get("REPO_PWD").getOrElse("")
credentials += Credentials("Repo Realm", "repo.url.com", repoUser, repoPass)
Then you can basically pass args to docker build and they will be passed on to sbt.
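For example (values are placeholders):
docker build . --build-arg REPO_USR=foobar --build-arg REPO_PWD=secret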
The sbt docs are misleading at best, or simply just wrong. After debugging this to bits in the Docker container, I found that there was no way I could pass the creds on the CLI so that they got picked up. The SBT_CREDENTIALS variable just doesn't work either.
This comment finally saved me: SBT is unable to find credentials when attempting to download from an Artifactory virtual repo
The least intrusive way I've got working is to add an sbt config file in the docker image's home dir:
RUN mkdir -p .sbt/0.13/plugins && \
echo "credentials += Credentials(\"Sonatype Nexus Repository Manager\", \"repo.host.se\", \"$REPO_USER\", \"$REPO_PWD\")" >> .sbt/0.13/plugins/creds.sbt
RUN sbt package

Buildroot Config Option for applying custom patch

I am new to Buildroot and am working on building Linaro with it. I have multiple kernel config fragment files and have specified them in the Buildroot defconfig.
I have specified a custom kernel patches directory with BR2_LINUX_PATCH_DIR.
Some of the config flags that are supposed to be set in the resulting .config file are missing, so I suspect the patches are not being applied successfully. I also tried giving a non-existent location as the Linux patch dir and it does not give any error.
Is there anything needed other than giving a value to BR2_LINUX_PATCH_DIR, and what should the directory structure look like? The Buildroot manual says it should be package_name/patch_name. For Linux, what should the package name be? Should it be the same as the directory the Linux sources are extracted into, which for me is linux-custom?
Please suggest and guide me on this.
Thanks in advance.
The option is named BR2_LINUX_KERNEL_PATCH; there is nothing named BR2_LINUX_PATCH_DIR. It applies all the patches listed in this option (if those are files), or all files named *.patch if what's given in the option is a directory. See the code in linux/linux.mk:
define LINUX_APPLY_LOCAL_PATCHES
	for p in $(filter-out ftp://% http://% https://%,$(LINUX_PATCHES)) ; do \
		if test -d $$p ; then \
			$(APPLY_PATCHES) $(@D) $$p \*.patch || exit 1 ; \
		else \
			$(APPLY_PATCHES) $(@D) `dirname $$p` `basename $$p` || exit 1; \
		fi \
	done
endef
Also, I would recommend that you watch the output of Buildroot: it shows everything it is doing, especially it lists the patches it applied. Look at the line >>> linux .... Patching, which is the marker for the beginning of the patching step of the linux package.
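For reference, a minimal sketch of the relevant defconfig lines (the patch directory path here is hypothetical):
BR2_LINUX_KERNEL=y
BR2_LINUX_KERNEL_PATCH="board/mycompany/myboard/patches/linux"
With a directory value like this, every file matching *.patch inside it is applied to the extracted kernel sources, as the linux.mk excerpt above shows.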

How can I change the installation path of an autotools-based Bitbake recipe?

I have an autotools-based BitBake recipe which I would like to have binaries installed in /usr/local/bin and libraries installed in /usr/local/lib (instead of /usr/bin and /usr/lib, which are the default target directories).
Here's a part of the autotools.bbclass file which I found important.
CONFIGUREOPTS = " --build=${BUILD_SYS} \
--host=${HOST_SYS} \
--target=${TARGET_SYS} \
--prefix=${prefix} \
--exec_prefix=${exec_prefix} \
--bindir=${bindir} \
--sbindir=${sbindir} \
--libexecdir=${libexecdir} \
--datadir=${datadir} \
--sysconfdir=${sysconfdir} \
--sharedstatedir=${sharedstatedir} \
--localstatedir=${localstatedir} \
--libdir=${libdir} \
...
I thought that the easiest way to accomplish what I wanted to do would be to simply change ${bindir} and ${libdir}, or perhaps change ${prefix} to /usr/local, but I haven't had any success in this area. Is there a way to change these installation variables, or am I thinking about this in the wrong way?
Update:
Strategy 1
As per Ross Burton's suggestion, I've tried adding the following to my recipe:
prefix="/usr/local"
exec_prefix="/usr/local"
but this causes the build to fail during that recipe's do_configure() task, and returns the following:
| checking for GLIB... no
| configure: error: Package requirements (glib-2.0 >= 2.12.3) were not met:
|
| No package 'glib-2.0' found
This package can be found during a normal build without these modified variables. I thought that adding the following line might allow the system to find the package metadata for glib:
PKG_CONFIG_PATH = " ${STAGING_DIR_HOST}/usr/lib/pkgconfig "
but this seems to have made no difference.
Strategy 2
I've also tried Ross Burton's other suggestion to add these variable assignments into my distribution's configuration file, but this causes it to fail during meta/recipes-extended/tzdata's do_install() task. It returns that DEFAULT_TIMEZONE is set to an invalid value. Here's the source of the error from tzdata_2015g.bb
# Install default timezone
if [ -e ${D}${datadir}/zoneinfo/${DEFAULT_TIMEZONE} ]; then
install -d ${D}${sysconfdir}
echo ${DEFAULT_TIMEZONE} > ${D}${sysconfdir}/timezone
ln -s ${datadir}/zoneinfo/${DEFAULT_TIMEZONE} ${D}${sysconfdir}/localtime
else
bberror "DEFAULT_TIMEZONE is set to an invalid value."
exit 1
fi
I'm assuming that I've got a problem with ${datadir}, which references ${prefix}.
Do you want to change paths for everything or just one recipe? Not sure why you'd want to change just one recipe to /usr/local, but whatever.
If you want to change all of them, then the simple way is to set prefix in your local.conf or distro configuration (prefix = "/usr/local").
If you want to do it in a particular recipe, then just assigning prefix="/usr/local" and exec_prefix="/usr/local" in the recipe will work.
These variables are defined in meta/conf/bitbake.conf, where you can see that bindir is $exec_prefix/bin, which is probably why assigning prefix didn't work for you.
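If in doubt, you can check which values a given recipe actually ends up using by dumping its expanded environment, for example:
bitbake -e <recipe> | grep -E '^(prefix|exec_prefix|bindir|libdir|datadir)='
(bitbake -e prints the fully expanded variable assignments for that recipe, so you can see exactly which install paths your overrides produce.)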
Your first strategy was on the right track, but you were clobbering more than you wanted by changing only "prefix". If you look in sources/poky/meta/conf/bitbake.conf you'll find everything you are clobbering when you set the variable "prefix" to something other than "/usr" (like it was in my case). In order to modify only the install path with what would manually be the "--prefix" option to configure, I needed to set all the variables listed here in that recipe:
prefix="/your/install/path/here"
datadir="/usr/share"
sharedstatedir="/usr/com"
exec_prefix="/usr"