Jenkins deployment with Kustomize - how to add JENKINS_OPTS - kubernetes

I feel like this should already have been asked, but I'm having difficulty finding a concrete answer. I'm deploying Jenkins through ArgoCD by defining the deployment via Kustomize (Kubernetes YAML). I want to inject a prefix so that Jenkins starts on /jenkins, but I don't see a way to add it. I saw online that I can add an env tag, but no full example of this was available. Where would I inject a prefix value when using Kubernetes YAML for a Jenkins deployment?

So, I solved this issue myself, and I'd like to post the answer, as this is the top result when searching "Kustomize JENKINS_OPTS".
In your project, assuming you are using Kustomize to deploy Jenkins (this will work with any app deployment where you want to inject values at deploy time), you should have a project structure similar to this:
ProjectA
|
|---> app.yaml //contains the yaml definitions for your deployment
|---> kustomize.yaml //entry file to run Kustomize to deploy your app
Add a new file to your project structure. Name it whatever you want; I named mine app-env.yaml. It will look something like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  template:
    spec:
      containers:
        - name: jenkins
          env:
            - name: JENKINS_OPTS
              value: --prefix=/jenkins
This patch injects JENKINS_OPTS with the --prefix flag into the Jenkins container at deploy time, so Jenkins starts under the /jenkins URL prefix. You can add multiple env variables and inject any values you want; my example uses Jenkins-specific flags since this question centered around Jenkins, but the approach works for any app. Then reference this file from the Kustomize file from earlier:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: kustomize-
resources:
  - app.yaml
patchesStrategicMerge:
  - app-env.yaml
When your app is deployed via Kubernetes, it runs through its normal startup process while picking up the values defined in your env patch file. Hope this helps anyone else.
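If you want to double-check the patch before ArgoCD syncs it, you can render the manifests locally (a quick sanity check, assuming kustomize is available on your machine):
$ kustomize build . | grep -A 1 JENKINS_OPTS
This should print the JENKINS_OPTS entry with the --prefix=/jenkins value in the rendered Deployment.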

Related

Kustomize HelmChartInflationGeneration Error With ChartName Not Found

I have the following chartInflator.yml file:
apiVersion: builtin
kind: ChartInflator
metadata:
  name: project-helm-inflator
chartName: helm-k8s
chartHome: ../../../helm-k8s/
releaseName: project-monitoring-chart
values: ../../values.yaml
releaseNamespace: project-monitoring-ns
When I ran it using this, I got the error message below:
$ kustomize build .
Error: loading generator plugins: failed to load generator: plugin HelmChartInflationGenerator.builtin.[noGrp]/project-helm-inflator.[noNs] fails configuration: chart name cannot be empty
Here is my project structure:
project
- helm-k8s
  - values.yml
  - Chart.yml
  - templates
    - base
      - project-namespace.yml
      - grafana
        - grafana-service.yml
        - grafana-deployment.yml
        - grafana-datasource-config.yml
      - prometheus
        - prometheus-service.yml
        - prometheus-deployment.yml
        - prometheus-config.yml
        - prometheus-roles.yml
      - kustomization.yml
    - prod
      - kustomization.yml
    - test
      - kustomization.yml
I think you may have found some outdated documentation for the helm chart generator. The canonical documentation for this is here. Reading that implies several changes:
Include the inflator directly in your kustomization.yaml in the helmCharts section.
Use name instead of chartName.
Set chartHome in the helmGlobals section rather than per-chart.
That gets us something like this in our kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmGlobals:
  chartHome: ../../../helm-k8s/
helmCharts:
  - name: helm-k8s
    releaseName: project-monitoring-chart
    values: ../../values.yaml
    releaseNamespace: project-monitoring-ns
I don't know if this will actually work -- you haven't provided a reproducer in your question, and I'm not familiar enough with Helm to whip one up on the spot -- but I will note that your project layout is highly unusual. You appear to be trying to use Kustomize to deploy a Helm chart that contains your kustomize configuration, and it's not clear what the benefit is of this layout vs. just creating a helm chart and then using kustomize to inflate it from outside of the chart templates directory.
You may need to add --load-restrictor LoadRestrictionsNone when calling kustomize build for this to work; by default, the chartHome location must be contained by the same directory that contains your kustomization.yaml.
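That is, something along the lines of:
$ kustomize build --load-restrictor LoadRestrictionsNone .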
Update
To make sure things are clear, this is what I'm recommending:
Remove the kustomize bits from your helm chart, so that it looks like this.
Publish your helm charts somewhere. I've set up github pages for that repository and published the charts at http://oddbit.com/open-electrons-deployments/.
Use kustomize to deploy the chart with transformations. Here we add a -prod suffix to all the resources:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  - name: open-electrons-monitoring
    repo: http://oddbit.com/open-electrons-deployments/
nameSuffix: -prod
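Note that, depending on your kustomize version, inflating Helm charts also needs Helm support enabled explicitly at build time; treat the exact flag as an assumption to verify against your version:
$ kustomize build --enable-helm .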

How to change application log lvl in Kubernetes? [duplicate]

This question already has answers here:
Kubernetes deployment - Externalizing log file
(2 answers)
Closed 1 year ago.
I have a Java application which has to be deployed to different environments via gitlab-ci. One of these environments is a Kubernetes cluster. My app has some log configs; for the Kubernetes cluster they live in logback-k8s.xml.
So, in logback I have something like
<root level="INFO">
  <appender-ref ref="STDOUT"/>
</root>
In a Dockerfile I have something like
ENTRYPOINT ["java","-jar","run/app.jar","-Dlogback.configurationFile=/run/classes/logback_k8s.xml"]
My app is deployed via Deployment.yaml, where I have something like
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
containers:
  - image: registry.myreg.ru/app:1.1.1
    name: app
In .gitlab-ci.yml I have something like
Kuber:
  stage: Kubectl
  script:
    - kubectl apply -f kuber/Deployment.yaml -n development
Given all of this, how can I change the logging level of my app when it is already deployed to the cluster? The naive way I can imagine is to change the logback config in the project and then rerun the pipeline, but that seems like too many steps. What if I have trouble with the currently running version of the app, and all I want is to restart it at DEBUG level to inspect the situation? What are the best practices?
UPD: already answered here
The simplest approach here is to let your app read the log level from an environment variable. This way you don't need to change the app, and the behaviour will depend on the environment. To find out how to add an env variable to a container in your deployment, you can run: kubectl explain deployment.spec.template.spec.containers.env --api-version=apps/v1 (you can always use kubectl explain to understand how to configure a particular Kubernetes resource).
So, in your case, you can configure your deployment like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  template:
    spec:
      containers:
        - image: registry.myreg.ru/app:1.1.1
          name: app
          env:
            - name: LOG_LEVEL
              value: "INFO"
Also, don't forget to specify a selector for your deployment: kubectl explain deployment.spec.selector --api-version=apps/v1 (see https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/). If your app ignores the LOG_LEVEL env variable, you can use variable substitution in the logback configuration, as sketched below.
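A minimal sketch of that substitution (assuming the LOG_LEVEL variable from the deployment above; logback falls back to INFO when the variable is unset):
<root level="${LOG_LEVEL:-INFO}">
  <appender-ref ref="STDOUT"/>
</root>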

ConfigMap that can reference current Namespace

I'm working with a Pod (Shiny Proxy) that talks to the Kubernetes API to start other pods. I want to make this generic, so I don't want to hardcode the namespace (because I intend to have multiple of these, deployed probably as an OpenShift Template or similar).
I am using Kustomize to set the namespace on all objects. Here's what my kustomization.yaml looks like for my overlay:
bases:
  - ../../base
namespace: shiny
commonAnnotations:
  technical_contact: A Local Developer <somedev#example.invalid>
Running Shiny Proxy and having it start the pods I need it to (I have service accounts and RBAC already sorted) works, so long as in the configuration for Shiny Proxy I specify (hard-code) the namespace that the new pods should be generated in. The default namespace that Shiny Proxy will use is (unfortunately) 'default', which is inappropriate for my needs.
Currently for the configuration I'm using a ConfigMap (perhaps I should move to a Kustomize ConfigMapGenerator).
The ConfigMap in question is currently like the following:
apiVersion: v1
kind: ConfigMap
metadata:
  name: shiny-proxy
data:
  application_yml: |
    ...
    container-backend: kubernetes
    kubernetes:
      url: https://kubernetes.default.svc.cluster.local:443
      namespace: shiny
    ...
The above works, but 'shiny' is hardcoded; I would like to be able to do something like the following:
namespace: { .metadata.namespace }
But this doesn't appear to work in a ConfigMap, and I don't see anything in the documentation that would lead me to believe that it would, or that a similar thing is possible within the ConfigMap machinery.
Looking over the Kustomize documentation doesn't fill me with clarity either, particularly as the configuration file is essentially plain text (and not a YAML document as far as the ConfigMap is concerned). I've seen some use of vars, but https://github.com/kubernetes-sigs/kustomize/issues/741 leads me to believe that's a non-starter.
Is there a nice declarative way of handling this? Or should I be looking to have the templating smarts happen within the container? That seems kinda wrong to me, but I am still new to Kubernetes (and OpenShift).
I'm using CodeReady Containers 1.24 (OpenShift 4.7.2) which is essentially Kubernetes 1.20 (IIRC). I'm preferring to keep this fairly well aligned with Kubernetes without getting too OpenShift specific, but this is still early days.
Thanks,
Cameron
If you don't want to hard-code a specific data in your manifest file, you can consider using Kustomize plugins. In this case, the sedtransformer plugin may be useful. This is an example plugin, maintained and tested by the kustomize maintainers, but not built-in to kustomize.
As you can see in the Kustomize plugins guide:
Kustomize offers a plugin framework allowing people to write their own resource generators and transformers.
For more information on creating and using Kustomize plugins, see Extending Kustomize.
I will create an example to illustrate how you can use the sedtransformer plugin in your case.
Suppose I have a shiny-proxy ConfigMap:
NOTE: I don't specify a namespace; I use namespace: NAMESPACE instead.
$ cat cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: shiny-proxy
data:
  application_yml: |
    container-backend: kubernetes
    kubernetes:
      url: https://kubernetes.default.svc.cluster.local:443
      namespace: NAMESPACE
    something_else:
      something: something
To use the sedtransformer plugin, we first need to create the plugin’s configuration file which contains a YAML configuration object:
NOTE: In argsOneLiner: I specify that NAMESPACE should be replaced with shiny.
$ cat sedTransformer.yaml
apiVersion: someteam.example.com/v1
kind: SedTransformer
metadata:
  name: sedtransformer
argsOneLiner: s/NAMESPACE/shiny/g
Next, we need to put the SedTransformer Bash script in the right place.
When loading, kustomize will first look for an executable file called
$XDG_CONFIG_HOME/kustomize/plugin/${apiVersion}/LOWERCASE(${kind})/${kind}
I create the necessary directories and download the SedTransformer script from GitHub:
NOTE: The downloaded script needs to be executable.
$ mkdir -p $HOME/.config/kustomize/plugin/someteam.example.com/v1/sedtransformer
$ cd $HOME/.config/kustomize/plugin/someteam.example.com/v1/sedtransformer
$ wget https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/plugin/someteam.example.com/v1/sedtransformer/SedTransformer
$ chmod a+x SedTransformer
Finally, we can check if it works as expected:
NOTE: To use this plugin, you need to provide the --enable-alpha-plugins flag.
$ tree
.
├── cm.yaml
├── kustomization.yaml
└── sedTransformer.yaml
0 directories, 3 files
$ cat kustomization.yaml
resources:
  - cm.yaml
transformers:
  - sedTransformer.yaml
$ kustomize build --enable-alpha-plugins .
apiVersion: v1
data:
  application_yml: |
    container-backend: kubernetes
    kubernetes:
      url: https://kubernetes.default.svc.cluster.local:443
      namespace: shiny
    something_else:
      something: something
kind: ConfigMap
metadata:
  name: shiny-proxy
Using the sedtransformer plugin can be especially useful if you want to replace NAMESPACE in a number of places.
I found the easiest way of doing this was to use an entrypoint script in the container that reads the service account credentials mounted into the container (specifically the namespace file) and exposes that as an environment variable.
#!/bin/sh
# Read the pod's namespace from the mounted service account secret
export SHINY_K8S_NAMESPACE=`cat /run/secrets/kubernetes.io/serviceaccount/namespace`
cd /opt/shiny-proxy/working
exec java ${JVM_OPTIONS} -jar /opt/shiny-proxy/shiny-proxy.jar
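For completeness, the container image then uses this script as its entrypoint; the file name and path here are assumptions based on the layout above:
COPY entrypoint.sh /opt/shiny-proxy/entrypoint.sh
RUN chmod +x /opt/shiny-proxy/entrypoint.sh
ENTRYPOINT ["/opt/shiny-proxy/entrypoint.sh"]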
Within the application configuration, Shiny Proxy supports the use of environment variables in its configuration file, so I can refer to the pod's namespace using ${SHINY_K8S_NAMESPACE}.
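For illustration, a sketch of how the relevant part of the application_yml from the ConfigMap above could then look (the surrounding configuration is unchanged; exactly how the variable is referenced is an assumption about the config format):
container-backend: kubernetes
kubernetes:
  url: https://kubernetes.default.svc.cluster.local:443
  namespace: ${SHINY_K8S_NAMESPACE}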
Although, I've just now seen the idea of a fieldRef (from https://docs.openshift.com/enterprise/3.2/dev_guide/downward_api.html), which would be generalisable to things other than just the namespace:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-env-test-pod
spec:
  containers:
    - name: env-test-container
      image: gcr.io/google_containers/busybox
      command: ["/bin/sh", "-c", "env"]
      env:
        - name: MY_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
  restartPolicy: Never

Azure CD Pipeline to push image into the AKS (Kubernetes pipeline)

I am very new to creating a CD pipeline to grab an image from Azure Container Registry (ACR) and push it into Azure Kubernetes Service (AKS).
In the first part, the CI pipeline, I am able to push my .NET Core API image into ACR; now my aim is to:
Create a CD pipeline to grab that image and deploy it to Kubernetes.
I have already created a Kubernetes cluster in Azure with 3 agents running. I want to make it very simple, without involving any deployment.yaml file etc.
Can anyone help me out with how I can achieve this goal, and what are the exact tasks in my CD pipeline?
Thanks for the help in advance
Creating the YAML file is critical for being able to redeploy and track what is happening. If you don't want to create YAML then you have limited options. You could execute the imperative command from Azure DevOps by using a kubectl task.
kubectl create deployment <name> --image=<image>.azureacr.io
Or you can use the Kubernetes provider for Terraform to avoid creating YAML directly.
Follow-up:
So if you are familiar with the Kubernetes imperative commands, you can use them to generate your YAML by using the --dry-run and --output options, like so:
kubectl create deployment <name> --image=<image>.azureacr.io --dry-run --output yaml > example.yaml
That produces something like the following, which you can use to bootstrap your manifest file.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: example
  name: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: example
    spec:
      containers:
        - image: nginx
          name: nginx
          resources: {}
status: {}
Now you can pull that repo or an artifact that contains that manifest into your Azure DevOps Release Pipeline and add the "Deploy to Kubernetes Cluster" task.
This should get you pretty close to completing a pipeline.
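As a rough sketch of such a step in YAML pipeline syntax: the task is the built-in Kubernetes manifest task, while the service connection name, namespace, and manifest path are placeholders to adjust for your setup:
steps:
  - task: KubernetesManifest@0
    inputs:
      action: deploy
      kubernetesServiceConnection: my-aks-connection   # placeholder service connection
      namespace: default
      manifests: $(Pipeline.Workspace)/manifests/example.yaml   # path to the manifest produced above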
This is basically impossible, or at least doesn't really make sense, without any deployment.yaml file or something similar. You can use:
kubectl create deployment %name% --image=your_image.azurecr.io
but this is not really flexible and won't get you far. If you want to use Kubernetes, you have to understand deployments, pods, services, etc. There is no way of getting around that.

Conditional container declaration in k8s config

I am using v1beta2 of the Kubernetes API and I have a configuration of kind Deployment.
In this configuration, I have the base config of my app, and I want to conditionally add a second Docker image (container) in the same pod.
My config file:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: ${MY_APP_NAME}
spec:
  containers:
    - name: my_first_container
      image: image_url
      [...]
    - name: my_second_container   # <------ I want to put a conditional declaration of this container
      [...]
I don't want to add the second container in a separate pod.
The condition is based on a variable like ${K8S_CONTAINER2_CONDITION}, whose value is set by a sed command on Linux.
This command replaces variables like ${MY_APP_NAME}.
How can I make the declaration of this container conditional?
For some applications I need to deploy both containers, and for others only the first one, but I have only one k8s configuration file (YAML).
You should look at Helm charts for customizing the deployment file at deploy time; a sketch is shown below.
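As a rough sketch, the containers section of a Helm template could gate the second container on a value; secondContainer.enabled and secondContainer.image are hypothetical keys in your values.yaml:
spec:
  containers:
    - name: my_first_container
      image: image_url
    {{- if .Values.secondContainer.enabled }}
    - name: my_second_container
      image: {{ .Values.secondContainer.image }}
    {{- end }}
Deploying with or without the second container is then just a matter of setting the value, e.g. helm install my-app ./chart --set secondContainer.enabled=true.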