How to pull an environment variable into deployment.yaml? - kubernetes

I created a Jenkins pipeline that builds a Dockerfile and then creates a Helm chart for me. The problem is that the name of the image I push to the Docker Hub repository changes according to the Jenkins build number.
deployment.yaml
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
values.yaml:
image:
  repository: photop/micro_focus
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: "%image_tag%"
Jenkinsfile:
stage ('Deploy&Operate HM') {
    steps {
        script {
            bat 'minikube start'
            bat 'kubectl create deployment %BUILD_NUMBER% --image="%BUILD_NUMBER%":latest'
            bat 'helm install test-%BUILD_NUMBER% ./micro --set image_tag=%BUILD_NUMBER%'
        }
    }
}
Output:
Failed to apply default image tag "photop/micro_focus:%image_tag%": couldn't parse image reference "photop/micro_focus:%image_tag%": invalid reference format
How can I get the Jenkins build number substituted instead of the literal %image_tag% in:
photop/micro_focus:%image_tag%

You create the deployment with kubectl, which is applied to your cluster immediately since you did not specify the --dry-run=client param. Given that, I do not understand why you would also run helm install; using both seems ambiguous. But I could be wrong and may be misunderstanding this approach.
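As a side note, the template in the question reads .Values.image.tag, so the key passed with --set has to match it; a sketch of the helm line under that assumption (the rest of the pipeline unchanged):

```
bat 'helm install test-%BUILD_NUMBER% ./micro --set image.tag=%BUILD_NUMBER%'
```

With --set image.tag=..., the literal "%image_tag%" placeholder in values.yaml is simply overridden at install time, so it never reaches the image reference.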

Related

Integration test for kubernetes deployment with helm on openshift

I am trying to use Ansible or helm test to verify that all resources are up and running after the deployment of Ansible Automation Platform (automation controller, private-automation-hub) on OpenShift.
Currently, I am using an Ansible assertion to check the deployments, but it seems I could use --atomic with helm commands and check that all resources are up after the helm deployment.
Can you help me use Ansible to check all the resources (not only deployments but everything I deployed with the helm chart)? Maybe example code, or, if possible, some examples with helm test?
Thank you.
- name: Test deployment
  hosts: localhost
  gather_facts: false
  # vars:
  #   deployment_name: "pah-api"
  tasks:
    - name: gather all deployments
      shell: oc get deployment -o template --template '{{"{{"}}range.items{{"}}"}}{{"{{"}}.metadata.name{{"}}"}}{{"{{"}}"\n"{{"}}"}}{{"{{"}}end{{"}}"}}'
      register: deployed_resources
    # - name: print the output of deployments
    #   debug:
    #     var: deployed_resources.stdout_lines
    - name: Get deployment status
      shell: oc get deployment {{ item }} -o=jsonpath='{.status.readyReplicas}'
      with_items: "{{ deployed_resources.stdout_lines }}"
      register: deployment_status
      failed_when: deployment_status.rc != 0
    - name: Verify deployment is running
      assert:
        that:
          - deployment_status.stdout != 'null'
          - deployment_status.stdout != '0'
        fail_msg: 'Deployment {{ deployed_resources }} is not running.'
Currently I only check deployments, but it would be nice to check all resources (deployed with the helm chart) with Ansible or via helm test.
You could use the Ansible Helm module. The atomic parameter is available out of the box: https://docs.ansible.com/ansible/latest/collections/kubernetes/core/helm_module.html
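A minimal sketch of such a task, assuming the kubernetes.core collection is installed; the release name and chart path are placeholders:

```yaml
- name: Deploy chart and wait for all resources, rolling back on failure
  kubernetes.core.helm:
    name: test-release            # hypothetical release name
    chart_ref: ./my-chart         # hypothetical chart path
    release_namespace: default
    atomic: true                  # roll the release back if resources fail to come up
    wait: true                    # wait until the released resources are ready
```

With atomic: true, Helm itself waits for the released resources and rolls back if they do not become ready, so a separate per-resource check loop is often unnecessary.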

Where is .Values taken from in kubernetes configuration yaml file

I can't seem to find the formal definition of .Values (taken from here)
image: {{ .Values.image.repo }}/rs-mysql-db:{{ .Values.image.version }}
From the docs, it is definitely related to helm chart:
Note that all of Helm's built-in variables begin with an uppercase letter to easily distinguish them from user-defined values: .Release.Name, .Capabilities.KubeVersion.
But in the above example (robot-shop/K8s/helm/templates) I don't see any values.yaml file - what am I missing?
It's under the helm folder:
https://github.com/instana/robot-shop/blob/master/K8s/helm/values.yaml
# Registry and repository for Docker images
# Default is docker/robotshop/image:latest
image:
  repo: robotshop
  version: latest
  pullPolicy: IfNotPresent
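With those values, the template line from the question would render along these lines (a sketch of what helm template produces):

```yaml
image: robotshop/rs-mysql-db:latest
```

.Values is simply Helm's built-in object holding the merged contents of values.yaml plus any -f files and --set overrides passed at install time.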

Helm. Execute bash script to choose proper image

Helmfile:
spec:
  containers:
    - name: {{ .Values.app.name }}
      image: {{ .Values.image.name }}   # --> execute shell script here
      imagePullPolicy: Always
      ports:
        - containerPort: 8081
      env:
        - name: BACKEND_HOST
          value: {{ .Values.backend.host }}
I want to execute a bash script to check whether this image exists. If not, another image should be used. How can I do this with helm? Or is there another solution?
Helm doesn't have any way to call out to other processes, make network connections, or do any other sort of external lookup (with one specific exception where it can read Kubernetes objects out of the cluster). You'd have to pass this value in when you run the helm install command instead:
helm install release-name ./chart-directory \
--set image.name=$(the command you want to run)
If this is getting run from part of some larger process, you may find it easier to write a JSON or YAML file that can be passed to the helm install -f option instead of dynamically calling out to the script; the helm install --set option has some unusual syntax and behavior. You can even go one step further and check that per-installation YAML file into source control, and have another step in your deployment pipeline notice the commit and actually do the installation ("GitOps" style).
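A sketch of the values-file approach; the check_image function here is hypothetical and stands in for whatever script decides which image to use:

```shell
#!/bin/sh
# Hypothetical resolver: replace with your real existence check
# (e.g. querying your registry). Helm templates themselves cannot
# shell out, so this must run before helm does.
check_image() {
  # fall back to a default image if the preferred one is unavailable
  echo "registry.example.com/app:latest"
}

IMAGE_NAME=$(check_image)

# Write a per-installation values file for helm install -f
cat > my-values.yaml <<EOF
image:
  name: ${IMAGE_NAME}
EOF

cat my-values.yaml
# Then: helm install release-name ./chart-directory -f my-values.yaml
```

Checking the generated file into source control, as suggested above, also gives you an audit trail of exactly which image each installation used.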

Running Skaffold fails if configured to work with Helm

I am trying to make Skaffold work with Helm.
Below is my skaffold.yml file:
apiVersion: skaffold/v2beta23
kind: Config
metadata:
  name: test-app
build:
  artifacts:
    - image: test.common.repositories.cloud.int/manager/k8s
      docker:
        dockerfile: Dockerfile
deploy:
  helm:
    releases:
      - name: my-release
        artifactOverrides:
          image: test.common.repositories.cloud.int/manager/k8s
        imageStrategy:
          helm: {}
Here is my values.yaml:
image:
  repository: test.common.repositories.cloud.int/manager/k8s
  tag: 1.0.0
Running the skaffold command results in:
...
Starting deploy...
Helm release my-release not installed. Installing...
Error: INSTALLATION FAILED: failed to download ""
deploying "my-release": install: exit status 1
Does anyone have an idea what is missing here?
I believe this is happening because you have not specified a chart to use for the helm release. I was able to reproduce your issue by commenting out the chartPath field in the skaffold.yaml file of the helm-deployment example in the Skaffold repo.
You can specify a local chart using the deploy.helm.release.chartPath field or a remote chart using the deploy.helm.release.remoteChart field.
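A sketch of the release section with a local chart path added; ./charts/my-app is a placeholder for wherever your chart actually lives:

```yaml
deploy:
  helm:
    releases:
      - name: my-release
        chartPath: ./charts/my-app   # hypothetical path to the local chart
        artifactOverrides:
          image: test.common.repositories.cloud.int/manager/k8s
        imageStrategy:
          helm: {}
```

Without one of these fields, Skaffold has no chart to hand to helm install, which matches the `failed to download ""` error in the output.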

How to set java environment variables in a helm chart?

What is the best practice for setting environment variables for a Java app's deployment in a helm chart, so that I can use the same chart for dev and prod environments? I have separate Kubernetes deployments for both environments.
spec:
  containers:
    env:
      - name: SYSTEM_OPTS
        value: "-Dapp1.url=http://dev.app1.xyz -Dapp2.url=http://dev.app2.abc ..."
Similarly, my prod variables would be something like
"-Dapp1.url=http://prod.app1.xyz -Dapp2.url=http://prod.app2.abc ..."
Now, how can I leverage helm to write a single chart but create separate sets of pods with different properties according to the environment, as in
helm install my-app --set env=prod ./test-chart
or
helm install my-app --set env=dev ./test-chart
The best way is to use a single deployment template with a separate values file for each environment.
This is not limited to the environment variables used by the application; the same approach applies to any environment-specific configuration.
Example:
deployment.yaml
spec:
  containers:
    env:
      - name: SYSTEM_OPTS
        value: "{{ .Values.opts }}"
values-dev.yaml
# system opts
opts: "-Dapp1.url=http://dev.app1.xyz -Dapp2.url=http://dev.app2.abc "
values-prod.yaml
# system opts
opts: "-Dapp1.url=http://prod.app1.xyz -Dapp2.url=http://prod.app2.abc "
Then specify the related value file in the helm command.
For example, deploying to the dev environment:
helm install -f values-dev.yaml my-app ./test-chart