How to keep a Helm values file up to date in a GitLab CI pipeline?

I have two different repos for the backend and the frontend. These repos have pipelines that build and push Docker images with new version tags. After pushing the new images, they trigger a deploy pipeline in a separate repo that holds a Helm values file, passing the version tag as a variable, which is then used on the command line to deploy the new version like this:
helm upgrade my-app ./my-chart -n staging -f values.yaml --set frontend.version=$FRONTEND_TAG
But the new frontend version is not saved anywhere, so when the backend repo triggers a deployment it redeploys the old frontend version that is still set in the values file.
Is there a way to keep the separate frontend and backend version values up to date in the Helm values file without updating it manually?
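The trigger from the frontend repo can be pictured roughly like this (a sketch; the project path, branch, and tag source are assumptions, not taken from the question):
# .gitlab-ci.yml in the frontend repo (sketch)
trigger-deploy:
  stage: deploy
  variables:
    FRONTEND_TAG: $CI_COMMIT_SHORT_SHA   # the tag the image was pushed with
  trigger:
    project: my-group/deploy-repo        # the repo that holds the chart and values.yaml
    branch: main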

Why don't you split the deployment into two different charts (backend and frontend)?
For frontend:
helm upgrade my-app-frontend ./my-chart-frontend -n staging -f values.yaml --set frontend.version=$FRONTEND_TAG
For backend:
helm upgrade my-app-backend ./my-chart-backend -n staging -f values.yaml --set backend.version=$BACKEND_TAG
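In the deploy repo's pipeline, that could translate into two jobs, each reacting only to the tag it was triggered with (a sketch; job names and rules are assumptions):
# .gitlab-ci.yml in the deploy repo (sketch)
deploy-frontend:
  stage: deploy
  rules:
    - if: '$FRONTEND_TAG'
  script:
    - helm upgrade my-app-frontend ./my-chart-frontend -n staging -f values.yaml --set frontend.version=$FRONTEND_TAG

deploy-backend:
  stage: deploy
  rules:
    - if: '$BACKEND_TAG'
  script:
    - helm upgrade my-app-backend ./my-chart-backend -n staging -f values.yaml --set backend.version=$BACKEND_TAG
This way each trigger only upgrades its own release, so the version pinned in the other release is never overwritten by a stale value from values.yaml.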

Related

Problem with fetching helm chart from kubernetes-charts

I have a deployment script that fetches rabbitmq from the kubernetes-charts repository:
- name: rabbitmq
  version: 0.6.17
  repository: https://kubernetes-charts.storage.googleapis.com/
It seems like this has stopped working since yesterday. Does someone know which repository URL I can use instead?
The official charts repository is deprecated. For a specific Helm chart, you should check https://artifacthub.io/.
In the case of rabbitmq, the chart is now published in the Bitnami repository. Here are the Helm commands:
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-release bitnami/rabbitmq
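If the chart is pulled in as a dependency, as in the requirements entry above, that entry can point at the Bitnami repository instead (the version below is a placeholder; pick one listed on Artifact Hub):
- name: rabbitmq
  version: 8.x.x
  repository: https://charts.bitnami.com/bitnami
Note that the Bitnami chart's values layout differs from the old stable chart, so existing overrides may need adjusting.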

How can I pass the correct parameters to Helm, using Ansible to install GitLab?

I'm writing an Ansible task to deploy GitLab in my k3s environment.
According to the doc, I need to execute this to install GitLab using Helm:
$ helm install gitlab gitlab/gitlab \
--set global.hosts.domain=DOMAIN \
--set certmanager-issuer.email=me@example.com
But the community.kubernetes.helm module doesn't handle --set parameters and only calls helm with the --values parameter.
So my Ansible task looks like this:
- name: Deploy GitLab
  community.kubernetes.helm:
    update_repo_cache: yes
    release_name: gitlab
    chart_ref: gitlab/gitlab
    release_namespace: git
    release_values:
      global.hosts.domain: example.com
      certmanager-issuer.email: info@example.com
But the Helm chart still returns the error You must provide an email to associate with your TLS certificates. Please set certmanager-issuer.email.
I've tried manually in a terminal, and it seems that the GitLab Helm chart requires --set parameters and fails with --values. But community.kubernetes.helm doesn't support --set.
What can I do?
Is there a bug on the GitLab Helm chart side?
it seems that the GitLab helm chart requires --set parameters and fail with --values
That is an erroneous assumption; what you are running into is that --set splits on . because otherwise providing fully-formed YAML on the command line would be painful.
The correct values use sub-objects wherever the . occurs:
- name: Deploy GitLab
  community.kubernetes.helm:
    update_repo_cache: yes
    release_name: gitlab
    chart_ref: gitlab/gitlab
    release_namespace: git
    release_values:
      global:
        hosts:
          # https://gitlab.com/gitlab-org/charts/gitlab/-/blob/v4.4.5/values.yaml#L47
          domain: example.com
      # https://gitlab.com/gitlab-org/charts/gitlab/-/blob/v4.4.5/values.yaml#L592-595
      certmanager-issuer:
        email: info@example.com
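For comparison, the flat mapping in the first task ends up in the values file the module passes to Helm as literal dotted top-level keys, which the chart never reads:
global.hosts.domain: example.com
certmanager-issuer.email: info@example.com
whereas --set global.hosts.domain=example.com splits on the dots and produces the same nested structure as the corrected task above.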

Helm: Conditional deployment of a dependent chart, only install if it has not been installed before

When installing a Helm chart, a condition on a dependency works well, as in the following Chart.yaml file. But it does not allow applying the condition based on an existing Kubernetes resource.
# Chart.yaml
apiVersion: v1
name: my-chart
version: 0.3.1
appVersion: 0.4.5
description: A helm chart with dependency
dependencies:
  - name: metrics-server
    version: 2.5.0
    repository: https://artifacts.myserver.com/v1/helm
    condition: metrics-server.enabled
I did a local install of the chart (my-chart) in one namespace (default); then, when I try to install the same chart in another namespace (pb), I get the following error saying the resource already exists. This resource, "system:metrics-server-aggregated-reader", was installed cluster-wide by the previous dependency (metrics-server). The steps to reproduce follow.
user@hostname$ helm install my-chart -n default --set metrics-server.enabled=true ./my-chart
NAME: my-chart
LAST DEPLOYED: Wed Nov 25 16:22:52 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
My Cluster
user@hostname$ helm install my-chart -n pb --set metrics-server.enabled=true ./my-chart
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "system:metrics-server-aggregated-reader" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "pb": current value is "default"
There is a way to modify the templates inside the metrics-server chart to conditionally generate the manifest files, as described in Helm Conditional Templates. But to do this I would have to modify and maintain the metrics-server chart in an internal artifact repository, which would keep me from using the most recent charts.
I am looking for an approach that queries the existing Kubernetes resource, "system:metrics-server-aggregated-reader", and installs the dependency chart only if that resource does not exist.
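A possible workaround, sketched here as an assumption rather than something from the thread, is to do the check outside Helm with kubectl and feed the result into the metrics-server.enabled condition that already exists:
# Enable the metrics-server dependency only when the ClusterRole is absent
if kubectl get clusterrole system:metrics-server-aggregated-reader >/dev/null 2>&1; then
  ENABLE_METRICS=false
else
  ENABLE_METRICS=true
fi
helm install my-chart -n pb --set metrics-server.enabled=$ENABLE_METRICS ./my-chart
This keeps the upstream metrics-server chart untouched; the trade-off is that the decision is made at install time by the caller rather than by Helm itself.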

kube helm charts - multiple values files

It might be a simple question, but I can't find anywhere whether it's doable: is it possible to have two different values files for a Helm chart (say stable/jenkins)?
I would like to have some values like these in values_a.yaml:
master:
  componentName: "jenkins-master"
  image: "jenkins/jenkins"
  tag: "lts"
  ...
  password: {{ .Values.secrets.masterPassword }}
and these in values_b.yaml, which will be encrypted with AWS KMS:
secrets:
  masterPassword: xxx
The above doesn't work, and I wanted to know: since you can use those variables in Kubernetes manifests like this,
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.config.name }}
  namespace: {{ .Values.config.namespace }}
...
can they somehow be passed to other values files?
EDIT:
If it were possible, I would just put
master:
  password: xxx
in values_b.yaml, but vars cannot be duplicated, and the official Helm chart expects the master.password value from that file, so I have to somehow pass it there, but in an encrypted way.
I'm not quite sure, but this feature of Helm might help you.
Helm lets you pass a custom values.yaml that takes precedence over the fields of the chart's main values.yaml when performing helm install or helm upgrade.
For Helm 3
$ helm install <name> ./mychart -f myValues.yaml
For Helm 2
$ helm install --name <name> ./mychart --values myValues.yaml
The valid answer is from David Maze, in the comments on Kamol Hasan's answer.
You can use multiple -f or --values options: helm install ... -f values_a.yaml -f values_b.yaml. But you can't use templating in any Helm values file unless the chart specifically supports it (using the tpl function).
If you use multiple -f options, later values files override earlier ones.
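Applied to the question, values_a.yaml keeps the plain settings (without the {{ ... }} templating, which plain values files don't support) and values_b.yaml, decrypted at deploy time, only carries the secret override; a sketch (the release name here is arbitrary):
# values_b.yaml (decrypted before the install)
master:
  password: xxx

$ helm install jenkins stable/jenkins -f values_a.yaml -f values_b.yaml
Because values_b.yaml is passed last, its master.password overrides whatever values_a.yaml or the chart default sets.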

Should jx step helm apply create/produce a helm release

I'm struggling with jx, Kubernetes, and Helm. I run a Jenkinsfile on jx, executing these commands in the env directory:
sh 'jx step helm build'
sh 'jx step helm apply'
It finishes successfully and deploys pods, creates deployments, etc.; however, helm list is empty.
When I execute something like helm install ... or helm upgrade --install ..., it creates a release and helm list shows it.
Is it correct behavior?
More details:
EKS installed with:
eksctl create cluster --region eu-west-2 --name integration --version 1.12 \
--nodegroup-name integration-nodes \
--node-type t3.large \
--nodes 3 \
--nodes-min 1 \
--nodes-max 10 \
--node-ami auto \
--full-ecr-access \
--vpc-cidr "172.20.0.0/16"
Then I set up ingresses (external & internal) with some kubectl apply commands (won't share the files). Then I set up routes and VPC-related stuff.
JX installed with:
jx install --provider=eks --ingress-namespace='internal-ingress-nginx' \
--ingress-class='internal-nginx' \
--ingress-deployment='nginx-internal-ingress-controller' \
--ingress-service='internal-ingress-nginx' --on-premise \
--external-ip='#########' \
--git-api-token=######### \
--git-username=######### --no-default-environments=true
Details from the installation:
? Select Jenkins installation type: Static Jenkins Server and Jenkinsfiles
? Would you like wait and resolve this address to an IP address and use it for the domain? No
? Domain ###########
? Cloud Provider eks
? Would you like to register a wildcard DNS ALIAS to point at this ELB address? Yes
? Your custom DNS name: ###########
? Would you like to enable Long Term Storage? A bucket for provider eks will be created No
? local Git user for GitHub server: ###########
? Do you wish to use GitHub as the pipelines Git server: Yes
? A local Jenkins X versions repository already exists, pull the latest? Yes
? A local Jenkins X cloud environments repository already exists, recreate with latest? Yes
? Pick default workload build pack: Kubernetes Workloads: Automated CI+CD with GitOps Promotion
Then I set up helm:
kubectl apply -f tiller-rbac-config.yaml
helm init --service-account tiller
where tiller-rbac-config.yaml is:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
helm version says:
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
jx version says:
NAME                  VERSION
jx                    2.0.258
jenkins x platform    2.0.330
Kubernetes cluster    v1.12.6-eks-d69f1b
helm client           Client: v2.13.1+g618447c
git                   git version 2.17.1
Operating System      Ubuntu 18.04.2 LTS
Applications were imported this way:
jx import --branches="devel" --org ##### --disable-updatebot=true --git-api-token=##### --url git@github.com:#####.git
And environment was created this way:
jx create env --git-url=##### --name=integration --label=Integration --domain=##### --namespace=jx-integration --promotion=Auto --git-username=##### --git-private --branches="master|devel|test"
Going through the changelog, it seems that tillerless mode has been the default since version 2.0.246.
In Helm v2, Helm relies on its server-side component called Tiller. The Jenkins X tillerless mode means that instead of using Helm to install charts, the Helm client is only used for templating and generating the Kubernetes manifests, which are then applied using kubectl rather than helm/tiller.
The consequence is that Helm won't know about these installations/releases, because they were made by kubectl. That's why you won't get the list of releases with Helm. This is the expected behavior, as you can read in the Jenkins X docs.
What --no-tiller means is to switch helm to use template mode, which means we no longer internally use helm install mychart to install a chart; we actually use helm template mychart instead, which generates the YAML using the same helm charts and the standard helm configuration management via --set and values.yaml files.
Then we use kubectl apply to apply the YAML.
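In Helm 2 terms, what jx does is roughly equivalent to the following (a sketch with placeholder chart, release, and namespace names, not the exact jx invocation):
# Render the chart to plain manifests, then apply them without Tiller
$ helm template ./my-chart --name my-app --namespace jx-staging -f values.yaml > manifests.yaml
$ kubectl apply -f manifests.yaml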
As mentioned by James Strachan in the comments, when using tillerless mode you can view your deployments using jx step helm list.