How to Deploy a Custom Docker Image Using the Bitnami PostgreSQL Helm Chart

I'm attempting to use the Bitnami Helm chart for PostgreSQL to spin up a custom Docker image that I create (I take the Bitnami PostgreSQL Docker image, pre-populate it with my data, then build/tag it and push it to my own Docker registry).
When I attempt to start up the chart with my own image coordinates, the service spins up, but no pods are present. I'm trying to figure out why.
I've tried running helm install with the --debug option, and I notice that with my configuration below only 4 resources get created (client.go:128: [debug] creating 4 resource(s)), versus 5 resources if I spin up the default Docker image specified in the PostgreSQL Helm chart. Presumably the missing resource is my pod, but I'm not sure why it's missing or how to fix it.
My Chart.yaml:
apiVersion: v2
name: test-db
description: A Helm chart for Kubernetes
type: application
version: "1.0-SNAPSHOT"
appVersion: "1.0-SNAPSHOT"
dependencies:
  - name: postgresql
    version: 11.9.13
    repository: https://charts.bitnami.com/bitnami
    condition: postgresql.enabled
My values.yaml:
postgresql:
  enabled: true
  image:
    registry: myregistry.com
    repository: test-db
    tag: "1.0-SNAPSHOT"
    pullPolicy: always
    pullSecrets:
      - my-reg-secret
  service:
    type: ClusterIP
  nameOverride: test-db
I'm starting this all up by running
helm dep up
helm install mydb .
When I start up a Helm chart (helm install mychart .), is there a way to see what Helm/kubectl is doing, beyond just passing the --debug flag? I'm trying to figure out why it's not spinning up the pod.

The issue was on this line in my values.yaml:
pullPolicy: always
The pullPolicy is case sensitive. Changing the value to Always (note capital A) fixed the issue.
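For clarity, the corrected section of values.yaml (Kubernetes only accepts Always, IfNotPresent, and Never for this field):
postgresql:
  image:
    pullPolicy: Always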
I'll add that this error wasn't immediately obvious, and there was no indication in the output from running the Helm command that this was the issue.
I was able to discover the problem by looking at how the statefulset got deployed (I use k9s to navigate Kubernetes resources): it showed 0/0 for the number of pods that were deployed. Describing this statefulset, I was able to see the error in capitalization in the startup logs.
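If you don't use k9s, the same diagnosis works with plain kubectl; the statefulset name below is an assumption derived from the release name (mydb) and nameOverride (test-db) in my setup:
# list statefulsets and check the READY column (0/0 was the clue here)
kubectl get statefulsets
# describe the statefulset; the invalid pullPolicy shows up in its events
kubectl describe statefulset mydb-test-db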

Related

How to deploy Prometheus using the Prometheus operator?

I'm trying to deploy Prometheus using the Prometheus operator. I have used the documentation and Helm charts from https://github.com/prometheus-operator/prometheus-operator.
Since I need the charts for future reference, rather than installing the charts directly from the repository, I made a Chart.yaml file and added the repository as a dependency.
apiVersion: v2
description: kube-prometheus-stack collects Kubernetes manifests, Grafana dashboards, and Prometheus rules combined with documentation and scripts to provide easy to operate end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.
icon: https://raw.githubusercontent.com/prometheus/prometheus.github.io/master/assets/prometheus_logo-cb55bb5c346.png
engine: gotpl
type: application
maintainers:
  - name:
    email:
name: kube-prometheus-stack
sources:
  - https://github.com/prometheus-community/helm-charts
  - https://github.com/prometheus-operator/kube-prometheus
version: 32.2.1
appVersion: 0.54.0
kubeVersion: ">=1.16.0-0"
home: https://github.com/prometheus-operator/kube-prometheus
keywords:
  - operator
  - prometheus
  - kube-prometheus
annotations:
  "artifacthub.io/operator": "true"
  "artifacthub.io/links": |
    - name: Chart Source
      url: https://github.com/prometheus-community/helm-charts
    - name: Upstream Project
      url: https://github.com/prometheus-operator/kube-prometheus
dependencies:
  - name: kube-state-metrics
    version: "4.4.*"
    repository: https://prometheus-community.github.io/helm-charts
    condition: kubeStateMetrics.enabled
  - name: prometheus-node-exporter
    version: "2.5.*"
    repository: https://prometheus-community.github.io/helm-charts
    condition: nodeExporter.enabled
  - name: grafana
    version: "6.21.*"
    repository: https://grafana.github.io/helm-charts
    condition: grafana.enabled
Then I execute the following commands:
helm dependency update
helm install <chartname> .
Everything works fine, but when I check the pods, only the operator pod is created and running, along with the other services and Grafana.
Is this the default behavior of the Prometheus operator?
I thought it might be the default behavior of Prometheus, so I tried to deploy redis-cluster using the redis-cluster operator and also rabbitmq-cluster with the rabbitmq-cluster operator, but each one creates only the operator pod and not the cluster pods.
An operator pod acts as a controller that listens to events regarding specific custom resources. If you only deploy the operator, you have to separately deploy the custom resource you wish to be created.
With the prometheus-operator, that would be a custom resource of kind "Prometheus". Whether the Helm chart you chose can also deploy this should be indicated
in the chart's values.yaml and documented on its GitHub page.
You can also use the examples from the prometheus-operator repo to create Prometheus instances. Check out these files to do so: https://github.com/prometheus-operator/prometheus-operator/tree/main/example/rbac/prometheus
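For illustration, a minimal Prometheus custom resource along the lines of those upstream examples might look like this (the resource name and serviceAccountName are assumptions; the service account needs the RBAC from the linked example):
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceAccountName: prometheus  # assumed; must carry the RBAC permissions from the example
  serviceMonitorSelector: {}      # empty selector matches all ServiceMonitors
  resources:
    requests:
      memory: 400Mi
Once such a resource exists, the operator creates the actual Prometheus statefulset and its pods.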

IBM MQ Kubernetes Helm chart ImagePullBackOff

I want to deploy IBM MQ to Kubernetes (Rancher) using helmfile. I found this guide and did everything as described in it: https://artifacthub.io/packages/helm/ibm-charts/ibm-mqadvanced-server-dev.
But the pod is not starting, with the error "ImagePullBackOff". What could be the problem? My helmfile:
...
repositories:
  - name: ibm-stable-charts
    url: https://raw.githubusercontent.com/IBM/charts/master/repo/stable
releases:
  - name: ibm-mq
    namespace: test
    createNamespace: true
    chart: ibm-stable-charts/ibm-mqadvanced-server-dev
    values:
      - ./ibm-mq.yaml
ibm-mq.yaml:
---
license: accept
security:
  initVolumeAsRoot: true/false  # I'm not sure about this, I added it just because it wasn't working.
                                # Both of the options don't work either.
queueManager:
  name: "QM1"
  dev:
    secret:
      adminPasswordKey: adminPassword
      name: mysecret
The full error I'm getting:
Failed to pull image "ibmcom/mq:9.1.5.0-r1": rpc error: code = Unknown desc = Error response from daemon: manifest for ibmcom/mq:9.1.5.0-r1 not found: manifest unknown: manifest unknown
I'm using Helm 3, helmfile v0.141.0, and kubectl 1.22.2.
I will leave some things as an exercise to you, but here is what that tutorial says:
helm repo add ibm-stable-charts https://raw.githubusercontent.com/IBM/charts/master/repo/stable
You don't really need to do this, since you are using helmfile.
Then they say to issue:
helm install --name foo \
  ibm-stable-charts/ibm-mqadvanced-server-dev \
  --set license=accept \
  --set queueManager.dev.secret.name=mysecret \
  --set queueManager.dev.secret.adminPasswordKey=adminPassword \
  --tls
which targets Helm 2 (hence the --name and --tls flags), but that is irrelevant to the problem.
When I install this, I get the same issue:
Failed to pull image "ibmcom/mq:9.1.5.0-r1": rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/ibmcom/mq:9.1.5.0-r1": failed to resolve reference "docker.io/ibmcom/mq:9.1.5.0-r1": docker.io/ibmcom/mq:9.1.5.0-r1: not found
I went to their docker.io page, and indeed the tag 9.1.5.0-r1 is not present.
OK, can we update the image then?
helm show values ibm-stable-charts/ibm-mqadvanced-server-dev
reveals:
image:
  # repository is the container repository to use, which must contain IBM MQ Advanced for Developers
  repository: ibmcom/mq
  # tag is the tag to use for the container repository
  tag: 9.1.5.0-r1
Good, that means we can change it via an override value:
helm install foo \
  ibm-stable-charts/ibm-mqadvanced-server-dev \
  --set license=accept \
  --set queueManager.dev.secret.name=mysecret \
  --set queueManager.dev.secret.adminPasswordKey=adminPassword \
  --set image.tag=latest  # or any other tag that exists
So this works.
How to set up that tag in helmfile is left as an exercise, but it's pretty trivial.
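For reference, the override would go into the ibm-mq.yaml values file from the question; a minimal sketch (the tag value here is an assumption, use any tag that actually exists for ibmcom/mq):
license: accept
image:
  tag: latest  # assumed tag; pick one that exists on Docker Hub
queueManager:
  name: "QM1"
  dev:
    secret:
      adminPasswordKey: adminPassword
      name: mysecret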

GitOps (Flux) install of standard Jenkins Helm chart in Kubernetes via HelmRelease operator

I've just started working with Weaveworks' Flux GitOps system in Kubernetes. I have regular deployments (deployments, services, volumes, etc.) working fine. I'm trying for the first time to deploy a Helm chart.
I've followed the instructions in this tutorial: https://github.com/fluxcd/helm-operator-get-started and have its sample service working after making a few small changes. So I believe that I have all the right tooling in place, including the custom HelmRelease K8s operator.
I want to deploy Jenkins via Helm, which if I do manually is as simple as this Helm command:
helm install --set persistence.existingClaim=jenkins --set master.serviceType=LoadBalancer jenkins stable/jenkins
I want to convert this to a HelmRelease object in my Flux-managed GitHub repo. Here's what I've got, per what documentation I can find:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: jenkins
  namespace: jenkins
  updating-applications/
  fluxcd.io/ignore: "false"
spec:
  releaseName: jenkins
  chart:
    git: https://github.com/helm/charts/tree/master
    path: stable/jenkins
    ref: master
  values:
    persistence:
      existingClaim: jenkins
    master:
      serviceType: LoadBalancer
I have this in the file 'jenkins/jenkins.yaml' from the root of the location in my git repo that Flux is monitoring. Adding this file does nothing: I get no new K8s objects, no HelmRelease object, and no new Helm release when I run "helm list -n jenkins".
I see some mention of having to have 'image' tags in my 'values' section, but since I don't need to specify any images in my manual call to Helm, I'm not sure what I would add in terms of 'image' tags. I've also seen examples of HelmRelease definitions that don't have 'image' tags, so it seems that they aren't absolutely necessary.
I've played around with adding a few annotations to my 'metadata' section:
annotations:
  # fluxcd.io/automated: "true"
  # per: https://blog.baeke.info/2019/10/10/gitops-with-weaveworks-flux-installing-and-updating-applications/
  fluxcd.io/ignore: "false"
But none of that has helped to get things rolling. Can anyone tell me what I have to do to get the equivalent of the simple Helm command I gave at the top of this post to work with Flux/GitOps?
Have you tried checking the logs on the fluxd and flux-helm-operator pods? I would start there to see what error message you're getting. One thing that I'm seeing is that you're using HTTPS for git. You may want to double-check, but I don't recall ever seeing documentation that configures chart pulls via git to use anything other than SSH. Moreover, I'd recommend just pulling that chart from the stable Helm repository anyhow:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: jenkins
  namespace: jenkins
  annotations: # not sure what updating-applications/ was?
    fluxcd.io/ignore: "false" # pretty sure this is false by default and can be omitted
spec:
  releaseName: jenkins
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com/
    name: jenkins
    version: 1.9.16
  values:
    persistence:
      existingClaim: jenkins
    master:
      serviceType: LoadBalancer
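To check the operator logs mentioned above, something like this should work (the flux namespace and deployment names are assumptions based on the defaults from the helm-operator-get-started tutorial):
# the Flux daemon that syncs the git repo
kubectl -n flux logs deployment/flux
# the operator that turns HelmRelease objects into Helm releases
kubectl -n flux logs deployment/flux-helm-operator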

helm deploy with no objects

I'm building a very simple chart with Helm.
It consists of a single object ("/templates/pod.yaml") that should be deployed only if a parameter in the Values.yaml file is true.
To provide an example of my case, this is what I have:
/templates/pod.yaml
{{- if eq .Values.shoudBeDeployed true }}
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
{{- end }}
Values.yaml
shoudBeDeployed: true
So when shoudBeDeployed has the value true, Helm installs it correctly.
My problem is that when shoudBeDeployed is false, Helm doesn't deploy anything (as I expected), but it shows the following message:
Error: release CHART_NAME failed: no objects visited
And if I execute helm ls I get that CHART_NAME is deployed with STATUS FAILED.
My question is whether there is a way to avoid having it count as a failed Helm deploy, so that it doesn't show up when I run helm ls.
I know that I could move the logic of the shoudBeDeployed variable outside the chart and then deploy the chart or not depending on its value, but I would like to know if there is a solution using just Helm.
@pcampana I think there is no way to stop a Helm deployment if there is nothing to deploy, but here is a trick you can use to delete a Helm release if it is FAILED.
helm install --name temp demo --atomic
where demo is the Helm chart directory and temp is the release name.
The release name is mandatory for this to work.
One scenario is when you see the error
Error: release temp failed: no objects visited
you can then use the above command to deploy the chart.
I think this might be useful for you.
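Note that --name is Helm 2 syntax; on Helm 3 the release name is positional, so the equivalent would be:
helm install temp demo --atomic
With --atomic, a failed install is rolled back and the release deleted automatically, so the FAILED entry never lingers in helm ls.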

How to set a different namespace for child helm charts?

When you install a chart with a child chart that doesn't specify a namespace, Helm will use the one specified on the command line via --namespace. Is it possible to override this flag for a specific child chart?
For example, if I have chart A which depends on chart B and I specify --namespace foo, I want to be able to customize the resources of chart B to be installed into some other namespace bar instead of foo.
Update 2:
Helm 3 added support for multi-namespace releases: https://github.com/helm/helm/issues/2060
Update 1:
If a resource template specifies a metadata.namespace, then it will be installed in that namespace. For example, if I have a pod with metadata.namespace: x and I run helm install mychart --namespace y, that pod will be installed in x. I guess you could use regular Helm templates to parameterize the namespace.
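Sketched out, such a parameterized namespace in a child-chart resource template might look like this (the namespaceOverride value name is an assumption, not a chart convention):
metadata:
  name: some-resource
  # fall back to the release namespace when no override is given
  namespace: {{ .Values.namespaceOverride | default .Release.Namespace }}
Installing with --set b.namespaceOverride=bar would then place chart B's resources in bar regardless of the --namespace flag.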
Original answer:
We do not plan on fully supporting multi-namespaced releases until Helm 3.0
https://github.com/kubernetes/helm/issues/2060#issuecomment-306847365
As a workaround, you can install for each namespace individually, using --skip-dependencies or dependency conditions, as sketched below.
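A rough sketch of the dependency-condition variant (the chart names and the b.enabled condition are assumptions, and chart B is assumed to also be available as a standalone chart):
# install chart A into foo with child chart B disabled via its condition
helm install a ./chart-a --namespace foo --set b.enabled=false
# install chart B on its own into bar
helm install b ./chart-b --namespace bar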
If you already have different charts then you can use helmfile to achieve this.
Step 1: create the following folder structure:
my-awesome-infrastructure/
  helm
  helmfile
  helmfile.yaml
Where helm and helmfile are the binary executables.
Step 2: install the helm-diff plugin, which is needed by helmfile.
helm plugin install https://github.com/databus23/helm-diff
Step 3: declare your charts in the helmfile.yaml.
helmBinary: ./helm
repositories:
  - name: ingress-nginx
    url: https://kubernetes.github.io/ingress-nginx
  - name: bitnami
    url: https://charts.bitnami.com/bitnami
releases:
  - name: nginx-ingress
    namespace: nginx-ingress
    createNamespace: true
    chart: ingress-nginx/ingress-nginx
    version: ~4.1.0
  - name: jupyterhub
    namespace: jupyterhub
    createNamespace: true
    chart: bitnami/jupyterhub
    version: ~1.1.12
  - name: metrics-server
    namespace: metrics-server
    createNamespace: true
    chart: bitnami/metrics-server
    version: ~5.11.9
Step 4: run helmfile to deploy all charts.
./helmfile apply
In the above example, you are deploying three separate charts to three separate namespaces.
Under the covers, helmfile will run helm install separately and create separate releases.
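You can then confirm that each release landed in its own namespace:
helm list --all-namespaces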