IBM MQ Kubernetes Helm chart ImagePullBackOff

I want to deploy IBM-MQ to Kubernetes (Rancher) using helmfile. I've found this link and did everything as described in the guide: https://artifacthub.io/packages/helm/ibm-charts/ibm-mqadvanced-server-dev.
But the pod is not starting with the error: "ImagePullBackOff". What could be the problem? My helmfile:
...
repositories:
  - name: ibm-stable-charts
    url: https://raw.githubusercontent.com/IBM/charts/master/repo/stable

releases:
  - name: ibm-mq
    namespace: test
    createNamespace: true
    chart: ibm-stable-charts/ibm-mqadvanced-server-dev
    values:
      - ./ibm-mq.yaml
ibm-mq.yaml:
---
license: accept
security:
  initVolumeAsRoot: true/false # I'm not sure about this; I added it just because it wasn't working.
                               # Neither true nor false works, either.
queueManager:
  name: "QM1"
  dev:
    secret:
      adminPasswordKey: adminPassword
      name: mysecret
I've created the secret and seems like it's working, so the problem is not in the secret.
The full error I'm getting:
Failed to pull image "ibmcom/mq:9.1.5.0-r1": rpc error: code = Unknown desc = Error response from daemon: manifest for ibmcom/mq:9.1.5.0-r1 not found: manifest unknown: manifest unknown
I'm using Helm 3, helmfile v0.141.0, and kubectl 1.22.2.

I will leave some things as an exercise for you, but here is what that tutorial says:
helm repo add ibm-stable-charts https://raw.githubusercontent.com/IBM/charts/master/repo/stable
You don't really need to do this, since you are using helmfile.
Then they say to issue:
helm install --name foo \
  ibm-stable-charts/ibm-mqadvanced-server-dev \
  --set license=accept \
  --set queueManager.dev.secret.name=mysecret \
  --set queueManager.dev.secret.adminPasswordKey=adminPassword \
  --tls
which is targeted towards helm2 (because of those --name and --tls), but that is irrelevant to the problem.
When I install this, I get the same issue:
Failed to pull image "ibmcom/mq:9.1.5.0-r1": rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/ibmcom/mq:9.1.5.0-r1": failed to resolve reference "docker.io/ibmcom/mq:9.1.5.0-r1": docker.io/ibmcom/mq:9.1.5.0-r1: not found
I went to their docker.io page, and indeed the tag 9.1.5.0-r1 is not present.
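If you'd rather verify from the command line than the web UI, a tool like skopeo can list the available tags (assuming you have skopeo installed; any registry-browsing tool works):

skopeo list-tags docker://docker.io/ibmcom/mq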
OK, can we update the image then?
helm show values ibm-stable-charts/ibm-mqadvanced-server-dev
reveals:
image:
  # repository is the container repository to use, which must contain IBM MQ Advanced for Developers
  repository: ibmcom/mq
  # tag is the tag to use for the container repository
  tag: 9.1.5.0-r1
good, that means we can change it via an override value:
helm install foo \
  ibm-stable-charts/ibm-mqadvanced-server-dev \
  --set license=accept \
  --set queueManager.dev.secret.name=mysecret \
  --set queueManager.dev.secret.adminPasswordKey=adminPassword \
  --set image.tag=latest # or any other tag
so this works.
How to set up that tag in helmfile is left as an exercise for you, but it's pretty trivial.
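For reference, a minimal sketch of what that override could look like in the ibm-mq.yaml values file from the question, mirroring the image block that helm show values reveals:

image:
  tag: latest # or any tag that actually exists for docker.io/ibmcom/mq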

Related

How to Deploy a Custom Docker Image using Bitnami Postgresql Helm Chart

I'm attempting to use a Bitnami Helm chart for Postgresql to spin up a custom Docker image that I create (I take the Bitnami Postgres Docker image and pre-populate it with my data, and then build/tag it and push it to my own Docker registry).
When I attempt to start up the chart with my own image coordinates, the service spins up, but no pods are present. I'm trying to figure out why.
I've tried running helm install with the --debug option and I notice that if I run my configuration below, only 4 resources get created (client.go:128: [debug] creating 4 resource(s)), vs 5 resources if I try to spin up the default Docker image specified in the Postgres Helm chart. Presumably the missing resource is my pod. But I'm not sure why or how to fix this.
My Chart.yaml:
apiVersion: v2
name: test-db
description: A Helm chart for Kubernetes
type: application
version: "1.0-SNAPSHOT"
appVersion: "1.0-SNAPSHOT"
dependencies:
  - name: postgresql
    version: 11.9.13
    repository: https://charts.bitnami.com/bitnami
    condition: postgresql.enabled
My values.yaml:
postgresql:
  enabled: true
  image:
    registry: myregistry.com
    repository: test-db
    tag: "1.0-SNAPSHOT"
    pullPolicy: always
    pullSecrets:
      - my-reg-secret
  service:
    type: ClusterIP
  nameOverride: test-db
I'm starting this all up by running
helm dep up
helm install mydb .
When I start up a Helm chart (helm install mychart .), is there a way to see what Helm/Kubectl is doing, beyond just passing the --debug flag? I'm trying to figure out why it's not spinning up the pod.
The issue was on this line in my values.yaml:
pullPolicy: always
The pullPolicy is case sensitive. Changing the value to Always (note capital A) fixed the issue.
I'll add that this error wasn't immediately obvious, and there was no indication in the output from running the Helm command that this was the issue.
I was able to discover the problem by looking at how the statefulset got deployed (I use k9s to navigate Kubernetes resources): it showed 0/0 for the number of pods that were deployed. Describing this statefulset, I was able to see the error in capitalization in the startup logs.
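If you don't have k9s handy, plain kubectl surfaces the same information (standard commands; run them in the namespace you installed into):

kubectl get statefulsets
kubectl describe statefulset
kubectl get events --sort-by=.metadata.creationTimestamp

The Events section of the describe output is usually where this kind of validation error shows up.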

How can I pass the correct parameters to Helm, using Ansible to install GitLab?

I'm writing an Ansible task to deploy GitLab in my k3s environment.
According to the doc, I need to execute this to install GitLab using Helm:
$ helm install gitlab gitlab/gitlab \
  --set global.hosts.domain=DOMAIN \
  --set certmanager-issuer.email=me@example.com
But the community.kubernetes.helm module doesn't handle --set parameters and only calls helm with the --values parameter.
So my Ansible task looks like this:
- name: Deploy GitLab
  community.kubernetes.helm:
    update_repo_cache: yes
    release_name: gitlab
    chart_ref: gitlab/gitlab
    release_namespace: git
    release_values:
      global.hosts.domain: example.com
      certmanager-issuer.email: info@example.com
But the Helm chart still returns the error You must provide an email to associate with your TLS certificates. Please set certmanager-issuer.email.
I've tried manually in a terminal, and it seems that the GitLab Helm chart requires --set parameters and fails with --values. But community.kubernetes.helm doesn't support --set.
What can I do?
Is there a bug on the GitLab Helm chart side?
it seems that the GitLab Helm chart requires --set parameters and fails with --values
That is an erroneous assumption; what you are running into is that --set splits on . because otherwise providing fully formed YAML on the command line would be painful.
The correct values use sub-objects wherever the . occurs:
- name: Deploy GitLab
  community.kubernetes.helm:
    update_repo_cache: yes
    release_name: gitlab
    chart_ref: gitlab/gitlab
    release_namespace: git
    release_values:
      global:
        hosts:
          # https://gitlab.com/gitlab-org/charts/gitlab/-/blob/v4.4.5/values.yaml#L47
          domain: example.com
      # https://gitlab.com/gitlab-org/charts/gitlab/-/blob/v4.4.5/values.yaml#L592-595
      certmanager-issuer:
        email: info@example.com
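To confirm what the chart actually received once the task has run, standard Helm can read the computed values back (nothing here is specific to the Ansible module):

helm get values gitlab -n git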

Helm: Conditional deployment of a dependent chart, only install if it has not been installed before

While installing a Helm chart, a condition for the dependency works well, as in the following Chart.yaml file. But it doesn't allow applying the condition based on an existing Kubernetes resource.
# Chart.yaml
apiVersion: v1
name: my-chart
version: 0.3.1
appVersion: 0.4.5
description: A helm chart with dependency
dependencies:
  - name: metrics-server
    version: 2.5.0
    repository: https://artifacts.myserver.com/v1/helm
    condition: metrics-server.enabled
I did a local install of the chart (my-chart) in one namespace (default); then, when I try to install the same chart in another namespace (pb), I get the following error, which says the resource already exists. This resource, "system:metrics-server-aggregated-reader", was installed cluster-wide by the previous dependency (metrics-server). The steps to reproduce follow.
user@hostname$ helm install my-chart -n default --set metrics-server.enabled=true ./my-chart
NAME: my-chart
LAST DEPLOYED: Wed Nov 25 16:22:52 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
My Cluster
user@hostname$ helm install my-chart -n pb --set metrics-server.enabled=true ./my-chart
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "system:metrics-server-aggregated-reader" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "pb": current value is "default"
There is a way to modify the template inside the metrics-server chart to conditionally generate the manifest files, as described in Helm Conditional Templates. In order to do this, I would have to modify and maintain the metrics-server chart in an internal artifact repository, which would keep me from using the most recent charts.
I am looking for an approach to query the existing Kubernetes resource, "system:metrics-server-aggregated-reader", and only install the dependency chart if such a resource does not exist.
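For illustration, the template-level guard mentioned above could look roughly like this in a forked metrics-server chart, using Helm 3's lookup function (a sketch only; note that lookup returns an empty result during helm template and --dry-run, so the guard only takes effect on a real install):

{{- /* only render the ClusterRole if it is not already present in the cluster */ -}}
{{- if not (lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" "system:metrics-server-aggregated-reader") }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server-aggregated-reader
# ... existing ClusterRole body from the chart ...
{{- end }}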

GitOps (Flux) install of standard Jenkins Helm chart in Kubernetes via HelmRelease operator

I've just started working with Weaveworks' Flux GitOps system in Kubernetes. I have regular deployments (deployments, services, volumes, etc.) working fine. I'm trying for the first time to deploy a Helm chart.
I've followed the instructions in this tutorial: https://github.com/fluxcd/helm-operator-get-started and have its sample service working after making a few small changes. So I believe that I have all the right tooling in place, including the custom HelmRelease K8s operator.
I want to deploy Jenkins via Helm, which if I do manually is as simple as this Helm command:
helm install --set persistence.existingClaim=jenkins --set master.serviceType=LoadBalancer jenkins stable/jenkins
I want to convert this to a HelmRelease object in my Flux-managed GitHub repo. Here's what I've got, per what documentation I can find:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: jenkins
  namespace: jenkins
  annotations:
    # per: https://blog.baeke.info/2019/10/10/gitops-with-weaveworks-flux-installing-and-updating-applications/
    fluxcd.io/ignore: "false"
spec:
  releaseName: jenkins
  chart:
    git: https://github.com/helm/charts/tree/master
    path: stable/jenkins
    ref: master
  values:
    persistence:
      existingClaim: jenkins
    master:
      serviceType: LoadBalancer
I have this in the file 'jenkins/jenkins.yaml' from the root of the location in my git repo that Flux is monitoring. Adding this file does nothing... I get no new K8s objects, no HelmRelease object, and no new Helm release when I run "helm list -n jenkins".
I see some mention of having to have 'image' tags in my 'values' section, but since I don't need to specify any images in my manual call to Helm, I'm not sure what I would add in terms of 'image' tags. I've also seen examples of HelmRelease definitions that don't have 'image' tags, so it seems that they aren't absolutely necessary.
I've played around with adding a few annotations to my 'metadata' section:
annotations:
  # fluxcd.io/automated: "true"
  # per: https://blog.baeke.info/2019/10/10/gitops-with-weaveworks-flux-installing-and-updating-applications/
  fluxcd.io/ignore: "false"
But none of that has helped to get things rolling. Can anyone tell me what I have to do to get the equivalent of the simple Helm command I gave at the top of this post to work with Flux/GitOps?
Have you tried checking the logs on the fluxd and flux-helm-operator pods? I would start there to see what error message you're getting. One thing that I'm seeing is that you're using HTTPS for git. You may want to double-check, but I don't recall ever seeing any documentation configuring chart pulls via git to use anything other than SSH. Moreover, I'd recommend just pulling that chart from the stable Helm repository anyhow:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: jenkins
  namespace: jenkins
  annotations: # not sure what updating-applications/ was?
    fluxcd.io/ignore: "false" # pretty sure this is false by default and can be omitted
spec:
  releaseName: jenkins
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com/
    name: jenkins
    version: 1.9.16
  values:
    persistence:
      existingClaim: jenkins
    master:
      serviceType: LoadBalancer
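For the log check suggested above, something along these lines is the usual starting point (the flux namespace and deployment names below match the helm-operator-get-started tutorial's defaults; adjust to your install):

kubectl logs -n flux deployment/flux
kubectl logs -n flux deployment/flux-helm-operator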

Helm to install Fluentd-Cloudwatch on Amazon EKS

While trying to install "incubator/fluentd-cloudwatch" using Helm on Amazon EKS, and setting the user to root, I am getting the response below.
Command used :
helm install --name fluentd incubator/fluentd-cloudwatch --set awsRegion=eu-west-1,rbac.create=true --set extraVars[0]="{ name: FLUENT_UID, value: '0' }"
Error:
Error: YAML parse error on fluentd-cloudwatch/templates/daemonset.yaml: error converting YAML to JSON: yaml: line 38: did not find expected ',' or ']'
If we do not set the user to root, then by default fluentd runs as the "fluent" user and its log shows:
[error]: unexpected error error_class=Errno::EACCES error=#<Errno::EACCES: Permission denied @ rb_sysopen - /var/log/fluentd-containers.log.pos>
Based on this, it looks like Helm is trying to convert eu-west-1,rbac.create=true into a single field, and the extra comma (,) there is causing it to fail.
And if you look at the values.yaml you'll see the right separate options are awsRegion and rbac.create, so --set awsRegion=eu-west-1 --set rbac.create=true should fix the first error.
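Put together, the install command from the question would become (the same flags, just split into separate --set options):

helm install --name fluentd incubator/fluentd-cloudwatch \
  --set awsRegion=eu-west-1 \
  --set rbac.create=true \
  --set extraVars[0]="{ name: FLUENT_UID, value: '0' }"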
With respect to the /var/log/... Permission denied error, you can see here that it's mounted as a hostPath, so if you do a:
# 666 means read/write for user/group/world
$ sudo chmod 666 /var/log
on all your nodes, the error should go away. Note that you need to do it on all the nodes because your pod can land anywhere in your cluster.
Download and update values.yaml as below. The changes are in the awsRegion, rbac.create, and extraVars fields.
annotations: {}

awsRegion: us-east-1
awsRole:
awsAccessKeyId:
awsSecretAccessKey:
logGroupName: kubernetes

rbac:
  ## If true, create and use RBAC resources
  create: true
  ## Ignored if rbac.create is true
  serviceAccountName: default

# Add extra environment variables if specified (must be specified as a single line object and be quoted)
extraVars:
  - "{ name: FLUENT_UID, value: '0' }"
Then run the command below to set up fluentd on the Kubernetes cluster to send logs to CloudWatch Logs.
$ helm install --name fluentd -f .\fluentd-cloudwatch-values.yaml incubator/fluentd-cloudwatch
I did this and it worked for me. Logs were sent to CloudWatch Logs. Also make sure your EC2 nodes have an IAM role with appropriate permissions for CloudWatch Logs.
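For reference, a minimal node-role policy sketch that is typically sufficient for this setup (the exact actions and resource scoping your environment needs may differ):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}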