Helm chart ignoring config file or given key value - kubernetes

I am not sure if the issue is related to promtail (helm chart used) or to helm itself.
I want to update the default Loki push URL in the promtail chart to a cluster-local host on Kubernetes, so I tried this:
helm upgrade --install --namespace loki promtail grafana/promtail --set client.url=http://loki:3100/loki/api/v1/push
And with a custom values.yaml like this:
helm upgrade --install --namespace loki promtail grafana/promtail -f promtail.yaml
But it still uses wrong default url:
level=warn ts=2021-10-08T11:51:59.782636939Z caller=client.go:344 component=client host=loki-gateway msg="error sending batch, will retry" status=-1 error="Post \"http://loki-gateway/loki/api/v1/push\": dial tcp: lookup loki-gateway on 10.43.0.10:53: no such host"
If I inspect the config file it's using, it doesn't contain the internal URL I gave during the installation:
root@promtail-69hwg:/# cat /etc/promtail/promtail.yaml
server:
  log_level: info
  http_listen_port: 3101
client:
  url: http://loki-gateway/loki/api/v1/push
Any ideas, or is there anything I am missing?
Thanks

I don't think client.url is a value in the helm chart, but rather one inside a config file that your application is using.
Try setting config.lokiAddress:
config:
  lokiAddress: http://loki:3100/loki/api/v1/push
It gets templated into the config file I mentioned.
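Equivalently, the same key can be set on the command line, reusing the cluster-local URL from your first attempt (assuming your chart version exposes config.lokiAddress; helm show values grafana/promtail will confirm):

helm upgrade --install --namespace loki promtail grafana/promtail \
  --set config.lokiAddress=http://loki:3100/loki/api/v1/push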

Related

How to Deploy a Custom Docker Image using Bitnami Postgresql Helm Chart

I'm attempting to use a Bitnami Helm chart for Postgresql to spin up a custom Docker image that I create (I take the Bitnami Postgres Docker image and pre-populate it with my data, and then build/tag it and push it to my own Docker registry).
When I attempt to start up the chart with my own image coordinates, the service spins up, but no pods are present. I'm trying to figure out why.
I've tried running helm install with the --debug option and I notice that if I run my configuration below, only 4 resources get created (client.go:128: [debug] creating 4 resource(s)), vs 5 resources if I try to spin up the default Docker image specified in the Postgres Helm chart. Presumably the missing resource is my pod. But I'm not sure why or how to fix this.
My Chart.yaml:
apiVersion: v2
name: test-db
description: A Helm chart for Kubernetes
type: application
version: "1.0-SNAPSHOT"
appVersion: "1.0-SNAPSHOT"
dependencies:
  - name: postgresql
    version: 11.9.13
    repository: https://charts.bitnami.com/bitnami
    condition: postgresql.enabled
My values.yaml:
postgresql:
  enabled: true
  image:
    registry: myregistry.com
    repository: test-db
    tag: "1.0-SNAPSHOT"
    pullPolicy: always
    pullSecrets:
      - my-reg-secret
  service:
    type: ClusterIP
  nameOverride: test-db
I'm starting this all up by running
helm dep up
helm install mydb .
When I start up a Helm chart (helm install mychart .), is there a way to see what Helm/Kubectl is doing, beyond just passing the --debug flag? I'm trying to figure out why it's not spinning up the pod.
The issue was on this line in my values.yaml:
pullPolicy: always
The pullPolicy is case sensitive. Changing the value to Always (note capital A) fixed the issue.
I'll add that this error wasn't immediately obvious, and there was no indication in the output from running the Helm command that this was the issue.
I was able to discover the problem by looking at how the statefulset got deployed (I use k9s to navigate Kubernetes resources): it showed 0/0 for the number of pods that were deployed. Describing this statefulset, I was able to see the error in capitalization in the startup logs.
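For reference, the corrected fragment of the values.yaml above is just:

postgresql:
  image:
    pullPolicy: Always  # must be one of Always, IfNotPresent, Never

If you don't use k9s, the same clue shows up with plain kubectl (the statefulset name below is a guess derived from the release name and nameOverride above):

kubectl describe statefulset mydb-test-db  # the Events section reports the invalid pullPolicy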

IBM-MQ kubernetes helm chart ImagePullBackOff

I want to deploy IBM-MQ to Kubernetes (Rancher) using helmfile. I've found this link and did everything as described in the guide: https://artifacthub.io/packages/helm/ibm-charts/ibm-mqadvanced-server-dev.
But the pod is not starting with the error: "ImagePullBackOff". What could be the problem? My helmfile:
...
repositories:
  - name: ibm-stable-charts
    url: https://raw.githubusercontent.com/IBM/charts/master/repo/stable
releases:
  - name: ibm-mq
    namespace: test
    createNamespace: true
    chart: ibm-stable-charts/ibm-mqadvanced-server-dev
    values:
      - ./ibm-mq.yaml
ibm-mq.yaml:
---
license: accept
security:
  initVolumeAsRoot: true/false  # I'm not sure about this, I added it just because it wasn't working.
                                # Neither option works, either.
queueManager:
  name: "QM1"
  dev:
    secret:
      adminPasswordKey: adminPassword
      name: mysecret
I've created the secret and seems like it's working, so the problem is not in the secret.
The full error I'm getting:
Failed to pull image "ibmcom/mq:9.1.5.0-r1": rpc error: code = Unknown desc = Error response from daemon: manifest for ibmcom/mq:9.1.5.0-r1 not found: manifest unknown: manifest unknown
I'm using Helm 3, helmfile v0.141.0, kubectl 1.22.2.
I will leave some things as an exercise to you, but here is what that tutorial says:
helm repo add ibm-stable-charts https://raw.githubusercontent.com/IBM/charts/master/repo/stable
You don't really need to do this, since you are using helmfile.
Then they say to issue:
helm install --name foo \
  ibm-stable-charts/ibm-mqadvanced-server-dev \
  --set license=accept \
  --set queueManager.dev.secret.name=mysecret \
  --set queueManager.dev.secret.adminPasswordKey=adminPassword \
  --tls
which is targeted towards helm2 (because of those --name and --tls), but that is irrelevant to the problem.
When I install this, I get the same issue:
Failed to pull image "ibmcom/mq:9.1.5.0-r1": rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/ibmcom/mq:9.1.5.0-r1": failed to resolve reference "docker.io/ibmcom/mq:9.1.5.0-r1": docker.io/ibmcom/mq:9.1.5.0-r1: not found
I went to their docker.io page and indeed such a tag, 9.1.5.0-r1, is not present.
OK, can we update the image then?
helm show values ibm-stable-charts/ibm-mqadvanced-server-dev
reveals:
image:
  # repository is the container repository to use, which must contain IBM MQ Advanced for Developers
  repository: ibmcom/mq
  # tag is the tag to use for the container repository
  tag: 9.1.5.0-r1
good, that means we can change it via an override value:
helm install foo \
  ibm-stable-charts/ibm-mqadvanced-server-dev \
  --set license=accept \
  --set queueManager.dev.secret.name=mysecret \
  --set queueManager.dev.secret.adminPasswordKey=adminPassword \
  --set image.tag=latest  # or any other tag
so this works.
How to set up that tag in helmfile is left as an exercise to you, but it's pretty trivial.
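For the impatient, a sketch of that exercise: the override is one more key in the ibm-mq.yaml values file from the question (any tag that actually exists for ibmcom/mq on docker.io will do):

image:
  tag: latest  # assumption: substitute whatever tag you verified exists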

Installing helm chart under custom namespace

I am trying to install some helm charts into custom namespaces on a Microk8s cluster but get the following errors. What am I doing wrong? The cluster is fresh and none of these namespaces or resources exist.
> helm install loki grafana/loki-stack -f values/loki-stack.yaml --namespace loki --create-namespace
W0902 08:08:35.878632 1610435 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Error: rendered manifests contain a resource that already exists. Unable to continue with install:
PodSecurityPolicy "loki-grafana" in namespace "" exists and cannot be imported into the current release:
invalid ownership metadata; annotation validation error:
key "meta.helm.sh/release-namespace" must equal "loki": current value is "default"
> helm install traefik traefik/traefik -f values/traefik.yml --namespace traefik --create-namespace
Error: rendered manifests contain a resource that already exists. Unable to continue with install:
ClusterRole "traefik" in namespace "" exists and cannot be imported into the current release:
invalid ownership metadata; annotation validation error:
key "meta.helm.sh/release-namespace" must equal "traefik": current value is "default"
The resource (loki) to be deployed already exists in the specified namespace.
Please check with helm list -n loki.
So before you install it, you need to uninstall it first.
helm uninstall loki -n loki
helm install loki grafana/loki-stack -f values/loki-stack.yaml --namespace loki
Or you can try to update it directly:
helm upgrade loki grafana/loki-stack -f values/loki-stack.yaml --namespace loki
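To locate the conflicting release first, a quick check (the error's meta.helm.sh/release-namespace annotation points at "default", so the old release most likely lives there):

helm list --all-namespaces       # find where the existing loki release lives
helm uninstall loki -n default   # assumption: the stale release sits in "default"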

RabbitMQ-Overview dashboard no metrics data shown

I have a working bitnami/rabbitmq server Helm chart on my Kubernetes cluster, but the graphs displayed in Grafana are not enough. I found the RabbitMQ-Overview dashboard, which has the level of detail that would be very useful for my bitnami/rabbitmq server, but unfortunately no data shows up on it. Can someone suggest a workaround for my case? Please note I am using the kube-prometheus-stack Helm chart for the Prometheus and Grafana services.
Steps to solve this issue:
I enabled the rabbitmq_prometheus plugin on all my RabbitMQ nodes by entering the pods:
rabbitmq-plugins enable rabbitmq_prometheus
Output:
:/$ rabbitmq-plugins list
Listing plugins with pattern ".*" ...
Configured: E = explicitly enabled; e = implicitly enabled
| Status: * = running on rabbit@rabbitmq-0.broker-server
|/
[E ] rabbitmq_management 3.8.9
[e*] rabbitmq_management_agent 3.8.9
[  ] rabbitmq_mqtt 3.8.9
[e*] rabbitmq_peer_discovery_common 3.8.9
[E*] rabbitmq_peer_discovery_k8s 3.8.9
[E*] rabbitmq_prometheus 3.8.9
I made sure I was using the same Prometheus data source in Grafana.
I also tried creating a prometheus-rabbitmq-exporter to get the metrics from RabbitMQ into the RabbitMQ-Overview dashboard, but no data was displayed.
(screenshot: my dashboard showing no data)
Please note: I already have metrics from RabbitMQ displayed in Grafana; what I need is the RabbitMQ-Overview dashboard, which contains more details about my server.
To build on @amin's answer, the label can be added in the Helm values like so:
Before helm chart version 9.0.0
metrics:
  serviceMonitor:
    enabled: true
    additionalLabels:
      release: prometheus
After helm chart version 9.0.0
metrics:
  serviceMonitor:
    enabled: true
    labels:
      release: prometheus
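The same values can also be passed as --set flags (a sketch using the 9.0.0+ key names and the release name from the answer below):

helm upgrade --install broker bitnami/rabbitmq \
  --set metrics.enabled=true \
  --set metrics.serviceMonitor.enabled=true \
  --set metrics.serviceMonitor.labels.release=prometheus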
I fixed the issue after including two options from this table, and I also needed to edit the ServiceMonitor to include the label release: prometheus, which is required for the target to show up in Prometheus since I am using the kube-prometheus-stack Helm chart.
helm install -f values.yml broker bitnami/rabbitmq --namespace default \
  --set nodeSelector.test=rabbit \
  --set volumePermissions.enabled=true \
  --set replicaCount=1 \
  --set metrics.enabled=true \
  --set metrics.serviceMonitor.enabled=true
I hope it helps someone in the future!
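If the ServiceMonitor already exists, it can also be labelled in place instead of reinstalling (the resource name here is hypothetical; list them first to find yours):

kubectl get servicemonitors --all-namespaces
kubectl label servicemonitor broker-rabbitmq release=prometheus -n default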

Namespace deployment issue in Kubernetes Helm Chart

I am testing deployment into different namespaces on Kubernetes, using a Helm chart. In my chart, I have a deployment.yaml and a service.yaml.
When I define the namespace parameter with the helm upgrade --install command, it is not working. Reading up on this, I found the statement that in Helm 2 the release namespace is not overridden by the --namespace parameter.
I tried the following command:
helm upgrade --install kubedeploy --namespace=test pipeline/spacestudychart
NB: here my service is deployed to the default namespace.
(Screenshot of kubectl describe pod.)
Here my "helm version" command output is like follows:
docker#mildevdcr01:~$ helm version
Client: &version.Version{SemVer:"v2.14.3",
GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3",
GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Because of this, I tried adding the namespace in deployment.yaml, under metadata.namespace, like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "spacestudychart.fullname" . }}
  namespace: test
I created two namespaces, test and prod. But this isn't working either: with the namespace set this way, my service doesn't come up and isn't accessible, yet the Jenkins console shows no error. When I set the namespace on the helm upgrade --install command it at least deployed (to the default namespace); now it doesn't deploy at all.
After this, I removed the namespace from deployment.yaml and set metadata.namespace the same way in the other templates. There, too, I cannot access the deployed service, though the Jenkins console output still shows success.
Why is the namespace not working with my Helm deployment? What changes do I need to make to deploy to test/prod instead of the default namespace?
Remove namespace: test from all of your chart files and helm install --namespace=namespace2 ... should work.
On Helm 3.2+, I would suggest (based on this thread) moving the namespace creation to the CLI:
1) Add --create-namespace after the -n flag:
helm upgrade --install <name> <repo> -n <namespace> --create-namespace
2) Inside the different resources, pass the release namespace:
namespace: {{ .Release.Namespace }}
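Applied to the deployment.yaml from the question above, only the namespace line changes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "spacestudychart.fullname" . }}
  namespace: {{ .Release.Namespace }}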