Deploying zookeeper from Bitnami Helm chart - apache-zookeeper

We are using the Bitnami ZooKeeper chart to create a VMware Carvel package. First I tried deploying the Helm chart; the pod stayed pending and I got the output below. But how do I install zkCli.sh?
I want to test the output, but I don't understand how to install zkCli.sh.
$ helm install zookeeper bitnami/zookeeper
NAME: zookeeper
LAST DEPLOYED: Wed Apr 20 13:33:03 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: zookeeper
CHART VERSION: 9.0.3
APP VERSION: 3.8.0
** Please be patient while the chart is being deployed **
ZooKeeper can be accessed via port 2181 on the following DNS name from within your cluster:
zookeeper.default.svc.cluster.local
To connect to your ZooKeeper server run the following commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=zookeeper,app.kubernetes.io/instance=zookeeper,app.kubernetes.io/component=zookeeper" -o jsonpath="{.items[0].metadata.name}")
kubectl exec -it $POD_NAME -- zkCli.sh
To connect to your ZooKeeper server from outside the cluster execute the following commands:
kubectl port-forward --namespace default svc/release-name-zookeeper-0 2181: &
zkCli.sh 127.0.0.1:2181
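Note that zkCli.sh does not need to be installed separately: it ships inside the ZooKeeper image the chart deploys, so you only exec into a running pod and call it. A minimal sketch of testing it, assuming the default pod name zookeeper-0 for this release (the pod has to reach Running state first):
$ kubectl exec -it zookeeper-0 -- zkCli.sh -server localhost:2181
# inside the zkCli prompt, list the root znodes to confirm the server answers
[zk: localhost:2181(CONNECTED) 0] ls /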

Related

Missing information when scraping data in grafana using prometheus and minikube

I am trying to use prometheus and grafana to get information like cluster cpu usage, cluster memory usage and pod cpu usage in my kubernetes cluster. I am using minikube in wsl2. I am using the following commands to get everything up and running:
To start minikube:
$ minikube start
To add repo and install prometheus:
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install prometheus prometheus-community/prometheus
To create a nodeport on the port 9090:
$ kubectl expose service prometheus-server --type=NodePort --target-port=9090 --name=prometheus-server-ext
To access prometheus server outside minikube:
$ minikube service prometheus-server-ext
Adding the grafana repo and installing with helm:
$ helm repo add grafana https://grafana.github.io/helm-charts
$ helm repo update
$ helm install grafana grafana/grafana
To create a nodeport on the port 3000 :
$ kubectl expose service grafana --type=NodePort --target-port=3000 --name=grafana-ext
To access grafana server outside minikube:
$ minikube service grafana-ext
To decode and get the grafana password (username: admin):
$ kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
Then I add a data source of type prometheus with the URL from the minikube service output above: https://192.168.49.2:30988.
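As a quick sanity check (not one of the original steps), Prometheus can be queried directly to see whether the CPU metrics the dashboards rely on are being scraped at all; the URL is just the NodePort URL from above and container_cpu_usage_seconds_total is the usual cAdvisor CPU metric:
# query Prometheus over its HTTP API for the container CPU metric
$ curl -sG 'http://192.168.49.2:30988/api/v1/query' \
    --data-urlencode 'query=container_cpu_usage_seconds_total' | head -c 500
# an empty "result":[] means the metric is not being collected, so the
# dashboards have nothing to plot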
Until here everything works as expected. Then I import two different dashboards with the IDs 6417 and 12740, and I get the dashboards shown in the two screenshots (omitted here).
My question is why do I only see the number of pods and cluster memory usage, but no cpu usage of pods or the cluster? It seems like there is a lot of information missing.
Here is the JSON code for dashboard with id 6417: https://pastebin.com/ywrT2qmQ
Here is the JSON code for dashboard with id 12740: https://pastebin.com/2neEzESu
I get the dashboards by importing them with the IDs 6417 and 12740 (screenshot of the import step omitted).

wiki.js exec user process caused: exec format error on postgress container

I'm trying to deploy wiki.js into my K3S cluster of four RPi4s.
For this, I run these commands according to the install instructions (https://docs.requarks.io/install/kubernetes):
$ helm repo add requarks https://charts.js.wiki
$ helm repo update
$ helm install wikijs requarks/wiki
After those commands, I get the following:
NAME: wikijs
LAST DEPLOYED: Tue Jun 14 13:25:30 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
http://wiki.minikube.localmap[path:/ pathType:Prefix]
However, when I get the pods, I get the following:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
wikijs-7f6c8b9f54-lz55k 0/1 ContainerCreating 0 3s
wikijs-postgresql-0 0/1 Error 0 3s
Finally, viewing the postgres logs, I get:
$ kubectl logs wikijs-postgresql-0
standard_init_linux.go:228: exec user process caused: exec format error
I believe this is an error about an executable built for the wrong architecture, but both wiki.js and postgresql support ARM64, so the right architecture should be selected when deploying the app, shouldn't it?
If I need to select the architecture manually, how can I do so? I've looked through the wikijs chart and I can't find where to select the postgres image.
Many thanks!
I was running into the same issue: the problem is running the stock postgres image on your RPi. I was able to get this to work on my RPi 4 by using the image arm64v8/postgres:14 from docker.io for my postgresql StatefulSet.
I had to change this image in two places within the helm chart:
# charts/postgresql/values.yaml
image:
  registry: docker.io
  repository: arm64v8/postgres
  tag: 14
volumePermissions:
  enabled: true
  image:
    registry: docker.io
    repository: arm64v8/postgres
    tag: 14
The latter is for the initContainer (see statefulset template within the postgresql chart).
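If you would rather not edit the vendored chart files, the same images can usually be overridden at install time; the postgresql.* prefix below is an assumption based on the bundled subchart's name, so verify it against the chart you actually have:
$ helm install wikijs requarks/wiki \
    --set postgresql.image.registry=docker.io \
    --set postgresql.image.repository=arm64v8/postgres \
    --set postgresql.image.tag=14 \
    --set postgresql.volumePermissions.image.registry=docker.io \
    --set postgresql.volumePermissions.image.repository=arm64v8/postgres \
    --set postgresql.volumePermissions.image.tag=14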

Bitnami mongodb cluster can't be accessed from Robo 3T client - kubernetes

I have installed a Bitnami MongoDB cluster on an AWS server with helm install my-release bitnami/mongodb -f values-production.yaml, with architecture: replicaset and replicaCount: 2.
Versions: helm v3.2.4, chart mongodb-8.2.1, app version 4.2.8, kubectl v1.16.8.
It installed successfully: one StatefulSet along with three pods and one svc. I set up a port forward to access MongoDB from the Robo 3T client as localhost, using the username and password, but I am getting an error: "Authorization failed on admin database as root".
I have tried port-forwarding to the svc, the pod, and the StatefulSet; all give the same error.
kubectl port-forward --namespace default svc/my-release-mongodb-headless 27017:27017
Have I done anything wrong, or is there anything extra I need to mention in the values-production.yaml file?
It installs and works fine with the command below:
helm install mongo-cluster-name . -f values-production.yaml --set architecture=replicaset --set replicaCount=2
This starts the StatefulSet, svc, and pods along with the disks. If we port-forward to the svc named mongo-cluster-name-headless, we can access it as localhost:
kubectl port-forward --namespace default svc/mongo-cluster-name-headless 27017:27017
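With that port-forward in place, a connection from the local machine would look roughly like this (a sketch; the secret name below follows the usual <release>-mongodb convention and may differ in your setup):
# fetch the generated root password (secret name assumed)
$ kubectl get secret mongo-cluster-name-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode
# connect through the forwarded port with the root user
$ mongo --host 127.0.0.1 --port 27017 -u root -p <root-password> --authenticationDatabase admin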
you can't do a port-forward in a replicaset architecture.
https://docs.mongodb.com/kubernetes-operator/master/tutorial/deploy-replica-set/#enable-external-access-for-a-replica-set
You are using helm? Nice, just check these lines; there is an option made just for you: https://github.com/bitnami/charts/blob/master/bitnami/mongodb/values.yaml#L446
This will create a LoadBalancer for each of your primary & secondary mongodb pods.
But LoadBalancers are quite expensive (around 15 euros/month for me), so 30/month just to access the replicaset (2 replicas minimum) from outside.
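For reference, a minimal sketch of turning that option on (key names taken from the linked bitnami/mongodb values.yaml, so double-check them against your chart version):
$ helm install my-release bitnami/mongodb \
    --set architecture=replicaset \
    --set replicaCount=2 \
    --set externalAccess.enabled=true \
    --set externalAccess.service.type=LoadBalancer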

Installation of IBM MQ chart version 1.2.0 using helm in GCP creates an error during pod creation: "CrashLoopBackOff"

The IBM MQ Helm chart installation failed to create the Pod, which shows a "CrashLoopBackOff" error.
Pod error Message:
Error setting admin password: /usr/bin/sudo: exit status 1: sudo: effective uid is not 0, is
/usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?
Infrastructure: Google Cloud Platform
Kubectl version:
Client Version: v1.18.6
Server Version: v1.16.13-gke.1.
helm chart:
helm repo add ibm-charts https://raw.githubusercontent.com/IBM/charts/master/repo/stable/
Helm command:
$ helm install mq --set license=accept --set service.type=LoadBalancer --set queueManager.dev.secret.name=mysecret --set queueManager.dev.secret.adminPasswordKey=adminPassword . --set security.initVolumeAsRoot=true
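For completeness, the secret referenced by those flags has to exist before the install; a sketch of creating it (the password value here is only a placeholder):
# create the secret referenced by queueManager.dev.secret.name / adminPasswordKey
$ kubectl create secret generic mysecret --from-literal=adminPassword='<choose-a-password>'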

Gitlab-installed Helm: Error: context deadline exceeded

I have a Kubernetes cluster installed in AWS with Kops. I've installed Helm Tiller with the Gitlab UI. The Tiller service seems to be working via Gitlab; for example, I've installed Ingress from the Gitlab UI.
But when trying to use that same Tiller from my CLI, I can't get it working. When I run helm init it says it's already installed (which makes total sense):
helm init --tiller-namespace gitlab-managed-apps --service-account tiller
$HELM_HOME has been configured at C:\Users\danie\.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!
But when trying to, for example, list the charts, it takes 5 minutes and then times out:
$ helm list --tiller-namespace gitlab-managed-apps --debug
[debug] Created tunnel using local port: '60471'
[debug] SERVER: "127.0.0.1:60471"
Error: context deadline exceeded
What am I missing so I can use the Gitlab-installed Tiller from my CLI?
Are you sure that your Tiller server is installed in the "gitlab-managed-apps" namespace? By default it's installed to the 'kube-system' one, as per the official installation instructions on the GitLab website, which would mean this is what causes your helm ls command to fail (just drop the --tiller-namespace flag).
The best way to verify it is via:
kubectl get deploy/tiller-deploy -n gitlab-managed-apps
Do you see any tiller-related deployment object in that namespace?
Assuming you can operate your KOPS cluster with the current kube context, you should have no problem running the helm client locally. You can always explicitly pass the --kube-context argument to the helm command, as in the example below.
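For example (the context name is only a placeholder; list yours with kubectl config get-contexts):
$ helm list --tiller-namespace gitlab-managed-apps --kube-context <your-kops-context>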
Update:
I think I know what causes your problem: Helm, when installed via the GitLab UI, uses a secured connection (SSL) between helm and tiller (proof here).
Knowing that, you should retrieve the set of certificates from the Secret object that is mounted on the Tiller Pod:
#The CA
ca.cert.pem
ca.key.pem
#The Helm client files
helm.cert.pem
helm.key.pem
#The Tiller server files
tiller.cert.pem
tiller.key.pem
and then connect the helm client to the tiller server using the following command, as explained here:
helm ls --tls --tls-ca-cert ca.cert.pem --tls-cert helm.cert.pem --tls-key helm.key.pem
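One way to get those files outside the cluster is to read them from the Secret mounted on the Tiller pod; the secret and key names below are assumptions (they mirror what the next answer finds under /etc/certs), so check them first with kubectl get secrets -n gitlab-managed-apps:
# extract the CA, client cert and client key from the (assumed) tiller-secret
$ kubectl get secret tiller-secret -n gitlab-managed-apps -o jsonpath='{.data.ca\.crt}' | base64 --decode > ca.cert.pem
$ kubectl get secret tiller-secret -n gitlab-managed-apps -o jsonpath='{.data.tls\.crt}' | base64 --decode > helm.cert.pem
$ kubectl get secret tiller-secret -n gitlab-managed-apps -o jsonpath='{.data.tls\.key}' | base64 --decode > helm.key.pem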
Here's the way I've been doing this.
First open a shell in the gitlab tiller pod:
# replace the pod name, tiller-deploy-5bb888969c-7bzpl with your own
kubectl exec -n gitlab-managed-apps tiller-deploy-5bb888969c-7bzpl -it -- sh
Then use the pod's native helm binary and certs to connect to tiller:
$ env | grep TILLER_TLS_CERTS
#cd to the result, in my case /etc/certs
$ cd /etc/certs
# connect to tiller with the certs using the native helm (/helm) in my case:
$ /helm ls --tls --tls-ca-cert ./ca.crt --tls-cert ./tls.crt --tls-key ./tls.key