Postgres Kubernetes Helm chart fails when Istio is enabled

I have installed Istio in my AKS cluster and enabled sidecar injection for a namespace called database, as below:
kubectl label namespace database istio-injection=enabled
I'm trying to install the PostgreSQL HA database (Helm 3) into the database namespace:
helm install pg-db bitnami/postgresql-ha --version=2.0.1 -n database
A few seconds in, the database starts to fail because the database pod is not considered healthy.
When I disable sidecar injection for the database pods as below, the restarts stop. How can I run this Helm chart without disabling the sidecar?
podAnnotations:
sidecar.istio.io/inject: "false"
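For reference, this is roughly how that annotation is passed through the chart's values (assuming the chart exposes postgresql.podAnnotations and pgpool.podAnnotations; the exact keys may vary by chart version, and the file name values.yaml is just an example):

postgresql:
  podAnnotations:
    sidecar.istio.io/inject: "false"
pgpool:
  podAnnotations:
    sidecar.istio.io/inject: "false"

helm install pg-db bitnami/postgresql-ha --version=2.0.1 -n database -f values.yaml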
[pod listing and logs for pg-db-postgresql-ha-postgresql-1 and pg-db-postgresql-ha-pgpool-5475f499b8-7z4ph omitted]


Missing information when scraping data in Grafana using Prometheus and minikube

I am trying to use Prometheus and Grafana to get information like cluster CPU usage, cluster memory usage, and pod CPU usage in my Kubernetes cluster. I am using minikube in WSL2. I use the following commands to get everything up and running:
To start minikube:
$ minikube start
To add the repo and install Prometheus:
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install prometheus prometheus-community/prometheus
To create a NodePort service on port 9090:
$ kubectl expose service prometheus-server --type=NodePort --target-port=9090 --name=prometheus-server-ext
To access the Prometheus server from outside minikube:
$ minikube service prometheus-server-ext
To add the Grafana repo and install it with Helm:
$ helm repo add grafana https://grafana.github.io/helm-charts
$ helm repo update
$ helm install grafana grafana/grafana
To create a NodePort service on port 3000:
$ kubectl expose service grafana --type=NodePort --target-port=3000 --name=grafana-ext
To access the Grafana server from outside minikube:
$ minikube service grafana-ext
To decode and retrieve the Grafana password (username: admin):
$ kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
Then I add a data source of type Prometheus with the URL from above: https://192.168.49.2:30988.
Up to this point everything works as expected. Then I import two different dashboards, with the IDs 6417 and 12740, and get the following dashboards: [dashboard screenshots omitted]
My question is: why do I only see the number of pods and cluster memory usage, but no CPU usage for the pods or the cluster? It seems like a lot of information is missing.
Here is the JSON for the dashboard with ID 6417: https://pastebin.com/ywrT2qmQ
Here is the JSON for the dashboard with ID 12740: https://pastebin.com/2neEzESu
I get the dashboards by importing the IDs 6417 and 12740. [import screenshot omitted]
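As a sanity check (my suggestion, not part of the original post), a typical pod-CPU query can be pasted into the Prometheus UI (Graph tab) to see whether the cAdvisor metric these dashboards rely on returns any data at all:

sum(rate(container_cpu_usage_seconds_total{image!=""}[5m])) by (pod)

If that query comes back empty, the missing CPU panels are caused by missing scrape data (kubelet/cAdvisor targets) rather than by the dashboards themselves.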

Installing telepresence with a pod security policy

I'm trying to install telepresence into an EKS cluster that has PodSecurityPolicies. I've gotten the traffic manager installed by running Helm on the traffic manager chart:
helm install traffic-manager -n ambassador datawire/telepresence --create-namespace
After that I modify the traffic-manager-ambassador ClusterRole to use one of the cluster's PodSecurityPolicies. Installation of the traffic manager eventually succeeds after I do this. However, installation of the uninstall-agent job fails:
Error creating: pods "uninstall-agents-" is forbidden: PodSecurityPolicy: unable to admit pod: []
My question is: what Role or ClusterRole do I have to modify to allow Helm to uninstall telepresence? Or, how do I figure out which service account is being used to try to install the pod, so I can give it access to a pod security policy?
I made some fixes at https://github.com/ddl-pjohnson/telepresence/pull/1/files to make it easier to add additional rules and to run the helm hook as the correct user.
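In case it helps others, below is a minimal sketch of the RBAC that usually clears this kind of PSP error. Every name in it is an assumption: find the hook job's real serviceAccountName with kubectl get job uninstall-agents -n ambassador -o yaml, and substitute one of your cluster's actual PSP names:

# Role that allows "use" of an existing PodSecurityPolicy (names are placeholders)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: telepresence-psp
  namespace: ambassador
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  verbs: ["use"]
  resourceNames: ["<your-psp-name>"]
---
# Bind the Role to the service account the hook job runs as (placeholder name)
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: telepresence-psp
  namespace: ambassador
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: telepresence-psp
subjects:
- kind: ServiceAccount
  name: <hook-service-account>
  namespace: ambassador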

Cannot access Bitnami MongoDB cluster from Robo 3T client on Kubernetes

I have installed the Bitnami MongoDB cluster on an AWS server with helm install my-release bitnami/mongodb -f values-production.yaml, with architecture: replicaset and replicaCount: 2.
Versions: Helm v3.2.4, chart mongodb-8.2.1, app version 4.2.8, kubectl v1.16.8.
It installed successfully: one StatefulSet along with three pods and one svc. I set up a port-forward to access MongoDB from the Robo 3T client as localhost, using the username and password, but I get an error: Authorization failed on admin database as root.
I have tried port-forwarding to the svc, the pod, and the StatefulSet, and all give the same error.
kubectl port-forward --namespace default svc/my-release-mongodb-headless 27017:27017
Have I done something wrong, or is there anything extra I need to mention in the values-production.yaml file?
It installs and works fine with the command below:
helm install mongo-cluster-name . -f values-production.yaml --set architecture=replicaset --set replicaCount=2
This starts the StatefulSet, svc, and pods along with the disks. If we port-forward to the svc named mongo-cluster-name-headless, we can access it as localhost:
kubectl port-forward --namespace default svc/mongo-cluster-name-headless 27017:27017
You can't do a port-forward in a replicaset architecture.
https://docs.mongodb.com/kubernetes-operator/master/tutorial/deploy-replica-set/#enable-external-access-for-a-replica-set
You are using Helm? Nice, then check these lines, there is an option made just for you: https://github.com/bitnami/charts/blob/master/bitnami/mongodb/values.yaml#L446
This is going to create a LoadBalancer for each of your primary & secondary MongoDB pods.
But LoadBalancers are quite expensive (around 15 euros/month for me), so about 30/month just to access the replica set (2 replicas minimum) from outside.
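For reference, a minimal sketch of the values that option corresponds to (key names taken from the linked values.yaml; they can differ between chart versions, so double-check yours):

externalAccess:
  enabled: true
  service:
    type: LoadBalancer

With that in your values file, each replica gets its own LoadBalancer service, which is what makes the replica set reachable from outside the cluster.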

Cannot helm install rabbitmq servers (helm 2.16.9): namespaces "rabbit" is forbidden

helm install --name my-rabbitserver stable/rabbitmq --namespace rabbit
Error: release my-rabbitserver failed: namespaces "rabbit" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "rabbit"
I have tried with (and without) a rabbit namespace created before the install attempt.
I am using helm 2.16.9, so I need to qualify the name of my installation with --name.
I am running this against a Google Cloud Kubernetes cluster.
It looks as though the Helm tiller pod did not have sufficient privileges.
I found this similar issue:
https://support.sumologic.com/hc/en-us/articles/360037704393-Kubernetes-Helm-install-fails-with-Error-namespaces-sumologic-is-forbidden-User-system-serviceaccount-kube-system-default-cannot-get-resource-namespaces-in-API-group-in-the-namespace-sumologic-
Basically I had to stop the tiller deployment, set up a tiller ServiceAccount YAML and apply it to give tiller access to kube-system, and then execute helm init again with the new service account.
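For reference, a sketch of the usual commands for this (the linked article's YAML may differ slightly; this is the common pattern for giving tiller cluster-admin):

kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
helm init --service-account tiller --upgrade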
The helm rabbitmq install then appears to work as advertised.
I thought Helm was supposed to make life easier, but it still has its own limitations and additional YAML files to get it to work as advertised.

Firewall/Port requirements for Helm 2.16

We are installing Helm v2.16 on Kubernetes v1.14 in offline mode. We downloaded the tiller docker image and loaded it on the server where we were installing Helm.
i. No access to the Internet from the application servers.
ii. Limited port connectivity between the Kubernetes master and worker nodes (no blanket allow-all connectivity between the servers). The ports that are open between the application servers are:
a. 10250-10260
b. 6443
c. 443
d. 2379-2380
e. NodePort range 30000-32767
f. 44134-44135
We downloaded Helm 2.16 and installed it following the steps below. The tiller pod failed to come up until we allowed ALL communication between the Kubernetes master and worker nodes, which means there are specific firewall requirements for Helm/tiller to function in a Kubernetes cluster. Could someone please share the port/firewall details, since we do not want to open ALL traffic between the nodes of a cluster (we would rather open specific ports)?
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --skip-refresh
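Once the tiller pod is running, a quick way to verify client-to-tiller connectivity (my addition, not part of the original question):

helm version
# Reports both the Client and Server versions only when the helm client can
# reach tiller's gRPC port (44134) through the cluster; a timeout here usually
# points at the firewall rules between the nodes.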