I have installed the Bitnami MongoDB cluster with helm install my-release bitnami/mongodb -f values-production.yaml, with architecture: replicaset and replicaCount: 2, on an AWS server.
Helm version: v3.2.4, chart: mongodb-8.2.1, app version: 4.2.8, kubectl version: v1.16.8.
It installed successfully: one statefulset with three pods and one svc. I set up a port-forward to access MongoDB from the Robo 3T client as localhost, using the username and password, but I am getting an error: Authorization failed on admin database as root.
I have tried port-forwarding to the svc, the pod, and the statefulset; all give the same error.
kubectl port-forward --namespace default svc/my-release-mongodb-headless 27017:27017
Have I done something wrong, or is there anything extra I need to mention in the values-production.yaml file?
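For completeness, the credentials I use come from the secret the chart creates (a sketch; the secret name my-release-mongodb and the key mongodb-root-password are assumptions and may differ between chart versions):
# Read the auto-generated root password (check the exact secret name with: kubectl get secrets)
kubectl get secret --namespace default my-release-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode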
It installs and works fine with the command below:
helm install mongo-cluster-name . -f values-production.yaml --set architecture=replicaset --set replicaCount=2
This will start the statefulset, svc, and pods along with the disks; if we port-forward to the svc named mongo-cluster-name-headless, we can access it as localhost:
kubectl port-forward --namespace default svc/mongo-cluster-name-headless 27017:27017
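With the port-forward in place, connecting with the mongo shell looks roughly like this (root is the default Bitnami admin user here, and <password> stands for the value read from the release secret):
# Authenticate against the admin database over the forwarded port
mongo --host 127.0.0.1 --port 27017 -u root -p <password> --authenticationDatabase admin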
You can't do a port-forward in a replicaset architecture.
https://docs.mongodb.com/kubernetes-operator/master/tutorial/deploy-replica-set/#enable-external-access-for-a-replica-set
You are using Helm? Nice, just check these lines; there is an option made just for you: https://github.com/bitnami/charts/blob/master/bitnami/mongodb/values.yaml#L446
This will create a LoadBalancer for each of your primary and secondary MongoDB pods.
But LoadBalancers are quite expensive (around 15 euros/month for me), so about 30 euros/month just to access the replica set (2 replicas minimum) from outside.
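If you do go that way, a minimal sketch of enabling it, assuming the chart exposes the externalAccess.* values referenced in that link (key names can differ between chart versions):
helm upgrade my-release bitnami/mongodb -f values-production.yaml \
  --set architecture=replicaset \
  --set replicaCount=2 \
  --set externalAccess.enabled=true \
  --set externalAccess.service.type=LoadBalancer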
I am trying to use Prometheus and Grafana to get information like cluster CPU usage, cluster memory usage, and pod CPU usage in my Kubernetes cluster. I am using minikube in WSL2. I am using the following commands to get everything up and running:
To start minikube:
$ minikube start
To add the repo and install Prometheus:
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install prometheus prometheus-community/prometheus
To create a NodePort on port 9090:
$ kubectl expose service prometheus-server --type=NodePort --target-port=9090 --name=prometheus-server-ext
To access the Prometheus server outside minikube:
$ minikube service prometheus-server-ext
To add the Grafana repo and install it with Helm:
$ helm repo add grafana https://grafana.github.io/helm-charts
$ helm repo update
$ helm install grafana grafana/grafana
To create a NodePort on port 3000:
$ kubectl expose service grafana --type=NodePort --target-port=3000 --name=grafana-ext
To access the Grafana server outside minikube:
$ minikube service grafana-ext
To decode and get the password for Grafana (username: admin):
$ kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
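As a sanity check before wiring Grafana to Prometheus, the NodePort URLs can be printed directly (service names assume the ones created above):
$ kubectl get svc prometheus-server-ext grafana-ext
$ minikube service prometheus-server-ext --url
$ minikube service grafana-ext --url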
Then I add a data source of type Prometheus with the URL seen above: https://192.168.49.2:30988.
Up to here everything works as expected. Then I import two different dashboards, with the IDs 6417 and 12740.
My question is: why do I only see the number of pods and the cluster memory usage, but no CPU usage for the pods or the cluster? It seems like a lot of information is missing.
Here is the JSON for the dashboard with ID 6417: https://pastebin.com/ywrT2qmQ
Here is the JSON for the dashboard with ID 12740: https://pastebin.com/2neEzESu
I get the dashboards by importing them with the IDs 6417 and 12740.
I have installed Istio in my AKS cluster and enabled it for a namespace called database, as below:
kubectl label namespace database istio-injection=enabled
I'm going to install the PostgreSQL database with Helm 3 into the database namespace:
helm install pg-db bitnami/postgresql-ha --version=2.0.1 -n database
A few seconds later the database starts to fail because the database pod is not considered healthy.
When I disable sidecar injection for the database as below, it doesn't restart. How can I run this Helm chart without disabling the sidecar?
podAnnotations:
sidecar.istio.io/inject: "false"
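For reference, this is how I pass that annotation at install time (a sketch; the postgresql and pgpool value paths are assumptions and should be checked against the chart's values.yaml):
# Write a values file with the annotation and install with it
cat > no-sidecar-values.yaml <<'EOF'
postgresql:
  podAnnotations:
    sidecar.istio.io/inject: "false"
pgpool:
  podAnnotations:
    sidecar.istio.io/inject: "false"
EOF
helm install pg-db bitnami/postgresql-ha --version=2.0.1 -n database -f no-sidecar-values.yaml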
Listing the pods:
pg-db-postgresql-ha-postgresql-1 logs:
pg-db-postgresql-ha-pgpool-5475f499b8-7z4ph logs:
I have a Kubernetes cluster installed in AWS with kops. I've installed Helm Tiller with the GitLab UI. The Tiller service seems to be working via GitLab; for example, I've installed Ingress from the GitLab UI.
But when trying to use that same Tiller from my CLI, I can't manage to get it working. When I run helm init it says it's already installed (which makes total sense):
helm init --tiller-namespace gitlab-managed-apps --service-account tiller
$HELM_HOME has been configured at C:\Users\danie\.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!
But when trying to, for example, list the charts, it takes 5 minutes and then times out:
$ helm list --tiller-namespace gitlab-managed-apps --debug
[debug] Created tunnel using local port: '60471'
[debug] SERVER: "127.0.0.1:60471"
Error: context deadline exceeded
What am I missing so I can use the GitLab-installed Tiller from my CLI?
Are you sure that your Tiller server is installed in the "gitlab-managed-apps" namespace? By default it's installed to the "kube-system" one as per the official installation instructions on the GitLab website, which would mean this is what causes your helm ls command to fail (just skip it).
The best way to verify it is via:
kubectl get deploy/tiller-deploy -n gitlab-managed-apps
Do you see any Tiller-related deployment object in that namespace?
Assuming you can operate your kops cluster with the current kube context, you should have no problem running the helm client locally. You can always explicitly use the --kube-context argument with the helm command.
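For example (a sketch; <your-kops-context> is a placeholder, list yours with kubectl config get-contexts):
kubectl config get-contexts
helm list --tiller-namespace gitlab-managed-apps --kube-context <your-kops-context>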
Update:
I think I know what causes your problem: Helm, when installed via the GitLab UI, uses a secured connection (SSL) between helm and tiller (proof here).
Knowing that, you should retrieve the set of certificates from the Secret object that is mounted on the Tiller Pod:
#The CA
ca.cert.pem
ca.key.pem
#The Helm client files
helm.cert.pem
helm.key.pem
#The Tiller server files
tiller.cert.pem
tiller.key.pem
and then connect the helm client to the tiller server using the following command, as explained here:
helm ls --tls --tls-ca-cert ca.cert.pem --tls-cert helm.cert.pem --tls-key helm.key.pem
Here's the way I've been doing this.
First open a shell in the gitlab tiller pod:
# replace the pod name, tiller-deploy-5bb888969c-7bzpl with your own
kubectl exec -n gitlab-managed-apps tiller-deploy-5bb888969c-7bzpl -it -- sh
Then use the pod's native helm and certs... to connect to tiller
$ env | grep TILLER_TLS_CERTS
#cd to the result, in my case /etc/certs
$ cd /etc/certs
# connect to tiller with the certs using the native helm (/helm) in my case:
$ /helm ls --tls --tls-ca-cert ./ca.crt --tls-cert ./tls.crt --tls-key ./tls.key
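The same certificates can also be pulled out of the cluster, so a local helm client can be used instead of the one inside the pod (a sketch; the Secret name tiller-secret is an assumption, confirm with kubectl get secrets -n gitlab-managed-apps):
# Extract the certs the Tiller pod mounts and reuse them locally
kubectl get secret tiller-secret -n gitlab-managed-apps -o jsonpath="{.data.ca\.crt}" | base64 --decode > ca.crt
kubectl get secret tiller-secret -n gitlab-managed-apps -o jsonpath="{.data.tls\.crt}" | base64 --decode > tls.crt
kubectl get secret tiller-secret -n gitlab-managed-apps -o jsonpath="{.data.tls\.key}" | base64 --decode > tls.key
helm ls --tiller-namespace gitlab-managed-apps --tls --tls-ca-cert ./ca.crt --tls-cert ./tls.crt --tls-key ./tls.key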
I have a Kubernetes (v1.10) cluster with Istio installed. I'm trying to install Fission following the Enabling Istio on Fission guide. When I run
helm install --namespace $FISSION_NAMESPACE --set enableIstio=true --name istio-demo \
  https://github.com/fission/fission/releases/download/0.9.1/fission-all-0.9.1.tgz
it throws an error saying:
Error: the server has asked for the client to provide credentials
(My cluster has two nodes and one master, created using Kubespray; all Ubuntu 16.04 machines.)
I think that error is probably an authentication failure between helm and the cluster. Are you able to run kubectl version? How about helm ls?
If you have follow-up questions, could you ask them on the Fission Slack? You'll get quicker answers there.
I think the problem is with helm.
Solution
Remove the .helm folder:
rm -rf .helm
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
helm init --service-account=tiller
kubectl get pods -n kube-system
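Once the tiller-deploy pod shows Running, a quick check that the client can reach it (helm 2 syntax):
helm version
helm ls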
I want to spin up a single installer pod with helm install that, once running, will apply some logic and install other applications into my cluster using helm install.
I'm aware of Helm dependencies, but I want to run some business logic with the installations, and I'd rather do it in the installer pod than on the host triggering the whole installation process.
I found suggestions on using the Kubernetes REST API when inside a pod, but helm requires kubectl to be installed and configured.
Any ideas?
It seems this was a lot easier than I thought...
On a simple pod running Debian, I just installed kubectl, and with the default service account's secret that's already mounted, kubectl was already configured for the cluster's API.
Note that the configured default namespace is the one that my installer pod is deployed to.
Verified with
$ kubectl cluster-info
$ kubectl get ns
I then installed helm, which already used kubectl to access the cluster for installing Tiller.
Verified with
$ helm version
$ helm init
I installed a test chart
$ helm install --name my-release stable/wordpress
It works!!
I hope this helps
You could add kubectl to your installer pod.
"In cluster" credentials could be provided via service account in "default-token" secret: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/