Installing kubernetes-dashboard with helm fails - kubernetes

I've just created a new Kubernetes cluster. The only thing I have done beyond setting up the cluster is install Tiller using helm init and install the Kubernetes dashboard through helm install stable/kubernetes-dashboard.
The helm install command seems to be successful and helm ls outputs:
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
exhaling-ladybug 1 Thu Oct 24 16:56:49 2019 DEPLOYED kubernetes-dashboard-1.10.0 1.10.1 default
However after waiting a few minutes the deployment is still not ready.
Running kubectl get pods shows the pod's status as CrashLoopBackOff.
NAME READY STATUS RESTARTS AGE
exhaling-ladybug-kubernetes-dashboard 0/1 CrashLoopBackOff 10 31m
The description for the pod shows the following events:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 31m default-scheduler Successfully assigned default/exhaling-ladybug-kubernetes-dashboard to nodes-1
Normal Pulling 31m kubelet, nodes-1 Pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
Normal Pulled 31m kubelet, nodes-1 Successfully pulled image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
Normal Started 30m (x4 over 31m) kubelet, nodes-1 Started container kubernetes-dashboard
Normal Pulled 30m (x4 over 31m) kubelet, nodes-1 Container image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" already present on machine
Normal Created 30m (x5 over 31m) kubelet, nodes-1 Created container kubernetes-dashboard
Warning BackOff 107s (x141 over 31m) kubelet, nodes-1 Back-off restarting failed container
And the logs show the following panic message
panic: secrets is forbidden: User "system:serviceaccount:default:exhaling-ladybug-kubernetes-dashboard" cannot create resource "secrets" in API group "" in the namespace "kube-system"
Am I doing something wrong? Why is it trying to create a secret somewhere it cannot?
Is it possible to setup without giving the dashboard account cluster-admin permissions?

Check this out mate:
https://akomljen.com/installing-kubernetes-dashboard-per-namespace/
You can create your own roles if you want to.
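For example, a minimal namespace-scoped Role and RoleBinding, instead of cluster-admin, could look roughly like this. This is only a sketch along the lines of the linked article; it assumes the dashboard keeps its settings in the release namespace (default here) and uses the service account name from your error message:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubernetes-dashboard-minimal
  namespace: default
rules:
# The dashboard stores its encryption key and settings in secrets and a configmap.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "get", "update", "delete"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create", "get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: exhaling-ladybug-kubernetes-dashboard
  namespace: default
Bound this way, the service account can only touch secrets and configmaps in its own namespace rather than getting cluster-wide admin rights.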

By default I have used the namespace default, but if yours is different you need to replace it with your own:
kubectl create serviceaccount exhaling-ladybug-kubernetes-dashboard
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=default:exhaling-ladybug-kubernetes-dashboard

Based on the error you have posted, what is happening is:
1. Helm is installing the dashboard, but by default it is picking up the namespace you have provided (default), while the dashboard is trying to write its secrets into kube-system.
To solve that:
1. either create the roles/role bindings for the namespace you are installing into (by default that namespace is default), or
2. install the helm chart into the namespace the chart expects; in your case you can do:
helm install stable/kubernetes-dashboard --name=kubernetes-dashboard --namespace=kube-system
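Afterwards you can check that the release's pod actually comes up in kube-system, for example:
kubectl get pods -n kube-system | grep kubernetes-dashboard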

Try creating a cluster role binding:
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
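To confirm that the binding grants what the dashboard needs, you can ask the API server directly (this assumes the service account lives in kube-system, as in the command above):
kubectl auth can-i create secrets -n kube-system --as=system:serviceaccount:kube-system:kubernetes-dashboard
It should print yes once the binding is in place.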

Related

Error when getting IngressClass nginx: "nginx" not found

I'm using Kubernetes version 1.19.16 on a bare-metal Ubuntu 18.04 LTS server. When I try to deploy the nginx-ingress YAML files, it always fails with the errors below.
The following steps were followed to deploy nginx-ingress:
$ git clone https://github.com/nginxinc/kubernetes-ingress.git
cd kubernetes-ingress/deployments
kubernetes-ingress/deployments$ git branch
* main
$ kubectl apply -f common/ns-and-sa.yaml
$ kubectl apply -f rbac/rbac.yaml
$ kubectl apply -f rbac/ap-rbac.yaml
$ kubectl apply -f common/default-server-secret.yaml
$ kubectl apply -f common/nginx-config.yaml
$ kubectl apply -f deployment/nginx-ingress.yaml
deployment.apps/nginx-ingress created
$ kubectl get pods -n nginx-ingress -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ingress-75c4bd64bd-mm52x 0/1 Error 2 21s 10.244.1.5 k8s-master <none> <none>
$ kubectl -n nginx-ingress get all
NAME READY STATUS RESTARTS AGE
pod/nginx-ingress-75c4bd64bd-mm52x 0/1 CrashLoopBackOff 12 38m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-ingress 0/1 1 0 38m
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-ingress-75c4bd64bd 1 1 0 38m
$ kubectl logs nginx-ingress-75c4bd64bd-mm52x -n nginx-ingress
W1003 04:53:02.833073 1 flags.go:273] Ignoring unhandled arguments: []
I1003 04:53:02.833154 1 flags.go:190] Starting NGINX Ingress Controller Version=2.3.1 PlusFlag=false
I1003 04:53:02.833158 1 flags.go:191] Commit=a8742472b9ddf27433b6b1de49d250aa9a7cb47e Date=2022-09-16T08:09:31Z DirtyState=false Arch=linux/amd64 Go=go1.18.5
I1003 04:53:02.844374 1 main.go:210] Kubernetes version: 1.19.16
F1003 04:53:02.846604 1 main.go:225] Error when getting IngressClass nginx: ingressclasses.networking.k8s.io "nginx" not found
$ kubectl describe pods nginx-ingress-75c4bd64bd-mm52x -n nginx-ingress
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m6s default-scheduler Successfully assigned nginx-ingress/nginx-ingress-75c4bd64bd-mm52x to k8s-worker-1
Normal Pulled 87s (x5 over 3m5s) kubelet Container image "nginx/nginx-ingress:2.3.1" already present on machine
Normal Created 87s (x5 over 3m5s) kubelet Created container nginx-ingress
Normal Started 87s (x5 over 3m5s) kubelet Started container nginx-ingress
Warning BackOff 75s (x10 over 3m3s) kubelet Back-off restarting failed container
NGINX Ingress controller deployment file link for reference.
As I'm using the main branch of the kubernetes-ingress.git repository, I'm not sure whether the main branch is compatible with my Kubernetes version or not.
Can anyone share some pointers to solve this?
I think you missed creating the "nginx" IngressClass, which is why the controller is not able to find it: https://github.com/nginxinc/kubernetes-ingress/blob/main/deployments/common/ingress-class.yaml#L4
kubectl apply -f common/ingress-class.yaml
You can follow the steps from this document: https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/
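If you'd rather see what that file creates, it is roughly this manifest (a sketch based on the linked ingress-class.yaml; the controller value is specific to the NGINX Inc controller):
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: nginx.org/ingress-controller
Once this IngressClass exists, the controller should find "nginx" on startup and stop crashing.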

kubectl ImagePullBackOff due to secret

I'm creating KubeVirt in minikube. Initially kubevirt-operator.yaml fails with ImagePullBackOff. After I added the secret in the YAML,
imagePullSecrets:
- name: regcred
containers:
all my virt-operator* pods started to run, but the virt-api* pods still show ImagePullBackOff.
The error comes out as
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 27m default-scheduler Successfully assigned kubevirt/virt-api-787487d9cd-t68qf to minikube
Normal Pulling 25m (x4 over 27m) kubelet Pulling image "us-ashburn-1.ocir.io/xxx/virt-api:v0.54.0"
Warning Failed 25m (x4 over 27m) kubelet Failed to pull image "us-ashburn-1.ocir.io/xxx/virt-api:v0.54.0": rpc error: code = Unknown desc = Error response from daemon: pull access denied for us-ashburn-1.ocir.io/xxx/virt-api, repository does not exist or may require 'docker login': denied: Anonymous users are only allowed read access on public repos
Warning Failed 25m (x4 over 27m) kubelet Error: ErrImagePull
Warning Failed 25m (x6 over 27m) kubelet Error: ImagePullBackOff
Normal BackOff 2m26s (x106 over 27m) kubelet Back-off pulling image "us-ashburn-1.ocir.io/xxx/virt-api:v0.54.0"
Manually, I can pull the same image with docker login. Any help would be much appreciated. Thanks
This Docker image looks like it is in a private registry (Oracle's OCIR), and I assume the regcred secret is not correct. Can you log in there with docker login? If so, you can create the regcred secret like this:
$ kubectl create secret docker-registry regcred --docker-server=<region-key>.ocir.io --docker-username='<tenancy-namespace>/<oci-username>' --docker-password='<oci-auth-token>' --docker-email='<email-address>'
Also check this oracle tutorial: https://www.oracle.com/webfolder/technetwork/tutorials/obe/oci/oke-and-registry/index.html
There you can find the steps for adding the secret to the cluster.
If you are using a private registry, check that your secret exists and is correct, for example with the commands below. The secret should also be in the same namespace as the pod.
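For example, to confirm the secret exists in the pod's namespace (kubevirt here, based on your events) and to see which registry and user it actually contains:
kubectl get secret regcred -n kubevirt
kubectl get secret regcred -n kubevirt -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode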
Your Minikube is a VM, not your localhost. Try this:
Open a terminal
eval $(minikube docker-env)
docker build .
kubectl create -f deployment.yaml
This is only valid for the current terminal; if you close it, open a new terminal and run eval $(minikube docker-env) again.
eval $(minikube docker-env) makes docker build the image inside Minikube's Docker daemon.
Also, try logging in to Docker on all nodes using docker login.
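For the registry shown in your events, that would be something like (the region prefix is taken from the image name above):
docker login us-ashburn-1.ocir.io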
There is also a lengthy blog post describing how to debug image pull back-off in depth here
Hey, if you look here, I think you can find some helpful documentation.
What they are doing is uploading the Docker config file, which holds the login credentials, as a secret and then referring to that secret in the deployment.
You could try to follow those steps and do something similar, as sketched below. Let me know if it works.
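A rough sketch of what that looks like, with the secret referenced under the deployment's pod template (the image is taken from your events; the deployment name and labels are just placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: virt-api
  namespace: kubevirt
spec:
  replicas: 1
  selector:
    matchLabels:
      app: virt-api
  template:
    metadata:
      labels:
        app: virt-api
    spec:
      # The pull secret must live in the same namespace as the pod.
      imagePullSecrets:
      - name: regcred
      containers:
      - name: virt-api
        image: us-ashburn-1.ocir.io/xxx/virt-api:v0.54.0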

New deployment is going into ImagePullBackOff

I have installed minikube on my local machine and have created a deployment from a YAML file with imagePullPolicy: Always.
On running minikube kubectl -- get pods, the status of the pods is ImagePullBackOff,
and on running
minikube kubectl -- describe pod podname
I am getting the following results:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 5m39s (x103 over 26h) kubelet Pulling image "deploy1:1.14.2"
Normal BackOff 44s (x2244 over 26h) kubelet Back-off pulling image "deploy1:1.14.2"
Please suggest how to get the deployment running. I have gone through the link but I could not find the service.xml file of the pod. Where is it in Kubernetes on the local system?
It means either you are trying to pull an image from a private repo or you don't have connectivity to the outside. You can test this by running kubectl run <pod_name> --image=nginx, as shown below. If this works, it means you are trying to pull an image from a repo which requires auth.
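For example (test-nginx is just a throwaway pod name; delete it afterwards):
kubectl run test-nginx --image=nginx
kubectl get pod test-nginx
kubectl delete pod test-nginx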

More detailed monitoring of Pod states

Our Pods usually spend at least a minute, and up to several minutes, in the Pending state; the events via kubectl describe pod x yield:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned testing/runner-2zyekyp-project-47-concurrent-0tqwl4 to host
Normal Pulled 55s kubelet, host Container image "registry.com/image:c1d98da0c17f9b1d4ca81713c138ee2e" already present on machine
Normal Created 55s kubelet, host Created container build
Normal Started 54s kubelet, host Started container build
Normal Pulled 54s kubelet, host Container image "gitlab/gitlab-runner-helper:x86_64-6214287e" already present on machine
Normal Created 54s kubelet, host Created container helper
Normal Started 54s kubelet, host Started container helper
The provided information is not detailed enough to figure out exactly what is happening.
Question:
How can we gather more detailed metrics of what exactly happens and when while a Pod is getting up and running, in order to troubleshoot which step takes how much time?
Of special interest would be how long it takes to mount a volume.
Check the kubelet and kube-scheduler logs, because the kube-scheduler schedules the pod onto a node, and the kubelet then starts the pod on that node and reports its status as ready.
journalctl -u kubelet # after logging into the kubernetes node
kubectl logs kube-scheduler -n kube-system
Describe the pod, deployment, replicaset to get more details
kubectl describe pod podname -n namespacename
kubectl describe deploy deploymentname -n namespacename
kubectl describe rs replicasetname -n namespacename
Check events
kubectl get events -n namespacename
Describe the nodes and check available resources and status which should be ready.
kubectl describe node nodename
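If you want actual timestamps per phase rather than relative ages, the pod's status conditions record when it was scheduled, initialized and became ready, and events can be sorted by time; for example (pod and namespace names taken from your output):
kubectl get pod runner-2zyekyp-project-47-concurrent-0tqwl4 -n testing -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.lastTransitionTime}{"\n"}{end}'
kubectl get events -n testing --sort-by=.lastTimestamp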

Pull an Image from a Private Registry fails - ImagePullBackOff

On our K8S worker node, we created a secret with the below command to pull images from our private (Nexus) registry.
kubectl create secret docker-registry regcred --docker-server=https://nexus-server/nexus/ --docker-username=admin --docker-password=password --docker-email=user@company.com
Created my-private-reg-pod.yaml on the K8S worker node; it has the below content.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: private-reg-container
    image: nexus-server:4546/ubuntu-16:version-1
  imagePullSecrets:
  - name: regcred
Created the pod with the below command:
kubectl create -f my-private-reg-pod.yaml
kubectl get pods
NAME READY STATUS RESTARTS AGE
test-pod 0/1 ImagePullBackOff 0 27m
kubectl describe pod test-pod
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/test-pod to k8s-worker01
Warning Failed 26m (x6 over 28m) kubelet, k8s-worker01 Error: ImagePullBackOff
Normal Pulling 26m (x4 over 28m) kubelet, k8s-worker01 Pulling image "sonatype:4546/ubuntu-16:version-1"
Warning Failed 26m (x4 over 28m) kubelet, k8s-worker01 Failed to pull image "nexus-server:4546/ubuntu-16:version-1": rpc error: code = Unknown desc = Error response from daemon: Get https://nexus-server.domain.com/nexus/v2/ubuntu-16/manifests/ver-1: no basic auth credentials
Warning Failed 26m (x4 over 28m) kubelet, k8s-worker01 Error: ErrImagePull
Normal BackOff 3m9s (x111 over 28m) kubelet, k8s-worker01 Back-off pulling image "nexus-server:4546/ubuntu-16:version-1"
On the terminal, the Nexus login works:
docker login nexus-server:4546
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Am I missing something in this setup?
Since my docker login to Nexus succeeded on the terminal, I deleted my secret and recreated it with:
kubectl create secret generic regcred \
  --from-file=.dockerconfigjson=<path/to/.docker/config.json> \
  --type=kubernetes.io/dockerconfigjson
and it worked.