Kubernetes: not able to attach to a container in a pod

I am not able to attach to a container in a pod, and receive the message below:
Error from server (Forbidden): pods "sleep-76df4f989c-mqvnb" is forbidden: cannot exec into or attach to a privileged container
Could someone please let me know what I am missing?

This seems to be a permission (possibly RBAC) issue.
See Kubernetes pod security policies.
For instance, gluster/gluster-kubernetes issue 432 points to Azure PR 1961, which disables cluster-admin rights (although you can customize/override the admission-controller flags passed to the API server).
So it depends on the nature of your Kubernetes environment.
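The wording of that error matches the DenyEscalatingExec admission controller, which, when enabled on the API server, rejects exec and attach requests to pods running with escalated privileges. If you control the control plane, it is worth checking the API server's admission flags; the manifest path below assumes a static-pod kube-apiserver and is only an example:

```
# /etc/kubernetes/manifests/kube-apiserver.yaml (path is an assumption)
spec:
  containers:
  - command:
    - kube-apiserver
    # if DenyEscalatingExec appears here, exec/attach into privileged
    # containers is denied cluster-wide:
    - --enable-admission-plugins=NodeRestriction,DenyEscalatingExec
```

Removing DenyEscalatingExec from the list and restarting the API server re-allows exec/attach into privileged containers, at the cost of the protection it provides.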

I have not enabled RBAC at all. What I have done is enable Istio, and all the pods are now running with a sidecar.
I am not able to attach or exec into pods which have the Istio sidecar.
I am able to attach or exec into pods which do not have the istio-proxy sidecar.
Need help here.
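One way to confirm the sidecar is what triggers this: list the containers Istio injected into the pod and check whether any of them (typically the istio-init init container) runs privileged. The pod name below is taken from the question:

```
# names of the init containers and regular containers in the pod
kubectl get pod sleep-76df4f989c-mqvnb \
  -o jsonpath='{.spec.initContainers[*].name} {.spec.containers[*].name}'
# whether the init containers run privileged
kubectl get pod sleep-76df4f989c-mqvnb \
  -o jsonpath='{.spec.initContainers[*].securityContext.privileged}'
```

If a privileged container shows up there, the error above is about the pod as a whole, not about the specific container you targeted.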

Related

Not able to pull image in pod, getting ImagePullBackOff

I have my Kubernetes nodes on different VMs; each VM has one Kubernetes node, for a total of 7 worker nodes.
While trying to create a pod on one node I get an ImagePullBackOff error, while a docker pull on the same node succeeds.
The rest of the worker nodes are working fine.
My Docker registry is already set as an insecure registry in daemon.json.
Please help.
ImagePullBackOff is very often a typo in the image name. Make sure you specified the name correctly.
You should describe the Pod using kubectl describe pod <name>; it will show a more detailed message about why pulling fails.
The Kubernetes service account attached to the Pod may not be able to pull the image. The service account must have the correct imagePullSecrets.
When no service account is configured, the Pod uses the default service account.
kubectl get sa -o yaml
This will list the imagePullSecrets attached to each service account. Check that you have created the correct secret and attached it to the service account.
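If the secret exists but is not attached, attaching it to the service account is a small change. A sketch, assuming a secret named my-registry-secret has already been created with kubectl create secret docker-registry in the same namespace (both names are placeholders):

```
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default              # the service account the Pod uses
imagePullSecrets:
  - name: my-registry-secret # hypothetical secret name
```

Equivalently: kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "my-registry-secret"}]}'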
Resolved the issue.
The issue was the container runtime: the affected nodes were using containerd as the runtime, so I configured containerd on those nodes to access my insecure registry. Everything was OK after that.
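For reference, containerd does not read Docker's daemon.json; insecure registries have to be configured in containerd's own config file. A sketch for a containerd 1.x config.toml, with the registry address as a placeholder (on older containerd releases the section may be named [plugins.cri.registry.mirrors."..."] instead):

```
# /etc/containerd/config.toml -- registry.local:5000 is a placeholder
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.local:5000"]
  endpoint = ["http://registry.local:5000"]
```

After editing, restart containerd (systemctl restart containerd) so the change takes effect.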

Deployed a service on k8s but it is not showing any pods, even when it failed

I have deployed a k8s service; however, it's not showing any pods. This is what I see:
kubectl get deployments
It should create the pods in the default namespace.
kubectl get nodes (this shows me nothing)
How do I troubleshoot a failed deployment? The test-control-plane is the one deployed by kind; that is the k8s distribution I'm using.
kubectl get nodes
If the above command shows nothing, it means there are no nodes in your cluster, so where would your workload run?
You need at least one worker node in the K8s cluster so the deployment can schedule the pod on it and run the application.
You can check the worker nodes using the same command:
kubectl get nodes
You can debug further and check the cause of the issue using:
kubectl describe deployment <name of your deployment>
To find out what really went wrong, first follow the steps described by Harsh Manvar in his answer. Perhaps obtaining that information will help you find the problem. If not, check the logs of your deployment: list your pods, see which ones did not start properly, then check their logs.
You can also use kubectl describe on the pods to see in more detail what went wrong. Since you are using kind, I include a list of known errors for you.
You can also see this visual guide on troubleshooting Kubernetes deployments and 5 Tips for Troubleshooting Kubernetes Deployments.
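The advice in both answers can be condensed into a short checklist; names in angle brackets are placeholders:

```
kubectl get nodes                    # at least one node must be Ready
kubectl get deployments              # desired vs. available replicas
kubectl describe deployment <name>   # conditions and recent events
kubectl get pods                     # were any pods created at all?
kubectl describe pod <pod-name>      # scheduling and image-pull events
kubectl logs <pod-name>              # container logs, if it started
```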

Kubernetes can't pull certain images from IBM Cloud registry

My pod shows the following:
Warning Failed 21m (x4 over 23m) kubelet, 10.76.199.35 Failed to pull image "registryname/image:version1.2": rpc error: code = Unknown desc = Error response from daemon: unauthorized: authentication required
but other images will work. The output of
ibmcloud cr images
doesn't show anything different about the images that don't work. What could be going wrong here?
Given this is in Kubernetes and you can see the image in ibmcloud cr images, it is most likely a misconfiguration of your imagePullSecrets.
If you do kubectl get pod <pod-name> -o yaml you will be able to see which imagePullSecrets are in scope for the pod and check whether they look correct (it could be worth comparing them to a pod that is working).
It's worth noting that if your cluster is an instance of the IBM Cloud Kubernetes Service, a default imagePullSecret for your account is added to the default namespace; therefore, if you are running the pod in a different Kubernetes namespace, you will need to do additional steps to make that work. This is a good place to start for information on this topic:
https://console.bluemix.net/docs/containers/cs_images.html#other
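If the problem is the namespace, one common workaround is to copy the default registry secret from the default namespace into the namespace the pod runs in. The secret name varies by account, so list them first with kubectl get secrets -n default; the names below are placeholders:

```
# <registry-secret> and my-namespace are placeholders
kubectl get secret <registry-secret> -n default -o yaml \
  | sed 's/namespace: default/namespace: my-namespace/' \
  | kubectl create -f -
```

The copied secret then has to be referenced from the pod's imagePullSecrets (or its service account) in that namespace.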
It looks like you haven't logged in to the IBM Cloud Container Registry. If you haven't done this yet, you should log in with this command:
ibmcloud cr login
Other possible issues:
Docker is not installed.
The Docker client is not logged in to IBM Cloud Container Registry.
Your IBM Cloud access token might have expired.
You can find more troubleshooting instructions here

Problem getting pods stats from kubelet and cri-o

We are running Kubernetes with the following configuration:
On-premise Kubernetes 1.11.3, cri-o 1.11.6 and CentOS7 with UEK-4.14.35
I can't make crictl stats return pod information; it returns only an empty list. Has anyone run into the same problem?
Another issue we have is that when I query the kubelet for stats/summary it returns an empty pods list.
I think that these two issues are related, although I am not sure which one of them is the problem.
I would recommend checking the kubelet service to verify its health status and to debug any suspicious events within the cluster. I assume that the CRI-O runtime engine selects the kubelet as the main provider of Pod information because of the kubelet's role in managing the Pod lifecycle.
systemctl status kubelet -l
journalctl -u kubelet
In case you find some errors or dubious events, share them in a comment below this answer.
Alternatively, you can use metrics-server, which will collect Pod metrics in the cluster once you enable the kube-apiserver flags for the Aggregation Layer. Here is a good article about Horizontal Pod Autoscaling and monitoring resources via Prometheus.
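For reference, the Aggregation Layer flags mentioned above are set on kube-apiserver. A sketch using the certificate paths kubeadm generates; on an on-premise setup like this one the paths are assumptions and must match your own PKI:

```
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--requestheader-allowed-names=front-proxy-client
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
```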

Why can't I delete heapster and kubernetes-dashboard in the kube-system namespace on GKE

I want to have full control over what I do with my single-node cluster (savings... lol), but somehow I can't: even if I delete the deployment, it respawns.
As mentioned in another answer, you cannot delete them directly via the Kubernetes API; however, you can delete them indirectly via the Google Container Engine API.
To remove the dashboard, run gcloud container clusters update $CLUSTER_NAME --update-addons=KubernetesDashboard=DISABLED.
To disable heapster, you need to disable monitoring using gcloud container clusters update $CLUSTER_NAME --monitoring-service=none (it may actually require disabling another add-on too; I can't recall at the moment).
See https://cloud.google.com/sdk/gcloud/reference/container/clusters/update for the commands referenced above.
Heapster is configured as a cluster addon. The addon manager will reconcile it to its preconfigured state if you change or delete it.
You are stuck with it.
Even if you delete the heapster pod, it restarts automatically. I managed to stop it by scaling the deployment down to zero, as shown below:
kubectl scale --replicas=0 deployment/heapster-v1.6.0-beta.1 --namespace=kube-system
And you can find the exact name of the heapster deployment in the output of the command below:
kubectl get deployments --namespace=kube-system
By the way, you can find more options for reducing resource usage here:
https://cloud.google.com/kubernetes-engine/docs/how-to/small-cluster-tuning