Pods on Google Kubernetes Engine pending? - kubernetes

Trying to set up a JupyterHub server on Google Kubernetes Engine following this tutorial. Everything went through fine, but when I install the jupyterhub/jupyterhub image with Helm, the pods are always stuck in Pending:
kubectl --namespace=jupyter-server get pod
NAME                     READY   STATUS    RESTARTS   AGE
hub-6dbd4df8b8-nqvnf     0/1     Pending   0          17h
proxy-7bb666576c-fx726   0/2     Pending   0          17h
Even after 17 hours.
The Helm version is 2.6.2, as suggested in the tutorial, and I am using three f1-micro instances in the Kubernetes cluster. Are these instances too small? Thanks for any advice.

Try describing the pods, and then describing the nodes in the cluster, to get more information about why exactly they're still pending:
kubectl describe po/hub-6dbd4df8b8-nqvnf -n jupyter-server
kubectl describe po/proxy-7bb666576c-fx726 -n jupyter-server
kubectl describe nodes
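If the f1-micro nodes really are too small, the Events section of the pod description will typically contain a FailedScheduling warning along these lines (illustrative output, not taken from this cluster):
Events:
  Type     Reason            Message
  ----     ------            -------
  Warning  FailedScheduling  0/3 nodes are available: 3 Insufficient cpu.
An f1-micro has only a fraction of a shared vCPU and 0.6 GB of RAM, so Insufficient cpu or Insufficient memory is a likely cause here.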

Related

How to get information related to etcd in my local kubernetes cluster created using Rancher Desktop

So my question is pretty straightforward: how do I get information about etcd in the Kubernetes cluster I created using Rancher Desktop? I get the following three pods when I start the cluster.
❯ kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
❯ kubectl get pods --all-namespaces
NAMESPACE     NAME                                      READY   STATUS    RESTARTS       AGE
kube-system   coredns-d76bd69b-6n2tn                    1/1     Running   24 (38m ago)   35d
kube-system   local-path-provisioner-6c79684f77-g44x4   1/1     Running   24 (38m ago)   35d
kube-system   metrics-server-7cd5fcb6b7-fl7ws           1/1     Running   24 (38m ago)   35d
I want to see the storage consumption of etcd. Any help is much appreciated, since I couldn't find anything related to this so far.
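As a general pointer (not specific to Rancher Desktop): on clusters where etcd runs as a static pod in kube-system, its database size can be read with etcdctl, for example on a kubeadm-style cluster (the node name is a placeholder):
kubectl -n kube-system exec etcd-<node-name> -- etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint status --write-out=table
The DB SIZE column shows the storage consumption. Note that no etcd pod appears in the listing above: Rancher Desktop's k3s backend uses an embedded datastore (kine/SQLite) by default rather than a separate etcd pod.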

Kubectl: No resources found even though there are pods running in the namespace

I have 2 pods running in the default namespace, as shown below:
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
default       alpaca-prod                1/1     Running   0          36m
default       alpaca-test                1/1     Running   0          4m26s
kube-system   coredns-78fcd69978-xd7jw   1/1     Running   0          23h
But when I try to get deployments, I do not see any:
kubectl get deployments
No resources found in default namespace.
Can someone explain this behavior?
I am running k8s on Minikube.
I think these are pods that were created without a Deployment, StatefulSet, or DaemonSet.
You can run a standalone pod like this with a single command, e.g.:
kubectl run nginx-test --image=nginx -n default
pods created via a DaemonSet usually end with -xxxxx
pods created via a Deployment usually end with -xxxxxxxxxx-xxxxx
pods created via a StatefulSet usually end with -0, -1, etc.
pods created without an owning resource usually have exactly the name you specified, e.g. nginx-test, nginx, etc.
So my guess is that these are standalone Pod resources (the last option); you can confirm this with the check sketched below.
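A direct way to confirm (using one of the pod names from the question) is to look at the pod's ownerReferences; a pod managed by a Deployment is owned by a ReplicaSet, while a standalone pod has none:
kubectl get pod alpaca-prod -n default -o jsonpath='{.metadata.ownerReferences}'
If the output is empty, the pod was created directly, and kubectl get deployments is correctly reporting that no Deployment exists.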

Enabling NodeLocalDNS fails

We have 2 clusters on GKE: dev and production. I ran this command on the dev cluster:
gcloud beta container clusters update "dev" --update-addons=NodeLocalDNS=ENABLED
Everything went great: node-local-dns pods are running and all works. The next morning I ran the same command on the production cluster, and node-local-dns fails to run. I noticed that both __PILLAR__LOCAL__DNS__ and __PILLAR__DNS__SERVER__ in the YAML aren't replaced with proper IPs. I tried to change those variables in the config YAML myself, but GKE keeps overwriting them back to the __PILLAR__ placeholders...
The only difference between the clusters is that dev runs 1.15.9-gke.24 and production runs 1.15.11-gke.1.
Apparently version 1.15.11-gke.1 has a bug.
I reproduced this on 1.15.11-gke.1 and can confirm that the node-local-dns Pods fall into a CrashLoopBackOff state:
node-local-dns-28xxt   0/1   CrashLoopBackOff   5   5m9s
node-local-dns-msn9s   0/1   CrashLoopBackOff   6   8m17s
node-local-dns-z2jlz   0/1   CrashLoopBackOff   6   10m
When I checked the logs:
$ kubectl logs -n kube-system node-local-dns-msn9s
2020/04/07 21:01:52 [FATAL] Error parsing flags - Invalid localip specified - "__PILLAR__LOCAL__DNS__", Exiting
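You can verify that the template variables were never substituted by inspecting the DaemonSet itself (a quick check, assuming the default object name used by the NodeLocal DNSCache add-on):
kubectl get daemonset node-local-dns -n kube-system -o yaml | grep PILLAR
On an affected cluster, the literal __PILLAR__LOCAL__DNS__ and __PILLAR__DNS__SERVER__ placeholders appear where real IP addresses should be.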
Solution:
Upgrading to 1.15.11-gke.3 helped. First you need to upgrade your master and then your node pool; example gcloud commands are sketched after the output below. On this version everything runs nicely and smoothly:
$ kubectl get daemonsets -n kube-system node-local-dns
NAME             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                               AGE
node-local-dns   3         3         3       3            3           addon.gke.io/node-local-dns-ds-ready=true   44m
$ kubectl get pods -n kube-system -l k8s-app=node-local-dns
NAME                   READY   STATUS    RESTARTS   AGE
node-local-dns-8pjr5   1/1     Running   0          11m
node-local-dns-tmx75   1/1     Running   0          19m
node-local-dns-zcjzt   1/1     Running   0          19m
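For reference, the upgrade itself can be done with gcloud along these lines (the cluster and node-pool names here are placeholders; adjust them to your environment):
# upgrade the control plane first
gcloud container clusters upgrade my-cluster --master --cluster-version 1.15.11-gke.3
# then upgrade the node pool to the same version
gcloud container clusters upgrade my-cluster --node-pool default-pool --cluster-version 1.15.11-gke.3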
As for manually fixing this particular DaemonSet YAML, I wouldn't recommend it, as GKE's auto-repair and auto-upgrade features will overwrite it sooner or later anyway.
I hope this was helpful.

Kubernetes pods are pending, not active

If I run this:
kubectl get pods -n kube-system
I get this output:
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-6fdd4f6856-6bl64                0/1     Pending   0          1h
coredns-6fdd4f6856-xgrbm                0/1     Pending   0          1h
kubernetes-dashboard-65c76f6c97-c69jg   0/1     Pending   0          13m
Supposedly I need a Kubernetes scheduler in order to actually launch containers? Does anyone know how to start a kube-scheduler?
Rather than a Kubernetes scheduler issue, this looks like not having enough resources on your nodes (or no nodes at all in your cluster) to schedule any workloads. You can check your nodes with:
$ kubectl get nodes
Also, you likely won't see any control-plane resources in the kube-system namespace if you are using a managed service like EKS or GKE.
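Beyond listing the nodes, two follow-up checks narrow this down (illustrative commands, nothing cluster-specific assumed):
# how much CPU/memory each node can actually offer to pods
kubectl describe nodes | grep -A 5 Allocatable
# scheduling failures recorded as events
kubectl get events -n kube-system --field-selector reason=FailedScheduling
If kubectl get nodes returns nothing, or the allocatable CPU and memory are already exhausted, the Pending status is expected until nodes are added or resized.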

coredns containers are running on only one master

I have set up a Kubernetes HA cluster with 3 masters, version 1.14.2. I observed that both coredns containers are running on only one master; if I stop this master, coredns stops too. Is there any configuration to spread the coredns containers across the remaining masters?
You need to deploy the DNS horizontal autoscaler and then tune the autoscaling parameters.
Follow this guide:
https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/
Then follow these steps:
kubectl apply -f https://raw.githubusercontent.com/epasham/docker-repo/master/k8s/dns-horizontal-autoscaler.yaml
kubectl get deployment --namespace=kube-system
kubectl edit configmap dns-autoscaler --namespace=kube-system
Look for this line:
linear: '{"coresPerReplica":256,"min":1,"nodesPerReplica":16}'
Update the min value to 2, as shown below:
kubectl edit configmap dns-autoscaler --namespace=kube-system
linear: '{"coresPerReplica":256,"min":2,"nodesPerReplica":16}'
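For context, the linear mode computes the replica count roughly as (per the linked docs):
replicas = max( ceil(cores / coresPerReplica), ceil(nodes / nodesPerReplica), min )
With the defaults above on a 3-node cluster with, say, 6 cores, that is max(ceil(6/256), ceil(3/16), 1) = 1, so only one replica is kept. Raising min to 2 forces a second coredns pod, which the scheduler can then place on another master.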
You should then see two coredns pods listed, as below:
master $ kubectl get po --namespace=kube-system | grep dns
coredns-78fcdf6894-l54db          1/1   Running   0   1h
coredns-78fcdf6894-vbk6q          1/1   Running   0   1h
dns-autoscaler-6f888f5957-fwpgl   1/1   Running   0   2m