Running Kubectl inside a worker node - kubernetes

I SSHed into a worker node inside the cluster and ran kubectl there. I created a PV, a PVC, and a Deployment. I read in the documentation that a PV is a cluster-wide object. My question is: what happens in this case? In other words, does running kubectl inside a worker node have the same effect as running it from the master node?

Short answer: yes. kubectl talks to the configured API server, which controls the whole cluster, so it behaves the same regardless of which machine it runs on.
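If you want to verify which API server your kubectl is pointed at (and therefore which cluster it will act on), a quick check from any machine with a kubeconfig looks something like this:
# show cluster endpoints for the current context
kubectl cluster-info
# print only the API server URL of the current context
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'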

Related

" Pod is blocking scale down because it has local storage "

I have a Kubernetes cluster in GCP with the Docker container runtime. I am trying to change the container runtime to containerd. The following steps show what I did.
Added a new node pool (nodes with containerd)
Drained the old nodes
Once I performed the above steps, I started getting the "Pod is blocking scale down because it has local storage" warning message.
You need to add the safe-to-evict annotation to the Pod so that the cluster autoscaler considers the Pod safe to evict:
"cluster-autoscaler.kubernetes.io/safe-to-evict": "true"
You have to add the above annotation to the Pod.
You can read more at : https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-autoscaler-visibility#cluster-not-scalingdown
NoScaleDown example: You found a noScaleDown event that contains a per-node reason for your node. The message ID is "no.scale.down.node.pod.has.local.storage" and there is a single parameter: "test-single-pod". After consulting the list of error messages, you discover this means that the "Pod is blocking scale down because it requests local storage". You consult the Kubernetes Cluster Autoscaler FAQ and find out that the solution is to add a "cluster-autoscaler.kubernetes.io/safe-to-evict": "true" annotation to the Pod. After applying the annotation, cluster autoscaler scales down the cluster correctly.
For further clarification, you can use this command to update the pod's annotation:
kubectl annotate pod <podname> -n <namespace> "cluster-autoscaler.kubernetes.io/safe-to-evict=true"
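Note that annotating a Pod directly only lasts until the Pod is recreated. If the Pod is managed by a Deployment, a sketch like the following (with <deployment-name> and <namespace> as placeholders) patches the pod template instead, so replacement Pods keep the annotation:
# patch the Deployment's pod template so newly created Pods carry the annotation too
kubectl patch deployment <deployment-name> -n <namespace> --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"cluster-autoscaler.kubernetes.io/safe-to-evict":"true"}}}}}'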
I had the same error when using GitLab + Auto DevOps + Google Cloud.
The issue is the cm-acme pods that are spun up to answer the Let's Encrypt challenges.
E.g. we have pods like this
cm-acme-http-solver-d2tak
hanging around in our cluster, so the cluster won't scale down until these pods are destroyed.
A simple
kubectl get pods -A | grep cm-acme
will list all the pods that need to be destroyed with
kubectl delete pod -n {namespace} {pod name}
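If there are many such solver pods, a small shell sketch (assuming they all match the cm-acme-http-solver prefix) can clean them up in one go; double-check the output of the grep above before running it:
# delete every cm-acme solver pod across all namespaces
kubectl get pods -A --no-headers | awk '/cm-acme-http-solver/ {print $1, $2}' | \
  while read ns pod; do kubectl delete pod -n "$ns" "$pod"; done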

Can a worker node in Kubernetes run two different pods?

In Kubernetes we can run pods on worker nodes, and the containers in a pod share resources and an IP address. But what if we run two different pods on the same worker node? Does that mean that both pods will have different IP addresses?
To answer the main question - yes. A node can and does run different pods. Even if you have only one Deployment you can run
kubectl describe nodes my-node
Or even
kubectl get pods --all-namespaces
to see the pods that Kubernetes itself runs for its control plane and per-node components.
About the second question, it really depends on your setup. I'd recommend reading about kube-proxy, which is a pod running on every node (which also relates to your first question) and is in charge of the networking layer and communication within the cluster:
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/
The pods will each have their own IP address within that node, and there are ways to communicate with pods directly:
https://superuser.openstack.org/articles/review-of-pod-to-pod-communications-in-kubernetes/
https://kubernetes.io/docs/concepts/cluster-administration/networking/
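One quick way to see this in practice is to list the pods scheduled on a single node together with their IPs; for example (my-node is a placeholder name, matching the describe command above):
# every pod on the node gets its own pod IP, shown in the IP column
kubectl get pods -A -o wide --field-selector spec.nodeName=my-node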

Kubernetes cluster instance

I have created a Kubernetes cluster and one of the instances in the cluster is inactive.
I want to review the configuration of the inactive instance in the Kubernetes Engine cluster. Which command should I use to check it?
Should I use "kubectl config get-contexts"?
or
kubectl config use-context and kubectl config view?
I am a beginner to cloud, can anyone explain?
kubectl config get-contexts will not help you debug why the instance is failing. Basically it will just show you the list of contexts. A context is a group of cluster access parameters; each context contains a Kubernetes cluster, a user, and a namespace. The current context is the cluster that is currently the default for kubectl. On the other hand, kubectl config view will just print your kubeconfig settings.
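For completeness, a short sketch of those commands side by side (the context name is a placeholder):
kubectl config get-contexts                 # list all contexts and mark the current one
kubectl config use-context <context-name>   # make another cluster the default for kubectl
kubectl config view --minify                # show only the settings of the current context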
The best place to start is the official Kubernetes documentation. It provides good basic steps for troubleshooting your cluster, and some of them apply to GKE as well as to kubeadm or Minikube clusters.
If you're using GKE, you can read the node logs from Stackdriver. This document is an excellent start when you want to check the logs directly in the log viewer.
If one of your instances reports NotReady after listing them with kubectl get nodes, I suggest SSHing into that instance and checking the Kubernetes components (kubelet and kube-proxy). You can view the GKE nodes from the instances page.
Kube Proxy logs:
/var/log/kube-proxy.log
If you want to check the kubelet logs: the kubelet runs as a systemd unit on COS, so its logs can be accessed using journalctl.
Kubelet logs:
sudo journalctl -u kubelet
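A minimal sketch of what that looks like once you have SSHed into the NotReady node (assuming a GKE COS node where the kubelet runs under systemd, as described above):
# check whether the kubelet is running and look for recent errors
sudo systemctl status kubelet
sudo journalctl -u kubelet --since "30 min ago" | grep -i error
# kube-proxy logs on these nodes land in a plain log file
sudo tail -n 100 /var/log/kube-proxy.log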
For further debugging it is worth mentioning that the GKE master is a node inside a Google-managed project and is separate from your cluster's project.
For the detailed master logs you will have to open a Google support ticket. Here is more information about how the GKE cluster architecture works, in case there's something related to the api-server.
Let me know if that was helpful.
You can run the command below to check the status of all the nodes of a Kubernetes cluster. Please note that if you are using the GKE managed service you will not be able to see the status of the master nodes; you will only see the status of the worker nodes.
kubectl get nodes -o wide
kubectl describe node nodename
You can also run the command below to check the status of the control plane components (note that componentstatus is deprecated in newer Kubernetes versions).
kubectl get componentstatus
You can use the command below to get a list of all the nodes in the GKE cluster:
kubectl get nodes -o wide
Once you have the list of nodes, you can describe a node to get its events:
kubectl describe node <Node-Name>
Based on the events you can debug the node.
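If the describe output is too noisy, you can also pull just the events that reference that node; a sketch, with <Node-Name> as a placeholder as above:
# list events across all namespaces whose involved object is the node itself
kubectl get events -A --field-selector involvedObject.kind=Node,involvedObject.name=<Node-Name>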

How to autoscale with GKE

I have a GKE cluster with an autoscaling node pool.
After adding some pods, the cluster starts to autoscale and creates a new node, but the old running pods start to crash randomly.
I don't think it's directly related to autoscaling unless some of your old nodes are being removed. The autoscaling is triggered by adding more pods, but most likely there is something wrong with your application or its connectivity to external services (a database, for example). I would check what's going on in the pod logs:
$ kubectl logs <pod-id-that-is-crashing>
You can also check for any other events on the pods or the deployment (if you are using a Deployment):
$ kubectl describe deployment <deployment-name>
$ kubectl describe pod <pod-id>
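If the pods have already restarted, the previous container's logs usually contain the actual crash reason; for example (pod and container names are placeholders):
$ kubectl logs <pod-id> --previous            # logs of the last terminated container
$ kubectl logs <pod-id> -c <container-name>   # logs of a specific container in the pod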
Hope it helps!

Can we run kubectl from a worker/minion node?

I have my Kubernetes cluster set up and I want to check the nodes from a worker/minion node. Can we run kubectl from a worker/minion node?
Yes, you just need to have the proper client credentials and you can run kubectl from anywhere that has network access to the apiserver. See Sharing Cluster Access with kubeconfig for the instructions to get a kubeconfig file onto your worker node.
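For example, on a kubeadm cluster the admin kubeconfig typically lives at /etc/kubernetes/admin.conf on the control-plane node (that path is the kubeadm default and may differ in your setup), so a minimal sketch run on the worker node is:
# copy the kubeconfig to the worker node's default location and use it
scp <control-plane-host>:/etc/kubernetes/admin.conf ~/.kube/config
kubectl get nodes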