How to access a local Kubernetes cluster - kubernetes

I have deployed 1 master and 3 nodes on VMs.
I can successfully run the "kubectl" command on the server over SSH. I can deploy pods; all fine.
But I couldn't find out how to run the "kubectl" command from my local machine and manage the K8s cluster. How can I do that?
Thanks!

You should find a kubeconfig file somewhere on the VMs. Copy it to your local machine as $HOME/.kube/config, so kubectl knows how to reach the cluster.
For more information, see the Kubernetes documentation.
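For example, assuming a kubeadm-style setup where the admin kubeconfig lives at /etc/kubernetes/admin.conf on the master VM (the path and host below are placeholders; adjust them to your environment), you could copy it down like this:
mkdir -p $HOME/.kube
# copy the kubeconfig from the master VM to your local machine
scp user@master-vm:/etc/kubernetes/admin.conf $HOME/.kube/config
# verify that kubectl can now reach the cluster
kubectl get nodes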

From your local machine run:
kubectl config get-contexts
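For illustration, the output looks roughly like this (the names below are made up; the asterisk marks the context currently in use):
CURRENT   NAME         CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes   kubernetes   kubernetes-admin   default
          minikube     minikube     minikube           default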
Then run the command below (replace cluster-name with the name of the context you want to switch to):
kubectl config use-context cluster-name
If the cluster you want to communicate with is not listed, it means you don't have a context for that cluster in your kubeconfig file.

Related

Unable to switch from Minikube to AWS EKS on Windows for deployment

I have minikube on my local machine for testing deployments, and I ran commands like
kubectl apply -f testingfile.yaml
and it worked fine. Now I want to do the same on AWS EKS. I have followed all the steps given in https://docs.aws.amazon.com/eks/latest/userguide/sample-deployment.html: created a config file and added it to the path. Commands like eksctl get cluster correctly list the clusters from AWS EKS, but now when I run
kubectl apply -f testingfile.yaml
I get the following output:
deployment.apps/testingfile unchanged
which means it is still applying the command inside minikube and not on AWS EKS. I have also deleted the minikube-related path variables from my environment variables, but I am still unable to switch to AWS EKS for applying. I would like to deploy this on AWS EKS. Let me know what I am missing here.
Check your existing cluster contexts. There will be multiple contexts, one for Minikube and one for EKS:
kubectl config get-contexts
Change the context to EKS; if your kubeconfig is set up correctly, it will be listed:
kubectl config use-context <Name of context>
This way you can switch to another cluster.
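If no EKS context appears at all, the AWS CLI can add one to your kubeconfig (the region and cluster name below are placeholders for your own values):
# write/merge an EKS context into ~/.kube/config
aws eks update-kubeconfig --region us-west-2 --name my-eks-cluster
# confirm which context is now active
kubectl config current-context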

How to use the kubectl command with Kubernetes set up through Rancher running in Docker?

I set up Rancher using Docker on server 1.
I created and added a Kubernetes cluster on server 2, and I wanted to access Kubernetes with the kubectl command locally on server 2, but a localhost:8080 error is displayed.
How can I use the kubectl command locally against the Kubernetes cluster configured through the Docker-based Rancher?
I fixed that issue by modifying the kubeconfig file.
The kubeconfig contents can be found by going into Rancher.
The file to modify is ~/.kube/config
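For reference, a minimal kubeconfig copied out of Rancher looks roughly like the sketch below; the server URL, names and token are placeholders, not real values:
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://rancher.example.com/k8s/clusters/c-xxxxx
users:
- name: my-cluster
  user:
    token: kubeconfig-user-abcde:xxxxxxxx
contexts:
- name: my-cluster
  context:
    cluster: my-cluster
    user: my-cluster
current-context: my-cluster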

Reset the kubectl context to Docker Desktop

I have installed Docker Desktop on my Windows 10 machine and have enabled Kubernetes. When I run the kubectl config current-context command, I get this response: gke_k8s-demo-263903_asia-south1-a_kubia. How do I set up the context to point to docker-desktop? I remember that I worked with GKE earlier, but I am not sure how to reset the context.
Run the following from your local machine; you should see docker-desktop listed:
kubectl config get-contexts
Then run the below:
kubectl config use-context docker-desktop
If the cluster you want to communicate with is not listed, it means you don't have a context for that cluster in your kubeconfig file.
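You can confirm the switch with:
kubectl config current-context
which should now print docker-desktop.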

Kubernetes cluster instance

I have created a Kubernetes cluster, and one of the instances in the cluster is inactive.
I want to review the configuration of the Kubernetes Engine cluster for the inactive instance; which command should I check with?
Should I use "kubectl config get-contexts"?
or
kubectl config use-context and kubectl config view?
I am a beginner to cloud; can anyone explain?
kubectl config get-contexts will not help you debug why the instance is failing. Basically it will just show you the list of contexts. A context is a group of cluster access parameters. Each context contains a Kubernetes cluster, a user, and a namespace. The current context is the cluster that is currently the default for kubectl. On the other hand, kubectl config view will just print your kubeconfig settings.
The best place to start is the official Kubernetes documentation. It provides good basic steps for troubleshooting your cluster. Some of the steps apply to GKE as well as to kubeadm or Minikube clusters.
If you're using GKE, then you can read the node logs from Stackdriver. This document is an excellent starting point when you want to check the logs directly in the log viewer.
If one of your instances reports NotReady after listing them with kubectl get nodes, I suggest SSHing into that instance and checking the Kubernetes components (kubelet and kube-proxy). You can view the GKE nodes from the instances page.
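For illustration only (the node names and versions below are invented), a broken node would show up like this:
kubectl get nodes
NAME                           STATUS     ROLES    AGE   VERSION
gke-my-cluster-pool-abcd1234   Ready      <none>   10d   v1.21.5-gke.1302
gke-my-cluster-pool-efgh5678   NotReady   <none>   10d   v1.21.5-gke.1302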
Kube Proxy logs:
/var/log/kube-proxy.log
If you want to check the kubelet logs, they're a systemd unit in COS that can be accessed using journalctl.
Kubelet logs:
sudo journalctl -u kubelet
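If the log is long, journalctl can narrow it down, for example by following it live or only showing recent entries:
# follow the kubelet log as new lines arrive
sudo journalctl -u kubelet -f
# or only show entries from the last hour
sudo journalctl -u kubelet --since "1 hour ago"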
For further debugging it is worth mentioning that the GKE master is a node inside a Google-managed project and is different from your cluster project.
For detailed master logs you will have to open a Google support ticket. Here is more information about how the GKE cluster architecture works, in case there's something related to the api-server.
Let me know if that was helpful.
You can run the commands below to check the status of all the nodes of a Kubernetes cluster. Please note that if you are using the GKE managed service, you will not be able to see the status of master nodes; you will only see the status of worker nodes.
kubectl get nodes -o wide
kubectl describe node nodename
You can also run the command below to check the status of the control plane components.
kubectl get componentstatus
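On a cluster where the control plane components are reachable, the output looks something like this (illustrative only; on GKE the managed master may not report in this form, and componentstatus is deprecated in newer Kubernetes versions):
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}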
You can use the command below to get a list of all the nodes in a GKE cluster:
kubectl get nodes -o wide
Once you have the list of nodes, you can describe a node to get its events:
kubectl describe node <Node-Name>
Based on the events you can debug the node.
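If you prefer to look at the events on their own rather than inside the describe output, you can also filter cluster events by the node name (the node name is a placeholder):
kubectl get events --field-selector involvedObject.kind=Node,involvedObject.name=<Node-Name>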

Can we run kubectl from a worker/minion node?

I have my Kubernetes cluster set up and I want to check the nodes from a worker/minion node. Can we run kubectl from a worker/minion node?
Yes, you just need to have the proper client credentials and you can run kubectl from anywhere that has network access to the apiserver. See Sharing Cluster Access with kubeconfig for the instructions to get a kubeconfig file onto your worker node.
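For example, assuming you copy the admin kubeconfig from the master onto the worker (the path below is typical for kubeadm but is only an assumption; use whatever kubeconfig your setup provides), you can point kubectl at it explicitly:
# copy the kubeconfig onto the worker node
scp user@master:/etc/kubernetes/admin.conf $HOME/worker-kubeconfig
# use it for a single command
kubectl --kubeconfig $HOME/worker-kubeconfig get nodes
# or export it for the whole shell session
export KUBECONFIG=$HOME/worker-kubeconfig
kubectl get nodes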