Kubectl documentation without starting Kubernetes

I have installed a K8S cluster on laptop using Kubeadm and VirtualBox. It seems a bit odd that the cluster has to be up and running to see the documentation as shown below.
praveensripati@praveen-ubuntu:~$ kubectl explain pods
Unable to connect to the server: dial tcp 192.168.0.31:6443: connect: no route to host
Any workaround for this?

See "kubectl explain — #HeptioProTip"
Behind the scenes, kubectl just made an API request to my Kubernetes cluster, grabbed the current Swagger documentation of the API version running in the cluster, and output the documentation and object types.
Try kubectl help as an offline alternative, but that won't be as complete (limited to kubectl itself).

So the rather sobering news is that AFAIK there's no out-of-the-box way to do it, though you could totally write a kubectl plugin (it has become rather trivial as of 1.12). But for now, the best I can offer is the following:
# figure out which endpoint kubectl uses to retrieve docs:
$ kubectl -v9 explain pods
# from above I learn that in my case it's apparently
# https://192.168.64.11:8443/openapi/v2 so let's curl that:
$ curl -k https://192.168.64.11:8443/openapi/v2 > resources-docs.json
From here you can, for example, use jq to query for the descriptions, as shown below. It's not as nice as a proper explain, but it's a good enough workaround until someone writes an offline docs-query kubectl plugin.
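For instance, a couple of minimal jq queries against the downloaded file might look like this (the definition keys follow the io.k8s.api.<group>.<version>.<Kind> pattern; the exact paths can vary with the cluster version, so treat these as a sketch):
# description of the Pod resource itself
$ jq '.definitions["io.k8s.api.core.v1.Pod"].description' resources-docs.json
# description of a specific field, e.g. pod.spec.containers
$ jq '.definitions["io.k8s.api.core.v1.PodSpec"].properties.containers.description' resources-docs.json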

The 'explain' documentation lives in the kube-apiserver and its resource definitions, hence the need to connect to it through kubectl explain to get any docs. This is different from the standard, very basic CLI help from kubectl, which is compiled into the kubectl Go code itself.
So there is no real workaround other than setting up a dummy Kubernetes cluster and pointing kubectl at it. Please note that help for CRDs might not be available, since it lives in the deployed CRDs themselves.
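As a sketch, any throwaway local cluster will do; for example with kind (assuming you have it installed; minikube works just as well):
# spin up a disposable single-node cluster whose only job is to serve the docs
$ kind create cluster --name docs
$ kubectl explain pods
# tear it down when you are done
$ kind delete cluster --name docs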

Related

How to debug a kubernetes cluster?

As the question shows, I have very little knowledge about Kubernetes. Following a tutorial, I made a Kubernetes cluster to run a web app on a local server using Minikube. I have applied the Kubernetes components and they are running, but the web server does not respond to HTTP requests. My problem is that the whole system I have created is like a black box for me, and I have literally no idea how to open it and see where the problem is. Can you explain how I can debug such implementations in a wise way? Thanks.
use a tool like https://github.com/kubernetes/dashboard
You can install kubectl and kubernetes-dashboard in a k8s cluster (https://kubernetes.io/docs/tasks/tools/install-kubectl/), and then use the kubectl command to query information about a pod or container, or use the kubernetes-dashboard web UI to query information about the cluster.
For more information, please refer to https://kubernetes.io/
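Since the cluster in the question was created with Minikube, the dashboard is one command away:
# opens the Kubernetes dashboard of the running minikube cluster in your browser
$ minikube dashboard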
kubectl get pods
will show you all your pods and their status. A quick check to make sure that all is at least running.
If there are pods that are unhealthy, then
kubectl describe pod <pod name>
will give some more information, e.g. image not found, etc.
kubectl logs <pod name> --all-containers
is often the next step; use -f to follow the logs as you exercise your API.
It is possible to hook up images running in a pod to most IDE debuggers, but the instructions will differ depending on the language and IDE used...
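Since the symptom here is a web server that does not answer HTTP requests, one more quick check is worth sketching (the service and pod names below are placeholders for whatever your tutorial created):
# check that the service actually has pod endpoints behind it
$ kubectl get endpoints my-web-service
# bypass the service entirely and talk to the pod directly
$ kubectl port-forward <pod name> 8080:80
$ curl http://localhost:8080/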

Recovery from kubectl crash

What is the best way to troubleshoot when kubectl doesn't respond or exits with a timeout? How can I get it working again?
Both kubectl and helm went down on my cluster while I was installing a Helm chart.
General advice:
Check whether your kubectl is connecting to the correct kube-apiserver endpoint. Take a look at your kubeconfig, which is stored in $HOME/.kube/config by default. Try a simple curl against the endpoint to make sure it is not a DNS problem, etc.; see the sketch after this list.
Take a look at your nodes' logs by SSHing into the nodes that you have: see this for more detailed instructions and log locations.
Once you have more information, you can get started on investigating the actual problem.
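A minimal sketch of those first two checks (the address below is a placeholder; take the real server URL from your kubeconfig):
# which endpoint is kubectl actually talking to?
$ kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# can we reach it at all? /healthz should answer with "ok"
$ curl -k https://192.168.0.31:6443/healthz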

How to access the Kubernetes API in Go and run kubectl commands

I want to access my Kubernetes cluster API in Go to run the equivalent of a kubectl command to get the available namespaces in my k8s cluster, which is running on Google Cloud.
My sole purpose is to get the namespaces available in my cluster by running a kubectl command; kindly let me know if there is any alternative.
You can start with kubernetes/client-go, the Go client for Kubernetes, made for talking to a Kubernetes cluster (not through kubectl, though: directly through the Kubernetes API).
It includes a NamespaceLister, which helps list Namespaces.
See "Building stuff with the Kubernetes API — Using Go" from Vladimir Vivien
Michael Hausenblas (Developer Advocate at Red Hat) points in the comments to documentation at using-client-go.cloudnative.sh:
A versioned collection of snippets showing how to use client-go.
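A minimal sketch of listing namespaces with client-go (assuming a recent client-go version, and that the program runs outside the cluster and reads your kubeconfig; inside a pod you would use rest.InClusterConfig instead):
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// build a client from the local kubeconfig, just like kubectl does
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// the equivalent of `kubectl get namespaces`
	namespaces, err := clientset.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, ns := range namespaces.Items {
		fmt.Println(ns.Name)
	}
}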

Where is the Kubernetes YAML/JSON configuration files documentation?

Hi,
I'm looking for the documentation for Kubernetes's configuration files. The ones used by kubectl (e.g. kubectl create -f whatever.yaml).
Basically, the Kubernetes equivalent of this Docker Compose document.
I did search a lot, but I didn't find much, only 404 links from old Stack Overflow questions.
You could use the official API docs, but a much more user-friendly way on the command line is the explain command. For example, I never remember what exactly goes into the spec of a pod, so I do:
$ kubectl explain Deployment.spec.template.spec
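And if you want the whole field tree at once instead of drilling down level by level, kubectl explain also has a --recursive flag:
# print every field under the Deployment spec, with its type
$ kubectl explain deployment.spec --recursive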

Unable to resolve hostname using `kubectl logs` or `kubectl exec`

I've created a Kubernetes cluster using CoreOS on AWS and I'm having trouble communicating with nodes from the master.
For example, operations like kubectl exec or kubectl logs fail with an error similar to the following:
Error from server: dial tcp: lookup ip-XXX-X-XXX-XXX.eu-west-1.compute.internal: no such host
I've found some issues on GitHub that describe the problem, so I know the team is aware of this bug, but I would like to ask here whether it's possible to use some workaround until it gets addressed somehow.
One workaround mentioned was to use the --hostname-override flag, but as I'm on AWS, this flag is ignored (see #22984).
Related issues on GitHub: #22770 #22063.
Have you made sure you're using the right context?
kubectl config use-context my-cluster-name
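To see which contexts exist and which one is currently active:
# list all contexts in your kubeconfig; the current one is marked with a *
$ kubectl config get-contexts
$ kubectl config current-context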