kubectl -- create -h works but kubectl create -h doesn't - kubernetes

Running minikube on Windows 10, why doesn't minikube kubectl create -h work, while minikube kubectl -- create -h does (w.r.t. showing help for create)?

This is the way minikube works:
Minikube has a subcommand kubectl that will execute the kubectl bundled with minikube (because you can also have one installed outside of minikube, on your plain system).
Minikube has to know the exact command to pass to its kubectl, so minikube splits the command at --.
It is used to differentiate minikube's arguments from kubectl's.
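For illustration, a minimal comparison (the exact help text and behavior may vary between minikube versions):
# everything after -- is passed verbatim to the bundled kubectl
minikube kubectl -- create -h
# without --, minikube parses -h itself, so you get help for minikube's kubectl subcommand rather than kubectl's help for create
minikube kubectl create -h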

Related

Unknown image flag when creating deployment using Minikube kubectl

I am getting an unknown image flag error when creating a deployment using minikube on Windows 10 cmd. Why?
C:\WINDOWS\system32>minikube kubectl create deployment nginxdepl --image=nginx
Error: unknown flag: --image
See 'minikube kubectl --help' for usage.
C:\WINDOWS\system32>
When using the kubectl bundled with minikube, the command is a little different.
From the documentation, your command should be:
minikube kubectl -- create deployment nginxdepl --image=nginx
The difference is the -- right after kubectl
The problem is your command: you are mixing kubectl and minikube.
minikube is for managing your one-node local dev cluster.
kubectl is used for interacting with your cluster.
You should be using the following command:
kubectl create deployment nginxdepl --image nginx
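Either way, you can verify the result afterwards with a quick check like the one below (substitute minikube kubectl -- for kubectl if you only have the bundled client):
# confirm the nginxdepl deployment and its pod were created
kubectl get deployments
kubectl get pods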

`kubectl` not found. If you need it, try: 'minikube kubectl -- get pods -A'

I installed minikube on Windows 10. I am able to start minikube:
C:\WINDOWS\system32>minikube start
* minikube v1.15.1 on Microsoft Windows 10 Pro 10.0.18363 Build 18363
* Using the hyperv driver based on existing profile
* Starting control plane node minikube in cluster minikube
* Restarting existing hyperv VM for "minikube" ...
* Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
But there is a warning in the above output (2nd last line) that says:
kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
After that I executed this command too: minikube kubectl -- get pods -A
But I am still getting the below error when trying plain kubectl:
C:\WINDOWS\system32>kubectl
'kubectl' is not recognized as an internal or external command,
operable program or batch file.
Minikube installs kubectl inside of itself.
So to use the kubectl which you installed via minikube, you have to prepend the command arguments with minikube kubectl --. For example:
# the same as `kubectl version --client`
minikube kubectl -- version --client
For convenience, you may want to add an alias in your shell configuration.
Source: https://minikube.sigs.k8s.io/docs/handbook/kubectl/
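For example, a minimal alias for a POSIX shell such as bash or zsh (for Windows cmd, see the doskey example further below):
# add to ~/.bashrc or ~/.zshrc so plain kubectl calls go through minikube's bundled client
alias kubectl="minikube kubectl --"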
kubectl is wrapped by minikube.
Don't forget to add -- after minikube kubectl:
minikube kubectl -- describe pod kube-scheduler-minikube --namespace kube-system
minikube kubectl -- get pods --namespace kube-system
You have installed minikube, but kubectl is not installed on your system as part of the minikube package.
When you do minikube start, it says that kubectl is not present and that, if you need it, you can use minikube kubectl instead.
This is also mentioned here
If you already have kubectl installed, you can now use it to access your shiny new cluster
It means that the kubectl might not be present on your machine or that it is not added to your PATH.
You can follow these instructions to install it either by downloading the executable or by using curl:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.19.0/bin/windows/amd64/kubectl.exe
After that add the binary to PATH.
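For example, assuming you saved kubectl.exe to a folder like C:\kubectl (the folder name here is just an assumption, adjust it to wherever you put the binary):
:: session-only PATH update in cmd; for a permanent change use the Environment Variables dialog
set PATH=%PATH%;C:\kubectl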
You can run kubectl version --client to ensure the correct version is downloaded.
Use doskey.exe to create an alias for kubectl.
Example:
doskey kubectl="%PROGRAMFILES%\Kubernetes\Minikube\minikube.exe" kubectl -- $*
You might need to update the path if you've installed minikube somewhere else.

Cannot access the proxy of a kubernetes pod

I created a kubernetes cluster on my debian 9 machine using kind.
Which apparently works because I can run kubectl cluster-info with valid output.
Now I wanted to fool around with the tutorial on the Learn Kubernetes Basics site.
I have already deployed the app
kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
and started the kubectl proxy.
Output of kubectl get deployments
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-bootcamp   1/1     1            1           17m
My problem now is: when I try to see the output of the application using curl, I get:
Error trying to reach service: 'dial tcp 10.244.0.5:80: connect: connection refused'
My commands
export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/
For the sake of completeness I can run curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/ and I get valid output.
The steps in this tutorial module present the environment as if you were working on one of the cluster nodes,
and the command tries to check connectivity to the service locally on the node.
However, in your case, since your Kubernetes runs in a Docker (kind) cluster, the curl command is most likely run from the host that is serving the Docker containers the Kubernetes nodes run in.
It might be possible to use docker exec to get inside the kind node and try to run the curl command from there.
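For example (the node container name depends on your kind cluster name, kind-control-plane being the default; the pod IP below is the one from your error message and 8080 is the bootcamp image's port):
# open a shell inside the kind node container
docker exec -it kind-control-plane bash
# then, from inside the node, curl the pod IP directly (look it up with: kubectl get pod -o wide)
curl http://10.244.0.5:8080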
Hope this helps.
I'm also following the tutorial using kind and got it to work by forwarding the port:
kubectl port-forward $POD_NAME 8001:8001
Try adding :8080 after $POD_NAME:
curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME:8080/proxy/

Difference between kubectl and minikube-kubectl

I'm using macOS for my development environment.
If I install minikube, kubectl will use the local cluster created by minikube as its default. I found I can use the kubectl command with a minikube prefix, just like below:
$ minikube kubectl get pods
So I tried it, and the kubectl download process began. From that I can tell that the kubectl on my Mac and the kubectl in minikube are not identical. But what does this mean?
It's just a wrapper for kubectl, downloading it when not installed, otherwise executing the client.
See the command with '--help' below.
$ minikube kubectl --help
Run the kubernetes client, download it if necessary.
Usage:
minikube kubectl [flags]
Flags:
-h, --help help for kubectl
Global Flags:
[...]
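You can see that they are two separate binaries by comparing the clients directly (the output will of course differ depending on what you have installed):
# standalone kubectl on your PATH, if any
kubectl version --client
# the kubectl bundled with (and downloaded by) minikube
minikube kubectl -- version --client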

How to access kube-apiserver on command line?

Looking at the documentation, installing Knative requires a Kubernetes cluster v1.11 or newer with the MutatingAdmissionWebhook admission controller enabled. Checking the documentation for this, I see the following command:
kube-apiserver -h | grep enable-admission-plugins
However, kube-apiserver is running inside a Docker container on the master. Logging in as admin to the master, I am not seeing this on the command line after install. What steps do I need to take to run this command? It's probably a basic Docker question, but I don't see this documented anywhere in the Kubernetes documentation.
So what I really need to know is whether this command line is the best way to set these plugins, and also how exactly to enter the container to execute the command.
Where is kube-apiserver located?
Should I enter the container? What is the name of the container, and how do I enter it to execute the command?
I think that the answer from @embik that you've pointed to in the initial question is quite decent, but I'll try to shed light on some aspects that can be useful for you.
As @embik mentioned in his answer, the kube-apiserver binary actually resides in a particular container within the K8s api-server Pod, so you're free to check it; just execute /bin/sh on that Pod:
kubectl exec -it $(kubectl get pods -n kube-system| grep kube-apiserver|awk '{print $1}') -n kube-system -- /bin/sh
You might be able to propagate the desired enable-admission-plugins through the kube-apiserver command inside this Pod; however, any modification will disappear once the api-server Pod re-spawns, i.e. on a master node reboot, etc.
The essential api-server config is located in /etc/kubernetes/manifests/kube-apiserver.yaml. The node agent kubelet controls the kube-apiserver static Pod, and each time its health checks fail, kubelet re-creates the affected Pod from the primary kube-apiserver.yaml manifest file.
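For example, on a kubeadm-style cluster you could check the flag straight from the manifest on the master node (the path assumes the default kubeadm layout):
# show the admission plugins currently configured for the static api-server Pod
sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml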
This is old, but posting it in case it benefits someone. @Nick_Kh's answer is good enough; I just want to extend it.
In case the api-server pod fails to give you the shell access, you may directly execute the command using kubectl exec like this:
kubectl exec -it kube-apiserver-rhino -n kube-system -- kube-apiserver -h | grep enable-admission-plugins
In this case, I wanted to know which admission plugins are enabled by default, and every time I tried accessing the pod's shell (bash, sh, etc.), I ended up with an error like this:
[root@rhino]# kubectl exec -it kube-apiserver-rhino -n kube-system -- /bin/sh
OCI runtime exec failed: exec failed: container_linux.go:367: starting container process caused: exec: "/bin/sh": stat /bin/sh: no such file or directory: unknown
command terminated with exit code 126