I am running kubectl on:
Microsoft Windows [Version 10.0.14393]
Pointing to a Kubernetes cluster deployed in Azure.
A kubectl version command with verbose logging, preceded by an echo of the current time, shows a delay of about 2 minutes before any activity on the API calls.
Note the first log line, which appears about 2 minutes after the command was invoked.
C:\tmp>echo 19:12:50.23
19:12:50.23
C:\tmp>kubectl version --kubeconfig=C:/Users/jbafd/.kube/config-hgfds-acs-containerservice-1 -v=20
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2
017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"windows/amd64"}
I0610 **19:14:58.311364 9488 loader.go:354]** Config loaded from file C:/Users/jbafd/.kube/config-hgfds-acs-containerservice-1
I0610 19:14:58.313864 9488 round_trippers.go:398] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl.exe/v1.6.4 (windows/amd64) kub
ernetes/d6f4332" https://xxjjmaster.australiasoutheast.cloudapp.azure.com/version
I0610 19:14:58.519869 9488 round_trippers.go:417] GET https://xxjjmaster.australiasoutheast.cloudapp.azure.com/version in 206 milliseconds
Other kubectl commands (get nodes etc.) exhibit the same delay.
Flushing the DNS cache didn't resolve the issue, and the API requests themselves look responsive. Running the command as admin didn't help either.
What other operations is kubectl attempting before loading the config?
There could be two reasons for the latency:
kubectl is on a network drive (often the H: drive), so kubectl is first copied to your system and then run
the .kube/config file is on a network drive
To summarise: if either of these is on a network drive, you will face this delay.
If that doesn't work out, you can try one more thing: run kubectl <command> -v=20, which will show how long each step takes.
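For example, to check whether the binary itself resolves to a network share and, if so, to run a local copy instead (the H: and C:\tools paths below are only illustrative, not taken from the question):
C:\tmp>where kubectl
C:\tmp>copy H:\tools\kubectl.exe C:\tools\
C:\tmp>C:\tools\kubectl.exe version --kubeconfig=C:/Users/jbafd/.kube/config-hgfds-acs-containerservice-1 -v=20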
I have a GitHub Actions workflow that substitutes a value in a deployment manifest. I use kubectl patch --local=true to update the image. This used to work flawlessly until now; today the workflow started to fail with a Missing or incomplete configuration info error.
I am running kubectl with the --local flag, so the config should not be needed. Does anyone know what could be the reason why kubectl suddenly started requiring a config? I can't find any useful info in the Kubernetes GitHub issues, and hours of googling didn't help.
Output of the failed step in GitHub Actions workflow:
Run: kubectl patch --local=true -f authserver-deployment.yaml -p '{"spec":{"template":{"spec":{"containers":[{"name":"authserver","image":"test.azurecr.io/authserver:20201230-1712-d3a2ae4"}]}}}}' -o yaml > temp.yaml && mv temp.yaml authserver-deployment.yaml
error: Missing or incomplete configuration info. Please point to an existing, complete config file:
1. Via the command-line flag --kubeconfig
2. Via the KUBECONFIG environment variable
3. In your home directory as ~/.kube/config
To view or setup config directly use the 'config' command.
Error: Process completed with exit code 1.
Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0",
GitCommit:"ffd68360997854d442e2ad2f40b099f5198b6471", GitTreeState:"clean",
BuildDate:"2020-11-18T13:35:49Z", GoVersion:"go1.15.0", Compiler:"gc",
Platform:"linux/amd64"}
As a workaround I installed kind (it does take longer for the job to finish, but at least it's working and it can be used for e2e tests later).
Added this step:
- name: Setup kind
  uses: engineerd/setup-kind@v0.5.0
- run: kubectl version
Also use --dry-run=client as an option for your kubectl command.
I do realize this is not the proper solution.
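For reference, a sketch of the patch step from the question with --dry-run=client added (whether --local=true is still needed alongside it may depend on the kubectl version):
kubectl patch --local=true --dry-run=client -o yaml \
  -f authserver-deployment.yaml \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"authserver","image":"test.azurecr.io/authserver:20201230-1712-d3a2ae4"}]}}}}' \
  > temp.yaml && mv temp.yaml authserver-deployment.yaml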
You still need to set the config to access the Kubernetes cluster. Even though you are modifying the file locally, you are still executing a kubectl command that has to be run against the cluster. By default, kubectl looks for a file named config in the $HOME/.kube directory.
The error current-context is not set indicates that there is no current context set for the cluster and kubectl cannot be executed against it. You can create a context for a Service Account using this tutorial.
Exporting KUBERNETES_MASTER environment variable should do the trick:
$ export KUBERNETES_MASTER=localhost:8081 # port 8081 instead of the default 8080, so the error below proves the variable is picked up
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate
:"2020-08-13T16:12:48Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8081 was refused - did you specify the right host or port?
# Notice the port 8081 in the error message ^^^^^^
Now patch also should work as always:
$ kubectl patch --local=true -f testnode.yaml -p '{"metadata":{"managedFields":[]}}' # to get the file content use -o yaml
node/k8s-w1 patched
Alternatively, you can update kubectl to a later version. (v1.18.8 works fine even without the trick)
Explanation:
The change was likely introduced by PR #86173 ("stop defaulting kubeconfig to http://localhost:8080").
The change was reverted in #90243 (Revert "stop defaulting kubeconfig to http://localhost:8080") for the later 1.18.x versions; see issue #90074 (kubectl --local requires a valid kubeconfig file) for the details.
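In the GitHub Actions workflow from the question, the same trick could be expressed as a step-level environment variable (the step name below is hypothetical, the command is the one from the question):
- name: Patch deployment image
  env:
    KUBERNETES_MASTER: localhost:8081   # dummy endpoint, only there to satisfy the client config check
  run: |
    kubectl patch --local=true -f authserver-deployment.yaml \
      -p '{"spec":{"template":{"spec":{"containers":[{"name":"authserver","image":"test.azurecr.io/authserver:20201230-1712-d3a2ae4"}]}}}}' \
      -o yaml > temp.yaml && mv temp.yaml authserver-deployment.yaml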
I ended up using sed to replace the image string:
- name: Update manifests with new images
  working-directory: test/cloud
  run: |
    sed -i "s~image:.*$~image: ${{ steps.image_tags.outputs.your_new_tag }}~g" your-deployment.yaml
Works like a charm now.
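To double-check the substitution locally before committing, an extra step along these lines could be added (the file name is taken from the snippet above):
grep -n "image:" your-deployment.yaml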
Often, when I want to check out what's wrong with Pods that go into a CrashLoopBackOff or Error state, I do the following: I change the pod command to sleep 10000 and run kubectl exec -ti POD_NAME bash in my terminal to further inspect the environment and code. The problem is that it terminates very soon and without any exception, which has made it quite annoying to inspect the contents of my pod.
My config
The result of kubectl version:
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.1", GitCommit:"d224476cd0730baca2b6e357d144171ed74192d6", GitTreeState:"clean", BuildDate:"2020-01-15T15:50:38Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.7", GitCommit:"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4", GitTreeState:"clean", BuildDate:"2019-12-11T12:34:17Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
The result of helm version:
version.BuildInfo{Version:"v3.0.1", GitCommit:"7c22ef9ce89e0ebeb7125ba2ebf7d421f3e82ffa", GitTreeState:"clean", GoVersion:"go1.13.4"}
OS: MacOS Catalina 10.15.2
Docker version: 19.03.5
I run my stuff using helm and helmfile, and my releases usually include a Deployment and a Service.
Let me know if any additional info can help.
Any help is appreciated!
Try to install Golang in version 1.13.4+. Your server components were built with go1.12.12, which causes a lot of compatibility problems, so you have to update it. If you are upgrading from an older version of Go, you must first remove the existing version.
Take a look here: upgrading-golang.
Apply changes in your pod definition file, add following lines under container definition:
#Just spin & wait forever
command: [ "/bin/bash", "-c", "--" ]
args: [ "trap : TERM INT; sleep infinity & wait" ]
This will keep your container alive until it is told to stop. Using trap and wait will make your container react immediately to a stop request; without them, stopping would take a few seconds.
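For context, a minimal sketch of where these lines go in a container spec (the Deployment name and image below are placeholders, not from the question):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: debug-sleep            # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: debug-sleep
  template:
    metadata:
      labels:
        app: debug-sleep
    spec:
      containers:
      - name: main             # placeholder container name
        image: ubuntu:20.04    # placeholder image that ships /bin/bash
        # Just spin & wait forever, reacting immediately to TERM/INT
        command: [ "/bin/bash", "-c", "--" ]
        args: [ "trap : TERM INT; sleep infinity & wait" ]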
If you think it is a networking problem, use tcpdump.
Tcpdump is a tool that captures network traffic and helps you troubleshoot some common networking problems. Here is a quick way to capture traffic on the host to the target container with IP 172.28.21.3.
We are going to exec into one container and try to reach another container:
kubectl exec -ti testbox-2460950909-5wdr4 -- /bin/bash
$ curl http://ip:port
On the host running the container, we are going to capture traffic related to the target container's IP:
$ tcpdump -i any host ip
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
20:15:59.903566 IP 172.28.128.132.60358 > 172.28.21.3.5000: Flags [S], seq 3042274422, win 28200, options [mss 1410,sackOK,TS val 10056152 ecr 0,nop,wscale 7], length 0
20:15:59.903566 IP 172.28.128.132.60358 > 172.28.21.3.5000: Flags [S], seq 3042274422, win 28200, options [mss 1410,sackOK,TS val 10056152 ecr 0,nop,wscale 7], length 0
20:15:59.905481 ARP, Request who-has 172.28.21.3 tell 10.244.27.0, length 28
20:16:00.907463 ARP, Request who-has 172.28.21.3 tell 10.244.27.0, length 28
20:16:01.909440 ARP, Request who-has 172.28.21.3 tell 10.244.27.0, length 28
20:16:02.911774 IP 172.28.128.132.60358 > 172.28.21.3.5000: Flags [S], seq 3042274422, win 28200, options [mss 1410,sackOK,TS val 10059160 ecr 0,nop,wscale 7], length 0
20:16:02.911774 IP 172.28.128.132.60358 > 172.28.21.3.5000: Flags [S], seq 3042274422, win 28200, options [mss 1410,sackOK,TS val 10059160 ecr 0,nop,wscale 7], length 0
As you can see, there is trouble on the wire: the kernel fails to route the packets to the target IP.
You can also debug the pod using the kubectl logs command:
Running kubectl logs -p fetches logs from existing resources at the API level. This means that logs of terminated pods will be unavailable using this command.
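For example, to fetch logs from the previous (crashed) container instance (the pod and container names here are placeholders):
$ kubectl logs --previous POD_NAME -c CONTAINER_NAME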
The best way is to have your logs centralized via logging agents or directly pushing these logs into an external service.
Alternatively and given the logging architecture in Kubernetes, you might be able to fetch the logs directly from the log-rotate files in the node hosting the pods. However, this option might depend on the Kubernetes implementation as log files might be deleted when the pod eviction is triggered.
Take a look here: pod-debugging.
Take a look at the official documentation: kubectl-exec.
You can do something like this:
kubectl exec -it --request-timeout=500s POD_NAME bash
Minikube is not starting and shows several error messages.
kubectl version gives the following port-related message:
iqbal#ThinkPad:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
You didn't give many details, but I ran into similar minikube issues with Kubernetes 1.12 a few days ago.
Indeed, the compatibility matrix between Kubernetes and Docker recommends running:
Docker 18.06 + Kubernetes 1.12 (Docker 18.09 is not supported yet).
Thus, make sure your Docker version is NOT above 18.06. Then run the following:
# clean up
minikube delete
minikube start --vm-driver="none"
kubectl get nodes
If you are still encountering issues, please give more details, namely minikube logs.
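Also, a quick way to confirm which Docker version the host is running before retrying (with the none driver, minikube uses the host's Docker daemon):
$ docker version --format '{{.Server.Version}}'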
If you want to change the VM driver, add the appropriate --vm-driver=xxx flag to minikube start. Minikube supports the following drivers:
virtualbox
vmwarefusion
KVM2
KVM (deprecated in favor of KVM2)
hyperkit
xhyve
hyperv
none (Linux-only) - this driver can be used to run the Kubernetes cluster components on the host instead of in a VM. This can be useful for CI workloads which do not support nested virtualization.
For example, if your VM driver is virtualbox, then use:
$ minikube delete
$ minikube start --vm-driver=virtualbox
I am trying to install a kops cluster on AWS, and as a prerequisite I installed kubectl as per the instructions provided here:
https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl
But when I try to verify the installation, I am getting the below error.
ubuntu#ip-172-31-30-74:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I am not sure why, because I had set up a cluster in a similar way earlier and everything worked fine.
Now I wanted to set up a new cluster, but I am kind of stuck on this.
Any help appreciated.
Two things:
If every instruction was followed properly and you are still facing the same issue, @VAS's answer might help.
However, in my case I was trying to verify with kubectl as soon as I had deployed the cluster. It should be noted that, depending on the size of the master and worker nodes, the cluster takes some time to come up.
Once the cluster was up, kubectl was able to communicate with it. As silly as it sounds, I waited 15 minutes or so until my master was successfully running, and then everything worked fine.
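One way to wait instead of guessing is to poll kops until the cluster reports as ready (a sketch; the placeholders follow the style of the next answer, and it assumes kops can already reach the cluster's state store and API):
until kops validate cluster --name <your_cluster_name> --state s3://<somes3bucket>; do sleep 30; done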
The connection to the server localhost:8080 was refused - did you specify the right host or port?
This error usually means that your kubectl config is not correct: either it points to the wrong address or the credentials are wrong.
If you have successfully created a cluster with kops, you just need to export its connection settings to the kubectl config:
kops export kubecfg --name=<your_cluster_name> --config=~/.kube/config
If you want to use a separate config file for this cluster, you can do it by setting the environment variable:
export KUBECONFIG=~/.kube/your_cluster_name.config
kops export kubecfg --name your_cluster_name --config=$KUBECONFIG
You can also create a kubectl config for each team member using KOPS_STATE_STORE:
export KOPS_STATE_STORE=s3://<somes3bucket>
# NAME=<kubernetes.mydomain.com>
kops export kubecfg ${NAME}
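After exporting the config, a quick sanity check that kubectl is now pointed at the cluster:
$ kubectl config current-context
$ kubectl get nodes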
In my particular case, I forgot to configure kubectl after the installation, which resulted in exactly the same symptoms.
More specifically I forgot to create and populate the config file in $HOME/.kube directory. You can read about how to do this properly here, but this should suffice to make the error go away:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Every time I try to exec into a pod running Alpine Linux through the minikube dashboard, it crashes and closes the connection with the following error:
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:262: starting container process caused "exec: \"bash\": executable file not found in $PATH"
CONNECTION CLOSED
Output from the command "kubectl version" reads as follows:
Client Version: version.Info{Major:"1", Minor:"8",
GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8",
GitVersion:"v1.8.0",
GitCommit:"0b9efaeb34a2fc51ff8e4d34ad9bc6375459c4a4",
GitTreeState:"clean", BuildDate:"2017-11-29T22:43:34Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}
Can anybody please advise? I can run other containers perfectly OK as long as they have bash rather than ash.
Many thanks
Normally, Alpine Linux doesn't include bash.
Have you tried executing into the container with any of the following?
/bin/ash
/bin/sh
ash
sh
So, for example, kubectl exec -it my-alpine-shell-293fj2fk-fifni2 -- sh should do the job.
Every time I try and exec into a pod
You didn't specify the command you provided to kubectl exec but based on your question I'm going to assume it is kubectl exec -it $pod -- bash
The problem, as the error message states, is that the container image you are using does not provide bash. Many, many "slim" images don't ship with bash because of the dependencies it would bring with it.
If you want a command that works across all images, use sh: 90% of the time, if bash is present, /bin/sh is symlinked to it, and in the other cases (as you mentioned, with ash or dash or whatever) using sh will still work and allow you to determine whether you need to adjust the command to specifically request a different shell.
Thus, kubectl exec -it $pod -- sh is the command I would expect to work.
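If in doubt, a quick way to see which shells the image actually provides ($pod is the same placeholder as above):
kubectl exec -it $pod -- sh -c 'ls -l /bin/sh; command -v bash || echo "bash not installed"'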