kubectl in pod works unexpectedly - kubernetes

kubectl is installed in the pod. When kubectl is used to execute commands in another pod, some commands work as expected, while others behave abnormally, such as echo.
execute on host (screenshot)
execute in pod (screenshot)
The host and the pod have the same version of kubectl. Is this related to some concept of a tty, or something else? Thanks for any advice.
I've tried adding privileged, tty, and stdin to my kubectl pod, but it didn't work. The relevant YAML is below:
containers:
- name: bridge
  image: registry:5000/bridge:v1
  imagePullPolicy: IfNotPresent
  securityContext:
    privileged: true
  stdin: true
  tty: true

It depends on the container environment and its limitations: permissions, and whether the pod's image contains the software required to run the commands.
It appears that echo is not present in the image deployed to the pod. Although the echo command is part of coreutils and normally comes by default, you can try installing it again; for example, on Ubuntu use the commands below (a quick check of what the pod actually has is sketched after them):
sudo apt-get update
sudo apt-get install coreutils
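Before reinstalling anything, it may be worth confirming whether the target pod really lacks echo; in most shells echo is also a builtin, so a check along these lines (the pod name vnc-test is taken from the question) can show both:
kubectl exec vnc-test -- bash -c 'type echo; which echo'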

It was finally found to be related to the kubectl command in the pod, which was overridden by the following wrapper script; the unquoted $@ re-splits quoted arguments, which would explain why commands like bash -c 'echo ...' behaved abnormally. When I changed back to the original kubectl command, everything worked fine (a safer version of the wrapper is sketched after the session below).
root@bridge-66d98bd46d-zk65m:/data/ww# cat /usr/bin/kubectl
#!/bin/bash
/opt/kubectl1 -ndefault $@
root@bridge-66d98bd46d-zk65m:/data/ww# /opt/kubectl1 -ndefault exec vnc-test -- bash -c 'which echo '
/usr/bin/echo
root@bridge-66d98bd46d-zk65m:/data/ww#
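If the wrapper is needed at all, a minimal sketch of a safer version (assuming the intent is simply to forward every argument to the real binary) would quote the expansion so argument boundaries are preserved:
#!/bin/bash
# forward all arguments verbatim; "$@" preserves quoting and word boundaries
exec /opt/kubectl1 -ndefault "$@"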


"The connection to the server localhost:8080 was refused - did you specify the right host or port?"

I'm on an EC2 instance trying to get my cluster created. I have kubectl already installed, and here are my services and workloads YAML files.
services.yaml
apiVersion: v1
kind: Service
metadata:
  name: stockapi-webapp
spec:
  selector:
    app: stockapi
  ports:
    - name: http
      port: 80
  type: LoadBalancer
workloads.yaml
apiVersion: v1
kind: Deployment
metadata:
  name: stockapi
spec:
  selector:
    matchLabels:
      app: stockapi
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: stockapi
    spec:
      containers:
        - name: stock-api
          image: public.ecr.aws/u1c1h9j4/stock-api:latest
When I try to run
kubectl apply -f workloads.yaml
I get this as an error
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I also tried changing the port in my services.yaml to 8080 and that didn't fix it either
This error occurs when the ~/.kube/config file is missing or not configured correctly on the client, i.e. the machine where you run the kubectl command.
kubectl reads the cluster info, including which host and port to connect to, from the ~/.kube/config file.
If you are using EKS, here's how you can create the config file:
aws eks create kubeconfig file
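In practice, the kubeconfig entry for an EKS cluster can be generated with the AWS CLI; the region and cluster name below are placeholders:
aws eks update-kubeconfig --region us-east-1 --name my-cluster
kubectl get nodes   # should now reach the EKS API endpoint instead of localhost:8080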
Encountered the exact error in my cluster when I executed the "kubectl get nodes" command.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I ran the following commands on the master node and they fixed the error.
apt-get update && apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
apt-get update && apt-get install -y containerd.io
Configure containerd
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
systemctl restart containerd
I was following the instructions on AWS.
In my case, I was on a Mac with Docker Desktop installed, which seemed to bring its own kubectl alongside the Homebrew one.
I traced it down to a link in /usr/local/bin and renamed it to kubectl-old.
Then I reinstalled kubectl, put it on my PATH, and everything worked.
I know this is very specific to my case, but may help others.
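To see which kubectl binary is actually being picked up (and whether it is a symlink left behind by another tool), something like this can help:
which -a kubectl
ls -l /usr/local/bin/kubectl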
I found how to solve this. Run the commands below:
1. sudo -i
2. swapoff -a
3. exit
4. strace -eopenat kubectl version
and then you can run kubectl get nodes again.
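If disabling swap is what fixed it, a common follow-up (standard kubeadm guidance, but check your /etc/fstab before editing) is to make the change persistent across reboots:
# comment out the swap entry so it stays disabled after a reboot
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab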
Cheers !
In my case I had a problem with the certificate authority. I found that out by checking the kubectl config:
kubectl config view
The clusters part was null, instead of having something similar to
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
It was not parsed because of a time difference between my machine and the server (a few seconds was enough).
Running
sudo apt-get install ntp
sudo apt-get install ntpdate
sudo ntpdate ntp.ubuntu.com
solved the issue.
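On current Ubuntu releases, systemd's built-in time synchronization can be used instead of the ntp/ntpdate packages; a quick check and enable looks roughly like this:
timedatectl status
sudo timedatectl set-ntp true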
My intention is to create my own bare-metal Kubernetes Cloud with up to six nodes. I immediately ran into the intermittent issue below.
"The connection to the server 192.168.1.88:6443 was refused - did you specify the right host or port?"
I have performed a two node installation (master and slave) about 20 different times using a multitude of “how to” sites. I would say I have had the most success with the below link…
https://www.knowledgehut.com/blog/devops/install-kubernetes-on-ubuntu
…however every install results in the intermittent issue above.
Given this issue is intermittent, I have to assume the necessary folder structure and permissions exist, the necessary application prerequisites exist, services/processes are starting under the correct context, and the installation was performed properly (using sudo at the appropriate times).
Again, the problem is intermittent. It will work, then stop, and then start again. A reboot sometimes corrects the issue.
Using Ubuntu ubuntu-22.04.1-desktop-amd64.
I have read a lot of comments online concerning this issue, and a majority of the recommended fixes deal with creating directories and installing packages under the correct user context.
I do not believe this is the problem I am having given the issue is intermittent.
Linux is not my strong suit…and I would rather not go the Windows route. I am doing this to learn something new and would like to experience the entire enchilada (from OS install, to Kubernetes cluster install, and then to Docker deployments).
It could be said that problems are the best teachers, and I would agree with that. However, I would like to know for certain that I am starting with a stable working set of instructions before devoting endless hours to troubleshooting bad/incorrect documentation.
Any ideas on how to proceed with correcting this problem?
d0naldashw0rth(at)yahoo(dot)com
I got the same error; after switching from the root user to a regular user (ubuntu, etc.), my problem was fixed.
Exactly the same issues as Donald wrote.
I've tried all the suggestions as you described above.
sudo systemctl stop kubelet
sudo systemctl start kubelet
strace -eopenat kubectl version
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
The cluster crashes intermittently. Sometimes it works, sometimes not.
The connection to the server x.x.x.x:6443 was refused - did you specify the right host or port?
Any other ideas? Thank you!

Not able to run kubectl cp command in Argo workflow

I am trying to run this command in my Argo workflow
kubectl cp /tmp/appendonly.aof redis-node-0:/data/appendonly.aof -c redis -n redis
but I get this error
Error from server (InternalError): an error on the server ("invalid upgrade response: status code 200") has prevented the request from succeeding (get pods redis-node-0)
Surprisingly, when I copy the file from a pod to the local system it works, like this command: kubectl cp redis-node-0:/data/appendonly.aof tmp/appendonly.aof -c redis -n redis
Any idea what might be causing it?
Solution -
Not sure what was causing this issue, but I found this command in the docs and it worked fine:
tar cf - appendonly.aof | kubectl exec -i -n redis redis-node-0 -- tar xf - -C /data
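For reference, the same exec-plus-tar pattern also works in the other direction (copying out of the pod), which may be handy if kubectl cp keeps misbehaving:
kubectl exec -n redis redis-node-0 -c redis -- tar cf - -C /data appendonly.aof | tar xf -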

How to access kube-apiserver on command line?

Looking at the documentation, installing Knative requires a Kubernetes cluster v1.11 or newer with the MutatingAdmissionWebhook admission controller enabled. Checking the documentation for this, I see the following command:
kube-apiserver -h | grep enable-admission-plugins
However, kube-apiserver is running inside a docker container on the master. Logging in to the master as admin, I am not seeing this binary on the command line after install. What steps do I need to take to run this command? It's probably a basic docker question, but I don't see it documented anywhere in the Kubernetes documentation.
So what I really need to know is whether this command line is the best way to set these plugins, and also how exactly to enter the container to execute it.
Where is kube-apiserver located?
Should I enter the container? What is the name of the container, and how do I enter it to execute the command?
I think the answer from @embik that you pointed out in the initial question is quite decent, but I'll try to shed light on some aspects that may be useful to you.
As @embik mentioned in his answer, the kube-apiserver binary actually resides in a particular container within the K8s api-server Pod, so you are free to inspect it; just execute /bin/sh on that Pod:
kubectl exec -it $(kubectl get pods -n kube-system| grep kube-apiserver|awk '{print $1}') -n kube-system -- /bin/sh
You might be able to propagate the desired enable-admission-plugins through the kube-apiserver command inside this Pod; however, any modification will disappear once the api-server Pod re-spawns, e.g. after a master node reboot.
The essential api-server config is located in /etc/kubernetes/manifests/kube-apiserver.yaml. The node agent kubelet manages the kube-apiserver Pod as a static Pod, and whenever its health checks fail, kubelet re-creates the affected Pod from that primary kube-apiserver.yaml file.
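For a change that survives restarts (on a kubeadm-style cluster, assuming the default manifest path), the flag can be set in that static Pod manifest; kubelet notices the file change and re-creates the api-server automatically:
sudo grep -n "enable-admission-plugins" /etc/kubernetes/manifests/kube-apiserver.yaml
# add or extend the flag in the kube-apiserver command list, for example:
#   - --enable-admission-plugins=NodeRestriction,MutatingAdmissionWebhook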
This is old, but still, if it benefits someone in need: @Nick_Kh's answer is good enough, I just want to extend it.
In case the api-server pod fails to give you shell access, you may directly execute the command using kubectl exec like this:
kubectl exec -it kube-apiserver-rhino -n kube-system -- kube-apiserver -h | grep enable-admission-plugins
In this case, I wanted to know what the default enabled admission plugins are, and every time I tried accessing the pod's shell (bash, sh, etc.), I ended up with an error like this:
[root@rhino]# kubectl exec -it kube-apiserver-rhino -n kube-system -- /bin/sh
OCI runtime exec failed: exec failed: container_linux.go:367: starting container process caused: exec: "/bin/sh": stat /bin/sh: no such file or directory: unknown
command terminated with exit code 126

How to Deploy our Customized Thingsboard to Kubernetes Engine?

After making docker images of cassandra, cassandra-setup, application, and zookeeper from my custom Thingsboard,
I tried to deploy them to Kubernetes Engine; there is no error, but it is not running well.
Here are my commands for fetching the YAML files from my GitHub:
curl -L https://raw.githubusercontent.com/Firdauzfan/ThingsboardGSPE/master/docker/k8s/common.yaml > common.yaml
curl -L https://raw.githubusercontent.com/Firdauzfan/ThingsboardGSPE/master/docker/k8s/cassandra.yaml > cassandra.yaml
curl -L https://raw.githubusercontent.com/Firdauzfan/ThingsboardGSPE/master/docker/k8s/zookeeper.yaml > zookeeper.yaml
curl -L https://raw.githubusercontent.com/Firdauzfan/ThingsboardGSPE/master/docker/k8s/tb.yaml > tb.yaml
curl -L https://raw.githubusercontent.com/Firdauzfan/ThingsboardGSPE/master/docker/k8s/cassandra-setup.yaml > cassandra-setup.yaml
and here is my docker image:
https://hub.docker.com/u/firdauzfanani/
Example: when I run the command kubectl create -f cassandra.yaml, the cassandra pod just shows Running but not Ready.
Status screenshot here
If it is shown as not ready even though it is running with no issue (e.g. you can get a shell into it and all the services are running), it could be a misconfiguration of your readinessProbe, which I see defined in the YAML file as follows, but I have no clue regarding its behaviour. Consider that, according to the documentation, it should return 0.
readinessProbe:
  exec:
    command:
    - /bin/bash
    - -c
    - /ready-probe.sh
On the other hand, if you face some kind of error when you try to access the pod, I would suggest (if you haven't done so already) retrieving further information to carry on the troubleshooting by running the following commands:
$ kubectl describe deployments
$ kubectl describe pods
$ kubectl describe services
This series of commands should help you understand better what is going on.
Please run them and edit your initial post with the output, and I can take a look at them.
To get a shell into the pod, run:
$ kubectl get pods (to retrieve pod name)
$ kubectl exec -ti PODNAME -- /bin/bash
UPDATE
I deployed your YAML files; the pods are running correctly (I believe). What is failing is the probe, whose content is the following (a manual check with nodetool is sketched after the script):
cat /ready-probe.sh
if [[ $(nodetool status | grep $POD_IP) == *"UN"* ]]; then
  if [[ $DEBUG ]]; then
    echo "UN";
  fi
  exit 0;
else
  if [[ $DEBUG ]]; then
    echo "Not Up";
  fi
  exit 1;
fi
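To see what the probe sees, it can help to run the same check by hand inside the cassandra pod (the pod name below is an example; take the real one from kubectl get pods):
kubectl exec -ti cassandra-0 -- nodetool status
# the line for this node should start with UN (Up/Normal); anything else keeps the readiness probe failing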

Need to run more than one command

I have to run 2 commands at a time:
bash
service nginx start
How can I pass those by using the following command?
kubectl run nginx --image=nginx --command -- <cmd> <arg1> ... <argN>
kubectl run -it testnew --image=imagename --command -- "/bin/bash","-c","service nginx start && while true; do echo bye; sleep 10;done" --requests=cpu=200m
Not sure how the --command flag works or is supposed to work.
This works for me, in that I get a running nginx with bash looping forever and printing 'bye'.
kubectl run -it testnew --image=nginx -- /bin/bash -c "service nginx start && while true; do echo bye; sleep 10;done"
Instead of this special command, you probably want to create a tweaked image that runs a script on start. That makes it easier to manage what is running and harder to lose the customizations.
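For what it's worth, the --command flag changes how the trailing arguments are interpreted: without it they are passed as args to the image's default entrypoint, while with it they replace the entrypoint itself. A sketch of the same thing using --command would be:
kubectl run testnew --image=nginx --command -- /bin/bash -c "service nginx start && while true; do echo bye; sleep 10; done"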