Kubernetes pod that runs kubectl get deploy returns connection error

I have a pod that runs kubectl get deploy <Deployment Name>. However, the command inside the pod fails and returns the message below.
The connection to the server 172.20.0.1:443 was refused - did you specify the right host or port?
Which permissions do I have to add to the ServiceAccount?
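For reference, read access to Deployments is typically granted to a ServiceAccount with a Role and RoleBinding like the following (a minimal sketch with placeholder names and the default namespace; note that a missing RBAC permission would normally produce a Forbidden error, whereas connection refused points at API server reachability):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-reader   # hypothetical name
  namespace: default
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-reader-binding   # hypothetical name
  namespace: default
subjects:
- kind: ServiceAccount
  name: default   # assumption: the pod uses the default ServiceAccount
  namespace: default
roleRef:
  kind: Role
  name: deployment-reader
  apiGroup: rbac.authorization.k8s.io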

Related

Call Pod by IP from another Pod

I've developed a Python script that uses the Python kubernetes client to harvest Pods' internal IPs.
But when I try to make an HTTP request to these IPs from another pod, I get a Connection refused error.
I spin up a temporary curl container:
kubectl run curl --image=radial/busyboxplus:curl -it --rm
And having the internal IP of one of the pods, I try to make a GET request:
curl http://10.133.0.2/stats
and the response is:
curl: (7) Failed to connect to 10.133.0.2 port 80: Connection refused
Both pods are in the same default namespace and use the same default ServiceAccount.
I know that I can call the Pods through the ClusterIP service by which they're load-balanced, but that way I only reach a single Pod at random (depending on which one the service forwards the call to) when I have multiple replicas of the same Deployment.
I need to be able to call each Pod of a multi-replica Deployment separately. That's why I'm going for the internal IPs.
I guess you missed the port number here. It should be like this:
curl POD_IP:PORT/stats
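If you're not sure which port the application listens on, you can read the declared containerPorts off the pod spec (a small sketch; my-pod is a placeholder, and this only shows ports the container actually declares):
# Print the declared containerPorts of a pod (hypothetical pod name)
kubectl get pod my-pod -o jsonpath='{.spec.containers[*].ports[*].containerPort}'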

Kubectl port-forward not working with IBM Cluster

When I do a Kubernetes port-forward with an IBM cluster I get connection refused. I have access to other clusters like Azure Kubernetes Service, and kubectl port-forward works fine there. Also, when I fetch a pod's logs using kubectl logs {pod_name} I get a TLS handshake error, but other kubectl commands like get pod and describe pod work fine.

"Error from server (NotFound): deployments.apps "wordpress" not found" I am getting this error although I've deployed it?

I'm trying to expose the pod that I've already created as a service, but I keep getting the aforementioned error.
The first error is because I had already deployed the pods the other day. But the second error is the main problem.
It would be great if anyone could help me out.
kubectl run ...
is used to create and run a particular image in a pod. [reference]
kubectl expose ...
is used to expose a resource (pod, service, replicationcontroller, deployment, replicaset) as a new k8s service. [reference]
What you are doing is creating a pod with kubectl run and exposing a deployment with kubectl expose deployment. Those are two different resources. That's why you are getting the NotFound error: the specified deployment does not exist.
What you can do is either
kubectl expose pod ...
or create a deployment.
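For example (a minimal sketch; the wordpress name comes from the error message above, and the image and port values are assumptions):
# Option 1: expose the existing pod directly
kubectl expose pod wordpress --port=80 --target-port=80
# Option 2: create a deployment first, then expose it
kubectl create deployment wordpress --image=wordpress
kubectl expose deployment wordpress --port=80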

Execute a command on Kubernetes node from the master

I would like to execute a command on a node from the master. For example, let's say I have a worker node: kubenode01
Now a pod (pod-test) is running on this node. Using "kubectl get pods --output=wide" on the master shows that the pod is running on this node.
Trying to execute a command on that pod from the master results in an error, e.g.:
kubectl exec -ti pod-test -- cat /etc/resolv.conf
The result is:
Error from server: error dialing backend: dial tcp 10.0.22.131:10250: i/o timeout
Any idea?
Thanks in advance
You can execute kubectl commands from anywhere as long as your kubeconfig is configured to point to the right cluster URL (kube-apiserver), with the right credentials, and the firewall allows connecting to the kube-apiserver port.
In your case, note that 10250 is the kubelet's port, not the kube-apiserver's: "error dialing backend" means the kube-apiserver could not reach the kubelet on 10.0.22.131, so check that the apiserver can reach that node on port 10250.
Note that kubectl exec -ti pod-test -- cat /etc/resolv.conf runs in the Pod and not on the Node. If you'd like to run something on the Node, simply use SSH.
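To double-check which endpoint your kubeconfig actually points at (both are standard kubectl commands):
# Show the API server URL for the current context
kubectl cluster-info
# Inspect the active context's cluster and user entries
kubectl config view --minify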
Update:
There are two other alternatives here:
You can create a pod (or debug pod) with a nodeSelector that pins it to the specific node.
If you are trying to debug something in a pod already running on a specific node, you can also try creating an ephemeral debug container.
On newer versions of Kubernetes you can use a debug pod to run something on a specific node, as shown below.
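For example (a minimal sketch; assumes a kubectl version with the debug subcommand and reuses the node name kubenode01 from the question):
# Start an interactive debug pod on the node; the node's root filesystem
# is mounted at /host inside the container.
kubectl debug node/kubenode01 -it --image=busybox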
✌️

Access Kubernetes API with kubectl failed after enabling RBAC

I'm trying to enable RBAC on my cluster, so I added the following lines to kube-apiserver.yml:
- --authorization-mode=RBAC
- --runtime-config=rbac.authorization.k8s.io/v1beta1
- --authorization-rbac-super-user=admin
and then ran systemctl restart kubelet;
The apiserver starts successfully, but I'm not able to run kubectl commands and I get this error:
kubectl get po
Error from server (Forbidden): pods is forbidden: User "kubectl" cannot list pods in the namespace "default"
Where am I going wrong? Or should I create some roles for the kubectl user? If so, how is that possible?
Error from server (Forbidden): pods is forbidden: User "kubectl" cannot list pods in the namespace "default"
You are using the user kubectl to access the cluster with the kubectl utility, but you set --authorization-rbac-super-user=admin, which means your super-user is admin.
To fix the issue, launch kube-apiserver with superuser "kubectl" instead of "admin."
Just update the value of the option: --authorization-rbac-super-user=kubectl.
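With the kube-apiserver.yml lines from the question, that would read (only the super-user value changes):
- --authorization-mode=RBAC
- --runtime-config=rbac.authorization.k8s.io/v1beta1
- --authorization-rbac-super-user=kubectl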
Old question, but for Google searchers, you can use the insecure port:
If your API server runs with the insecure port enabled (--insecure-port), you can also make API calls via that port, which does not enforce authentication or authorization.
Source: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#privilege-escalation-prevention-and-bootstrapping
So add --insecure-port=8080 to your kube-apiserver options and then restart it.
Then run:
kubectl create clusterrolebinding kubectl-cluster-admin-binding --clusterrole=cluster-admin --user=kubectl
Then turn the insecure-port off.