How does kubectl detect configuration in GCE? - kubernetes

I created a Kube cluster using the kube-up script. If I ssh into the instances, kubectl is configured for the local cluster. My question is: how does kubectl detect the kubeconfig when a cluster is created using the kube-up script?

I tried to do this using a cluster built from HEAD on GCE and didn't have the same experience. On the master instance, kubectl works. But on the nodes, it isn't configured to communicate with the master:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2+", GitVersion:"v1.2.0-alpha.8.82+c9d33ec1b4044e", GitCommit:"c9d33ec1b4044e2a330a9b8b7a9204a99b6c6eec", GitTreeState:"clean"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The reason that it works out of the box on the master is that by default kubectl tries to connect to port 8080 on localhost, which is also the insecure port used on the master (until kubernetes#13598 is resolved).
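For reference, kubectl resolves its target roughly in this order: explicit flags, then the kubeconfig pointed to by the KUBECONFIG environment variable or ~/.kube/config, and only then the localhost:8080 fallback. A quick sketch of pointing it at the master explicitly from a node (the paths, address and token are placeholders, not something kube-up is guaranteed to set up):
# reuse an existing kubeconfig file, if one has been copied to the node
$ kubectl --kubeconfig=/path/to/kubeconfig get nodes
# or override the server directly; you still need valid credentials for the secure port
$ kubectl --server=https://<MASTER_IP> --certificate-authority=/path/to/ca.crt --token=<BEARER_TOKEN> get nodes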

Related

Failed calling webhook "namespace.sidecar-injector.istio.io"

I had made my deployment work with the Istio ingress gateway before, and I am not aware of any changes made on the Istio or k8s side.
When I try to deploy, I see an error on the ReplicaSet side, which is why it cannot create a new pod.
Error creating: Internal error occurred: failed calling webhook
"namespace.sidecar-injector.istio.io": Post
"https://istiod.istio-system.svc:443/inject?timeout=10s": dial tcp
10.104.136.116:443: connect: no route to host
When I try to go inside the api-server and ping 10.104.136.116 (the istiod service IP), it just hangs.
What I have tried so far:
Deleted all coredns pods
Deleted all istiod pods
Deleted all weave pods
Reinstalling istio via istioctl x uninstall --purge
turning off all of the VMs' firewalls:
sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT
sudo iptables -F
restarted all of the nodes
manual istio pod injection (sketched below)
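For reference, the manual injection mentioned above was done roughly like this (a sketch; the manifest file name is a placeholder):
# render the sidecar into the manifest with istioctl, then apply it,
# bypassing the mutating webhook entirely
$ istioctl kube-inject -f my-deployment.yaml | kubectl apply -f -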
Setup
k8s version: 1.21.2
istio: 1.10.3
HA setup
CNI: weave
CRI: containerd
In my case this was related to the firewall. More info can be found here.
The gist of it is that, on GKE at least, you need to open another port, 15017, in addition to 10250 and 443. This is to allow communication from your master node(s) to your VPC.
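On GKE that usually translates into adding a firewall rule that lets the master's CIDR reach the nodes on 15017. A sketch with placeholder values (the rule name, network, master CIDR and node tag are assumptions for your cluster):
$ gcloud compute firewall-rules create allow-master-to-istiod-webhook \
    --network <YOUR_VPC_NETWORK> \
    --allow tcp:15017 \
    --source-ranges <MASTER_CIDR> \
    --target-tags <NODE_TAG>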
I don't have a definite answer as to why this is happening, but kube-apiserver cannot access istiod via the service IP, whereas it can connect when I use the istiod pod IP.
Since I don't have control over the VMs and the lower networking layers, I am not sure whether something was changed there (it was working before).
I made this work by changing my CNI from weave to flannel.
In my case it was due to the firewall. Following this Istio debug guide, I identified that the kubectl get --raw /api/v1/namespaces/istio-system/services/https:istiod:https-webhook/proxy/inject -v4 command was timing out, while all other cluster-internal calls were ok.
The best way to diagnose this is to temporarily open the AWS Security Groups involved to 0.0.0.0/0 for port 15017 and then try again.
If the error no longer shows up, you know that is the part that needs fixing.
I am using EKS with Amazon VPC CNI v1.12.2-eksbuild.1
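If you want to script that temporary check, something like the following should do it (a sketch assuming the relevant group is your node security group; remember to revoke the rule afterwards and replace it with a narrower one):
# temporarily allow the webhook port from anywhere
$ aws ec2 authorize-security-group-ingress \
    --group-id <NODE_SECURITY_GROUP_ID> \
    --protocol tcp --port 15017 --cidr 0.0.0.0/0
# once confirmed, remove the wide-open rule again
$ aws ec2 revoke-security-group-ingress \
    --group-id <NODE_SECURITY_GROUP_ID> \
    --protocol tcp --port 15017 --cidr 0.0.0.0/0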

How to locally access influx db deployed on GCP kubernetes

InfluxDB 1.8 is deployed on Kubernetes using Helm charts. InfluxDB is deployed as a StatefulSet that exposes a service with one running pod. I am able to exec into the running pod using the kubectl exec command and it is running fine. I can also see the databases using the influx CLI after logging into the pod.
But I need to access this InfluxDB on my local system to execute queries directly from my system using the curl command. The deployed InfluxDB has no external IP/DNS; it only has an internal endpoint in the 10.* range.
Can anybody guide me on how I can access InfluxDB on my local system using the curl command?
You can use the kubectl port-forward command to map either a Pod or a Service TCP port to a port on your local machine:
> kubectl port-forward service/your-influxdb-service 8086:8086
(the first 8086 is the local port, the second is the remote/service port)
While that command is running, kubectl will forward all connections to your local port 8086 to the same port of your InfluxDB service. All traffic will be funneled through kubectl and your API server, so this is not exactly suited for high-throughput scenarios, but should be sufficient for occasional debugging and testing.
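With the port-forward running, queries from your laptop go against localhost using the normal InfluxDB 1.x HTTP API (the database and measurement names below are placeholders):
$ curl -G http://localhost:8086/query --data-urlencode "q=SHOW DATABASES"
$ curl -G http://localhost:8086/query --data-urlencode "db=mydb" --data-urlencode "q=SELECT * FROM my_measurement LIMIT 5"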

SSH to Kubernetes pod using Bastion

I have deployed a Google Cloud Kubernetes cluster. The cluster has an internal IP only.
In order to access it, I created a virtual machine bastion-1 which has external IP.
The structure:
My Machine -> bastion-1 -> Kubernetes cluster
The connection to the proxy station:
$ ssh bastion -D 1080
Now, using kubectl through the proxy:
$ HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl get pods
No resources found.
The Kubernetes master server is responding, which is a good sign.
Now, trying to ssh a pod:
$ HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl exec -it "my-pod" -- /bin/bash
error: error sending request: Post https://xxx.xxx.xxx.xxx/api/v1/namespaces/xxx/pods/pod-xxx/exec?command=%2Fbin%2Fbash&container=xxx&container=xxx&stdin=true&stdout=true&tty=true: EOF
Question:
How to allow ssh connection to pod via bastion? What I'm doing wrong?
You can't do this right now.
The reason is that the connections used for commands like exec and proxy use SPDY2.
There's a bug report here with more information.
You'll have to switch to using an HTTP proxy.
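A rough sketch of the HTTP-proxy route, assuming you run an HTTP CONNECT proxy such as tinyproxy or Squid on the bastion listening on 3128 (the proxy software and port are assumptions, not part of the original setup):
# forward the bastion's proxy port to your machine
$ ssh bastion-1 -N -L 3128:127.0.0.1:3128
# then point kubectl at the HTTP proxy instead of the SOCKS one
$ HTTPS_PROXY=http://127.0.0.1:3128 kubectl exec -it my-pod -- /bin/bash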

How to access to the services in kubernetes cluster when ssh to another VMs via proxy?

Consider two VMs built on a bare-metal server and connected through a network, one being the master and the other a worker. I ssh to the master and construct a cluster using kubeadm, which has three pods and a service with type: ClusterIP. So when I want to access the cluster, I run kubectl proxy on the master. Now we can explore the API with curl and wget on the VM we ssh'd into, like this:
$ curl http://localhost:8080/api/
So far, so good! But I want to access the services from my laptop. The localhost above refers to the bare-metal server! How can I access the services through the proxy from my laptop when the cluster is on another machine?
When I do $ curl http://localhost:8080/api/ on my laptop it says:
127.0.0.1 refused to connect
which makes sense! But what is the solution to this?
If you forward port 8080 when SSHing to the master, you can use localhost on your laptop to access the APIs on the cluster.
You can try adding the -L flag to your ssh command:
$ ssh -L 8080:localhost:8080 your.master.host.com
Then the curl to localhost will work.
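With that tunnel up, and kubectl proxy still running on the master on port 8080, the same call now works from the laptop:
$ curl http://localhost:8080/api/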
You can also pass extra arguments to the kubectl proxy command to let the reverse-proxy server listen on a non-default IP address (instead of only 127.0.0.1) and expose it outside:
kubectl proxy --port=8001 --address='<MASTER_IP_ADDRESS>' --accept-hosts="^.*$"
You can get your master IP address by issuing the following command: kubectl cluster-info
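Then, from your laptop, point curl at the master's address instead of localhost. Note that --accept-hosts="^.*$" exposes the unauthenticated proxy to anyone who can reach that address, so treat this as a debugging setup only:
$ curl http://<MASTER_IP_ADDRESS>:8001/api/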

kubectl run command is failing with a connection refused error

I am following the hellonode tutorial on kubernetes.io
http://kubernetes.io/docs/hellonode/
I am getting an error when trying to do the 'Create your pod' section.
When I run this command (replacing PROJECT_ID with the one I created) I get the following:
$ kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I get a similar error just typing kubectl version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.2", GitCommit:"528f879e7d3790ea4287687ef0ab3f2a01cc2718", GitTreeState:"clean"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I'm not sure what to do since I have no experience with kubernetes other than following the steps of this tutorial.
I figured out the issue.
In the Create your cluster section I missed a critical step.
The step I missed was: Please ensure that you have configured kubectl to use the cluster you just created. The "configured" part is a link explaining how to do this:
The steps are as follows:
gcloud config set project PROJECT
gcloud config set compute/zone ZONE
gcloud config set container/cluster CLUSTER_NAME
gcloud container clusters get-credentials CLUSTER_NAME
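After get-credentials writes the cluster entry into your kubeconfig, you can confirm that kubectl is pointed at the new cluster:
$ kubectl config current-context
$ kubectl cluster-info
$ kubectl get nodes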