I have a Kubernetes cluster installed inside a VM on AWS EC2. When I try to get the Kube API URL, I get this:
[root@node-1 centos]# kubectl cluster-info | grep 'Kubernetes master' | awk '/http/ {print $NF}'
https://172.xxx.xxx.xxx:6443
The problem is that we are now trying to create a kubeconfig file so I can run kubectl from my local machine, but kubectl doesn't seem to be able to reach the Kube API.
In my kubeconfig, I tried using both the public IP and the DNS name of the EC2 VM, but neither works (the connection times out after a long while).
My firewall (security group) on EC2 is open, so that shouldn't be a problem.
Any ideas on how I can expose the cluster running inside EC2 to the external world/kubectl?
Thanks!
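A quick way to tell whether this is a networking problem at all is to probe the API server port from the local machine. The address below is a placeholder for the EC2 public IP or DNS name; even a certificate or authorization error would mean the port is reachable, while a timeout points at the security group or routing:
$ curl -k https://<ec2-public-ip-or-dns>:6443/version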
InfluxDB 1.8 is deployed on Kubernetes using Helm charts. InfluxDB is deployed as a StatefulSet that exposes a Service with one running pod. I am able to get into the running pod using the kubectl exec command and it's running fine. I can also see the databases using the influx CLI after logging into the pod.
But I need to access this InfluxDB from my local system to execute queries directly using the curl command. The deployed InfluxDB has no external IP/DNS; it only has an internal (cluster) endpoint that usually starts with 10.x.x.x.
Can anybody guide me on how I can access InfluxDB from my local system using the curl command?
You can use the kubectl port-forward command to map either a Pod's or a Service's TCP port to a port on your local machine:
> kubectl port-forward service/your-influxdb-service 8086:8086
Here the first 8086 is the local port and the second is the remote/Service port.
While that command is running, kubectl will forward all connections to your local port 8086 to the same port of your InfluxDB service. All traffic will be funneled through kubectl and your API server, so this is not exactly suited for high-throughput scenarios, but should be sufficient for occasional debugging and testing.
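While the forward is active, you can query InfluxDB as if it were local. A minimal sketch, assuming the default InfluxDB 1.8 HTTP API on port 8086 (the database name mydb and measurement name are placeholders):
$ curl -G 'http://localhost:8086/query' --data-urlencode 'q=SHOW DATABASES'
$ curl -G 'http://localhost:8086/query' --data-urlencode 'db=mydb' --data-urlencode 'q=SELECT * FROM my_measurement LIMIT 5'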
I have deployed 1 master and 3 nodes on VMs.
I can successfully run the "kubectl" command over the servers' SSH CLI. I can deploy pods; all is fine.
But I couldn't figure out how to run the "kubectl" command from my local machine and manage the K8s cluster. How can I do that?
Thanks!
You likely have a kubeconfig file somewhere on the VMs. Copy it to your local machine as $HOME/.kube/config, so kubectl knows how to access the cluster.
For more information, see the kubernetes documentation.
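For example, on a kubeadm-provisioned cluster the admin kubeconfig usually lives at /etc/kubernetes/admin.conf on the master node (the user and host below are placeholders for your setup, and you may need root/sudo on the master to read the file):
$ scp <user>@<master-ip>:/etc/kubernetes/admin.conf $HOME/.kube/config
$ kubectl get nodes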
From your local machine run:
kubectl config get-contexts
Then run the below (replace cluster-name with the cluster name you want to communicate with):
kubectl config use-context cluster-name
If the cluster you want to communicate with is not listed, it means you don't yet have a kubeconfig context for that cluster.
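In that case, one way to get a context (a sketch; it assumes you have copied the remote cluster's kubeconfig to a local file, here called remote-cluster.config) is to merge it into your existing config and then switch to it:
$ KUBECONFIG=$HOME/.kube/config:$HOME/remote-cluster.config kubectl config view --flatten > /tmp/merged-config
$ mv /tmp/merged-config $HOME/.kube/config
$ kubectl config get-contexts
$ kubectl config use-context <context-name>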
Environment of kubectl: Windows 10.
Kubectl version: https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/windows/amd64/kubectl.exe
Hello. I've just installed a Kubernetes cluster on Google Cloud Platform, then ran the following command:
gcloud container clusters get-credentials my-cluster --zone europe-west1-b --project my-project
It successfully added the credentials at %UserProfile%\.kube\config
But when I try kubectl get pods it returns Unable to connect to the server: EOF. My computer accesses the internet through a corporate proxy. How and where can I provide a cert file to kubectl so that it uses the cert with all requests? Thanks.
You would get EOF if there is no response to kubectl's API calls within a certain time (the idle timeout is 300 seconds by default).
Try increasing the cluster idle timeout, or you may need a VPN (or something similar) to reach those pods.
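Since the machine goes through a corporate proxy, it may also be worth telling kubectl about the proxy explicitly and, if the proxy intercepts TLS, trusting the proxy's CA. This is only a sketch: the proxy host/port and CA file path are placeholders, and the cluster entry name is assumed to follow gcloud's gke_<project>_<zone>_<cluster> convention:
> set HTTPS_PROXY=http://proxy.mycorp.example:3128
> kubectl config set-cluster gke_my-project_europe-west1-b_my-cluster --certificate-authority=C:\certs\corporate-proxy-ca.pem --embed-certs=true
> kubectl get pods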
I have deployed Google cloud Kubernetes cluster. The cluster has internal IP only.
In order to access it, I created a virtual machine bastion-1 which has external IP.
The structure:
My Machine -> bastion-1 -> Kubernetes cluster
The connection to the bastion (used as a SOCKS proxy):
$ ssh bastion -D 1080
Now running kubectl through the proxy:
$ HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl get pods
No resources found.
The Kubernetes master server is responding, which is a good sign.
Now, trying to exec into a pod:
$ HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl exec -it "my-pod" -- /bin/bash
error: error sending request: Post https://xxx.xxx.xxx.xxx/api/v1/namespaces/xxx/pods/pod-xxx/exec?command=%2Fbin%2Fbash&container=xxx&container=xxx&stdin=true&stdout=true&tty=true: EOF
Question:
How can I get an ssh-like (exec) connection to a pod via the bastion? What am I doing wrong?
You can't do this right now.
The reason is that the connections used for commands like exec and proxy use SPDY.
There's a bug report here with more information.
You'll have to switch to using an HTTP proxy instead.
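A rough sketch of how that could look (it assumes you can install an HTTP CONNECT proxy such as tinyproxy on the bastion, listening on its default port 8888):
$ ssh bastion -L 8888:127.0.0.1:8888   # forward the bastion's HTTP proxy port to your machine
$ HTTPS_PROXY=http://127.0.0.1:8888 kubectl exec -it "my-pod" -- /bin/bash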
While running Minikube, I want to connect to a server that has the annoying habit of announcing itself to a service registry with its internal IP address from inside its pod.
However, for legacy reasons I have to connect to this registry first and retrieve that server's IP address from it. The only way to access this server from my dev machine, it seems to me, is to bridge into Minikube's internal network. Is there an easy way to do this?
You can add a route to the Kubernetes internal network from your local machine.
Add a route to the internal network using the Minikube IP address:
$ sudo ip route add 172.17.0.0/16 via $(minikube ip) # linux
$ sudo route -n add 172.17.0.0/16 $(minikube ip) # OSX
The subnet to use can be found with the kubectl get service command.
Test the route by deploying a test container and connecting to it from localhost:
$ kubectl run monolith --image=kelseyhightower/monolith:1.0.0 --port=80
$ IP=$(kubectl get pod -l run=monolith -o jsonpath='{.items[0].status.podIP }')
$ curl http://$IP
{"message":"Hello"}
You can also add a route to the Kubernetes master:
sudo route -n add 10.0.0.0/24 $(minikube ip)
This is only useful for local development; for exposing pods in production you should use a NodePort or LoadBalancer Service.
If I understand correctly: You are trying to expose a server from within minikube to your host network. This can be done a few ways:
The first is to create a NodePort Service for your server/pod (a minimal example follows the table below). You can then run minikube service list to get the URL for your service:
$ minikube service list
|-------------|----------------------|-----------------------------|
| NAMESPACE | NAME | URL |
|-------------|----------------------|-----------------------------|
| default | kubernetes | No node port |
| default | <your-service> | http://192.168.99.100:<port>|
| kube-system | kube-dns | No node port |
| kube-system | kubernetes-dashboard | http://192.168.99.100:30000 |
|-------------|----------------------|-----------------------------|
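As a minimal sketch of that first option, you can expose an existing pod as a NodePort Service and ask Minikube for its URL (the pod name monolith and port 80 are just reused from the earlier example; substitute your own server's pod and port):
$ kubectl expose pod monolith --type=NodePort --port=80 --name=monolith-svc
$ minikube service monolith-svc --url   # prints something like http://192.168.99.100:<node-port>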
The second is to use kubectl proxy, which exposes the Kubernetes API on a local port and lets you reach services and pods through the API server's proxy endpoints. This method does not require you to create a Service; it should work with your current configuration.
kubectl proxy --port=<local-port>
With the proxy running, a service is reachable at http://localhost:<local-port>/api/v1/namespaces/<namespace>/services/<service-name>:<service-port>/proxy/ (and a pod at .../pods/<pod-name>:<port>/proxy/).
If you are just trying to get the IP address of a pod, this command should work (from How to know a Pod's own IP address from a container in the Pod?):
kubectl get pod $POD_NAME --template={{.status.podIP}}
Also, if you just need to access Minikube's internal network, you can use:
minikube ssh
which will drop you into Minikube's VM.
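From inside that VM you can reach cluster-internal addresses directly, for example (the pod IP is whatever the earlier podIP command returned, and port 80 is an assumption):
$ minikube ssh
$ curl http://<pod-ip>:80   # run inside the Minikube VM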