Is there any way to access the 'internal' services of the cluster (those not exposed outside) in a secure way from the outside?
The goal is simple: I need to debug clients of those services and need to access them, but don't want to expose them outside.
On a regular single host I would normally tunnel to the host with SSH and map the ports to localhost. I tried using an SSHD container, but that didn't get me very far: the services don't run on that container, so I'm not sure how to reach the next hop on the network, since the services' IPs are assigned dynamically.
Ideally a VPN would be much more convenient, but GKE doesn't seem to support VPN for the road-warrior situation.
Is there any solution for this use-case?
Thanks for your input.
EDIT:
I see here:
https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/accessing-the-cluster.md#ways-to-connect
that the only supported way to connect right now is HTTP/HTTPS, meaning I can proxy HTTP calls but not arbitrary ports.
You can do this with a combination of running kubectl proxy on your dev machine and using the proxying functionality built into the master (that's a lot of proxying, but bear with me).
First, run kubectl proxy. Note the port that is bound locally (it should be 8001 by default). This will cause kubectl to create a tunnel to your master instance that you can hit locally without needing to pass any authentication (technically, you can do all of the following steps without doing this first by hitting the master directly, but this is simpler for debugging).
Next, point a client (web browser, curl, etc) at http://localhost:8001/api/v1/proxy/namespaces/<ns>/services/<svc>/, replacing <ns> with the namespace in which your service is configured and <svc> with the name of your service. You can also append a particular request path to the end of the URL, so if your pods behind the service are hosting a file called data.json you would append that to the end of the request path.
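For example, a minimal sketch assuming a service named my-nginx in the default namespace serving a file data.json (names are placeholders; on newer clusters the equivalent path is /api/v1/namespaces/<ns>/services/<svc>/proxy/):
kubectl proxy --port=8001
curl http://localhost:8001/api/v1/proxy/namespaces/default/services/my-nginx/data.json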
This is how the update-demo tutorial works, so if you get stuck I'd recommend walking through that example and taking a close look at what the JavaScript does (it isn't too complicated).
After trying the many methods explained in the doc mentioned above, the thing that works for me was:
1) Create an SSHD daemon container to SSH into the cluster
2) Create an SSH Service with type: NodePort
3) Get the assigned node port with
kubectl describe service sshd
4) use ssh port forwarding to get to the service with:
ssh -L <local-port>:<my-k8s-service-name>:<my-k8s-service-port> -p <sshd-port> user@sshd-container
for example
ssh -L 2181:zookeeper:2181 -p 12345 root@sshd-container
Then I have my zookeeper service on localhost:2181
For more port mappings, repeat the -L flag with alternate local ports.
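For reference, a minimal sketch of such an SSH Service manifest (it assumes the SSHD pod is labeled app: sshd; names are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: sshd
spec:
  type: NodePort
  selector:
    app: sshd
  ports:
  - port: 22
    targetPort: 22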
You can also try using kubectl port-forward:
http://kubernetes.io/docs/user-guide/connecting-to-applications-port-forward/
http://kubernetes.io/docs/user-guide/kubectl/kubectl_port-forward/
Example:
kubectl port-forward POD_NAME LOCAL_PORT:REMOTE_PORT
kubectl port-forward redis-master 6379:6379
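Recent kubectl versions can also forward directly to a service or deployment rather than a single pod, for example:
kubectl port-forward svc/redis-master 6379:6379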
Related
I have two minikube clusters (two separate profiles) running locally; call them minikube cluster A and minikube cluster B. Each of these clusters also has an ingress and a DNS associated with it locally. The DNS names are hello.dnsa and hello.dnsb. I am able to ping and nslookup both of them, just like this: https://minikube.sigs.k8s.io/docs/handbook/addons/ingress-dns/#testing
I want pod A in cluster A to be able to communicate with pod B in cluster B. How can I do that? I logged into pod A in cluster A and did telnet hello.dnsb 80, and it doesn't connect; I suspect there is no route. Similarly, I logged into pod B in cluster B and did telnet hello.dnsa 80, and it doesn't connect either. However, if I do telnet hello.dnsa 80 or telnet hello.dnsb 80 from my host machine, telnet works!
Is there any simple way to solve this problem for now? I am OK with any solution, even adding routes manually using ip route add if needed.
Skupper is a tool available for exactly this. It is a service interconnect that provides secure communication between the clusters; for more information on Skupper, go through this documentation.
There are multiple examples of integrating minikube with Skupper; go through this configuration documentation for more details.
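As a rough sketch, the Skupper workflow looks like this (the deployment name "hello", port 80 and the token file name are placeholders; the site that issues the token must be reachable from the other site, e.g. by running minikube tunnel for that profile):
# with kubectl pointing at cluster B (where the workload runs)
skupper init
skupper token create secret.token
skupper expose deployment/hello --port 80
# with kubectl pointing at cluster A (the consumer)
skupper init
skupper link create secret.token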
My application is running within a pod container in a Kubernetes cluster. Every time it is started in the container it allocates a random port. I would like to access my application from outside (from another pod or node, for example), but since it allocates a random port I cannot create a Service (NodePort or LoadBalancer) to map the application port to a specific port to be able to access it.
What are the options to handle this case in a kubernetes cluster?
Not supported; check out the issue here. Even with Docker, if your port range is overly broad, you can hit issues as well.
One option to solve this would be to use a dynamically configured sidecar proxy that takes care of routing the pods traffic from a fixed port to the dynamic port of the application.
This has the upside that the service can be scaled even if all instances have different ports.
But it also has a potentially big downside: every request carries added overhead due to the extra network hop, and even if it's just a few milliseconds, this can have quite an impact depending on how the application works.
Regarding the dynamic configuration of the proxy you could e.g. use shared pod volumes.
Edit:
Containers within a pod are a little bit special. They are able to share predefined volumes and their network namespace.
For shared volumes this means you are able to define a volume, e.g. of type emptyDir, and mount it in both containers. The result is that you can share information between both containers by writing it into that volume in the first container and reading it in the second.
For networking this makes communication between containers of one pod easier, because you can use the loopback interface. In your case this means the sidecar proxy container can reach your application by calling localhost:<dynamic-port>.
For further information take a look at the official docs.
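A minimal pod sketch of that idea, with an emptyDir volume shared between the application container and the proxy sidecar (image names, paths and ports here are illustrative assumptions, not a tested manifest):
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
  - name: proxy-config
    emptyDir: {}
  containers:
  - name: app
    image: my-app:latest                 # assumed app image; writes its chosen port/config into /shared
    volumeMounts:
    - name: proxy-config
      mountPath: /shared
  - name: envoy
    image: envoyproxy/envoy:v1.24.0      # sidecar proxy; reads the generated config from /shared
    volumeMounts:
    - name: proxy-config
      mountPath: /shared
    ports:
    - containerPort: 8080                # the fixed port a Service can target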
So how is this going to help you?
You can use a proxy like envoy and configure it to listen to dynamic configuration changes. The source for the dynamic configuration should be a shared volume between both your application container and the sidecar proxy container.
After your application has started and allocated its dynamic port, you can automatically generate the configuration for the Envoy proxy and save it in the shared volume, which is the same source Envoy is watching due to the aforementioned configuration.
Envoy itself acts as a reverse proxy: it can listen statically on port 8080 (or any other port you like) and route the incoming network traffic to your application's dynamic port by calling localhost:<dynamic-port>.
Unfortunately I don't have any predefined configuration for this use case, but if you need some inspiration you can take a look at Istio - they use something quite similar.
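Purely as inspiration, this is roughly the kind of static Envoy config the application could generate into the shared volume once it knows its port (34567 stands in for the dynamic port; this is a sketch, not a tested configuration):
static_resources:
  listeners:
  - name: ingress
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: app
          cluster: app
  clusters:
  - name: app
    connect_timeout: 1s
    type: STATIC
    load_assignment:
      cluster_name: app
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 34567 }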
As gohm'c mentioned, there is no built-in way to do this.
As a workaround you can run a script to adjust the service (see the sketch below):
Start the application and retrieve the chosen random port from the application
Modify the Kubernetes Service, load balancer, etc. with the new port
The downside of this is that if the port changes, there will be a short delay before the service is updated afterwards.
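A sketch of such a script, assuming the application writes its chosen port to a file in the pod (the pod name, service name and file path are placeholders):
PORT=$(kubectl exec my-app-pod -- cat /tmp/app.port)
kubectl patch service my-app -p "{\"spec\":{\"ports\":[{\"port\":8080,\"targetPort\":$PORT}]}}"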
How about this:
Deploy and start your application in a Kubernetes cluster.
Run kubectl exec <pod name here> -- netstat -tulpn | grep "application name" to identify the port number associated with your application.
Run kubectl port-forward <pod name here> <port you like>:<random application port> to access the pod of your application (see the example below).
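Concretely, with a placeholder pod name and port (my-app-7d9f8 and 34567 are made up for illustration):
kubectl exec my-app-7d9f8 -- netstat -tulpn | grep my-app
(suppose the output shows the application listening on port 34567)
kubectl port-forward my-app-7d9f8 8080:34567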
Would this work for you?
Edit: This is very similar to @Lukas Eichler's response.
I have created a simple hello world service in my Kubernetes cluster. I am not using any cloud provider and have created the cluster from scratch on a plain Ubuntu 16.04 server.
I am able to access the service inside the cluster but now when I want to expose it to the internet, it does not work.
Here is the yml file - deployment.yml
And this is the result of the command - kubectl get all:
Now when I try to access the external IP with the port in my browser, i.e. 172.31.8.110:8080, it does not work.
NOTE: I also tried the NodePort Service Type, but then it does not provide any external IP to me. The state remains pending under the "External IP" tab when I do "kubectl get services".
How do I resolve this?
I believe you might have a mix of networking problems tied together.
First of all, 172.31.8.110 belongs to a private network and is not routable over the Internet. So make sure that the location you are trying to browse from can reach the destination (i.e. is on the same private network).
As a quick test you can make an ssh connection to your master node and then check if you can open the page:
curl 172.31.8.110:8080
In order to expose it to the Internet, you need to use a public IP for your master node, not an internal one. Then update your Service externalIPs accordingly.
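As an illustration, a Service using externalIPs could look like this (the selector, port and IP are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  selector:
    app: hello-world
  ports:
  - port: 8080
    targetPort: 8080
  externalIPs:
  - <public-ip-of-the-node>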
Also make sure that your firewall allows network connections from the public Internet to port 8080 on the master node.
In any case I suggest that you use this configuration for testing purposes only, as it is generally a bad idea to use the master node for service exposure: it puts extra network load on the master and widens the security surface. Use something like an Ingress controller (Nginx or another) plus an Ingress resource instead.
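For example, a minimal Ingress resource for such a service might look like the following (it assumes an ingress controller such as ingress-nginx is already installed and that a Service named hello-world listens on port 8080; the hostname is a placeholder):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
spec:
  rules:
  - host: hello.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-world
            port:
              number: 8080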
One option is also to do SSH local port forwarding.
ssh -L <local-port>:<private-ip-on-your-server>:<remote-port> <ip-of-your-server>
So in your case for example:
ssh -L 8888:172.31.8.110:8080 <ip-of-your-ubuntu-server>
Then you can simply go to your browser and access the site at http://localhost:8888.
I have created a new GCP Kubernetes cluster. The cluster is private with NAT - it has no connection to the internet. I also deployed a bastion machine which allows me to connect into my private network (VPC) from the internet. This is the tutorial I based it on. SSH into the bastion is currently working.
The kubernetes master is not exposed outside. The result:
$ kubectl get pods
The connection to the server 172.16.0.2 was refused - did you specify the right host or port?
So I installed kubectl on the bastion and ran:
$ kubectl proxy --port 1111
Starting to serve on 127.0.0.1:3128
Now I want to connect my local kubectl to the remote proxy server. I set up a secure tunnel to the bastion server and mapped the remote port to a local port. I also tried it with curl and it's working.
Now I am looking for something like
$ kubectl --use-proxy=1111 get pods
(to make my local kubectl pass through my remote proxy)
How to do it?
kubectl proxy acts exactly like the target apiserver - but the queries through it are already authenticated. From your description ('works with curl') it sounds like you've set it up correctly; you just need to point the client kubectl at it:
kubectl --server=http://localhost:1111
(Where port 1111 on your local machine is where kubectl proxy is available; in your case through a tunnel.)
If you need exec or attach through kubectl proxy, you'll need to run it with either --disable-filter=true or --reject-paths='^$'. Read the fine print and consequences of those options.
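For example, to allow exec and attach through the proxy (keeping the above caveats in mind):
kubectl proxy --port 1111 --disable-filter=true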
Safer way
All in all, this is not how I access clusters through a bastion. The problem with the above approach is that if someone gains access to the bastion, they immediately have valid Kubernetes credentials (as kubectl proxy needs those to function). It is also not the safest solution if the bastion is shared between multiple operators. One of the main points of a bastion is that it never has credentials on it. What I fancy doing is accessing the bastion from my workstation with:
ssh -D 1080 bastion
That makes ssh act as a SOCKS proxy. You need GatewayPorts yes in your sshd_config for this to work. Thereafter, from the workstation I can use:
HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl get pod
I'm a complete newbie with Kubernetes, and have been trying to get secure CockroachDB running. I'm using the instructions and preconfigured .yaml files provided by Cockroach. https://www.cockroachlabs.com/docs/stable/orchestrate-cockroachdb-with-kubernetes.html
I'm using the Cloud Shell in my Google Cloud console to set everything up. Everything goes well, and I can do local SQL tests and load data. Monitoring the cluster by proxying to localhost with the command below starts off serving as expected:
kubectl port-forward cockroachdb-0 8080
However, when using the Cloud Shell web preview on port 8080 to attach to localhost, the browser session returns "too many redirects".
My next challenge will be to figure out how to expose the cluster on a public address, but for now I'm stuck on what seems to be a fairly basic problem. Any advice would be greatly appreciated.
Just to make sure this question has an answer, the problem was that the question asker was running port-forward from the Google Cloud Shell rather than from his local machine. This meant that the service was not accessible to his local machine's web browser (because the Cloud Shell is running on a VM in Google's datacenters).
The ideal solution is to run the kubectl port-forward command from his own computer.
Or, barring that, to expose the cockroachdb pod externally using kubectl expose pod cockroachdb-0 --port=8080 --type=LoadBalancer, as suggested in the comments.