I'm looking for a way to give several developers on our local network access to services running in a cloud-hosted Kubernetes cluster, without each machine having to run kubectl port-forward (and therefore having direct access to the cluster).
Essentially, I'd like to run a Docker container or VM whose ports I can expose, or, even better, a way to forward local network traffic to the cluster's DNS.
I'm really stuck looking for solutions; any help would be appreciated.
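One pattern that might fit (a sketch, not a definitive answer): run kubectl port-forward inside a single shared container that holds the cluster credentials, bind it to all interfaces, and publish the port to the local network. The container name, image, service name, and ports below are placeholders.

```shell
# Hypothetical example: one shared gateway container holds the kubeconfig;
# developers on the LAN connect to <gateway-host>:8080 instead of running
# kubectl themselves.
docker run -d --name dev-gateway \
  -v "$HOME/.kube/config:/kubeconfig:ro" \
  -p 8080:8080 \
  bitnami/kubectl:latest \
  port-forward --kubeconfig /kubeconfig \
  --address 0.0.0.0 svc/my-service 8080:80
```

Only the gateway host needs cluster access; each service you want to expose would need its own port-forward process and published port.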
I am developing an application where users can spin up compute pods running Jupyter notebooks and SSH into their pod. We use Kubernetes to provision and manage the compute. Since we run our infrastructure on AWS using EKS, and Elastic IPs are scarce, we need to route SSH traffic through a bastion instance that forwards SSH traffic (and HTTP for the Jupyter notebooks) to the correct pod. I'm hoping for suggestions on how to implement this. From my understanding so far, I need a separate port on the bastion instance for each user's SSH connection. This seems unwieldy, but AFAIK SSH traffic cannot be routed any other way. For HTTP we can use routing rules, which should be much more straightforward.
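The port-per-user approach described above can be sketched as a DNAT rule per user on the bastion (the pod IP and port numbers are hypothetical):

```shell
# Hypothetical: user alice gets bastion port 2201; her pod is 10.0.12.34.
# Forward TCP 2201 on the bastion to the pod's sshd on port 22.
iptables -t nat -A PREROUTING -p tcp --dport 2201 \
  -j DNAT --to-destination 10.0.12.34:22
iptables -t nat -A POSTROUTING -p tcp -d 10.0.12.34 --dport 22 \
  -j MASQUERADE
# Forwarding must be enabled on the bastion:
sysctl -w net.ipv4.ip_forward=1
```

An alternative that avoids per-user ports entirely is to use the bastion as an SSH jump host (ssh -J user@bastion user@pod-ip), which routes by destination rather than by port, provided users can be told their pod's internal address.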
I have a Kubernetes cluster running my production environments. I have a bastion machine; my own computer can connect to the bastion, and the bastion can access the cluster machines. I want to connect to some internal services (i.e. not exposed to the public network), such as MySQL, Redis, Kibana, etc., from my own computer. I need enough performance (kubectl port-forward, for example, is too slow) and enough security.
I have tried kubectl port-forward, but it is very slow, and from what I've found it is simply known to be slow, so I guess I cannot make it faster.
I guess I could also expose every service as a NodePort and then use SSH port forwarding. However, I worry that the security would be weak: once a NodePort exists, if an attacker can reach the cluster, they can use it to access my MySQL, Redis, Kafka, etc., which would be terrible.
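For reference, exposing a service as a NodePort looks like this (the service name, selector, and port numbers are placeholders); note that the nodePort becomes reachable on every node's IP, which is exactly the exposure being worried about:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-nodeport   # placeholder name
spec:
  type: NodePort
  selector:
    app: mysql           # placeholder selector
  ports:
    - port: 3306
      targetPort: 3306
      nodePort: 30306    # reachable on every node's IP
```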
EDITED: In addition, not only my own computer but also my mobile phone needs to be able to reach some services, such as my Spring Boot internal admin URL. Currently I use SSH port forwarding bound to 0.0.0.0, so my mobile phone can connect to my_computer_ip:the_port. But how can I do this without SSH port forwarding?
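The SSH port forward described above, bound to all interfaces so the phone can reach it, can be sketched as follows (hostnames and ports are placeholders):

```shell
# Bind locally on 0.0.0.0:8080 and forward through the bastion to the
# internal admin service; the phone then connects to my_computer_ip:8080.
ssh -N -L 0.0.0.0:8080:internal-admin.cluster.local:8080 user@bastion
```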
Thank you!
I have a custom Kubernetes cluster (deployed using kubeadm) running on virtual machines from an IaaS provider. The Kubernetes nodes have no Internet-facing IP addresses (except for the master node, which I also use for ingress).
I'm now trying to join a machine to this cluster that is not hosted by my main IaaS provider. I want to do this because I need specialized computing resources for my application that the IaaS does not offer.
What is the best way to do this?
Here's what I've tried already:
Run the cluster on Internet-facing IP addresses
I have no trouble joining the node when I tell kube-apiserver on the master node to listen on 0.0.0.0 and use public IP addresses for every node. However, this approach is non-ideal from a security perspective and also raises costs, because public IP addresses have to be leased for nodes that normally don't need them.
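For reference, the apiserver bind address can be set at kubeadm init time with a config fragment along these lines (field names per kubeadm's v1beta3 ClusterConfiguration; verify against your kubeadm version):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    bind-address: "0.0.0.0"
```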
Create a tunnel to the master node using sshuttle
I've had moderate success creating a tunnel from the external machine to the Kubernetes master node using sshuttle, configured on the external machine to route 10.0.0.0/8 through the tunnel. This works in principle, but it feels way too hacky and is also a bit unstable (sometimes the external machine can't get a route to the other nodes; I have yet to investigate this further).
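For anyone attempting the same thing, the sshuttle setup described above boils down to a single command run on the external machine (the master's hostname is a placeholder):

```shell
# Route the cluster's 10.0.0.0/8 range through an SSH tunnel to the master.
sshuttle -r user@master-node 10.0.0.0/8
```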
Here are some ideas that could work, but I haven't tried yet because I don't favor these approaches:
Use a proper VPN
I could use a proper VPN tunnel to connect the machine. I don't favor this solution because it would add (admittedly quite small) overhead to the cluster.
Use a cluster federation
It looks like kubefed was made specifically for this purpose. However, I think this is overkill in my case: I'm only trying to join a single external machine to the cluster. Using kubefed would add a ton of overhead (a Federation Control Plane on my main cluster plus a single-host Kubernetes deployment on the external machine).
I couldn't think of any better solution than a VPN here. Especially since you have only one isolated node, it should be relatively easy to make the handshake happen between this node and your master.
Routing the traffic from "internal" nodes to this isolated node is also trivial. Because all nodes already use the master as their default gateway, modifying the route table on the master is enough to forward traffic from the internal nodes to the isolated node through the tunnel.
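The routing change on the master described above might look like this (the pod subnet and tunnel interface name are hypothetical):

```shell
# On the master: send traffic destined for the isolated node's pod subnet
# through the VPN tunnel interface.
ip route add 10.244.5.0/24 dev tun0
# Make sure forwarding is enabled so internal nodes can transit the master.
sysctl -w net.ipv4.ip_forward=1
```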
You do have to be careful with the configuration of your container network, though. Depending on the solution used to deploy it, you may have to assign a different subnet to the Docker bridge on the other side of the VPN.
Right now I'm accessing my pods (Postgres, port 5432) through a service that is exposed, but since Google Cloud charges for every forwarding rule created, the number of pods I need to monitor or run commands in is costing me more and more. Is there a way to create a single exposed service for all of my pods? Or can I create some sort of VPN, a PuTTY tunnel, or something similar? Any help would be appreciated!
I'm also using kubectl exec.
If you are looking for a managed solution, Google offers a VPN for that:
https://console.cloud.google.com/networking/vpn/
If you are happy to roll your own, you can create a new Compute Engine instance on the same network as your nodes and set up OpenVPN there. This gives you a fixed IP as a freebie.
A more advanced solution is to run OpenVPN as a pod (or pods) and expose it with a NodePort Service. (Optionally, manually create a single load balancer on Google Cloud to get a static IP for it.)
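Exposing an OpenVPN pod through a NodePort Service could look roughly like this (the names, labels, and port numbers are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: openvpn
spec:
  type: NodePort
  selector:
    app: openvpn         # must match the OpenVPN pod's labels
  ports:
    - protocol: UDP
      port: 1194
      targetPort: 1194
      nodePort: 31194    # clients connect to <any-node-ip>:31194
```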
At the end of the day, the ideal solution depends heavily on your environment and goals.
I have a Kubernetes cluster (1.3.2) in GKE, and I'd like to connect VMs and services from my Google Cloud project, which shares the same network as the cluster.
Is there a way for a VM that's internal to the subnet but not internal to the cluster itself to connect to the service without hitting the external IP?
I know there are plenty of ways to unambiguously determine the IP and port of services, such as environment variables and DNS... but the clusterIP is not reachable outside the cluster (obviously).
Is there something I'm missing? An important constraint is that this service is meant to be "public" to the project, so I don't know in advance which VMs will want to connect to it (which could rule out loadBalancerSourceRanges). I understand that the endpoints the service wraps are internal IPs I can reach, but the only good ways to discover those IPs are the Kubernetes API or kubectl, neither of which is a production-ideal way to reach my service.
Check out my more thorough answer here, but the most common solution to this is to create bastion routes in your GCP project.
In the simplest form, you can create a single GCE route directing all traffic whose destination IP falls in your cluster's service IP range to one of your GKE nodes. If that SPOF scares you, you can create several routes pointing at different nodes, and traffic will round-robin between them.
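Such a bastion route can be created with gcloud along these lines (the service CIDR, network, route name, and instance names are placeholders; check your cluster's actual service range):

```shell
# Route the cluster's service CIDR to a chosen GKE node acting as bastion.
gcloud compute routes create k8s-services-bastion \
  --destination-range 10.3.240.0/20 \
  --network default \
  --next-hop-instance gke-node-1 \
  --next-hop-instance-zone us-central1-a
```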
If that management overhead isn't something you want going forward, you could write a simple controller in your GKE cluster that watches the Nodes API endpoint and ensures you have a live bastion route to at least N nodes at any given time.
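A crude sketch of such a controller as a polling reconciliation loop (N, the route naming scheme, CIDR, and zone are all placeholders; a real controller would watch the API and clean up routes for deleted nodes rather than poll):

```shell
while true; do
  # Pick up to 3 nodes and make sure a bastion route exists for each.
  kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' \
    | head -n 3 \
    | while read -r node; do
        gcloud compute routes describe "bastion-$node" >/dev/null 2>&1 \
          || gcloud compute routes create "bastion-$node" \
               --destination-range 10.3.240.0/20 \
               --network default \
               --next-hop-instance "$node" \
               --next-hop-instance-zone us-central1-a
      done
  sleep 30
done
```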
GCP internal load balancing was just released in alpha, so in the future kube-proxy on GCP could be implemented with it, which would eliminate the need for bastion routes to handle internal services.