Allow users to SSH into pods through a single service in Kubernetes

I am developing an application where users can spin up compute pods running Jupyter notebooks and SSH into their pod. We use Kubernetes (EKS on AWS) to provision and manage the compute. Because Elastic IPs are scarce, we need to route SSH traffic through a bastion instance that forwards SSH (and HTTP for the Jupyter notebooks) to the correct pod. I am hoping for suggestions on how to implement this. From my understanding so far, I would need a separate port on the bastion for each user's SSH connection. This seems unwieldy, but as far as I know SSH traffic cannot be routed in any other way. For HTTP we can use routing rules, which should be much more straightforward.
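One way to avoid a port per user is to use the bastion purely as an SSH jump host: users connect with ProxyJump, and the bastion only needs TCP reachability to the pods (on EKS with the AWS VPC CNI, pod IPs are routable within the VPC, and per-user headless Services can give them stable DNS names). A minimal sketch, where the Service name `jupyter-alice`, the namespace `notebooks`, and the bastion hostname are all hypothetical:

```
# ~/.ssh/config on the user's machine (all names here are assumptions)
Host my-notebook
    HostName jupyter-alice.notebooks.svc.cluster.local
    Port 22
    User jovyan
    ProxyJump alice@bastion.example.com
```

With this, `ssh my-notebook` makes one connection to port 22 on the bastion, which then dials the pod directly, so a single bastion port serves every user. This assumes the bastion can resolve cluster DNS (e.g. by pointing its resolver at kube-dns/CoreDNS). Projects such as sshpiper take a related approach, running one SSH endpoint that routes connections to different backends based on the username.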

Related

How to whitelist entire kubernetes cluster on external server

I have a kubernetes cluster with several nodes, and it is connecting to a SQL server outside of the cluster. How can I whitelist these (potentially changing) nodes on the SQL server firewall, without having to whitelist each Node's external IP independently?
Is there a clean solution for this? Perhaps some intra-cluster tooling to route all requests through a single node?
You would have to use a NAT. It is possible, but fiddly (we do this weekly in order to connect to a hosted service to make backups, and the hosted service only whitelists a specific IP).
We used Terraform for spinning up a cluster, then deploying our backup job to it so it could connect to the hosted service, and since it was going via the NAT IP, the remote host would allow the connection.
We used Cloud NAT via Terraform (as we were on GKE): https://registry.terraform.io/modules/terraform-google-modules/cloud-nat/google/latest
Though there are surely similar options for whichever Kubernetes provider you are using. If you are running bare-metal, you'll need to do the routing yourself.
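For reference, a minimal sketch of that Cloud NAT module's usage; the variable names are placeholders, and the exact required inputs may differ between module versions, so check the registry page linked above:

```hcl
# Routes egress from the cluster's nodes through a single stable NAT IP
# (project, region, and router references here are assumptions).
module "cloud_nat" {
  source     = "terraform-google-modules/cloud-nat/google"
  project_id = var.project_id
  region     = var.region
  router     = google_compute_router.router.name
}
```

Once all node egress goes through the NAT, the SQL server firewall only needs to whitelist the NAT's IP, regardless of how nodes come and go.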

Accessing cloud Kubernetes services on a local network

I'm looking for a solution that gives various developers on a local network access to services in a cloud-hosted Kubernetes cluster, without each machine having to run kubectl port-forward and have its own access to the cluster.
Essentially I'm looking for a way to run a Docker container or VM whose ports I can expose, or even better, a way to forward local network traffic to the cluster's DNS.
I'm really stuck looking for solutions; any help would be appreciated.
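One tool that fits this description is sshuttle: run it on a shared jump box (or each dev machine) pointing at a bastion that can reach the cluster, and it transparently forwards traffic for the given CIDRs, optionally including DNS lookups. A sketch, where the bastion hostname and the service CIDR `10.96.0.0/12` are assumptions you'd replace with your own:

```
# Forward all traffic destined for the cluster's service CIDR, plus DNS
# lookups, through the bastion (hostname and CIDR are placeholders).
sshuttle --dns -r dev@bastion.example.com 10.96.0.0/12
```

This gets close to "forward local network traffic to the cluster's DNS" without per-service port forwards, assuming the bastion itself can route to service IPs and resolve cluster DNS.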

How to access internal services in a production Kubernetes environment from a local machine? (Need: performance & security)

I have a Kubernetes cluster running my production environment. I have a bastion machine: my own computer can connect to the bastion, and the bastion can access the cluster machines. I want to connect from my own computer to some internal services (i.e. not exposed to the public network), such as MySQL, Redis, Kibana, etc. I need adequate performance (kubectl port-forward is too slow) and adequate security.
I have tried kubectl port-forward, but it is very slow, and from what I've found, it is simply slow by nature, so I guess I cannot make it faster.
I could also expose every service as a NodePort and then use SSH port forwarding. However, I'm afraid the security is too low: once a NodePort exists, an attacker who can reach the cluster can use it to access my MySQL, Redis, Kafka, etc., which would be terrible.
EDIT: In addition, I need not only my own computer but also my mobile phone to be able to reach some services, such as my Spring Boot internal admin URL. Currently I do SSH port forwarding bound to 0.0.0.0, so my phone can connect to my_computer_ip:the_port. But how can I do this without SSH port forwarding?
Thank you!
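One middle ground is to keep the services as plain ClusterIPs and tunnel through the bastion, which avoids opening NodePorts at all. If the bastion can resolve cluster DNS and route to service IPs (this depends on your CNI and network setup), a single SSH tunnel bound to all interfaces also covers the mobile-phone case. A sketch with hypothetical hostnames and ports:

```
# Bind the local end to 0.0.0.0 so other devices on your LAN (e.g. a
# phone) can reach my_computer_ip:13306; all names are placeholders.
ssh -N -L 0.0.0.0:13306:mysql.default.svc.cluster.local:3306 me@bastion.example.com
```

This is still SSH forwarding, but the only exposure is the bastion's SSH port rather than a NodePort on every node; if the bastion cannot reach service IPs directly, the same tunnel can target a node's internal IP and port instead.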

How to allow nodes of one GKE cluster to connect to another GKE cluster

I have two GKE clusters, say dev and stg, and I want apps running in pods on the stg nodes to connect to the dev master and execute some commands against that cluster. I have all the setup I need, and when I manually add the nodes' IP addresses everything works fine, but the IPs keep changing.
So my question is: how can I add the ever-changing default-pool node IPs of the other cluster to the master authorized networks?
EDIT: I think I found the solution: it's not the node IPs but the NAT IP that I had to add to the authorized networks. So assuming those don't change, I just need to add the NAT IP, unless someone knows a better solution?
I'm not sure you are going about this the right way. In Kubernetes, communication happens between services, which represent pods deployed on one or several nodes.
When you communicate with the outside, you reach an endpoint (an API or a specific port). The endpoint is materialized by a load balancer that routes the traffic.
Only the Kubernetes master cares about nodes, as resource providers (CPU, memory, GPU, ...) inside the cluster. You should never have to reach a cluster node directly outside the standard mechanisms. At most, you can reach a NodePort service exposed on NodeIP:servicePort.
What you really need to do is configure kubectl in your Jenkins pipeline to connect to the GKE master IP. The master is responsible for accepting your commands (rollback, deployment, etc.). See Configuring cluster access for kubectl.
The master IP is available in the Kubernetes Engine console along with the cluster's certificate authority certificate. A good approach is to authenticate with the master using a service account token. See how to log in to GKE via a service account with a token.
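The resulting configuration can be sketched with the standard kubectl config subcommands; the context/cluster names, MASTER_IP, and file paths below are placeholders, and the token would come from the service account as described in the linked guide:

```
# Register the dev cluster by its master IP and CA certificate
kubectl config set-cluster dev-gke \
  --server=https://MASTER_IP \
  --certificate-authority=ca.crt --embed-certs=true

# Authenticate with a service account token
kubectl config set-credentials deployer --token="$(cat sa.token)"

# Tie them together and switch to the new context
kubectl config set-context dev --cluster=dev-gke --user=deployer
kubectl config use-context dev
```

After this, kubectl commands run from the pipeline target the dev master, provided the NAT IP the traffic egresses from is in the master authorized networks.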

Browse the Kubernetes network from outside

I'm running a Kubernetes cluster on AWS using Weave with a private topology. I have some multi-node applications (like Spark) that have a web UI. I can expose that via a load balancer, but all the links to the workers, etc. use the cluster-internal IP addresses. Is it possible (via kubectl proxy or otherwise) to temporarily "go inside" the cluster network from a browser on my laptop, so that all the internal IPs work as expected? I'm not looking to expose everything to the outside, just to be able to temporarily browse around from my laptop.
You can use weave expose to expose the Weave subnet to the host.
You should be able to use kubectl port-forward my-pod-name localport:serviceport on your laptop (where serviceport is the port exposed by your web UI pod). Then you should be able to browse to localhost:localport and everything should work as expected.
Alternatively you may need to SSH into one of the private nodes via a bastion host.
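For browsing specifically, kubectl proxy is another option: it serves the Kubernetes API on localhost and can proxy HTTP to any service through a well-known URL scheme. A sketch, where the namespace `spark`, service name `spark-ui`, and port `4040` are placeholders:

```
kubectl proxy --port=8001 &
# Then browse to:
#   http://localhost:8001/api/v1/namespaces/spark/services/spark-ui:4040/proxy/
```

Note this only proxies the service you target; absolute links inside the page that point at worker pod IPs will still reference cluster-internal addresses, which is why the weave expose approach may fit this case better.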