I'm running a Kubernetes cluster on AWS using Weave with a private topology. I have some multi-node applications (like Spark) that have a web UI. I can expose that via a load balancer, but all the links to the workers, etc. use the Kubernetes-internal IP addresses. Is it possible (via kubectl proxy or otherwise) to temporarily "go inside" the k8s network from a browser on my laptop, so that all the k8s-internal IPs work as expected? I'm not looking to expose everything to the outside, just to be able to temporarily browse things from my laptop.
You can use weave expose to expose the Weave subnet to the host's network.
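For instance, a minimal sketch, run on a host where the Weave router is attached (the printed address is illustrative):

weave expose
# prints the IP assigned to the host on the Weave network, e.g. 10.32.0.1
# the Weave subnet is now reachable from that host; to browse from your
# laptop, route or tunnel traffic for the Weave subnet through that host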
You should be able to use kubectl port-forward on your laptop, e.g. kubectl port-forward <pod-name> <localport>:<serviceport> (where serviceport is the port exposed by your web UI). Then you should be able to browse to localhost:<localport> and everything should work as expected.
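As a concrete sketch, assuming a Spark master pod named spark-master-0 serving its UI on 8080 (the name and ports are placeholders):

kubectl port-forward spark-master-0 8080:8080
# then open http://localhost:8080 in your browser

Note that links pointing at other workers' cluster IPs will still not resolve through this tunnel; port-forward only forwards the one pod/port pair.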
Alternatively, you may need to SSH into one of the private nodes via a bastion host.
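To make the cluster-internal IPs work from your laptop's browser (the original ask), one hedged option is SSH dynamic port forwarding through the bastion; the hostname here is a placeholder:

ssh -D 1080 user@bastion.example.com
# then configure your browser to use localhost:1080 as a SOCKS5 proxy;
# requests are tunnelled through the bastion, so private cluster IPs
# become reachable as long as the bastion can route to them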
Is there any way, other than port-forwarding, to access the apps running inside my K8s cluster via http://localhost:port from my host operating system?
For example: I am running a minikube setup to practise K8s, and I deployed three pods along with their services, choosing a different service type for each: ClusterIP, NodePort, and LoadBalancer.
For ClusterIP, I can use the port-forward option to access my app via localhost:port, but the problem is that I have to leave that command running, and if it is disrupted for some reason the connection is dropped. Is there an alternative solution here?
For NodePort, I can only access the service via the minikube node IP, not via localhost; therefore, if I have to access it remotely, I won't have a route to that node IP address (see the sketch after this list).
For LoadBalancer, this is not a valid option, as I am running minikube on my local system, not in the cloud.
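To make the NodePort case concrete, a hedged sketch, assuming a service named hello-svc (the name is a placeholder):

minikube service hello-svc --url
# prints something like http://192.168.49.2:30080 - the minikube node IP
# plus the allocated NodePort, reachable from the host but not routable
# from other machines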
Please let me know if there is any other solution to this problem. The reason I am asking is that when I deploy the same application via docker compose, I can access all these services via localhost:port, and I can even reach them via VM_IP:port from other systems.
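For reference, this is the docker compose behaviour being compared against; a minimal sketch with placeholder names:

services:
  web:
    image: my-app:latest    # placeholder image
    ports:
      - "8080:8080"         # published on localhost:8080, and on VM_IP:8080 from other systems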
Thanks,
-Rafi
I am developing an application where users can spin up compute pods running Jupyter notebooks and can SSH into their pod. We are using K8s to provision and manage the compute. Since we run our infrastructure on AWS using EKS and Elastic IPs are scarce, we need to route SSH traffic through a bastion instance that forwards SSH traffic (and HTTP for Jupyter notebooks) to the correct pod. I am hoping for any suggestions on how to implement this. From my understanding so far, I need a separate port per user for SSH on the bastion instance. This seems unwieldy, but AFAIK SSH traffic cannot be routed in any other way. For HTTP we can have routing rules, which should be much more straightforward.
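As a sketch of the port-per-user mapping described above (pod IPs, ports, and hostnames are placeholders), the bastion could run one TCP forwarder per user, e.g. with socat:

socat TCP-LISTEN:2201,fork,reuseaddr TCP:10.0.12.34:22   # alice's pod
socat TCP-LISTEN:2202,fork,reuseaddr TCP:10.0.12.35:22   # bob's pod
# users then connect with: ssh -p 2201 alice@bastion.example.com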
I have created a simple hello-world service in my Kubernetes cluster. I am not using any cloud provider and have built the cluster from scratch on a plain Ubuntu 16.04 server.
I am able to access the service inside the cluster but now when I want to expose it to the internet, it does not work.
Here is the yml file - deployment.yml
And this is the result of the command - kubectl get all:
Now when I try to access the external IP with the port in my browser, i.e., 172.31.8.110:8080, it does not work.
NOTE: I also tried the NodePort service type, but then it does not give me any external IP; the state remains pending under the EXTERNAL-IP column when I run kubectl get services.
How can I resolve this?
I believe you might have a mix of networking problems tied together.
First of all, 172.31.8.110 belongs to a private network and is not routable over the Internet. So make sure that the location you are browsing from can reach the destination (i.e., is on the same private network).
As a quick test, you can SSH to your master node and then check whether you can open the page:
curl 172.31.8.110:8080
In order to expose it to the Internet, you need to use a public IP for your master node, not an internal one. Then update your Service's externalIPs accordingly.
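A minimal sketch of such a Service, with a placeholder name and a documentation-range IP standing in for the node's public address:

apiVersion: v1
kind: Service
metadata:
  name: hello-world          # placeholder name
spec:
  selector:
    app: hello-world         # must match the Deployment's pod labels
  ports:
    - port: 8080
      targetPort: 8080
  externalIPs:
    - 203.0.113.10           # replace with your master node's public IP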
Also make sure that your firewall allows connections from the public Internet to port 8080 on the master node.
In any case, I suggest you use this configuration for testing purposes only, as it is generally a bad idea to expose services through the master node: it puts extra network load on the master and widens the attack surface. Use something like an Ingress controller (Nginx or another) plus an Ingress resource instead.
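For completeness, a hedged sketch of the Ingress resource side (the controller itself must be installed separately; names are placeholders, and the apiVersion shown is the one current at the time of writing):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-world   # the Service sketched above
                port:
                  number: 8080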
One option is also to do SSH local port forwarding.
ssh -L <local-port>:<private-ip-on-your-server>:<remote-port> <ip-of-your-server>
So in your case for example:
ssh -L 8888:172.31.8.110:8080 <ip-of-your-ubuntu-server>
Then you can simply open http://localhost:8888 in your browser and access the site. (A SOCKS proxy is only needed with dynamic forwarding, i.e. ssh -D 8888 <ip-of-your-server>, in which case you would point the browser at localhost:8888 as a SOCKS5 proxy instead.)
I have a Kubernetes cluster (Kubernetes 1.13, Weave Net CNI) that has no direct access to an internal company network. There is an authentication-free SOCKS5 proxy that can (only) be reached from the cluster, and which resolves and connects to resources in the internal network.
Consider some third-party Docker images used in Pods that don't have any explicit proxy support and just want a resolvable DNS name and a target port to connect to a TCP-based service (which might be HTTP(S), but doesn't have to be).
What kind of setup would you propose to bind the Pods and Company Network Services together?
The only two things that come to my mind are:
1) Run the SOCKS5 docker image as a sidecar: https://hub.docker.com/r/serjs/go-socks5-proxy/
2) Use Transparent Proxy Redirector on the nodes - https://github.com/darkk/redsocks
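A hedged sketch of option 2: redsocks takes a config pointing at the company SOCKS5 proxy, and an iptables rule redirects matching outbound TCP into it. The proxy address, listen port, and internal CIDR below are all placeholders:

redsocks {
    local_ip = 127.0.0.1;
    local_port = 12345;
    ip = 10.1.2.3;        # the company SOCKS5 proxy
    port = 1080;
    type = socks5;
}

iptables -t nat -A OUTPUT -p tcp -d 192.168.0.0/16 -j REDIRECT --to-ports 12345
# redirects TCP bound for the internal network into redsocks, which then
# relays it through the SOCKS5 proxy; the pods need no proxy awareness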
I am trying to run software inside Kubernetes that opens more ports at runtime based on various operations. Is it possible to open more ports on the fly in a Kubernetes pod? It does not seem to be possible at the Docker level (see "Exposing a port on a live Docker container"), which means Kubernetes can't do it either (I guess?).
Each Pod in Kubernetes gets its own IP address, so a container (application) in a Pod can listen on any port as long as that port is not used by another container within the same Pod. Note that the containerPort entries in a pod spec are primarily informational: a process can bind new ports at runtime without declaring them there.
If you want to expose those dynamic ports outside the Pod, though, additional configuration is required: ports are exposed using Services, and the Service configuration has to be updated to include each new port.
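A hedged sketch of such an update, assuming a Service named my-app and a newly opened port 9000 (both placeholders):

kubectl patch service my-app --type=json \
  -p='[{"op": "add", "path": "/spec/ports/-", "value": {"name": "dyn-9000", "port": 9000, "targetPort": 9000}}]'
# appends a new port entry to the Service's spec.ports list at runtime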