How can I expose kubernetes services from within a local VM?

I'm running kubernetes locally in a guest VM. How can I expose services to the host?
On GKE, for example, exposing a service provisions an IP from Google and wires it up so that requests to that IP reach the right service. How can I do the same for my VM?

Not the answer you want, but you have to do it manually for now. I usually start an nginx container with --net=host to proxy from the kubernetes network to the VM network.
I hope to be proven wrong, but this was still true a few months ago.
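A minimal sketch of that workaround, assuming the target service's ClusterIP is 10.0.0.50 and it serves on port 8080 (both hypothetical values):

# nginx.conf: forward port 8080 on the VM to the in-cluster service
events {}
http {
  server {
    listen 8080;
    location / {
      proxy_pass http://10.0.0.50:8080;   # hypothetical ClusterIP:port
    }
  }
}

# run nginx in the VM's network namespace so the host can reach port 8080
docker run -d --net=host -v $PWD/nginx.conf:/etc/nginx/nginx.conf:ro nginx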

Related

Access Kubernetes applications via localhost from the host system

Is there any way, other than port-forwarding, to access the apps running inside my K8s cluster via http://localhost:port from my host operating system?
For example
I am running a minikube setup to practise K8s, and I deployed three pods along with their services, choosing a different service type for each: ClusterIP, NodePort, and LoadBalancer.
For ClusterIP, I can use the port-forward option to access my app via localhost:port (see the sketch after this question), but the problem is that I have to leave that command running, and if it is interrupted for some reason, the connection is dropped. So is there an alternative solution here?
For NodePort, I can only access the app via the minikube node IP, not via localhost; therefore, if I have to access it remotely, I won't have a route to that node IP address.
For LoadBalancer, this is not a valid option, as I am running minikube on my local system, not in the cloud.
Please let me know if there is any other solution to this problem. The reason I am asking is that when I deploy the same application via docker compose, I can access all these services via localhost:port, and I can even reach them via VM_IP:port from other systems.
Thanks,
-Rafi
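For reference, the port-forward pattern described above looks like this (the service name and ports are hypothetical):

# forward localhost:8080 on the host to port 80 of the service;
# the command must stay running for the tunnel to stay up
kubectl port-forward service/my-app 8080:80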

How can k8s be aware of a service running on the host?

I am running Cockpit on my host; it is running as a systemd service. At the same time I am using microk8s (single-node k8s).
I don't need a reverse proxy to the Cockpit web UI, but I need something like an external Service/Endpoints k8s object mapping, so that k8s is aware of that service:
k8s should not allow creating another NodePort service with the same port (at the k8s layer, so that it does not collide at the OS layer);
k8s should handle the rules for that service in iptables.
Btw: Cockpit must keep running as a standalone application directly on the host.
Any ideas?
Thanks
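One established pattern that matches "external Service/Endpoints object mapping" is a Service without a selector plus a manually managed Endpoints object. A minimal sketch, assuming Cockpit listens on the host at 192.168.1.10:9090 (hypothetical host IP; 9090 is Cockpit's default port):

apiVersion: v1
kind: Service
metadata:
  name: cockpit
spec:
  ports:
    - port: 9090
      targetPort: 9090
---
apiVersion: v1
kind: Endpoints
metadata:
  name: cockpit            # must match the Service name
subsets:
  - addresses:
      - ip: 192.168.1.10   # hypothetical host IP
    ports:
      - port: 9090

This makes the host service addressable inside the cluster; note, though, that by itself it does not reserve the host port against a colliding NodePort allocation.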

Failing to connect GKE to GCE on the same VPC?

I am new to Google Cloud Platform, and here is the context:
I have a Compute Engine VM running as a MongoDB server and a Compute Engine VM running as a NodeJS server, already using Docker. The NodeJS application connects to Mongo via the default VPC internal IP. Now I'm trying to migrate the NodeJS application to Google Kubernetes Engine, but I can't connect to the MongoDB server when I deploy the NodeJS application Docker image to the cluster.
All services (GCE and GKE) are in the same region (us-east1).
As a direct test, I accessed a kubernetes cluster node via SSH, deployed a simple MongoDB Docker image, and tried to connect to the remote MongoDB server from the command line, but the problem is the same: a timeout when trying to connect.
I have also checked the firewall settings on GCP as well as the bindIp setting on the MongoDB server, and neither is blocking the connection.
Does anyone know what may be happening? Thank you very much.
In my case, traffic from GKE to the GCE VM was blocked by the Google firewall even though both are in the same network (default).
I had to whitelist the cluster pod network listed in the cluster details:
Pod address range 10.8.0.0/14
https://console.cloud.google.com/kubernetes/list
https://console.cloud.google.com/networking/firewalls/list
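A sketch of the corresponding firewall rule via gcloud (the rule name is made up; 27017 is MongoDB's default port):

# allow traffic from the GKE pod range to VMs on the default network
gcloud compute firewall-rules create allow-gke-pods-to-mongo \
    --network=default \
    --source-ranges=10.8.0.0/14 \
    --allow=tcp:27017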
By default, containers in a GKE cluster should be able to access GCE VMs in the same VPC through internal IPs. It is just like accessing the internet (e.g., google.com) from GKE containers: GKE and the VPC know how to route the traffic. The problem must be with some other configuration (the firewall or your application).
You can run a test: start a simple HTTP server on the GCE VM (say its internal IP is 10.138.0.5):
python -m SimpleHTTPServer 8080    # Python 2; with Python 3, use: python3 -m http.server 8080
then create a GKE container and try to access the service:
kubectl run my-client -it --image=tutum/curl --generator=run-pod/v1 -- curl http://10.138.0.5:8080
# note: recent kubectl versions have removed --generator; simply drop that flag there

Kubernetes cannot access app via POD IP

I set up a cluster with 2 machines, which are not on the same local subnet but can reach each other; machine A is Master + Node and machine B is a Node. I use flannel (subnet 172.16.0.0/16) as the network plugin. After deploying apps, I ran into a problem: I can access an app via its pod IP on machine A, but I cannot access the same app via its pod IP on machine B, and curl reports "No route to host" for 172.16.0.x.
I think the route rules to the other machine are missing, but I don't know how to configure the network. Could anyone explain what I am missing? Thank you very much.
I used the kubernetes/contrib ansible script to deploy the cluster and did not change any flannel configuration.
You can use type: NodePort to access the pod via any node's IP.
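A minimal sketch of such a Service (the name, label, and ports are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app           # hypothetical pod label
  ports:
    - port: 80            # service port inside the cluster
      targetPort: 8080    # hypothetical container port
      nodePort: 30080     # reachable as <any-node-IP>:30080

Note this works around the symptom; the missing pod-to-pod routes between the two machines would still point to a flannel misconfiguration.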

Browse the kubernetes network from outside

I'm running a kubernetes cluster on AWS using Weave with a private topology. I have some multi-node applications (like Spark) that have a web UI page. I can expose that via a load balancer, but all the links to the workers, etc. use the k8s-internal IP addresses. Is it possible (via kubectl proxy or otherwise) to temporarily "go inside" the k8s network from a browser on my laptop, so that all the k8s-internal IPs work as expected? I'm not looking to expose everything to the outside, but to be able to temporarily browse things from my laptop.
You can use weave expose to expose the Weave subnet to the host.
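A rough sketch, assuming Weave Net's default allocation range 10.32.0.0/12 and a cluster host the laptop can reach (the host address below is hypothetical):

# on a cluster host running Weave Net: attach the host to the weave network
weave expose

# on the laptop: route the Weave range via that host's reachable IP
sudo ip route add 10.32.0.0/12 via 172.31.5.10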
You should be able to use kubectl port-forward my-pod-name localport:serviceport on your laptop (where serviceport is the port exposed by your web UI service). Then you should be able to browse to localhost:localport and everything should work as expected.
Alternatively you may need to SSH into one of the private nodes via a bastion host.
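A common way to do that temporary "go inside" browsing from the laptop is an SSH SOCKS tunnel through the bastion (the host name is hypothetical):

# open a local SOCKS proxy on port 1080, tunnelled through the bastion;
# then point the browser's SOCKS proxy setting at localhost:1080
ssh -D 1080 -N ec2-user@bastion.example.com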