I have some pods running in microk8s that need to access a machine outside the cluster but inside my local network. The problem is they can't reach it, even using its IP address.
Example:
On the host itself I can run curl against that machine and get the expected result, but inside a pod the same request fails.
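For reference, a quick way to reproduce the test from inside the cluster (the busybox image and the IP/port below are placeholders for the machine on the local network, not my real values):

# Run a throwaway busybox pod and try to reach the LAN machine by IP
# (192.168.1.50:8080 is a placeholder; substitute the real address)
microk8s kubectl run nettest --rm -it --restart=Never --image=busybox:1.36 -- \
  wget -qO- http://192.168.1.50:8080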
I did a very simple test using minikube and the IP is accessible there by default, so I think the issue is something specific to microk8s, and I'd prefer to keep using microk8s if possible.
Addons enabled:
dashboard
dns
ha-cluster
istio
metrics-server
registry
storage
This question is similar to "How to access hosts in my network from microk8s deployment pods", but in my case I can't even access the machine by IP, so it is not a name-resolution problem as it was there.
Is there anything else I need to do to make this work?
Related
Is there any way, other than port-forwarding, to access the apps running inside my K8s cluster via http://localhost:port from my host operating system?
For example
I am running a minikube setup to practise Kubernetes, and I deployed three pods along with their services, choosing a different type for each: ClusterIP, NodePort, and LoadBalancer.
For ClusterIP, I can use port-forward to access my app via localhost:port, but the problem is that I have to leave that command running, and if it is interrupted for some reason the connection drops. Is there an alternative solution here?
For NodePort, I can only access the app via the minikube node IP, not via localhost; and if I need to access it remotely, I won't have a route to that node IP address.
For LoadBalancer, this is not a valid option since I am running minikube on my local system, not in the cloud.
Please let me know if there is any other solution to this problem. The reason I am asking is that when I deploy the same application via docker compose, I can access all of these services via localhost:port, and I can even reach them via VM_IP:port from other systems.
Thanks,
-Rafi
I am deploying a K8s cluster locally using Kind. The image gets deployed OK, and when I view the list of services I see the following:
The service I'm trying to access is chatt-service, and as you can see its EXTERNAL-IP is pending. I know minikube has a command that makes this accessible, but how do I do it on a Kind cluster?
For the LoadBalancer service type you will not be able to get a public IP because you're running locally; you would need to run in a cloud provider that provisions the load balancer for you, such as an ALB on AWS or a LoadBalancer on DigitalOcean. However, you can access this service locally using kubectl port-forward:
kubectl port-forward service/chatt-service 3002:3002
There are some additional options for getting a LoadBalancer to work under a Kind cluster (port-forwarding is still the simplest way):
https://kind.sigs.k8s.io/docs/user/loadbalancer/
First way:
You can also expose pods and services using extra port mappings
This means manually setting the ports in cluster-config.yaml, as in the sketch below.
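A minimal sketch of such a config (the port numbers are placeholders; hostPort is what you hit from the host machine, and containerPort must match the Service's nodePort):

# cluster-config.yaml (sketch; port numbers are placeholders)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30080   # must match the nodePort of the Service
    hostPort: 3002         # port exposed on the host machine
    protocol: TCP

Create the cluster with kind create cluster --config cluster-config.yaml, and expose the app with a NodePort Service whose nodePort matches the containerPort above.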
And maybe a second way (though not actually a solution for LoadBalancer):
You may want to check out the Ingress Guide as a cross-platform workaround.
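For completeness, a minimal Ingress sketch for the same service (this assumes an ingress controller such as ingress-nginx has already been installed in the Kind cluster, as the linked guide describes; the path is a placeholder):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: chatt-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: chatt-service
            port:
              number: 3002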
I deployed an OpenVPN server in the K8s cluster and an OpenVPN client on a host outside the cluster. However, when I connect through the client, I can only reach the pods on the host where the OpenVPN server is located; I cannot reach pods on the other hosts in the cluster.
The cluster's network plugin is Calico. I also added the following iptables rules on the OpenVPN server's host in the cluster:
When I captured traffic on tun0 on the server, I found that the reply packets never came back.
When the server is deployed with hostNetwork, a FORWARD rule was missing from iptables.
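For illustration, the missing rule would look something like this (the VPN subnet is an assumption; substitute your own, and restrict the destination to the Calico pod CIDR if you want):

# Allow traffic arriving from the VPN tunnel to be forwarded on,
# and allow the replies back out through the tunnel
iptables -A FORWARD -i tun0 -s 10.8.0.0/24 -j ACCEPT
iptables -A FORWARD -o tun0 -j ACCEPT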
I'm not sure how you set up iptables inside the server pod, as iptables/netfilter is not accessible on most kube clusters I've seen.
If you want full access to cluster networking over that OpenVPN server, you probably want to use hostNetwork: true on your VPN server. The problem is that you still need a proper MASQ/SNAT rule to get the responses back to your client.
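A sketch of what that looks like in the server's spec (the names, image, and capabilities here are placeholders, not your actual manifest):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: openvpn-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openvpn-server
  template:
    metadata:
      labels:
        app: openvpn-server
    spec:
      hostNetwork: true                 # share the node's network namespace
      containers:
      - name: openvpn
        image: openvpn-server:example   # placeholder image
        securityContext:
          capabilities:
            add: ["NET_ADMIN"]          # needed to create tun0 and manage iptables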
You should inspect the traffic leaving the server pod to see whether it has a properly rewritten source address; otherwise the nodes in the cluster will have no idea how to route the responses.
You probably have a common gateway for your nodes; depending on your kube implementation you might get around this issue by setting a route back to your VPN, but that will likely require some scripting around the VPN server itself to make sure the route is updated each time the server pod is rescheduled.
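For example, the check and the MASQUERADE rule could look like this on the node running the server (the VPN subnet is an assumption; use your own):

# Watch traffic from VPN clients leaving the node to confirm whether
# the source address is being rewritten
tcpdump -ni any 'net 10.8.0.0/24'

# SNAT traffic coming from the VPN subnet so replies route back via this node
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -j MASQUERADE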
If I copy the relevant config file into place and run kubectl proxy, I can access the Kubernetes dashboard through this URL:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
However, if I try to access the node directly, without kubectl proxy, I get a 403 Forbidden:
http://dev-master:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
Our Kubernetes clusters are hidden inside a private network that users need to VPN into; furthermore, only some of us can talk to the master node of each cluster after authenticating to the VPN. As such, running kubectl proxy is a redundant step, and choosing the appropriate config file for each cluster is an additional pain, especially when we want to compare the state of different clusters.
What needs to be changed to allow "anonymous" HTTP access to the dashboard of these already-secured kubernetes master nodes?
You would want to set up a Service (either NodePort or LoadBalancer) for the dashboard pod(s) to expose it to the outside world (well, outside from the PoV of the cluster, which is still an internal network for you).
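A sketch of the NodePort variant (this assumes the standard dashboard deployment in kube-system with the k8s-app: kubernetes-dashboard label serving HTTPS on 8443; adjust the namespace, selector, ports, and nodePort to match your install):

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard-nodeport
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 443
    targetPort: 8443     # the dashboard's HTTPS port
    nodePort: 30443      # then browse to https://<master-node>:30443

The dashboard's own login screen (visible in the #!/login URL above) still applies; this only removes the kubectl proxy hop.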
I set up a cluster with two machines, which are not in the same local subnet but can reach each other: machine A is master + node and machine B is a node. I use flannel (subnet 172.16.0.0/16) as the network plugin. After deploying apps, I hit a problem: I can access the app via its pod IP on machine A, but I cannot access the same app on machine B via its pod IP, and curl says "No route to host 172.16.0.x".
I think there are no route rules to the other machine, but I don't know how to configure the network. Could anyone explain what I'm missing? Thank you very much.
I used the kubernetes/contrib Ansible script to deploy the cluster and did not change any of the flannel configuration.
You can use type: NodePort to access the pod via any of the nodes' IPs.
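For example (the deployment name and port are placeholders):

kubectl expose deployment my-app --type=NodePort --port=80
kubectl get svc my-app          # note the assigned nodePort (30000-32767 range)
curl http://<either-node-ip>:<nodePort>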