Kubernetes: cannot access app via pod IP

I set up a cluster with two machines that are not in the same local subnet but can reach each other; machine A is master + node and machine B is a node. I use flannel (subnet 172.16.0.0/16) as the network plugin. After deploying apps, I ran into a problem: I can access the app via its pod IP on machine A, but I cannot access the same app via pod IP from machine B, and curl reports `No route to host 172.16.0.x`.
I think there are no route rules to the other machine, but I don't know how to configure the network. Could anyone explain what I am missing? Thank you very much.
I used this kubernetes/contrib Ansible script to deploy the cluster and did not change any of the flannel configuration.
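A quick way to check whether flannel actually programmed routes for the other node's pod subnet is to compare the routing tables on both machines. A diagnostic sketch (interface names vary by backend: flannel.1 for vxlan, flannel0 for udp):

```
# Look for a route covering the other node's pod subnet; it should point either
# at the flannel interface (vxlan backend) or at the other host's IP (host-gw).
ip route | grep 172.16.

# Verify the flannel interface and the subnet lease flannel wrote locally.
ip -d link show flannel.1
cat /run/flannel/subnet.env
```

Note that because the two machines are not in the same local subnet, flannel's host-gw backend cannot work here; the vxlan (or udp) backend is required, and its UDP port (8472 by default for vxlan) must be reachable between the hosts.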

You can use `type: NodePort` to expose the app on every node's IP.
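A minimal sketch of such a Service (the name, selector labels, and ports below are placeholders for your app):

```
apiVersion: v1
kind: Service
metadata:
  name: my-app              # placeholder name
spec:
  type: NodePort
  selector:
    app: my-app             # must match your pod labels
  ports:
    - port: 80              # port of the Service inside the cluster
      targetPort: 8080      # port the container listens on
      nodePort: 30080       # optional; must fall in 30000-32767 by default
```

The app is then reachable at http://<any-node-IP>:30080, which sidesteps pod-IP routing between the hosts.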

Related

Access Kubernetes applications via localhost from the host system

Is there any way, other than port-forwarding, to access the apps running inside my K8s cluster via http://localhost:port from my host operating system?
For example:
I am running a minikube setup to practise K8s, and I deployed three pods along with their services, choosing three different service types: ClusterIP, NodePort, and LoadBalancer.
For ClusterIP, I can use the port-forward option to access my app via localhost:port, but I have to leave that command running, and if it is interrupted for some reason the connection is dropped. Is there an alternative solution here?
For NodePort, I can only access the service via the minikube node IP, not via localhost; therefore, if I have to access it remotely, I won't have a route to that node IP address.
For LoadBalancer, this is not a valid option, as I am running minikube on my local system, not in the cloud.
Please let me know if there is any other solution to this problem. The reason I am asking is that when I deploy the same application via Docker Compose, I can access all these services via localhost:port, and I can even call them via VM_IP:port from other systems.
Thanks,
-Rafi
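For reference, the pattern being described looks roughly like this; `minikube service` and `minikube tunnel` are the usual minikube-specific alternatives (service names and ports below are placeholders):

```
# Port-forward a Service to localhost (must stay running in the foreground).
kubectl port-forward svc/my-service 8080:80

# Open a NodePort service via a locally reachable URL.
minikube service my-service --url

# Create a routing tunnel so LoadBalancer services get an address reachable
# from the host (runs in the foreground and may prompt for sudo to add routes).
minikube tunnel
```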

How to access a machine in my network from microk8s deployments

I have some pods running in microk8s and they need to access a machine outside the cluster and inside my local network. The problem is they can't access it even using the IP address.
Example:
On the host itself I can run `curl <machine-IP>` and get the expected result, but inside a pod I can't do that.
I just did a very simple test using minikube, and the IP is accessible there by default, so I think the issue is something specific to microk8s; I would prefer to keep using microk8s if possible.
Addons enabled:
dashboard
dns
ha-cluster
istio
metrics-server
registry
storage
This question is similar to How to access hosts in my network from microk8s deployment pods, but in my case I can't even reach the machine by IP, so it is not a name-resolution problem as it was there.
Is there anything else I need to do to make this work?
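One way to narrow this down is to test reachability from a throwaway pod and compare it with the host. A diagnostic sketch (the image and the target address 192.168.1.50 are placeholders):

```
# Run a temporary pod with basic network tools.
kubectl run net-test --rm -it --image=busybox:1.36 --restart=Never -- sh

# Inside the pod, try to reach the machine directly by IP.
ping -c 3 192.168.1.50
wget -qO- http://192.168.1.50:8080
```

If the pod cannot reach the address while the host can, common culprits on microk8s include host firewall rules that block forwarding from the CNI interface (e.g. ufw) and, since the istio addon is enabled, sidecar egress interception in namespaces with injection turned on.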

How to route all Node A traffic through Node B

I have 4 nodes, 3 Windows and 1 Linux, and I simply need to route all traffic from the Windows nodes through the Linux node.
Any input on how I can do this?
This is not something Kubernetes is involved in. It would be up to your CNI plugin and your overall networking setup.
This is not related to Kubernetes, but if you still want an answer in that context, you can create a gateway VM through which all traffic from the other VMs is redirected.
Here is an example Terraform script: https://github.com/GoogleCloudPlatform/terraform-google-nat-gateway/tree/master/examples/gke-nat-gateway
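Outside of Kubernetes this is plain default-gateway routing. A minimal sketch, assuming the Linux node's address is 10.0.0.4 and its uplink interface is eth0 (both placeholders):

```
# On the Linux node: enable forwarding and NAT traffic coming from the Windows nodes.
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# On each Windows node (elevated prompt): send all traffic via the Linux node.
route ADD 0.0.0.0 MASK 0.0.0.0 10.0.0.4 -p
```

Whether this is safe for the cluster's own traffic depends on your CNI, since node-to-node and pod-to-pod paths would then also flow through the Linux node.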

How to allow nodes of one GKE cluster to connect to another GKE cluster

I have a GKE setup with two clusters, let's say dev and stg, and I want apps running in pods on the stg nodes to connect to the dev master and execute some commands against that cluster. I have all the setup I need, and when I add the node IP addresses by hand everything works fine, but those IPs keep changing.
So my question is: how can I add the ever-changing default-pool node IPs of the other cluster to the master authorized networks?
EDIT: I think I found the solution. It's not the node IPs but the NAT IP that I have to add to the authorized networks, so assuming that doesn't change I just need to add the NAT address, unless someone knows a better solution?
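For reference, authorized networks can be updated with gcloud; a minimal sketch, with the cluster name, zone, and CIDR as placeholders:

```
# Add the NAT gateway's address (as a /32) to the dev master's authorized networks.
gcloud container clusters update dev-cluster \
    --zone us-central1-a \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.10/32
```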
I'm not sure you are going about this the right way. In Kubernetes, communication happens between Services, which represent deployed pods on one or several nodes.
When you communicate with the outside, you reach an endpoint (an API or a specific port). The endpoint is materialized by a load balancer that routes the traffic.
Only the Kubernetes master cares about nodes as resource providers (CPU, memory, GPU, ...) inside the cluster. You should never have to reach a cluster node directly rather than going through the standard mechanisms.
At most, you can reach a NodePort Service exposed on nodeIP:nodePort.
What you really need to do is configure kubectl in your Jenkins pipeline to connect to the GKE master IP. The master is responsible for accepting your commands (rollback, deployment, etc.). See Configuring cluster access for kubectl.
The master IP is available in the Kubernetes Engine console along with the cluster's Certificate Authority certificate. A good approach is to use a service account token to authenticate with the master. See how to log in to GKE via a service account with a token.
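A minimal sketch of both approaches (cluster name, zone, master IP, and token are placeholders):

```
# Option 1: let gcloud write the kubeconfig entry for the target cluster.
gcloud container clusters get-credentials dev-cluster --zone us-central1-a

# Option 2: build a kubeconfig entry by hand from the master IP, CA certificate,
# and a service-account token.
kubectl config set-cluster dev --server=https://<MASTER_IP> --certificate-authority=ca.crt
kubectl config set-credentials dev-sa --token=<SERVICE_ACCOUNT_TOKEN>
kubectl config set-context dev --cluster=dev --user=dev-sa
kubectl config use-context dev
```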

Can't deploy pods on slaves with a DaemonSet installed

So, to summarize the situation: I created a Kubernetes cluster with one master and some slaves.
I installed everything needed to make it work (the Calico pod network and the dashboard for easier management).
I tried to install a virtual-IP service just to create a virtual IP for the services I deploy in my cluster, but since I installed the DaemonSet (kube-keepalived-vip here), every time I deploy a service its pods never get scheduled on the slaves. The virtual IP itself works properly, because I can access my service through that IP.
To clarify the situation, my master is untainted, so I can deploy pods on it.
So here I am trying to get highly available services behind the same virtual IP. Without the DaemonSet, the load balancing works properly when I deploy a service in my cluster, and the pods are scheduled on the slaves.
If you know what's wrong or what to do, let me know.
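A starting point for debugging this kind of scheduling problem is to look at the scheduler's events and the nodes' taints. A diagnostic sketch (pod names are placeholders):

```
# The Events section usually names the taint or resource that blocks scheduling.
kubectl describe pod <pending-pod>

# List each node's taints to see whether the slaves became unschedulable
# after the DaemonSet was installed.
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
```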