Can an Ingress on one node use a pod on another node as a backend? - kubernetes

I am setting up a K8S cluster (RKE) on Hetzner with 3 Ubuntu 22 worker nodes, using the Hetzner load balancer.
I tried to run the Google "hello" app and create a Service and an Ingress for it.
Problem: it only works 1/3 of the time.
Can an ingress controller running on node 1 not use a pod running on node 2 as a backend? That would make the Hetzner load balancer unusable for this use case, I suppose?

I had to change the network plugin to Flannel; then it worked.
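For reference, the CNI plugin in RKE is set in cluster.yml; a minimal sketch (assuming an RKE1-style cluster.yml, with node definitions and everything else omitted) might look like this:

# cluster.yml (RKE1) - hypothetical excerpt: switch the CNI plugin to flannel
network:
  plugin: flannel
# After editing, re-run `rke up` so the change is rolled out to all nodes.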

Related

How to deploy two ingress-nginx controllers on one kind kubernetes cluster

For testing purposes, I deploy two versions of my application on the same machine. On production, only one application instance runs in one cloud Kubernetes cluster and uses the ingress-nginx controller to expose its API.
I use kind to run a Kubernetes cluster locally and deploy the application versions into two different namespaces. I configure the ingress controller according to the kind and ingress-nginx Multiple controllers documentation. The first instance of my app works as expected, but when I deploy the second one, the controller pod fails to start with the following message:
0/6 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 5 node(s) didn't match Pod's node affinity/selector
As far as I understand, two ingress controller pods are scheduled on the same node and cannot share the same port. Please advise how to proceed further. Should the second controller pod be scheduled to a different node? As kind maps node ports to the host machine, is it possible to map the same ports of multiple nodes to the host machine?
Not sure if this will satisfy your use-case, but you can scope the nginx ingress controller to a namespace:
https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml#L150
That way you can run multiple nginx controllers in different namespaces and they won't conflict. It looks like you can also have them watch specific namespaces via selectors, not just their own.
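As a rough illustration, scoping each release to its own namespace and giving it a distinct IngressClass could look like this in the chart's values.yaml (a hedged sketch; the namespace and class names are made up):

# values.yaml for the first ingress-nginx release (hypothetical names)
controller:
  scope:
    enabled: true          # only watch one namespace
    namespace: app-v1      # namespace of the first app version
  ingressClassResource:
    name: nginx-v1                             # each release gets its own IngressClass
    controllerValue: "k8s.io/ingress-nginx-v1" # must also be unique per release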
I was facing the same issue when installing ingress on my kind cluster. Adding a label to my control plane node solved it.
You may try: kubectl label nodes <name of your control plane node> ingress-ready=true
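For context, the kind ingress setup expects a node carrying that label plus host port mappings declared in the cluster config; a sketch along the lines of the kind ingress documentation (ports shown are examples) is:

# kind cluster config (sketch): label a node for ingress and map host ports to it
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
# A second ingress node would need different hostPort values, since a
# given host port can only be bound once on the host machine.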

Kube-proxy was not found in my rancher cluster

My Rancher cluster has been set up for around 3 weeks and everything works fine. But there is one problem while installing MetalLB: I found there is no kube-proxy in my cluster, not even a kube-proxy pod on any node, so I could not follow the installation guide to set up the kube-proxy ConfigMap.
For me it is really strange to have a cluster without kube-proxy.
My setup for the Rancher cluster is below:
Cluster Provider: RKE
Provisioning: Use existing nodes and create a cluster using RKE
Network Plugin: canal
Maybe I misunderstand something. I can reach NodePort and ClusterIP services correctly.
Finally, I found my kube-proxy. It runs as a host process, not a Docker container.
In Rancher, we should edit cluster.yml to pass extra args to kube-proxy. Rancher will apply them to every node of the cluster automatically.
root 3358919 0.1 0.0 749684 42564 ? Ssl 02:16 0:00 kube-proxy --proxy-mode=ipvs --ipvs-scheduler=lc --ipvs-strict-arp=true --cluster-cidr=10.42.0.0/16
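A hedged cluster.yml excerpt that would produce a kube-proxy process like the one above (flag values copied from that process line) could look like:

# cluster.yml (RKE) - sketch: extra args for the kube-proxy system service
services:
  kubeproxy:
    extra_args:
      proxy-mode: ipvs
      ipvs-scheduler: lc
      ipvs-strict-arp: "true"
# Run `rke up` (or let Rancher reconcile the cluster) to apply this on every node.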

Configure keepalived for services (NodePort) on Kubernetes

I have a k8s cluster that contains 2 nodes, and in the cluster I deployed 2 pods for the same application. For certain reasons I have to deploy a separate service (NodePort) for each pod, so in total I have 2 services for the application; for example, the NodePort IPs are 192.142.1.11 and 192.142.1.12. Using these 2 IPs I can access the application from any node.
Now I am going to use keepalived to set up HA for the application. So:
What's the best practice to install the keepalived service? On each k8s node or deploy it as pod?
How to configure the interface in the keepalived.conf file? The NodePort IPs are configured on the kube-ipvs0 interface created by k8s, and its status is down, so it seems it cannot be used as the interface in keepalived.conf. Should I use the node's external interface if I start the keepalived service on each node?
Thanks for your help.
If your final goal is HA for the masters and/or load balancing of user services in an on-prem environment, then you can take a look at these two projects:
kube-vip: can do both (HA for the masters + LoadBalancer type for user workloads).
MetalLB: LoadBalancer type for user workloads.
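To make the MetalLB option concrete, a minimal layer-2 configuration sketch (assuming a recent CRD-based MetalLB release; the pool name and address range are placeholders) would be:

# MetalLB L2 sketch: hand out virtual IPs from a placeholder range
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.142.1.240-192.142.1.250   # placeholder range on the node network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - example-pool
# A Service of type LoadBalancer then gets one of these IPs, instead of
# relying on keepalived and the kube-ipvs0 interface.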

I want to enable pod-to-pod communication across namespaces in the same cluster

I have a Kubernetes cluster with 1 master and 1 worker. I have a Postgres DB service running in the namespace "PG" and another service, config-server, running in the default namespace, and I am unable to access Postgres from the config-server service in the default namespace.
Kubernetes version: 1.13
Overlay network: Calico
According to the articles I read, if pods don't have any network policy defined, then a pod can reach a pod in any other namespace without restriction. I need help with how to achieve this.
You should be able to reach any pod from another pod on the same cluster.
One quick way to check is to ping the service DNS name of the pod from another pod.
Get into the config-server pod and try to run the command below:
ping <postgres-service-name>.<namespace>.svc.cluster.local
You should get a ping response.
I was using a Kubernetes cluster with Calico as the overlay network. If no network policy is created, Kubernetes CoreDNS will resolve the service by default, but we have to include the namespace in the service name (i.e. <service-name>.<namespace>) in the application or in the env variable where you are calling the service in another namespace. That allows cross-namespace communication.
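As an illustration, passing the cross-namespace DNS name into the calling application via an env variable might look like this (the deployment, image and service names here are made up):

# Sketch: config-server deployment pointing at a postgres service in the "pg" namespace
apiVersion: apps/v1
kind: Deployment
metadata:
  name: config-server
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: config-server
  template:
    metadata:
      labels:
        app: config-server
    spec:
      containers:
      - name: config-server
        image: example/config-server:latest   # placeholder image
        env:
        - name: POSTGRES_HOST
          # namespace-qualified service DNS name
          value: postgres.pg.svc.cluster.local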

Can we reach a server running inside a Kubernetes cluster from outside?

I have a requirement that a server running inside one of my containers in a k8s cluster should be able to reach a server running on some other machine (currently in AWS). The problem is that both servers (in AWS and in the Kubernetes cluster) should be able to reach each other.
My server in AWS is not able to ping my server running in the Kubernetes cluster.
Is that possible? Can we do it?
Yes, you can use ingress-nginx to create publicly reachable services: ingress-nginx
If you want to do it manually, you can set up load balancers that map to specific IP ranges for your nodes. This is for SSH traffic.
Yes, you can use the Kubernetes Ingress object; it will create publicly reachable services.
Mainly, if you are using AWS or DigitalOcean and you use Ingress, it will create a load balancer (ELB or ALB) and a public service, and you can access the server running inside Kubernetes.
You can also do it manually: simply use a Kubernetes Service and expose it via a LoadBalancer or NodePort.
https://kubernetes.io/docs/concepts/services-networking/service/
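A minimal sketch of that manual approach from the linked docs (service name, selector and ports are placeholders) could be:

# Sketch: expose a pod publicly via a LoadBalancer service (cloud providers
# such as AWS provision an ELB/ALB-style load balancer for this)
apiVersion: v1
kind: Service
metadata:
  name: my-server            # placeholder name
spec:
  type: LoadBalancer         # use NodePort instead if no cloud load balancer is available
  selector:
    app: my-server           # placeholder pod label
  ports:
  - port: 80                 # port exposed by the load balancer
    targetPort: 8080         # placeholder container port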