Map node port directly into pod, bypassing services and ingress - kubernetes

Is there a way to map a port directly from the node to a pod running on the same node, bypassing services, load balancers, and ingress? In other words, if a pod is deployed on node X, then node X should listen on port 1234 and map that port directly into the pod, also on port 1234, so there are no cross-node connections. Whichever node Kubernetes decides to deploy the pod on becomes the new host for the external connections.
I am fully aware that this goes against the design principles of a Kubernetes cluster. But I am trying to host an old custom-built cloud app, written for a one-off custom cloud solution, and see if I can run it on Kubernetes. Each pod in the stateful set needs a dedicated public IP assigned to it, because the public IP gets sent to external devices to redirect them to the correct pod. The protocol is also custom, so no Layer 7 load balancer exists for it. The only solution I can come up with is a direct port mapping from the node into the pod.

You can use hostNetwork: true and dnsPolicy: ClusterFirstWithHostNet in the pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 8080
This means the pod listens on port 8080 directly on the host's network and can be reached at <node-ip>:8080 without creating a service or ingress.
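If you would rather not put the whole pod on the host's network, a hostPort on the container is a narrower option that matches the original question more literally. A minimal sketch, assuming a hypothetical app image that listens on 1234:
apiVersion: v1
kind: Pod
metadata:
  name: my-legacy-app            # hypothetical name
spec:
  containers:
  - name: app
    image: my-legacy-app:latest  # hypothetical image, listens on 1234
    ports:
    - containerPort: 1234
      hostPort: 1234             # whichever node the pod lands on listens on 1234 and forwards into the pod
With hostPort, only this one port is bound on the node, and the pod keeps its own network namespace and normal cluster DNS.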

Have you considered using a Service of type NodePort? This gives you a "static" port on every node, which forwards to the pod wherever it is deployed. The port number will be high (30000+) but reliable. This is described in the narrative documentation.
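For reference, a minimal NodePort Service sketch (the names and the nodePort value are assumptions; omit the nodePort field to let Kubernetes pick one from the default 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service  # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app              # must match your pod labels
  ports:
  - port: 1234               # cluster-internal service port
    targetPort: 1234         # port the pod listens on
    nodePort: 31234          # opened on every node; must be within the node port range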

Related

Why Do I Need a NodePort in My Local Kubernetes Cluster?

Excuse my relative networking ignorance, but I've read a lot of docs and still have trouble understanding this (perhaps due to lack of background in networks).
Given this Dockerfile:
FROM node:lts-slim
RUN mkdir /code
COPY package.json /code/
WORKDIR /code
RUN npm install
COPY server.js /code/
EXPOSE 3000
CMD ["node", "server.js"]
...this deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-pod
  template:
    metadata:
      labels:
        app: web-pod
    spec:
      containers:
      - name: web
        image: kahunacohen/hello-k8s
        ports:
        - containerPort: 3000
          protocol: TCP
and this service:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web-pod
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
    name: http
My understanding is that:
The app in my container exposes itself to the outside world on port 3000.
My deployment YAML says "the container is listening on 3000".
My service says to map container port 3000 to service port 80, which is the default HTTP port, so you don't have to add a port to the hostname.
I'm using the NodePort type because on local clusters like Docker Desktop it works out of the box, unlike LoadBalancer. It opens a random port on every node (pod?) between 30000 and 32767, and that node port is how I access my app from outside, e.g. localhost:30543.
Are my assumptions correct? I am unclear why I can't access my app at localhost:80, or just localhost, if the service maps the container port to the outside world. What's the point of mapping 3000 to 80 in the service?
In short, why do I need NodePort?
There are two networking layers, which we could call "inside the cluster" and "outside the cluster". The Pod and the Service each have their own IP address, but these are only inside the cluster. You need the NodePort to forward a request from outside the cluster to inside the cluster.
In a "real" Kubernetes cluster, you'd make a request...
...to http://any-kubernetes-node.example.com:31245/, with a "normal" IP address in the way you'd expect a physical system to have, connecting to the NodePort port, which forwards...
...to http://web-service.default.svc.cluster.local:80/, with a cluster-internal IP address and the service port, which looks at the pods it selects and forwards...
...to http://10.20.30.40:3000/, using the cluster-internal IP address of any of the matching pods and the target port from the service.
The containerPort: in the pod spec isn't strictly required (but if you give it name: http, the service can specify targetPort: http without knowing the specific port number). EXPOSE in the Dockerfile means essentially nothing in this sequence.
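To illustrate the named-port point, a sketch reusing the manifests above: name the port in the pod template and refer to it by name in the service, so the numeric value lives in one place.
# In the deployment's pod template:
      containers:
      - name: web
        image: kahunacohen/hello-k8s
        ports:
        - name: http
          containerPort: 3000
          protocol: TCP
---
# In the service, targetPort can now be the port's name:
  ports:
  - port: 80
    targetPort: http   # resolves to whichever containerPort is named "http"
    protocol: TCP
    name: http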
This sequence also gives you some flexibility in not needing to know where things are running. Say you have 100 nodes and 3 replicas of your pod; the initial connection can be to any node, and the service will forward to all of the target pods, without you needing to know any of these details from the caller.
(For completeness, a LoadBalancer type service requests that a load balancer be created outside the cluster; for example, an AWS ELB. This forwards to any of the cluster nodes as in step 1 above. If you're not in a cloud environment and the cluster doesn't know how to create the external load balancer automatically, it's the same as NodePort.)
If we reduce this to a local Kubernetes installation (Docker Desktop, minikube, kind) the only real difference is that there's only one node; the underlying infrastructure is still built as though it were a multi-node distributed cluster. How exactly you access a service differs across these installations. In Docker Desktop, from the host system, you can use localhost as the "normal" "external" node IP address in the first step.
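For example, with the service above on Docker Desktop, you could look up the assigned node port and then use localhost (hedged: your assigned port will differ):
kubectl get service web-service  # the PORT(S) column shows something like 80:31245/TCP
curl http://localhost:31245/     # substitute the 3xxxx port kubectl reported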

AKS Kubernetes questions

Can someone please explain how pod-to-pod communication works in AKS?
From the docs, I can see it uses the kube-proxy component to send the traffic to the desired pod.
But I have been told that I must use a ClusterIP service and bind all the relevant pods together.
So what is the real flow? Or did I miss something? Below are a couple of questions to make it more clear.
Questions:
How can pods on the same node talk to each other? What is the flow?
How can pods on different nodes in the same cluster talk to each other? What is the flow?
If possible, it would be highly appreciated if you could describe the flows for #1 and #2 for both kubenet and CNI deployments.
Thanks a lot!
For pod-to-pod communication we use services. So first we need to understand why we need a service: a service resolves a DNS name and gives us the exact IP we need to connect to a specific pod. Since you want pod-to-pod communication, you need to create a ClusterIP service.
ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service reachable only from within the cluster. This is the default ServiceType. With a ClusterIP service you can't access a pod from outside the cluster, which is why we use it when we only want pod-to-pod communication.
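A minimal ClusterIP service sketch (names are placeholders; type: ClusterIP is the default, so that line could be omitted):
apiVersion: v1
kind: Service
metadata:
  name: my-backend
spec:
  type: ClusterIP      # the default; reachable only from inside the cluster
  selector:
    app: my-backend    # must match the target pods' labels
  ports:
  - port: 80           # other pods connect to my-backend.default.svc:80
    targetPort: 8080   # port the pods actually listen on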
kube-proxy is the network proxy that runs on each node in your cluster.
It maintains network rules on nodes. These network rules allow network communication to your pods from network sessions inside or outside of your cluster.
Every service is backed by iptables rules, and kube-proxy manages those rules for every service, so kube-proxy is the most vital piece of the network setup in a k8s cluster.
The Kubernetes network model requires that:
all pods can communicate with all other pods without using network address translation (NAT);
all nodes can communicate with all pods without NAT;
the IP that a pod sees itself as is the same IP that others see it as.
On top of these guarantees Kubernetes builds:
Container-to-Container networking
Pod-to-Pod networking
Pod-to-Service networking
Internet-to-Service networking
kube-proxy handles the transmission of packets between pods, and between pods and the outside world. It acts as a network proxy and load balancer for the pods running on its node, implementing load balancing via NAT in iptables.
The kube-proxy process stands between the Kubernetes network and the pods running on that particular node. It is responsible for ensuring that communication is maintained efficiently across all elements of the cluster. When a user creates a Kubernetes service object, kube-proxy translates that object into rules in the local iptables rule set on the worker node. iptables is used to translate the virtual IP assigned to the service object into the IPs of the pods selected by the service.
I hope this clears up the idea of kube-proxy.
Let's see an example of how it works.
Here I use a headless service so that I can connect to a specific pod.
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: default
spec:
  clusterIP: None
  selector:
    app: my-test
  ports:
  - port: 80
    name: rest
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-sts
spec:
  serviceName: my-service
  replicas: 3
  selector:
    matchLabels:
      app: my-test
  template:
    metadata:
      labels:
        app: my-test
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
---
This will create 3 pods: my-sts-0, my-sts-1, my-sts-2. Now if we want to connect to the pod my-sts-0, we just use the DNS name my-sts-0.my-service.default.svc:80; the service resolves the DNS name and provides the exact pod IP of my-sts-0. So to communicate from my-sts-1 to my-sts-0, you can just use this DNS name.
The full template is pod_name.service_name.namespace.svc.cluster-domain.example, but you can skip the cluster-domain.example part; pod_name.service_name.namespace.svc works fine.
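To check the per-pod DNS names you can call one pod from another, for example (assuming curl is available in the image):
kubectl exec my-sts-1 -- curl -s http://my-sts-0.my-service.default.svc:80/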

How to expose kubernetes service on prem using 443/80

Is it possible to expose Kubernetes service using port 443/80 on-premise?
I know some ways to expose services in Kubernetes:
1. NodePort - The default port range is 30000 - 32767, so we cannot expose the service on 443/80. Changing the port range is risky because of potential port conflicts, so it is not a good idea.
2. Host network - Forces the pod to use the host's network instead of a dedicated network namespace. Not a good idea because we lose kube-dns, etc.
3. Ingress - AFAIK it uses NodePort (so we face the first problem again) or a cloud provider's LoadBalancer. Since we run Kubernetes on-premise we cannot use that option.
MetalLB, which allows you to create Kubernetes services of type LoadBalancer in clusters that don't run on a cloud provider, is not yet stable enough.
Do you know any other way to expose a service in Kubernetes on ports 443/80 on-premise?
I'm looking for a "Kubernetes solution", not an external reverse proxy in front of the cluster.
Thanks.
IMHO ingress is the best way to do this on prem.
We run the nginx-ingress-controller as a daemonset with each controller bound to ports 80 and 443 on the host network. Nearly 100% of traffic to our clusters comes in on 80 or 443 and is routed to the right service by ingress rules.
Per app, you just need a DNS record mapping your hostname to your cluster's nodes, and a corresponding ingress.
Here's an example of the daemonset manifest:
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: nginx-ingress-controller
spec:
  selector:
    matchLabels:
      component: ingress-controller
  template:
    metadata:
      labels:
        component: ingress-controller
    spec:
      restartPolicy: Always
      hostNetwork: true
      containers:
      - name: nginx-ingress-lb
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
        ports:
        - name: http
          hostPort: 80
          containerPort: 80
          protocol: TCP
        - name: https
          hostPort: 443
          containerPort: 443
          protocol: TCP
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - '--default-backend-service=$(POD_NAMESPACE)/default-http-backend'
Use an ingress controller as the entry point to the services in your Kubernetes cluster, and run the ingress controller on port 80 or 443.
You need to define an ingress rule for each backend service that you want to access from outside. The ingress controller then lets clients reach the services based on the paths defined in the ingress rules.
If you need to allow access over HTTPS, you need to obtain TLS certificates, load them into secrets, and reference them in the ingress rules.
The most popular one is the NGINX ingress controller; Traefik and HAProxy ingress controllers are alternative solutions.
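As a sketch, an ingress rule with TLS could look like this (hostname, secret name, and backend service are placeholders; the secret is a kubernetes.io/tls secret holding the certificate and key):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: app-example-com-tls  # TLS certificate + key for the host
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app             # the backend service to route to
            port:
              number: 80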
The idea of a hostNetwork proxy is actually not bad; the OpenShift Router uses it, for example. You define two or three nodes to run the proxy and use DNS load balancing in front of them.
And you can still use kube-dns with hostNetwork; see https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
You are probably running a kubeadm on-premise Kubernetes setup with an nginx ingress controller on unix/linux hosts and can't safely expose ports in the restricted system port range (0-1023).
You either need to set up your own dedicated load balancer pair (e.g. Linux boxes running HAProxy), or alternatively use an existing load balancer if you are lucky enough to be in a corporate environment that already provides load balancing (e.g. an F5 LB).
Then you can configure the load balancers to forward 443/80 requests to your cluster nodes' 30443/30080 ports, which are handled by your cluster's ingress controller.
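For the 30443/30080 part, you can pin the ingress controller's node ports instead of letting Kubernetes assign them. A sketch, assuming an nginx ingress controller whose pods carry the app.kubernetes.io/name: ingress-nginx label:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx  # assumed controller pod label
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080   # the external load balancer forwards :80 here
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443   # the external load balancer forwards :443 here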

Kubernetes: How to map service to a local port inside pod

Is it possible to map a Kubernetes service to a specific port for a group of pods (deployment)?
E.g. I have this service (just as an example):
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
And I want this service to be available as http://localhost:8081/ inside the pods of some specific deployment.
It seems to me that I saw this in the K8s docs several days ago, but I cannot find it right now.
It may be beneficial to review your usage of K8s services. If you have exposed a deployment of pods as a service, the service defines the port mappings, and you can access the service on its cluster DNS name at the service port.
If you must access your service via localhost, I assume your use case involves tightly coupled containers in one pod. In that case, you can define a containerPort in your deployment YAML and put the containers that need to communicate with each other over localhost into the same pod.
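For illustration, a minimal sketch of such a pod (both image names are made up); the containers share one network namespace, so the sidecar can reach the app at http://localhost:8081/:
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: app
    image: my-app:latest      # hypothetical; listens on 8081
    ports:
    - containerPort: 8081
  - name: sidecar
    image: my-sidecar:latest  # hypothetical; calls http://localhost:8081/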
If by localhost you are referring to your own local development computer, you can use a port-forward. As long as the port-forwarding process is running, you can access the pods' ports from your localhost. You can find more in the port-forwarding documentation. A simple example:
kubectl port-forward redis-master-765d459796-258hz 6379:6379
# or
kubectl port-forward service/redis 6379:6379
Hope this helps!

Assign public ip to the service which is running on Kubernetes

I have an application (let's say integration-protocol-api), and this application wants to talk to another application located on a different network (let's call it another-integration-protocol).
The problem is that on the another-integration-protocol side there is a whitelist which allows connections only from selected IP addresses.
But my integration-protocol-api is Dockerized and running on a Kubernetes cluster, so its IP address changes every time I restart my pod.
How can I assign a public, static IP to my Kubernetes pod?
There are several approaches, depending on your actual setup/needs; I'll try to give some options here:
Tie the pod to a specific node and expose that node's IP address through a service. This would be something along these lines:
# Quick deployment/pod manifest node selector (affinity is better)
...
spec:
  nodeSelector:
    kubernetes.io/hostname: my-node-name
...
# Service manifest
apiVersion: v1
kind: Service
metadata:
  name: svc-myservice
  labels:
    app: myapp
    tier: frontend
spec:
  selector:
    app: myapp
    tier: frontend
  ports:
  - name: tcpserviceport
    protocol: TCP
    port: 8080
    targetPort: 80
  externalIPs:
  - 111.222.222.111
The pod should be in the same namespace, tied to that node via either a node selector or affinity rules, and carry the same labels as the service's selector so the service picks it up. In this example, the cluster node named my-node-name has the IP address 111.222.222.111, and the pod is reachable through that address on port 8080.
If applicable, expose the service through an ingress and whitelist only the ingress's public IP. Depending on your namespace separation, you reference your pod (wherever it might run) in the ingress through the corresponding service, using either the service name (within the same namespace) or an FQDN such as:
<service-name>.<namespace-name>.svc.cluster.local
Here is a good overview of some of these methods from the Kubernetes docs: https://kubernetes.io/docs/tutorials/kubernetes-basics/expose-intro/
A pod makes any outbound request with its node's IP address as the source, so you could whitelist your cluster nodes' IP addresses and it should work.
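To find the node addresses to whitelist:
kubectl get nodes -o wide  # the INTERNAL-IP / EXTERNAL-IP columns list the node addresses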