Kubernetes ingress to pod running on same host? - kubernetes

We are just getting started with k8s (bare metal on Ubuntu 20.04). Is it possible for ingress traffic arriving at a host for a load balanced service to go to a pod running on that host (if one is available)?
We have some apps that use client-side consistent hashing (keyed on customer ID) to select a service instance to call. The service instances are stateless but maintain in-memory ML models for each customer. So it is useful (but not essential) to have repeated requests for a given customer go to the same service instance. Then we can just use pod anti-affinity to have one pod per host.
Our existing service discovery mechanism lets the clients find all the instances of the service and the nodes they are running on. All our k8s nodes are running the Nginx ingress controller.
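For context, a minimal sketch of the kind of anti-affinity we have in mind (names, labels and the image are placeholders, not our real config):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: starterservice
spec:
  replicas: 3
  selector:
    matchLabels:
      app: starterservice
  template:
    metadata:
      labels:
        app: starterservice
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: starterservice
              topologyKey: kubernetes.io/hostname   # at most one pod per node
      containers:
        - name: starterservice
          image: registry.example.com/starterservice:latest   # placeholder image
          ports:
            - containerPort: 8168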

I finally got this figured out. This was way harder than it should be, IMO! Update: it's not working after all; traffic frequently goes to the wrong pod.
The service needs externalTrafficPolicy: Local (see docs).
apiVersion: v1
kind: Service
metadata:
  name: starterservice
spec:
  type: LoadBalancer
  selector:
    app: starterservice
  ports:
    - port: 8168
  externalTrafficPolicy: Local
The Ingress needs nginx.ingress.kubernetes.io/service-upstream: "true" (service-upstream docs).
The nginx.ingress.kubernetes.io/server-alias: "~^starterservice-[a-z0-9]+\\.example\\.com" bit is there because our service discovery updates DNS, so each instance of the service gets a DNS name that includes the host it is running on.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: starterservice
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/server-alias: "~^starterservice-[a-z0-9]+\\.example\\.com"
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  rules:
    - host: starterservice.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: starterservice
                port:
                  number: 8168
So now a call to https://starterservice-foo.example.com will go to the instance running on k8s host foo.
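To sanity-check the routing (hostnames follow our naming scheme; the exact commands are just an example of how one might verify it):
kubectl get pods -l app=starterservice -o wide    # see which node each pod landed on
curl -sk https://starterservice-foo.example.com/  # should be served by the pod on node foo (check the app logs)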

I believe Sticky Sessions is what you are looking for. Ingress does not communicate directly with pods, but with services. Sticky sessions try to bind requests from the same client to the same pod by setting an affinity cookie.
This is used for example with SignalR sessions, where the negotiation request has to be on the same host as the following websocket connection.
Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/affinity-mode: persistent
spec:
  rules:
    - http:
        paths:
          - path: /testpath
            pathType: Prefix
            backend:
              service:
                name: test
                port:
                  number: 80
Affinity mode "balanced" is the default. With it, if your pod count changes, some of your clients will lose their session. Use "persistent" to have users always connect to the same pod (unless it dies, of course). Further reading: https://github.com/kubernetes/ingress-nginx/issues/5944
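A quick way to confirm the affinity cookie is actually being issued (the ingress address is a placeholder; by default ingress-nginx names the cookie INGRESSCOOKIE unless you override it with the session-cookie-name annotation):
curl -I http://<ingress-address>/testpath
# look for a response header like:
# Set-Cookie: INGRESSCOOKIE=...; Path=/; HttpOnly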

Related

Does ExternalName service act as a reverse proxy?

I have the following need:
There is an API that may be accessed only from allowlisted IPs. I'd like to make this API available publicly.
I thought about the following solution:
Create a service of type ExternalName:
kind: Service
apiVersion: v1
metadata:
  name: my-svc
spec:
  type: ExternalName
  externalName: restricted-api.com
Create an ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
    - hosts:
        - mysite.com
      secretName: mysite-tls
  rules:
    - host: example.com
      http:
        paths:
          - path: /api(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: my-svc
                port:
                  name: https
Is my understanding correct that, with such a setup, when I call https://example.com/request, at the K8s level the request will be sent to https://restricted-api.com/request? The caller would not know that there is communication with restricted-api.com. Since the clients' IPs are dynamic, restricted-api.com would not allow them to call it directly.
The k8s IP is static and I could allowlist it.
OK, if these are just your thoughts, I would recommend looking into this setting:
externalTrafficPolicy: Local is a field on the Kubernetes Service resource that can be set to preserve the client source IP. When this value is set, the actual IP address of a client (e.g., a browser or mobile application) is propagated to the Kubernetes service instead of the IP address of the node.
You can find more information here or in the official Kubernetes docs.
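For clarity, a minimal sketch of where that field lives (the Service name, selector and ports here are assumptions; typically you would set it on the LoadBalancer Service that fronts your ingress controller):
apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer-svc        # hypothetical name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local     # preserve the client source IP
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443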
Feel free to reach out again if you start implementing this and run into any issues.

Nginx ingress sends private IP for X-Real-IP to services

I have created a Nginx Ingress and Service with the following code:
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: ClusterIP
  selector:
    name: my-app
  ports:
    - port: 8000
      targetPort: 8000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    name: myingress
spec:
  rules:
    - host: mydomain.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: my-service
                port:
                  number: 8000
Nginx ingress installed with:
helm install ingress-nginx ingress-nginx/ingress-nginx.
I have also enabled the proxy protocol for the ELB. But in the nginx logs I don't see the real client IP in the X-Forwarded-For and X-Real-IP headers. These are the final headers I see in my app logs:
X-Forwarded-For:[192.168.21.145] X-Forwarded-Port:[80] X-Forwarded-Proto:[http] X-Forwarded-Scheme:[http] X-Real-Ip:[192.168.21.145] X-Request-Id:[1bc14871ebc2bfbd9b2b6f31] X-Scheme:[http]
How do I get the real client IP instead of the ingress pod IP? Also, is there a way to know which headers the ELB is sending to the ingress?
One solution is to use externalTrafficPolicy: Local (see documentation).
In fact, according to the kubernetes documentation:
Due to the implementation of this feature, the source IP seen in the target container is not the original source IP of the client.
...
service.spec.externalTrafficPolicy - denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints. There are two available options: Cluster (default) and Local. Cluster obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading. Local preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type services, but risks potentially imbalanced traffic spreading.
If you want to follow this route, update your nginx ingress controller Service and add the externalTrafficPolicy field:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
spec:
  ...
  externalTrafficPolicy: Local
A possible alternative could be to use the Proxy Protocol (see documentation).
The Proxy Protocol should be enabled in the ConfigMap for the ingress controller as well as on the ELB.
For an L4 (TCP) ELB, use use-proxy-protocol; for an L7 (HTTP) ELB, use use-forwarded-headers.
# configmap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  use-forwarded-headers: "true"
  use-proxy-protocol: "true"
https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#proxy-protocol
Just expanding on @strongjz's answer.
By default, the load balancer created in AWS for a Service of type LoadBalancer will be a Classic Load Balancer operating on Layer 4, i.e., proxying at the TCP level.
For this scenario, the best way to preserve the real IP is to use the Proxy Protocol, because it can carry the client address at the TCP level.
To do this, you should enable the Proxy Protocol both on the load balancer and on nginx-ingress.
These values should do it for a Helm installation of nginx-ingress:
controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
  config:
    use-proxy-protocol: "true"
    real-ip-header: "proxy_protocol"
The service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*" annotation tells the aws-load-balancer-controller to create your LoadBalancer with the Proxy Protocol enabled. I'm not sure what happens if you add it to a pre-existing ingress-nginx installation, but it should work too.
The use-proxy-protocol and real-ip-header options are passed to Nginx to enable the Proxy Protocol there as well.
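For completeness, one way to apply those values with Helm (release name, namespace and values file name are assumptions):
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  -f values.yaml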
Reference:
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/annotations/#proxy-protocol-v2
https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#proxy-protocol
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-proxy-protocol

kubernetes - route ingress traffic to specific pod for some paths

I have multiple pods that scale up and down automatically.
I am using an Ingress as the entry point. I need to route external traffic to a specific pod based on some condition (let's say the path). At the point the request is made, I am sure the specific pod is up.
For example, let's say I have the domain someTest.com, which normally routes traffic to pods 1, 2 and 3 (let's say I identify them by internal IPs: 192.168.1.10, 192.168.1.11 and 192.168.1.13).
When I call someTest.com/specialRequest/12, I need the traffic routed to 192.168.1.12; when I call someTest.com/specialRequest/13, I want it routed to 192.168.1.13. For normal cases (someTest.com/normalRequest) I just want the load balancer to do its job normally.
If the pods scale up and 192.168.1.14 appears, I need to be able to call someTest.com/specialRequest/14 and be routed to that pod.
Is there any way I can achieve this?
Yes, you can easily achieve this using Kubernetes Ingress. Here is a sample code that might help:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
    - host: YourHostName.com
      http:
        paths:
          - path: /
            backend:
              serviceName: Service1
              servicePort: 8000
          - path: /api
            backend:
              serviceName: Service2
              servicePort: 8080
          - path: /admin
            backend:
              serviceName: Service3
              servicePort: 80
Please note that the ingress rules reference serviceNames, not pod names, so you will have to create Services for your pods. Here is an example of a Service which exposes nginx in Kubernetes:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    io.kompose.service: nginx
spec:
  ports:
    - name: "80"
      port: 80
      targetPort: 80
  selector:
    io.kompose.service: nginx
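If you really need to reach one specific pod, one approach (a sketch, assuming the pods come from a StatefulSet, whose pods automatically carry a statefulset.kubernetes.io/pod-name label) is to create one Service per pod:
apiVersion: v1
kind: Service
metadata:
  name: myapp-12        # hypothetical Service for the pod behind /specialRequest/12
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    statefulset.kubernetes.io/pod-name: myapp-12   # label set automatically on StatefulSet pods
The /specialRequest/12 path in the Ingress would then point at this Service.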
I am not aware of built-in functionality to implement this (if this is what you really want). You can achieve it by building your own operator for Kubernetes. Your operator may provision a Pod + Ingress combo which does exactly what you want: forward your traffic to a single pod. Or you can provision 2 pods and 1 ingress to achieve an HA setup.
Depending on the Ingress controller you are using, it may also be possible to group multiple ingress resources under the same load balancer.
Here is a brief diagram of how this could look.
Would it be feasible to create another application that can get the path and target the pod directly via a pattern in the naming convention? For example: ${podnamePrefix+param}.${service name}.${namespace}.svc.cluster.local
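That per-pod DNS pattern only resolves if the pods belong to a StatefulSet backed by a headless Service (clusterIP: None); a minimal sketch, with names as assumptions:
apiVersion: v1
kind: Service
metadata:
  name: myapp-headless
spec:
  clusterIP: None        # headless: gives each StatefulSet pod a stable DNS name
  selector:
    app: myapp
  ports:
    - port: 8000
Pods would then be reachable as, e.g., myapp-0.myapp-headless.<namespace>.svc.cluster.local.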

What's the easiest way to add TLS to a Kubernetes service?

I have a simple web server exposed publicly on Kubernetes on GKE, and a domain registered. I'm looking to add TLS so it's accessible via HTTPS. I've heard a lot about using Let's Encrypt and ended up attempting this: https://github.com/jetstack/cert-manager/blob/master/docs/tutorials/acme/quick-start/index.rst, but found it totally overwhelming. Is there a simpler approach to using Let's Encrypt, given that my deployment is just a single service and pod?
The config I'm using is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: gcr.io/my-repo
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
          readinessProbe:
            initialDelaySeconds: 10
            httpGet:
              path: /healthz
              port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web-balancer-service
spec:
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    run: web
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress-app
spec:
  rules:
    - host: my.domain.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: web-balancer-service
              servicePort: 8080
========================================
EDIT: Following @Utku Özdemir's suggestion, I tried to codify those changes into YAML. I created the IP address with:
gcloud compute addresses create example-ip-address --global
And the certificate and provisioning with: https://gist.github.com/nickponline/ab74d3d179e21474551b7596c6478eea
Everything provisions correctly, but when I inspect the ManagedCertificate with kubectl describe ManagedCertificates example-certificate, it says:
Spec:
  Domains:
    app.domain.xyz
Status:
  Certificate Name:    xxxxxxxxxxxxxxxxxx
  Certificate Status:  Provisioning
  Domain Status:
    Domain:  app.domain
    Status:  FailedNotVisible
Events:  <none>
I've waited 24 hours, so I assume this isn't going to change.
Since you use the ingress controller of GKE itself, when you create an Ingress resource it triggers the creation of a Load Balancer resource in Google Cloud Platform. Normally, SSL termination is the responsibility of the ingress controller, so that GCP load balancer is responsible for doing the SSL termination.
This means cert-manager will not work for your case, since the certificates will live outside of your cluster, and the traffic will already be SSL-terminated before coming into your cluster.
Luckily, GCP has self-provisioned SSL (Let's Encrypt) support. To make use of it, you need to follow the steps below:
Go to the Load Balancing screen on GCP, switch to advanced view, and jump to the Certificates tab (or simply click here).
Create a new SSL certificate, choosing "Create Google-managed certificate". In the domain field, write down the exact domain you want the SSL certificate for.
Go to the External IP Addresses screen and reserve a new static IP address. Choose the type to be global (at the time of writing, the GCP ingress controller only supports global IP addresses).
Take note of the static IP that you reserved (in this example it is 34.95.84.106).
Go to your domain registrar and add an A record for your domain (the one in the SSL certificate) pointing to the static IP you allocated. In this example, it would be my-app.example.com -> 34.95.84.106.
Finally, you will need to edit your ingress to add 2 annotations that hint the Google Cloud ingress controller to use the static IP you reserved and the certificate you created. See the ingress example below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress-app
  annotations:
    ingress.gcp.kubernetes.io/pre-shared-cert: my-ssl-certificate # the name of the SSL certificate resource you created
    kubernetes.io/ingress.global-static-ip-name: my-static-ip # the name of the static IP resource you created
    kubernetes.io/ingress.allow-http: "false" # if you want to block plain HTTP
spec:
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: web-balancer-service
              servicePort: 8080
Apply it, and verify that the changes are reflected by going to the Load Balancers screen on GCP.
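For reference, the console steps above can also be done from the CLI; a hedged sketch (resource names and domain are assumptions):
gcloud compute ssl-certificates create my-ssl-certificate \
  --domains=my-app.example.com --global
gcloud compute addresses create my-static-ip --global
gcloud compute ssl-certificates describe my-ssl-certificate --global   # watch the provisioning status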
Important Notes:
If there is already a GCP load balancer that was created by an Ingress, the changes you make (annotations) on the Ingress will not be reflected on the existing load balancer. Therefore, delete your existing Ingress, make sure the existing load balancer disappears, and create the Ingress with the correct annotations so the load balancer is configured correctly.
For the Let's Encrypt provisioning to work, your DNS record should be in place. The provisioner checks the owner of the domain via DNS before issuing the certificate. Also, the initial provisioning can take quite some time (up to half an hour).

How to expose an Ingress for external access in Kubernetes?

I have a Kubernetes cluster on a private network (a private server, not AWS or Google Cloud) and I created a Service to access my app. However, I need to be able to access it from outside the cluster, so I created an Ingress and installed ingress-nginx in the cluster.
This is the YAML I'm using after making several attempts:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: k8s.local
      http:
        paths:
          - path: /
            backend:
              serviceName: nginx
              servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP
  selector:
    name: nginx
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  # selector:
  #   app: nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: echoserver
          image: nginx
          ports:
            - containerPort: 80
I applied the YAML like this: kubectl create -f file.yaml
In the /etc/hosts file I added k8s.local pointing to the IP of the master server.
When trying the command on or off the master server, a "Connection refused" message appears:
$ curl http://172.16.0.18:80/ -H 'Host: k8s.local'
I do not know if it's important, but I'm using Flannel in the cluster.
My idea is just to create a 'hello world' and expose it outside the cluster!
Do I need to change anything in the configuration to allow this access?
YAML file edited:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    # nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: k8s.local
      http:
        paths:
          - path: /teste
            backend:
              serviceName: nginx
              servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer # NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: echoserver
          image: nginx
          ports:
            - containerPort: 80
You can deploy the ingress controller as a DaemonSet with host port 80. The Service for the controller will not matter then; you can point your domain at any node in your cluster.
You can also use a NodePort-type Service, but that will force you to use a port in the 30000 range; you will not be able to use port 80.
Of course, the best solution is to use a cloud provider with a load balancer.
You can make it work with a plain nginx pod, but the recommended method is to install a Kubernetes ingress controller; in your case you are using nginx, so you can install an nginx ingress controller.
Here is some information on how to install it.
If you want to allow external access, you can also expose the nginx ingress controller as a LoadBalancer service. You can also use NodePort, but then you will have to manually point a load balancer at the port on your Kubernetes nodes.
And yes, the selector on the Service needs to be:
selector:
  app: nginx
In this case NodePort would work. It opens a high port number on every node (the same port on every node), so you can use any of those nodes. Place a load balancer in front if you want, and point its backend pool at the instances you have running. Do not use ClusterIP; it is only for internal usage.
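A sketch of what that would look like for this example (the nodePort value is an assumption; it must fall inside the cluster's node-port range, 30000-32767 by default):
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080   # reachable on every node at <node-ip>:30080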
If you run your cluster on bare metal, you need to tell the nginx-ingress controller to use hostNetwork: true, added to the template/spec part of mandatory.yaml.
That way the pod running the ingress controller will listen on ports 80 and 443 of the host node.
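Roughly the relevant excerpt of the controller manifest with that change (only the fields of interest shown; the rest of mandatory.yaml stays as shipped):
spec:
  template:
    spec:
      hostNetwork: true                      # bind directly to ports 80/443 on the node
      dnsPolicy: ClusterFirstWithHostNet     # commonly paired with hostNetwork so cluster DNS keeps working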
https://github.com/alexellis/inlets is the easiest way of doing what you want.
Note: encryption requires wss://, which requires TLS certs. If you want fully automated encryption plus the ability to use inlets as a Layer 4 LB, you should use inlets PRO; it's very cheap compared to other cloud alternatives.
I've also been able to set up the OSS / non-Kubernetes-operator version of inlets with encryption / wss (WebSockets Secure), using the open-source version of inlets as a Layer 7 LB (it just took some manual configuration and wasn't fully automated like the pro version).
https://blog.alexellis.io/https-inlets-local-endpoints/
I was able to get public-internet HTTPS plus an nginx ingress controller onto minikube, and tested two sites routed using Ingress objects, in about 3-4 hours, with no good guide to doing it and being new to Caddy/WebSockets (though experienced with Kubernetes Ingress).
Basically:
Step 1.) Create a $0.007/hour or $5/month VPS on DigitalOcean with a public IP.
Step 2.) Point mysite1.com, *.mysite1.com, mysite2.com, *.mysite2.com to the public IP of the VPS.
Step 3.) SSH into the machine and install inlets + Caddy v1.0.3 + a Caddyfile; here's mine:
mysite1.com, *.mysite1.com, mysite2.com, *.mysite2.com

proxy / 127.0.0.1:8080 {
  transparent
}

proxy /tunnel 127.0.0.1:8080 {
  transparent
  websocket
}

tls {
  max_certs 10
}
Step 4.) Deploy 1 inlets deployment on the Kubernetes cluster, use wss to your VPS, and point the inlets deployment at an ingress controller Service of type ClusterIP.
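A rough sketch of step 4 as a Deployment (image tag, flag names and resource names are assumptions and vary by inlets version; the token should really come from a Secret):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inlets-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inlets-client
  template:
    metadata:
      labels:
        app: inlets-client
    spec:
      containers:
        - name: inlets-client
          image: inlets/inlets:2.7.0                 # assumed OSS image/tag
          command: ["inlets", "client"]
          args:
            - "--remote=wss://vps.example.com"       # the VPS running Caddy + the inlets server
            - "--upstream=http://ingress-nginx-controller.ingress-nginx.svc.cluster.local:80"
            - "--token=REPLACE_ME"                   # shared token agreed with the inlets server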
The basics of what's happening are:
1.) Caddy leverages Let's Encrypt Free to automatically get HTTPS certs for every website you point at the Caddy server.
2.) Your inlets deployment starts a bidirectional VPN tunnel, using websockets, with the VPS that has a public IP. (Warning: the VPN tunnel will only be encrypted if you specify wss, and that requires the server to have a TLS cert, which it gets from "LEF".)
3.) Caddy is now a public L7 LB/reverse proxy that terminates HTTPS and forwards to your ingress controller over an encrypted websockets VPN tunnel. From there it's normal-ish ingress.
4.) Traffic flow: DNS -(resolves IP)-> (HTTPS) VPS/L7 reverse proxy -encrypted VPN tunnel-> inlets pod from the inlets Deployment -L7 cleartext in-cluster redirect to-> ingress controller Service -> ingress controller pod -L7 redirect to-> ClusterIP services/sites defined by Ingress objects.