We want to access only local services via Ingress using K3S (1.23) and Traefik.
We have an NGINX gateway running as a DaemonSet on all nodes, exposed as a NodePort Service called gateway on port 30123 with externalTrafficPolicy: Local. When we request https://node1:30123 we consistently get an answer from the nginx pod on node1. ✔
We also have our Traefik Ingress Controller running as a DaemonSet, exposed as a NodePort 30999 with externalTrafficPolicy: Local. When we hit https://node1:30999 we get a load-balanced answer ❌ 🤷:
answer from node1
answer from node2
answer from node1
answer from node3
etc
How can we ensure that https://node1:30999 only gets routed to local pods?
Ingress Resource
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
  name: my-ingress
spec:
  rules:
  - host: node1
    http:
      paths:
      - backend:
          service:
            name: gateway
            port:
              number: 8443
        path: /
        pathType: Prefix
  - host: node2
    http:
      paths:
      - backend:
          service:
            name: gateway
            port:
              number: 8443
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - node1
    - node2
    secretName: tls-secret
Gateway Service
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  clusterIP: ***
  clusterIPs:
  - ***
  externalTrafficPolicy: Local
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: https
    nodePort: 30123
    port: 8443
    protocol: TCP
    targetPort: 8443
  selector:
    app: gateway
  sessionAffinity: None
  type: NodePort
The reason we want only local traffic is that Kubernetes is too slow at evicting pods/endpoints from a Service when a node goes down; it continues to send traffic to dead nodes/pods for minutes after a node disappears. We use an external load balancer with active health checks every 2s to avoid this problem. However, even if our LB targets only healthy nodes, Kubernetes Services still hold stale endpoints and round-robin traffic into nowhere.
I guess this might be due to the fact that you are running Traefik as a DaemonSet, which behaves like a shared daemon running on all nodes, and every node then applies the local traffic policy to the requests it handles.
So basically, running the Ingress Controller as a DaemonSet and setting the controller's Service traffic policy to Local results in behavior that effectively equals the Cluster policy.
Further, the idea of the Ingress Controller is to route traffic to a specific Service in the cluster. Accessing a node directly on the Ingress Controller's Service port will not trigger any forwarding rules. You should instead create an Ingress resource that maps some existing Service (e.g. an nginx test pod). That Service's traffic policy must then be set to Local too, to see the real source IP in the Service's logs.
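A minimal sketch of such a backend Service (the name and selector here are hypothetical):
# Hypothetical backend Service referenced by the Ingress. With
# externalTrafficPolicy: Local, node-port traffic is only delivered to pods
# running on the node that received it, and the client source IP is preserved.
apiVersion: v1
kind: Service
metadata:
  name: nginx-test
spec:
  type: NodePort
  externalTrafficPolicy: Local
  selector:
    app: nginx-test
  ports:
  - port: 80
    targetPort: 80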
Related
We are just getting started with k8s (bare metal on Ubuntu 20.04). Is it possible for ingress traffic arriving at a host for a load balanced service to go to a pod running on that host (if one is available)?
We have some apps that use client-side consistent hashing (on customer ID) to select a service instance to call. The service instances are stateless but maintain in-memory ML models for each customer, so it is useful (but not essential) to have repeated requests for a given customer go to the same instance. Then we can just use anti-affinity to have one pod per host (see the sketch below).
Our existing service discovery mechanism lets the clients find all the instances of the service and the nodes they are running on. All our k8s nodes are running the Nginx ingress controller.
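The anti-affinity part we have in mind is roughly this (a sketch; the image name is hypothetical):
# Hypothetical Deployment: podAntiAffinity on the hostname topology key
# schedules at most one pod of the app per node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: starterservice
spec:
  replicas: 3
  selector:
    matchLabels:
      app: starterservice
  template:
    metadata:
      labels:
        app: starterservice
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: starterservice
            topologyKey: kubernetes.io/hostname
      containers:
      - name: starterservice
        image: example/starterservice:latest  # hypothetical image
        ports:
        - containerPort: 8168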
I finally got this figured out. This was way harder than it should be, IMO! Update: it's not working after all; traffic frequently goes to the wrong pod.
The service needs externalTrafficPolicy: Local (see docs).
apiVersion: v1
kind: Service
metadata:
  name: starterservice
spec:
  type: LoadBalancer
  selector:
    app: starterservice
  ports:
  - port: 8168
  externalTrafficPolicy: Local
The Ingress needs nginx.ingress.kubernetes.io/service-upstream: "true" (see the service-upstream docs).
The nginx.ingress.kubernetes.io/server-alias: "~^starterservice-[a-z0-9]+\\.example\\.com" bit is because our service discovery updates DNS so each instance of the service includes the name of the host it is running on in its DNS name.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: starterservice
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/server-alias: "~^starterservice-[a-z0-9]+\\.example\\.com"
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  rules:
  - host: starterservice.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: starterservice
            port:
              number: 8168
So now a call to https://starterservice-foo.example.com will go to the instance running on k8s host foo.
I believe sticky sessions are what you are looking for. The Ingress does not communicate directly with pods, but with Services. Sticky sessions try to bind requests from the same client to the same pod by setting an affinity cookie.
This is used, for example, with SignalR sessions, where the negotiation request has to land on the same host as the subsequent WebSocket connection.
Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/affinity-mode: persistent
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
Affinity mode "balanced" is the default; if your pod count changes, some of your clients will lose their session. Use "persistent" to have users always connect to the same pod (unless it dies, of course). Further reading: https://github.com/kubernetes/ingress-nginx/issues/5944
I have the following setup in a minikube cluster:
A Spring Boot app deployed in the cluster
name: opaapp and containerPort: 9999
The Service used to expose the app is below:
apiVersion: v1
kind: Service
metadata:
  name: opaapp
  namespace: default
  labels:
    app: opaapp
spec:
  selector:
    app: opaapp
  ports:
  - name: http
    port: 9999
    targetPort: 9999
  type: NodePort
I created an ingress controller and an Ingress resource as below:
apiVersion: networking.k8s.io/v1beta1 # for versions before 1.14 use extensions/v1beta1
kind: Ingress
metadata:
  name: opaapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: opaapp.info
    http:
      paths:
      - path: /
        backend:
          serviceName: opaapp
          servicePort: 9999
I have set up the hosts file as below:
172.17.0.2 opaapp.info
Now, if I access the service as
http://opaapp.info:32746/api/ping : I get the response back.
But if I try to access
http://opaapp.info/api/ping : I get a 404 error.
I am not able to find the error in the configuration.
The nginx ingress controller has been exposed via NodePort 32746, which means nginx is not listening on ports 80/443 in the host's (172.17.0.2) network; rather, nginx is listening on ports 80/443 on the Kubernetes pod network, which is different from the host network. Hence accessing it via http://opaapp.info/api/ping does not work. To make it work the way you are expecting, the nginx ingress controller needs to be deployed with the hostNetwork: true option so that it can listen on ports 80/443 directly in the host (172.17.0.2) network, as discussed here.
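For illustration, a minimal sketch of the relevant part of the controller manifest (resource names, namespace, and image tag are assumptions; the exact manifest depends on how the controller was installed):
# Hypothetical excerpt of an ingress-nginx controller DaemonSet using hostNetwork.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      hostNetwork: true                   # bind 80/443 directly on the node
      dnsPolicy: ClusterFirstWithHostNet  # keep cluster DNS working with hostNetwork
      containers:
      - name: controller
        image: k8s.gcr.io/ingress-nginx/controller:v1.1.1  # hypothetical tag
        ports:
        - containerPort: 80
        - containerPort: 443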
I am new to GKE and Kubernetes. I installed Elasticsearch on GKE using Google Click to Deploy. I also installed nginx-ingress and secured the Elasticsearch service with HTTP basic authentication (through the ingress). I created an external static IP and assigned it to the ingress controller using the loadBalancerIP field in the ingress-controller Service configuration.
Questions:
I have App Engine services running in GCP which need to access this Elasticsearch setup. Can I avoid exposing my Elasticsearch service externally, with some kind of "internal" IP which only my App Engine services can access? Is using a VPC one way of doing this?
I see that my ingress was also assigned an external IP address (the static IP I created was assigned to the nginx-ingress-controller service). However, when I hit this IP on port 80 I get connection refused, and on port 9200 it times out. Can I avoid having two external IPs? How secure is this ingress IP address? What are its open ports?
Here is my ingress configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-realm: Authentication Required - ok
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-type: basic
  name: basic-ingress
  namespace: default
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: elasticsearch-1-elasticsearch-svc
          servicePort: 9200
        path: /
Here is the ingress controller service configuration:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.6.15
    component: controller
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: default
spec:
  clusterIP: <Some IP>
  externalTrafficPolicy: Cluster
  loadBalancerIP: <External IP>
  ports:
  - name: http
    nodePort: 30290
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 30119
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress
  sessionAffinity: None
  type: LoadBalancer
My suggestion is to use two load balancers: one public and one private. To create the private load balancer you just need to add the following annotation to the Service's metadata section:
cloud.google.com/load-balancer-type: "Internal"
Reference:
https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
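A minimal sketch of an internal load balancer Service with that annotation (the service name, selector, and port here are hypothetical):
# Hypothetical internal LoadBalancer Service: reachable only from inside
# the VPC, not from the public internet.
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-internal
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: elasticsearch
  ports:
  - port: 9200
    targetPort: 9200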
I have an ingress defined as:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: zaz-address
    kubernetes.io/ingress.allow-http: "false"
    ingress.gcp.kubernetes.io/pre-shared-cert: foo-bar-com
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /zaz/*
        backend:
          serviceName: zaz-service
          servicePort: 8080
Then the service zaz-service is a NodePort Service, defined as:
apiVersion: v1
kind: Service
metadata:
  name: zaz-service
  namespace: default
spec:
  clusterIP: 10.27.255.88
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 32455
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: zap
  sessionAffinity: None
  type: NodePort
The NodePort Service is successfully selecting the two pods behind it that serve my application. I can see in the GKE services list that it has an IP that looks internal.
When I check in the same interface the ingress, it also looks all fine, but serving zero pods.
When I describe the ingress on the other hand I can see:
Rules:
  Host         Path    Backends
  ----         ----    --------
  foo.bar.com
               /zaz/*  zaz-service:8080 (<none>)
It looks like the ingress is unable to resolve the service's endpoints. What am I doing wrong here? I cannot access the service through the external domain name; I am getting a 404 error.
How can I make the ingress translate the service name zaz-service into the proper IP so it can redirect traffic there?
It seems like wildcards in the path are not supported yet.
Is there any reason not to use just the following in your case?
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /zaz
        backend:
          serviceName: zaz-service
          servicePort: 8080
My mistake was, as expected, not reading the documentation thoroughly.
The port stated in the Ingress path is not a "forwarding" mechanism but a "filtering" one. In my head it made sense that it would redirect http(s) traffic to port 8080, which is the port the Service behind it was listening on, and the Pod behind the Service too.
In reality it would not route traffic to the service unless that traffic arrived on port 8080. To make it cleaner I changed the port in the Ingress from 8080 to 80, and the front-facing port in the Service from 8080 to 80 too.
Now all requests coming from the internet can reach the server successfully.
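For reference, this is roughly what the adjusted manifests look like under that reading (only the ports differ from the question's originals):
# Service: the front-facing port is now 80, still targeting the pod's 8080.
apiVersion: v1
kind: Service
metadata:
  name: zaz-service
spec:
  type: NodePort
  selector:
    app: zap
  ports:
  - port: 80          # the port the Ingress backend references
    targetPort: 8080  # the container port
    protocol: TCP
---
# Ingress: servicePort now matches the Service's port 80.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-ingress
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /zaz/*
        backend:
          serviceName: zaz-service
          servicePort: 80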
I want to host a website (simple nginx+php-fpm) on Google Container Engine. I built a replication controller that controls the nginx and php-fpm pod. I also built a service that can expose the site.
How do I link my service to a public (and reserved) IP Address so that the webserver sees the client IP addresses?
I tried creating an ingress. It provides the client IP through an extra http header. Unfortunately ingress does not support reserved IPs yet:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.org
    http:
      paths:
      - backend:
          serviceName: example-web
          servicePort: 80
        path: /
I also tried creating a service with a reserved IP. This gives me a public IP address but I think the client IP is lost:
apiVersion: v1
kind: Service
metadata:
  name: 'example-web'
spec:
  selector:
    app: example-web
  ports:
  - port: 80
    targetPort: 80
  loadBalancerIP: "10.10.10.10"
  type: LoadBalancer
I would set up the HTTP load balancer manually, but I didn't find a way to configure a cluster IP as a backend for the load balancer.
This seems like a very basic use case to me, and it stands in the way of using Container Engine in production. What am I missing? Where am I wrong?
As you are running on Google Container Engine, you could set up a Compute Engine HTTP load balancer for your static IP. The target proxy will add X-Forwarded-* headers for you.
Set up your Kubernetes service with type NodePort and add a nodePort field. That way the nodePort is reachable via kube-proxy on every node's IP address, regardless of where the pod is running:
apiVersion: v1
kind: Service
metadata:
  name: 'example-web'
spec:
  selector:
    app: example-web
  ports:
  - nodePort: 30080
    port: 80
    targetPort: 80
  type: NodePort
Then create a backend service with an HTTP health check on port 30080 for your instance group (the nodes).