I had sticky sessions working in my dev environment with minikube with the following configuration:
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gl-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "projects/oceanic-isotope-199421/global/addresses/web-static-ip"
spec:
  backend:
    serviceName: gl-ui-service
    servicePort: 80
  rules:
  - http:
      paths:
      - path: /api/*
        backend:
          serviceName: gl-api-service
          servicePort: 8080
Service:
apiVersion: v1
kind: Service
metadata:
  name: gl-api-service
  labels:
    app: gl-api
  annotations:
    ingress.kubernetes.io/affinity: 'cookie'
spec:
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: gl-api
Now that I have deployed my project to GKE, sticky sessions no longer function. I believe the reason is that the Global Load Balancer configured in GKE does not have session affinity with the NGINX Ingress controller. Anyone have any luck wiring this up? Any help would be appreciated. I want to establish session affinity: Client Browser > Load Balancer > Ingress > Service. The actual session lives in the pods behind the service. It's an API Gateway (built with Zuul).
Session affinity is not available yet in the GCE/GKE Ingress controller.
In the meantime, and as a workaround, you can use the GCE API directly to create the HTTP load balancer. Note that you can't use Ingress at the same time in the same cluster.
1. Use NodePort for the Kubernetes Service. Set the value of the port in spec.ports[*].nodePort, otherwise a random one will be assigned (see the sketch after this list).
2. Disable kube-proxy SNAT load balancing.
3. Create a Load Balancer from the GCE API, with cookie session affinity enabled. As the backend, use the port from step 1.
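A sketch of steps 1 and 2 on the Service side, reusing the gl-api labels from the question. Note the exact external-traffic mechanism varies by Kubernetes version (an annotation around 1.4-1.6, spec.externalTrafficPolicy: Local later), so treat the annotation key below as an assumption to verify against your cluster version:
apiVersion: v1
kind: Service
metadata:
  name: gl-api-service
  annotations:
    service.beta.kubernetes.io/external-traffic: OnlyLocal  # disables the kube-proxy SNAT hop on older clusters
spec:
  type: NodePort
  selector:
    app: gl-api
  ports:
  - port: 8080
    nodePort: 30080   # pinned so the GCE backend service can reference it
    protocol: TCP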
Good news! They finally support these kinds of tweaks as beta features!
Beginning with GKE version 1.11.3-gke.18, you can use an Ingress to configure these properties of a backend service:
Timeout
Connection draining timeout
Session affinity
The configuration information for a backend service is held in a custom resource named BackendConfig, which you can "attach" to a Kubernetes Service.
Together with other sweet beta features (like CDN, Armor, etc.) you can find how-to guides here:
https://cloud.google.com/kubernetes-engine/docs/how-to/configure-backend-service
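A minimal sketch of the attachment, assuming the beta annotation key of that era and hypothetical resource names (check the guide above for the exact form for your GKE version):
apiVersion: v1
kind: Service
metadata:
  name: gl-api-service
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"8080": "gl-api-backendconfig"}}'  # port -> BackendConfig name (names assumed)
spec:
  type: NodePort
  selector:
    app: gl-api
  ports:
  - port: 8080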
Based on this: https://github.com/kubernetes/ingress-gce/blob/master/docs/annotations.md
there's no annotation available which could affect the session affinity setting of the Google Cloud Load Balancer (GCLB) that is created as a result of the Ingress creation. As such:
This has to be turned on by hand: either, as suggested above, by creating the LB yourself, or by letting the ingress controller do so and then changing the backend configuration for each backend (via the GUI or the gcloud CLI). IMHO the latter seems faster and less error-prone. (Tested: the cookie "GCLB" was returned by the LB after the config change propagated automatically, and subsequent requests including the cookie were routed to the same node.)
As rightfully pointed out by Matt-y-er: service.spec.externalTrafficPolicy has to be set to "Local" to disable forwarding from the node the GCLB selected to another node. However, one would still need to ensure that either:
the GCLB does not send traffic to nodes which don't run the pod, or
there's a pod running on all nodes (and only a single pod, as the externalTrafficPolicy setting would not prevent load balancing over multiple local pods).
With regard to #3, the simple solution:
Convert the Deployment to a DaemonSet -> there will be exactly one pod on each node (https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) at all times.
The more complicated solution (which allows having fewer pods than nodes):
It seems that the GCLB's health check doesn't need to be adjusted, as the Ingress rule definition automatically sets up a health check to the backend (and not to the default healthz service).
Supply anti-affinity rules to make sure there's at most a single instance of a pod on each node (https://kubernetes.io/docs/concepts/configuration/assign-pod-node/).
Note: The above anti-affinity version was tested on 24th July 2018 with 1.10.4-gke.2 kubernetes version on a 2 node cluster running COS (default GKE VM image)
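As an illustrative sketch of that anti-affinity approach (pod label borrowed from the question's app: gl-api; adapt to your own deployment), the pod template would carry something like:
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: gl-api                         # must match the pods' own label
        topologyKey: kubernetes.io/hostname     # at most one such pod per node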
I was trying the GKE tutorial for that on version 1.11.6-gke.6 (the latest available).
Stickiness was not there... the only thing that worked was setting externalTrafficPolicy: Local on the service...
spec:
  type: NodePort
  externalTrafficPolicy: Local
I opened a defect with Google about this, and they accepted it, without committing to an ETA.
https://issuetracker.google.com/issues/124064870
For the BackendConfig of the ingress loadbalancer, documentation can be found here:
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features
An example snippet for the generated-cookie affinity type is:
spec:
  timeoutSec: 1800
  connectionDraining:
    drainingTimeoutSec: 1800
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
    affinityCookieTtlSec: 1800
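For context, a complete manifest would wrap that spec in the BackendConfig custom resource. A minimal sketch, assuming the beta API group of that era and a hypothetical name:
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: my-backendconfig   # hypothetical name, referenced from the Service's backend-config annotation
spec:
  timeoutSec: 1800
  connectionDraining:
    drainingTimeoutSec: 1800
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
    affinityCookieTtlSec: 1800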
Related
I have deployed a StatefulSet in AKS - my goal is to load balance traffic to my StatefulSet.
From my understanding, I can define a LoadBalancer Service that can route traffic based on selectors, something like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: web
  selector:
    app: nginx
However, I don't necessarily want to go down the LoadBalancer route and would prefer Ingress to do this work for me. My question is: can any of the ingress controllers support routing rules that do path-based routing to endpoints based on selectors, instead of routing to another service?
Update
To elaborate more on the scenario - each pod in my StatefulSet is a stateless node doing data processing of an HTTP feed. I want my ingress service to be able to load balance traffic across these StatefulSet pods (honoring keep-alives etc.), however given the nature of StatefulSets in k8s they are currently exposed through a headless service. I am not sure if a headless service can load balance traffic to my StatefulSet pods.
Update 2
A quick search reveals that a headless service does not load balance:
Sometimes you don't need load-balancing and a single Service IP. In this case, you can create what are termed "headless" Services, by explicitly specifying "None" for the cluster IP (.spec.clusterIP).
As far as I know, it's not possible to do selector-based routing with Ingress.
Selector-based routing is mostly used during a blue-green or canary deployment, and you can only achieve it by using a service mesh. You can use any service mesh, like Istio or App Mesh, to do selector-based routing.
I have deployed a statefulset in AKS - My goal is to load balance
traffic to my statefulset.
If your goal is just to load balance traffic, you can use an ingress controller, though I'm still not sure about the scenario you are trying to explain.
By default, a Kubernetes Service also load balances the traffic across the pods.
The flow will be something like: DNS > Ingress > ingress controller > Kubernetes Service (load balancing here) > any of the StatefulSet pods
+1 to Harsh Manvar's answer, but let me also add my 3 cents.
My question is can any of the ingress controller support routing rules
which can do Path based routing to endpoints based on selectors?
Instead of routing to another service.
To the best of my knowledge, the answer to your question is no, it can't, and this doesn't even depend on a particular ingress controller implementation. Note that various ingress controllers, no matter how different they may be when it comes to implementation, must conform to the general specification of the Ingress resource described in the official Kubernetes documentation. You don't have different kinds of Ingresses depending on what controller is used.
Ingress and Service work at different layers of abstraction. While a Service exposes a set of pods using a selector, e.g.:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp 👈
path-based routing performed by Ingress is always done between Services:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test 👈
            port:
              number: 80
I am not sure if a headless service can load balance traffic to my statefulsets?
The first answer is "no". Why?
A k8s Service is implemented by kube-proxy. Kube-proxy itself can work in two modes:
iptables (also known as netfilter)
ipvs (also known as LVS/Linux Virtual Server)
Load balancing in iptables mode is a NAT iptables rule: from the ClusterIP address to the list of Endpoints.
Load balancing in ipvs mode is a VIP (LVS Virtual IP) with the Endpoints as upstreams.
So, when you create a k8s Service with clusterIP set to None, you are saying exactly:
"I need this service WITHOUT load balancing"
Setting clusterIP to None causes kube-proxy NOT to create the NAT rule in iptables mode or the VIP in ipvs mode. There will be nothing to load balance traffic across the pods selected by this particular Service's selector.
The second answer is "it could be". Why?
You are free to create a headless Service with the desired pod selector. A DNS query to this Service will return the list of DNS A records for the selected pods. Then you can use this data to implement load balancing YOUR way.
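A minimal sketch of such a headless Service, reusing the app: nginx selector from the question (the name is hypothetical):
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless   # hypothetical name
spec:
  clusterIP: None        # headless: no VIP, no kube-proxy load balancing
  selector:
    app: nginx
  ports:
  - name: web
    port: 80
From inside the cluster, a lookup such as nslookup nginx-headless.<namespace>.svc.cluster.local should then return one A record per ready pod, and your client can balance over those addresses however it likes.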
I have a working Nexus 3 pod, reachable on port 30080 (with NodePort): http://nexus.mydomain:30080/ works perfectly from all hosts (from the cluster or outside).
Now I'm trying to make it accessible on port 80 (for obvious reasons).
Following the docs, I've implemented it like that (trivial):
[...]
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nexus-ingress
  namespace: nexus-ns
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: nexus.mydomain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: nexus-service
          servicePort: 80
Applying it works without errors. But when I try to reach http://nexus.mydomain, I get:
Service Unavailable
No logs are shown (the webapp is not hit).
What did I miss?
K3s Lightweight Kubernetes
K3s is designed to be a single binary of less than 40MB that completely implements the Kubernetes API. In order to achieve this, they removed a lot of extra drivers that didn't need to be part of the core and are easily replaced with add-ons.
As I mentioned in the comments, K3s uses the Traefik Ingress Controller by default.
Traefik is an open-source Edge Router that makes publishing your services a fun and easy experience. It receives requests on behalf of your system and finds out which components are responsible for handling them.
This information can be found in K3s Rancher Documentation.
Traefik is deployed by default when starting the server... To prevent k3s from using or overwriting the modified version, deploy k3s with --no-deploy traefik and store the modified copy in the k3s/server/manifests directory. For more information, refer to the official Traefik for Helm Configuration Parameters.
To disable it, start each server with the --disable traefik option.
If you want to deploy Nginx Ingress controller, you can check guide How to use NGINX ingress controller in K3s.
As you are using an Nginx-specific annotation like nginx.ingress.kubernetes.io/rewrite-target: /$1, you have to use the Nginx Ingress controller.
If you run more than one Ingress controller, you will need to force the use of the Nginx Ingress controller with an annotation:
annotations:
  kubernetes.io/ingress.class: "nginx"
If the information mentioned doesn't help, please provide more details, like your Deployment and Service.
I do not think you can expose it on port 80 or 443 over a NodePort service, or at least it is not recommended.
In this configuration, the NGINX container remains isolated from the
host network. As a result, it can safely bind to any port, including
the standard HTTP ports 80 and 443. However, due to the container
namespace isolation, a client located outside the cluster network
(e.g. on the public internet) is not able to access Ingress hosts
directly on ports 80 and 443. Instead, the external client must append
the NodePort allocated to the ingress-nginx Service to HTTP requests.
-- Bare-metal considerations - NGINX Ingress Controller
* Emphasis added by me.
While it may sound tempting to reconfigure the NodePort range using
the --service-node-port-range API server flag to include unprivileged
ports and be able to expose ports 80 and 443, doing so may result in
unexpected issues including (but not limited to) the use of ports
otherwise reserved to system daemons and the necessity to grant
kube-proxy privileges it may otherwise not require.
This practice is therefore discouraged. See the other approaches
proposed in this page for alternatives.
-- Bare-metal considerations - NGINX Ingress Controller
I did a similar setup a couple of months ago. I installed a MetalLB load balancer and then exposed the service. Depending on your provider (e.g., GKE), a load balancer can even be spun up automatically, so possibly you don't even have to deal with MetalLB, although MetalLB is not hard to set up and works great.
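For reference, a minimal MetalLB layer 2 configuration of that era used a ConfigMap like the following (the address range here is purely illustrative; newer MetalLB releases use CRDs such as IPAddressPool instead):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # hypothetical free range on your LAN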
Is it possible to expose a service via Ingress when the Service is of type ClusterIP?
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service
  ports:
  - name: my-service-port
    port: 4001
    targetPort: 4001
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my.example.com
    http:
      paths:
      - path: /my-service
        backend:
          serviceName: my-service
          servicePort: 4001
I know the service can be exposed with type NodePort, but that may cost one more NAT hop. Could someone show me the fastest way to reach an internal service from the internet in the cloud?
No, clusterIP is only reachable from within the cluster. An Ingress is essentially just a set of layer 7 forwarding rules, it does not handle the layer 4 requirements of exposing the internals of your cluster to the outside world. At least 1 NAT step is required.
For Ingress to work, though, you need to have at least one Service involved that exposes your workload externally, so NodePort or LoadBalancer. Your ingress controller and the infrastructure of your cluster will determine which of the two service types you will need to use.
In the case of Nginx ingress, you need to have a single LoadBalancer service which the ingress will use to bridge traffic from outside the cluster to inside it. After that, you can use clusterIP services for each of your workloads.
In your above example, as long as the nginx ingress controller is correctly configured (with a loadbalancer), then the config you are using should work fine.
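For illustration, the bridging Service in front of the controller typically looks something like this (names and labels assumed from common ingress-nginx installs, not taken from your manifests):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer            # the one externally exposed Service
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https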
In short: YES
Now to the elaborate answer...
First things first, let's have a look at what the official documentation says:
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
[...]
An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer...
What's confusing here is the term load balancer. In the definition above, we are talking about the classic, well-known web load balancer.
That one has nothing to do with Kubernetes!
So, back to the definition: to use an Ingress and make it work, we need a Kubernetes component called an ingress controller. And this component happens to be a load balancer! That's it.
However, you have to keep in mind that there is a difference between a load balancer in the outside world and a Kubernetes Service of type: LoadBalancer.
So in summary (and in order to redirect the traffic from the outside world to your k8s ClusterIP service):
Do you need a load balancer to make your kind: Ingress work? Yes, this is the ingress controller.
Do you need a Kubernetes Service of type: LoadBalancer or type: NodePort to make your kind: Ingress work? Definitely not! A Service of type: ClusterIP works just fine!
The environment: I have a Kubernetes cluster set up with namespaces for "dev", "sit" and "prod". In each of these namespaces I have multiple Services of type: LoadBalancer which target a specific deployment of a dockerised application (I have multiple applications), so I can access each of these by just using the exposed IP address of the service of whichever namespace I want. An example service looks like this and is very simple:
apiVersion: v1
kind: Service
metadata:
  name: application1
spec:
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
    name: http
  type: LoadBalancer
  selector:
    app: application1
The problem: I now want to be able to support multiple versions of all applications (ip:/v1/, ip:/v2/, etc.) so as to allow users to migrate to the new version when they are ready, and I've been trying to implement path-based routing following this guide. I have managed to restructure my architecture so that I have ReplicationControllers and an Ingress which looks at the path rules to route to the correct service.
This seems to work only if I have a single exposed service and a single namespace, because I only have DNS host names for the production environment and want to use the individual IP address of a service for the other environments, and I can't figure out how to specify ingress rules for a service which doesn't have a hostname.
I could just have a load balancer for every environment and use path-based routing to route to each of the different services for dev and sit, which is not ideal, because to access any service we'd now have to use something like ip/application1 and ip/application2 instead of directly using the service IP address of each application. But my biggest problem is that when I followed the guide and created the Ingress, ReplicationController, and a Service in my SIT namespace, it started affecting the LoadBalancer services in my other two environments (as I understand it, Kubernetes would sometimes try to use the nginx controller from the SIT environment on my DEV services and therefore fail; other times it would use the GCE default configuration and would work).
I tried adding the arg "- --watch-namespace=sit" to limit the scope of the ingress controller to only affect sit, but it does not seem to work.
I now want to be able to support multiple versions of all applications (ip:/v1/, ip:/v2/ etc.)
That is exactly what Ingress can do, but the problem is that you want to use IP addresses for routing, while Ingress uses DNS names for that.
I think the best way to implement this is to use an Ingress which will handle the requests. On GCE, Ingress uses the HTTP(S) load balancer. Yes, you will need a DNS name for that, but it will help you create the routing you need.
Also, I highly recommend using TLS encryption for connections.
You can check LetsEncrypt to get a free SSL certificate.
So, the solution should look like this:
1. Deploy your Services with type "ClusterIP" instead of "LoadBalancer". You can have more than one Service object for an application so you can do it in parallel with your current configuration.
2. Select any namespace (even a special one), for instance "ingress-ns". We need to create Service objects there which will point to your services in other namespaces. Here is an example of a service (let the new DNS name be "my.shiny.new.domain"):
kind: Service
apiVersion: v1
metadata:
  name: service-v1
  namespace: ingress-ns
spec:
  type: ExternalName
  externalName: <service>.<namespace>.svc.cluster.local # the service name and namespace of your service with version v1
  ports:
  - port: 80
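For instance, if the v1 deployment of application1 from the question lived in the sit namespace, the ExternalName would point at it like this (names hypothetical):
spec:
  type: ExternalName
  externalName: application1.sit.svc.cluster.local   # hypothetical: Service "application1" in namespace "sit"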
3. Now we have a namespace with several services pointing to different versions of your application in different namespaces, and we can create an Ingress object which will create an HTTP(S) load balancer on GCE with path-based routing:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  namespace: ingress-ns
spec:
  rules:
  - host: my.shiny.new.domain
    http:
      paths:
      - path: /v1
        backend:
          serviceName: service-v1
          servicePort: 80
      - path: /v2
        backend:
          serviceName: service-v2
          servicePort: 80
Kubernetes will create a new HTTP(S) balancer with rules you set up in an Ingress object, and you will have an entry point with cross-namespaces path-based routing, and you don't have to use multiple IP addresses for that.
Actually, you can also manage the primary version of an application with that Ingress, and use your primary domain with the "/" path to handle requests to your production version.
My goal is to make my web application (deployed on a Kubernetes 1.4 cluster) see the IP of the client that originally made the HTTP request. As I'm planning to run the application on a bare-metal cluster, GCE and the service.alpha.kubernetes.io/external-traffic: OnlyLocal service annotation introduced in 1.4 are not applicable for me.
Looking for alternatives, I've found this question, which proposes setting up an Ingress to achieve my goal. So I've set up the Ingress and the NginX Ingress Controller. The deployment went smoothly and I was able to connect to my web app via the Ingress address on port 80. However, in the logs I still see a cluster-internal IP (from the 172.16.0.0/16 range) - which means that the external client IPs are not being properly passed via the Ingress. Could you please tell me what I need to configure in addition to the above to make it work?
My Ingress' config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myWebApp
spec:
  backend:
    serviceName: myWebApp
    servicePort: 8080
As a layer 4 proxy, Nginx cannot retain the original source IP address in the actual IP packets. You can work around this using the Proxy protocol (the link points to the HAProxy documentation, but Nginx also supports it).
For this to work however, the upstream server (meaning the myWebApp service in your case) also needs to support this protocol. In case your upstream application also uses Nginx, you can enable proxy protocol support in your server configuration as documented in the official documentation.
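As a hedged sketch of that upstream-side configuration (the ConfigMap name, listen port, and trusted range are assumptions; the directives come from nginx's listen and real_ip modules), the upstream nginx might accept the PROXY protocol like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mywebapp-nginx-conf   # hypothetical name
data:
  default.conf: |
    server {
        listen 8080 proxy_protocol;        # accept the PROXY protocol header
        set_real_ip_from 172.16.0.0/16;    # trust the cluster-internal proxy range
        real_ip_header proxy_protocol;     # expose the client IP from that header
        location / {
            proxy_pass http://127.0.0.1:3000;   # hypothetical app upstream
        }
    }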
According to the Nginx Ingress Controller's documentation, this feature can be enabled in the Ingress Controller using a Kubernetes ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
data:
  use-proxy-protocol: "true"
Specify the name of the ConfigMap in your Ingress controller manifest, by adding the --nginx-configmap=<insert-configmap-name> flag to the command-line arguments.
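For instance (a sketch only; the Deployment layout is assumed, while the flag itself is quoted from the documentation above):
spec:
  containers:
  - name: nginx-ingress-controller
    # ...image and other fields elided...
    args:
    - /nginx-ingress-controller
    - --nginx-configmap=default/nginx-ingress-controller   # namespace/name of the ConfigMap above; "default" assumed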