I'm trying to configure a single ALB across multiple namespaces in AWS EKS; each namespace has its own ingress resource.
I'm configuring the aws-load-balancer-controller ingress controller on Kubernetes v1.20.
The problem I'm facing is that each time I deploy a new service, it spins up a new Classic Load Balancer in addition to the shared ALB specified in the ingress config.
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/
# service-realm1-dev.yaml:
apiVersion: v1
kind: Service
metadata:
  name: sentinel
  annotations:
    external-dns.alpha.kubernetes.io/hostname: realm1.dev.sentinel.mysite.io
  namespace: realm1-dev
  labels:
    run: sentinel
spec:
  ports:
  - port: 5001
    name: ps1
    protocol: TCP
  selector:
    app: sentinel
  type: LoadBalancer
# ingress realm1-app
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: sentinel-ingress
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: "15"
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
    alb.ingress.kubernetes.io/success-codes: 200-300
    alb.ingress.kubernetes.io/healthy-threshold-count: "2"
    alb.ingress.kubernetes.io/unhealthy-threshold-count: "2"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80}]'
  name: sentinel-ingress-controller
  namespace: realm1-dev
spec:
  rules:
  - host: realm1.dev.sentinel.mysite.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          servicePort: use-annotation
          serviceName: sentinel
Also, I'm using external-dns to create a Route53 recordset, and I then use the same configured DNS name to route requests to the specific EKS service. Is there any issue with this approach?
I was able to make it work using only one ALB.
@YYashwanth, using Nginx was my fallback plan. I'm trying to keep the configuration as simple as possible; maybe in the future, when we try to deploy our solution on other cloud providers, we will use the nginx ingress controller.
1- To start, the Service type should be NodePort; using type LoadBalancer will create a Classic Load Balancer.
apiVersion: v1
kind: Service
metadata:
  name: sentinel-srv
  annotations:
    external-dns.alpha.kubernetes.io/hostname: operatorv2.dev.sentinel.mysite.io
  namespace: operatorv2-dev
  labels:
    run: jsflow-sentinel
spec:
  ports:
  - port: 80
    targetPort: 80
    name: ps1
    protocol: TCP
  selector:
    app: sentinel-app
  type: NodePort
2- Second, we need to configure a group name so that the ingress controller merges all ingress configurations, using the annotation alb.ingress.kubernetes.io/group.name.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: "15"
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
    alb.ingress.kubernetes.io/healthy-threshold-count: "2"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/success-codes: "200"
    alb.ingress.kubernetes.io/tags: createdBy=aws-controller
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/unhealthy-threshold-count: "2"
    external-dns.alpha.kubernetes.io/hostname: operatorv2.sentinel.mysite.io
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: sentinel-group
  name: dev-operatorv2-sentinel-ingress-controller
  namespace: operatorv2-dev
spec:
  rules:
  - host: operatorv2.dev.sentinel.mysite.io
    http:
      paths:
      - path: /*
        backend:
          servicePort: 80
          serviceName: sentinel-srv
The IngressGroup feature, introduced with AWS Load Balancer Controller v2.0 (formerly the AWS ALB Ingress Controller), allows you to create just a single ALB for multiple Ingresses across multiple namespaces, if required, by using the annotation alb.ingress.kubernetes.io/group.name: [InsertValue]
All ingress resources created with the same group.name will share the same ALB.
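For illustration, a minimal sketch of two Ingresses in different namespaces that land on one shared ALB because they carry the same group.name (the names, namespaces and hosts here are placeholders, not from the setup above):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-a
  namespace: team-a                                    # placeholder namespace
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: shared-alb   # same group ...
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
  - host: a.example.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: app-a-svc
          servicePort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-b
  namespace: team-b                                    # placeholder namespace
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: shared-alb   # ... so both merge into one ALB
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
  - host: b.example.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: app-b-svc
          servicePort: 80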
Unfortunately, the tool being used is the wrong one for your use case. The AWS Load Balancer Controller will create a new load balancer for every ingress resource and, I think, a network load balancer for every service resource of type LoadBalancer.
For your use case, the best option is to use the nginx ingress controller. You can deploy the nginx ingress controller in any one namespace and then create ingress resources throughout your cluster, and you can have path/hostname-based routing across your cluster.
In case you have many teams/projects/applications and you want to avoid a single point of failure where all your apps depend on one ELB, you can deploy more than one nginx ingress controller in your k8s cluster.
You just need to define an ingress-class variable in your nginx ingress controller deployment and add that ingress-class annotation to your applications, as sketched below. This way, applications having the ingress-class: nginxA annotation will be clustered with the nginx ingress controller that has ingress-class=nginxA in its deployment.
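A minimal sketch of that wiring, assuming the standard ingress-nginx --ingress-class flag; the class name nginxA and all application names/hosts are placeholders:

# Controller side: container args of one nginx ingress controller Deployment
args:
- /nginx-ingress-controller
- --ingress-class=nginxA                    # this controller only reconciles "nginxA" ingresses
---
# Application side: select that controller via the ingress.class annotation
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-a-ingress                       # placeholder name
  annotations:
    kubernetes.io/ingress.class: nginxA     # matches the controller above
spec:
  rules:
  - host: app-a.example.com                 # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: app-a-svc            # placeholder service
          servicePort: 80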
I have two pods (deployments) running on minikube. Each pod exposes the same port (say 8081) but uses a different image. Now I want to configure things so that I can access either of the pods through the same external URL, in a load-balanced way. So what I tried to do is put the same matching label in both pods, map them to the same service, and then expose it through a NodePort. Example:
#pod1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep1
  labels:
    apps: dep1
    tier: cloud
spec:
  template:
    metadata:
      name: dep1-pod
      labels:
        app: deployment1
    spec:
      containers:
      - name: cont1
        image: cont1
        ports:
        - containerPort: 8081
  selector:
    matchLabels:
      app: deployment1
Now the second pod:
#pod2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep2
  labels:
    apps: dep2
    tier: cloud
spec:
  template:
    metadata:
      name: dep2-pod
      labels:
        app: deployment1
    spec:
      containers:
      - name: cont2
        image: cont2
        ports:
        - containerPort: 8081
  selector:
    matchLabels:
      app: deployment1
Now the service:
#service.yaml
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  type: NodePort
  ports:
  - port: 8081
    targetPort: 8081
    nodePort: 30169
  selector:
    app: deployment1
Now this does not work as intended, as it refuses to connect on IP:30169. However, I can connect if only one of the pods is deployed.
Now I know I can achieve this functionality using replicas and just one image, but in this case I want to do it using 2 images. Any help is much appreciated.
You can use Ingress to achieve it.
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
In your situation, Ingress will forward traffic to your services using the same URL; which service depends on the path you type: URL for the first Pod and URL/v2 for the second Pod. Of course, you can change /v2 to something else.
To begin, you need to enable Ingress on minikube. You can do it using the command below. You can read more about it here.
minikube addons enable ingress
As the next step, you need to create an Ingress using a yaml file. Here is an example of how to do it step by step.
The yaml file of the Ingress looks as below.
As you can see in this configuration, you can access the first Pod using the URL, and it will forward traffic to the first service, attached to the first Pod. For the second Pod, using URL/v2, it will forward traffic to the second service, attached to the second Pod.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080
      - path: /v2
        pathType: Prefix
        backend:
          service:
            name: web2
            port:
              number: 8080
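The ingress above refers to two Services, web and web2, one per deployment. A minimal sketch of what they could look like for your dep1/dep2 setup; the distinct labels app: deployment1 and app: deployment2 are an assumption (each deployment's pod template needs its own label so that each service selects exactly one of them):

apiVersion: v1
kind: Service
metadata:
  name: web              # first backend named in the ingress
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8081     # the port the containers actually listen on
  selector:
    app: deployment1     # keep this label on dep1's pod template
---
apiVersion: v1
kind: Service
metadata:
  name: web2             # second backend named in the ingress
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8081
  selector:
    app: deployment2     # give dep2's pod template this distinct label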
I created an ingress to expose my internal service.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /app
        backend:
          serviceName: my-app
          servicePort: 80
But when I get this ingress, it shows that it has no IP address.
NAME          HOSTS         ADDRESS   PORTS   AGE
app-ingress   example.com             80      10h
The service is shown below.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
  - name: my-app
    nodePort: 32000
    port: 3000
    targetPort: 3000
  type: NodePort
Note: I'm guessing, because of the other question you asked, that you are trying to create an ingress on a manually created cluster with kubeadm.
As described in the docs, in order for ingress to work, you need to install an ingress controller first. An ingress object itself is merely a configuration slice for the installed ingress controller.
An nginx-based controller is one of the most popular choices. Similarly to services, in order to get a single failover-enabled VIP for your ingress, you need to use MetalLB. Otherwise you can deploy ingress-nginx over a node port: see details here.
Finally, servicePort in your ingress object should be 3000, the same as the port of your service.
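For reference, a sketch of the corrected ingress with that one change applied (everything else as in your manifest):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /app
        backend:
          serviceName: my-app
          servicePort: 3000   # matches the service's port (3000), not targetPort or nodePort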
I have a k8s Apache server which uses mod_jk to connect to different Kubernetes services. mod_jk is sort of a dummy here, as the workers have k8s service URLs. I am now trying to achieve sticky sessions based on the JSESSIONID in the cookie. I have a Traefik ingress controller which directs all requests to the k8s Apache service. TLS is terminated at the ingress level. What is the best way to achieve sticky sessions?
I tried enabling sessionAffinity: ClientIP, but the client IP is always the same: it is the ingress controller's.
In the Kubernetes Ingress object, you have to define in the annotations which kind of ingress controller you will use, so in your case: the Traefik ingress controller. Please note that the sticky session in Traefik is defined on the Service object with annotations. Instead of a random cookie name, we define it as JSESSIONID.
Ingress object definition:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
  labels:
    app: session-affinity
  name: session-affinity
spec:
  tls:
  - hosts:
    - <address>
    secretName:
  rules:
  - host: <address>
    http:
      paths:
      - path: /
        backend:
          serviceName: session-affinity
          servicePort: 8080
Service object definition:
apiVersion: v1
kind: Service
metadata:
  name: session-affinity
  labels:
    app: session-affinity
  annotations:
    traefik.ingress.kubernetes.io/affinity: "true"
    traefik.ingress.kubernetes.io/session-cookie-name: "JSESSIONID"
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
You can find more information in the documentation.
I'm currently using the nginx ingress controller to expose my apps to the outside, and my current approach is shown below. My question is: is this the best way to do it? If not, what would be the best practice?
nginx ingress controller service:-
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http
    protocol: "TCP"
  - port: 443
    name: https
    protocol: "TCP"
  selector:
    app: nginx-ingress-lb
ingress:-
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: app-ingress
spec:
  tls:
  - hosts:
    - myservice.example.com
    secretName: sslcerts
  rules:
  - host: myservice.example.com
    http:
      paths:
      - backend:
          serviceName: myservice
          servicePort: 80
        path: /
So basically I have my nginx ingress controller pods running and exposed through a Service of type LoadBalancer, and I have rules defined to determine the routes.
The best approach depends on the environment in which you choose to build your cluster, i.e. whether you use a cloud provider or a bare-metal solution.
In your example, I guess you are using a cloud provider as a LoadBalancer provisioner, which delivers an external IP address as the entry point to your NGINX Ingress Controller. Therefore, you have quite a good ability to scale your Ingress on demand, with various features and options.
I found this article very useful for comparing implementations of the NGINX Ingress Controller in cloud and bare-metal environments.
Using Traefik as an ingress controller (on a kube cluster in GCP).
Is it possible to create an ingress rule that uses a backend service from a different namespace?
We have a namespace for each of our "major" versions of code.
1-service.com -> 1-service.com ingress in the 1-service ns -> 1-service svc in the same ns
2-service.com -> 2-service.com ingress in the 2-service ns... and so on
I would also like another ingress rule in the "unversioned" namespace that routes traffic to one of the major releases.
service.com -> service.com ingress in the "service" ns -> X-service in the X-service namespace
I would like to keep major versions separate in k8s using versioned hostnames (1-service.com etc.), but still have a "latest" that points to the newest of the releases.
I believe Voyager can do cross-namespace ingress -> svc. Can Traefik do the same?
You can use a workaround like this:
Create a Service of type ExternalName in the namespace where you want to create the ingress:
apiVersion: v1
kind: Service
metadata:
  name: service-1
  namespace: unversioned
spec:
  type: ExternalName
  externalName: service-1.service-1-ns.svc.cluster.local
  ports:
  - name: http
    port: 8080
    protocol: TCP
Create an ingress that points to this service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
  name: ingress-to-other-ns
  namespace: unversioned
spec:
  rules:
  - host: latest.example.com
    http:
      paths:
      - backend:
          serviceName: service-1
          servicePort: 8080
        path: /
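On newer clusters where extensions/v1beta1 has been removed (Kubernetes 1.22+), the same workaround would look roughly like this in the networking.k8s.io/v1 API (a sketch, same names assumed):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
  name: ingress-to-other-ns
  namespace: unversioned
spec:
  rules:
  - host: latest.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-1    # the ExternalName service defined above
            port:
              number: 8080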
I just tested this with the following example on EKS. Traefik is deployed in the default namespace. This is the config used for the k8s Service:
---
apiVersion: v1
kind: Service
metadata:
  name: 1-service
  namespace: 1-service
  labels:
    app: 1-service
spec:
  selector:
    app: 1-service
  ports:
  - name: http
    port: 80
    targetPort: 80
And this is the config used for the Traefik service (dynamic/file provider configuration) that will send requests to the other namespace:
services:
  1-service:
    loadBalancer:
      servers:
      - url: http://1-service.1-service.svc.cluster.local:80
      # - url: http://1-service.1-service:80 # This should work perfectly as well, didn't test it explicitly
As you have probably already gathered, you can reference services from a different namespace by using the SERVICE.NAMESPACE notation, instead of just SERVICE, which would be assumed to reference a service in the current namespace.
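For completeness, a sketch of how that service entry could be wired to a router in the same Traefik dynamic (file provider) configuration; the router name latest and the host are placeholders:

http:
  routers:
    latest:
      rule: "Host(`latest.example.com`)"                        # placeholder host
      service: 1-service                                        # the file-provider service below
  services:
    1-service:
      loadBalancer:
        servers:
        - url: http://1-service.1-service.svc.cluster.local:80  # SERVICE.NAMESPACE in-cluster FQDN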