I have followed the docs for an AKS internal nginx ingress while still keeping the public one at the same time:
https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx#additional-internal-load-balancer
https://github.com/kubernetes/ingress-nginx/blob/e8e793bb6270448960d53d9c3fbaa927ce8fbe4c/charts/ingress-nginx/values.yaml#L472
controller:
  service:
    loadBalancerIP: x.x.x.x
    internal:
      enabled: true
      loadBalancerIP: y.y.y.y
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
Note that my use cases are diverse:
Some services need to be accessible from both the public LB IP and the private LB IP.
Other services need to be accessible from only one of the two IPs, and I want to choose which one.
Based on the docs in the ingress-nginx repo, this should be possible without the need for multiple ingress controllers or Ingress objects.
Following the docs, I was able to create the public load balancer and the internal one in the same namespace under the same ingress controller (IC).
But my problem is that I do not know how to reference the load balancers in the actual Ingress objects.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: some-service
  namespace: shared
  labels:
    app.kubernetes.io/component: udp
    app.kubernetes.io/instance: some-service
    app.kubernetes.io/name: some-service
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-dev
    clusterIssuerEnv: dev
    kubernetes.io/ingress.class: nginx
    meta.helm.sh/release-name: some-service
    meta.helm.sh/release-namespace: shared
    kubernetes.io/tls-acme: "true"
spec:
  tls:
    - hosts:
        - some-service.my-domain.com
      secretName: wildcard.x.my-domain-tls-some-service
  rules:
    - host: some-service.my-domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: some-service
                port:
                  number: 12201
The kubernetes.io/ingress.class: nginx annotation references the ingress class, but the Ingress only gets the public load balancer IP. How do I reference the private one, and how do I set things up so a service can be reachable on both?
I see two options:
For your current setup, create a new ingress class, for example nginx-internal, and point it to your second IC installation. The downside here is that you would need two Ingress manifests per application because of the new ingress class.
The better option IMO: use one ingress controller (IC) installation and create two Services for the IC (the default one with the external IP, plus a copy of it with the internal IP), both pointing to your IC installation. Then resolve some-service.my-domain.com to the internal IP where needed (DNS configuration).
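For that second option, the extra internal Service might look roughly like this on AKS (a sketch, not a definitive manifest: the name is hypothetical, the selector labels must match whatever labels your controller pods actually carry, and y.y.y.y is a placeholder):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller-internal   # hypothetical name
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerIP: y.y.y.y
  selector:
    # assumed standard ingress-nginx labels; must match your IC pods
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https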
By the way: the ingress class annotation should not be used anymore; there is a dedicated field for the ingress class now (spec.ingressClassName).
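For example, the Ingress above would select the controller via the dedicated field instead of the annotation (a minimal sketch; nginx here stands for whatever class name your installation registered):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: some-service
  namespace: shared
spec:
  ingressClassName: nginx   # replaces the kubernetes.io/ingress.class annotation
  ...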
I'm trying to configure a single ALB across multiple namespaces in AWS EKS; each namespace has its own ingress resource.
I'm trying to configure the aws-load-balancer-controller ingress controller on Kubernetes v1.20.
The problem I'm facing is that each time I try to deploy a new service, it spins up a new Classic Load Balancer in addition to the shared ALB specified in the ingress config.
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/
# service-realm1-dev.yaml:
apiVersion: v1
kind: Service
metadata:
  name: sentinel
  annotations:
    external-dns.alpha.kubernetes.io/hostname: realm1.dev.sentinel.mysite.io
  namespace: realm1-dev
  labels:
    run: sentinel
spec:
  ports:
    - port: 5001
      name: ps1
      protocol: TCP
  selector:
    app: sentinel
  type: LoadBalancer
# ingress realm1-app
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: sentinel-ingress
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: "15"
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
    alb.ingress.kubernetes.io/success-codes: 200-300
    alb.ingress.kubernetes.io/healthy-threshold-count: "2"
    alb.ingress.kubernetes.io/unhealthy-threshold-count: "2"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80}]'
  name: sentinel-ingress-controller
  namespace: realm1-dev
spec:
  rules:
    - host: realm1.dev.sentinel.mysite.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              servicePort: use-annotation
              serviceName: sentinel
Also, I'm using external-dns to create a Route 53 record set, and then I use that configured DNS name to route requests to the specific EKS service. Is there any issue with this approach?
I was able to make it work using only one ALB.
@YYashwanth, using Nginx was my fallback plan. I'm trying to keep the configuration as simple as possible; maybe in the future, when we try to deploy our solution on other cloud providers, we will use the nginx ingress controller.
1. To start, the Service type should be NodePort; using LoadBalancer will create a Classic Load Balancer.
apiVersion: v1
kind: Service
metadata:
  name: sentinel-srv
  annotations:
    external-dns.alpha.kubernetes.io/hostname: operatorv2.dev.sentinel.mysite.io
  namespace: operatorv2-dev
  labels:
    run: jsflow-sentinel
spec:
  ports:
    - port: 80
      targetPort: 80
      name: ps1
      protocol: TCP
  selector:
    app: sentinel-app
  type: NodePort
2. Second, we need to configure group.name so that the ingress controller merges all ingress configurations, using the annotation alb.ingress.kubernetes.io/group.name:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: "15"
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
    alb.ingress.kubernetes.io/healthy-threshold-count: "2"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/success-codes: "200"
    alb.ingress.kubernetes.io/tags: createdBy=aws-controller
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/unhealthy-threshold-count: "2"
    external-dns.alpha.kubernetes.io/hostname: operatorv2.sentinel.mysite.io
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: sentinel-group
  name: dev-operatorv2-sentinel-ingress-controller
  namespace: operatorv2-dev
spec:
  rules:
    - host: operatorv2.dev.sentinel.mysite.io
      http:
        paths:
          - path: /*
            backend:
              servicePort: 80
              serviceName: sentinel-srv
A feature pushed out around November 2021 through the AWS ALB Ingress Controller ~v2.0 allows you to create just a single ALB for multiple Ingresses, across multiple namespaces if required, by using the annotation alb.ingress.kubernetes.io/group.name: [InsertValue]
All ingress resources created with the same group.name will share the same ALB.
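As a sketch (the names, namespaces, and group value are hypothetical), two Ingresses that should share one ALB would both carry the same group annotation:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-a
  namespace: team-a
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: shared-alb
spec:
  ...
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-b
  namespace: team-b
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: shared-alb   # same group.name => same ALB
spec:
  ...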
Unfortunately, the tool being used is wrong for your use case. The AWS Load Balancer Controller will create a new load balancer for every ingress resource, and, I think, a network load balancer for every service resource.
For your use case, the best option is to use the nginx ingress controller. You can deploy the nginx ingress controller in any one namespace and then create ingress resources throughout your cluster, giving you path- and hostname-based routing across the cluster.
In case you have many teams/projects/applications and you want to avoid a single point of failure where all your apps depend on one ELB, you can deploy more than one nginx ingress controller in your k8s cluster.
You just need to define an ingress-class variable in each nginx ingress controller deployment and add that ingress-class annotation to your applications. This way, applications with the ingress-class: nginxA annotation will be served by the nginx ingress controller that has ingress-class=nginxA in its deployment.
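A rough sketch of that wiring (nginxA is an illustrative class name, not anything standard):
# In the nginx ingress controller Deployment:
containers:
  - name: controller
    args:
      - /nginx-ingress-controller
      - --ingress-class=nginxA

# In the application's Ingress metadata:
annotations:
  kubernetes.io/ingress.class: nginxA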
I would like to deploy an nginx-ingress controller on my self-hosted Kubernetes (microk8s) that is configurable to listen on one or more interfaces (external IPs).
I'm not even sure if that is easily possible, or if I should just switch to an external solution such as HAProxy or a standalone nginx.
Required behavior:
192.168.0.1-H"domain.com":443/frontend -> 192.168.0.1 (eth0) -> ingress -> service-frontend
192.168.0.1-H"domain.com":443/backend -> 192.168.0.1 (eth0) -> ingress -> service-backend
88.88.88.88-H"domain.com":443/frontend -> 88.88.88.88 (eth1) -> ? -> [403 or timeout]
88.88.88.88-H"domain.com":443/backend -> 88.88.88.88 (eth1) -> ? -> [403 or timeout]
And then later the eth1 interface should be able to be switched on, so that requests on that interface behave the same as on eth0.
I would like to be able to deploy multiple instances of services for load-balancing. I would like to keep the configuration in my namespace (if possible) so I can always delete and apply everything at once.
I'm using this guide as a reference: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/
I was able to get something working with minikube, but obviously I could not expose any external IPs, and performance was quite bad. For that, I just configured a "kind: Ingress" and that was it.
So far, the default ingress controller on microk8s seems to listen on all interfaces, and I can only configure it in its own namespace. Defining my own ingress seems to have no effect.
I would like to deploy an nginx-ingress controller on my self-hosted Kubernetes (microk8s) that is configurable to listen on one or more interfaces (external IPs).
For the above scenario, you have to deploy multiple installations of the Nginx ingress controller, each with a different class name.
Official document: https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
In this scenario, you create a Kubernetes Service with a load balancer IP for each controller; each Service points to its respective controller deployment, and the matching class is used in the ingress object.
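For example, the Service for one of these controllers might look roughly like this (a sketch: the name, the selector labels, and the IP are placeholders that depend on how you deployed that controller):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-eth0       # hypothetical name
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.0.1    # placeholder: the interface IP this controller should serve
  selector:
    # must match the pod labels of that specific controller installation
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: eth0-controller
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https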
If you are looking to use multiple domains with a single ingress controller, you can easily do that by specifying the host in the ingress.
Example for two domains:
bar.foo.dev
foo.bar.dev
YAML example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontdoor-bar
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  tls:
    - hosts:
        - bar.foo.dev
      secretName: tls-secret-bar
  rules:
    - host: bar.foo.dev
      http:
        paths:
          - backend:
              serviceName: barfoo
              servicePort: 80
            path: /(.*)
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontdoor-foo
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - foo.bar.dev
      secretName: tls-secret-foo
  rules:
    - host: foo.bar.dev
      http:
        paths:
          - backend:
              serviceName: foobar
              servicePort: 9000
            path: /(.*)
One potential fix was much simpler than anticipated: no messing with MetalLB or anything else was needed.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "public"
    nginx.ingress.kubernetes.io/whitelist-source-range: 192.168.0.0/24
  ...
This does not answer the question of splitting an Ingress across multiple interfaces, but it does solve the problem of restricting public access.
By default, bare-metal ingress will listen on all interfaces, which might be a security issue.
This solution works without enabling the ingress addon on MicroK8s:
Install the ingress controller: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/baremetal/deploy.yaml
Create your deployment and service, and add this Ingress resource (all in the one namespace):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/service-upstream: 'true'
    nginx.ingress.kubernetes.io/rewrite-target: "/$2"
  name: ingress-resource
  namespace: namespace-name
spec:
  rules:
    - http:
        paths:
          - backend:
              service:
                name: service-name
                port:
                  number: service-port
            path: /namespace-name/service-name(/|$)(.*)
            pathType: Prefix
kubectl get svc -n ingress-nginx
Now get either the CLUSTER-IP or the EXTERNAL-IP and:
curl <ip>/namespace-here/service-here
I have created an Nginx Ingress and Service with the following code:
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: ClusterIP
  selector:
    name: my-app
  ports:
    - port: 8000
      targetPort: 8000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    name: myingress
spec:
  rules:
    - host: mydomain.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: myservice
                port:
                  number: 8000
Nginx ingress was installed with:
helm install ingress-nginx ingress-nginx/ingress-nginx
I have also enabled proxy protocol on the ELB. But in the nginx logs I don't see the real client IP in the X-Forwarded-For and X-Real-IP headers. These are the final headers I see in my app logs:
X-Forwarded-For:[192.168.21.145] X-Forwarded-Port:[80] X-Forwarded-Proto:[http] X-Forwarded-Scheme:[http] X-Real-Ip:[192.168.21.145] X-Request-Id:[1bc14871ebc2bfbd9b2b6f31] X-Scheme:[http]
How do I get the real client IP instead of the ingress pod IP? Also, is there a way to know which headers the ELB is sending to the ingress?
One solution is to use externalTrafficPolicy: Local (see documentation).
In fact, according to the Kubernetes documentation:
Due to the implementation of this feature, the source IP seen in the target container is not the original source IP of the client.
...
service.spec.externalTrafficPolicy - denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints. There are two available options: Cluster (default) and Local. Cluster obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading. Local preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type services, but risks potentially imbalanced traffic spreading.
If you want to follow this route, update your nginx ingress controller Service and add the externalTrafficPolicy field:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
spec:
  ...
  externalTrafficPolicy: Local
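If you'd rather patch the existing Service than edit it, something like this should work (the Service name and namespace are assumptions; adjust them to your installation):
kubectl -n ingress-nginx patch svc nginx-ingress-controller \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'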
A possible alternative could be to use the Proxy Protocol (see the documentation).
The proxy protocol has to be enabled in the ConfigMap for the ingress controller as well as on the ELB.
For L4, use use-proxy-protocol.
For L7, use use-forwarded-headers.
# configmap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  use-forwarded-headers: "true"
  use-proxy-protocol: "true"
https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#proxy-protocol
Just expanding on @strongjz's answer.
By default, the load balancer created in AWS for a Service of type LoadBalancer is a Classic Load Balancer, operating at Layer 4, i.e., proxying at the TCP protocol level.
For this scenario, the best way to preserve the real IP is to use the Proxy Protocol, because it is capable of carrying the client address at the TCP level.
To do this, you should enable the Proxy Protocol both on the Load Balancer and on Nginx-ingress.
Those values should do it for a Helm installation of nginx-ingress:
controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
  config:
    use-proxy-protocol: "true"
    real-ip-header: "proxy_protocol"
The service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*" annotation tells the aws-load-balancer-controller to create your load balancer with Proxy Protocol enabled. I'm not sure what happens if you add it to a pre-existing ingress-nginx installation, but it should work too.
The use-proxy-protocol and real-ip-header values are options passed to Nginx to enable the Proxy Protocol there as well.
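Assuming those values are saved in a values.yaml, applying them would look something like this (the release and chart names are illustrative; use whichever nginx-ingress chart you installed):
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  -f values.yaml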
Reference:
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/annotations/#proxy-protocol-v2
https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#proxy-protocol
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-proxy-protocol
I created services, and each service creates a new load balancer. I don't want to create a new load balancer for each service. As a solution for that I found the ingress controller, but it's not working.
I will try to describe the objects you need in just words.
You don't need to create a load balancer for each service. When you're using an ingress controller (like nginx), the ingress controller itself will be of the LoadBalancer type. All your other services need to be of a type like ClusterIP.
Afterwards you can decide how to link your ClusterIP services with the Nginx LoadBalancer: create an ingress for each service, or one ingress that exposes each service based on some rule (like paths, as @harsh-manvar shows in the post above).
When you say "it's not happening", it would be good if you could provide details on your setup.
In order for the Nginx ingress controller to work, it needs to be defined as either a NodePort or LoadBalancer service type. The examples in the nginx documentation use LoadBalancer. However, LoadBalancer only works when your cluster supports this object (which means running in most cloud providers like AWS/GCP/Azure/DigitalOcean, or newer versions of minikube). NodePort, on the other hand, exposes the ingress controller on the Kubernetes node where it runs (with minikube, that usually means a VM of sorts, which then needs to be port-forwarded to be accessible).
To use ingress in a local environment, you can look into minikube. All you need is to run minikube addons enable ingress and it will deploy an nginx controller for you. Afterwards, all you need to do is define an ingress; depending on your setup, you may need kubectl port-forward to forward port 80 of an nginx controller pod to a local port on your machine.
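Concretely, that local flow might look like this (the controller's namespace and pod name vary by minikube version, so check them first):
minikube addons enable ingress
kubectl get pods -A | grep ingress          # find the controller pod and its namespace
kubectl -n ingress-nginx port-forward <controller-pod> 8080:80
curl -H "Host: your.fully.qualified.host.name" http://127.0.0.1:8080/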
There are different types of Services: ClusterIP, NodePort, LoadBalancer, and ExternalName. You specify the type in spec.type. The default, when not specified, is actually not LoadBalancer but ClusterIP, so in your case simply leave out the type: LoadBalancer definition and use your service name as the backend in your ingress resource. Example:
spec:
  rules:
    - host: your.fully.qualified.host.name
      http:
        paths:
          - backend:
              serviceName: your-internal-service-name
              servicePort: 80
            path: /
Keep in mind that for some cloud providers there's also the possibility to use an internal LoadBalancer without a public IP. This is done by adding an annotation to the service configuration. For Azure AKS it looks like this:
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
For Google's GKE the annotation is cloud.google.com/load-balancer-type: "Internal"
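So a GKE-internal Service would look something like this (a sketch; the name and selector are hypothetical placeholders):
apiVersion: v1
kind: Service
metadata:
  name: internal-service            # hypothetical name
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-app                     # hypothetical selector
  ports:
    - port: 80
      targetPort: 8080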
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    certmanager.k8s.io/cluster-issuer: wordpress-prod
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
    - hosts:
        - test.test.com
      secretName: prod
  rules:
    - host: test.test.com
      http:
        paths:
          - path: /service-1
            backend:
              serviceName: service-1
              servicePort: 80
          - path: /service-2
            backend:
              serviceName: service-2
              servicePort: 5000
Sharing the documentation example for an ingress targeting multiple services: you can redirect to several services from a single ingress.
Using this, you can access the services like:
https://test.test.com/service-1
https://test.test.com/service-2
Following the documentation, you should do the following.
More information: kubernetes.github.com
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: rewrite
  namespace: default
spec:
  rules:
    - host: rewrite.bar.com
      http:
        paths:
          - backend:
              serviceName: http-svc
              servicePort: 80
            path: /something(/|$)(.*)
For example, the ingress definition above will result in the following rewrites:
rewrite.bar.com/something rewrites to rewrite.bar.com/
rewrite.bar.com/something/ rewrites to rewrite.bar.com/
rewrite.bar.com/something/new rewrites to rewrite.bar.com/new
I'm having difficulties getting my Ingress controller running on Google Container Engine. I want to use an NGINX Ingress Controller with Basic Auth and use a reserved global static IP name (this can be created in the External IP addresses section of the Google Cloud admin interface). When I use the gce class, everything works fine except for the Basic Auth (which I think is not supported in the gce class), and when I try to use the nginx class, the Ingress Controller launches, but the IP address that I reserved in the Google Cloud admin interface is not attached to it. Does anyone know how to get this working? Here is my config file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webserver
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "myreservedipname"
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-realm: "Auth required"
    ingress.kubernetes.io/auth-secret: htpasswd
spec:
  tls:
    - secretName: tls
  backend:
    serviceName: webserver
    servicePort: 80
I found a solution with helm:
helm install --name nginx-ingress stable/nginx-ingress \
  --set controller.service.loadBalancerIP=<YOUR_EXTERNAL_IP>
You should use the external IP address itself, not the name you gave it with gcloud.
Also, in my case I added --set rbac.create=true for permissions.
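To look up that IP from the reserved-address name, something like this should work (assuming a regional address; for a global one, use --global instead of --region):
gcloud compute addresses describe myreservedipname \
  --region <your-cluster-region> --format='value(address)'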
An external IP address can be attached to the load balancer that you point at your Ingress controller.
One major remark: the external IP address should be reserved in the same region as the Kubernetes cluster.
To do it, you just need to deploy your Nginx-ingress Service with type: LoadBalancer and set the loadBalancerIP value, like this:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  loadBalancerIP: <YOUR_EXTERNAL_IP>
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
After deployment, Kubernetes will create a new load balancer with the desired static IP, which will be the entry point for your Ingress.
@silgon, as I see, you already tried to do this, but without a positive result. It should work, though; if not, check the region of the IP address and the configuration once again.
Here's an example that I know works; the problem could be an issue around your syntax:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
    - host: nginx.192.168.99.100.nip.io
      http:
        paths:
          - backend:
              serviceName: nginx
              servicePort: 80