Multiple external IPs (load balancers) for ingress-nginx - Kubernetes

How can I configure multiple external IPs within a single cluster using ingress-nginx?
I can see that ingress-nginx creates a load-balancer Service with an external IP. I assume I would need to create another load-balancer Service? How would I indicate in the Ingress which load balancer to use?
P.S. I am using GKE.

Create multiple ingress controllers. In the new controller's Deployment, define an ingress class name (here nginx-internal); the relevant fragment of the Deployment spec:
spec:
  template:
    spec:
      containers:
        - name: nginx-ingress-internal-controller
          args:
            - /nginx-ingress-controller
            - '--election-id=ingress-controller-leader-internal'
            - '--ingress-class=nginx-internal'
            - '--configmap=ingress/nginx-ingress-internal-controller'
Then create an Ingress with the
kubernetes.io/ingress.class: "nginx-internal" annotation.
For example, create a hello-world Ingress with the following YAML:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world
  annotations:
    kubernetes.io/ingress.class: "nginx-internal"
spec:
  tls:
    - secretName: tls-secret
  rules:
    - http:
        paths:
          - backend:
              serviceName: hello-world-svc
              servicePort: 8000
See the official documentation for more details.
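Since the goal is multiple external IPs, the second controller also needs its own Service of type LoadBalancer pointing at its pods. The original answer does not show this part; here is a minimal sketch, where the name, namespace and pod label are assumptions, and the GKE annotation is only needed if the second IP should be internal rather than public:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-internal   # hypothetical name
  namespace: ingress
  annotations:
    # On GKE this requests an internal (VPC-private) load balancer;
    # omit the annotation to get a second public external IP instead
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress-internal  # assumed pod label of the second controller
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443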

Related

Traefik Dashboard: Ingress and IngressRoute, can they co-exist?

Recently I have been moving a project to Kubernetes and have used Traefik as the ingress controller. For Traefik I have used the Traefik Kubernetes Ingress provider for routing. When I tried to add the Traefik dashboard, it seemed it could only be added using an IngressRoute (i.e. using the Kubernetes CRD provider).
I have a few questions:
Is it possible to use the Traefik Kubernetes Ingress provider to bring up the dashboard?
Can I use both kubernetesingress and kubernetescrd as providers? Can both Ingress and IngressRoute co-exist?
I have solved the Traefik dashboard problem using the Traefik Kubernetes Ingress provider only, so the answer to the first question is 'Yes'.
The following is my configuration:
traefik-deployment.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik
  namespace: ingress-traefik
  labels:
    app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-ingress-controller
      containers:
        - name: traefik
          image: traefik:v2.2
          ports:
            - name: web
              containerPort: 80
            - name: websecure
              containerPort: 443
            - name: admin
              containerPort: 8080
          args:
            - --api
            - --api.insecure=true
            - --api.dashboard=true
            - --providers.kubernetesingress
            - --providers.kubernetescrd
            - --entrypoints.web.Address=:80
            - --entrypoints.websecure.Address=:443
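Note that the Ingress below routes the dashboard to a Service named traefik-service on port 8080; that Service is not shown in the original answer. A minimal sketch of what it could look like, assuming the app: traefik pod label from the Deployment above:
apiVersion: v1
kind: Service
metadata:
  name: traefik-service
  namespace: ingress-traefik
spec:
  selector:
    app: traefik        # matches the Deployment's pod labels
  ports:
    - name: admin
      port: 8080        # Traefik's API/dashboard port (enabled by api.insecure=true)
      targetPort: 8080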
traefik-dashboard-ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: traefik-dashboard-ingress
  namespace: ingress-traefik
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: web, websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    traefik.ingress.kubernetes.io/router.middlewares: ingress-traefik-traefikbasicauth@kubernetescrd
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - secretName: cert-stage-wildcard
  rules:
    - host: traefik.your-domain.io
      http:
        paths:
          - path: /
            backend:
              serviceName: traefik-service
              servicePort: 8080
The key to bringing this up is setting api.insecure=true. With this I can port-forward and test the Traefik dashboard on localhost, and then route to the service through the Traefik Kubernetes Ingress.
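For instance, the port-forward test could look like this (deployment and namespace names taken from the manifests above):
# forward Traefik's admin port to localhost, then open http://localhost:8080/dashboard/
kubectl port-forward -n ingress-traefik deployment/traefik 8080:8080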
The answer to the other question (can I use both kubernetesingress and kubernetescrd as providers?) is also confirmed to be 'Yes': I am now using them together, with kubernetesingress for routing and kubernetescrd for the basicAuth middleware (see the sketch below).
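The middleware referenced by the router.middlewares annotation above (ingress-traefik-traefikbasicauth@kubernetescrd, i.e. <namespace>-<name>@<provider>) is a Traefik CRD. A minimal sketch of what it could look like; the Secret name is a hypothetical placeholder:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: traefikbasicauth
  namespace: ingress-traefik
spec:
  basicAuth:
    secret: traefik-basicauth-users   # hypothetical Secret holding htpasswd-style user entries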
But I suspect the two routing schemes, Ingress and IngressRoute, may not be able to co-exist for the same route: they are both for routing, and only one of them will be used by the system when both exist. Please correct me if I am wrong.

Google Kubernetes Engine: How to define one Ingress for multiple namespaces?

On GKE, K8s Ingresses are backed by load balancers provided by Compute Engine, which have some cost: for example, for 2 months I paid 16.97 €.
In my cluster I have 3 namespaces (default, dev and prod), so to reduce cost I would like to avoid spawning 3 load balancers. The question is how to configure the current one to point to the right namespace.
GKE requires the Ingress's target Service to be of type NodePort, and I am stuck because of that constraint.
I would like to do something like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: dev
  annotations: # activate SSL certificate
    kubernetes.io/ingress.global-static-ip-name: lb-ip-adress
spec:
  rules:
    - host: dev.domain.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: dev-service # This is the current case, 'dev-service' is a NodePort
              servicePort: http
    - host: domain.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: prod-service # This service lives in the 'dev' namespace and is of type ExternalName; its final purpose is to point to the real target service living in the 'prod' namespace.
              servicePort: http
    - host: www.domain.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: prod-service
              servicePort: http
As GKE requires the Service to be a NodePort, I am stuck with prod-service.
Any help will be appreciated.
Thanks a lot.
OK, here is what I have been doing. I have only one Ingress, with one backend Service pointing to nginx:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  backend:
    serviceName: nginx-svc
    servicePort: 80
In your nginx Deployment/controller you can then define a ConfigMap with a typical nginx configuration. This way you use one Ingress and target multiple namespaces:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  default.conf: |
    server {
      listen 80;
      listen [::]:80;
      server_name _;
      location / {
        add_header Content-Type text/plain;
        return 200 "OK.";
      }
      location /segmentation {
        proxy_pass http://myservice.mynamespace.svc.cluster.local:80;
      }
    }
The Deployment then uses the above nginx config via the ConfigMap:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # podAntiAffinity prevents two nginx pods from running on the same node
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - nginx
              topologyKey: kubernetes.io/hostname
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-configs
              mountPath: /etc/nginx/conf.d
          livenessProbe:
            httpGet:
              path: /
              port: 80
      # Load the configuration files for nginx
      volumes:
        - name: nginx-configs
          configMap:
            name: nginx-config
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
    - protocol: "TCP"
      nodePort: 32111
      port: 80
This way you can take advantage of Ingress features like TLS/SSL termination (managed by Google or by cert-manager), and if you want you can also keep your more complex configuration inside nginx.
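As an illustration, here is a hedged sketch of layering cert-manager TLS onto the single Ingress above, assuming a letsencrypt-prod ClusterIssuer exists (as in the Traefik answer earlier); the host and Secret name are hypothetical:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # assumes this ClusterIssuer is installed
spec:
  tls:
    - hosts:
        - domain.com               # hypothetical host
      secretName: domain-com-tls   # hypothetical Secret that cert-manager will populate
  backend:
    serviceName: nginx-svc
    servicePort: 80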
Use @Prata's approach, but with one change: do not route prod traffic via nginx; route it directly to the service from the load balancer, and use nginx for non-prod traffic, e.g. staging.
The reason is that the Google HTTPS load balancer uses container-native load balancing, which routes traffic directly to healthy pods, saving hops and being more efficient; why not use it for production?
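Container-native load balancing is requested per Service with a NEG annotation; a minimal sketch, where the selector and ports are assumptions:
apiVersion: v1
kind: Service
metadata:
  name: prod-service
  annotations:
    # asks GKE to create a Network Endpoint Group so the Ingress-created
    # HTTPS load balancer targets pods directly instead of node ports
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: NodePort
  selector:
    app: prod-app        # hypothetical pod label
  ports:
    - port: 80
      targetPort: 8080   # hypothetical container port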
One alternative (and probably the most flexible GCP-native) solution for HTTP(S) load balancing is to use standalone NEGs. This requires you to set up all parts of the load balancer yourself (URL maps, health checks, etc.).
There are multiple benefits, such as:
One load-balancer can serve multiple namespaces
The same load balancer can integrate other backends as well (like other instance groups outside your cluster)
You can still use container-native load balancing
One challenge of this approach is that it is not "GKE-native", which means the routes will continue to exist even if you delete the underlying service. This approach is therefore best maintained through tools like Terraform, which give you holistic control over your GCP deployment.
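A hedged sketch of exposing a Service as a standalone NEG (all names hypothetical); the resulting NEG is then wired into backend services, URL maps and health checks managed outside GKE, e.g. from Terraform:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # creates a standalone NEG named 'my-service-neg' for port 80;
    # no Ingress is involved, so the load balancer itself is configured separately
    cloud.google.com/neg: '{"exposed_ports": {"80": {"name": "my-service-neg"}}}'
spec:
  selector:
    app: my-app          # hypothetical pod label
  ports:
    - port: 80
      targetPort: 8080   # hypothetical container port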

Web app not displaying pages using Kubernetes traefik ingress controller

My app does not work when I use a path other than / in the Ingress rule. The app works when I access it at http://gv.cloud.test.com:nodeport from outside the Kubernetes cluster, but it does not work with http://gv.cloud.test.com/mytestapp. Can someone help me? The web app uses / as the base_href path in Angular.
I am using Traefik as the ingress controller. I have tried all the available Traefik rule types:
PathPrefixStrip
PathPrefix
etc
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip
  labels:
    app: my-testapp
    env: dev
  name: my-testapp-dev-ingress
  namespace: jenkins
spec:
  rules:
    - host: gv.cloud.test.com
      http:
        paths:
          - backend:
              serviceName: my-testapp-service
              servicePort: 8090
            path: /mytestapp

Kubernetes Ingress Path only works with /

I have configured a Kubernetes Ingress, but it only works when the path is /.
I have tried all manner of different values for the path, including:
/*
/servicea
/servicea/
/servicea/*
This is my ingress configuration (that works)
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: boardingservice
    annotations:
      ingress.kubernetes.io/rewrite-target: /
  spec:
    rules:
      - host: my.url.com
        http:
          paths:
            - path: /
              backend:
                serviceName: servicea-nodeport
                servicePort: 80
This is my nodeport service
- apiVersion: v1
  kind: Service
  metadata:
    name: servicea-nodeport
  spec:
    type: NodePort
    ports:
      - port: 80
        targetPort: 8081
        nodePort: 30124
    selector:
      app: servicea
And this is my deployment
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: servicea
  spec:
    replicas: 1
    template:
      metadata:
        name: servicea
        labels:
          app: servicea
      spec:
        containers:
          - image: 350329402011.dkr.ecr.eu-west-2.amazonaws.com/servicea
            name: servicea
            ports:
              - containerPort: 8080
                protocol: TCP
          - image: 350329402011.dkr.ecr.eu-west-2.amazonaws.com/serviceb
            name: serviceb
            ports:
              - containerPort: 8081
                protocol: TCP
If the path is /, then I can do this: http://my.url.com/api/ping.
But as I will have multiple services, I want to do this: http://my.url.com/servicea/api/ping. However, when I set the path to /servicea I get a 404.
I am running Kubernetes on AWS with an ingress-nginx ingress controller.
Any ideas?
You are not using Kubernetes Pods as they are intended to be used. A Pod
contains one or more application containers which are relatively tightly coupled — in a pre-container world, they would have executed on the same physical or virtual machine.
If you have two applications, servicea and serviceb, they should run in different Pods: one Pod for servicea and another for serviceb. This has many benefits: you can deploy them separately, scale them independently, etc.
As the docs say
A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources.
These Pods can be created using Deployments, as you were already doing. That's fine and recommended.
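For illustration, a sketch of the servicea half after splitting the question's two-container Deployment in two (serviceb would be analogous; the image and labels are taken from the question):
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: servicea
  spec:
    replicas: 1
    template:
      metadata:
        labels:
          app: servicea
      spec:
        containers:
          - image: 350329402011.dkr.ecr.eu-west-2.amazonaws.com/servicea
            name: servicea
            ports:
              - containerPort: 8080
                protocol: TCP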
Once you have the Deployments running, you'd create a different Service that would balance traffic between all the Pods for a given Deployment.
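Such a Service for servicea might look like this (selector and target port taken from the question's manifests):
- apiVersion: v1
  kind: Service
  metadata:
    name: servicea
  spec:
    selector:
      app: servicea      # matches the pods of the servicea Deployment
    ports:
      - port: 80
        targetPort: 8080 # servicea's container port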
And finally, you want to hit servicea or serviceb depending on the request URL. That can be done with an Ingress, as you were trying, by mapping each path to a different Service. For example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: my.url.com
      http:
        paths:
          - path: /servicea
            backend:
              serviceName: servicea
              servicePort: 80
          - path: /serviceb
            backend:
              serviceName: serviceb
              servicePort: 80
That way, requests going to your ingress controller using the /servicea path would be served by the Pods behind the servicea Service. And requests going to your ingress controller using the /serviceb path would be served by the Pods behind the serviceb Service.
For anyone reading this: my configuration was correct (even though unorthodox, as pointed out by fiunchinho); the error was in my Spring Boot applications that were running in the containers. I needed to change the context paths to match the Ingress path. I could, of course, have changed the @GetMapping and @PostMapping methods in my Spring controllers, but I opted to change the context path.
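For reference, the context path change in Spring Boot is a one-line setting; a sketch, assuming the /servicea Ingress path from above:
# application.yml (Spring Boot 2.x)
server:
  servlet:
    context-path: /servicea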

Kubernetes Ingress not accessible (localhost)

I am setting up a minimal Kubernetes cluster on localhost on a Linux machine (starting with hack/local-up-cluster.sh from the checked-out repo). In my deployment file I defined an Ingress, which should make the services deployed in the cluster accessible from the outside. Deployment.yml:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: foo-service-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: foo-service
    spec:
      containers:
        - name: foo-service
          image: images/fooservice
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 7778
---
apiVersion: v1
kind: Service
metadata:
  name: foo-service-service
spec:
  ports:
    - port: 7778
  selector:
    app: foo-service
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-gateway-ingress
spec:
  rules:
    - host:
      http:
        paths:
          - path: /foo
            backend:
              serviceName: foo-service-service
              servicePort: 7779
          - path: /bar
            backend:
              serviceName: bar-service-service
              servicePort: 7776
I cannot access the services. kubectl describe shows the following for my ingress:
Name:             api-gateway-ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /foo  foo-service-service:7779 (<none>)
        /bar  bar-service-service:7776 (<none>)
Annotations:
Events:  <none>
Is it because there is no address set for my ingress that it is not visible to the outside world yet?
An Ingress resource is just a definition for your cluster of how to handle ingress traffic. It needs an Ingress controller to actually process these definitions; creating an Ingress resource without having deployed an Ingress controller will have no effect.
From the documentation:
In order for the Ingress resource to work, the cluster must have an Ingress controller running. This is unlike other types of controllers, which typically run as part of the kube-controller-manager binary, and which are typically started automatically as part of cluster creation. You need to choose the ingress controller implementation that is the best fit for your cluster, or implement one.
There are several Ingress controllers available that you can deploy yourself (typically via a Deployment resource), for example the NGINX ingress controller (which is part of the Kubernetes project) or third-party ingress controllers like Traefik, Envoy or Voyager.
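As a quick sanity check before debugging Ingress resources, you can verify that a controller is actually running; a sketch, assuming the conventional labels of the ingress-nginx project (other controllers use different labels):
# look for ingress controller pods anywhere in the cluster
kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx

# once a controller is running, the Ingress should eventually get an Address
kubectl describe ingress api-gateway-ingress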