Google Kubernetes Engine: How to define one Ingress for multiple namespaces? - kubernetes

On GKE, a Kubernetes Ingress is backed by a Compute Engine load balancer, which has a cost: for example, over 2 months I paid 16.97€.
In my cluster I have 3 namespaces (default, dev and prod), so to reduce cost I would like to avoid spawning 3 load balancers. The question is how to configure the current one to point to the right namespace.
GKE requires the Ingress's target Service to be of type NodePort, and I am stuck because of that constraint.
I would like to do something like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: dev
  annotations: # enables the SSL certificate
    kubernetes.io/ingress.global-static-ip-name: lb-ip-adress
spec:
  rules:
  - host: dev.domain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: dev-service # This is the current case, 'dev-service' is a NodePort
          servicePort: http
  - host: domain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: prod-service # This Service lives in the 'dev' namespace and is of type ExternalName. Its final purpose is to point to the real target Service living in the 'prod' namespace.
          servicePort: http
  - host: www.domain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: prod-service
          servicePort: http
As GKE requires the backend Service to be a NodePort, I am stuck with prod-service.
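For reference, the ExternalName Service mentioned in the comment above is roughly this (a sketch of what I have in mind; the target name in 'prod' is a placeholder, following the usual service.namespace.svc.cluster.local pattern):
apiVersion: v1
kind: Service
metadata:
  name: prod-service
  namespace: dev
spec:
  type: ExternalName
  # placeholder: the real Service living in the 'prod' namespace
  externalName: prod-service.prod.svc.cluster.local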
Any help will be appreciated.
Thanks a lot

OK, here is what I have been doing. I have only one Ingress, with one backend Service pointing to nginx.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  backend:
    serviceName: nginx-svc
    servicePort: 80
And in your nginx Deployment you can define a ConfigMap with a typical nginx configuration. This way you use one Ingress and target multiple namespaces.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  default.conf: |
    server {
        listen 80;
        listen [::]:80;
        server_name _;
        location / {
            add_header Content-Type text/plain;
            return 200 "OK.";
        }
        location /segmentation {
            proxy_pass http://myservice.mynamespace.svc.cluster.local:80;
        }
    }
And the Deployment mounts the above nginx configuration via the ConfigMap:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # podAntiAffinity prevents two nginx pods from running on the same node
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: kubernetes.io/hostname
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-configs
          mountPath: /etc/nginx/conf.d
        livenessProbe:
          httpGet:
            path: /
            port: 80
      # Load the nginx configuration files from the ConfigMap
      volumes:
      - name: nginx-configs
        configMap:
          name: nginx-config
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - protocol: "TCP"
    nodePort: 32111
    port: 80
This way you can take advantage of Ingress features like TLS/SSL termination (Google-managed certificates or cert-manager), and, if you want, you can also keep more complex configuration inside nginx.
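For instance, TLS termination with a Google-managed certificate on that single Ingress could look roughly like this (a sketch; 'lb-ip-address' and 'my-managed-cert' are placeholder names, and the ManagedCertificate resource is assumed to be created separately):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: lb-ip-address   # placeholder static IP name
    networking.gke.io/managed-certificates: my-managed-cert      # placeholder ManagedCertificate
spec:
  backend:
    serviceName: nginx-svc
    servicePort: 80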

Use @Prata's approach but with one change: do not route prod traffic via nginx; route it directly to the Service from the load balancer, and use nginx only for non-prod traffic (e.g. staging).
The reason is that the Google HTTPS load balancer uses container-native load balancing (link), which routes traffic directly to healthy Pods, saving hops and being more efficient, so why not use it for production?
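A rough sketch of that split on a single GKE Ingress (hedged: the host names are placeholders, and on GKE both backends have to be NodePort Services in the Ingress's own namespace):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
  - host: domain.com            # prod: straight from the HTTPS load balancer to the prod Service
    http:
      paths:
      - path: /*
        backend:
          serviceName: prod-service   # NodePort Service for prod workloads (placeholder)
          servicePort: 80
  - host: staging.domain.com    # non-prod: routed through nginx
    http:
      paths:
      - path: /*
        backend:
          serviceName: nginx-svc
          servicePort: 80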

One alternative (and probably the most flexible GCP-native) solution for HTTP(S) load balancing is to use standalone NEGs. This requires you to set up all parts of the load balancer yourself (URL maps, health checks, etc.).
There are multiple benefits, such as:
One load balancer can serve multiple namespaces
The same load balancer can integrate other backends as well (like instance groups outside your cluster)
You can still use container-native load balancing
One challenge of this approach is that it is not "GKE native": the routes will still exist even if you delete the underlying Service. This approach is therefore best maintained through tools like Terraform, which give you holistic control over your GCP deployment.
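As a sketch, the cluster-side part of a standalone NEG is just an annotation on the Service (the names and ports below are placeholders); the load balancer's backend service, URL map and health checks then reference that NEG and are created outside the cluster, e.g. via Terraform:
apiVersion: v1
kind: Service
metadata:
  name: dev-service            # placeholder Service name
  namespace: dev
  annotations:
    # Exposes port 80 as a standalone NEG named 'dev-service-neg'
    cloud.google.com/neg: '{"exposed_ports": {"80": {"name": "dev-service-neg"}}}'
spec:
  selector:
    app: dev-app               # placeholder Pod label
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080           # placeholder container port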

Related

Connect two pods to the same external access point

I have two pods (Deployments) running on minikube. Each pod exposes the same port (say 8081) but uses a different image. Now I want to configure things so that I can access either of the pods through the same external URL, in a load-balanced way. So what I tried to do is put the same matching label on both pods, map them to the same Service, and then expose it through a NodePort. Example:
# pod1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep1
  labels:
    apps: dep1
    tier: cloud
spec:
  template:
    metadata:
      name: dep1-pod
      labels:
        app: deployment1
    spec:
      containers:
      - name: cont1
        image: cont1
        ports:
        - containerPort: 8081
  selector:
    matchLabels:
      app: deployment1
Now the second pod:
# pod2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep2
  labels:
    apps: dep2
    tier: cloud
spec:
  template:
    metadata:
      name: dep2-pod
      labels:
        app: deployment1
    spec:
      containers:
      - name: cont2
        image: cont2
        ports:
        - containerPort: 8081
  selector:
    matchLabels:
      app: deployment1
Now the service:
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  type: NodePort
  ports:
  - port: 8081
    targetPort: 8081
    nodePort: 30169
  selector:
    app: deployment1
Now this does not work as intended: it refuses connections on IP:30169. However, I can connect if only one of the pods is deployed.
I know I can achieve this functionality using replicas and just one image, but in this case I want to do it with 2 images. Any help is much appreciated.
You can use Ingress to achieve it.
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
In your situation the Ingress will forward traffic to your Services under the same URL; which Service receives it depends on the path: URL for the first Pod and URL/v2 for the second Pod. Of course, you can change /v2 to something else.
To begin, you need to enable Ingress on minikube. You can do it with the command below; you can read more about it here.
minikube addons enable ingress
Next, you need to create an Ingress from a YAML file. Here is an example of how to do it step by step.
The YAML for the Ingress looks as below.
As you can see in this configuration, accessing URL forwards traffic to the first Service, attached to the first Pod, while accessing URL/v2 forwards traffic to the second Service, attached to the second Pod.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080
      - path: /v2
        pathType: Prefix
        backend:
          service:
            name: web2
            port:
              number: 8080
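Applied to your manifests, that means giving each Deployment its own label and its own Service, and pointing the Ingress backends at those Services instead of web/web2. A sketch (it assumes you relabel dep2's Pods, e.g. to app: deployment2, and adjust its selector to match):
apiVersion: v1
kind: Service
metadata:
  name: dep1-svc               # backend for path /
spec:
  selector:
    app: deployment1           # Pods from dep1
  ports:
  - port: 8081
    targetPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: dep2-svc               # backend for path /v2
spec:
  selector:
    app: deployment2           # Pods from dep2 (after relabeling)
  ports:
  - port: 8081
    targetPort: 8081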

Ingress without ip address

I created an Ingress to expose my internal service.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /app
        backend:
          serviceName: my-app
          servicePort: 80
But when I get this Ingress, it shows no IP address.
NAME          HOSTS         ADDRESS   PORTS   AGE
app-ingress   example.com             80      10h
The Service is shown below.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
  - name: my-app
    nodePort: 32000
    port: 3000
    targetPort: 3000
  type: NodePort
Note: I'm guessing, because of the other question you asked, that you are trying to create an Ingress on a manually created cluster with kubeadm.
As described in the docs, in order for Ingress to work you need to install an ingress controller first. An Ingress object itself is merely a configuration slice for the installed ingress controller.
An nginx-based controller is one of the most popular choices. Similarly to Services, in order to get a single failover-enabled VIP for your Ingress you need to use MetalLB. Otherwise you can deploy ingress-nginx over a NodePort: see details here.
Finally, servicePort in your Ingress object should be 3000, the same as the port of your Service.
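In other words, the Ingress from the question becomes (only servicePort changes):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /app
        backend:
          serviceName: my-app
          servicePort: 3000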

Best approach to expose kubernetes ingress?

I'm currently using nginx ingress to expose my apps externally; my current approach is shown below. My question: is this the best way to do it, and if not, what would be best practice?
nginx ingress controller Service:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http
    protocol: "TCP"
  - port: 443
    name: https
    protocol: "TCP"
  selector:
    app: nginx-ingress-lb
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: app-ingress
spec:
  tls:
  - hosts:
    - myservice.example.com
    secretName: sslcerts
  rules:
  - host: myservice.example.com
    http:
      paths:
      - backend:
          serviceName: myservice
          servicePort: 80
        path: /
So basically I have my nginx ingress controller pods running and exposed through a Service of type LoadBalancer, and I have rules defined to determine the routes.
The best approach depends on the environment in which you choose to build your cluster: a Cloud provider or a bare-metal solution.
In your example, I guess you are using a Cloud provider as the LoadBalancer provisioner, which delivers an external IP address as the entry point for your NGINX Ingress Controller. That gives you a good ability to scale your Ingress on demand, with various features and options.
I found this article very useful for comparing implementations of the NGINX Ingress Controller on Cloud and bare-metal environments.

Kubernetes Ingress Path only works with /

I have configured a Kubernetes Ingress, but it only works when the path is /.
I have tried all manner of different values for the path including:
/*
/servicea
/servicea/
/servicea/*
This is my ingress configuration (that works)
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: boardingservice
    annotations:
      ingress.kubernetes.io/rewrite-target: /
  spec:
    rules:
    - host: my.url.com
      http:
        paths:
        - path: /
          backend:
            serviceName: servicea-nodeport
            servicePort: 80
This is my nodeport service
- apiVersion: v1
  kind: Service
  metadata:
    name: servicea-nodeport
  spec:
    type: NodePort
    ports:
    - port: 80
      targetPort: 8081
      nodePort: 30124
    selector:
      app: servicea
And this is my deployment
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: servicea
  spec:
    replicas: 1
    template:
      metadata:
        name: servicea
        labels:
          app: servicea
      spec:
        containers:
        - image: 350329402011.dkr.ecr.eu-west-2.amazonaws.com/servicea
          name: servicea
          ports:
          - containerPort: 8080
            protocol: TCP
        - image: 350329402011.dkr.ecr.eu-west-2.amazonaws.com/serviceb
          name: serviceb
          ports:
          - containerPort: 8081
            protocol: TCP
If the path is / then I can do this: http://my.url.com/api/ping
but as I will have multiple services I want to do this: http://my.url.com/servicea/api/ping. However, when I set the path to /servicea I get a 404.
I am running kubernetes on AWS with an ingress-nginx ingress controller
Any idea?
You are not using Kubernetes Pods as they are intended to be used. A Pod
contains one or more application containers which are relatively tightly coupled; in a pre-container world, they would have executed on the same physical or virtual machine.
If you have two applications, servicea and serviceb, they should be running on different Pods: one pod for servicea and another one for serviceb. This has many benefits: you can deploy them separately, scale them independently, etc.
As the docs say
A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources.
These Pods can be created using Deployments, as you were already doing. That's fine and recommended.
Once you have the Deployments running, you'd create a separate Service for each one, which would balance traffic across all the Pods of that Deployment.
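For instance, a Service for servicea could look like this (a sketch based on your manifests; it assumes the servicea container keeps listening on 8080, and an analogous Service would exist for serviceb on 8081):
apiVersion: v1
kind: Service
metadata:
  name: servicea
spec:
  selector:
    app: servicea        # Pods of the servicea Deployment
  ports:
  - port: 80             # port referenced by the Ingress backend below
    targetPort: 8080     # containerPort of servicea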
And finally, you want to hit servicea or serviceb depending on the request URL. That can be done with an Ingress, as you were trying, but mapping each path to a different Service. For example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my.url.com
    http:
      paths:
      - path: /servicea
        backend:
          serviceName: servicea
          servicePort: 80
      - path: /serviceb
        backend:
          serviceName: serviceb
          servicePort: 80
That way, requests going to your ingress controller using the /servicea path would be served by the Pods behind the servicea Service. And requests going to your ingress controller using the /serviceb path would be served by the Pods behind the serviceb Service.
For anyone reading this: my configuration was correct (even though unorthodox, as pointed out by fiunchinho); the error was in my Spring Boot applications running in the containers. I needed to change their context paths to match the Ingress path. I could, of course, have changed the @GetMapping and @PostMapping methods in my Spring controllers, but I opted to change the context path.
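For illustration, one way to set that context path without rebuilding the image is an environment variable on the container (a sketch; it assumes Spring Boot 2.x, where the SERVER_SERVLET_CONTEXT_PATH variable maps to server.servlet.context-path via relaxed binding):
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: servicea
  spec:
    replicas: 1
    template:
      metadata:
        labels:
          app: servicea
      spec:
        containers:
        - image: 350329402011.dkr.ecr.eu-west-2.amazonaws.com/servicea
          name: servicea
          env:
          - name: SERVER_SERVLET_CONTEXT_PATH   # maps to server.servlet.context-path
            value: /servicea
          ports:
          - containerPort: 8080
            protocol: TCP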

How can I generate External IP when creating an ingress that uses nginx controller in kubernetes

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: helloworld-rules
spec:
  rules:
  - host: helloworld-v1.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: helloworld-v1
          servicePort: 80
  - host: helloworld-v2.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: helloworld-v2
          servicePort: 80
I'm building a Kubernetes cluster that will run on an isolated cloud platform (not AWS or Google).
When creating an Ingress for a service I can choose a host URL, but that host does not exist anywhere (the address is not registered in any DNS server), so I can't access that URL. Visiting the IP directly just gives a 404.
How can I get or configure a URL that is reachable from an external browser?
It depends on how you configure your nginx controller.
You should have a Service configured which is the entry point when accessing from outside; see the docs: https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress
So basically you have a Service that points to the ingress controller, and it redirects the traffic to your Pods based on Ingress objects.
Ingress -> Services -> Pods
Since you don't run on AWS or Google, you would have to use externalIPs or a NodePort and configure the Service accordingly:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
  externalIPs:
  - 80.11.12.10
And DNS needs to be managed with whatever you have for your domains in order to resolve; for local testing you can just edit your /etc/hosts file.
Basically, in AWS or Google you just create a Service with type: LoadBalancer and point your DNS records to the load balancer address (a CNAME for AWS and the IP for Google).
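For comparison, on AWS or Google that entry-point Service would simply be of type LoadBalancer (a sketch; the provider then allocates the external address that your DNS records point to):
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https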