Kubernetes Cross Namespace Ingress Network

I have a simple ingress network, and I want to access services in different namespaces from it.
How can I do this?
My ingress network yaml file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
  - host: api.myhost.com
    http:
      paths:
      - backend:
          serviceName: bookapi-2
          servicePort: 8080
        path: /booking-service/
I've set the ExternalName service type in this yaml file:
apiVersion: v1
kind: Service
metadata:
  name: bookapi-2
  namespace: booking-namespace
spec:
  type: ExternalName
  externalName: bookapi-2
  ports:
  - name: app
    protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    app: bookapi-2
    tier: backend-2

An ExternalName service is a special case of Service that does not have selectors and uses DNS names instead.
You can find out more about the ExternalName service in the official Kubernetes documentation.
When you want to access a service from a different namespace, your yaml could, for example, look like this:
kind: Service
apiVersion: v1
metadata:
  name: test-service-1
  namespace: namespace-a
spec:
  type: ExternalName
  externalName: test-service-2.namespace-b.svc.cluster.local
  ports:
  - port: 80
As to your Ingress yaml file, it contains some inconsistencies; please recheck it and make sure it is compliant with the official examples, for example this one:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: www.mysite.com
    http:
      paths:
      - backend:
          serviceName: website
          servicePort: 80
  - host: forums.mysite.com
    http:
      paths:
      - path:
        backend:
          serviceName: forums
          servicePort: 80
Please also recheck your ExternalName yaml, as it has targetPort and selector fields which are not used in this type of Service, and make sure that:
ExternalName Services are available only with kube-dns version 1.7 and later.
In case you do not succeed, please share the kind of problem you have met.
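For reference, applied to the yaml from the question, a corrected ExternalName Service could look roughly like the sketch below. It assumes the Ingress lives in a hypothetical namespace called ingress-namespace, while the real bookapi-2 Service keeps running in booking-namespace:
apiVersion: v1
kind: Service
metadata:
  name: bookapi-2
  namespace: ingress-namespace # hypothetical: the namespace that holds the Ingress
spec:
  type: ExternalName
  # resolve to the real Service in booking-namespace via its cluster DNS name
  externalName: bookapi-2.booking-namespace.svc.cluster.local
  ports:
  - name: app
    protocol: TCP
    port: 8080
  # note: no selector and no targetPort; they are ignored for ExternalName Services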

1. Create namespace service-ns.
2. Create a Service of type ClusterIP (which is the default) named nginx-internal listening on port 80 in namespace service-ns.
3. Create an nginx Deployment in service-ns.
4. Create namespace ingress-ns.
5. Create a Service of type ExternalName in ingress-ns pointing to the FQDN of the internal service, i.e. nginx-internal.service-ns.svc.cluster.local.
6. Create the ingress rules.
NOTE: Demo code, not to be run in production. It is just meant to give an idea of how this would work across namespaces.
---
#1
apiVersion: v1
kind: Namespace
metadata:
  name: service-ns
---
#2
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx-internal
  namespace: service-ns
spec:
  ports:
  - name: "80"
    port: 80
    targetPort: 80
  selector:
    app: nginx
---
#3
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: service-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
      restartPolicy: Always
---
#4
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-ns
---
#5
kind: Service
apiVersion: v1
metadata:
  name: nginx
  namespace: ingress-ns
spec:
  type: ExternalName
  externalName: nginx-internal.service-ns.svc.cluster.local
  ports:
  - port: 80
---
#6
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
  namespace: ingress-ns
spec:
  rules:
  - host: whatever.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80

Outside traffic comes through the ingress controller service, which is responsible for routing the traffic based on the defined routing rules, or what we call ingress rules in the k8s world.
In other words, ingress resources are just routing rules (think of them in a way that's similar to DNS records), so when you define an ingress resource you just define a rule for the ingress controller to act on and route traffic accordingly.
Solution:
Since Ingresses are nothing but routing rules, you could define such rules anywhere in the cluster (in any namespace) and the controller should pick them up, as it monitors the creation of such resources and reacts accordingly.
Here's how to create an ingress easily using kubectl.
Create an ingress:
kubectl create ingress <name> -n namespaceName --rule="host/prefix=serviceName:portNumber"
Note: Add --dry-run=client -oyaml to generate the yaml manifest file.
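For instance, with the host and service from the original question and a hypothetical ingress-namespace, the call could look like this; --dry-run=client only prints the manifest so you can review it before applying:
kubectl create ingress booking -n ingress-namespace --rule="api.myhost.com/booking-service=bookapi-2:8080" --dry-run=client -o yaml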
Or you may create a service of type ExternalName in the same namespace where you have defined your ingress. Such an external service can point to any URL (a service that lives outside the namespace or even outside the k8s cluster).
Here's an example:
Create an ExternalName service:
kubectl create service externalname ingress-ns -n namespaceName --external-name=serviceName.namespace.svc.cluster.local --tcp=80:80
Note: Add --dry-run=client -oyaml to generate the yaml manifest file.
kind: Service
apiVersion: v1
metadata:
  name: nginx
  namespace: ingress-ns
spec:
  type: ExternalName
  externalName: serviceName.namespace.svc.cluster.local # or any external svc
  ports:
  - port: 80 # specify the port of the service you want to expose
    targetPort: 80 # port of the external service
As described above, create an ingress as below:
kubectl create ingress <name> -n namespaceName --rule="host/prefix=serviceName:portNumber"
Note: Add --dry-run=client -oyaml to generate yaml manifest file

An ExternalName Service should be created without any selector options. So create the ExternalName Service in the namespace where your ingress resource is created, and point it at the DNS name of your application's Service hosted in the other namespace.

Related

Exposing Redis with Ingress Nginx Controller

Hello, when I use NodePort to expose my redis service it works fine; I am able to access it.
But if I try to switch to the Ingress Nginx controller it refuses to connect. Other apps work fine with ingress.
Here is my service:
apiVersion: v1
kind: Service
metadata:
  name: redis-svc
spec:
  # type: NodePort
  ports:
  - name: http
    port: 6379
    targetPort: 6379
    protocol: TCP
    # nodePort: 30007
  selector:
    app: redis
And here is ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: redis-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    # nginx.ingress.kubernetes.io/enable-cors: "true"
    # nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
    # nginx.ingress.kubernetes.io/cors-allow-origin: "https://test.hefest.io"
    # nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
spec:
  tls:
  - secretName: letsencrypt-prod
    hosts:
    - redis-dev.domain.com
  rules:
  - host: redis-dev.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: redis-svc
          servicePort: 6379
Any idea what the issue could be?
I am using this ingress controller: https://github.com/nginxinc/kubernetes-ingress
Redis works on 6379, which is not an HTTP port (80, 443), so you need to enable TCP/UDP support in the nginx ingress controller. The minikube docs here show how to do it for redis.
Update the TCP and/or UDP services configmaps
Borrowing from the tutorial on configuring TCP and UDP services with the ingress nginx controller we will need to edit the configmap which is installed by default when enabling the minikube ingress addon.
There are 2 configmaps, 1 for TCP services and 1 for UDP services. By default they look like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
Since these configmaps are centralized and may contain configurations, it is best if we only patch them rather than completely overwrite them.
Let’s use this redis deployment as an example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: default
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - image: redis
        imagePullPolicy: Always
        name: redis
        ports:
        - containerPort: 6379
          protocol: TCP
Create a file redis-deployment.yaml and paste the contents above. Then install the redis deployment with the following command:
kubectl apply -f redis-deployment.yaml
Next we need to create a service that can route traffic to our pods:
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  namespace: default
spec:
  selector:
    app: redis
  type: ClusterIP
  ports:
  - name: tcp-port
    port: 6379
    targetPort: 6379
    protocol: TCP
Create a file redis-service.yaml and paste the contents above. Then install the redis service with the following command:
kubectl apply -f redis-service.yaml
To add a TCP service to the nginx ingress controller you can run the following command:
kubectl patch configmap tcp-services -n kube-system --patch '{"data":{"6379":"default/redis-service:6379"}}'
Where:
6379 : the port your service should listen to from outside the minikube virtual machine
default : the namespace that your service is installed in
redis-service : the name of the service
We can verify that our resource was patched with the following command:
kubectl get configmap tcp-services -n kube-system -o yaml
We should see something like this:
apiVersion: v1
data:
  "6379": default/redis-service:6379
kind: ConfigMap
metadata:
  creationTimestamp: "2019-10-01T16:19:57Z"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: tcp-services
  namespace: kube-system
  resourceVersion: "2857"
  selfLink: /api/v1/namespaces/kube-system/configmaps/tcp-services
  uid: 4f7fac22-e467-11e9-b543-080027057910
The only value you need to validate is that there is a value under the data property that looks like this:
"6379": default/redis-service:6379
Patch the ingress-nginx-controller
There is one final step that must be done in order to obtain connectivity from outside the cluster. We need to patch our nginx controller so that it is listening on port 6379 and can route traffic to your service. To do this we need to create a patch file.
spec:
  template:
    spec:
      containers:
      - name: ingress-nginx-controller
        ports:
        - containerPort: 6379
          hostPort: 6379
Create a file called ingress-nginx-controller-patch.yaml and paste the contents above.
Next apply the changes with the following command:
kubectl patch deployment ingress-nginx-controller --patch "$(cat ingress-nginx-controller-patch.yaml)" -n kube-system
The way I made it work is by enabling ssl-passthrough on the nginx-ingress controller.
Once my nginx-ingress controller was patched, I was able to connect.
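Note that ssl-passthrough is disabled by default and has to be switched on with a controller flag; the patch below is a minimal sketch, assuming the community kubernetes/ingress-nginx controller deployed as ingress-nginx-controller in the ingress-nginx namespace:
# append --enable-ssl-passthrough to the controller's container args
kubectl patch deployment ingress-nginx-controller -n ingress-nginx --type=json -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--enable-ssl-passthrough"}]'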
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: redis-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-passthrough: 'true'
spec:
  tls:
  - hosts:
    - <host_address>
    secretName: <k8s_secret_name>
  rules:
  - host: <host_address>
    http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: redis-service
            port:
              number: 6380
Python snippet to connect:
import redis

r = redis.StrictRedis(host='<host_address>',
                      port=443, db=0, ssl=True,
                      ssl_ca_certs='server.pem')
print(r.ping())
I do use Redis with TLS in my case.

Ingress configuration for k8s in different namespaces

I need to configure Ingress Nginx on Azure k8s, and my question is whether it is possible to have the ingress configured in one namespace, e.g. ingress-nginx, and some services in another namespace, e.g. resources.
My files look like this:
# ingress-nginx.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --annotations-prefix=nginx.ingress.kubernetes.io
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
# configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
---
# default-backend.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
# app-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - api-sand.fake.com
  rules:
  - host: api-sand.fake.com
    http:
      paths:
      - backend:
          serviceName: api-sand
          servicePort: 80
        path: /
And then I have some app running in the resources namespace, and the problem is that I am getting the following error:
error obtaining service endpoints: error getting service resources/api-sand from the cache: service resources/api-sand was not found
If I deploy api-sand in the same namespace as the ingress, then the service works fine.
I would like to simplify the answer a bit further for those who are relatively new to Kubernetes and its ingress options in particular.
There are 2 separate things that need to be present for ingress to work:
1. An Ingress Controller (essentially a separate Pod/Deployment along with a Service, used for routing and proxying; based on an nginx container, for example);
2. Ingress rules (a separate Kubernetes resource with kind: Ingress; it will only take effect if an Ingress Controller is already deployed).
Now, the Ingress Controller can be deployed in any namespace and is, in fact, usually deployed in a namespace separate from your app services. It can see Ingress rules in all namespaces in the cluster out of the box and will pick them up.
The Ingress rules, however, must reside in the namespace where the app that they configure resides.
There are some workarounds for that, but this is the most common approach.
Instead of creating the ingress app-ingress in the ingress-nginx namespace, you should create it in the namespace where you have the api-sand service and the pod, for example:
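A sketch based on the app-ingress.yaml from the question, with only metadata.namespace changed to the resources namespace where api-sand runs:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  namespace: resources # same namespace as the api-sand Service and pods
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - api-sand.fake.com
  rules:
  - host: api-sand.fake.com
    http:
      paths:
      - backend:
          serviceName: api-sand
          servicePort: 80
        path: /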
Alternatively, there is a way to have the ingress in one namespace and the service in another namespace via ExternalName. Check out Kubernetes Cross Namespace Ingress Network.
Here is an example referred from there:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: test-service.namespacename.svc.cluster.local
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80
It's actually possible: you can define an ingress and a service of type ExternalName in namespace A, while the ExternalName points to the DNS name of the service in namespace B. For further details, please refer to this answer: https://stackoverflow.com/a/51899301/2995449
The way that worked for me is creating an ingress for each namespace:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  namespace: production
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: api.youtube.com
    http:
      paths:
      - pathType: Prefix
        path: "/api/users"
        backend:
          service:
            name: youtube-srv
            port:
              number: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  namespace: development
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: dev.youtube.com
    http:
      paths:
      - pathType: Prefix
        path: "/api/users"
        backend:
          service:
            name: youtube-srv
            port:
              number: 3000
There is a way to configure your default backend per ingress resource although the documentation says it is usually configured at the ingress controller level.
For example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
  namespace: myns
spec:
  defaultBackend:
    service:
      name: default-http-backend
      port:
        number: 80
  ...
Here default-http-backend must be defined in the same namespace as the ingress resource.
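For the defaultBackend reference to resolve, a Service with that name has to exist in myns. A minimal sketch, assuming a default backend Deployment labelled app: default-http-backend (serving on 8080, like the defaultbackend image used earlier on this page) is already running there:
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: myns # must match the namespace of the Ingress above
spec:
  selector:
    app: default-http-backend # assumed label on the default backend pods
  ports:
  - port: 80
    targetPort: 8080 # assumed container port of the default backend image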

externally access the application using hostname/subdomain in ingress resource

I need to access the application externally using the Ingress hostname/sub-domain specified for the application in the code below, e.g. test-app.dev-cluster-poc.company.domain.
cat app-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: app-ingress
spec:
  rules:
  - host: test-app.dev-cluster-poc.company.domain
    http:
      paths:
      - backend:
          serviceName: appsvc1
          servicePort: 80
        path: /app1
      - backend:
          serviceName: appsvc2
          servicePort: 80
        path: /app2
While troubleshooting using the steps in the url, I found that there is no ADDRESS in the "kubectl get ingress" output; I was expecting an IP address in the third column, but it is empty.
What configuration is required to externally access the application, such as registering the hostname (test-app.dev-cluster-poc.company.domain), adding an A-record, or running a DNS service in the kubernetes cluster?
What is causing the ADDRESS column to be empty in the "kubectl get ingress" output?
[EDIT]
apiVersion: v1
kind: Service
metadata:
  name: appsvc1
  namespace: ingress
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: app1
The nginx controller service is like below:
cat nginx-ingress-controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: ingress
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30000
    name: http
  - port: 18080
    nodePort: 32000
    name: http-mgmt
  selector:
    app: nginx-ingress-lb
1. You can deploy an ingress deployment.
2. Expose your ingress deployment through port 80:
kubectl expose deploy your-deployment-name --port 80
(source)
3. You can add ingressClassName to your Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: ingress
spec:
  ingressClassName: nginx
ingress configuration sample
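For ingressClassName: nginx to resolve, a matching IngressClass object has to exist in the cluster; most ingress-nginx installations create it for you, but a minimal sketch looks like this:
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  # controller value used by the community kubernetes/ingress-nginx controller
  controller: k8s.io/ingress-nginx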

Kubernetes Ingress to External Service?

Say I have a service that isn't hosted on Kubernetes. I also have an ingress controller and cert-manager set up on my kubernetes cluster.
Because it's so much simpler and easier to use a kubernetes ingress to control access to services, I wanted to have a kubernetes ingress that points to a non-kubernetes service.
For example, I have a service that's hosted at https://10.0.40.1:5678 (SSL required, but with a self-signed certificate) and I want to access it at service.example.com.
You can do it by manually creating Service and Endpoints objects for your external server.
The objects will look like this:
apiVersion: v1
kind: Service
metadata:
  name: external-ip
spec:
  ports:
  - name: app
    port: 80
    protocol: TCP
    targetPort: 5678
  clusterIP: None
  type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-ip
subsets:
- addresses:
  - ip: 10.0.40.1
  ports:
  - name: app
    port: 5678
    protocol: TCP
Then, you can create an Ingress object which will point to Service external-ip with port 80:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: external-service
spec:
  rules:
  - host: service.example.com
    http:
      paths:
      - backend:
          serviceName: external-ip
          servicePort: 80
        path: /
So I got this working using ingress-nginx to proxy a managed external service over a non-standard port:
apiVersion: v1
kind: Service
metadata:
  name: external-service-expose
  namespace: default
spec:
  type: ExternalName
  externalName: <external-service> # eg example.example.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: external-service-expose
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" # important
spec:
  rules:
  - host: <some-host-on-your-side> # eg external-service.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: external-service-expose # must match the ExternalName Service above
            port:
              number: <port of external service> # eg 4589
  tls:
  - hosts:
    - external-service.yourdomain.com
    secretName: <tls secret for your domain>
Of course, you need to make sure that the managed URL is reachable from inside the cluster; a simple check can be done by launching a debug pod and running:
curl -v https://example.example.com:4589
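If you don't have a pod with curl handy, a throwaway one can be launched for the check; a quick sketch using the hypothetical URL above (-k skips certificate verification for self-signed certs):
kubectl run curl-debug --rm -it --restart=Never --image=curlimages/curl --command -- curl -vk https://example.example.com:4589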
If your external service has a DNS entry configured for it, you can use the kubernetes ExternalName service:
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: myexternal.http.service.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: externalNameservice
  namespace: prod
spec:
  rules:
  - host: service.example.com
    http:
      paths:
      - backend:
          serviceName: my-service
          servicePort: 80
        path: /
In this way, kubernetes creates a CNAME record for my-service pointing to myexternal.http.service.com.
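You can verify that the name resolves from inside the cluster with a throwaway pod; a quick sketch (pod name and image are just illustrative):
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup my-service.prod.svc.cluster.local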
I just want to update @Moulick's answer here for Kubernetes v1.21.1, as the ingress configuration has changed a little bit.
In my example I am using Let's Encrypt for my nginx controller:
apiVersion: v1
kind: Service
metadata:
  name: external-service
  namespace: default
spec:
  type: ExternalName
  externalName: <some-host-on-your-side> # eg managed.yourdomain.com
  ports:
  - port: <port of external service> # eg 4589
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: external-service
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" # important
spec:
  tls:
  - hosts:
    - <some-host-on-your-side> # eg managed.yourdomain.com
    secretName: tls-external-service
  rules:
  - host: <some-host-on-your-side> # eg managed.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: external-service
            port:
              number: <port of external service> # eg 4589

Route traffic to a service in a different namespace with Traefik and Kubernetes

Using Traefik as an ingress controller (on a kube cluster in GCP).
Is it possible to create an ingress rule that uses a backend service from a different namespace?
We have a namespace for each of our "major" versions of code.
1-service.com -> 1-service.com ingress in the 1-service ns -> 1-service svc in the same ns
2-service.com -> 2-service.com ingress in the 2-service ns... and so on
I also would like another ingress rule in the "unversioned" namespace that will route traffic to one of the major releases.
service.com -> service.com ingress in the "service" ns -> X-service in the X-service namespace
I would like to keep major versions separate in k8s using versioned host names (1-service.com etc), but still have a "latest" that points to the latest of the releases.
I believe Voyager can do cross-namespace ingress -> svc. Can Traefik do the same?
You can use a workaround like this:
Create a Service of type ExternalName in the namespace where you want to create the ingress:
apiVersion: v1
kind: Service
metadata:
  name: service-1
  namespace: unversioned
spec:
  type: ExternalName
  externalName: service-1.service-1-ns.svc.cluster.local
  ports:
  - name: http
    port: 8080
    protocol: TCP
Create an ingress that points to this service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
  name: ingress-to-other-ns
  namespace: service-1-ns
spec:
  rules:
  - host: latest.example.com
    http:
      paths:
      - backend:
          serviceName: service-1
          servicePort: 8080
        path: /
Just tested with the following example on EKS. Traefik is deployed in default namespace. This is the config used for the k8s service:
---
apiVersion: v1
kind: Service
metadata:
  name: 1-service
  namespace: 1-service
  labels:
    app: 1-service
spec:
  selector:
    app: 1-service
  ports:
  - name: http
    port: 80
    targetPort: 80
And this is the config used for Traefik service that will send the request to different namespace:
services:
  1-service:
    loadBalancer:
      servers:
      - url: http://1-service.1-service.svc.cluster.local:80
      # - url: http://1-service.1-service:80 # This should work perfectly as well, didn't test it explicitly
As you probably already gathered, you can reference services from a different namespace by using the SERVICE.NAMESPACE notation instead of just SERVICE, which would be assumed to reference a service in the current namespace.
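For completeness, a sketch of what the full dynamic configuration could look like with a router attached to that service, assuming Traefik v2 file-provider syntax, an entry point named web and an illustrative host latest.example.com:
http:
  routers:
    latest:
      rule: "Host(`latest.example.com`)"
      entryPoints:
      - web
      service: 1-service
  services:
    1-service:
      loadBalancer:
        servers:
        # cross-namespace reference resolved via cluster DNS (SERVICE.NAMESPACE)
        - url: http://1-service.1-service.svc.cluster.local:80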