How to route 70% of traffic to an ExternalName service and append to the URL? - kubernetes

I want to route 70% of the traffic coming to service A to an external endpoint and append a path to the URL.
To achieve this I created an ExternalName service that points to the external endpoint, and then used the Traefik ingress controller to split the traffic by weight.
My service definition looks something like this:
---
apiVersion: v1
kind: Service
metadata:
  name: wensleydale
spec:
  ports:
  - name: http
    targetPort: 80
    port: 80
  selector:
    app: cheese
    task: wensleydale
---
kind: Service
apiVersion: v1
metadata:
  name: test-service
spec:
  type: ExternalName
  externalName: www.google.com
  ports:
  - name: http
    targetPort: 80
    port: 80
  selector:
    app: cheese
    task: test-service
Ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    traefik.ingress.kubernetes.io/service-weights: |
      test-service: 70%
      wensleydale: 30%
  name: cheese
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: test-service
          servicePort: 80
        path: /
      - backend:
          serviceName: wensleydale
          servicePort: 80
        path: /
What I want in addition is that when traffic goes to test-service, a path is appended: for test-service I want the URL to end up as something like www.google.com/something.
I'm open to using other tools to achieve this.

You can do the following:
Use the Istio Ingress Gateway instead of the Traefik gateway. The Istio Ingress Gateway is the recommended way to do ingress control in Istio. See https://istio.io/docs/tasks/traffic-management/ingress/
In the corresponding VirtualService, use the HTTPRewrite directive (https://istio.io/docs/reference/config/istio.networking.v1alpha3/#HTTPRewrite):
rewrite:
  uri: /something
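A rough sketch of how this could fit together in a VirtualService is below. The Gateway name is an assumption, the external host would additionally need a ServiceEntry, and note that an Istio rewrite applies to the whole HTTP route rule rather than to a single weighted destination:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: cheese
spec:
  hosts:
  - "*"
  gateways:
  - cheese-gateway           # assumed Gateway name
  http:
  - rewrite:
      uri: /something        # applies to the whole route rule, i.e. both destinations
    route:
    - destination:
        host: www.google.com # external host, registered via a ServiceEntry
        port:
          number: 80
      weight: 70
    - destination:
        host: wensleydale    # in-cluster service
        port:
          number: 80
      weight: 30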

Unfortunately you are hitting a limitation. The Traefik ingress docs state this condition on weighting: "The associated service backends must share the same path and host" (https://docs.traefik.io/user-guide/kubernetes/#traffic-splitting). So you can't rewrite the path for just one of the weighted targets. The limitation comes from https://github.com/kubernetes/kubernetes/issues/25485, so you can see the suggestions there, many of which mention Istio. (See also https://github.com/zalando/skipper/issues/324)
A simple solution might be to deploy another proxy into the cluster and use it to do the path rewrite for the backend whose path you can't change. Then your Ingress would be able to use the same path for both backends.
Another way would be to look at configuring a proxy using a conf file rather than ingress annotations. Configuration snippets may be enough to achieve this, but I am not sure. I suspect you'd be best off deploying an additional proxy, exposing it externally and configuring it directly (avoiding the Ingress abstraction), as sketched below.
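As a hedged sketch of that extra-proxy idea (all names here are illustrative, only the external host is taken from the question), an nginx ConfigMap could prepend /something before proxying, so the Ingress can keep the same path for both weighted backends:
apiVersion: v1
kind: ConfigMap
metadata:
  name: rewrite-proxy-config
data:
  default.conf: |
    server {
      listen 80;
      location / {
        # prepend /something to the original URI before forwarding
        rewrite ^/(.*)$ /something/$1 break;
        proxy_pass https://www.google.com;
        proxy_set_header Host www.google.com;
        proxy_ssl_server_name on;   # send SNI for the upstream TLS handshake
      }
    }
The Ingress would then split traffic between wensleydale and a ClusterIP Service in front of this proxy instead of the ExternalName service.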

Related

kubernetes - route ingress traffic to specific pod for some paths

I have multiple pods that scale up and down automatically.
I am using an ingress as the entry point. I need to route external traffic to a specific pod based on some condition (let's say the path). At the point the request is made, I am sure the specific pod is up.
For example, let's say I have the domain someTest.com, which normally routes traffic to pods 1, 2 and 3 (let's say I identify them by internal IPs - 192.168.1.10, 192.168.1.11 and 192.168.1.13).
When I call someTest.com/specialRequest/12, I need to route the traffic to 192.168.1.12, and when I call someTest.com/specialRequest/13, I want to route traffic to 192.168.1.13. For normal cases (someTest.com/normalRequest) I just want the load balancer to do its job normally.
If the pods scale up and 192.168.1.14 appears, I need to be able to call someTest.com/specialRequest/14 and be routed to that pod.
Is there any way I can achieve this?
Yes, you can easily achieve this using a Kubernetes Ingress. Here is some sample code that might help:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - host: YourHostName.com
    http:
      paths:
      - path: /
        backend:
          serviceName: Service1
          servicePort: 8000
      - path: /api
        backend:
          serviceName: Service2
          servicePort: 8080
      - path: /admin
        backend:
          serviceName: Service3
          servicePort: 80
Please note that the Ingress rules refer to serviceNames and not pod names, so you will have to create Services for your pods. Here is an example of a Service which exposes nginx in Kubernetes:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    io.kompose.service: nginx
spec:
  ports:
  - name: "80"
    port: 80
    targetPort: 80
  selector:
    io.kompose.service: nginx
I am not aware of built-in functionality to implement this (if this is really what you want). You could achieve it by building your own operator for Kubernetes. Your operator may provision a Pod+Ingress combo which will do exactly what you want (forward your traffic to a single pod), or you can provision 2 pods and 1 ingress to achieve an HA setup.
Depending on the Ingress you are using, it may also be possible to group multiple ingress resources under the same load balancer.
Here is a brief diagram of how this could look.
Would it be feasible to create another application that can take the path and target the pod directly via a pattern in the naming convention? For example ${podnamePrefix+param}.${service name}.${namespace}.svc.cluster.local
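Regarding the naming-convention idea in the comment above: that per-pod DNS pattern is roughly what a StatefulSet combined with a headless Service gives you out of the box, where each pod is reachable as <pod-name>.<service-name>.<namespace>.svc.cluster.local (for example web-0.web-headless.default.svc.cluster.local). A minimal sketch with illustrative names:
apiVersion: v1
kind: Service
metadata:
  name: web-headless
spec:
  clusterIP: None        # headless: DNS resolves to the individual pod IPs
  selector:
    app: web
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web-headless
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80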

Google Kubernetes Engine: How to define one Ingress for multiple namespaces?

On GKE, a K8s Ingress is backed by a Compute Engine load balancer, which has some cost. For example, for 2 months I paid €16.97.
In my cluster I have 3 namespaces (default, dev and prod), so to reduce cost I would like to avoid spawning 3 load balancers. The question is how to configure the current one to point to the right namespace.
GKE requires the Ingress's target Service to be of type NodePort, and I am stuck because of that constraint.
I would like to do something like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: dev
  annotations: # SSL certificate activation
    kubernetes.io/ingress.global-static-ip-name: lb-ip-adress
spec:
  rules:
  - host: dev.domain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: dev-service # This is the current case, 'dev-service' is a NodePort
          servicePort: http
  - host: domain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: prod-service # This service lives in the 'dev' namespace and is of type ExternalName. Its final purpose is to point to the real target service living in the 'prod' namespace.
          servicePort: http
  - host: www.domain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: prod-service
          servicePort: http
As GKE requires the Service to be of type NodePort, I am stuck with prod-service.
Any help will be appreciated.
Thanks a lot
OK, here is what I have been doing. I have only one Ingress, with one backend Service pointing to nginx.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  backend:
    serviceName: nginx-svc
    servicePort: 80
And in your nginx Deployment you can mount a ConfigMap with a typical nginx configuration. This way you use one Ingress and target multiple namespaces.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  default.conf: |
    server {
      listen 80;
      listen [::]:80;
      server_name _;
      location / {
        add_header Content-Type text/plain;
        return 200 "OK.";
      }
      location /segmentation {
        proxy_pass http://myservice.mynamespace.svc.cluster.local:80;
      }
    }
And the Deployment will use the above nginx config via the ConfigMap:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # podAntiAffinity will not let two nginx pods run on the same node machine
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: kubernetes.io/hostname
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-configs
          mountPath: /etc/nginx/conf.d
        livenessProbe:
          httpGet:
            path: /
            port: 80
      # Load the configuration files for nginx
      volumes:
      - name: nginx-configs
        configMap:
          name: nginx-config
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - protocol: "TCP"
    nodePort: 32111
    port: 80
This way you can take advantage of Ingress features like TLS/SSL termination (managed by Google or by cert-manager), and if you want you can also keep your more complex configuration inside nginx.
Use @Prata's way but with one change: do not route prod traffic via nginx, route it directly to the service from the load balancer, and use nginx only for non-prod traffic (e.g. staging).
The reason is that the Google HTTPS load balancer uses container-native load balancing (Link), which routes traffic directly to healthy pods, saving hops and being more efficient, so why not use it for production?
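For reference, container-native load balancing is requested per Service with the NEG annotation on GKE; the Service and selector names below are illustrative:
apiVersion: v1
kind: Service
metadata:
  name: prod-service
  namespace: prod
  annotations:
    cloud.google.com/neg: '{"ingress": true}'   # let the GKE Ingress route via NEGs straight to pods
spec:
  type: ClusterIP
  selector:
    app: prod-app
  ports:
  - port: 80
    targetPort: 8080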
One alternative (and probably the most flexible GCP-native) solution for HTTP(S) load balancing is to use standalone NEGs. This requires you to set up all parts of the load balancer yourself (URL maps, health checks, etc.).
There are multiple benefits, such as:
One load balancer can serve multiple namespaces
The same load balancer can integrate other backends as well (like instance groups outside your cluster)
You can still use container-native load balancing
One challenge of this approach is that it is not "GKE native", which means the routes will still exist even if you delete the underlying Service. This approach is therefore best maintained through tools like Terraform, which give you holistic control of the GCP deployment.
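Standalone NEGs are requested with a slightly different form of the same annotation; GKE then only manages the NEG itself, and you attach it to backend services, URL maps and health checks that you create yourself (names below are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: prod-service
  namespace: prod
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80": {}}}'   # create a standalone NEG for port 80
spec:
  type: ClusterIP
  selector:
    app: prod-app
  ports:
  - port: 80
    targetPort: 8080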

Best approach to expose kubernetes ingress?

I'm currently using an NGINX ingress to expose my apps to the outside, and my approach is as follows. My question is: is this the best way to do it, and if not, what would be the best practice?
nginx ingress controller service:-
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http
    protocol: "TCP"
  - port: 443
    name: https
    protocol: "TCP"
  selector:
    app: nginx-ingress-lb
ingress:-
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: app-ingress
spec:
  tls:
  - hosts:
    - myservice.example.com
    secretName: sslcerts
  rules:
  - host: myservice.example.com
    http:
      paths:
      - backend:
          serviceName: myservice
          servicePort: 80
        path: /
So basically I have my nginx ingress controller pods running and exposed via a Service of type LoadBalancer, and I have rules defined to determine the routes.
The best approach depends on the environment in which you choose to build your cluster, i.e. whether you use a cloud provider or a bare-metal solution.
In your example, I guess you are using a cloud provider as the load balancer provisioner, which delivers an external IP address as the entry point for your NGINX Ingress Controller. Therefore, you have a good ability to scale your Ingress on demand with various features and options.
I found this article very useful for comparing implementations of the NGINX Ingress Controller in cloud and bare-metal environments.
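For the bare-metal case, where a LoadBalancer Service usually cannot be provisioned automatically, one common pattern is to expose the controller through a NodePort Service (or via MetalLB) and put your own load balancer or DNS in front of the nodes. A sketch reusing the selector from the question, with illustrative node ports:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort             # reachable on every node at the ports below
  selector:
    app: nginx-ingress-lb
  ports:
  - name: http
    port: 80
    nodePort: 30080
  - name: https
    port: 443
    nodePort: 30443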

Kubernetes Ingress Path only works with /

I have configured a Kubernetes Ingress, but it only works when the path is /.
I have tried all manner of different values for the path, including:
/*
/servicea
/servicea/
/servicea/*
This is my ingress configuration (which works):
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: boardingservice
    annotations:
      ingress.kubernetes.io/rewrite-target: /
  spec:
    rules:
    - host: my.url.com
      http:
        paths:
        - path: /
          backend:
            serviceName: servicea-nodeport
            servicePort: 80
This is my nodeport service
- apiVersion: v1
  kind: Service
  metadata:
    name: servicea-nodeport
  spec:
    type: NodePort
    ports:
    - port: 80
      targetPort: 8081
      nodePort: 30124
    selector:
      app: servicea
And this is my deployment
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: servicea
  spec:
    replicas: 1
    template:
      metadata:
        name: servicea
        labels:
          app: servicea
      spec:
        containers:
        - image: 350329402011.dkr.ecr.eu-west-2.amazonaws.com/servicea
          name: servicea
          ports:
          - containerPort: 8080
            protocol: TCP
        - image: 350329402011.dkr.ecr.eu-west-2.amazonaws.com/serviceb
          name: serviceab
          ports:
          - containerPort: 8081
            protocol: TCP
If the path is /, then I can do this: http://my.url.com/api/ping.
But as I will have multiple services, I want to do this: http://my.url.com/servicea/api/ping; however, when I set the path to /servicea I get a 404.
I am running Kubernetes on AWS with an ingress-nginx ingress controller.
Any ideas?
You are not using Kubernetes Pods as they are intended to be used. A Pod
contains one or more application containers which are relatively tightly coupled — in a pre-container world, they would have executed on the same physical or virtual machine.
If you have two applications, servicea and serviceb, they should be running on different Pods: one pod for servicea and another one for serviceb. This has many benefits: you can deploy them separately, scale them independently, etc.
As the docs say
A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources.
These Pods can be created using Deployments, as you were already doing. That's fine and recommended.
Once you have the Deployments running, you'd create a different Service that would balance traffic between all the Pods for a given Deployment.
And finally, you want to hit servicea or serviceb depending on the request URL. That can be done with an Ingress, as you were trying, but mapping each path to a different Service. For example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my.url.com
    http:
      paths:
      - path: /servicea
        backend:
          serviceName: servicea
          servicePort: 80
      - path: /serviceb
        backend:
          serviceName: serviceb
          servicePort: 80
That way, requests going to your ingress controller using the /servicea path would be served by the Pods behind the servicea Service. And requests going to your ingress controller using the /serviceb path would be served by the Pods behind the serviceb Service.
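For completeness, a minimal sketch of the split for servicea (serviceb would mirror it; the image is taken from the question, other values are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: servicea
spec:
  replicas: 1
  selector:
    matchLabels:
      app: servicea
  template:
    metadata:
      labels:
        app: servicea
    spec:
      containers:
      - name: servicea
        image: 350329402011.dkr.ecr.eu-west-2.amazonaws.com/servicea
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: servicea
spec:
  selector:
    app: servicea
  ports:
  - port: 80          # matches the servicePort used in the Ingress
    targetPort: 8080  # the container port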
For anyone reading this, my configuration was correct (even though unorthodox, as pointed out by fiunchinho); the error was in my Spring Boot applications that were running in the containers. I needed to change the context paths to match the Ingress path. I could, of course, have changed the @GetMapping and @PostMapping methods in my Spring controller, but I opted to change the context path.
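If you take the context-path route, a Spring Boot application.yml along these lines is usually all that is needed; the prefix is illustrative and must match the Ingress path (for Spring Boot 1.x the property is server.context-path instead):
# application.yml: serve all endpoints under the /servicea prefix
server:
  servlet:
    context-path: /servicea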

Kubernetes Service pointing to External Resource

We have an existing website, let's say example.com, which is a CNAME for where.my.server.really.is.com.
We're now developing new services using Kubernetes. Our first service, /login, is ready to be deployed. Using a mock HTML server I've been able to deploy two pods with separate services that map to example.com and example.com/login.
What I would like to do is get rid of my mock HTML server and provide a service inside the cluster that points to our full website outside of the cluster. Then I can change the DNS for example.com to point to our Kubernetes cluster and people will still get the main site from where.my.server.really.is.com.
We are using Traefik for ingress, and these are the changes I've made to the config for the website:
---
kind: Service
apiVersion: v1
metadata:
  name: wordpress
spec:
  type: ExternalName
  externalName: where.my.server.really.is.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wordpress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  backend:
    serviceName: wordpress
    servicePort: 80
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: wordpress
          servicePort: 80
Unfortunately, when I visit example.com, rather than getting where.my.server.really.is.com, I get a 503 with the body "Service Unavailable". example.com/login works as expected.
What have I missed?
Following the Traefik documentation on using ExternalName:
When specifying an ExternalName, Træfik will forward requests to the given host accordingly and use HTTPS when the Service port matches 443.
This still requires setting up a proper port mapping on the Service from the Ingress port to the (external) Service port.
I believe you are missing the ports configuration of the Service. Something like:
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  ports:
  - name: http
    port: 80
  type: ExternalName
  externalName: where.my.server.really.is.com
You can see a full example in the docs.