I have the following situation and I am not sure if it is possible to achieve it using Kubernetes only.
There are two pods: one with the live service, available at e.g. givemedata.example.com, and a temporary one at givemedata2.example.com. The temporary one has no dependencies and is there only so that some long-running jobs can continue while the real one is down for maintenance.
The cluster is on GKE.
I need to set up failover so that if the live one is down for maintenance, calls are redirected to the temporary one.
The main reason is to expose only one endpoint, e.g. givemedata.example.com.
Is this possible, and how do I do it?
You can install the NGINX ingress controller and add the
nginx.ingress.kubernetes.io/server-snippet: annotation to your ingress resource, with a block of configuration that checks whether the primary backend is available and, if not, redirects traffic to the other host.
Thanks to this annotation it is possible to add custom configuration to the server configuration block.
Read more: server snippet.
Note that the ingress controller injects the snippet into the nginx.conf it manages under /etc/nginx inside the controller pod; you do not edit that file by hand.
Here is an example configuration (this one redirects based on user agent; adapt the condition to your failover check):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      set $agentflag 0;
      if ($http_user_agent ~* "(Mobile)" ){
        set $agentflag 1;
      }
      if ( $agentflag = 1 ) {
        return 301 https://givemedata2.example.com;
      }
spec:
  rules:
  - host: givemedata.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service-name
          servicePort: service-port
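For the failover scenario itself, a hedged sketch is below: instead of matching the user agent, it intercepts upstream errors and redirects the client to the temporary host. The host names are taken from the question; whether a 302 redirect is acceptable for your long-running jobs is an assumption, so treat this as a starting point rather than a verified configuration:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-failover-ingress
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/configuration-snippet: |
      # Let nginx act on upstream error codes instead of passing them through
      proxy_intercept_errors on;
      # If the live backend is down, 302-redirect the client to the
      # temporary host, preserving the requested URI
      error_page 502 503 504 https://givemedata2.example.com$request_uri;
spec:
  rules:
  - host: givemedata.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service-name
          servicePort: service-port

With this in place clients keep calling givemedata.example.com and are only bounced to givemedata2.example.com while the live service is unavailable.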
Related
Hi all, we have a Flink application with blue-green deployment, which we get using the Flink operator (Flinkk8soperator for Apache Flink). The operator spins up the following three K8s services after a deployment:
my-flinkapp-14hdhsr (Top level service)
my-flinkapp-green
my-flinkapp-blue
The idea is that one of the two (blue or green) would be active and would have pods.
A selector for the active one would be stored in the top-level my-flinkapp-14hdhsr service, with the selector flink-application-version=blue (or green), as follows:
Labels:      flink-app=my-flinkapp
             flink-app-hash=14hdhsr
             flink-application-version=blue
Annotations: <none>
Selector:    flink-app=my-flinkapp,flink-application-version=blue,flink-deployment-type=jobmanager
I have an ingress defined as follows, which I want to point at the top-level service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: flink-secure-ingress-my-flink-app
  namespace: happy-flink-flink
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header Accept-Encoding "";
      sub_filter '<head>' '<head> <base href="/happy-flink-ui/">';
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
spec:
  rules:
  - host: flinkui-myapp.foo.com
    http:
      paths:
      - path: /happy-flink-ui(/|$)(.*)
        backend:
          serviceName: my-flinkapp-14hdhsr  # This works but....
          servicePort: 8081
The issue I am facing is that the top-level service keeps changing the hash at the end, as the Flink operator changes it on every deployment, e.g. my-flinkapp-89hddew, etc.
So I cannot have a static service name in the ingress definition.
So I am wondering if an ingress can choose a service based on a selector, or on a regular expression of the service name that accounts for the top-level app service name plus the hash at the end.
The flink-app-hash (i.e. the hash part of the service name, 14hdhsr) is also part of the labels in the top-level service. Is there any way I could leverage that?
Wondering if a default backend could be applied here?
Have folks using the Flink operator solved this a different way?
Unfortunately, it is not possible out of the box to use any kind of regex/wildcards/jsonpath/variables/references inside .backend.serviceName in an Ingress resource.
You are able to use regex only in path: Ingress Path Matching.
And it's not going to be implemented soon: Allow variable references in backend spec has been stalled for a long time.
Would be very interesting to hear any possible solution for this. It was previously discussed on Stack Overflow, however without any progress: https://stackoverflow.com/a/60810435/9929015
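One workaround worth considering, given the question states that only one color has running pods at a time: create your own Service with a stable name whose selector uses only the labels that stay constant across deployments (dropping flink-application-version and the hash), and point the ingress at that. A minimal sketch, with label values assumed from the question; note that during an actual blue/green switchover both colors may briefly have pods, so this only works if that overlap is acceptable:

apiVersion: v1
kind: Service
metadata:
  # A stable name under your control, so the ingress never changes
  name: my-flinkapp-ui
  namespace: happy-flink-flink
spec:
  selector:
    # Only the labels that survive operator deployments; whichever
    # color currently has jobmanager pods will be selected
    flink-app: my-flinkapp
    flink-deployment-type: jobmanager
  ports:
  - port: 8081
    targetPort: 8081

The ingress backend would then reference serviceName: my-flinkapp-ui, which stays constant no matter what hash the operator generates.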
I am fairly new to Kubernetes and have just deployed my first cluster to IBM Cloud. When I created the cluster, I got a dedicated ingress subdomain, which I will refer to as <long-k8subdomain>.cloud for the scope of this post. This subdomain works for my app. For example, <long-k8subdomain>.cloud/ping works from my browser/curl just fine: I get the expected JSON response back.
But if I add this subdomain to a CNAME record in my domain provider's DNS settings (I have used Bluehost and IBM Cloud Internet Services), I get a 404 response back from all routes. This response is the default nginx 404 (it says "nginx" under "404 Not Found"), which I believe means the ingress load balancer is being reached but the request does not get routed correctly.
I am using Kubernetes version 1.20.12_1561 on VPC gen 2, and this is my ingress-config.yaml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress-resource
  annotations:
    kubernetes.io/ingress.class: "public-iks-k8s-nginx"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Host: <long-k8subdomain>.cloud";
spec:
  rules:
  - host: <long-k8subdomain>.cloud
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service-name
            port:
              number: 80
I am pretty sure this problem is due to the annotations. Maybe I am using the wrong ones, or I do not have enough. Ideally, I would like something like api.<my-domain>.com/ to route correctly. I have also read a little bit about default backends, but I have not dived too much into that just yet. Any help would be greatly appreciated, as I have spent multiple hours trying to fix this.
Some sources I have used:
https://cloud.ibm.com/docs/containers?topic=containers-cs_network_planning
https://cloud.ibm.com/docs/containers?topic=containers-ingress-types
https://cloud.ibm.com/docs/containers?topic=containers-comm-ingress-annotations#annotations
Note: The reason I have the second annotation is that, for some reason, requests without that header were not being routed correctly. That was part of my debugging process, and I left the annotation in for now since I am not sure whether it is what fixed that.
For the NGINX ingress controller to route requests for your own domain's CNAME record to the service instead of the IBM Cloud one, you need a rule in the ingress where the host identifies your domain.
For instance, if your domain's DNS entry is api.example.com, then change the resource YAML to:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress-resource
  annotations:
    kubernetes.io/ingress.class: "public-iks-k8s-nginx"
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service-name
            port:
              number: 80
You should not need the second annotation for this to work.
If you want both of the hosts to work, then you could add a second rule instead of replacing host in the existing one.
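A hedged sketch of that two-rule variant (service name carried over from the question):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress-resource
  annotations:
    kubernetes.io/ingress.class: "public-iks-k8s-nginx"
spec:
  rules:
  # Your own domain
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service-name
            port:
              number: 80
  # The IBM-provided ingress subdomain keeps working too
  - host: <long-k8subdomain>.cloud
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service-name
            port:
              number: 80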
So we have a SQL Server deployment with replicas=2 in K8s which I need to make load balanced. I'm using the HAProxy ingress controller to achieve this goal, but I'm stuck configuring HAProxy. I'm trying to configure HAProxy based on this link, and I don't know how to present my two pods to the ingress!
There is a part of the link that says:
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: default
spec:
  rules:
  - host: foo.bar
    http:
      paths:
      - path: /
        backend:
          serviceName: app
          servicePort: 80
The issue is that in the spec.rules.host section I don't have any domain; rather, I have two IPs belonging to my SQL pods! How am I supposed to present my pods to the ingress? Am I doing this right?
I've looked it up a lot, but no luck!
P.S.: What is the best practice for load balancing SQL Server?
A DNS system translates hostnames to IPs. If you don't have a domain registered with a DNS system, you can add a mapping of hostname to IP in the /etc/hosts file of the system from which you want to access the hostname.
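For example, assuming the ingress controller is reachable at 203.0.113.10 (a placeholder IP), the mapping for the host used in the example ingress above would be:

# /etc/hosts on the client machine; the IP is a placeholder for
# your ingress controller's external address
203.0.113.10   foo.bar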
I have a Kubernetes ingress that I want to be the default for all paths on a set of hosts, provided there is not a more specific match:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: default-ing
spec:
  rules:
  - host: host1.sub.example.com
    http:
      paths:
      - backend:
          serviceName: my-default-service
          servicePort: http
        # Note: here we specify the root path intended as a default
        path: /
      - backend:
          serviceName: my-default-service
          servicePort: http
        path: /route/path/to/default
A second ingress defines a custom service for a specific path:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: special-ing
spec:
  rules:
  - host: host1.sub.example.com
    http:
      paths:
      - backend:
          serviceName: special-service
          servicePort: http
        path: /special
I would expect that the order of adding/deleting the ingresses would not matter, or at least that I would have some way of indicating that the path: / in default-ing is always to be ordered last.
When I try the above, the routing is fine as long as I add special-ing before default-ing (or alternatively, add default-ing, then the special-ing, then delete default-ing and re-add it again). When I add them as default-ing, then special-ing, requests to /special are routed to my-default-service instead of special-service.
I want the order of adding/deleting to be independent of the routing that is generated by nginx-ingress-controller, so that my kubectl manipulations are more robust, and if one of the ingresses is recreated nothing will break.
I'm using nginx-ingress-controller:0.19.0
Thanks for any help you can offer!
The short answer is no. I believe your configs should be disallowed by the nginx ingress controller, or at least documented somewhere. Basically, when you have two host rules with the same value (host1.sub.example.com), one overwrites the other in the server {} block of the nginx.conf that your nginx ingress controller manages.
So if you add default-ing before special-ing, then special-ing will be the actual config. When you add special-ing before default-ing, then default-ing will be your only config, and special-ing is not supposed to work at all.
Add special-ing, and the configs look something like this:
server {
    server_name host1.sub.example.com;
    ...
    location /special {
        ...
    }
    location / { # default backend
        ...
    }
    ...
}
Now add default-ing, and the configs will change to something like this:
server {
    server_name host1.sub.example.com;
    ...
    location /route/path/to/default {
        ...
    }
    location / { # default backend
        ...
    }
    ...
}
If you add them the other way around, the config will end up looking like the first block again.
You can find more by shelling into your nginx ingress controller pod and looking at the nginx.conf file.
$ kubectl -n <namespace> exec -it nginx-ingress-controller-pod sh
# cat /etc/nginx/nginx.conf
Update 03/31/2022:
It seems like on newer nginx ingress controller versions, all the rules with the same host get merged into a single server block in the nginx.conf.
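In that case the generated config would look roughly like this sketch, combining both ingresses above:

server {
    server_name host1.sub.example.com;
    ...
    location /special {
        ...
    }
    location /route/path/to/default {
        ...
    }
    location / { # default backend
        ...
    }
    ...
}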
Situation:
I want many customers to share a common set of public IPs to access the Kubernetes cluster.
Hostname-based routing within the cluster is done. But I want to provide HTTPS for all my customers' domains.
I have a set of edge-router nodes with one public IP each. There's a Traefik ingress controller configured as a DaemonSet listening on these nodes.
Let's suppose there can be thousands of customers with thousands of domains.
My problem is that I want to have multiple acme sections.
Extracted from a ConfigMap in my ingress controller manifest:
[acme]
email = "ca@mycompany.com"
storage = "/etc/traefik/acme.json"
entryPoint = "https"
onHostRule = true
caServer = "https://acme-v02.api.letsencrypt.org/directory"

[[acme.domains]]
main = "mycustomer1.com"

[acme.httpChallenge]
entryPoint = "http"
My ideal solution would be to have a way to split each customer's HTTPS configuration into separate files, each with its own acme settings.
Or, even better, to have a way to configure this from the ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: garden
  annotations:
    kubernetes.io/ingress.class: traefik
    #
    # LET'S ENCRYPT CONFIGURATION COULD BE HERE.
    # THAT WAY IT WOULD BE EASY TO CONFIGURE HTTPS FOR EACH CUSTOMER.
    #
spec:
  rules:
  - host: mycustomer1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: backend
          servicePort: 80
Is there any way to achieve this?
I would suggest creating a separate kind: Ingress for each customer and managing them individually. You will then have the possibility to use a dedicated ConfigMap for each ingress class.
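A minimal sketch of that per-customer approach (names are placeholders; the annotation is carried over from the question):

# One Ingress per customer, each managed independently
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-mycustomer1
  namespace: garden
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: mycustomer1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: backend-mycustomer1   # hypothetical per-customer backend
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-mycustomer2
  namespace: garden
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: mycustomer2.com
    http:
      paths:
      - path: /
        backend:
          serviceName: backend-mycustomer2   # hypothetical per-customer backend
          servicePort: 80

With onHostRule = true in the acme section, Traefik can then issue a certificate per host rule, while each customer's routing stays in its own manifest.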