I have an ingress controller answering to *.mycompany.com. Is it possible, whenever I define Ingress resources in my cluster, not to specify the whole thing (some-service.mycompany.com) but only the some-service part?
That way, if I changed the domain name later on (or, for example, in my test environment), I wouldn't need to specify some-service.test.mycompany.com; things would magically work without changing anything.
For most Ingress Controllers, if you don't specify the hostname, that Ingress becomes the default for any request that doesn't match another rule. The only downside is that you can only have one of these (per controller, depending on your layout).
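A hostless rule looks roughly like this (service name and port are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: some-service
spec:
  rules:
    # No "host:" field here, so this rule catches any hostname
    # that no other Ingress claims.
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: some-service
                port:
                  number: 80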
The configuration you want to achieve is possible in plain nginx, but it cannot be applied in nginx-ingress because of the validation of the host: field. For example, trying a regex in host: is rejected:
"ingress-redirect" is invalid: spec.rules[0].host: Invalid value: "~^subdomain\\..+\\..+$;": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.',.
It must start and end with an alphanumeric character (e.g. example.com, regex used for validation is [a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
whereas plain nginx syntax allows wildcards like this: server_name mail.*;
It would work if you put the whole configuration into an http-snippet, but that is a lot of manual work (which you want to avoid).
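Roughly, that would mean hand-maintaining server blocks in the controller's ConfigMap yourself; a sketch of the idea (ConfigMap name, namespace, and upstream are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # http-snippet injects raw nginx config into the http block;
  # every service would need its own hand-written server block.
  http-snippet: |
    server {
      listen 80;
      server_name some-service.*;
      location / {
        proxy_pass http://some-service.default.svc.cluster.local;
      }
    }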
I think the better solution is to use Helm. Put the Ingress YAML into a Helm chart and use a value such as .Values.my_domain in the template to build the Ingress object. Then all you need to run is:
helm install ingress-chart-name --set my_domain=example.com
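A minimal sketch of how the templated Ingress inside such a chart might look (the value name and service names are illustrative):

# templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: some-service
spec:
  rules:
    # The hostname is assembled from the service name and the
    # domain passed in via --set my_domain=...
    - host: some-service.{{ .Values.my_domain }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: some-service
                port:
                  number: 80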
I have installed NGINX on Azure AKS, as default, from the repository, and set it up to handle HTTP and TCP traffic inside the namespace where the controller and services are installed. URLs are mapped to internal services. This works fine.
I then created another namespace and installed the same application with the same named services, but, again, in a different namespace. The installation seems to work.
I then tried to install another NGINX controller, this time in the new namespace, to control the services located there.
I used Helm and added --set controller.ingressClass="custom-class-nginx" and --set controller.ingressClassResource.name="custom-class-nginx" to the helm upgrade --install command line.
I also changed the rules configuration to use "custom-class-nginx" as the ingressClassName value.
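For reference, the rule change described above amounts to something like this (host, namespace, and service names are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: second-namespace
spec:
  # Must match the class name the second controller was installed with
  ingressClassName: custom-class-nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80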
I now see both ingresses, each in its own namespace.
The first instance, installed as default with class name "nginx" works fine.
The second instance does not load balance, and I get the NGINX 404 error when I try to go to the configured URLs.
Also, when I look at the logs of the 1st (default) and 2nd (custom) controllers in K9s, I see they both show events from the 1st controller. Yes, even in the custom controller's logs.
What am I missing? What am I doing wrong? I have read as much as I could, and it is supposed to be easy.
What configuration am I missing? What do I need to make the 2nd controller respond to the URL coming in and move traffic?
Thanks in advance.
Moshe
I have deployed Ambassador Edge Stack and I am using Host and Mapping resources to route my traffic. I want to implement the Mapping in such a way that if there is any double slash in the path, one slash is removed from it, using regex (or any other available way).
For example, if a client requests https://a.test.com//testapi, I want it to become https://a.test.com/testapi.
I searched through the Ambassador documentation but I was unable to find anything that could help.
Thank You
There is the Module resource for Emissary-ingress.
If present, the Module defines system-wide configuration. This module can be applied to any Kubernetes service (the ambassador service itself is a common choice). You may very well not need this Module. To apply the Module to an Ambassador Service, it MUST be named ambassador, otherwise it will be ignored. To create multiple ambassador Modules in the same namespace, they should be put in the annotations of each separate Ambassador Service.
You should add this to the module's yaml file:
spec:
  ...
  config:
    ...
    merge_slashes: true
If true, Emissary-ingress will merge adjacent slashes for the purpose of route matching and request filtering. For example, a request for //foo///bar will be matched to a Mapping with prefix /foo/bar.
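Put together, a Module manifest with that setting might look roughly like this (the apiVersion may differ depending on your Emissary/Ambassador version, and the namespace is illustrative):

apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  # Must be named "ambassador" or it will be ignored
  name: ambassador
  namespace: ambassador
spec:
  config:
    # Collapse adjacent slashes (//foo///bar matches prefix /foo/bar)
    merge_slashes: true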
Is it possible to run multiple IngressControllers in the same namespace with the same IngressClass?
I have multiple IngressControllers with different LoadBalancer IP addresses and would like to continue with this setup.
I upgraded the first IngressController to the latest version.
Updating the second/third/.. IngressController fails because of:
rendered manifests contain a resource that already exists. Unable to continue with update: IngressClass "nginx" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "nginx-ingress-lb-02": current value is "nginx-ingress-lb-01"
Any idea how to fix this?
The issue you mention here is mainly with Helm, which prevents you from overwriting a resource (your IngressClass) that belongs to another Helm release.
One way to work around this may be to use Helm's --dry-run option. Once you have the list of objects written into a file, remove the IngressClass, then apply that file.
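A rough sketch of that workaround (release, chart, and file names are illustrative; helm template prints the same rendered manifests as a dry-run install, without the release metadata):

# Render the manifests without installing anything
helm template nginx-ingress-lb-02 ingress-nginx/ingress-nginx > rendered.yaml

# Edit rendered.yaml and delete the IngressClass object, then apply the rest
kubectl apply -f rendered.yaml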
Another way may be to patch the chart deploying your controller. As a contributor to the Traefik helm chart, I know that we would install IngressClasses named after the Traefik deployment we operate. The chart you're using, for Nginx, apparently does not implement support for that scenario. Which doesn't mean it shouldn't work.
Now, answering your first question (is it possible to run multiple IngressControllers in the same namespace with the same IngressClass): yes.
You may have several Ingress Controllers, one that watches for Ingresses in namespace A, another in namespace B, both Ingresses referencing the same class. Deploying those Ingresses into the same namespace is possible, although implementing NetworkPolicies and isolating your controllers into their own namespaces would help in distinguishing who's who.
An option that works for me, when deploying multiple ingress controllers with Helm, is setting controller.ingressClassResource.enabled: false in every Helm deployment except the first.
The comments in the default values.yaml aren't entirely clear on this, but after studying the chart I found that controller.ingressClassResource.enabled is only evaluated to determine whether or not to create the IngressClass, not whether or not to attach the controller.ingressClassResource.controllerValue to the controller. (This is true at least for helm-chart-4.0.13).
So, for the first Helm deployment, if you don't override any of the default controller.ingressClassResource settings, the following values will be used to create the IngressClass and attach the controllerValue to the controller:
controller:
  ingressClassResource:
    name: nginx
    enabled: true
    default: false
    controllerValue: "k8s.io/ingress-nginx"
For all other controllers that you want to run with the same IngressClass, simply set controller.ingressClassResource.enabled: false, to prevent Helm from trying (and failing) to create the IngressClass again.
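For example, the second and later releases might be installed roughly like this (release name, namespace, and IP are illustrative):

helm upgrade --install nginx-ingress-lb-02 ingress-nginx/ingress-nginx \
  --namespace ingress \
  --set controller.ingressClassResource.enabled=false \
  --set controller.service.loadBalancerIP=203.0.113.12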
In one of my HTTP(S) LoadBalancers, I want to change my backend configuration to increase the timeout from 30s to 60s (we have a few 502s with no server-side logs, and I want to check whether they come from the LB).
But when I validate the change, I get an error saying:
Invalid value for field 'namedPorts[0].port': '0'. Must be greater than or equal to 1
even though I didn't change the namedPort.
This issue seems to be the same, but the only solution is a workaround that does not work in my case:
Thanks for your help,
I faced the same issue and #tmirks's fix didn't work for me.
After experimenting with GCE for a while, I realised that the issue is with the service.
By default all services are type: ClusterIP unless you specify otherwise.
Long story short, if your service isn't exposed as type: NodePort, the GCE load balancer won't route traffic to it.
From the official Kubernetes project:
nodeport is a requirement of the GCE Ingress controller (and cloud controllers in general). "On-prem" controllers like the nginx ingress controllers work with clusterip:
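A minimal sketch of the kind of Service the GCE Ingress controller expects as a backend (names and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  # NodePort is required for the GCE Ingress controller;
  # ClusterIP-only services won't receive load balancer traffic.
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080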
I'm sure the OP has resolved this by now, but for anyone else pulling their hair out, this might work for you:
There's a bug of sorts in the GCE Load Balancer UI. If you add an empty frontend IP/Port combo by accident, it will create a named port in the Instance Group called port0 with a value of 0. You may not even realize this happened because you won't see the empty frontend mapping in the console.
To fix the problem, edit your instance group and remove port0 from the list of port name mappings.
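If you'd rather fix it from the command line, something along these lines should work (group name, zone, and ports are illustrative; list the current mappings first, then set them again without port0):

# Show the current named ports on the instance group
gcloud compute instance-groups get-named-ports my-instance-group --zone us-central1-a

# Re-set the named ports, leaving port0 out
gcloud compute instance-groups set-named-ports my-instance-group \
  --zone us-central1-a \
  --named-ports http:80,https:443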
After many different attempts I simply deleted the Ingress object and recreated it, and the problem went away. There must be a bug somewhere that leaves artifacts when an Ingress is updated.
I have a traefik.toml file defined as part of my traefik configmap. The snippet below is the kubernetes endpoint configuration with a labelselector defined:
[kubernetes]
labelselector = "expose=internal"
When I check the traefik status page in this configuration, I see all ingresses listed, not just those with the label "expose: internal" defined.
If I set the kubernetes.labelselector as a container argument to my deployment, however, only the ingresses with the matching label are displayed on the traefik status page as expected:
- --kubernetes.labelselector=expose=internal
According to the Kubernetes Ingress Backend documentation, any label selector format that is valid in the label selector section of the Labels and Selectors documentation should be valid in the traefik.toml file. I have experimented with both the equality-based form (as shown above) and the set-based form (to check only whether the "expose" label is present), neither of which has worked in the TOML. The set-based form does not seem to work in the container args either, but the equality statements do.
I'm assuming there is some issue with how I've formatted the Kubernetes endpoint within the traefik.toml file. Before reporting this issue on GitHub, I was hoping someone could clarify the documentation and/or correct any mistakes I've made in the TOML file format.
As you have already found out, not passing --kubernetes makes things work for you. The reason is that this parameter not only enables the Kubernetes provider but also sets all defaults. As documented, the command-line arguments take precedence over the configuration file; thus, any non-default Kubernetes parameters specified in the TOML file will be overridden by the default values implied through --kubernetes. This is intended (albeit not ideally documented) behavior.
You can still mix and match both command-line and TOML configuration parameters for Kubernetes (or any other provider, for that matter) by omitting --kubernetes. For instance, you could have your example TOML file
[kubernetes]
labelselector = "expose=internal"
and then invoke Traefik like
./traefik --configfile=config.toml --kubernetes.namespaces=other
which will cause Traefik to use both the custom label selector expose=internal and watch the namespace other.
I have submitted a PR to clarify the behavior of the command-line provider-enabling parameters with regards to the provider default values.
Indeed, flags take precedence over toml config. It's documented here http://docs.traefik.io/basics/#static-trfik-configuration :)
The problem appears to be the mixing and matching of command line arguments and toml options.
After reading over a few bug reports and some additional documentation, I realized that we had enabled the kubernetes backend by passing the --kubernetes argument to the traefik container, and that defining [kubernetes] in the toml was also enabling the kubernetes backend. On a hunch, I removed the command-line argument, put the full kubernetes backend config in the toml, and everything works as expected.
I'm not sure if this is expected behavior or not but this behavior would seem to suggest it's designed in such a way that command line arguments take precedence over toml config options when duplicate options are provided.