I have a traefik.toml file defined as part of my traefik configmap. The snippet below is the kubernetes endpoint configuration with a labelselector defined:
[kubernetes]
labelselector = "expose=internal"
When I check the traefik status page in this configuration, I see all ingresses listed, not just those with the label "expose: internal" defined.
If I set the kubernetes.labelselector as a container argument to my deployment, however, only the ingresses with the matching label are displayed on the traefik status page as expected:
- --kubernetes.labelselector=expose=internal
According to the Kubernetes Ingress Backend documentation, any label selector format that is valid per the Labels and Selectors section of the Kubernetes documentation should also be valid in the traefik.toml file. I have experimented with both the equality-based form (as shown above) and the set-based form (checking only that the "expose" label is present), and neither has worked in the TOML. The set-based form does not seem to work in the container args either, but the equality-based statements do.
I'm assuming there is some issue with how I've formatted the kubernetes endpoint within the traefik.toml file. Before reporting this issue on GitHub, I was hoping someone could clarify the documentation and/or correct any mistakes I've made in the TOML file format.
As you have already found out, not passing --kubernetes makes things work for you. The reason is that this parameter not only enables the Kubernetes provider but also sets all defaults. As documented, the command-line arguments take precedence over the configuration file; thus, any non-default Kubernetes parameters specified in the TOML file will be overridden by the default values implied through --kubernetes. This is intended (albeit not ideally documented) behavior.
You can still mix and match both command-line and TOML configuration parameters for Kubernetes (or any other provider, for that matter) by omitting --kubernetes. For instance, you could have your example TOML file
[kubernetes]
labelselector = "expose=internal"
and then invoke Traefik like
./traefik --configfile=config.toml --kubernetes.namespaces=other
which will cause Traefik to use both the custom label selector expose=internal and watch the namespace other.
I have submitted a PR to clarify the behavior of the command-line provider-enabling parameters with regards to the provider default values.
Indeed, flags take precedence over toml config. It's documented here http://docs.traefik.io/basics/#static-trfik-configuration :)
The problem appears to be the mixing and matching of command line arguments and toml options.
After reading over a few bug reports and some additional miscellaneous documentation, I realized that we had enabled the Kubernetes backend by passing the --kubernetes argument to the Traefik container, and that defining [kubernetes] in the TOML also enables the Kubernetes backend. On a hunch I removed the command-line argument, put the full Kubernetes backend config in the TOML, and everything works as expected.
I'm not sure whether this is expected behavior, but it suggests the design is such that command-line arguments take precedence over TOML config options when duplicate options are provided.
I have deployed a service using Cloud Run on GKE, which uses Knative as an abstraction over Kubernetes. The default MaxRevisionTimeoutSeconds is set to 600s in the Knative default config, but according to this PR it is customizable.
I couldn't find anything in the official Knative documentation, can anybody help me out here?
UPDATE:
After digging a bit more into the Knative source code and documentation, it looks like MaxRevisionTimeoutSeconds is defined in the ConfigMap/config-defaults resource, so I have to update that with a custom value.
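For reference, a minimal sketch of the change I am trying to make to that ConfigMap (assuming the standard knative-serving namespace and the max-revision-timeout-seconds key; Cloud Run on GKE may name or place these differently):
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
  namespace: knative-serving   # assumed namespace
data:
  # assumed keys: raise the cluster-wide cap and the per-revision default
  max-revision-timeout-seconds: "1200"
  revision-timeout-seconds: "600"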
It also looks like something called an operator can be used to modify the ConfigMap resource, but that did not work, probably because GCP's installation does not use the operator to install the Knative components. Anyway, I went on to install the operator and then used the knativeserving resource to overwrite config-defaults, but this also did not work when I tried re-deploying the service.
The next solution is to directly edit config-defaults using kubectl edit. I tried this too but encountered weird behavior: after editing the YAML, when I used kubectl describe to check the changed value, it sometimes showed the modified value, sometimes showed the old value, and sometimes didn't show that particular key-value pair at all. It also doesn't work when I try to re-deploy the service after making this edit.
If anyone can help me with this, it would be really great.
MaxRevisionTimeoutSeconds is a cluster-global setting which enforces the max value for TimeoutSeconds on each Revision. This value exists so that cluster administrators can set upper bounds on the amount of time a single HTTP request can be in the system. Knowing an upper bound can be useful when configuring graceful shutdown settings on the HTTP routing components to prevent dropped requests during upgrades.
It's possible that Cloud Run on GKE has overridden these configurations so that they can upgrade the underlying Istio and Knative components on a predictable schedule. (If you have a 10% upgrade budget and it takes 10m to drain a component, you need roughly ten 10-minute batches, so your minimum upgrade time is probably around 110m once additional scheduling / image fetch / startup time is taken into account.)
I have an ingress controller that answers to *.mycompany.com. Is it possible, whenever I define Ingress resources in my cluster, not to specify the whole thing (some-service.mycompany.com) but only the some-service part?
Thus, if I changed the domain name later on (or, for example, in my test environment) I wouldn't need to specify some-service.test.mycompany.com; things would magically work without changing anything.
For most Ingress Controllers, if you don’t specify the hostname then that Ingress will be a default for anything not matching anything else. The only downside is you can only have one of these (per controller, depending on your layout).
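For example, a rule that omits host entirely acts as that default (the backend service name and port here are illustrative):
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: some-service   # illustrative backend
            port:
              number: 80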
The configuration you want is possible only in plain nginx; it cannot be applied through nginx-ingress because of the limitations of the "host:" field. Trying a wildcard/regex host fails validation with:
"ingress-redirect" is invalid: spec.rules[0].host: Invalid value: "~^subdomain\\..+\\..+$;": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.',.
It must start and end with an alphanumeric character (e.g. example.com, regex used for validation is [a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
whereas plain nginx syntax allows something like this: server_name mail.*;
It would work if you put the whole configuration in an http-snippet, but that is a lot of manual work (which you want to avoid).
I think the better solution is to use Helm. Put the Ingress YAML into a Helm chart and use a my_domain value (referenced as .Values.my_domain) in the chart to build the Ingress host. Then all you need to do is run
helm install ingress-chart-name --set my_domain=example.com
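A minimal sketch of what the templated Ingress inside such a chart might look like (the service name, port, and the my_domain value name are assumptions):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: some-service
spec:
  rules:
  - host: some-service.{{ .Values.my_domain }}   # host assembled from the chart value
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: some-service
            port:
              number: 80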
Most Kubernetes objects can be created with kubectl create, but if you need e.g. a DaemonSet — you're out of luck.
On top of that, the objects being created through kubectl can only be customized minimally (e.g. kubectl create deployment allows you to only specify the image to run and nothing else).
So, considering that Kubernetes actually expects you to either edit a minimally configured object with kubectl edit to suit your needs or write a spec from scratch and then use kubectl apply to apply it, how does one figure out all possible keywords and their meanings to properly describe the object they need?
I expected to find something similar to the Docker Compose file reference, but when looking at the DaemonSet docs, I found only a single example spec that doesn't even explain most of its keys.
The spec of the resources in a .yaml file that you can run kubectl apply -f on is described in the Kubernetes API reference.
For a DaemonSet, its spec is described here. Its template is actually the same as in the Pod resource.
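For illustration, a minimal DaemonSet manifest assembled from that reference (the name, label, and image are placeholders):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-agent
spec:
  selector:
    matchLabels:
      app: example-agent        # must match the pod template labels
  template:
    metadata:
      labels:
        app: example-agent
    spec:
      containers:
      - name: agent
        image: example/agent:latest   # placeholder image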
I tried (unsuccessfully) to set up an initializer admission controller on k8s 1.10, running in minikube. kubectl does not show 'initializerconfiguration' as a valid object type, and attempting kubectl create -f init.yaml with a file containing an InitializerConfiguration object (similar to the example found here: https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#configure-initializers-on-the-fly) returns this:
no matches for kind "InitializerConfiguration" in version "admissionregistration.k8s.io/v1alpha1"
(I tried with /v1beta1 as well, because kubectl api-versions doesn't show admissionregistration.k8s.io/v1alpha1 but does have .../v1beta1; no luck with that, either).
"Initializers" is enabled in the --admission-control option for kube-apiserver and all possible APIs are also turned on by default in minikube - so it should have worked, according to the k8s documentation.
According to the document mentioned in question:
Enable initializers alpha feature
Initializers is an alpha feature, so it is disabled by default. To turn it on, you need to:
1) Include "Initializers" in the --enable-admission-plugins flag when starting kube-apiserver. If you have multiple kube-apiserver replicas, all should have the same flag setting.
2) Enable the dynamic admission controller registration API by adding admissionregistration.k8s.io/v1alpha1 to the --runtime-config flag passed to kube-apiserver, e.g. --runtime-config=admissionregistration.k8s.io/v1alpha1. Again, all replicas should have the same flag setting.
NOTE: For those looking to use this on minikube, use this to pass runtime-config to the apiserver:
minikube start --vm-driver=none --extra-config=apiserver.runtime-config=admissionregistration.k8s.io/v1alpha1=true
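With the API group enabled, an InitializerConfiguration along the lines of the documentation example should then be accepted; a minimal sketch (the initializer name and rules are illustrative):
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: InitializerConfiguration
metadata:
  name: example-initializer-config
initializers:
  # illustrative initializer that targets newly created pods
  - name: podimage.example.com
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]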
I am writing a YAML file for Kubernetes and I am wondering how to pass variables to the args field.
I need to do something like this:
args: ['--arg1=http://12.12.12.12:8080','--arg2=11.11.11.11']
But I don't want to hard-code the values for --arg1 and --arg2; instead it should be something like:
args: ['--arg1='$HOST1,'--arg2='$HOST2]
How should I do this?
You have two options that are quite different and really depend on your use-case, but both are worth knowing:
1) Helm allows you to create templates of Kubernetes definitions that can use variables.
Variables are supplied when you install the Helm chart, and before the resulting manifests are deployed to Kubernetes.
You can change the variables later on, but what that does is regenerate the YAML and re-deploy "static" versions of the result (template + variables = YAML that's sent to Kubernetes).
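As a hedged sketch of option 1, the container args in a chart template could be written like this (host1 and host2 are illustrative value names):
# in the chart's deployment template
args: ['--arg1=http://{{ .Values.host1 }}:8080', '--arg2={{ .Values.host2 }}']
and the values would then be supplied at install time, e.g.:
helm install my-release ./my-chart --set host1=12.12.12.12 --set host2=11.11.11.11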
2) ConfigMaps allow you to separate a configuration from the pod manifest, and share this configuration across several pods/deployments.
You can later reference the ConfigMap from your pod/deployment manifests.
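A minimal sketch of option 2, assuming a ConfigMap named hosts-config (Kubernetes expands $(VAR) references in args from the container's environment):
apiVersion: v1
kind: ConfigMap
metadata:
  name: hosts-config          # illustrative name
data:
  HOST1: "http://12.12.12.12:8080"
  HOST2: "11.11.11.11"
and in the pod/deployment spec:
      containers:
      - name: app
        image: example/app:latest   # placeholder image
        env:
        - name: HOST1
          valueFrom:
            configMapKeyRef:
              name: hosts-config
              key: HOST1
        - name: HOST2
          valueFrom:
            configMapKeyRef:
              name: hosts-config
              key: HOST2
        args: ['--arg1=$(HOST1)', '--arg2=$(HOST2)']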
Hope this helps!