Is it possible to run multiple IngressControllers in the same Namespace with the same IngressClass?
I have multiple IngressControllers with different LoadBalancer IP addresses and would like to keep this setup.
I upgraded the first IngressController to the latest version.
Updating the second/third/.. IngressController fails because of:
rendered manifests contain a resource that already exists. Unable to continue with update: IngressClass "nginx" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "nginx-ingress-lb-02": current value is "nginx-ingress-lb-01"
Any idea how to fix this?
The issue you mention here is mainly with Helm, which prevents you from overwriting a resource - your IngressClass - that belongs to another Helm release.
One way to work around this may be to use Helm's --dry-run (or helm template) option: once you have the rendered objects written to a file, remove the IngressClass from it and apply the file with kubectl.
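A rough sketch of that workaround (the release name is taken from your error message; the chart reference and namespace are assumptions):

# render the chart without installing it
helm template nginx-ingress-lb-02 ingress-nginx/ingress-nginx -n ingress > rendered.yaml
# delete the IngressClass document from rendered.yaml, then:
kubectl apply -n ingress -f rendered.yaml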
Another way may be to patch the chart deploying your controller. As a contributor to the Traefik helm chart, I know that we would install IngressClasses named after the Traefik deployment we operate. The chart you're using, for Nginx, apparently does not implement support for that scenario. Which doesn't mean it shouldn't work.
Now, answering your first question, is it possible to run multiple IngressControllers in the same Namespace with the same IngressClass: yes.
You may have several Ingress Controllers, one watching for Ingresses in namespace A, another in namespace B, with both sets of Ingresses referencing the same class. Deploying those Ingresses into the same namespace is also possible - although implementing NetworkPolicies and isolating your controllers into their own namespaces would help in distinguishing who's who.
An option that works for me, when deploying multiple ingress controllers with Helm, is setting controller.ingressClassResource.enabled: false in every Helm deployment, except the first.
The comments in the default values.yaml aren't entirely clear on this, but after studying the chart I found that controller.ingressClassResource.enabled is only evaluated to determine whether or not to create the IngressClass, not whether or not to attach the controller.ingressClassResource.controllerValue to the controller. (This is true at least for helm-chart-4.0.13).
So, for the first Helm deployment, if you don't override any of the default controller.ingressClassResource settings, the following values will be used to create an IngressClass and attach the controllerValue to the controller:
controller:
  ingressClassResource:
    name: nginx
    enabled: true
    default: false
    controllerValue: "k8s.io/ingress-nginx"
For all other controllers that you want to run with the same IngressClass, simply set controller.ingressClassResource.enabled: false, to prevent Helm from trying (and failing) to create the IngressClass again.
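A minimal sketch of the two installs (release names, namespace and chart reference are illustrative):

# first controller: creates the IngressClass "nginx"
helm install nginx-ingress-lb-01 ingress-nginx/ingress-nginx -n ingress

# every additional controller: reuse the existing IngressClass
helm install nginx-ingress-lb-02 ingress-nginx/ingress-nginx -n ingress \
  --set controller.ingressClassResource.enabled=false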
I have a namespace where new short-lived pods (< 1 minute) are created constantly by Apache Airflow. I want all those new pods to be annotated with aws.amazon.com/cloudwatch-agent-ignore: true automatically, so that no CloudWatch metrics (Container Insights) are created for those pods.
I know that I can achieve that from airflow side with pod mutation hook but for the sake of the argument let's say that I have no control over the configuration of that airflow instance.
I have seen MutatingAdmissionWebhook and it seems it could do the trick, but it also seems like considerable effort to set up. So I'm looking for a more off-the-shelf solution: I want to know if there is some "standard" admission controller that covers this specific use case, without me having to deploy a web server and implement the API required by MutatingAdmissionWebhook.
Is there any way to add that annotation from the Kubernetes side at pod creation time? The annotation must be there "from the beginning", not added 5 seconds later, otherwise the cwagent might pick the pod up between its creation and the annotation being added.
To clarify, I am posting a community wiki answer.
You had to use the aws.amazon.com/cloudwatch-agent-ignore: true annotation. A pod that has it will be ignored by amazon-cloudwatch-agent / cwagent.
Here is the excerpt from your solution on how to add this annotation in Apache Airflow:
(...) In order to force Apache Airflow to add the
aws.amazon.com/cloudwatch-agent-ignore: true annotation to the task/worker pods and to the pods created by the KubernetesPodOperator you will need to add the following to your helm values.yaml (assuming that you are using the "official" helm chart for airflow 2.2.3):
airflowPodAnnotations:
  aws.amazon.com/cloudwatch-agent-ignore: "true"

airflowLocalSettings: |-
  def pod_mutation_hook(pod):
      pod.metadata.annotations["aws.amazon.com/cloudwatch-agent-ignore"] = "true"
If you are not using the helm chart then you will need to change the pod_template_file yourself to add the annotation and you will also need to modify the airflow_local_settings.py to include the pod_mutation_hook.
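For reference, a rough sketch of what such a pod_template_file could look like with the annotation added (everything below apart from the annotation itself is an assumption; the worker container is conventionally named base):

apiVersion: v1
kind: Pod
metadata:
  name: airflow-worker-template
  annotations:
    aws.amazon.com/cloudwatch-agent-ignore: "true"   # ignored by cwagent
spec:
  containers:
    - name: base
      image: apache/airflow:2.2.3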
Here is the link to your whole answer.
You can try this repo, which is a mutating admission webhook that does this. To date there's no built-in k8s support for automatically annotating pods in a specific namespace.
I am following this procedure to deploy konghq in my Kubernetes cluster.
The key installation command there is this:
$ kubectl create -f https://konghq.com/blog/kubernetes-ingress-api-gateway/
It works fine when I create one single konghq deployment, but it doesn't work for two deployments. What would I need to do? I changed the namespace but realized that a number of the resources are created outside of the namespace.
There is no sense in creating 2 ingress controllers under 1 namespace. If you would like to have multiple ingress rules under 1 namespace, you are welcome to create 1 ingress controller and multiple rules.
Consider creating 2 ingress controllers in case you have multiple namespaces.
For example, check Multiple Ingress in different namespaces
I am trying to setup 2 Ingress controllers in my k8s cluster under 2 namespaces. Reason for this setup: Need one to be public which has route to only one service that we want to expose. Need another one to be private and has routes to all services including the internal services.
To dig deeper into your issue, it would be nice to have logs, errors, etc.
In case you still DO need 2 controllers, I would recommend that you adjust the namespace resource limits (to avoid issues) and then try to deploy again.
To check: Multiple kong ingress controller or just one to different environments
I'm new to Kubernetes CRD. My question is as below:
Usually we need to apply a bunch of built-in resources for a Kubernetes app, like a few Deployments, Services, or Ingresses. Can they be bundled into a single CRD, without implementing any controllers?
For example, I have a myapp-deploy and a myapp-service. Instead of applying them separately, I want to define a new CRD "myapp", similar to:
kind: myapp
spec:
  deployment: myapp-deploy
  service: my-service
Then apply this new CRD.
Is this supported directly in Kubernetes, without implementing my own controller?
I read the official document and googled as well, but didn't find the answer.
Thanks!
It is not possible without writing any code. For your requirement you need to use a Kubernetes operator (https://kubernetes.io/docs/concepts/extend-kubernetes/operator/), which will let you deploy all your resources using one CRD.
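To illustrate why: a CRD by itself only registers a new API type and its schema; nothing will create the Deployment or Service for you. A minimal sketch of such a registration (group, version and field names are hypothetical) could look like this, and an operator/controller would still have to watch myapp objects and act on them:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: myapps.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: MyApp
    plural: myapps
    singular: myapp
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                deployment:
                  type: string
                service:
                  type: string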
I have an ingress controller that is answering to *.mycompany.com. Is it possible, whenever I define Ingress resources in my cluster, not to specify the whole thing (some-service.mycompany.com) but only the some-service part?
Thus, if I changed the domain name later on (or for example in my test env) I wouldn't need to specify some-service.test.mycompany.com but things would magically work without changing anything.
For most Ingress Controllers, if you don’t specify the hostname then that Ingress will be a default for anything not matching anything else. The only downside is you can only have one of these (per controller, depending on your layout).
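A minimal sketch of such a host-less, catch-all Ingress (the class name, service name and port are assumptions):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: some-service
spec:
  ingressClassName: nginx
  rules:
    - http:   # no host: matches anything not claimed by another Ingress
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: some-service
                port:
                  number: 80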
The configuration you want to achieve is possible in plain nginx, but it's impossible to apply it in nginx-ingress due to the limitations of the host: field:
"ingress-redirect" is invalid: spec.rules[0].host: Invalid value: "~^subdomain\\..+\\..+$;": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.',.
It must start and end with an alphanumeric character (e.g. example.com, regex used for validation is [a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
but nginx syntax allows you to write something like this: server_name mail.*;
It would work if you put the whole configuration in as an http_snippet, but it's a lot of manual work (which you want to avoid).
I think the better solution is to use Helm: put the Ingress YAML into a Helm chart and use a my_domain value in the chart to build the Ingress object. Then all you need to run is:
helm install ingress-chart-name --set my_domain=example.com
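A rough sketch of the templated Ingress inside such a chart (the chart layout, service name and port are assumptions):

# templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: some-service
spec:
  rules:
    - host: "some-service.{{ .Values.my_domain }}"   # domain injected per environment
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: some-service
                port:
                  number: 80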
Based on this it is possible to create environment variables that are the same across all the pods of the deployment that you define.
Is there a way to instruct Kubernetes deployment to create pods that have different environment variables?
Use case:
Let's say that I have a monitoring container and I want to create 4 replicas of it. This container has a service that sends mail if an environment variable tells it to. E.g., if the env var IS_MASTER is true, then the service proceeds to send those e-mails.
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  replicas: 4
  ...
  template:
    ...
    spec:
      containers:
        - env:
            - name: IS_MASTER
              value: # <------------- true only in one of the replicas
(In my case I'm using Helm, but the same thing can be done without Helm as well.)
What you are looking for is, as far as I know, more like an anti-pattern than impossible.
From what I understand, you seem to be looking to deploy a scalable/HA monitoring platform that wouldn't mail X times on alerts. You can either add a sidecar container that talks to its siblings and "elects" the master-mailer (a StatefulSet will make it easier in this case), or just separate the mailer from the monitoring and make them talk to each other through a Service. That would allow you to load-balance both monitoring and mailing separately.
monitoring-1 \                 / mailer-1
monitoring-2 ---> mailer.svc --- mailer-2
monitoring-3 /                 \ mailer-3
Any mailing request will be handled by one and only one mailer from the pool, but that's assuming your Monitoring Pods aren't all triggered together on alerts... If that's not the case, then regardless of your "master" election for the mailer, you will have to tackle that first.
And by tackling that first I mean adding master-election logic to your monitoring platform to orchestrate master fail-overs on events. There are a few ways to do so, but it really depends on what your monitoring platform is and can do...
Although, if your replicas are just there to extend compute power somehow and your master is expected to be static, then simply use a StatefulSet and add a one-liner at runtime doing if hostname == $statefulset-name-0 then MASTER, but I feel like it's not the best idea.
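A rough sketch of that one-liner inside the StatefulSet's pod spec (the StatefulSet name monitoring, the image and the entrypoint path are assumptions):

containers:
  - name: monitoring
    image: my-monitoring:latest
    command: ["/bin/sh", "-c"]
    args:
      - |
        # pod monitoring-0 of the StatefulSet becomes the master
        if [ "$(hostname)" = "monitoring-0" ]; then
          export IS_MASTER=true
        else
          export IS_MASTER=false
        fi
        exec /entrypoint.sh   # hand over to the image's normal entrypoint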
By definition, each pod in a deployment is identical to its other replicas. This is not possible in the yaml definition.
One optional solution would be to override the pod command and have it calculate the value of the variable, set it (export IS_MASTER=${resolved_value}) and then trigger the default entrypoint of the container.
It means you'll have to figure out the logic to implement this (i.e. how does a pod know it should set IS_MASTER=true?). This is an implementation detail that can be handled with a DB or another shared resource used as a flag or semaphore.
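One possible shape of that command override, using an atomic mkdir on a shared volume as the flag (the volume mount, path and entrypoint are assumptions, purely to illustrate the idea):

command: ["/bin/sh", "-c"]
args:
  - |
    # whichever replica creates the lock directory first becomes the master
    if mkdir /shared/master-lock 2>/dev/null; then
      export IS_MASTER=true
    else
      export IS_MASTER=false
    fi
    exec /entrypoint.sh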
All the Pod replicas in the deployment will have the same environment variables and no unique value to identify a particular Pod. Creating multiple Deployments is a better solution.
Not sure why, but the OP asks for only one Deployment. One solution is to use StatefulSets. The pod names would be like web-0, web-1, web-2 and so on. In the code, check the host name: if it is web-0, then send emails; otherwise do something else.
It's a dirty solution, but I can't think of a better solution than creating multiple deployments.
One other solution is to use the same Helm chart for both cases and run one Helm release for each case. You can override env variables with Helm (using --set foo.deployment.isFirst="0" or "1"; note that --set keys omit the .Values. prefix).
Please note that Helm/K8s will not allow you to POST the very same configuration twice.
So you will have to conditionally apply some Kubernetes-specific configuration (Secrets, ConfigMaps, etc.) in the first release only.
{{- if eq .Values.foo.deployment.isFirst "1" }}
...
...
{{- end }}
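A rough sketch of how that value could drive IS_MASTER in the chart's Deployment template (the value path follows the example above; the rest is illustrative):

# templates/deployment.yaml (excerpt)
env:
  - name: IS_MASTER
    value: {{ if eq .Values.foo.deployment.isFirst "1" }}"true"{{ else }}"false"{{ end }}

You would then run one release per behaviour, for example:

# --set-string keeps the value a string so the eq comparison with "1" works
helm install monitoring-master ./my-chart --set-string foo.deployment.isFirst=1
helm install monitoring-workers ./my-chart --set-string foo.deployment.isFirst=0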