I'm new to Kubernetes CRD. My question is as below:
Usually we need to apply a bunch of built-in resources for a Kubernetes app, like a few deployments, services, or ingresses. Can they be bundled into a single CRD, without implementing any controllers?
For example, I have a myapp-deploy and a myapp-service. Instead of applying them separately, I want to define a new CRD "myapp", something like:
kind: myapp
spec:
  deployment: myapp-deploy
  service: myapp-service
Then I would apply this new CRD.
Is this supported directly in Kubernetes, without implementing my own controller?
I read the official documentation and googled as well, but didn't find the answer.
Thanks!
It is not possible without writing any code. For your requirement, you need a Kubernetes operator (https://kubernetes.io/docs/concepts/extend-kubernetes/operator/), which will let you deploy all your resources using one CRD.
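For illustration, here is a minimal sketch of what the CRD definition itself could look like, assuming a hypothetical example.com group. Note that the CRD only teaches the API server to store "myapp" objects; you would still have to write the operator/controller that watches them and creates the Deployment and Service:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: myapps.example.com        # hypothetical group and name
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: MyApp
    singular: myapp
    plural: myapps
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                deployment:       # fields mirroring the example above
                  type: string
                service:
                  type: string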
I have a namespace where new short-lived pods (< 1 minute) are created constantly by Apache Airflow. I want all those new pods to be annotated with aws.amazon.com/cloudwatch-agent-ignore: true automatically, so that no CloudWatch metrics (Container Insights) are created for those pods.
I know that I can achieve that from airflow side with pod mutation hook but for the sake of the argument let's say that I have no control over the configuration of that airflow instance.
I have seen MutatingAdmissionWebhook and it seems it could do the trick, but it's considerable effort to set up. So I'm looking for a more off-the-shelf solution: I want to know if there is some "standard" admission controller that can handle this specific use case, without me having to deploy a web server and implement the API required by MutatingAdmissionWebhook.
Is there any way to add that annotation from the Kubernetes side at pod creation time? The annotation must be there "from the beginning", not added 5 seconds later; otherwise the cwagent might pick the pod up between the pod creation and the annotation being added.
To clarify, I am posting a community wiki answer.
You need to use the aws.amazon.com/cloudwatch-agent-ignore: true annotation. A pod that has it will be ignored by the amazon-cloudwatch-agent / cwagent.
Here is an excerpt from your own solution describing how to add this annotation in Apache Airflow:
(...) In order to force Apache Airflow to add the
aws.amazon.com/cloudwatch-agent-ignore: true annotation to the task/worker pods and to the pods created by the KubernetesPodOperator you will need to add the following to your helm values.yaml (assuming that you are using the "official" helm chart for airflow 2.2.3):
airflowPodAnnotations:
  aws.amazon.com/cloudwatch-agent-ignore: "true"

airflowLocalSettings: |-
  def pod_mutation_hook(pod):
      pod.metadata.annotations["aws.amazon.com/cloudwatch-agent-ignore"] = "true"
If you are not using the helm chart then you will need to change the pod_template_file yourself to add the annotation and you will also need to modify the airflow_local_settings.py to include the pod_mutation_hook.
Here is the link to your whole answer.
You can try this repo, which is a mutating admission webhook that does this. To date there is no built-in k8s support for automatically annotating pods in a specific namespace.
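For reference, here is a minimal sketch of the MutatingWebhookConfiguration you would register alongside such a webhook server. The service name, namespace, path, and the annotate-pods namespace label are hypothetical placeholders, and the webhook server itself (plus its TLS setup, omitted here) still has to be deployed:
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: pod-annotator                  # hypothetical name
webhooks:
  - name: pod-annotator.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
        operations: ["CREATE"]
    namespaceSelector:
      matchLabels:
        annotate-pods: "true"          # label the Airflow namespace with this
    clientConfig:
      service:
        name: pod-annotator            # the webhook server you deploy
        namespace: default
        path: /mutate
      # caBundle: <base64 CA cert>     # required for TLS, omitted here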
Summarize the problem:
Is there any way to add an ENV to an existing pod or to a new pod in Kubernetes?
For example, I want to add HTTP_PROXY to many pods, and to the new pods that Kubeflow 1.4 will generate, so these pods can access the internet.
Describe what you’ve tried:
I searched and found that Istio may be able to do that, but it's too complex for me.
Second, there are too many YAMLs in Kubeflow, so I cannot modify them one by one to use a ConfigMap or add the ENV in each of them.
So does anyone have a good, simple way to do this? Something like doing it in the Kubernetes configuration.
Use "PodPreset" object to inject common environment variables and other params to all the matching pods.
Please follow below article
https://v1-19.docs.kubernetes.io/docs/tasks/inject-data-application/podpreset/
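As a sketch, a PodPreset injecting the proxy variables into every pod in a namespace could look like the following. The namespace and proxy address are placeholders, and this only works on clusters up to v1.19 with the settings.k8s.io/v1alpha1 API and the PodPreset admission plugin enabled:
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: proxy-env                      # hypothetical name
  namespace: kubeflow                  # target namespace
spec:
  selector:
    matchLabels: {}                    # empty selector is intended to match all pods in the namespace
  env:
    - name: HTTP_PROXY
      value: "http://proxy.example.com:3128"   # placeholder proxy address
    - name: HTTPS_PROXY
      value: "http://proxy.example.com:3128"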
PodPreset was indeed removed in v1.20, so on newer clusters you will need a webhook.
You will have to run an additional service in your cluster that will change the configuration of the pods.
Here is the example on which I based my own webhook, which changed the configuration of the pods in my cluster. In that example the developer adds a sidecar to the pod, but you can apply your own logic to inject the required ENV:
https://github.com/morvencao/kube-mutating-webhook-tutorial/blob/master/medium-article.md
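The mutation itself is expressed as a JSONPatch returned (base64-encoded) in the AdmissionReview response. A sketch of such a patch, shown in YAML form with a placeholder proxy address, might be:
- op: add
  path: /spec/containers/0/env/-       # appends to the first container's env list
  value:
    name: HTTP_PROXY
    value: "http://proxy.example.com:3128"   # placeholder proxy address
Note that the /- append form assumes the container already has an env list; if it does not, the patch has to add the whole env array instead.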
I am looking for a solution that enables the automatic update of my deployment once the config (env variables) is updated with new data. As ConfigMaps don't support such a generic rollout, I am thinking about creating an operator using the Operator SDK that achieves this.
The main idea would be creating a CRD and a CR instance that holds the config, using that config in the deployment's env variables, and having the controller reconcile the state of my deployment whenever a change happens to my CR.
Therefore I am wondering: how do I use the CR data in my deployment manifest?
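For illustration, the CR holding the config could look like the sketch below (the group, kind, and fields are hypothetical). The controller would then read spec.env on every reconcile and write those values into the Deployment's container env, which triggers a rollout:
apiVersion: example.com/v1            # hypothetical group/version
kind: AppConfig                       # hypothetical kind
metadata:
  name: myapp-config
spec:
  env:
    LOG_LEVEL: "debug"                # placeholder config values
    FEATURE_FLAG: "on"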
I feel this must be possible but haven't managed to find docs on how to do this.
I would ideally like to add service exposure details in a deployment YAML file, and the service would come up with a host name and port upon issuing a create command with the deployment YAML.
You can write your Deployment manifest (like here), then you can write your Service manifest (like here), and then you can put them in the same file with a --- between them (like here). That is part of the YAML spec rather than a Kubernetes-specific thing.
For an example of how you can write the Service and Deployment so that they target the same set of Pods, you can look at the definition of the default backend of the nginx ingress controller for inspiration here.
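As a minimal sketch (the names, image, and port are placeholders), a single file holding both, with the Service selector matching the Deployment's pod labels, could look like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp                # the labels the Service selects on
    spec:
      containers:
        - name: myapp
          image: nginx:1.25       # placeholder image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: NodePort                  # use LoadBalancer on a cloud provider
  selector:
    app: myapp                    # must match the pod labels above
  ports:
    - port: 80
      targetPort: 80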
I am new to Kubernetes and Minikube. Both look like amazing tools, but I wonder if there is any way to have a single .yml file to deploy my services/deployments in all the environments, including the local dev env...
The first limitation I see is related to service discovery: I would like to have my services behind a load balancer in the cloud, but in the development environment I can't, since Minikube doesn't support it, so I have to fall back to NodePort.
Can you provide me with some info about that matter?
There are other common differences between environments: names; credentials for any database or other permissioned resources; allocation of RAM/CPU; replica counts. And there are limitations that minikube has as a runtime, compared to production k8s.
So, though one can use the same single YAML file in different environments, typically that's not what one wants.
What one usually wants is to have the general architectural shape of the solution be the same across environments, have differences extracted into minimalist configuration, then rendered using templates into environment-specific files to be used at deployment time.
The tool most commonly used to support this kind of approach is helm:
https://helm.sh/
Helm is basically a glorified templating wrapper around kubectl (Helm 2 also had an in-cluster component, Tiller, which Helm 3 removed). With helm, you can use the same base set of resource files, extract environment differences into config files, and then use helm to deploy as appropriate to each environment.
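A tiny sketch of the idea, with hypothetical file names and values: a chart template references a value, and each environment supplies its own values file:
# templates/service.yaml (excerpt)
spec:
  type: {{ .Values.service.type }}

# values-dev.yaml
service:
  type: NodePort

# values-prod.yaml
service:
  type: LoadBalancer
You would then deploy each environment with something like helm install myapp ./chart -f values-dev.yaml.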
If I understood your question properly, you would like to spin up your infrastructure using one command and one file.
It is possible; however, it depends on your services. If some pods require another one to be running before they can start, this can get tricky. Technically, though, you can put all your manifest files in one bundle, and then create all the deployments, services, etc. with kubectl apply -f bundle.yml.
To create this bundle, you need to separate every manifest (deployment, service, configmap, etc.) with triple dashes (---).
Example:
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-1
---
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-2