What we want to achieve:
We would like to use Airflow to manage our machine learning and data pipelines while using Kubernetes to manage the resources and schedule the jobs. What we would like to achieve is for Airflow to orchestrate the workflow (e.g. task dependencies, re-running jobs upon failure) and for Kubernetes to orchestrate the infrastructure (e.g. cluster autoscaling and assignment of individual jobs to nodes). In other words, Airflow tells the Kubernetes cluster what to do, and Kubernetes decides how to distribute the work. At the same time, we would also want Airflow to be able to monitor the status of the individual tasks. For example, if we have 10 tasks spread across a cluster of 5 nodes, Airflow should be able to communicate with the cluster and report something like: 3 “small tasks” are done, 1 “small task” has failed and will be scheduled to re-run, and the remaining 6 “big tasks” are still running.
Questions:
Our understanding is that Airflow has no Kubernetes operator; see the open issue at https://issues.apache.org/jira/browse/AIRFLOW-1314. That being said, we don’t want Airflow to manage resources (managing service accounts, environment variables, creating clusters, etc.), but simply to send tasks to an existing Kubernetes cluster and let Airflow know when a job is done. An alternative would be to use Apache Mesos, but it looks less flexible and less straightforward compared to Kubernetes.
I guess we could use Airflow’s bash_operator to run kubectl, but that does not seem like the most elegant solution.
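For illustration, the kind of thing we have in mind is sketched below; the manifest path and job name are just placeholders:

```python
# Rough sketch of the "BashOperator + kubectl" idea; import path is for newer Airflow,
# older versions use airflow.operators.bash_operator.BashOperator.
from airflow.operators.bash import BashOperator

submit_job = BashOperator(
    task_id="submit_k8s_job",
    bash_command=(
        "kubectl apply -f /opt/jobs/train-job.yaml && "              # placeholder manifest
        "kubectl wait --for=condition=complete --timeout=3600s job/train-job"  # placeholder job name
    ),
    dag=dag,
)
```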
Any thoughts? How do you deal with that?
Airflow has both a Kubernetes Executor as well as a Kubernetes Operator.
You can use the Kubernetes Operator to send tasks (in the form of Docker images) from Airflow to Kubernetes via whichever Airflow executor you prefer.
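To make that concrete, a minimal KubernetesPodOperator sketch could look like the following; the image, namespace and `dag` object are placeholders, and the exact import path varies between Airflow versions:

```python
# Newer Airflow releases; older ones ship it as
# airflow.contrib.operators.kubernetes_pod_operator.KubernetesPodOperator.
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

train = KubernetesPodOperator(
    task_id="train_model",
    name="train-model",
    namespace="airflow",                   # placeholder namespace
    image="myregistry/trainer:latest",     # placeholder image you build yourself
    cmds=["python", "train.py"],
    get_logs=True,                         # stream the pod's logs into the Airflow task log
    is_delete_operator_pod=True,           # remove the pod once the task finishes
    dag=dag,
)
```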
Based on your description, though, I believe you are looking for the KubernetesExecutor to schedule all your tasks against your Kubernetes cluster. As you can see from the source code, it has a much tighter integration with Kubernetes.
This also means you do not have to build the Docker images ahead of time, as is required with the Kubernetes Operator.
Related
I wanted to know whether it's possible to set up the KubernetesExecutor on Airflow while having the webserver and scheduler run on an EC2 instance.
Meaning that tasks would run in Kubernetes pods (EKS in my case) but the base services on a regular EC2 instance.
I tried to find information about this but fell short...
The following quote is from Airflow's docs, and it's the reason I'm asking this question:
KubernetesExecutor runs as a process in the Airflow Scheduler. The scheduler itself does not necessarily need to be running on Kubernetes, but does need access to a Kubernetes cluster.
Thanks in advance!
Yes, this is entirely possible.
You just need to run your airflow scheduler and airflow webserver on EC2 and configure the EC2 instance with all the necessary access (likely via a service account - but this is your decision and deployment configuration) to be able to spin up pods on your EKS cluster.
Nothing special about it, other than that you will have to learn how to run and configure the components to talk to each other - there are no ready-to-use recipes; you will simply have to follow Airflow's configuration parameters and set up whatever authentication scheme you need.
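As a quick sanity check that the EC2 instance really can reach the cluster the way the scheduler will, something like the following with the Kubernetes Python client can help; the `airflow` namespace and a kubeconfig written by `aws eks update-kubeconfig` are assumptions about your setup:

```python
from kubernetes import client, config

# Load the same kubeconfig the scheduler/KubernetesExecutor will use on this EC2 instance
# (e.g. one written by `aws eks update-kubeconfig --name <cluster>`).
config.load_kube_config()

v1 = client.CoreV1Api()
for pod in v1.list_namespaced_pod(namespace="airflow").items:  # assumed namespace
    print(pod.metadata.name, pod.status.phase)
```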
For my microservice-based application, I am designing a component as follows:
The task we want to execute is periodic in nature. For it, I planned to make use of Kubernetes CronJobs. It executes the job every hour, and this works perfectly fine.
In a few scenarios, I want to execute this task on demand (instead of waiting for the next hourly window). For example, if the next job is due at 2:00 pm, I may want to execute it early, say at 1:20 pm.
There is a related question - How can I trigger a Kubernetes Scheduled Job manually?
But I am not looking for a manual way of achieving it or for explicitly calling kubectl commands. Is there a way to do it automatically, based on events/queues?
Our application is deployed on AWS EKS and Azure AKS. Can I have the k8s clusters read from some queues/pub-sub systems (e.g. AWS SQS, AWS SNS) and trigger the job dynamically?
Your help would be immensely appreciated!
If your application is running on Kubernetes and you don't want to migrate it to a serverless function, but rather keep everything inside the Kubernetes cluster, you can use Knative.
Scale to Zero With Knative
Knative is a serverless platform that is built on top of Kubernetes. It provides higher-level abstractions for common application use cases.
One key feature is its ability to run generic (micro) service-based applications as serverless with the help of built-in scale-to-zero support. Knative has introduced its own autoscaler, the Knative Pod Autoscaler (KPA), which supports scale to zero for any service that uses non-CPU-based scaling metrics.
Updating your microservice to run with Knative involves only minor changes, and you can keep running it on Kubernetes.
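If you would rather keep the plain CronJob and trigger it early from a queue instead of moving to Knative, one option is a small in-cluster worker that reads from SQS and creates a Job from the CronJob's own template via the Kubernetes API. A rough sketch; the queue URL, CronJob name and namespace below are placeholders, not taken from your setup:

```python
import time
import boto3
from kubernetes import client, config

QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/run-now"  # placeholder
CRONJOB_NAME = "hourly-task"                                            # placeholder
NAMESPACE = "default"                                                   # placeholder

config.load_incluster_config()   # use load_kube_config() when running outside the cluster
batch = client.BatchV1Api()      # older clusters/clients expose CronJobs via BatchV1beta1Api
sqs = boto3.client("sqs")

while True:
    messages = sqs.receive_message(QueueUrl=QUEUE_URL, WaitTimeSeconds=20).get("Messages", [])
    for msg in messages:
        # Build a one-off Job from the CronJob's job template, just like a scheduled run would
        cron_job = batch.read_namespaced_cron_job(CRONJOB_NAME, NAMESPACE)
        manual_job = client.V1Job(
            metadata=client.V1ObjectMeta(generate_name=f"{CRONJOB_NAME}-manual-"),
            spec=cron_job.spec.job_template.spec,
        )
        batch.create_namespaced_job(NAMESPACE, manual_job)
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
    time.sleep(1)
```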
We decided to run Airflow on Kubernetes. We would like to make use of the power of Kubernetes, but in a balanced way.
We have some very small tasks in our DAGs, for example creating a directory. The KubernetesExecutor spins up a pod for every task; this takes a long time and is therefore overkill for many short tasks.
My question is, is it possible to configure Airflow to spin up a Kubernetes pod for a whole DAG, instead of a pod per task? (Preferably without Celery)
I do not think it is possible to use one pod per DAG, because the KubernetesExecutor is designed to request a pod per task:
When a DAG submits a task, the KubernetesExecutor requests a worker pod from the Kubernetes API. The worker pod then runs the task, reports the result, and terminates.
Maybe combining multiple smaller tasks into one is a way to go.
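For example, a cheap step like creating a directory can be folded into the task that needs it, so it never gets a pod of its own. A sketch, where `run_heavy_processing`, the path and the `dag` object are placeholders:

```python
import os
from airflow.operators.python import PythonOperator


def prepare_and_process(output_dir: str, **_):
    # The tiny "create a directory" step no longer needs its own pod
    os.makedirs(output_dir, exist_ok=True)
    run_heavy_processing(output_dir)  # placeholder for the real workload


combined = PythonOperator(
    task_id="prepare_and_process",
    python_callable=prepare_and_process,
    op_kwargs={"output_dir": "/data/output"},  # placeholder path
    dag=dag,
)
```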
https://airflow.apache.org/docs/apache-airflow/stable/executor/celery_kubernetes.html
The CeleryKubernetesExecutor allows you to use the immediately available resources of a Celery worker or to spin up a pod for a task. I haven’t configured this setup myself, but it seems to match your use case.
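If you go that route, routing happens per task via the `queue` argument: tasks on the queue configured as `kubernetes_queue` get their own pod, everything else runs on the Celery workers. A rough sketch (script path and `dag` object are placeholders):

```python
from airflow.operators.bash import BashOperator

# Tiny task: runs on an already-warm Celery worker (default queue), no pod start-up cost
make_dir = BashOperator(
    task_id="make_dir",
    bash_command="mkdir -p /data/run_{{ ds_nodash }}",
    dag=dag,
)

# Heavy task: sent to the KubernetesExecutor side by putting it on the configured queue
train_model = BashOperator(
    task_id="train_model",
    bash_command="python /opt/app/train.py",  # placeholder command
    queue="kubernetes",  # must match [celery_kubernetes_executor] kubernetes_queue
    dag=dag,
)

make_dir >> train_model
```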
I saw that some people run Kubernetes alongside Slurm, and I was just curious: why would you need Kubernetes with Slurm? What is the main difference between Kubernetes and Slurm?
Slurm is an open-source job scheduling system for large and small Linux clusters. It is mainly used as a workload manager/job scheduler, mostly in HPC (High Performance Computing) and sometimes in big data.
Kubernetes is an orchestration system for Docker containers that uses the concepts of "labels" and "pods" to group containers into logical units. It was mainly created to run microservices, and AFAIK Kubernetes currently does not support Slurm.
Slurm, as a job scheduler, has more scheduling options than Kubernetes, but K8s is a container orchestration system, not only a job scheduler. For example, Kubernetes supports array jobs, while Slurm supports both parallel and array jobs. If you want to dive into scheduling, check this article.
As I mentioned before, Kubernetes is more focused on container orchestration and Slurm is focused on job/workload scheduling.
The only thing that comes to my mind is that someone needed a very customized cluster, using WLM-Operator + K8s + Slurm + Singularity to execute HPC/big data jobs.
The Slurm Workload Manager is usually used by many of the world's supercomputers to optimize the locality of task assignments on parallel computers.
I am trying to create a Kubernetes job that consists of two pods that have to be scheduled on separate nodes in our hybrid cluster. Our requirement is that one of the pods runs on a Windows Server node and the other pod runs on a Linux node (thus we cannot just run two Docker containers in the same pod, which I know is possible, but would not work in our scenario). The Linux pod (which you can imagine as a client) will communicate over the network with the Windows pod (which you can imagine as a stateful server), exchanging data while the job runs. When the Linux pod terminates, we want to also terminate the Windows pod. However, if one of the pods fails, then we want to fail both pods (as they are designed to be a single job).
Our current design is to write a K8s service that handles the communication between the pods and then apply the service and the two pods to the cluster to "emulate" a job. However, this is not ideal, since the two pods are not tightly coupled as a single job, and it adds quite a bit of overhead to manage this setup manually (e.g. when the job fails or finishes, we probably need to manually kill the service and the deployment of the Windows pod). Plus, we would need to deploy a new service for each "job", as we require the Linux pod to always communicate with the same Windows pod for the duration of the job due to underlying state (thus we cannot use a single service for all Windows pods).
Any thoughts on how this could best be achieved on Kubernetes would be much appreciated! Hopefully this scenario is supported natively, and I would not need to resort to the kind of pod-service-pod setup that I described above.
Many thanks
I am trying to distinguish your distaste for creating and wiring the Pods from your distaste for having to do so manually. Because, in theory, a Job that creates Pods is very similar to what you are describing, and would be able to have almost infinite customization for those kinds of rules. With a custom controller like that, one need not create a Service for the client(s) to speak to their server, as the Job could create the server Pod first, obtain its Pod-specific IP, and feed that to the subsequently created client Pods.
I would expect one could create a Job controller using only bash and either curl or kubectl: generate the JSON or YAML that describes the situation you wish to have, feed it to the Kubernetes API (since the Job would have a service account - just like any other in-cluster container), and use normal traps to clean up after itself. Without more of the specific edge cases loaded in my head it's hard to say whether that's a good idea or not, but I believe it's possible.
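For what it's worth, the same idea is also easy to express with the Kubernetes Python client instead of bash/curl. A rough sketch of the create-server, feed-its-IP-to-the-client, watch-and-clean-up flow (image names and namespace are placeholders):

```python
import time
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when run from inside the cluster
v1 = client.CoreV1Api()
NAMESPACE = "default"      # placeholder


def make_pod(name, image, node_os, env=None):
    # Pin each pod to the right OS via the well-known kubernetes.io/os node label
    return client.V1Pod(
        metadata=client.V1ObjectMeta(name=name, labels={"job-group": "hybrid-job"}),
        spec=client.V1PodSpec(
            restart_policy="Never",
            node_selector={"kubernetes.io/os": node_os},
            containers=[client.V1Container(name=name, image=image, env=env or [])],
        ),
    )


# 1. Start the Windows "server" pod and wait for it to get a pod IP
v1.create_namespaced_pod(NAMESPACE, make_pod("win-server", "myregistry/server:latest", "windows"))
server_ip = None
while not server_ip:
    time.sleep(2)
    server_ip = v1.read_namespaced_pod("win-server", NAMESPACE).status.pod_ip

# 2. Start the Linux "client" pod, handing it the server's pod IP directly (no Service needed)
client_env = [client.V1EnvVar(name="SERVER_ADDR", value=server_ip)]
v1.create_namespaced_pod(NAMESPACE, make_pod("linux-client", "myregistry/client:latest", "linux", client_env))

# 3. Watch both pods: stop when either fails or the client finishes, then tear both down
try:
    while True:
        phases = {n: v1.read_namespaced_pod(n, NAMESPACE).status.phase
                  for n in ("win-server", "linux-client")}
        if "Failed" in phases.values() or phases["linux-client"] == "Succeeded":
            break
        time.sleep(5)
finally:
    for n in ("win-server", "linux-client"):
        v1.delete_namespaced_pod(n, NAMESPACE)
```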