I have three servers A, B, and C; on each machine I'm running Chronos, ZooKeeper, mesos-master, and mesos-slave.
Chronos contacts the mesos-master using the ZooKeeper URL, so it automatically picks the leading master even if some node is down. I have high availability here.
Chronos also runs in cluster mode, so accessing any of the Chronos instances I see the same list of jobs and everything works fine.
The problem I have here is that Chronos is accessible at any of the three URLs:
http://server_node_1:4400
http://server_node_2:4400
http://server_node_3:4400
I have another application which schedules jobs in Chronos using the REST API. Which URL does my application have to talk to in order to run in high-availability mode?
Let's say my application talks to http://server_node_1:4400 for scheduling the job; if Chronos on server_node_1 is down, I'm not able to schedule the job.
My application needs to talk to a single URL in order to schedule jobs in Chronos. Even if some Chronos node is down, I should be able to schedule the job. Do I need some kind of load balancer between my application and the Chronos cluster to pick a running Chronos node for job scheduling? How can I achieve high availability in my scenario?
Use HAProxy for routing to a Chronos instance. This way you can access a Chronos instance using e.g. curl loadbalancer:8081.
haproxy.cfg:
listen chronos_8081
bind 0.0.0.0:8081
mode http
balance roundrobin
option allbackups
option http-no-delay
# 'check' enables health checks so a dead Chronos node is taken out of rotation
server chronos01 server_node_1:4400 check
server chronos02 server_node_2:4400 check
server chronos03 server_node_3:4400 check
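With this in place, your scheduling application only talks to the load balancer. A rough sketch using Python requests and the Chronos ISO8601 job endpoint (the job fields and the loadbalancer hostname are placeholders; the endpoint path may differ between Chronos versions):

import requests

# Hypothetical job definition; field names follow the Chronos ISO8601 job schema.
job = {
    "name": "sample-job",
    "command": "echo hello",
    "schedule": "R/2023-01-01T00:00:00Z/PT1H",  # repeat every hour
    "epsilon": "PT15M",
    "owner": "ops@example.com",
}

# Talk to the load balancer, never to an individual Chronos node, so a dead
# node does not break scheduling.
resp = requests.post("http://loadbalancer:8081/scheduler/iso8601", json=job, timeout=10)
resp.raise_for_status()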
Or even better, start Chronos via Marathon, which will ensure a given number of instances is running (a sketch app definition follows after the list below). Then the HAProxy configuration could be generated by:
marathon-lb
bamboo
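For example, a minimal Marathon app definition for Chronos could look roughly like this (a sketch only: the Docker image, resources, and instance count are assumptions, and a real definition would need the ZooKeeper and Mesos flags for your cluster):

{
  "id": "/chronos",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "mesosphere/chronos",
      "network": "HOST"
    }
  },
  "cpus": 0.5,
  "mem": 512,
  "instances": 2
}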
I have 2 questions:
First, what does it mean that the Kubernetes executor is fault tolerant; in other words, what happens if one worker node goes down?
Second question: is it possible that the whole Airflow server goes down? If yes, is there a backup that runs automatically to continue the work?
Note: I have started learning Airflow recently.
Thanks in advance
This is a theoretical question that came up while I was learning Apache Airflow; I have read the documentation, but it did not mention how fault tolerance is handled.
What does it mean that the Kubernetes executor is fault tolerant?
The Airflow scheduler uses a Kubernetes API watcher to watch the state of the workers (task pods) on each change in order to discover failed pods. When a worker pod goes down, the scheduler detects this failure and changes the state of the failed tasks in the metadata database; these tasks can then be rescheduled and executed based on their retry configuration.
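Whether a failed pod's task is actually re-run therefore depends on its retry settings; a minimal sketch of such a task (DAG and task names are illustrative, assuming Airflow 2.4+):

from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="fault_tolerance_example",
    start_date=datetime(2023, 1, 1),
    schedule=None,
    catchup=False,
):
    # If the worker pod running this task dies, the scheduler marks the task
    # instance as failed and it becomes eligible for up to 3 retries.
    BashOperator(
        task_id="resilient_task",
        bash_command="echo hello",
        retries=3,
        retry_delay=timedelta(minutes=5),
    )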
Is it possible that the whole Airflow server goes down?
Yes, it is possible for different reasons, and there are different solutions/tips for each one:
Problem with the metadata database: the most important part of Airflow is the metadata database. It is the central point used to communicate between the different schedulers and workers; it saves the state of all the DAG runs and tasks, shares messages between tasks, and stores variables and connections, so when it goes down, everything will fail:
you can use a managed service (AWS RDS or Aurora, GCP Cloud SQL or Cloud Spanner, ...)
you can deploy it on your K8S cluster but in HA mode (doc for postgresql)
Problem with the scheduler: the scheduler runs as a pod, and there is a possibility of losing it depending on how you deploy it:
Try to request enough resources (especially memory) to avoid OOM problem
Avoid running it on spot/preemptible VMs
Create multiple replicas (minimum 3) of the scheduler to activate HA mode; in this case, if a scheduler goes down, there will be other schedulers up
Problem with the webserver pod: it doesn't affect your workload, but you will not be able to access the UI/API during the downtime:
Try to request enough resources (especially memory) to avoid OOM problem
It's a stateless service, so you can create multiple replicas without any problem; if one goes down, you will access the UI/API through the other replicas
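For example, if you deploy with the official Apache Airflow Helm chart (an assumption; other charts use different keys), the scheduler and webserver replica counts can be set in values.yaml:

# values.yaml
scheduler:
  replicas: 3   # multiple schedulers for HA (needs Airflow 2.x and a metadata DB that supports row locking)
webserver:
  replicas: 2   # stateless, so extra replicas simply add redundancy behind the Service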
I have a cluster running two services, a web app and a message queue handler. The app has autoscaling configured, but the queue handler does not. There are also a couple of scheduled tasks that run.
If I run a task manually using the CLI (aws ecs run-task), the task works properly at first, then after about 5 minutes it loses the ability to make outbound connections.
From looking at the scaling logs, it doesn't look like autoscaling would be causing this issue.
I want to run a socket program in AWS ECS with the client and server in one task definition. I am able to run it when I use awsvpc network mode and connect to the server on localhost every time. This is good because I don't need to know the IP address of the server. The issue is that the server has to start on some port, and if I run 10 of these tasks, only 3 tasks (= the number of running instances) run at a time. This is clearly because 10 tasks cannot open the same port. I can manually check for open ports before starting the server and somehow write it to a Docker shared volume where the client can read and connect, but this seems complicated and leaves my server with unnecessary code. For Services there is dynamic port mapping using an Application Load Balancer, but there isn't anything like that for simply running tasks.
How can I run multiple socket programs without having to manage the port number in AWS ECS?
If you're using awsvpc mode, each task will get its own ENI and there shouldn't be any port conflict. But each instance type has a limited number of ENIs available. You can increase that by enabling ENI trunking, which, however, is supported only by a handful of instance types:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container-instance-eni.html#eni-trunking-supported-instance-types
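A sketch with boto3 (the cluster, task definition, subnet, and security group IDs are placeholders): opt in to ENI trunking and run the tasks in awsvpc mode so each task gets its own ENI and can bind the same port:

import boto3

ecs = boto3.client("ecs")

# Opt the account in to ENI trunking so supported instance types can host
# more task ENIs than their default limit.
ecs.put_account_setting(name="awsvpcTrunking", value="enabled")

# Run the tasks in awsvpc mode: every task gets its own ENI, so the server
# inside each task can always listen on the same port without conflicts.
ecs.run_task(
    cluster="my-cluster",                              # placeholder
    taskDefinition="socket-program:1",                 # placeholder
    count=10,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],       # placeholder
            "securityGroups": ["sg-0123456789abcdef0"],    # placeholder
        }
    },
)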
I am using spring-boot-starter-quartz 2.2.1.RELEASE to schedule Quartz jobs, and I've deployed my code on two nodes.
And the quartz.properties is like this:
For node one:
org.quartz.scheduler.instanceName: machine1
org.quartz.scheduler.instanceId = AUTO
For node two:
org.quartz.scheduler.instanceName: machine2
org.quartz.scheduler.instanceId = AUTO
So in this situation, each node can run the same scanning job separately.
And now in my database, in "qrtz_job_details", I have two job records, namely scanJobbyMachine1 and scanJobbyMachine2.
I also deployed a frontend UI on node1 that has a RESTful API to schedule jobs. And I use nginx to randomly send my requests to one of my nodes.
If I make a request to query all jobs, the request may be sent to node1 and only node1's jobs will be shown. But I want to show both node1's and node2's jobs.
If I make a request to update scanJobbyMachine1, it may be sent to node2, and the update can't be made, because node2 only has the properties file whose instanceName is machine2.
Here are my plans:
Plan A: use cluster mode. But Quartz doesn't support "Allow Job Execution to be pinned to a cluster node" yet, so in cluster mode my job will only be executed by one node. But I want both nodes to do the scanning jobs.
Here is the issue link on GitHub.
Plan B: use non-cluster mode. Then I have to write duplicate APIs in the Controller like this:
localhost:8090/machine0/updateJob
localhost:8090/machine1/updateJob
And use nginx so that when I request /machine0/updateJob, it sends the request to 10.110.200.60 (machine1's IP), and when I request /machine1/updateJob, it sends it to 10.110.200.62 (machine2's IP).
And for queryAllJobs I have to use my backend to send requests to 10.110.200.60 and 10.110.200.62 first, combine the response lists in my backend, and then show the result in the frontend.
Plan C: write another backend with two properties files, just to schedule the jobs and not execute them (I don't know if this can work), and deploy it on these two nodes.
I really don't want to write and deploy another backend like Plan C or write duplicate APIs like Plan B.
Any good ideas?
Your problem is a cluster problem :)
Perhaps you could dynamically configure your job registration: as node1 starts, it will be alone in the cluster and jobs will run only on it. As node2 starts, both nodes can be used.
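If you do go with cluster mode, both nodes would share one scheduler name and one JDBC job store, for example (a sketch; the scheduler name, table prefix, and check-in interval are illustrative):

quartz.properties (identical on both nodes):
org.quartz.scheduler.instanceName = scanScheduler
org.quartz.scheduler.instanceId = AUTO
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000

With this, any node can query or update the shared job tables through your REST API, although, as you noted in Plan A, you still cannot pin a job's execution to a specific node.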
I have a task that takes approximately 3 minutes to run. It pulls data from a remote server and performs CPU-intensive analysis on it. This task will be invoked by an API call. Upon the API call, I am planning to give the client a unique task ID and assign the task to a Celery worker. Then the client will poll the server with the given task ID to see whether the task has been completed by a Celery worker and its result saved to a result backend. I am thinking of using nginx, gunicorn, and Flask, and dockerizing them for an easy deploy in case I need to distribute this architecture across multiple machines.
The problem is that the client may poll different servers due to the load balancer, and if this is not handled well, the polled server's Celery result backend might not have the task's result while another server's Celery result backend has it.
Is it possible to use a single result backend across multiple Celery instances and make different Celery instances query the same result backend? What might be other possible ways to solve this, other than using cloud storage like S3?
Would I have this problem only if I have multiple machines, or would it happen even if I have multiple gunicorn instances on a single machine where nginx acts as a load balancer over them?
Not only is it possible to use a single result backend for all Celery workers, but that is the only setting that makes sense! The same goes for the broker in most cases, unless you have a complicated Celery infrastructure with exchanges and complicated routes...
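A minimal sketch (the Redis hostname is a placeholder): every gunicorn/Flask instance and every Celery worker, on every machine, points at the same broker and the same result backend:

from celery import Celery

# One shared broker and one shared result backend for all web and worker
# nodes: whichever server the client polls, the task result is read from
# the same backend.
app = Celery(
    "tasks",
    broker="redis://shared-redis:6379/0",    # placeholder shared Redis
    backend="redis://shared-redis:6379/1",
)

This applies equally to multiple gunicorn instances on one machine: what matters is that they all reference the same backend, not where they run.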