Does each task in an ECS cluster get its own disk space? - amazon-ecs

I have an ECS cluster set up. I can launch several tasks that all point to the same task definition, and I see them running with different container runtime IDs.
I understand that each Fargate task has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task.
What I want to understand is: does each task get its own disk space as well? Suppose I append logs to a static file (logs/application_logs.txt). Will each running task have only its own logs in that file?
If 3 tasks are running together, will the logs of all 3 tasks end up in logs/application_logs.txt?

What I want to understand is: does each task get its own disk space as well? Suppose I append logs to a static file (logs/application_logs.txt). Will each running task have only its own logs in that file?
Each Fargate task gets its own allocated storage, which is unique to that task. Each running task will therefore only have its own logs in that location.
If 3 tasks are running together, will the logs of all 3 tasks end up in logs/application_logs.txt?
No, they will not. If you do want shared storage, you can add data volumes, which can be shared between the containers in a task (or, with EFS, across tasks). See https://docs.aws.amazon.com/AmazonECS/latest/developerguide/fargate-task-storage.html for more info.

Quoting the doc:
When provisioned, each Fargate task receives the following storage. Task storage is ephemeral. After a Fargate task stops, the storage is deleted.
10 GB of Docker layer storage
An additional 4 GB for volume mounts. This can be mounted and shared among containers using the volumes, mountPoints and volumesFrom parameters in the task definition.
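To make the quoted parameters concrete, here is a minimal sketch of how that shared 4 GB volume space might be declared in a task definition; the family, container names, and images below are hypothetical placeholders, not values from the question:

```python
# A task definition fragment, written as a Python dict in the boto3/JSON shape;
# names and images are illustrative assumptions.
task_definition = {
    "family": "log-writer",
    "volumes": [{"name": "shared-logs"}],  # carved out of the 4 GB volume storage
    "containerDefinitions": [
        {
            "name": "app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",
            "mountPoints": [
                {"sourceVolume": "shared-logs", "containerPath": "/logs"}
            ],
        },
        {
            "name": "log-shipper",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/shipper:latest",
            # reuse the app container's mounts instead of repeating them
            "volumesFrom": [{"sourceContainer": "app", "readOnly": True}],
        },
    ],
}
```

Note that this sharing is between the containers of one task; two separate tasks each get their own copy of the volume.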
Yes. If you have provisioned storage inside the volumes section of the task definition, then your task gets its own non-persistent storage.
If you're appending logs to a file and there are 3 tasks running, then each of them will have its own log file.

Related

EFS storage growing too big

We have an ECS Fargate cluster that runs the fluentd application for collecting logs and routing them to Elasticsearch. Logs are buffered on disk (file buffer) before being routed to the destination. Since we are using Fargate, we mount the buffer path /var/log/fluentd/buffer/ onto EFS.
What we would ideally expect is that the data in the buffer path is flushed to Elasticsearch and the buffer directory is then deleted. However, we see a huge number of these buffer directories left over from tasks that died and were restarted several months ago.
So when an ECS task dies and comes back up again (autoscaling), it creates new buffer directories under /var/log/fluentd/buffer/ on the EFS mount, while the old task's directories are still held there. I am not sure if it is EFS that is holding on to these and remounting them onto the new tasks.
Is there a way to delete these stale directories from EFS and keep only the paths specific to the running tasks? At any given time, we have 5 tasks running in the service.
Any help is appreciated.
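One hedged sketch of a cleanup, assuming buffer directories untouched for several days have already been flushed and are safe to remove (the mount path and the 7-day threshold are assumptions, not values validated in the question):

```python
# cleanup_buffers.py - a sketch, not a vetted solution; assumes directories
# whose mtime is older than the threshold are stale leftovers from dead tasks.
import shutil
import time
from pathlib import Path

BUFFER_ROOT = Path("/var/log/fluentd/buffer")   # the EFS-backed buffer path
MAX_AGE_SECONDS = 7 * 24 * 3600                 # treat 7-day-old dirs as stale

now = time.time()
for entry in BUFFER_ROOT.iterdir():
    if entry.is_dir() and now - entry.stat().st_mtime > MAX_AGE_SECONDS:
        shutil.rmtree(entry)                    # remove a dead task's buffers
```

Run periodically from a task that mounts the same EFS file system, this would keep only the directories that running tasks have written to recently.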

Kubernetes Handling a Sudden Request of Processing Power (Such as a Python Script using 5 Processes)

I have a little scenario that I am running in my mind with the following setup:
A Django web server running in Kubernetes (Google Kubernetes Engine) with the ability to autoscale resources; I set the resource values to request nodes with 8 processing units (8 cores) and 16 GB RAM.
Because it is a web server, my frontend can call a Python script that executes with 5 processes, and here's what I am worried about:
I know that if I run this script twice on my web server (located in the same container as my Django code), I am going to be using (to keep it simple) 10 processes/CPUs to execute this code.
So what would happen?
Would the first Python script be run on Pod 1, and the second Python script (since we used 5 out of the 8 processing units) trigger a Pod 2 and another node, then run on that new replica with full access to 5 new processes?
Or would the first Python script be run on Replica 1, and the second Python script be throttled to 3 processing units because, perhaps, Kubernetes allocates based on CPU usage in the replica, not on how many processes I called the script with?
If your Django application launches scripts with subprocess or a similar mechanism, those subprocesses will always run in the same container as the main server, in the same pod, on the same node. You'll never trigger any of Kubernetes' autoscaling capabilities this way. If the pod has resource limits set, its CPU can get throttled, and if it exceeds the configured memory limit, the pod can get killed off (Django and all of its subprocesses together).
If you want to take better advantage of Kubernetes scheduling and resource management, you may need to restructure this application. Ideally you could run the Django server and each of the supporting tasks in a separate pod. You would then need a way to trigger the tasks and collect the results.
I'd generally build this by introducing a job queue such as RabbitMQ or Redis into the mix, with a task framework like Celery on top. The Django application adds items to the queue but doesn't do the work itself; you then have a worker for each of the processes that reads the queue and does the work. This is not directly tied to Kubernetes, and you could run this setup with a local RabbitMQ or Redis installation and a virtual environment.
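A minimal sketch of that shape, assuming a Redis broker; the broker URL and the task body are illustrative, not from the question:

```python
# tasks.py - workers run this with `celery -A tasks worker`
from celery import Celery

app = Celery("tasks", broker="redis://redis:6379/0", backend="redis://redis:6379/0")

@app.task
def crunch(numbers):
    # stand-in for the CPU-heavy work the Django view used to fork off
    return sum(n * n for n in numbers)
```

The Django view then calls crunch.delay([...]) and returns immediately; the workers run in their own pods, which is what gives the autoscalers something to act on.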
If you deploy this setup to Kubernetes, each of the task types can run in its own deployment, fed by the work queue. Each one runs up to its own configured memory and CPU limits, and they can run on different nodes. With a little extra work you can connect a horizontal pod autoscaler that scales the workers based on the length of the job queue, so if you're falling behind on one of the task types, the HPA can launch more workers, which creates more pods, which can in turn allocate more nodes.
The important detail here, though, is that a pod is the key unit of scaling; if all of your work stays within a single pod then you'll never trigger the horizontal pod autoscaler or the cluster autoscaler.

Task definitions AWS Fargate

Let us say I am defining a task definition in AWS Fargate, and this task definition will be used to start tasks for a multi-container application consisting of 2 web servers. How many task definitions would I need, how many tasks would I pay for, and how many services are created?
I have read a lot of documentation, but it does not click for me. Can anyone explain the relationship between task definitions, tasks, Docker containers, services and ECS Fargate clusters?
A task definition is a specification. You use it to define one or more containers (with image URIs) that you want to run together, along with other details such as environment variables, CPU/memory requirements, etc. The task definition doesn't actually run anything; it's a description of how things will be set up when something does run.
A task is an actual thing that is running. ECS uses the task definition to run the task: it downloads the container images and configures the runtime environment based on the other details in the task definition. You can run one or many tasks from any given task definition. Each running task is a set of one or more running containers, and the containers in a task all run on the same instance.
A service in ECS is a way to run N tasks all using the same task definition, and to keep those N tasks running if they happen to shut down unexpectedly. Those N tasks can run on different instances in EC2 (although some may land on the same instance, depending on the placement strategy used for the service); on Fargate, there are no instances and the tasks "just run", so you don't have to think about placement strategies. You can also use services to connect those tasks to a load balancer, so that requests from clients inside or outside of AWS are routed evenly across all N tasks. You can update the task definition used by a service, which triggers a rolling update (starting up and shutting down running tasks) so that all running tasks use the new version of the task definition after the deployment completes. This is used, for example, when you build a new container image and want the service to pick up the latest version.
A service is scoped to a cluster. A cluster is really just a name. Different clusters can have different IAM policies and roles, so that you can restrict who can create services in different clusters using IAM.
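As a hedged sketch of how these pieces relate, using boto3; every name, ARN, image, and subnet below is a hypothetical placeholder:

```python
import boto3

ecs = boto3.client("ecs")

# One task definition describing both web-server containers.
ecs.register_task_definition(
    family="two-webservers",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {"name": "frontend",
         "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/frontend:latest",
         "portMappings": [{"containerPort": 80}]},
        {"name": "backend",
         "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/backend:latest",
         "portMappings": [{"containerPort": 8080}]},
    ],
)

# One service keeping N copies (tasks) of that definition running in a cluster.
ecs.create_service(
    cluster="my-cluster",      # a cluster is little more than a name here
    serviceName="web",
    taskDefinition="two-webservers",
    desiredCount=3,            # you pay for 3 running tasks
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc12345def67890"],
            "assignPublicIp": "ENABLED",
        }
    },
)
```

So for the scenario in the question: one task definition (listing both web-server containers), one service, and you pay per running task, where each task bundles the two containers.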

Airflow Memory Error: Task exited with return code -9

According to both Link1 and Link2, my Airflow DAG run is returning the error INFO - Task exited with return code -9 due to an out-of-memory issue. My DAG run has 10 tasks/operators, and each task simply:
makes a query to get one of my BigQuery tables, and
writes the results to a collection in my Mongo database.
The sizes of the 10 BigQuery tables range from 1 MB to 400 MB, and the total size of all 10 tables is ~1 GB. My Docker container has the default 2 GB of memory, which I've increased to 4 GB; however, I am still receiving this error from a few of the tasks. This confuses me, as 4 GB should be plenty of memory. I am also concerned because, in the future, these tables may become larger (a single table query could be 1-2 GB), and I'd like to avoid these return code -9 errors then.
I'm not quite sure how to handle this issue, since the point of the DAG is to transfer data from BigQuery to Mongo daily, so the queries and the data held in memory by the DAG's tasks are necessarily fairly large, given the size of the tables.
As you said, the error message you get corresponds to an out-of-memory issue.
Referring to the official documentation:
DAG execution is RAM limited. Each task execution starts with two Airflow processes: task execution and monitoring. Currently, each node can take up to 6 concurrent tasks. More memory can be consumed, depending on the size of the DAG.
High memory pressure in any of the GKE nodes will lead the Kubernetes scheduler to evict pods from nodes in an attempt to relieve that pressure. While many different Airflow components are running within GKE, most don't tend to use much memory, so the case that happens most frequently is that a user uploaded a resource-intensive DAG. The Airflow workers run those DAGs, run out of resources, and then get evicted.
You can check this with the following steps:
In the Cloud Console, navigate to Kubernetes Engine -> Workloads
Click on airflow-worker, and look under Managed pods
If there are pods that show Evicted, click each evicted pod and look for the "The node was low on resource: memory" message at the top of the window.
What are the possible ways to fix the OOM issue?
Create a new Cloud Composer environment with a larger machine type than the current machine type.
Ensure that the tasks in the DAG are idempotent, which means that the result of running the same DAG run multiple times should be the same as the result of running it once.
Configure task retries by setting the number of retries on the task (see the sketch after this list) - this way, when your task gets -9'ed by the scheduler, it will go to up_for_retry instead of failed
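A minimal sketch of that retry setting, assuming Airflow 2.x; the DAG id and the callable are hypothetical stand-ins for the question's tasks:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def transfer_table():
    pass  # the BigQuery -> Mongo work for one table would go here

with DAG(
    dag_id="bq_to_mongo",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    default_args={
        "retries": 2,                       # a -9'ed task goes to up_for_retry
        "retry_delay": timedelta(minutes=5),
    },
) as dag:
    PythonOperator(task_id="transfer_table_1", python_callable=transfer_table)
```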
Additionally, you can check the CPU behavior:
In the Cloud Console, navigate to Kubernetes Engine -> Clusters
Locate Node Pools at the bottom of the page, and expand the default-pool section
Click the link listed under Instance groups
Switch to the Monitoring tab, where you can find CPU utilization
Ideally, the GCE instances shouldn't be running over 70% CPU at all times, or the Composer environment may become unstable under heavy resource usage.
I hope you find the above pieces of information useful.
I am going to chunk the data so that less is loaded into any one task at any given time. I'm not sure yet whether I will need to use GCS/S3 for intermediary storage.
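A hedged sketch of that chunking idea, reading the query results page by page so only one batch is in memory at a time; the connection URI, project/dataset/table names, and batch size are assumptions:

```python
from google.cloud import bigquery
from pymongo import MongoClient

BATCH_SIZE = 10_000

bq = bigquery.Client()
collection = MongoClient("mongodb://mongo:27017")["mydb"]["mytable"]

# result(page_size=...) streams rows instead of materializing the whole table
rows = bq.query("SELECT * FROM `myproject.mydataset.mytable`").result(
    page_size=BATCH_SIZE
)

batch = []
for row in rows:
    batch.append(dict(row))
    if len(batch) >= BATCH_SIZE:
        collection.insert_many(batch)
        batch = []
if batch:
    collection.insert_many(batch)   # flush the final partial batch
```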

AWS Fargate vs Batch vs ECS for a once a day batch process

I have a batch process, written in PHP and embedded in a Docker container. Basically, it loads data from several webservices, does some computation on the data (for ~1 h), and posts the computed data to another webservice; then the container exits (with a return code of 0 if OK, 1 if something failed during the process). During the process, some logs are written to STDOUT or STDERR. The batch must be triggered once a day.
I was wondering what the best AWS services are to schedule, execute, and monitor my batch process:
At the very beginning, I used an EC2 machine with a crontab: no high availability there, so I decided to switch to a more PaaS approach.
Then I used Elastic Beanstalk for Docker, with a non-functional web server (only there to answer the health check) and a crontab inside the container to wake up my batch command once a day. With an autoscaling rule of min=1, max=1, I had HA (if the container or the VM crashed, it was restarted by AWS).
But now, to be more efficient, I decided to move to ECS, with an approach where I do not need EC2 instances awake 23 hours a day for nothing. So I tried Fargate:
With Fargate, I defined my task (Fargate launch type, not EC2) and configured everything on it.
I created a cluster to run my task: I can run my task by hand, one time, so I know all the settings are correct.
Now, going deeper into Fargate, I want my task executed once a day.
It seems to work fine when I use the Scheduled Task feature of ECS: the container starts on time, the process runs, then the container stops. But CloudWatch is missing some metrics: CPUReservation and CPUUtilization are not reported. Also, there is no way to know whether the batch exited with code 0 or 1 (every execution ends with status "STOPPED"), so I can't raise a CloudWatch alarm when the container execution fails.
I also tried the Services feature of Fargate, but it can't handle a batch process, because the container is restarted every time it stops. This is normal, because the container does not run a daemon, and there is no way to schedule a service. I want my container to be active only when it needs to work (once a day, for at most 1 h). With a service, though, the missing metrics are correctly reported in CloudWatch.
So here is my question: what are the most suitable AWS managed services to trigger a container once a day, let it run to do its task, and provide reporting to track execution (CPU usage, batch duration), including an alarm (SNS) when the task fails?
We had the same issue identifying failed jobs. I propose you take a look at AWS Batch, where logs for FAILED jobs are available in CloudWatch Logs.
One more thing you should consider is the total cost of ownership of whatever solution you eventually choose. Fargate, in this regard, is quite expensive.
This may be too late for your project, but it could still benefit others.
Have you had a look at AWS Step Functions? It is possible to define a workflow that starts tasks on ECS/Fargate (or jobs on EKS, for that matter), waits for the results, and raises alarms / sends emails...
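As a hedged sketch of that idea, a state machine definition (Amazon States Language, built here as a Python dict) could run the Fargate task synchronously and route a failure, including a non-zero exit of the container, to an SNS notification; all ARNs, names, and subnets below are hypothetical:

```python
import json

state_machine = {
    "StartAt": "RunBatch",
    "States": {
        "RunBatch": {
            "Type": "Task",
            # .sync makes Step Functions wait until the task stops; a failed
            # task (e.g. non-zero exit of an essential container) fails the state
            "Resource": "arn:aws:states:::ecs:runTask.sync",
            "Parameters": {
                "LaunchType": "FARGATE",
                "Cluster": "arn:aws:ecs:eu-west-1:123456789012:cluster/batch",
                "TaskDefinition": "arn:aws:ecs:eu-west-1:123456789012:task-definition/php-batch:1",
                "NetworkConfiguration": {
                    "AwsvpcConfiguration": {
                        "Subnets": ["subnet-0abc12345def67890"],
                        "AssignPublicIp": "ENABLED",
                    }
                },
            },
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "NotifyFailure"}],
            "End": True,
        },
        "NotifyFailure": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sns:publish",
            "Parameters": {
                "TopicArn": "arn:aws:sns:eu-west-1:123456789012:batch-alerts",
                "Message": "Daily batch failed",
            },
            "End": True,
        },
    },
}
print(json.dumps(state_machine, indent=2))
```

An EventBridge (CloudWatch Events) scheduled rule can then start one execution per day, and the execution history records duration and the failure cause.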