EC2 instance profile vs task execution role in ECS - amazon-ecs

What is the difference between an EC2 instance profile and a task execution role in AWS ECS? As I understand it, both serve the same purpose when we deploy an ECS cluster using EC2 instances. Does the task execution role only apply to Fargate?
Please help me understand the difference between these two concepts.

Related

Task or Container scale-in protection in AWS ECS Fargate

I have an ECS Fargate service which uses CloudWatch alarms to scale in/out via service auto scaling. The task containers have long processing times (up to 40 minutes) and I don't want a running container to get killed when a scale-in happens. Is there a way to do that for an ECS task/service?
PS: I have looked at the stopTimeout property in a task definition, but its maximum value is only 120 seconds. I have also looked at scale-in protection for EC2 instances, but haven't found any such solution for an ECS Fargate task.
Support for ECS task scale-in protection was released on 2022-11-10: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-scale-in-protection.html
In summary, you can use the new ECS container agent endpoint from inside a task to mark it as protected:
curl --request PUT --header 'Content-Type: application/json' \
  ${ECS_AGENT_URI}/task-protection/v1/state \
  --data '{"ProtectionEnabled":true}'
Alternatively, you can use the UpdateTaskProtection API to achieve the same result from outside the task: https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_UpdateTaskProtection.html
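As a rough sketch of the second approach using the AWS CLI (the cluster name and task ID below are placeholders), you can mark a task as protected, with an optional expiry:

# Protect a running task for 60 minutes (identifiers are hypothetical)
aws ecs update-task-protection \
  --cluster my-cluster \
  --tasks <task-id> \
  --protection-enabled \
  --expires-in-minutes 60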

Airflow with KubernetesExecutor workers (EKS) and webserver+scheduler on EC2

I wanted to know if it's possible to set up the KubernetesExecutor in Airflow while keeping the webserver and scheduler running on an EC2 instance.
Meaning that tasks would run on Kubernetes pods (EKS in my case), but the base services would run on a regular EC2 instance.
I tried to find information about the issue but came up short.
The following quote is from Airflow's docs, and it's the reason I'm asking this question:
KubernetesExecutor runs as a process in the Airflow Scheduler. The scheduler itself does not necessarily need to be running on Kubernetes, but does need access to a Kubernetes cluster.
Thanks in advance!
Yes, this is entirely possible.
You just need to run your airflow scheduler and airflow webserver on EC2 and configure the EC2 instance to have all the necessary access (likely via a service account, but this is your decision and deployment configuration) to be able to spin up pods on your EKS cluster.
Nothing special about it, besides that you will have to learn how to run and configure the components to talk to each other. There are no ready-to-use recipes; you will simply have to follow Airflow's configuration parameters and set up the authentication scheme that you need.
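As a rough sketch (environment variable names follow Airflow 2.x conventions; the kubernetes_executor config section is named kubernetes in releases before 2.5, and the paths, cluster name, and region here are placeholders), the EC2-side setup boils down to selecting the executor and pointing it at a kubeconfig for the EKS cluster:

# Select the executor (Airflow 2.x env-var convention)
export AIRFLOW__CORE__EXECUTOR=KubernetesExecutor
# The scheduler runs outside the cluster, so use a kubeconfig rather than in-cluster auth
export AIRFLOW__KUBERNETES_EXECUTOR__IN_CLUSTER=False
export AIRFLOW__KUBERNETES_EXECUTOR__KUBE_CONFIG_PATH=/home/airflow/.kube/config
export AIRFLOW__KUBERNETES_EXECUTOR__NAMESPACE=airflow

# Generate the kubeconfig for the EKS cluster (name and region are hypothetical)
aws eks update-kubeconfig --name my-eks-cluster --region eu-west-1 \
  --kubeconfig /home/airflow/.kube/config

# Then run the base services on the EC2 instance as usual
airflow scheduler &
airflow webserver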

Can I run multiple services in same go on same ECS fargate Cluster

I have an ECS Fargate cluster up and running, with one service and one task definition attached to it.
The task definition already describes two container images.
Can I create a new service for another application and configure it with this existing ECS cluster?
If yes, will both services run simultaneously?
From the AWS documentation on Amazon ECS clusters:
An Amazon ECS cluster is a logical grouping of tasks or services. Your tasks and services are run on infrastructure that is registered to a cluster.
So I believe you should be able to run multiple services in the same cluster, each running from its own task definition.
Source Documentation - https://docs.aws.amazon.com/AmazonECS/latest/developerguide/clusters.html
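For instance, a second service can be created against the same cluster with its own task definition via the AWS CLI (all names, revisions, and network IDs below are placeholders):

# Create a second service on the existing cluster, pointing at a separate task definition
aws ecs create-service \
  --cluster my-existing-cluster \
  --service-name second-app-service \
  --task-definition second-app:1 \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-abc123],securityGroups=[sg-abc123],assignPublicIp=ENABLED}'

Both services then run side by side, scheduled onto the same cluster.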

How to connect to AWS ECS cluster?

I have successfully created an ECS cluster (EC2 Linux + Networking). Is it possible to log in to the cluster to perform some administrative tasks? I have not deployed any containers or tasks to it yet. I can't find any hints for it in the AWS console or the AWS documentation.
The "cluster" is just a logical grouping of resources. The "cluster" itself isn't a server you can log into or anything. You would perform actions on the cluster via the AWS console or the AWS API. You can connect to the EC2 servers managed by the ECS cluster individually. You would do that via the standard ssh method you would use to connect to any other EC2 Linux server.
ECS takes care of most of the administrative work for you. You simply have to deploy and manage your applications on ECS. If you set up ECS correctly, you will never have to connect to the instances.
Follow these instructions to deploy your service (docker image): https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service.html
Also you can use Cloudwatch to store container logs, so that you don't have to connect to instances to check the logs: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
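With the awslogs driver enabled in the task definition, the container logs land in a CloudWatch Logs group that you can read without touching the instances; for example, with AWS CLI v2 (the log group name is an assumption):

# Tail the task's log group from your workstation
aws logs tail /ecs/my-task-definition --follow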

Deploying K8S cluster without default worker pool in IBM Cloud

Good day to you.
I am implementing VPC and K8S modules for Terraform to deploy a complete virtual datacenter, including compute resources, in the IBM managed cloud. I would like to have full control of the worker pool attributes, like
name
flavor
zone
size
and therefore I would like to delete the default worker pool. This should ideally happen during the deployment by Terraform.
Does anyone know whether it is possible?
I tried to set the worker count to zero and define a specific worker pool, but this creates a cluster with two worker pools and one worker in the default pool.
Best regards.
Jan
@Jan-Hendrik Palic unfortunately, the IBM Cloud Kubernetes Service API does not support this scenario at the moment. Because Terraform uses the API, there is currently no way to create a cluster without the default worker pool.