Issue mounting an EFS access point from an AWS ECS Fargate task (CloudFormation related)

I am using AWS CloudFormation to provision some resources. Part of it is to create an ECS task definition that mounts an EFS access point. The template also defines a custom resource, backed by a Python Lambda function, which runs the ECS Fargate task. However, when I create a stack from the CloudFormation template to provision everything, the ECS task fails to mount the EFS file system through the access point with the following error message:
ResourceInitializationError: failed to invoke EFS utils commands to set up EFS volumes: stderr: Failed to resolve "fs-082b4402fbb9c9972.efs.us-east-1.amazonaws.com" - check that your file system ID is correct, and ensure that the VPC has an EFS mount target for this file system ID. See https://docs.aws.amazon.com/console/efs/mount-dns-name for more detail. Attempting to lookup mount target ip address using botocore. Failed to import necessary dependency botocore, please install botocore first. : unsuccessful EFS utils command execution; code: 1
I have seen a similar error before, when I created the ECS task in an AZ without a mount target, but that is definitely not the case here.
If I run the ECS task manually from the console, or start it from a local Python script, there is no error at all.
Since the CloudFormation template is a set of nested templates which create the VPC and all other resources together, I am not sure whether the CloudFormation custom resource (the Lambda calling the ECS task) should have more resources listed under DependsOn:. I have already added the mount targets and the access point to DependsOn:.
I also tried separating the custom resource into another file so that this part is only created after all other parts of the stack are completed. However, the result is the same.
PS: I added a 300-second delay to the Lambda function that calls the ECS task, and it then worked normally. Curiously, when I later recreated the original stack without the delay, that also worked. I am just wondering what the problem was.
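For what it's worth, the symptoms (a fixed delay helps, and re-creating the stack later also succeeds) are consistent with the newly created mount targets not yet being available, or their DNS names not yet resolvable, when the task launches. If that is the cause, a more robust fix than a fixed sleep is to have the Lambda poll for mount-target availability before running the task. A minimal sketch, assuming boto3 is available in the Lambda runtime and the file system ID is passed in (the run_task arguments are placeholders):

import time
import boto3

efs = boto3.client("efs")
ecs = boto3.client("ecs")

def wait_for_mount_targets(file_system_id, timeout=300, interval=10):
    # Poll until every mount target on the file system reports 'available'.
    deadline = time.time() + timeout
    while time.time() < deadline:
        targets = efs.describe_mount_targets(FileSystemId=file_system_id)["MountTargets"]
        if targets and all(t["LifeCycleState"] == "available" for t in targets):
            return
        time.sleep(interval)
    raise TimeoutError(f"Mount targets for {file_system_id} not available in time")

# Hypothetical usage inside the custom-resource handler:
# wait_for_mount_targets("fs-082b4402fbb9c9972")
# ecs.run_task(cluster=..., taskDefinition=..., launchType="FARGATE", ...)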

Related

AWS ECS Fargate: enforce readonlyRootFilesystem

I need to enforce 'readonlyRootFilesystem' on ECS Fargate services to resolve Security Hub findings.
I thought it would be as easy as setting it to true in the task definition.
But it backfired: the service does not deploy, because the commands in the Dockerfile fail without write access to their folders, and this is also incompatible with SSM execute-command, so I can't get inside the container.
I managed to set readonlyRootFilesystem to true and get my service back up by mounting volumes: a tmp volume that the container uses to install dependencies at startup, and a data volume to store data (updates).
So now, according to the documentation, the Security Hub finding should be resolved, since the rule only requires that the variable not be false, but Security Hub is still flagging the task as non-compliant.
---More update---
The task definition of my service also spins up a Datadog image for monitoring. That also needs a read-only filesystem to satisfy Security Hub.
Here I cannot solve it as above, because the Datadog agent needs access to the /etc/ folder, and if I mount a volume there I lose its files and the service won't start.
Is there a way out of this?
Any ideas?
In case someone stumbles onto this:
The solution (or workaround, call it as you please) was to set readonlyRootFilesystem to true for both the container and the sidecar (Datadog in this case) and use bind mounts.
The rules for monitoring ECS using Datadog can be found here.
The bind mounts you need to add for your service depend on how you have set up your Dockerfile.
In my case it was a matter of adding a volume for downloaded data.
Moreover, since ECS exec (SSM) does not work with a read-only filesystem, you also have to add mounts if you want it: I added two mounts at /var/lib/amazon and /var/log/amazon. This keeps SSM working (basically docker exec into your container).
As for Datadog, I just needed to fix the mounts so that the agent could work. In my case, since it was again a custom image, I mounted a volume at /etc/datadog-agent.
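A minimal sketch of what the resulting task definition looks like, registered via boto3 (the family, image names, and volume names are placeholders; the mount paths are the ones described above):

import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="app-with-datadog",  # placeholder
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    volumes=[{"name": "ssm-lib"}, {"name": "ssm-log"}, {"name": "dd-etc"}],
    containerDefinitions=[
        {
            "name": "app",
            "image": "my-app:latest",  # placeholder
            "readonlyRootFilesystem": True,
            "mountPoints": [
                # Writable paths so ECS exec (SSM) keeps working.
                {"sourceVolume": "ssm-lib", "containerPath": "/var/lib/amazon"},
                {"sourceVolume": "ssm-log", "containerPath": "/var/log/amazon"},
            ],
        },
        {
            "name": "datadog-agent",
            "image": "public.ecr.aws/datadog/agent:latest",
            "readonlyRootFilesystem": True,
            "mountPoints": [
                # Writable config directory so the agent can start.
                {"sourceVolume": "dd-etc", "containerPath": "/etc/datadog-agent"},
            ],
        },
    ],
)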
happy days!

AWS ECS Blue Green Deployments - CloudFormation Error

I am trying to execute a blue/green deployment of an ECS task in AWS using the CloudFormation approach (as documented here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/blue-green.html), and the deployment fails.
The initial stack deployment works fine and the ECS task is deployed and running as expected, with the correct load balancer, target group, etc. However, when updating the task definition to trigger a blue/green deployment, it fails with the message:
Imports and exports are currently not supported on templates using hooks
The deployment is created in CodeDeploy, so it's obviously triggered as expected, but the deployment screen in AWS console shows the following error:
The deployment failed because the stack update that triggered this CodeDeploy deployment failed in CloudFormation. In the AWS CloudFormation console, go to the Events tab to view status and error messages.
But the puzzling thing is that the CloudFormation template does not appear to contain any imports or exports. I have even tried copying the YAML from the documented example and it doesn't work.
I'm executing the CloudFormation updates using the Serverless Framework, but I don't think that's the issue; the error is logged in the CloudFormation stack Events tab.
Probably not unreasonable to expect the example in the AWS documentation to work?
We did find the cause of this issue, and in fact the problem was caused by running the CloudFormation template via the Serverless Framework.
The Serverless approach works for all our other AWS deployments, but the CodeDeploy blue/green hook explicitly requires that there be no outputs in the template; however, Serverless adds the name of the S3 deployment bucket it uses as an output, which breaks this particular use case.
Therefore the solution was to invoke the CloudFormation template directly from the AWS CLI, and it works perfectly.
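For reference, a minimal sketch of driving the update directly, here via boto3 rather than the aws cloudformation CLI (stack name and template path are placeholders):

import boto3

cfn = boto3.client("cloudformation")

# Read the template as-is: no Serverless post-processing, so no
# injected Outputs section to trip up the blue/green hook.
with open("template.yml") as f:  # placeholder path
    template_body = f.read()

cfn.update_stack(
    StackName="my-ecs-bluegreen-stack",  # placeholder
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_IAM"],
)
cfn.get_waiter("stack_update_complete").wait(StackName="my-ecs-bluegreen-stack")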

Discover AWS ECS cluster association from running container (self managed cluster)

I'm working with ECS using self-managed, EC2-based clusters. We have one cluster for each environment: dev/stage/prod.
I'm struggling to make my containers in ECS aware of which cluster/environment they start in, so that at task start-up they can configure themselves properly without having to bake env-specific config into the images.
It would be really easy if there were some command to run inside the container that returned the cluster name. It seems like that should be easy. I can think of a few suboptimal ways to do this: get the container/host IP and look up the instance, try to grab /etc/ecs/ecs.config from the host instance, etc.
It seems like there should be a better way. My Google skills are failing... thanks!
The ECS Task Metadata endpoint, available at ${ECS_CONTAINER_METADATA_URI_V4}/task within any ECS task, will return the cluster name, among other things.
Alternatively, if you were using an IaC tool such as Terraform or CloudFormation to build your ECS tasks, it would be trivial to inject the cluster name as an environment variable in the tasks.
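A minimal sketch of the metadata-endpoint approach, using only the standard library (the Cluster field may be a bare name or a full ARN depending on launch type, so the last path segment is taken either way):

import json
import os
import urllib.request

# The ECS agent injects this variable into every container it starts.
metadata_uri = os.environ["ECS_CONTAINER_METADATA_URI_V4"]

with urllib.request.urlopen(metadata_uri + "/task") as resp:
    task_metadata = json.load(resp)

# "Cluster" may be e.g. "prod" or "arn:aws:ecs:us-east-1:123456789012:cluster/prod".
cluster = task_metadata["Cluster"].split("/")[-1]
print(cluster)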
Mark B's answer is better but before I got that I found this solution:
Add ECS_ENABLE_CONTAINER_METADATA=true to the /etc/ecs/ecs.config file on the EC2 host, and each container will get a metadata file, with its path available in the ECS_CONTAINER_METADATA_FILE environment variable. See:
[Ecs Container Metadata File][1]
I think Mark's answer is better because this solution involves editing the user data script for the host instances.
[1]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container-metadata.html
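A quick sketch of reading the cluster from that file once the option above is enabled (the agent exposes the file's path in the ECS_CONTAINER_METADATA_FILE environment variable):

import json
import os

# Path to the per-container metadata file, set by the ECS agent when
# ECS_ENABLE_CONTAINER_METADATA=true is configured on the host.
with open(os.environ["ECS_CONTAINER_METADATA_FILE"]) as f:
    metadata = json.load(f)

print(metadata["Cluster"])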

Unable to access EFS from ECS Fargate task

I am trying to launch a Fargate task that uses an EFS volume.
When starting the task from the ECS console, I get this error:
ResourceInitializationError: failed to invoke EFS utils commands to set up EFS volumes: stderr: Failed to resolve "fs-019a4b2d1774c5586.efs.eu-west-1.amazonaws.com" - check that your file system ID is correct, and ensure that the VPC has an EFS mount target for this file system ID. See https://docs.aws.amazon.com/console/efs/mount-dns-name for more detail. Attempting to lookup mount target ip address using botocore. Failed to import necessary dependency botocore, please install botocore first. : unsuccessful EFS utils command execution; code: 1
The file system ID is correct. I've mounted the volume from an EC2 instance in the same VPC, all good.
I'm following the steps defined here: https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-mount-efs-containers-tasks/?nc1=h_ls
I cannot figure out where to specify the outbound rule for the ECS service or task.
Thanks in advance.
As @MarkB stated, I've edited the outbound rule and added port 2049 (NFS) to the EFS security group, and it's working fine.
Basically, the ECS task's security group should allow SSH ingress and outbound NFS (port 2049) to the mount target's security group, and the mount target's security group should allow inbound NFS on port 2049.
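A minimal sketch of the mount-target side of that rule in boto3 (both security group IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Allow inbound NFS (TCP 2049) to the EFS mount target's security group,
# sourced from the security group the Fargate task runs with.
ec2.authorize_security_group_ingress(
    GroupId="sg-0aaaabbbbccccdddd",  # placeholder: mount target SG
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 2049,
            "ToPort": 2049,
            "UserIdGroupPairs": [{"GroupId": "sg-0eeeeffff00001111"}],  # placeholder: task SG
        }
    ],
)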

AWS ECS scheduled task with CloudWatch

I am trying to create a scheduled task with CloudWatch.
I am using this page:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-events-rule-target.html
The problem I see is that when I run a task normally, AWS asks for:
VPC
subnets
launch type
But when I use a CloudWatch target, it doesn't ask for VPC, subnets, etc. Why is that?
CloudFormation has not been updated to accommodate some Fargate functionality yet. If you get an error while trying to deploy an ECS task from CloudFormation, try using the command line interface (aws events put-targets) instead, which allows you to add a target that contains the required ECS parameters for launch type and network config.
Here is an example of how I configured my ECS tasks to be deployed from the CLI instead of CloudFormation:
1. Add the VPC/subnet config to a variable, NETWORK_CONFIGURATION:
NETWORK_CONFIGURATION='{"awsvpcConfiguration":{"AssignPublicIp":"ENABLED","SecurityGroups":["'${AWS_NETWORKCONFIG_SECURITY_GROUP}'"],"Subnets":["'${AWS_NETWORKCONFIG_SUBNET}'"]}}'
2. Run the following command to deploy your task, which takes the VPC config from the variable declared above:
aws events put-targets \
  --rule events-rule--${TASK_NAME} \
  --targets '[{"Arn":"arn:aws:ecs:'${AWS_REGION}':'${AWS_ACCOUNT_ID}':cluster/ecs-cluster-1","EcsParameters":{"LaunchType":"FARGATE","NetworkConfiguration":'${NETWORK_CONFIGURATION}',"TaskCount":1,"TaskDefinitionArn":"arn:aws:ecs:'${AWS_REGION}':'${AWS_ACCOUNT_ID}':task-definition/ecs-task-'${TASK_NAME}'"},"Id":"ecs-targets-'${TASK_NAME}'","RoleArn":"arn:aws:iam::'${AWS_ACCOUNT_ID}':role/ecsEventsRole"}]'