AWS ECS scheduled task with CloudWatch - aws-cloudformation

I am trying to create a scheduled task with CloudWatch.
I am using this page
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-events-rule-target.html
The problem I see is that when I run a task normally, AWS asks for:
VPC
Subnets
Launch type
But when I use a CloudWatch target, it doesn't ask for the VPC, subnets, etc. Why is that?

CloudFormation has not been updated to accommodate some Fargate functionality yet. If you get an error while trying to deploy an ECS task from CloudFormation, try using the command line interface (aws events put-targets) instead, which allows you to add a target that contains the required ECS parameters for launch type and network configuration.
Here is an example of how I configured my ECS tasks to be deployed from the CLI instead of CloudFormation:
1. Add the VPC/subnet config to a variable, NETWORK_CONFIGURATION:
NETWORK_CONFIGURATION='{"awsvpcConfiguration":{"AssignPublicIp":"ENABLED","SecurityGroups": ["'${AWS_NETWORKCONFIG_SECURITY_GROUP}'"],"Subnets":["'${AWS_NETWORKCONFIG_SUBNET}'"]}}'
2. Run the following command to deploy your task, which will take the VPC config from the variable declared above:
aws events put-targets \
--rule events-rule--${TASK_NAME} \
--targets '{"Arn":"arn:aws:ecs:'${AWS_REGION}':'${AWS_ACCOUNT_ID}':cluster/ecs-cluster-1","EcsParameters":{"LaunchType":"FARGATE","NetworkConfiguration":'${NETWORK_CONFIGURATION}',"TaskCount": 1,"TaskDefinitionArn": "arn:aws:ecs:'${AWS_REGION}':'${AWS_ACCOUNT_ID}':task-definition/ecs-task-'${TASK_NAME}'"},"Id": "ecs-targets-'${TASK_NAME}'","RoleArn": "arn:aws:iam::'${AWS_ACCOUNT_ID}':role/ecsEventsRole"}' \
;
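To confirm the target was registered with the expected launch type and network configuration, you can list the targets on the rule (a quick check, assuming the same variable names as above):
aws events list-targets-by-rule --rule events-rule--${TASK_NAME}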

Related

Using the AWS CLI, how to get a list of task IDs per ECS service

Using the AWS CLI, how do I get a list of task IDs per ECS service? When I use describe-services, it does not list task ID details, only the count of tasks.
You would use the aws ecs list-tasks --cluster <cluster-name> --service-name <service-name> command to get a list of tasks for a specific service.
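As a sketch of the full flow (the cluster and service names here are placeholders), you can feed the returned task ARNs straight into describe-tasks to get per-task details:
TASK_ARNS=$(aws ecs list-tasks --cluster my-cluster --service-name my-service --query 'taskArns[]' --output text)
aws ecs describe-tasks --cluster my-cluster --tasks $TASK_ARNS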

Issue with mounting EFS access point from an AWS ECS Fargate task (Cloudformation related)

I am using AWS CloudFormation to provision some resources. Part of it is to create an ECS task definition that will mount an EFS access point. A custom resource is defined in CloudFormation through which a Lambda function in Python runs the ECS Fargate task. However, when I create a stack from the CloudFormation template to provision everything, the ECS task fails to mount the EFS through the access point with the following error message:
ResourceInitializationError: failed to invoke EFS utils commands to set up EFS volumes: stderr: Failed to resolve "fs-082b4402fbb9c9972.efs.us-east-1.amazonaws.com" - check that your file system ID is correct, and ensure that the VPC has an EFS mount target for this file system ID. See https://docs.aws.amazon.com/console/efs/mount-dns-name for more detail. Attempting to lookup mount target ip address using botocore. Failed to import necessary dependency botocore, please install botocore first. : unsuccessful EFS utils command execution; code: 1
I have seen a similar error before, when I created the ECS task in an AZ without a mount target, but that is definitely not the case here.
If I run the ECS task manually from the console, or run local Python code to run the ECS task, there is no error at all.
Since the CloudFormation template is a set of nested templates that create the VPC and all other resources together, I am not sure whether the CloudFormation custom resource (the Lambda calling the ECS task) should have more DependsOn: resources. I have already added the mount targets and access point to DependsOn:.
I tried to separate the CloudFormation custom resource into another file so that this part is only created after all other parts of the stack are complete. However, the result is the same.
PS: I added a 300-second delay to the Lambda function that calls the ECS task, and it works normally afterwards. Then I tried to create the original stack again without the 300-second delay, and the result was also positive. I just wondered what the problem was.
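For reference, one way to check whether the mount targets were actually in the available state when the task launched (a sketch using the file system ID from the error message above):
aws efs describe-mount-targets --file-system-id fs-082b4402fbb9c9972 --query 'MountTargets[].{AZ:AvailabilityZoneName,State:LifeCycleState}'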

Jenkins cron job to run selenium & k8s

I am working on a project in which I have created a k8s cluster to run Selenium Grid locally. I want to schedule the tests to run, and so far I have tried to create a Jenkins cron job to do so. For that I am using the Kubernetes plugin in Jenkins.
However, I am not sure about the steps to follow. Where should I upload the kubeconfig file? There are a few options here:
Build Environment in Jenkins
Any ideas or suggestions?
Thanks
Typically, you can choose any option, depending on how you want to manage the system, I believe:
The secret text or file option will allow you to copy/paste a secret (with a token) into Jenkins, which will be used to access the k8s cluster. Token-based access works by adding an HTTP header to your requests to the k8s API server, as follows: Authorization: Bearer $YOUR_TOKEN. This authenticates you to the server and is the programmatic way to access the k8s API.
The configure kubectl option will allow you to specify the config file within the Jenkins UI, where you set the kubeconfig. This is the imperative/scripted way of configuring access to the k8s API. The kubeconfig itself contains a set of key-pair-based credentials that are issued to a username and signed by the API server's CA.
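For the token-based option, a minimal sketch of such a request (the API server address is a placeholder and $YOUR_TOKEN is assumed to hold a valid token):
curl --header "Authorization: Bearer $YOUR_TOKEN" https://<k8s-api-server>/api/v1/namespaces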
Any way would work fine! Hope this helps!
If Jenkins is running in Kubernetes as well, I'd create a service account, create the necessary Role and RoleBinding that only allow creating CronJobs, and attach that service account to your Jenkins Deployment or StatefulSet. You can then use the service account's token (by default mounted under /var/run/secrets/kubernetes.io/serviceaccount/token) and query your API endpoint to create your CronJobs.
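A minimal sketch of that setup with kubectl (the namespace and resource names are hypothetical):
kubectl create serviceaccount jenkins-cron --namespace ci
kubectl create role cronjob-manager --namespace ci --verb=create,get,list,watch,delete --resource=cronjobs
kubectl create rolebinding jenkins-cron --namespace ci --role=cronjob-manager --serviceaccount=ci:jenkins-cron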
However, if Jenkins is running outside of your Kubernetes cluster, I'd authenticate against your cloud provider in Jenkins using one of the plugins available, using:
Service account (GCP)
Service principal (Azure)
AWS access and secret key or with an instance profile (AWS).
and then would run any of the CLI commands to generate a kubeconfig file:
gcloud container clusters get-credentials
az aks get-credentials
aws eks update-kubeconfig
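For reference, the same commands with example arguments (the cluster names, zone, region, and resource group below are placeholders):
gcloud container clusters get-credentials my-cluster --zone us-central1-a
az aks get-credentials --resource-group my-rg --name my-cluster
aws eks update-kubeconfig --name my-cluster --region us-east-1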

Access AWS Tags inside ECS docker containers

I am creating an ECS service with a few resource tags through a CloudFormation template. Once the service is up and running, is there a way I can access these AWS tags from within the container?
I was wondering if there is a way to make them available as environment variables in the container.
Run one of the following commands from within the container:
list-tags-for-resource CLI command with the task ARN:
aws ecs list-tags-for-resource --resource-arn arn:aws:ecs:<region>:<Account_Number>:task/test/186de825c8EXAMPLE10bf1c3bb142
list-tags-for-resource CLI command with the service ARN:
aws ecs list-tags-for-resource --resource-arn arn:aws:ecs:<region>:<Account_Number>:service/test/service
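If you want the container to look up its own task ARN rather than hard-coding it, one sketch (assuming the task metadata endpoint v4 is available, curl and jq are installed in the image, and the task role allows ecs:ListTagsForResource) is:
TASK_ARN=$(curl -s ${ECS_CONTAINER_METADATA_URI_V4}/task | jq -r '.TaskARN')
aws ecs list-tags-for-resource --resource-arn ${TASK_ARN}
You could then export the returned tags as environment variables in your entrypoint script if you need them in that form.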

How to set up secrets in ECS task definition for container environment variable?

I am trying to set up the AWS ECS task definition of my Docker frontend container to point to an AWS backend URL.
In my .env.production:
REACT_APP_HOST=secrets.BACKEND_URL
How should I modify my secrets format or syntax so that the container environment variable I set in my ECS task definition is used correctly?
key: BACKEND_URL value:xxxxx
Thanks
You need to use the Secrets block in the ECS task definition; at run time, ECS will retrieve the secret value and inject it as an environment variable into the container.
Some docs; the setup is similar whether you use CloudFormation, the CLI, or Terraform:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-taskdefinition.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ecs-taskdefinition-secret.html
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data-secrets.html
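A minimal sketch of what the secrets block looks like in a container definition registered via the CLI (the family, image, role, and secret ARN below are placeholders, and the task needs an execution role that can read the secret):
aws ecs register-task-definition \
    --family frontend \
    --requires-compatibilities FARGATE \
    --network-mode awsvpc \
    --cpu 256 --memory 512 \
    --execution-role-arn arn:aws:iam::<Account_Number>:role/ecsTaskExecutionRole \
    --container-definitions '[{"name":"frontend","image":"<image-uri>","secrets":[{"name":"BACKEND_URL","valueFrom":"arn:aws:secretsmanager:<region>:<Account_Number>:secret:backend-url"}]}]'
The container then sees BACKEND_URL as a regular environment variable at run time.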