ECS CLI efsVolumeConfiguration Docker Compose / ecs-params.yml

I have a service on ECS deployed through ecs-cli compose service up.
The ECS/EFS documentation says to configure your task definition like this to mount an EFS volume in an ECS container:
{
  "containerDefinitions": [
    {
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80,
          "protocol": "tcp"
        }
      ],
      "essential": true,
      "mountPoints": [
        {
          "containerPath": "/usr/share/nginx/html",
          "sourceVolume": "efs-html"
        }
      ],
      "name": "nginx",
      "image": "nginx"
    }
  ],
  "volumes": [
    {
      "name": "efs-html",
      "efsVolumeConfiguration": {
        "fileSystemId": "fs-1234",
        "rootDirectory": "/path/to/my/data"
      }
    }
  ],
  "family": "nginx-efs"
}
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_efs.html#efs-create
How does that translate to docker-compose/ecs-params.yml syntax?

This was resolved in 1019. Documentation can be found here.

It does not seem to be supported just yet. See https://github.com/aws/amazon-ecs-cli/issues/1009
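For reference, once support landed in the ECS CLI, the task definition above maps to an ecs-params.yml roughly like the following (a sketch based on the ECS CLI parameter docs; fs-1234 and the path are the placeholders from the question):

```yaml
version: 1
task_definition:
  efs_volumes:
    - name: efs-html                     # matches the sourceVolume name in the compose file
      filesystem_id: fs-1234             # EFS file system ID
      root_directory: /path/to/my/data   # directory within EFS to mount
```

The compose service then mounts the volume by name, e.g. efs-html:/usr/share/nginx/html under its volumes key.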

Related

'Create service' for a cluster with EC2 launch type gives an error in the AWS console

Trying out the sample ECS with EC2 launch type in the AWS free tier.
Created a cluster with an EC2 instance.
Then created a task definition for EC2 resources with image URI public.ecr.aws/ubuntu/nginx:latest, OS Linux/X86_64, on a t2.micro instance.
While creating/deploying the service, I get an error after selecting the task definition I created.
There was an error deploying nginx-service
Resource handler returned message: "Error occurred during operation 'ECS Deployment Circuit Breaker was triggered'." (RequestToken: 1ab71394-b41e-190a-df10-6a87d62a7915, HandlerErrorCode: GeneralServiceException)
Task definition JSON:
{
  "taskDefinitionArn": "arn:aws:ecs:ap-northeast-1:930446195568:task-definition/ecs-task-def:1",
  "containerDefinitions": [
    {
      "name": "nginx",
      "image": "public.ecr.aws/ubuntu/nginx:latest",
      "cpu": 0,
      "portMappings": [
        {
          "name": "nginx-80-tcp",
          "containerPort": 80,
          "hostPort": 80,
          "protocol": "tcp",
          "appProtocol": "http"
        }
      ],
      "essential": true,
      "environment": [],
      "environmentFiles": [],
      "mountPoints": [],
      "volumesFrom": []
    }
  ],
  "family": "ecs-task-def",
  "executionRoleArn": "arn:aws:iam::930446195568:role/ecsTaskExecutionRole",
  "networkMode": "bridge",
  "revision": 1,
  "volumes": [],
  "status": "ACTIVE",
  "placementConstraints": [],
  "compatibilities": [
    "EC2"
  ],
  "requiresCompatibilities": [
    "EC2"
  ],
  "cpu": "512",
  "memory": "1024",
  "runtimePlatform": {
    "cpuArchitecture": "X86_64",
    "operatingSystemFamily": "LINUX"
  },
  "registeredAt": "2023-02-15T17:11:45.596Z",
  "registeredBy": "arn:aws:iam::930446195568:user/admin_user",
  "tags": []
}
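For context (an aside, not from the original post): "ECS Deployment Circuit Breaker was triggered" means the service's tasks repeatedly failed to reach a steady state, and the stopped-task reason in the ECS console usually names the underlying cause; with networkMode bridge and a fixed hostPort of 80, for example, only one task per instance can bind that port. The circuit breaker itself is an optional per-service setting that appears in the service definition roughly as:

```json
"deploymentConfiguration": {
  "deploymentCircuitBreaker": {
    "enable": true,
    "rollback": true
  }
}
```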

AWS ECS Task Definition: Unknown parameter in volumes[0]: "dockerVolumeConfiguration", must be one of: name, host

I am trying to run the wazuh/wazuh Docker container on ECS. I was able to register the task definition and launch the container using Terraform. However, I am facing an issue with the "Volume" (data volume) section while registering the task definition using an AWS CLI command.
Command: aws ecs --region eu-west-1 register-task-definition --family hids --cli-input-json file://task-definition.json
Error:
ParamValidationError: Parameter validation failed:
Unknown parameter in volumes[0]: "dockerVolumeConfiguration", must be one of: name, host
2019-08-29 07:31:59,195 - MainThread - awscli.clidriver - DEBUG - Exiting with rc 255
{
  "containerDefinitions": [
    {
      "portMappings": [
        {
          "hostPort": 514,
          "containerPort": 514,
          "protocol": "udp"
        },
        {
          "hostPort": 1514,
          "containerPort": 1514,
          "protocol": "udp"
        },
        {
          "hostPort": 1515,
          "containerPort": 1515,
          "protocol": "tcp"
        },
        {
          "hostPort": 1516,
          "containerPort": 1516,
          "protocol": "tcp"
        },
        {
          "hostPort": 55000,
          "containerPort": 55000,
          "protocol": "tcp"
        }
      ],
      "image": "wazuh/wazuh",
      "essential": true,
      "name": "chids",
      "cpu": 1600,
      "memory": 1600,
      "mountPoints": [
        {
          "containerPath": "/var/ossec/data",
          "sourceVolume": "ossec-data"
        },
        {
          "containerPath": "/etc/filebeat",
          "sourceVolume": "filebeat_etc"
        },
        {
          "containerPath": "/var/lib/filebeat",
          "sourceVolume": "filebeat_lib"
        },
        {
          "containerPath": "/etc/postfix",
          "sourceVolume": "postfix"
        }
      ]
    }
  ],
  "volumes": [
    {
      "name": "ossec-data",
      "dockerVolumeConfiguration": {
        "scope": "shared",
        "driver": "local",
        "autoprovision": true
      }
    },
    {
      "name": "filebeat_etc",
      "dockerVolumeConfiguration": {
        "scope": "shared",
        "driver": "local",
        "autoprovision": true
      }
    },
    {
      "name": "filebeat_lib",
      "dockerVolumeConfiguration": {
        "scope": "shared",
        "driver": "local",
        "autoprovision": true
      }
    },
    {
      "name": "postfix",
      "dockerVolumeConfiguration": {
        "scope": "shared",
        "driver": "local",
        "autoprovision": true
      }
    }
  ]
}
I tried adding the "host" parameter (although that supports bind mounts only), but got the same error.
"volumes": [
{
"name": "ossec-data",
"host": {
"sourcePath": "/var/ossec/data"
},
"dockerVolumeConfiguration": {
"scope": "shared",
"driver": "local",
"autoprovision": true
}
}
]
ECS should register the task definition with 4 data volumes and the associated mount points.
Found the issue: removing the "dockerVolumeConfiguration" parameter from the volume configuration made it work.
"volumes": [
{
"name": "ossec-data",
"host": {
"sourcePath": "/ecs/ossec-data"
}
},
{
"name": "filebeat_etc",
"host": {
"sourcePath": "/ecs/filebeat_etc"
}
},
{
"name": "filebeat_lib",
"host": {
"sourcePath": "/ecs/filebeat_lib"
}
},
{
"name": "postfix",
"host": {
"sourcePath": "/ecs/postfix"
}
}
]
Can you check on your version of awscli?
aws --version
According to all the documentation, your first task definition should work fine and I tested it locally without any issues.
It might be that you are using an older aws cli version where the syntax was different or parameters were different at the time.
Could you try updating your aws cli to the latest version and try again?
--
Some additional info I found:
Checking on the aws ecs CLI command, docker volume configuration was added to the CLI in v1.80.
The main aws-cli release updates periodically to update the commands, but they don't provide much info on which specific version each command changed in:
https://github.com/aws/aws-cli/blob/develop/CHANGELOG.rst
If you update your aws-cli version, things should work.
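For reference, on a current CLI the volume entry from the question registers as-is; the full dockerVolumeConfiguration shape also accepts optional driver options and labels (a sketch; the empty maps are just placeholders):

```json
"volumes": [
  {
    "name": "ossec-data",
    "dockerVolumeConfiguration": {
      "scope": "shared",
      "driver": "local",
      "autoprovision": true,
      "driverOpts": {},
      "labels": {}
    }
  }
]
```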

Can't register ecs task with docker hub private repository image

I want to run an ECS task with a Docker image from a private Docker Hub repository.
I followed all the instructions in this doc: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/private-auth.html
Then I created a task definition JSON:
{
  "containerDefinitions": [
    {
      "name": "signage-next-graphql",
      "image": "docker.io/private/next-graphql:latest",
      "repositoryCredentials": {
        "credentialsParameter": "arn:aws:secretsmanager:us-east-2:385945872227:secret:dockerhub-personal-pTsU9e"
      },
      "memory": 500,
      "essential": true,
      "portMappings": [
        {
          "hostPort": 5000,
          "containerPort": 5000
        }
      ]
    }
  ],
  "volumes": [],
  "memory": "900",
  "cpu": "128",
  "placementConstraints": [],
  "family": "next-graphql",
  "executionRoleArn": "arn:aws:iam::385945872227:role/ecsTaskExecutionRole",
  "taskRoleArn": ""
}
When I run aws ecs register-task-definition --family "${ECS_TASK_FAMILY}" --cli-input-json "file://./ecsTaskDefinition.json" --region "${AWS_TARGET_REGION}", I'm getting the error:
Unknown parameter in containerDefinitions[0]: "repositoryCredentials",
must be one of: name, image, cpu, memory, memoryReservation, links,
portMappings, essential, entryPoint, command, environment,
mountPoints, volumesFrom, linuxParameters, hostname, user,
workingDirectory, disableNetworking, privileged,
readonlyRootFilesystem, dnsServers, dnsSearchDomains, extraHosts,
dockerSecurityOptions, dockerLabels, ulimits, logConfiguration,
healthCheck
Is the AWS documentation not up to date? I'd expect it to be.
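For reference (not from the original thread): repositoryCredentials requires a recent enough CLI version, which is the usual cause of this "Unknown parameter" error, and the Secrets Manager secret it points at is expected to hold the registry credentials as a JSON document of this shape (placeholder values):

```json
{
  "username": "my-dockerhub-user",
  "password": "my-dockerhub-password"
}
```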

Marathon-LB multiple instances through Bridge network doesnt work

I am using marathon-lb on DC/OS. When the load increases I get a 'Maximum connections reached' error and marathon-lb fails.
So I am trying to get multiple instances of marathon-lb running on the same node with the config below. But this doesn't work: the health check fails. On the other hand, if I give a fixed hostPort value (9090), one instance runs successfully and the second instance keeps waiting. So no matter what, I can't have 2 instances working.
Isn't bridge networking supposed to help run multiple instances? Any help is appreciated.
{
  "id": "/marathon-lb-test3",
  "acceptedResourceRoles": [
    "slave_public"
  ],
  "args": [
    "sse",
    "-m",
    "http://marathon.mesos:8080",
    "--group",
    "external"
  ],
  "backoffFactor": 1.15,
  "backoffSeconds": 1,
  "container": {
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 0,
        "protocol": "tcp",
        "servicePort": 10001
      },
      {
        "containerPort": 9090,
        "hostPort": 9090,
        "protocol": "tcp",
        "servicePort": 10006
      },
      {
        "containerPort": 443,
        "hostPort": 0,
        "protocol": "tcp",
        "servicePort": 10007
      },
      {
        "containerPort": 9091,
        "hostPort": 0,
        "protocol": "tcp",
        "servicePort": 10008
      },
      {
        "containerPort": 8080,
        "hostPort": 0,
        "protocol": "tcp",
        "servicePort": 8080
      }
    ],
    "type": "DOCKER",
    "volumes": [
      {
        "containerPath": "/marathon-lb/templates",
        "hostPath": "/opt/marathon-lb/templates",
        "mode": "RW"
      }
    ],
    "docker": {
      "image": "mesosphere/marathon-lb:v1.11.1",
      "forcePullImage": false,
      "privileged": true,
      "parameters": []
    }
  },
  "cpus": 0.1,
  "disk": 0,
  "env": {
    "HAPROXY_GLOBAL_DEFAULT_OPTIONS": "redispatch,httpclose,forceclose"
  },
  "healthChecks": [
    {
      "gracePeriodSeconds": 300,
      "ignoreHttp1xx": false,
      "intervalSeconds": 60,
      "maxConsecutiveFailures": 3,
      "portIndex": 1,
      "timeoutSeconds": 20,
      "delaySeconds": 15,
      "protocol": "HTTP",
      "path": "/_haproxy_health_check"
    }
  ],
  "instances": 2,
  "maxLaunchDelaySeconds": 3600,
  "mem": 1024,
  "gpus": 0,
  "networks": [
    {
      "mode": "container/bridge"
    }
  ],
  "requirePorts": false,
  "upgradeStrategy": {
    "maximumOverCapacity": 1,
    "minimumHealthCapacity": 1
  },
  "killSelection": "YOUNGEST_FIRST",
  "unreachableStrategy": {
    "inactiveAfterSeconds": 300,
    "expungeAfterSeconds": 600
  },
  "fetch": [],
  "constraints": []
}
You need at least 2 free public agents, or change the group from "external" to "internal" if your applications work over internal network communication. When you add a new node to the DC/OS cluster you can set it as a public agent; once marathon-lb launches on that node, your DNS can resolve the domain to that IP.
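Separately, one thing worth checking in the config above (a sketch, not a confirmed fix): the 9090 mapping requests a fixed host port, and only one container per node can bind it, which blocks a second instance from being placed on the same node. Letting Mesos assign that port too avoids the conflict, and the portIndex-based health check follows the dynamically assigned host port:

```json
{
  "containerPort": 9090,
  "hostPort": 0,
  "protocol": "tcp",
  "servicePort": 10006
}
```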

Specify ECR image instead of S3 file in Cloud Formation Elastic Beanstalk template

I'd like to reference an EC2 Container Registry image in the Elastic Beanstalk section of my CloudFormation template. The sample file references an S3 bucket for the source bundle:
"applicationVersion": {
"Type": "AWS::ElasticBeanstalk::ApplicationVersion",
"Properties": {
"ApplicationName": { "Ref": "application" },
"SourceBundle": {
"S3Bucket": { "Fn::Join": [ "-", [ "elasticbeanstalk-samples", { "Ref": "AWS::Region" } ] ] },
"S3Key": "php-sample.zip"
}
}
}
Is there any way to reference an EC2 Container Registry image instead? Something like what is available in the EC2 Container Service TaskDefinition?
Upload a Dockerrun file to S3 in order to do this. Here's an example dockerrun:
{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "my-bucket",
    "Key": "mydockercfg"
  },
  "Image": {
    "Name": "quay.io/johndoe/private-image",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "80"
    }
  ],
  "Volumes": [
    {
      "HostDirectory": "/var/app/mydb",
      "ContainerDirectory": "/etc/mysql"
    }
  ],
  "Logging": "/var/log/nginx"
}
Use this file as the S3 key. More info is available here.
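For an ECR image specifically, the Image.Name field takes the full registry path; a sketch (the account ID, region, and repository name are placeholders), noting that with ECR the instance profile's ECR pull permissions stand in for the Authentication block:

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "80"
    }
  ]
}
```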