Run CLI command on ECS Fargate container

How is it possible to run a CLI command within a container that's using ECS/Fargate?

DEPRECATED: As mentioned in this answer (How can I run commands in a running container in AWS ECS using Fargate), you cannot do it because AWS doesn't give you access to the underlying infrastructure.
UPDATE: Pierre below mentions an announcement from AWS that allows you to do just that.

AWS have now launched Amazon ECS Exec, which allows you to directly interact with containers: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html.
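For example, assuming the Session Manager plugin is installed locally and the task was launched with ECS Exec enabled, opening a shell in a running Fargate container looks roughly like this (cluster, service, task, and container names are placeholders):
# enable ECS Exec on the service so new tasks allow it
aws ecs update-service --cluster my-cluster --service my-service --enable-execute-command --force-new-deployment
# open an interactive shell in a container of a running task
aws ecs execute-command --cluster my-cluster --task <task-id> --container my-container --interactive --command "/bin/sh"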

As far as I know, and from my experience with ECS, you are not allowed to do it. AWS does not give you access to the underlying resources.
Even with a Fargate + EC2 configuration, you still cannot access the EC2 instances.

I don't know if this is what you are trying to achieve, but if you want, you can run a command in a new container that you instantiate for the occasion through a CloudWatch rule.
It is enough to create a new task definition and specify the command to execute (the example below runs a Laravel Artisan command):
ECSReputationSchedulerTask:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Cpu: 256
    ExecutionRoleArn: !ImportValue ECSTaskExecutionRole
    Family: TaskDefinitionFamily
    Memory: 512
    NetworkMode: awsvpc
    RequiresCompatibilities:
      - FARGATE
    ContainerDefinitions:
      -
        Command:
          - "php"
          - "/home/application/artisan"
          - "execute:operation"
        Name: 'MySchedulerContainer'
        ...
and then reference it in a CloudWatch rule (unfortunately this can't be done via CloudFormation yet).
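With the AWS CLI, the wiring looks roughly like this (rule name, cluster ARN, role ARN, and subnet ID are placeholders to replace with your own values):
# create a scheduled rule, e.g. once an hour
aws events put-rule --name scheduler-rule --schedule-expression "rate(1 hour)"
# point the rule at the Fargate task definition above
aws events put-targets --rule scheduler-rule --targets '[{
  "Id": "scheduler",
  "Arn": "arn:aws:ecs:eu-west-1:123456789012:cluster/my-cluster",
  "RoleArn": "arn:aws:iam::123456789012:role/ecsEventsRole",
  "EcsParameters": {
    "TaskDefinitionArn": "arn:aws:ecs:eu-west-1:123456789012:task-definition/TaskDefinitionFamily",
    "TaskCount": 1,
    "LaunchType": "FARGATE",
    "NetworkConfiguration": {
      "awsvpcConfiguration": {"Subnets": ["subnet-0123456789abcdef0"], "AssignPublicIp": "ENABLED"}
    }
  }
}]'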

You may be able to script your container to execute a CLI command, but you cannot SSH into the container.
If you invoke a .sh file from the CMD instruction in the Dockerfile, the CLI command will get executed, as long as you have the AWS CLI installed in the Docker image.
In the Dockerfile, make sure to run pip3 install awscli --upgrade --user before you invoke the script that contains the CLI commands.
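For illustration, the script invoked from CMD could be as simple as this (the bucket name is a placeholder; any AWS CLI call works the same way, provided the task role grants access and the pip-installed CLI is on PATH, e.g. ~/.local/bin):
#!/bin/sh
# entrypoint.sh - invoked from CMD in the Dockerfile
aws s3 ls s3://my-example-bucket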
As an alternative, you can use boto3 for Python or the AWS SDK for JavaScript, both of which have comprehensive documentation and let you run everything you could have run via the CLI.

Related

Docker compose equivalent of `docker run --gpus=all` option

To automate the configuration (docker run arguments) used to launch a Docker container, I am writing a docker-compose.yml file.
My container should have access to the GPU, so I currently use the docker run --gpus=all parameter. This is described in the Expose GPUs for use docs:
Include the --gpus flag when you start a container to access GPU
resources. Specify how many GPUs to use. For example:
$ docker run -it --rm --gpus all ubuntu nvidia-smi
Unfortunately, Enabling GPU access with Compose doesn't describe this use case exactly. This guide uses the deploy yaml element, but in the context of reserving machines with GPUs. In fact, other documentation says that it will be ignored by docker-compose:
This only takes effect when deploying to a swarm with docker stack deploy, and is ignored by docker-compose up and docker-compose run.
After trying it and solving a myriad of problems along the way, I have realized that it is simply the documentation that is out of date.
Adding the following yaml block to my docker-compose.yml resulted in nvidia-smi being available to use.
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: 1
          capabilities: [gpu]
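As a quick check, assuming a long-running service named app carries this deploy block, the following should print the usual GPU table from inside the container:
docker-compose up -d app
docker-compose exec app nvidia-smi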

How to run docker-compose on google cloud run?

I'm new to GCP, and I'm trying to deploy my Spring Boot web service using docker-compose.
In my docker-compose.yml file, I have 3 services: my app service, a MySQL service and a Cassandra service.
Locally, it works like a charm. I also added a cloudbuild.yaml file:
steps:
- name: 'docker/compose:1.28.2'
  args: ['up', '-d']
- name: 'gcr.io/cloud-builders/docker'
  args: ['tag', 'workspace_app:latest', 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA']
images: ['gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA']
The build on Google Cloud Build succeeds. But when I try to run the image on Google Cloud Run, it doesn't invoke docker-compose.
How should I proceed to use docker-compose in production?
With Cloud Run, you can deploy only one container image. The container can contain several binaries that you can run in parallel. But keep this in mind:
CPU is throttled when no requests are being processed. Background processes/apps aren't recommended on Cloud Run; prefer request/response apps (a web server).
Only HTTP requests are supported by Cloud Run. TCP connections (such as MySQL connections) aren't supported.
Cloud Run is stateless. You can't persist data in it.
All data is stored in memory (the /tmp directory is writable). You can't exceed the total size of the instance memory (your app footprint + the files you store in memory).
Related to the previous point, when the instance is offloaded (you don't manage that, it's serverless), you lose everything you put in memory.
Thus, the MySQL and Cassandra services must be hosted elsewhere.
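Only the app container would then be deployed to Cloud Run, along these lines (service name and region are placeholders; the image tag reuses the one tagged in the cloudbuild.yaml above):
gcloud run deploy my-app --image gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA --platform managed --region us-central1 --allow-unauthenticated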
docker-compose -f dirfile/cloudbuild.yaml up
To check the images, run this command:
docker images
To check your containers:
docker container ls -a
To check whether the container is running or not, run this command:
docker ps
Finally, I deployed my solution with docker-compose on a Google Compute Engine virtual machine instance.
First, we must clone our git repository onto our virtual machine instance.
Then, in the cloned repository, which of course contains the docker-compose.yml, the Dockerfile and the WAR file, we executed this command:
docker run --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v "$PWD:$PWD" \
    -w="$PWD" \
    docker/compose:1.29.1 up
And voilà, our solution is working in production with docker-compose.

Adding a Second Service with AWS Copilot

I'm very familiar with doing all of this (quite tedious) stuff manually with ECS.
I'm experimenting with Copilot - which is really working - I have one service up really easily, but my solution has multiple services/containers.
How do I now add a second service/container to my cluster?
Short answer: change to your second service's code directory and run copilot init again! If you need to specify a different Dockerfile, you can use the --dockerfile flag. If you need to use an existing image, you can use --image with the name of an image in an existing container registry.
Long answer:
Copilot stores metadata in SSM Parameter Store in the account which was used to run copilot app init or copilot init, so as long as you don't change the AWS credentials you're using when you run Copilot, everything should just work when you run copilot init in a new repository.
Some other use cases:
If it's an existing image like redis or postgres and you don't need to customize anything about the actual image or expose it, you can run
copilot init -t Backend\ Service --image redis --port 6379 --name redis
If your service lives in a separate code repository and needs to access the internet, you can cd into that directory and run
copilot init --app $YOUR_APP_NAME --type Load\ Balanced\ Web\ Service --dockerfile ./Dockerfile --port 1234 --name $YOUR_SERVICE_NAME --deploy
So all you need to do is run copilot init --app $YOUR_APP_NAME with the same AWS credentials in a new directory, and you'll be able to set up and deploy your second service.
Copilot also allows you to set up persistent storage associated with a given service by using the copilot storage init command. This specifies a new DynamoDB table or S3 bucket, which will be created when you run copilot svc deploy. It will create one storage addon per environment you deploy the service to, so as not to mix test and production data.
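For example, attaching an S3 bucket to a service could look like this (the names are placeholders, and flag names may vary slightly between Copilot versions):
copilot storage init -n my-bucket -t S3 -w my-service
copilot svc deploy --name my-service --env test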

Can I use New-NavContainer in PowerShell to host a container in Azure?

I am trying to use the following to create a container in Azure:
New-NavContainer -accept_eula -containerName "test" -auth Windows -imageName "mcr.microsoft.com/businesscentral/sandbox:base" -includeCSide -enableSymbolLoading -licenseFile "licence.flf"
But it doesn't seem to allow setting the resource group in Azure.
So instead I tried using the following:
az container create --name test --image "mcr.microsoft.com/businesscentral/sandbox" --resource-group testGroup --os-type Windows --cpu 2 --memory 3 --environment-variables ACCEPT_EULA=Y ACCEPT_OUTDATED=Y USESSL=N --ip-address public --port 80 443 7048 7049 8080
I use the image name "mcr.microsoft.com/businesscentral/sandbox", but does that get the latest image?
And where do I specify the license file?
If you prefer to have your development sandbox in a container on your local machine, you must have Docker installed and working on your machine.
First, the username and password that you defined will be converted into PowerShell credential objects, and then the New-NavContainer command does all the heavy lifting to create your sandbox.
You can also immediately create a NAV Container on Azure Container Instances via the Azure CLI.
If you want to create the NAV container and upload your development license, you need to add this parameter to the environment variables:
-e ACCEPT_EULA=Y USESSL=N LICENSEFILE=c:\myfolder\license.flf
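Translated to the az container create call from the question, that would look roughly like this (the license file location is a placeholder; depending on the image it can be a path reachable inside the container or a downloadable URL). Note that omitting a tag pulls the :latest image by default:
az container create --name test --resource-group testGroup --image "mcr.microsoft.com/businesscentral/sandbox:latest" --os-type Windows --cpu 2 --memory 3 --environment-variables ACCEPT_EULA=Y ACCEPT_OUTDATED=Y USESSL=N LICENSEFILE="https://example.com/license.flf" --ip-address public --ports 80 443 7048 7049 8080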
For more details, you could refer to this article.

How to start up a Kubernetes cluster using Rocket?

I'm using a Chromebook Pixel 2, and it's easier to get Rocket working than Docker. I recently installed Rocket 1.1 into /usr/local/bin, and have a clone of the Kubernetes GitHub repo.
When I try to use ./hack/local-up-cluster.sh to start a cluster, it eventually fails with this message:
Failed to successfully run 'docker ps', please verify that docker is installed and $DOCKER_HOST is set correctly.
According to the docs, k8s supports Rocket. Can someone please guide me on how to start a local cluster without a working Docker installation?
Thanks in advance.
You need to set three environment variables before running ./hack/local-up-cluster.sh:
$ export CONTAINER_RUNTIME=rkt
$ export RKT_PATH=$PATH_TO_RKT_BINARY
$ export RKT_STAGE1_IMAGE=$PATH_TO_STAGE1_IMAGE
This is described in the docs for getting started with a local rkt cluster.
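For instance, with rkt installed in /usr/local/bin (the stage1 image path below is only an assumed example):
export CONTAINER_RUNTIME=rkt
export RKT_PATH=/usr/local/bin/rkt
export RKT_STAGE1_IMAGE=/usr/local/bin/stage1-coreos.aci   # assumed location of the stage1 image
./hack/local-up-cluster.sh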
Try running export CONTAINER_RUNTIME="rocket" and then re-running the script.