Running containers on multiple AWS EC2 instances from docker-compose file - docker-compose

I know it is not possible to run containers on multiple hosts using a single docker-compose file. And this needs something like docker swarm to do it.
Is there anything equivalent to this on AWS - ECS or Fargate?

Yes, you can do this on ECS using either the EC2 or the Fargate launch type.
If you go with the ECS EC2 launch type, you can choose among different task placement strategies (such as AZ spread or binpack) to control how tasks are placed across your nodes.
Ref: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement-strategies.html
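For illustration, a placement strategy can be set when creating the service with the AWS CLI. This is a sketch with placeholder cluster/service/task names; the spread-then-binpack combination is just one example (placement strategies apply to the EC2 launch type, not Fargate):

```shell
# Spread tasks across Availability Zones first, then binpack on memory
# within each zone. Names below are placeholders.
aws ecs create-service \
  --cluster my-cluster \
  --service-name my-service \
  --task-definition my-task \
  --desired-count 3 \
  --placement-strategy \
      type=spread,field=attribute:ecs.availability-zone \
      type=binpack,field=memory
```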
If you opt for the Fargate launch type, you don't have to manage the underlying EC2 nodes at all, as AWS manages them for you.
Also, note that docker-compose and Docker Swarm are quite different things: Swarm is the orchestration layer that your use case would require outside of AWS.

Related

Add EFS volume to ECS for persistent mongodb data

I believe this requirement is fairly straightforward for anyone trying to host their tier 3, i.e. the database, in a container.
I have an MVP three-tier MERN app using:
1x container instance
3x ECS services (Frontend, Backend and Database)
3x tasks (1x running task per service)
The database task (MongoDB) has its task definition updated to use EFS, and I have tested stopping the task and starting a new one to confirm data persistence.
Question - How do I ensure the EFS volume is auto-mounted on the ECS container host (a Spot instance)? If ECS uses a CloudFormation template under the covers, do I need to modify that template so the persistent EFS volume is auto-mounted on all container EC2 instances? I have come across various articles describing a script in the EC2 launch configuration, but I don't see any launch configuration created by ECS / CloudFormation.
What is the easiest and simplest way to achieve something as basic as a persistent EFS volume across my container host instances? I'm guessing the task definition alone doesn't solve this?
Thanks
Actually, I think the steps below achieved persistence for the DB task using EFS:
Updated the task definition for the database container to use EFS.
Mounted the EFS volume on the container instance:
sudo mount -t efs -o tls fs-:/ /database/data
The mount command above did not add any entry to /etc/fstab, yet the data still appears to persist on the new ECS Spot instance.
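One common way to make the mount survive instance replacement (e.g. when a Spot instance is recycled) is to put the mount in the launch template's user data, so every new container instance mounts EFS at boot. A sketch for an Amazon Linux ECS-optimized AMI, with a placeholder filesystem ID; keep any existing ECS lines (such as the ECS_CLUSTER setting) in your user data alongside this:

```shell
#!/bin/bash
# Install the EFS mount helper (provides the "efs" fstab type)
yum install -y amazon-efs-utils

# Create the mount point and add a persistent fstab entry
# (fs-XXXXXXXX is a placeholder for your filesystem ID)
mkdir -p /database/data
echo "fs-XXXXXXXX:/ /database/data efs _netdev,tls 0 0" >> /etc/fstab

# Mount everything listed in fstab now
mount -a
```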

AWS ECS Fargate update docker image only

I have deployed a Fargate cluster/service/task using Pulumi (Python).
"pulumi up" works well.
Now we want to keep all the pieces as they are (load balancer etc.) and just update the Docker image of the running task.
How is this done?
Running "pulumi up" again creates a new cluster.
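In Pulumi, the image is just a property of the task definition, so the usual pattern is to change only that property and run "pulumi up" again; Pulumi's diff should then register a new task definition revision and update the service in place rather than recreate anything. A sketch using pulumi_aws with hypothetical resource names and image URL (if a whole new cluster appears, check that resource names and arguments haven't changed between runs):

```python
import json

import pulumi
import pulumi_aws as aws

# Hypothetical config value: set with `pulumi config set image_tag v2`
config = pulumi.Config()
image = f"123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:{config.require('image_tag')}"

# Only container_definitions changes between deployments; the cluster,
# service, and load balancer resources stay identical, so `pulumi up`
# should only roll out a new task definition revision.
# (A real Fargate task definition also needs an execution role for ECR pulls.)
task_definition = aws.ecs.TaskDefinition(
    "app-task",
    family="app-task",
    cpu="256",
    memory="512",
    network_mode="awsvpc",
    requires_compatibilities=["FARGATE"],
    container_definitions=json.dumps([{
        "name": "app",
        "image": image,
        "portMappings": [{"containerPort": 80}],
    }]),
)
```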

How can I access AWS ECS containers?

I have a task running a container in AWS ECS. There doesn't seem to be any ECS CLI command to access that container. Is it possible to log directly into a container running in ECS?
Yes, you can access the container if you deployed using the ECS EC2 launch type. You can get the container instance's IP from the ECS Instances tab in the console and SSH into that instance to reach the container. Make sure the security group of this instance allows SSH access. Let me know if this helps!
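Assuming your key pair is configured and the security group allows SSH, the login flow on the EC2 launch type looks roughly like this (the IP, key path, and container ID are placeholders):

```shell
# SSH into the container instance (ECS-optimized Amazon Linux AMIs use ec2-user)
ssh -i ~/.ssh/my-key.pem ec2-user@<container-instance-ip>

# List running containers to find the one started by your task
docker ps

# Open a shell inside the target container
docker exec -it <container-id> /bin/sh
```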

How to do multi-tiered application deployment using Docker?

I want to use the following deployment architecture:
One machine running my web server (nginx)
Two or more machines running uWSGI
PostgreSQL as my DB on another server
All three are separate host machines on AWS. During development I used Docker and was able to run all three on my local machine. But I am now unsure how to split these across three separate hosts and run them. Any guidance, clues, or references will be greatly appreciated. I would prefer to do this using Docker.
If you're really adamant about keeping the services on separate hosts, nothing stops you from still running your containers on a Docker-equipped EC2 host for nginx/uWSGI; you could even use a CoreOS AMI, which comes with a secure Docker installation pre-loaded (https://coreos.com/os/docs/latest/booting-on-ec2.html).
For the database, use PostgreSQL on AWS RDS.
If you're running containers, you can also look at AWS ECS, Amazon's container service, which would be my initial recommendation; but I saw that you wanted all these services on individual hosts.
You can use docker stack to deploy the application to a swarm:
join the other two hosts as workers and use a placement constraint (https://docs.docker.com/compose/compose-file/#placement):
deploy:
  placement:
    constraints:
      - node.role == manager
Change the constraint to target the manager or a specific worker (for example, node.hostname == worker1) to restrict each service to an individual host.
You can also make this more secure by using a VPN if you wish.
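Putting this together for the three tiers above, a stack file might pin each service to a specific node by hostname. The hostnames and the uWSGI image are placeholders; deploy from the manager with `docker stack deploy -c docker-compose.yml myapp`:

```yaml
version: "3.7"
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    deploy:
      placement:
        constraints:
          - node.hostname == web-host     # placeholder hostname
  uwsgi:
    image: my-uwsgi-app                   # placeholder application image
    deploy:
      replicas: 2
      placement:
        constraints:
          - node.hostname == app-host     # placeholder hostname
  db:
    image: postgres:12
    deploy:
      placement:
        constraints:
          - node.hostname == db-host      # placeholder hostname
```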

ECS or EB for a single-container application with Docker?

I deployed a single-container SailsJS application with Docker (image size is around 597.4 MB) and have hooked it up to ElasticBeanstalk.
However, since ECS was built for Docker, might it be better to use that over EB?
Elastic Beanstalk (EB) is a PaaS solution in the AWS family, and it provides very high-level concepts: you have applications and versions, and you create environments.
EC2 Container Service (ECS) is a much lower-level cluster scheduling platform. You have to describe a lot of configuration for your Docker containers manually, link them, and also set up load balancers and everything else you need yourself.
So EB is much simpler to use and maintain, while ECS is more complicated but uses your resources very efficiently.
Also, EB has two different Docker types: single-container and multi-container. Multi-container uses ECS internally.
My advice: use Elastic Beanstalk. ECS is a good fit if you have a large number of different applications that you need to run efficiently in a cluster.