I have a task running a container in AWS ECS. There doesn't seem to be any ECS CLI command to access that container. Is it possible to log directly into a container running in ECS?
Yes, you can access the container if you deployed it using the EC2 launch type. You can get the container instance's IP from the cluster's "ECS instances" tab and SSH into that instance to find the container there. Make sure the security group of this instance allows SSH access. Let me know if this helps!
We have deployed a Fargate cluster/service/task using Pulumi (Python).
"pulumi up" works well.
Now we want to keep all the pieces as they are (load balancer etc.) and just update the Docker image of the running task.
How is this done?
Running "pulumi up" again creates a new cluster.
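One pattern worth sketching (assuming hypothetical resource names, and that the cluster, load balancer, service and execution role are defined elsewhere in the same program with stable resource names): only the task definition references the image, so if the image tag comes in as stack configuration, changing it and re-running "pulumi up" should register a new task-definition revision and update the service in place. If "pulumi up" instead creates a whole new cluster, a resource name or stack selection has most likely changed between runs.

```python
import pulumi
import pulumi_aws as aws

# Hypothetical names; execution_role, the cluster, the load balancer and
# the service are assumed to be defined elsewhere in this program.
config = pulumi.Config()
image = config.require("appImage")  # set via: pulumi config set appImage repo/app:v2

task_definition = aws.ecs.TaskDefinition(
    "app-task",
    family="app",
    cpu="256",
    memory="512",
    network_mode="awsvpc",
    requires_compatibilities=["FARGATE"],
    execution_role_arn=execution_role.arn,  # assumed defined above
    container_definitions=pulumi.Output.json_dumps([{
        "name": "app",
        "image": image,
        "portMappings": [{"containerPort": 80}],
    }]),
)
```

With this layout, `pulumi config set appImage repo/app:v2` followed by `pulumi up` touches only the task definition and the service that references it.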
I have an ECS Fargate cluster up and running, and it has 1 service and 1 task definition attached to it.
The task definition already describes 2 container images. This cluster is up and running.
Can I create a new service for another application and configure it to use this existing ECS cluster?
If yes, will both services run simultaneously?
From the AWS documentation regarding Amazon ECS clusters:
An Amazon ECS cluster is a logical grouping of tasks or services. Your
tasks and services are run on infrastructure that is registered to a
cluster.
So yes, you should be able to run multiple services in a single cluster, each running its own task definition.
Source Documentation - https://docs.aws.amazon.com/AmazonECS/latest/developerguide/clusters.html
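As a concrete sketch (all names here are hypothetical): the new service simply points at the same cluster and at its own task definition, and ECS schedules both services' tasks independently on the shared cluster. The helper below just assembles the arguments; the actual call would go through boto3 with valid AWS credentials.

```python
def build_service_request(cluster, service_name, task_definition, desired_count=1):
    """Assemble the arguments for ecs.create_service() for a second
    service on an existing cluster (all names are hypothetical)."""
    return {
        "cluster": cluster,
        "serviceName": service_name,
        "taskDefinition": task_definition,
        "desiredCount": desired_count,
        "launchType": "FARGATE",
    }

request = build_service_request("existing-cluster", "second-app", "second-app-task:1", 2)
# With credentials configured, the actual call would be:
#   import boto3
#   boto3.client("ecs").create_service(**request)
```

Both services then run side by side; deleting or scaling one has no effect on the other.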
I have successfully created an ECS cluster (EC2 Linux + Networking). Is it possible to log in to the cluster to perform some administrative tasks? I have not deployed any containers or tasks to it yet. I can't find any hints for it in the AWS console or AWS documentation.
The "cluster" is just a logical grouping of resources; it isn't itself a server you can log into. You perform actions on the cluster via the AWS console or the AWS API. You can, however, connect to the EC2 instances managed by the ECS cluster individually, via the standard SSH method you would use to connect to any other EC2 Linux server.
ECS takes care of most of the administrative work for you. You simply have to deploy and manage your applications on ECS. If you set up ECS correctly, you will never have to connect to the instances.
Follow these instructions to deploy your service (docker image): https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service.html
Also, you can use CloudWatch to store container logs, so that you don't have to connect to the instances to check the logs: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
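For the awslogs driver, the relevant part of the task definition looks roughly like this (the log group name, image, and region are placeholders):

```json
{
  "containerDefinitions": [
    {
      "name": "app",
      "image": "repo/app:latest",
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/my-app",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ]
}
```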
I have deployed an AWS batch job which creates an ECS Fargate task in the background. This is all deployed in a public subnet with internet access. I have verified that running the docker container in a standalone EC2 instance in the same subnet has internet connectivity and everything works fine. After reading the AWS documentation I cannot determine why my ECS docker container is not able to access the internet. Is there some special configuration needed for this to work?
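One thing worth checking (not confirmed by this thread, just a common cause): a Fargate task in a public subnet only gets outbound internet access if "assignPublicIp" is ENABLED in the task's network configuration; otherwise its traffic needs a NAT gateway, even though a standalone EC2 instance in the same subnet works fine. The relevant fragment of the network configuration (subnet and security group IDs are placeholders):

```json
{
  "awsvpcConfiguration": {
    "subnets": ["subnet-xxxxxxxx"],
    "securityGroups": ["sg-xxxxxxxx"],
    "assignPublicIp": "ENABLED"
  }
}
```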
I know it is not possible to run containers on multiple hosts using a single docker-compose file, and that this needs something like Docker Swarm.
Is there anything equivalent to this on AWS - ECS or Fargate?
Yes, you can do this on ECS using either an EC2-type or a Fargate-type cluster.
If you go with the ECS EC2-type model, you can choose among different task placement strategies to place your tasks on different nodes and achieve whatever model you want, such as AZ spread, binpack, etc.
Ref: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement-strategies.html
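For example, a service's placement strategy that spreads tasks across Availability Zones and then binpacks on memory would look like this fragment of the service definition:

```json
"placementStrategy": [
  { "type": "spread", "field": "attribute:ecs.availability-zone" },
  { "type": "binpack", "field": "memory" }
]
```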
If you opt for the Fargate type, you don't have to take care of the underlying EC2 nodes at all, as those are managed by AWS in this case.
Also, note that docker-compose and Docker Swarm are quite different things: Docker Swarm provides the orchestration, which is what your use case requires here.