I have deployed an AWS Batch job which creates an ECS Fargate task in the background. This is all deployed in a public subnet with internet access. I have verified that the Docker container running on a standalone EC2 instance in the same subnet has internet connectivity and everything works fine. After reading the AWS documentation I cannot determine why my ECS Docker container is not able to access the internet. Is there some special configuration needed for this to work?
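One thing worth checking, offered as a hunch rather than a confirmed diagnosis: a Fargate task only gets outbound internet access from a public subnet if its network interface is assigned a public IP, and that assignment is disabled by default. When a task is launched directly via ECS, the setting lives in the network configuration; a minimal sketch (cluster name, task definition, subnet and security group IDs are placeholders):

aws ecs run-task \
    --cluster my-cluster \
    --launch-type FARGATE \
    --task-definition my-task:1 \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}"

For tasks created by AWS Batch on Fargate, the job definition exposes an equivalent assignPublicIp setting in its network configuration; without it, the container has no route to the internet even though the subnet itself is public.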
Today I was reading about ECS, the AWS container orchestration service, and I have one question about it. If we place an EC2 instance in a private subnet, we need a NAT gateway so that the ECS agent can report to the ECS service. But how does the ECS service manage orchestration tasks if ECS sits on the public network and the EC2 container instance is in a private subnet?
I have an ECS Fargate cluster up and running, and it has 1 service and 1 task definition attached to it.
The task definition already describes 2 container images.
Can I create a new service for another application and attach it to this existing ECS cluster?
If yes, will both services run simultaneously?
From the AWS documentation on Amazon ECS Clusters:
An Amazon ECS cluster is a logical grouping of tasks or services. Your
tasks and services are run on infrastructure that is registered to a
cluster.
So you should be able to run multiple services in a single cluster, each service attached to its own task definition.
Source Documentation - https://docs.aws.amazon.com/AmazonECS/latest/developerguide/clusters.html
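For instance, a second service can be created on the same cluster with the AWS CLI; a minimal sketch, assuming a Fargate cluster and a second task definition already registered (the names, subnet and security group IDs below are placeholders):

aws ecs create-service \
    --cluster my-cluster \
    --service-name second-app \
    --task-definition second-app:1 \
    --desired-count 1 \
    --launch-type FARGATE \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}"

Both services then run side by side on the cluster, each scheduling its own tasks from its own task definition.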
I have successfully created an ECS cluster (EC2 Linux + Networking). Is it possible to log in to the cluster to perform some administrative tasks? I have not deployed any containers or tasks to it yet. I can't find any hints about this in the AWS console or the AWS documentation.
The "cluster" is just a logical grouping of resources. The "cluster" itself isn't a server you can log into or anything. You would perform actions on the cluster via the AWS console or the AWS API. You can connect to the EC2 servers managed by the ECS cluster individually. You would do that via the standard ssh method you would use to connect to any other EC2 Linux server.
ECS takes care of most of the administrative work for you. You simply deploy and manage your applications on ECS. If you set up ECS correctly, you will never have to connect to the instances.
Follow these instructions to deploy your service (docker image): https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service.html
You can also use CloudWatch to store container logs, so you don't have to connect to the instances to check them: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
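The awslogs log driver is configured per container in the task definition; a minimal fragment, with the log group name, region, and stream prefix as placeholders:

    "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
            "awslogs-group": "/ecs/my-app",
            "awslogs-region": "us-east-1",
            "awslogs-stream-prefix": "my-app"
        }
    }

The log group has to exist before the tasks start writing to it (or the awslogs-create-group option can be set to "true").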
I am new to Google Cloud Platform; here is the context:
I have one Compute Engine VM running as a MongoDB server and another Compute Engine VM running a NodeJS server, already using Docker. The NodeJS application connects to Mongo via the default VPC internal IP. Now I'm trying to migrate the NodeJS application to Google Kubernetes Engine, but I can't connect to the MongoDB server when I deploy the NodeJS application Docker image to the cluster.
All services like GCE and GKE are in the same region (us-east-1).
As a hard test, I SSHed into a Kubernetes cluster node, deployed a simple MongoDB Docker image there, and tried to connect to the remote MongoDB server from the command line, but the problem is the same: the connection times out.
I have also checked the firewall settings on GCP as well as the bindIp setting on the MongoDB server, and nothing there is blocking the connection.
Does anyone know what may be happening? Thank you very much.
In my case, traffic from GKE to the GCE VM was blocked by the Google firewall even though both are in the same network (default).
I had to whitelist the cluster's pod address range, which is listed in the cluster details:
Pod address range 10.8.0.0/14
https://console.cloud.google.com/kubernetes/list
https://console.cloud.google.com/networking/firewalls/list
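A minimal sketch of such a firewall rule with gcloud, assuming the default network, the pod range above, and MongoDB's default port 27017 (the rule name is a placeholder):

gcloud compute firewall-rules create allow-gke-pods-to-mongo \
    --network=default \
    --allow=tcp:27017 \
    --source-ranges=10.8.0.0/14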
By default, containers in a GKE cluster should be able to reach GCE VMs in the same VPC through their internal IPs. Just as when you access the internet (e.g., google.com) from GKE containers, GKE and the VPC know how to route the traffic. The problem must be with some other configuration (a firewall rule or your application).
You can run a quick test: start a simple HTTP server on the GCE VM (say its internal IP is 10.138.0.5):
python -m SimpleHTTPServer 8080   # Python 2; on Python 3 use: python3 -m http.server 8080
then create a GKE container and try to access the service:
kubectl run my-client -it --restart=Never --image=tutum/curl -- curl http://10.138.0.5:8080
I have a task running a container in AWS ECS. There doesn't seem to be any ECS CLI command to access that container. Is it possible to log directly into a container running in ECS?
Yes, you can access the container if you deployed it using the ECS EC2 launch type. You can get the container instance's IP from the ECS Instances tab of the cluster, SSH into that instance, and find the container there. Make sure the Security Group of this instance allows SSH access. Let me know if this helps!
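Once on the instance (via the standard SSH approach), a minimal sketch of getting a shell inside the container; the container ID is a placeholder you would read from the docker ps output:

docker ps                                # list running containers and note the ID of your task's container
docker exec -it <container-id> /bin/sh   # open a shell inside it (use /bin/bash if the image includes it)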