Can I run multiple services simultaneously on the same ECS Fargate cluster?

I have an ECS Fargate cluster up and running, and it has 1 service and 1 task definition attached to it.
The task definition already describes 2 container images. This cluster is up and running.
Can I create a new service for another application and configure it on this existing ECS cluster?
If yes, will both services run simultaneously?

From the AWS documentation on Amazon ECS clusters:
An Amazon ECS cluster is a logical grouping of tasks or services. Your
tasks and services are run on infrastructure that is registered to a
cluster.
So I believe you should be able to run multiple services in the same ECS cluster, each attached to its own task definition, and they will run simultaneously.
Source Documentation - https://docs.aws.amazon.com/AmazonECS/latest/developerguide/clusters.html
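To illustrate, adding a second Fargate service to an existing cluster is a single API call. A minimal CLI sketch, where the cluster, service, task definition, subnet, and security group names are all placeholders for your own values:

```shell
# Create a second service ("app2-service") on the existing cluster.
# All names and IDs below are hypothetical examples.
aws ecs create-service \
  --cluster my-cluster \
  --service-name app2-service \
  --task-definition app2-taskdef:1 \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0abc123],securityGroups=[sg-0abc123],assignPublicIp=ENABLED}"
```

The existing service is untouched; the cluster simply runs tasks for both services side by side.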

Related

Deploy application in EKS cluster from another EKS cluster

I am trying to deploy my application programmatically from one EKS cluster to all my other EKS clusters. To do that, I am getting the kubeconfig details using the EKS DescribeCluster API.
Steps in my code:
1. Get the name and region of the EKS cluster.
2. Describe the EKS cluster using the AWS EKS SDK.
3. Using the describe data, build a kubeclient.
4. Using the kubeclient, deploy the application into the EKS cluster.
The above steps work from my local machine for any EKS cluster in my account, but if I run my program from one EKS cluster (cluster1) to deploy my application into another (cluster2),
then I get a timeout error at the 4th step.
Can you help me figure out what I am missing?
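For reference, the four steps above map onto these CLI equivalents (the cluster name, region, and manifest file are placeholders):

```shell
# Steps 1-2: fetch the target cluster's endpoint and CA data.
aws eks describe-cluster --name cluster2 --region us-east-1

# Step 3: build client credentials. update-kubeconfig does the
# describe-cluster + IAM token wiring for you and writes a kubeconfig entry.
aws eks update-kubeconfig --name cluster2 --region us-east-1

# Step 4: deploy using that kubeconfig.
kubectl apply -f deployment.yaml
```

A timeout at the last step means the caller cannot reach cluster2's API endpoint over the network, which is worth checking independently of the SDK code.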
I am not sure exactly what you are planning, but there are tools available that you can deploy on EKS which can then deploy to any Kubernetes cluster, cloud account, etc.
You should check out the open-source tool Spinnaker for this.
My team helps companies use Spinnaker in their environments.

Airflow with KubernetesExecutor workers (EKS) and webserver+scheduler on EC2

I wanted to know if it's possible to set up the KubernetesExecutor on Airflow while having the webserver and scheduler run on an EC2 instance?
Meaning that tasks would run on Kubernetes pods (EKS in my case), but the base services on a regular EC2 instance.
I tried to find information about this but came up short.
The following quote is from Airflow's docs, and it's the reason I'm asking this question:
KubernetesExecutor runs as a process in the Airflow Scheduler. The scheduler itself does not necessarily need to be running on Kubernetes, but does need access to a Kubernetes cluster.
Thanks in advance!
Yes, this is entirely possible.
You just need to run your airflow scheduler and airflow webserver on EC2 and give the EC2 instance all the necessary access (likely via a service account, but this is your decision and deployment configuration) to be able to spin up pods on your EKS cluster.
There is nothing special about it beyond learning how to run and configure the components to talk to each other; there are no ready-to-use recipes, so you will simply have to follow Airflow's configuration parameters and set up the authentication scheme you need.
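As a rough sketch of that wiring on the EC2 instance: point Airflow at the KubernetesExecutor and give it a kubeconfig for the EKS cluster. The cluster name, region, and path are placeholders, and the exact config section/option names vary between Airflow versions, so check the docs for yours:

```shell
# Generate a kubeconfig for the EKS cluster on the EC2 instance
# (requires the instance's IAM identity to have EKS access):
aws eks update-kubeconfig --name my-eks-cluster --region us-east-1

# Tell Airflow to use the KubernetesExecutor, and tell the executor
# it is running OUTSIDE the cluster and where its kubeconfig lives:
export AIRFLOW__CORE__EXECUTOR=KubernetesExecutor
export AIRFLOW__KUBERNETES__IN_CLUSTER=False
export AIRFLOW__KUBERNETES__CONFIG_FILE=/home/ec2-user/.kube/config

# Base services run on the EC2 instance as usual:
airflow scheduler &
airflow webserver
```

Task pods are then launched into EKS while the scheduler and webserver stay on EC2.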

Can I add nodes running on my machine to AWS EKS cluster?

Well, I read the user guide for the AWS EKS service and successfully created a managed node group for an EKS cluster.
I don't know how to add nodes running on my own machine to the EKS cluster, or whether EKS even supports this; I didn't find any clue in the documentation. I read the 'self-managed node group' chapter, which covers adding self-managed EC2 instances and an Auto Scaling group to the EKS cluster, rather than a private node running on another cloud (such as Azure or Google Cloud) or on my own machine.
Does EKS support this? If so, how do I do it?
This is not possible. It is (implicitly) called out in this page. All worker nodes need to be deployed in the same VPC where you deployed the control plane (not necessarily the same subnets though). EKS Anywhere (to be launched later this year) will allow you to deploy a complete EKS cluster (control plane + workers) outside of an AWS region (but it won't allow running the control plane in AWS and workers locally).
As far as I know, the EKS service doesn't support adding your own nodes to the cluster. But the upcoming 'EKS Anywhere' offering will; it is not available yet, but should be soon.

How to connect to AWS ECS cluster?

I have successfully created an ECS cluster (EC2 Linux + Networking). Is it possible to log in to the cluster to perform some administrative tasks? I have not deployed any containers or tasks to it yet. I can't find any hints for this in the AWS console or the AWS documentation.
The "cluster" is just a logical grouping of resources; it isn't itself a server you can log into. You perform actions on the cluster via the AWS console or the AWS API. You can, however, connect to the individual EC2 instances managed by the ECS cluster, via the standard SSH method you would use for any other EC2 Linux server.
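For example, one way to find which EC2 instances back the cluster and SSH into one (cluster name, key pair, and IP are placeholders):

```shell
# List the container instances registered to the cluster:
aws ecs list-container-instances --cluster my-cluster

# Map a container-instance ARN to its underlying EC2 instance ID:
aws ecs describe-container-instances \
  --cluster my-cluster \
  --container-instances <container-instance-arn> \
  --query 'containerInstances[].ec2InstanceId'

# SSH in as usual (the ECS-optimized Amazon Linux AMI uses ec2-user):
ssh -i my-key.pem ec2-user@<instance-public-ip>
```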
ECS takes care of most of the administrative work for you. You simply have to deploy and manage your applications on ECS. If you set up ECS correctly, you will never have to connect to the instances.
Follow these instructions to deploy your service (docker image): https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service.html
Also, you can use CloudWatch to store container logs, so that you don't have to connect to instances to check them: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html

How to Scale up and Scale down cluster instances in AWS ECS

We have an application that creates/starts/stops containers inside AWS ECS. We are not using ECS services because we don't want a container to be restarted if it is stopped by the application.
So how can we automate scaling the cluster instances in and out in ECS without using ECS services?
The documentation below walks you through, step by step, how to scale your container instances:
Scaling Container Instances
So how this works is:
Say you have one container instance with 2 services running on it.
You need to scale an ECS service up, but it can't scale because there are no resources available on the single container instance.
Following the documentation, you can set up a CloudWatch alarm on, say, the MemoryReservation metric for your cluster.
When the memory reservation of your cluster rises above 75% (meaning that only 25% of the memory in your cluster is available for new tasks to reserve), the alarm triggers the Auto Scaling group to add another instance, providing more resources for your tasks and services.
Depending on the Amazon EC2 instance types that you use in your
clusters, and quantity of container instances that you have in a
cluster, your tasks have a limited amount of resources that they can
use while running. Amazon ECS monitors the resources available in the
cluster to work with the schedulers to place tasks. If your cluster
runs low on any of these resources, such as memory, you are eventually
unable to launch more tasks until you add more container instances,
reduce the number of desired tasks in a service, or stop some of the
running tasks in your cluster to free up the constrained resource.
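The alarm-plus-Auto-Scaling setup described above can be sketched with the CLI. The cluster, Auto Scaling group, and policy names are hypothetical, and the thresholds are just the 75% example from the answer:

```shell
# Scale-out policy on the Auto Scaling group behind the cluster;
# this command returns a policy ARN to use in the alarm below.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-ecs-asg \
  --policy-name ecs-scale-out \
  --scaling-adjustment 1 \
  --adjustment-type ChangeInCapacity

# Alarm when the cluster's memory reservation stays above 75%,
# wired to the scale-out policy:
aws cloudwatch put-metric-alarm \
  --alarm-name ecs-memory-reservation-high \
  --namespace AWS/ECS \
  --metric-name MemoryReservation \
  --dimensions Name=ClusterName,Value=my-cluster \
  --statistic Average \
  --period 60 \
  --evaluation-periods 3 \
  --threshold 75 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions <scale-out-policy-arn>
```

A second alarm with a low threshold and a negative scaling adjustment handles scale-in the same way. Because this acts on the Auto Scaling group directly, it works whether or not you use ECS services.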