I'm running Superset in AWS ECS using Fargate. This instance of Superset is for internal use only. I want to configure ECS to scale to zero tasks when not in use. I am aware that it will take time (possibly minutes) to come back up; the end users of this application are content with waiting a few minutes.
Situation:
AWS ECS deployed using Fargate
Autoscaling set to a max of 2 and a min of 0
Want to scale to 0 when not in use (after, say, an hour of inactivity)
Scaling an ECS service down to zero automatically based on inactivity is not possible out of the box. ECS is designed to run tasks continuously, unlike Lambda functions, which spin up and down as requests arrive.
However, if your internal users only access the application during known hours (say, business hours), then you can use scheduled scaling to scale to zero outside those hours.
You can use put-scheduled-action for that. For example, to drop the service to zero tasks at a fixed time each day (cluster and service names are placeholders):
aws application-autoscaling put-scheduled-action --service-namespace ecs \
    --resource-id service/your-cluster/your-service \
    --scalable-dimension ecs:service:DesiredCount \
    --scheduled-action-name scale-to-zero \
    --schedule "cron(15 12 * * ? *)" \
    --scalable-target-action MinCapacity=0,MaxCapacity=0
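A matching action can bring the service back up before business hours; a sketch with assumed names and times:
# same placeholder cluster/service as above
aws application-autoscaling put-scheduled-action --service-namespace ecs \
    --resource-id service/your-cluster/your-service \
    --scalable-dimension ecs:service:DesiredCount \
    --scheduled-action-name scale-up-morning \
    --schedule "cron(0 8 ? * MON-FRI *)" \
    --scalable-target-action MinCapacity=1,MaxCapacity=2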
This AWS Blog post explains it in more detail: https://aws.amazon.com/blogs/containers/optimizing-amazon-elastic-container-service-for-cost-using-scheduled-scaling/
I have a fairly small cluster of 6 nodes: 3 client nodes and 3 server nodes. Important configurations:
storeKeepBinary = true,
cacheMode = Partitioned
atomicityMode = Atomic (about 5-8 of the 25 caches use TRANSACTIONAL instead)
backups = 1
readFromBackups = false
no persistence
When I run the app for a load/performance test on-prem on 2 large boxes (3 clients on one box, 3 servers on the other, all in Docker containers), I get decent performance.
However, when I move them to AWS and run them in EKS, the only change I make is to switch cluster discovery from the default TCP to Kubernetes-based discovery, and I run the same test.
But now the performance is very bad; I keep getting:
WARN [sys-#145%test%] - [org.apache.ignite] First 10 long-running transactions [total=3]
Here the transactions run for more than a minute.
In other cases I get:
WARN [sys-#196%test-2%] - [org.apache.ignite] First 10 long-running cache futures [total=1]
Here the associated future has been running for more than 3 minutes.
Most of the results a Google search turns up point to a flaky/inconsistent network as the cause.
The app and the test seem to be fine, since the same setup works on-prem with decent performance.
I wanted to check whether others have faced this, or whether something else needs to be done when running on Kubernetes in the public cloud. For instance, somewhere I read that nodes need to be pinned to hosts in a cloud/virtual environment, but that it's not mandatory.
TIA
We stand up a lot of clusters for testing/PoC/development, and it's up to us to remember to delete them.
What I would like is a way to set a TTL on an entire GKE cluster and have it deleted/purged automatically.
I could tag the clusters with a timestamp at creation and have an external process run on a schedule to reap old clusters, but it'd be great if I didn't have to do that. It might be the only way, but maybe there is a GKE/k8s feature for this?
Is there a way to have the cluster delete itself without relying on an external service? I suppose it could spawn a cloud function itself, but I'm wondering whether there is a native GKE/k8s feature that does this more elegantly.
You can create a GKE cluster with alpha features enabled. Such clusters exist for 30 days at most and are then deleted automatically.
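For example (cluster name and zone are placeholders):
# creates a throwaway alpha cluster that GKE deletes after 30 days
gcloud container clusters create ttl-test-cluster \
    --zone us-central1-a \
    --enable-kubernetes-alpha
Keep in mind that alpha clusters cannot be upgraded and are not covered by an SLA, so they only make sense for throwaway environments.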
Read more: https://cloud.google.com/kubernetes-engine/docs/concepts/alpha-clusters
Try Cloud Scheduler and hook it up to your build server. Cloud Scheduler supports HTTP, App Engine, and Pub/Sub targets.
I don't believe there is a native way to do this, but it doesn't seem unreasonable to have Cloud Scheduler periodically trigger a Cloud Function that looks for appropriately labeled clusters and deletes them via the API.
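As a rough sketch of what such a reaper could run (the expires label and its epoch-timestamp format are assumptions, not a GKE convention):
#!/bin/bash
# Delete every cluster whose (hypothetical) "expires" label holds an epoch time in the past.
now=$(date +%s)
gcloud container clusters list \
    --filter="resourceLabels.expires:*" \
    --format="csv[no-heading](name,location,resourceLabels.expires)" |
while IFS=, read -r name location expires; do
  if [ "$expires" -lt "$now" ]; then
    gcloud container clusters delete "$name" --location "$location" --quiet
  fi
done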
I am running an ECS service using Fargate on AWS. Each task completes a single operation and dies (it fetches a message from an SQS queue and decodes/encodes a video file). I designed an autoscaling policy like this:
If the SQS queue size is more than 5, increase the desired count by 1 (repeat every 60 seconds).
If the SQS queue size is less than 2, decrease the desired count by 1 (repeat every 60 seconds).
But what AWS does is that when the queue size drops below 2, it kills running tasks, leaving the corresponding operation "broken". I don't want AWS to kill the running tasks (they die on their own once the command completes); I just want it to set the desired count to 0 so that the tasks don't get respawned. So literally, I want my tasks to be unstoppable during scale-in.
How can I achieve this with an ECS service and aws_ecs_autoscaling_target? Please note that I am using Terraform to provision the service.
Thanks in advance.
I had to solve this with a different approach. I created a small Lambda function that gets triggered by the CloudWatch alarm and launches a Fargate task via the RunTask API (StartTask targets specific EC2 container instances and does not apply to Fargate). This workflow suited the job-per-task pattern here better than an autoscaling policy.
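What the Lambda does boils down to a single RunTask call; here it is sketched as the equivalent CLI invocation (cluster, task definition, and subnet are placeholders):
# launch one Fargate task for the next queued job; all names are placeholders
aws ecs run-task --cluster video-jobs \
    --launch-type FARGATE \
    --task-definition video-encoder:1 \
    --count 1 \
    --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],assignPublicIp=ENABLED}'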
I have a service which runs 1 task. The task takes 2 hours to run and runs daily. My ideal scenario would be this:
1. I update my service from 0 desired tasks to 1 desired task.
2. ECS sees that in order to run the service it needs an EC2 instance, so it spins one up to run the task.
3. When the task finishes, it updates the service back to 0 desired tasks.
4. ECS sees that no instance is needed to run 0 tasks and turns it off.
From the ECS console it looks like this is possible, but in reality, when I scale my service from 0 to 1 task, it just complains that there are no instances to run the task rather than autoscaling an instance. I set the cluster's auto scaling policy to min=0, desired=1, max=1, but it makes no difference.
I'd like to know if my ideal scenario is indeed possible, or if there is a better way to achieve this goal.
Thanks in advance,
Unfortunately, points 2 and 4 are not true for ECS (EC2 launch type). By default it will neither launch the EC2 instance nor terminate it.
Fargate is generally more costly than ECS (EC2 launch type). But for your use case (a 2-hour task once a day), Fargate will be much cheaper [1] than ECS (EC2 launch type), since you only pay while the task runs.
But even Fargate would not be the best option. For your use case the best option would be AWS Batch [2]. Batch uses ECS as a backend, and the main advantage of using Batch is that it also performs steps 2 and 4 from your scenario.
[1] https://aws.amazon.com/fargate/pricing/
[2] https://docs.aws.amazon.com/batch/latest/userguide/what-is-batch.html
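For illustration, once a job queue and a job definition exist, kicking off the daily run with Batch is a single call (names here are placeholders):
# submit the daily job; queue and definition names are placeholders
aws batch submit-job \
    --job-name daily-run \
    --job-queue my-job-queue \
    --job-definition my-job-definition
Batch then provisions the compute needed to run the container and tears it down afterwards, which is exactly steps 2 and 4.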
Here is my case:
I have about 100 EC2 instances, each running a Java application (a Java SE application, not Java EE). I want to deploy my compiled jar files and libraries to all the instances and then start the application on each of them. Because the application changes from time to time, I have to spend two hours on this job every time.
Do you know of a management tool or software that can do this work automatically, and what is your practice for deploying such an application?
Do you have an auto deployment workflow for development on AWS?
Kwatee (http://www.kwatee.net), our free and lightweight deployment tool, supports EC2 instances as well as elastic load balancing. There's a short screencast of a small EC2 deployment on the site.
Since you're using Java, you can utilize AWS Elastic Beanstalk.
Development Lifecycle:
http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/create_deploy_Java.sdlc.html
Managing the Environment:
http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/using-features.managing.html
There are many more article links on the same page; you'll probably need to read all of them, but these are the two that I feel are most related to your question. I haven't used this product, so I can't offer first-hand experience, but it seems designed for your exact problem.
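To give a feel for the workflow with the EB CLI (application and environment names are placeholders; this assumes the EB CLI is installed and the jar is in the current directory):
# placeholder application/environment names throughout
eb init my-javase-app --platform java --region us-east-1
eb create my-javase-env
eb deploy    # repeat after each new build to roll out the updated jar
Beanstalk then handles provisioning, load balancing, and rolling each new version out to the instances.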
Boxfuse does exactly what you want.
For your Java SE application you literally only have to execute:
boxfuse create my-javase-app -apptype=load-balanced
boxfuse scale my-javase-app -capacity=100:t2.micro
boxfuse run my-javase-app-1.0.jar -env=prod
This will:
Create a new application and configure it to use an ELB
Scale it to 100 t2.micro instances
Create AMI
Create an ELB
Create a security group
Create an auto-scaling group
Launch your instance(s)
Any subsequent update will be done as a zero-downtime blue/green deployment.
You can use an Auto Scaling launch configuration and an Auto Scaling group to launch 100 EC2 instances. But hold on: you may first need to request an EC2 instance limit increase for your instance type from AWS Support, which typically takes one business day.
First, create the launch configuration in the AWS Console. A launch configuration includes the instance type, storage, and security group, and you can add scripts (user data) to run when each EC2 instance launches.
Next is the Auto Scaling group. Choose the launch configuration for the group and specify min and max counts; the group launches instances to meet the min count and can scale up to the max count. CloudWatch monitoring can drive the group via alarms, for example on EC2 CPU utilization.
Elastic Load Balancing distributes traffic among the EC2 instances. If you want to use an ELB, create it before the Auto Scaling group; you can then attach the ELB to the group so it handles and distributes traffic.
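A sketch of the same setup from the CLI (the AMI ID, names, and subnet are placeholders):
# placeholder AMI, security group, and subnet IDs throughout
aws autoscaling create-launch-configuration \
    --launch-configuration-name javase-app-lc \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro \
    --security-groups sg-0123456789abcdef0

aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name javase-app-asg \
    --launch-configuration-name javase-app-lc \
    --min-size 100 --max-size 100 \
    --vpc-zone-identifier "subnet-0123456789abcdef0"
Baking the jar deployment into the AMI or the launch configuration's user data script is what removes the two hours of manual work per release.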