How to deploy a Java application to about 100 EC2 instances? - deployment

Here is my case:
I have about 100 EC2 instances, and each one runs a Java application (a Java SE application, not Java EE). I want to deploy my compiled JAR files and libraries to all the instances and then start the application on each of them. Because the application changes from time to time, I have to spend two hours on this job every time.
Do you know of a management tool or software that can help me do this work automatically? What is your practice for deploying this kind of application?
Do you have an automated deployment workflow for development on AWS?

Kwatee (http://www.kwatee.net), our free and lightweight deployment tool, supports EC2 instances as well as Elastic Load Balancing. There's a short screencast of a small EC2 deployment here.

Since you're using Java, you can use AWS Elastic Beanstalk.
Development Lifecycle:
http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/create_deploy_Java.sdlc.html
Managing the Environment:
http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/using-features.managing.html
There are many more article links on the same page; you will probably need to read all of them, but these are the two that I feel are most related to your question. I haven't used this product, so I can't give any first-hand experience, but it seems to be designed to help with your exact problem.
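For reference, the Elastic Beanstalk CLI workflow is roughly the following sketch (the application and environment names are placeholders; check the linked docs for the exact platform string your version expects):
eb init my-javase-app --platform java --region us-east-1
eb create my-javase-env
eb deploy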

Boxfuse does exactly what you want.
For your Java SE application you literally only have to execute:
boxfuse create my-javase-app -apptype=load-balanced
boxfuse scale my-javase-app -capacity=100:t2.micro
boxfuse run my-javase-app-1.0.jar -env=prod
This will:
Create a new application and configure it to use an ELB
Scale it to 100 t2.micro instances
Create an AMI
Create an ELB
Create a security group
Create an auto-scaling group
Launch your instance(s)
Any subsequent update will be done as a zero downtime blue/green deployment.

You can use an Auto Scaling launch configuration and an Auto Scaling group to launch 100 EC2 instances. But hold on: you first need to request an EC2 instance limit increase for your instance type from AWS Support. Typically, it takes one business day to complete the request.
First, create the launch configuration in the AWS Console. The launch configuration includes the instance type, storage, and security group, and you can add user-data scripts that run when each EC2 instance launches, as sketched below.
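If you prefer to script this step, a rough AWS CLI sketch might look like the following (the AMI ID, security group, and script name are placeholders, not values from the question):
aws autoscaling create-launch-configuration \
  --launch-configuration-name my-java-app-lc \
  --image-id ami-xxxxxxxx \
  --instance-type t2.micro \
  --security-groups my-app-sg \
  --user-data file://start-app.sh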
Next is the Auto Scaling group. You have to choose which launch configuration the group should use. In the Auto Scaling group configuration you must specify a min and max count: the group launches EC2 instances at the min count and can grow up to the max count. CloudWatch monitoring can be used with the Auto Scaling group; CloudWatch alarms on EC2 CPU utilization can trigger the scaling.
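Continuing the CLI sketch above (again with placeholder names, and a simple-scaling policy chosen just for illustration):
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-java-app-asg \
  --launch-configuration-name my-java-app-lc \
  --min-size 1 --max-size 100 \
  --availability-zones us-east-1a us-east-1b
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-java-app-asg \
  --policy-name scale-out-on-cpu \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment 2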
Elastic Load Balancing will help distribute traffic among the EC2 instances. If you want to use an ELB, you have to create it before the Auto Scaling group. In the Auto Scaling group configuration you can then include the ELB so it handles and distributes the traffic.
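A hedged sketch of that last step with a classic ELB (listener ports and names are assumptions; adjust them to your application):
aws elb create-load-balancer \
  --load-balancer-name my-app-elb \
  --listeners Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=8080 \
  --availability-zones us-east-1a us-east-1b
aws autoscaling attach-load-balancers \
  --auto-scaling-group-name my-java-app-asg \
  --load-balancer-names my-app-elb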

Related

Web application deployment approach using Google Cloud - GKE

I am deploying a Python + TensorFlow + Flask application using a fully managed Google Cloud Run service (1 vCPU and 4 GB RAM).
The system works fine but it is really slow, so I am evaluating ways of making it faster (it needs to run 20-30 times faster than it does now).
What would be the best approach?
To use a Kubernetes cluster with one or two powerful machines
To use a Kubernetes cluster with 3-5 weaker machines
To forget about Kubernetes/Docker and run everything on a single powerful VM
Something else maybe?
For now I don't expect to have more than 10 users at a time, but I want to be able to scale it up eventually.
You might want to evaluate according to your use case.
Per this article, Fully managed Cloud Run is an ideal serverless platform for stateless containerized microservices that don’t require Kubernetes features like namespaces, co-location of containers in pods (sidecars) or node allocation and management.
GKE is a great choice if you are looking for a container orchestration platform that offers advanced scalability and configuration flexibility.
You mentioned you are looking for the cheapest/easiest method to develop, but that will probably not be as scalable, efficient, or manageable; you might want to take a closer look at all the cloud compute options in GCP to see what could benefit your use case the most.
You mentioned your use case is CPU intensive, so you might want to leverage the high-CPU machine types. These can be used directly by creating a VM or an instance group, or within other services like GKE or App Engine; a small example follows.
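For illustration only (the machine type and instance name are placeholders, not a sizing recommendation for your workload):
gcloud compute instances create tf-inference-1 \
  --machine-type=n1-highcpu-16 \
  --zone=us-central1-a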

What's the value proposition of running Cloud Run versus a normal service in GKE?

Is there any advantage if I use Cloud Run instead of deploying a normal service/container in GKE?
I will try to add my perspective.
This answer does not cover running containers in Cloud Run on GKE. The reason is that we wanted an almost zero-cost solution for a legacy PHP website. Cloud Run fit perfectly, and we had an easy time both porting the code and learning Cloud Run.
We needed to do something with a legacy PHP website. This website was running on Windows Server 2012, IIS and PHP 7.0x. The cost was over $100.00 per month - mostly for Windows licensing fees for a VM in the cloud. The site was not accessed very much but was needed for various business reasons.
A decision was made on Thursday (4/18/2019) that we needed to learn Google Cloud Run, so we decided to port this site to a container and try to run the container in Google Cloud. Nothing like a real-world example to learn the details.
Friday, we ported the PHP code to Apache. A very easy process. We did not worry about SSL as we intended to use Cloud Run SSL.
Saturday we started to learn Cloud Run. Within an hour we had the Hello World PHP example running. Link.
Within two hours we had the containerized website running in Cloud Run. Again, very simple.
Then we learned how to configure Cloud Run SSL with our DNS server.
End result:
Almost zero cost for a PHP website running in Cloud Run.
Approximately 1.5 days of effort to port the legacy code and learn Cloud Run.
Savings of about $100.00 per month (no Windows IIS server).
We do not have to worry about SSL certificates from now on for this site.
For small websites that are static, Cloud Run is a killer product. The learning curve is very small even if you do not know Google Cloud. You just need to configure gcloud for container builds and deployment. This means developers can work independently without needing to master GCP.
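A rough sketch of the build-and-deploy cycle described above (PROJECT-ID and the service name are placeholders, and the exact flags may differ by gcloud version):
gcloud builds submit --tag gcr.io/PROJECT-ID/legacy-site
gcloud run deploy legacy-site --image gcr.io/PROJECT-ID/legacy-site \
  --platform managed --region us-central1 --allow-unauthenticated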
There are many distinctions between using Cloud Run to expose a service and running it natively in GKE. The primary one is that Cloud Run provides more of a serverless infrastructure: you declare that you want to expose a service and then let GCP do the rest. Contrast this with creating a Kubernetes cluster and then defining your service in pods. With a manually created GKE cluster, the nodes and environment are always on, which means you are billed for them regardless of utilization. With Cloud Run, your service is merely available, and you are billed only for actual consumption. If your service is not being called, your costs are zero. Another advantage is that you don't have to predict your utilization needs and allocate sufficient nodes; scaling happens automatically for you.
See also these presentations from Google Next 19:
Migrating from a Monolith to Microservices (Cloud Next '19)
What's New in Serverless Compute? (Cloud Next '19)
Run Containers on GCP's Serverless Infrastructure (Cloud Next '19)
Run Cloud Functions Everywhere (Cloud Next '19)
Container Once, Serverless Anywhere (Cloud Next '19)

Adding Desired State Configuration extension to a service fabric VMSS

We recently needed to add the Microsoft.Powershell.DSC extension to the VMSS that contains our Service Fabric cluster. We redeployed the cluster using our ARM template, with the addition of the new extension for DSC. During the deployment we observed that as many as 4 out of 5 scale set instances were in the restarting state at a given time. The services in our cluster were also unresponsive during that time. The outage was only a few minutes long, but this seems like something that should not happen.
Reliability Level: Silver
Durability Level: Bronze
This is caused by the selected durability level, 'Bronze'.
The durability tier is used to indicate to the system the privileges that your VMs have with the underlying Azure infrastructure. In the primary node type, this privilege allows Service Fabric to pause any VM level infrastructure request (such as a VM reboot, VM reimage, or VM migration) that impact the quorum requirements for the system services and your stateful services. In the non-primary node types, this privilege allows Service Fabric to pause any VM level infrastructure requests like VM reboot, VM reimage, VM migration etc., that impact the quorum requirements for your stateful services running in it.
Bronze - No privileges. This is the default and is recommended if you are only running stateless workloads in your cluster.
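If you decide to raise the tier to get those pause privileges, a hedged Azure CLI sketch (assuming the az sf cluster durability update command is available in your CLI version; resource names are placeholders):
az sf cluster durability update \
  --resource-group my-sf-rg \
  --cluster-name my-sf-cluster \
  --node-type nt1vm \
  --durability-level Silver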
I suggest reading this article. It's an MS employee blog. I'll copy out the relevant part:
If you don’t mind all your VMs being rebooted at the same time, you can set upgradePolicy to “Automatic”. Otherwise set it to “Manual” and take care of applying changes to the scale set model to individual VMs yourself. It is fairly easy to script rolling out the update to VMs while maintaining application uptime. See https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-scale-set for more details.
If your scale set is in a Service Fabric cluster, certain updates like changing OS version are blocked (currently – that will change in future), and it is recommended that upgradePolicy be set to “Automatic”, as Service Fabric takes care of safely applying model changes (like updated extension settings) while maintaining availability.
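For completeness, flipping a scale set's upgrade policy can also be scripted via the Azure CLI's generic --set argument (resource names are placeholders):
az vmss update \
  --resource-group my-sf-rg \
  --name my-scaleset \
  --set upgradePolicy.mode=Automatic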

Spring cloud data flow deployment

I want to deploy Spring Cloud Data Flow on several hosts.
I will deploy the Spring Cloud Data Flow server on one host (host-A) and deploy the agents on the other hosts (these hosts are in charge of executing the tasks).
Except for host-A, all the other hosts run the same tasks.
Shall I build on the basis of the Spring Cloud Data Flow Local Server or the Spring Cloud Data Flow Apache YARN Server, or is there a better choice?
Do you mean how the apps are deployed on several hosts? If so, the apps are deployed using the underlying deployer implementation. For instance, with the local deployer, each app is deployed by spawning a new process. You can scale out the number of app instances using the count property during stream deployment. I am not sure what you mean by the agents here.
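For example, in the SCDF shell the count property is passed at deploy time (the stream definition and app name here are hypothetical, and the exact property prefix depends on your SCDF version):
dataflow:> stream create --name httpingest --definition "http | log"
dataflow:> stream deploy --name httpingest --properties "deployer.log.count=3"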

cloudformation best practices in AWS

We are at an early stage of running our services on AWS. We have our servers hosted in AWS, in a VPC with private and public subnets, and have multiple instances in the private and public subnets using an ELB and an Auto Scaling setup (using AMIs) for the frontend web servers. The whole environment (VPC, security groups, EC2 instances, DB instances, S3 buckets, CloudFront) was set up manually using the AWS Console at first.
The application servers host JBoss, and WAR files are deployed to them.
As per AWS best practices, we want to create the whole infrastructure using CloudFormation and set up test/stage/prod environments.
- Would it be a good idea to have all the above components (VPC, security groups, EC2 instances, DB instances, S3 buckets, CloudFront, etc.) in one CloudFormation stack/template? Or should we create two stacks: 1) network-related components and 2) EC2-related components?
- Once we have a prod environment running from a CloudFormation stack and we want to roll out new AMIs to prod in the future, how can we update the live running EC2 instances using CloudFormation without interruptions?
- What are the best practices/possible approaches for code deployment to multiple EC2 nodes when a new release is done? We don't use continuous integration at the moment.
It's a very good idea to separate your setup into multiple stacks. One obvious reason is that stacks have certain limits that you may eventually reach. A more practical reason is that you don't really need to update, say, your VPC every time you just want to deploy a new version; the network architecture typically changes less frequently. Another reason to avoid one huge template, or to avoid touching an "important" template needlessly, is that you always run the risk of messing things up: if there's an error in your template and you remove an important resource by accident (e.g. it gets commented out), you'll be very sorry. So separating stacks out of sheer caution is probably a good idea. A sketch of the split follows.
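In practice the split often looks like one network stack and one application stack, created separately (the template and stack names below are just an illustration):
aws cloudformation create-stack --stack-name prod-network \
  --template-body file://network.yaml
aws cloudformation create-stack --stack-name prod-app \
  --template-body file://app.yaml \
  --parameters ParameterKey=VpcId,ParameterValue=vpc-xxxxxxxx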
If you want to update your application, you can simply update the template with the new AMIs, and CloudFormation will know what needs to be recreated or updated. You can read about rolling updates here. However, I'd recommend considering something a bit more straightforward for deploying your actual code, like Ansible or Chef.
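A minimal sketch of such an update, assuming the application template exposes the AMI as a parameter named AmiId (a made-up name for illustration):
aws cloudformation update-stack --stack-name prod-app \
  --template-body file://app.yaml \
  --parameters ParameterKey=AmiId,ParameterValue=ami-yyyyyyyy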
I'd also recommend you look into Docker for packaging and deploying your application's nodes. Very handy.