AWS OpsWorks - updating the app, will it drainstop Apache? - deployment

I have an autoscaled environment in production, which is currently havoc whenever we update the build on it, so we thought we had better move to AWS OpsWorks to make the process easier for us.
We can't afford any downtime, not now, not ever; even a second lost while updating a build and perhaps restarting Apache costs a fortune.
We can't possibly afford to just let our machines be terminated by the autoscaling policy when a new update comes in on a new AMI-based EC2 machine. When autoscaling terminates a machine, under any circumstances, it doesn't care about the requests still running on that machine; it just shuts it down. What it should do instead is a graceful shutdown, something like a drainstop on Apache, so it can at least finish the work in hand first.
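(For reference, Apache itself supports this kind of drain natively; a minimal sketch, assuming Apache 2.2 or later with the apachectl wrapper on the PATH - on some distributions it is called apache2ctl:)
# finish the requests currently in flight, accept no new ones, then exit
apachectl -k graceful-stop
# or restart gracefully to pick up new code/config without dropping in-flight requests
apachectl -k graceful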
Now that OpsWorks is here, and we are planning to use it to update our builds more automagically, will a new update push run the recipes again? In fact, this paragraph which I just read worries me more; does it mean that it won't update the build automatically on new instances?
After you have modified the app settings, you must deploy the app. When you first deploy an app, the Deploy recipes download the code and related files to the app server instances, which then run the local copy. If you modify the app in the repository, you must ensure that the updated code and related files are installed on your app server instances. AWS OpsWorks automatically deploys the current app version to new instances when they are started. For existing instances, however, the situation is different:
You must manually deploy the updated app to online instances.
You do not have to deploy the updated app to offline instance store-backed instances, including load-based and time-based instances; AWS OpsWorks automatically deploys the latest app version when they are restarted.
You must restart offline EBS-backed 24/7 instances and manually deploy the app; AWS OpsWorks does not run the Deploy recipes on these instances when they are restarted.
You cannot restart offline EBS-backed load-based and time-based instances, so the simplest approach is to delete the offline instances and add new instances to replace them. Because they are now new instances, AWS OpsWorks will automatically deploy the current app version when they are started.

First of all, let me state that I started looking into OpsWorks only about two weeks ago. So I'm far from being a pro. But here's my understanding of how it works:
We need to differentiate between instances that are instance-store-backed and instances that are EBS-backed:
The instance store disappears together with the instance once it's shut down. Therefore, bringing it up again starts from zero: it has to download the latest app again and will deploy that.
For EBS-backed instances, the deployed code remains intact (persisted), outliving the instance to which the volume is attached. Therefore, bringing an EBS-backed instance back to life will not update your app automatically; the old version remains deployed.
So your first decision needs to be what instance type to use. It is generally a good idea to have the same version of your app on all instances. Therefore I would suggest going with EBS-backed instances which will not automatically deploy new versions when booting up. In this case, deploying a new version would mean to bring up brand new instances that will be running the new code automatically (as they are new), and then destroying the old instances. You will have a very short time during which old and new code will run side by side.
However, if you prefer to have always the very latest version deployed and can afford risking discrepancies between the individual instances for an extended period of time (e.g. having different app versions deployed depending on when an instance was originally started), then instance store backed might be your choice. Every time a new instance spins up, the latest and greatest code will be deployed. If you want to update existing ones, just bring up new instances instead and kill the existing ones.
Both strategies should give you the desired effect of zero downtime. The difference is in when and how the latest code gets deployed. Combine this with HAProxy to have better control over which servers will be used. You can gradually move traffic from old instances to new instances, for example.
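To make the HAProxy part concrete, here is a minimal sketch of draining an old instance through the runtime API; the stats socket path /var/run/haproxy.sock, the backend name app_backend and the server names old1/new1 are all assumptions, and the drain state requires HAProxy 1.6 or later:
# stop sending new connections to the old instance; in-flight requests finish normally
echo "set server app_backend/old1 state drain" | socat stdio /var/run/haproxy.sock
# or shift traffic gradually by adjusting weights instead of an all-at-once cutover
echo "set server app_backend/new1 weight 75" | socat stdio /var/run/haproxy.sock
echo "set server app_backend/old1 weight 25" | socat stdio /var/run/haproxy.sock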

Related

Best Strategy for deploying test and dev app versions to Google Compute Platform

Google Compute Platform
I've got an Angular 2 app and a Node.js middleware (LoopBack) running as services on App Engine in a single project.
For the database, we have a Compute Engine running PostgreSQL in that same project.
What we want
The testing has gone well, and we now want to have a test version (for ongoing upgrade testing/demo/etc) and a release deployment that is more stable for our initial internal clients.
We are going to use a different database in psql for the release version, but could use the same server for our test and deployed apps.
Should we....?
create another GCP project and another gcloud setup on my local box to deploy to that new project for our release deployment,
or is it better to deploy multiple versions of the services to the single project with different prefixes - and how do I do that?
Cost is a big concern for our little nonprofit. :)
My recommendation is the following:
Create two projects, one for each database instance. You can mess around all you want in the test project, and don't have to worry about messing up your prod deployment. You would need to store your database credentials securely somewhere. A possible solution is to use Google Cloud Project Metadata, so your code can stay the same between projects.
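As a rough illustration of the metadata idea (the key name db-host and its value are made up, and this assumes the Cloud SDK locally plus access to the metadata server from your running instances):
# store a per-project setting once, from your workstation
gcloud compute project-info add-metadata --metadata db-host=10.0.0.5
# read it back at runtime from inside an instance
curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/project/attributes/db-host"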
When you are ready to deploy to production, I would recommend deploying a new version of your App Engine app in the production project, but not promoting it to the default.
gcloud app deploy --no-promote
This means customers will still go to the old version, but the new version will be deployed so you can make sure everything is working. After that, you can slowly (or quickly) move traffic over to the new version.
At about 8:45 into this video, traffic splitting is demoed:
https://vimeo.com/180426390
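If you prefer the command line over the console, traffic splitting can also be scripted with gcloud; a sketch assuming a service named default and version ids v1 (old) and v2 (new), both of which are placeholders:
# send 10% of traffic to the new version, keep 90% on the old one
gcloud app services set-traffic default --splits v1=0.9,v2=0.1
# once you are confident, move everything over
gcloud app services set-traffic default --splits v2=1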
Also, I would recommend aggressively shutting down unused App Engine Flexible deployments to save costs.

Redeploy Service Fabric application without version change

I've read about partial upgrades, but they always require changing some parts of the application package. I'd like to know if there's a way to redeploy a package without a version change, similar to what VS does when deploying to the dev cluster.
On your local dev cluster, VS simply deletes the application before it starts the re-deployment. You could do the same in your production cluster; however, this results in downtime, since the application is not accessible during that time.
What's the reason why you wouldn't want to use the regular monitored upgrade? It has many advantages, like automatic rollbacks and so on.

How does a production deployment for a cloud based software happen?

Let's take the example of a heavily used piece of cloud-based software.
When a deployment happens, let's say users are online.
Won't the server require stop & start after deploy? How is the service continuity maintained?
How will the ongoing user sessions / unsaved data be continued post deploy?
How is the risk managed? (Let's say an issue comes up after a deploy and you need to revert to the older version; now imagine a user has already worked on the new version and saved some data with it, which is not compatible with previous versions.)
Whether the server requires a stop & start after deploy depends on the technology being used. Any state information that is kept within the server can be externalized (i.e. written to disk) for the purpose of updating the application. Whether this is necessary, and whether it is done, again depends on the technology being used.

Upgrading Mongodb when config instances use the same binary as data instances

I'm about to upgrade a sharded MongoDB environment from 2.0.7 to 2.2.9; ultimately I want to upgrade to 2.4.9, but apparently I need to do this via 2.2. The release notes for 2.2 state that the config servers should have their binaries upgraded first, then the shards. I currently have the config instances using the same Mongo binary as the data instances. Essentially there are three shards, each with three replicas, and one replica out of each shard also functions as a config instance. Since they share a binary, I can't upgrade the config instances independently of some of the data instances.
Would upgrading some data instances before all of the config instances cause any problems, assuming I've disabled the balancer?
Should I change the config instances to use a different copy of the binary? If so, what's the best way to go about this for an existing production setup running on Ubuntu 12?
Should I remove the three data instances from the replica sets, upgrade the config instances, then start the data instances up again, effectively updating them as well, but in the right order? This last option is a bit hairy as some are primaries, so I would have to step them down before removing them from the replica sets. This last option would also occur again when I have to do the next upgrade, so I'm not really a fan.
I resolved this issue by:
Adding the binaries for the new version to a new folder.
Restarting the config instances using the new binaries, so that the data instances could continue to run with the old binaries.
Once all of the config servers were upgraded, I created yet another folder in which to put the same new binaries from step 1.
I then restarted the data instances using these new binaries.
Now the config instances and data instances on the same server are using the new binaries, but in different folders, so that it will be easier to upgrade them for the next release.
Note that there are other steps involved with the upgrade, and these are specified in the release notes which you should always follow. However, this is how I dealt with the shared binary problem which is not directly addressed in the release notes.
A lot of the tutorials seem to use a single binary for data and config instances on a single server, but this is problematic when it's time to upgrade. I'd suggest always using separate binaries for your config and data instances.
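To illustrate the separate-binary layout, here is a rough sketch; the folder names, dbpath, port and replica set name are assumptions, and the actual upgrade steps from the release notes still apply on top of this:
# unpack the new release into its own folder, leaving the old binaries untouched
mkdir -p /opt/mongodb-2.2.9-config
tar -xzf mongodb-linux-x86_64-2.2.9.tgz -C /opt/mongodb-2.2.9-config --strip-components=1
# restart the config instance from the new folder; the data instance on the same box keeps running its old binary
/opt/mongodb-2.2.9-config/bin/mongod --configsvr --dbpath /data/configdb --port 27019
# later, repeat with a second copy of the binaries for the data instance
mkdir -p /opt/mongodb-2.2.9-data
/opt/mongodb-2.2.9-data/bin/mongod --replSet rs0 --dbpath /data/db --port 27018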
Even if the config server and data server share the same binary, you can upgrade them one by one. The first step is to upgrade the mongodb package. The second step is to shut down the config server and restart it using the new binary. The third step is to shut down the data server and restart it using the new binary.
I invite you to look at the release notes of each release you have to pass through; the MongoDB team explains all these steps.
For example, here you can find how to upgrade from 2.2 to 2.4:
http://docs.mongodb.org/manual/release-notes/2.4-upgrade/#upgrade-a-sharded-cluster-from-mongodb-2-2-to-mongodb-2-4
The basic steps are:
Upgrade all mongos instances in the cluster.
Upgrade all 3 mongod config server instances.
Upgrade the mongod instances for each shard, one at a time.
Once again, look at the release notes; that should be your first step ;)
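As a rough outline of that order in shell form (the mongos host name is a placeholder, and any version-specific steps come from the release notes):
# 1. disable the balancer so no chunk migrations run during the upgrade
mongo --host mongos1 --eval "sh.setBalancerState(false)"
# 2. upgrade and restart every mongos, then all three config servers,
#    then the shard mongod instances one replica-set member at a time
# 3. re-enable the balancer once the whole cluster runs the new version
mongo --host mongos1 --eval "sh.setBalancerState(true)"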

Deployment approach to bring down deployment time for Java

We have a J2EE-based application; basically it is a small e-commerce app that runs globally (across multiple time zones). Whenever we have to deploy a patch it takes around 3 hours (DB backup, DB changes, Java changes, QA smoke testing). I know that's too high. I want to bring this deployment time down to less than 30 minutes.
Now a brief on the application infrastructure: we have two JBoss servers and a single DB, with a load balancer configured for both JBoss servers. It is not a clustered environment.
Currently what we do:
We bring down both JBoss servers and the DB
Take a DB backup
Make the DB changes, run some scripts
Make the Java changes, run patches
The above steps take around 2 hours for us.
Then QA does testing for one hour, and then we bring up the servers.
Can you suggest a better approach to achieve this? My main question: when we have multiple JBoss servers and a single DB, how do we make deployment smooth?
Here is one approach I've heard Netflix uses, but have not had a chance to use myself:
Make all of your DB schema changes both forward and backward compatible with the current version of software running, and the one you are about to deploy. Make the new software version continue to write any data the old version needs. Hopefully this is a minimal set.
Back up your running DB (most DBs don't require downtime for backups), and deploy your database schema updates at least a week prior to your software deploy.
Once your DB changes have burnt in and seem to be bug-free with the current running version, reconfigure your load balancer to point to only one instance of your JBoss servers. Deploy your updated software to the other instance and have QA smoke test it offline while the in-rotation server continues to serve production requests.
When QA is happy with the results, point the LB to just the offline JBoss server (with the new software). When that comes online, update the software on the newly offline JBoss server, and have QA smoke test if desired. If successful, point the LB to both JBoss instances.
If QA finds major bugs, and a quick bug fix and "roll-forward" is not possible, roll back to the previous version of the deployed software. Since your schema and new code are backward compatible, you won't have lost data.
On your next deploy, remove any garbage from your schema (like columns unused by the current deploy) in a way that makes it still backward and forward compatible.
Although more complex than your current approach, this approach should reduce your deployment downtime with minimal risk.
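If the load balancer in front of the JBoss servers happens to be HAProxy, the "point the LB at one instance" steps can be scripted; a sketch assuming a stats socket at /var/run/haproxy.sock and servers jboss1/jboss2 in a backend called app (all of these names are assumptions):
# take jboss2 out of rotation so it can be patched and smoke tested offline
echo "set server app/jboss2 state maint" | socat stdio /var/run/haproxy.sock
# ...deploy the new version to jboss2, let QA hit it directly...
# put it back into rotation and take jboss1 out to repeat the process
echo "set server app/jboss2 state ready" | socat stdio /var/run/haproxy.sock
echo "set server app/jboss1 state maint" | socat stdio /var/run/haproxy.sock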