Upgrading MongoDB when config instances use the same binary as data instances

I'm about to upgrade a sharded MongoDB environment from 2.0.7 to 2.2.9; ultimately I want to upgrade to 2.4.9, but apparently I need to go via 2.2. The release notes for 2.2 state that the config servers should have their binaries upgraded first, then the shards. I currently have the config instances using the same Mongo binary as the data instances. Essentially there are three shards, each with three replicas, and one replica out of each shard also functions as a config instance. Since they share a binary, I can't upgrade the config instances independently of some of the data instances.
Would upgrading some data instances before all of the config instances cause any problems, assuming I've disabled the balancer?
Should I change the config instances to use a different copy of the binary? If so, what's the best way to go about this for an existing production setup running on Ubuntu 12?
Should I remove the three data instances from the replica sets, upgrade the config instances, then start the data instances up again, effectively updating them as well, but in the right order? This option is a bit hairy, as some of those instances are primaries, so I would have to step them down before removing them from the replica sets. It would also have to be repeated at every future upgrade, so I'm not really a fan.

I resolved this issue by:
1. Adding the binaries for the new version to a new folder.
2. Restarting the config instances using the new binaries, so that the data instances could continue to run with the old binaries.
3. Once all of the config servers were upgraded, creating yet another folder in which to put the same new binaries from step 1.
4. Restarting the data instances using these new binaries.
Now the config instances and data instances on the same server are using the new binaries, but from different folders, so it will be easier to upgrade them for the next release.
Note that there are other steps involved in the upgrade, and these are specified in the release notes, which you should always follow. However, this is how I dealt with the shared-binary problem, which is not directly addressed in the release notes.
A lot of the tutorials seem to use a single binary for the data and config instances on a single server, but this is problematic when it's time to upgrade. I'd suggest always using separate binaries for your config and data instances, as in the sketch below.
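For illustration, here is a rough sketch of what this looks like on one server; the folder layout, ports, dbpaths and replica set name are placeholders, not the exact values from my setup:

    # Old and new binaries live side by side; nothing is overwritten in place:
    #   /opt/mongodb-2.0.7/bin/         -> still used by the data instance for now
    #   /opt/mongodb-2.2.9-config/bin/  -> new binaries for the config instance
    #   /opt/mongodb-2.2.9-data/bin/    -> a separate copy of the same new binaries

    # Step 2: restart the config instance on the new binary (data instance untouched)
    /opt/mongodb-2.2.9-config/bin/mongod --configsvr --dbpath /data/configdb \
        --port 27019 --logpath /var/log/mongodb/configsvr.log --fork

    # Step 4: once all three config servers are upgraded, restart the data instance
    # from its own copy of the new binaries
    /opt/mongodb-2.2.9-data/bin/mongod --replSet rs0 --dbpath /data/db \
        --port 27018 --logpath /var/log/mongodb/mongod.log --fork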

Even if the config server and data server share the same binary, you can upgrade them one by one. The first step is to upgrade the MongoDB package. The second is to shut down the config server and restart it using the new binary. The third is to shut down the data server and restart it using the new binary.

I'd encourage you to read the release notes of each release you have to pass through; the MongoDB team explains all of these steps there.
For example, here you can find how to upgrade from 2.2 to 2.4:
http://docs.mongodb.org/manual/release-notes/2.4-upgrade/#upgrade-a-sharded-cluster-from-mongodb-2-2-to-mongodb-2-4
The basic steps are:
Upgrade all mongos instances in the cluster.
Upgrade all 3 mongod config server instances.
Upgrade the mongod instances for each shard, one at a time.
Once again, look at the release notes; this should be your first step ;)
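To make the order concrete, here is a hedged sketch of the sequence (hostnames are placeholders, and the linked release notes also describe a one-time config metadata upgrade step for 2.4 that is omitted here):

    # Disable the balancer before touching anything (run against any mongos)
    mongo --host mongos.example.net --eval 'sh.setBalancerState(false)'

    # 1) Restart every mongos with the new binary
    # 2) Restart the three config servers with the new binary, one at a time
    # 3) For each shard, upgrade the replica set members one at a time:
    #    secondaries first, then step down and upgrade the primary

    # Re-enable the balancer once the whole cluster is upgraded
    mongo --host mongos.example.net --eval 'sh.setBalancerState(true)'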

Related

Possible to remove cluster from 2.4.8 server and import it on a fresh 2.5 server?

Due to a wrong installation method (single node), we would like to migrate our existing Kubernetes cluster to a newer, HA Rancher Kubernetes cluster.
Can someone tell me if it's safe to do the following:
remove the (previously imported) cluster from our 2.4.8 single-node Rancher installation
register this cluster again on our new Kubernetes-managed 2.5 Rancher cluster?
We already tried this with our development cluster and it worked fine; the only things we had to do were:
create user/admin accounts again
reassign all namespaces to the corresponding Rancher projects
It would be nice to get some more opinions on this; right now it looks more or less safe :smiley:
Also, does someone know what happens if one Kubernetes cluster is registered/imported into two Rancher instances at the same time (like 2.4.8 and 2.5 at the same time)? I know it's probably a really bad idea; I just want to get a better understanding in case I'm wrong :D
Just to give some feedback after we solved it ourselves:
We removed the old installation and imported the cluster again in the new installation; this worked fine without problems.
We later had another migration, where we moved again because our old Rancher installation (running on a managed Kubernetes cluster, which is not recommended) no longer worked and was no longer reachable.
That time we just imported the cluster into the new Rancher installation without removing it from the old one. Everything worked fine there as well, except that the project-namespace associations were broken and we had to reassign all namespaces to new projects again. Also, our alerting is somehow still not working correctly: the notification emails still point to our old Rancher installation's domain, even though we changed all the config files we could find.
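For what it's worth, the re-registration itself boils down to applying the registration manifest that the new Rancher generates when you import an existing cluster; a hedged sketch, where the URL and token are placeholders copied from the import dialog:

    # Run against the cluster you are moving, using the manifest shown
    # by the Rancher 2.5 import screen:
    kubectl apply -f https://rancher-2-5.example.com/v3/import/<token>.yaml

    # Then check that the cattle agents come up and connect to the new Rancher:
    kubectl -n cattle-system get pods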

Redeploy Service Fabric application without version change

I've read about partial upgrades, but they always require changing some parts of the application packages. I'd like to know if there's a way to redeploy a package without a version change, similar to what VS does when deploying to the dev cluster.
On your local dev cluster, VS simply deletes the application before it starts the re-deployment. You could do the same in your production cluster; however, this results in downtime, since the application is not accessible during that time.
What's the reason why you wouldn't want to use the regular monitored upgrade? It has many advantages, like automatic rollbacks and so on.
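If you do go the delete-and-recreate route, here is a rough sketch using the Service Fabric CLI (sfctl); the cluster endpoint, application and type names are placeholders, and the parameter spellings are worth double-checking against your sfctl version:

    # Connect to the cluster
    sfctl cluster select --endpoint http://localhost:19080

    # Remove the running application and unregister the old type/version
    # (this is where the downtime starts)
    sfctl application delete --application-id MyApp
    sfctl application unprovision --application-type-name MyAppType \
        --application-type-version 1.0.0

    # Upload and register the rebuilt package under the same version,
    # then create the application again
    sfctl application upload --path ./MyAppPkg
    sfctl application provision --application-type-build-path MyAppPkg
    sfctl application create --app-name fabric:/MyApp \
        --app-type MyAppType --app-version 1.0.0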

dev opsworks - updating the app, will it drainstop apache

I have an autoscaled environment in production, which is currently havoc whenever we push a new build to it, so we thought we'd better move to OpsWorks at AWS to make the process easier for us.
We can't afford downtime, not now, not ever, never ever; even a second's worth of loss while updating a build and maybe restarting Apache costs a fortune.
We also can't afford to just let our machines be terminated by the autoscaling policy when a new update comes in on a new AMI-based EC2 machine. When autoscaling terminates a machine, under any circumstances, it doesn't care about the requests running on that machine; it just shuts it down, when what it should rather do is a graceful shutdown, something like a drainstop on Apache, so it can at least finish the work in hand first.
Now that OpsWorks is here, and we are planning to use it to update our builds more automagically, will a new update push run the recipes again? In fact, the paragraph below, which I just read, worries me more: does it mean that it won't update the build automatically on new instances?
After you have modified the app settings, you must deploy the app. When you first deploy an app, the Deploy recipes download the code and related files to the app server instances, which then run the local copy. If you modify the app in the repository, you must ensure that the updated code and related files are installed on your app server instances. AWS OpsWorks automatically deploys the current app version to new instances when they are started. For existing instances, however, the situation is different:
You must manually deploy the updated app to online instances.
You do not have to deploy the updated app to offline instance store-backed instances, including load-based and time-based instances; AWS OpsWorks automatically deploys the latest app version when they are restarted.
You must restart offline EBS-backed 24/7 instances and manually deploy the app; AWS OpsWorks does not run the Deploy recipes on these instances when they are restarted.
You cannot restart offline EBS-backed load-based and time-based instances, so the simplest approach is to delete the offline instances and add new instances to replace them. Because they are now new instances, AWS OpsWorks will automatically deploy the current app version when they are started.
First of all, let me state that I started looking into OpsWorks only about two weeks ago, so I'm far from being a pro. But here's my understanding of how it works:
We need to differentiate between instances that are instance store backed, and instances that are EBS backed:
The instance store disappears together with the instance once it's shut down. Therefore, bringing it up again starts from zero: it has to download the latest app again and will deploy that.
For EBS-backed instances, the deployed code remains intact (persisted), outliving the instance to which the volume is attached. Therefore, bringing an EBS-backed instance back to life will not update your app automatically; the old version remains deployed.
So your first decision needs to be what instance type to use. It is generally a good idea to have the same version of your app on all instances, so I would suggest going with EBS-backed instances, which will not automatically deploy new versions when booting up. In this case, deploying a new version means bringing up brand-new instances that run the new code automatically (as they are new) and then destroying the old instances. You will have a very short window during which old and new code run side by side.
However, if you prefer to have always the very latest version deployed and can afford risking discrepancies between the individual instances for an extended period of time (e.g. having different app versions deployed depending on when an instance was originally started), then instance store backed might be your choice. Every time a new instance spins up, the latest and greatest code will be deployed. If you want to update existing ones, just bring up new instances instead and kill the existing ones.
Both strategies should give you the desired effect of zero downtime; the difference is in when and how the latest code is deployed. Combine this with HAProxy to have better control over which servers are used. You can gradually move traffic from old instances to new instances, for example.
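As for the "manually deploy the updated app to online instances" part of the quoted docs, that deployment can also be scripted instead of clicked through the console; a hedged sketch with the AWS CLI, where the stack and app IDs are placeholders:

    # Run the Deploy recipes for the app on the stack's online instances
    aws opsworks create-deployment \
        --stack-id 00000000-0000-0000-0000-000000000000 \
        --app-id 11111111-1111-1111-1111-111111111111 \
        --command '{"Name": "deploy"}'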

Deployment approach to bring down deployment time for Java

We have a J2EE-based application; basically, it is a small e-commerce app that runs globally (across multiple time zones). Whenever we have to deploy a patch it takes around 3 hours (DB backup, DB changes, Java changes, QA smoke testing). I know that's too high. I want to bring this deployment time down to less than 30 minutes.
Now a brief overview of the application infrastructure: we have two JBoss servers and a single DB, with a load balancer configured for both JBoss servers. It is not a clustered environment.
Currently what we do:
We bring down both JBoss servers and the DB
Take a DB backup
Make the DB changes, run some scripts
Make the Java changes, run patches
The above steps take around 2 hours for us
Then QA tests for one hour, then we bring the servers back up.
Can you suggest a better approach to achieve this? My main question: when we have multiple JBoss servers and a single DB, how do we make deployment smooth?
One approach I've heard Netflix uses, but have not had a chance to use myself:
Make all of your DB schema changes both forward and backward compatible with the current version of software running, and the one you are about to deploy. Make the new software version continue to write any data the old version needs. Hopefully this is a minimal set.
Back up your running DB (most DBs don't require downtime for backups), and deploy your database schema updates at least a week prior to your software deploy.
Once your DB changes have burned in and seem to be bug-free with the currently running version, reconfigure your load balancer to point to only one instance of your JBoss servers. Deploy your updated software to the other instance and have QA smoke test it offline while the first server continues to serve production requests.
When QA is happy with the results, point the LB to just the offline JBoss server (with the new software). When that comes online, update the software on the newly offline JBoss server, and have QA smoke test if desired. If successful, point the LB to both JBoss instances.
If QA finds major bugs, and a quick bug fix and "roll-forward" is not possible, roll back to the previous version of the deployed software. Since your schema and new code are backward compatible, you won't have lost data.
On your next deploy, remove any garbage from your schema (like columns unused by the current deploy) in a way that makes it still backward and forward compatible.
Although more complex than what you do today, this approach should reduce your deployment downtime with minimal risk.
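To make the load balancer switching concrete, here is a minimal sketch assuming, purely for illustration (the question doesn't say which load balancer is in use), an HAProxy front end with its admin socket enabled; the socket path, backend and server names are made up:

    # Take jboss1 out of rotation; in-flight requests finish, new ones go to jboss2
    echo "disable server ecommerce_backend/jboss1" | socat stdio /var/run/haproxy.sock

    # Deploy the patch to jboss1 and let QA smoke test it directly,
    # while jboss2 keeps serving production traffic

    # Swap: bring jboss1 back, then drain and upgrade jboss2 the same way
    echo "enable server ecommerce_backend/jboss1" | socat stdio /var/run/haproxy.sock
    echo "disable server ecommerce_backend/jboss2" | socat stdio /var/run/haproxy.sock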

How to handle an EAR deployment which copies files under the application server's bootstrap folder

Our database drivers are usually copied under the <jboss.home>\common\lib folder in JBoss 5.1, and this is quite annoying, since if you have to upgrade the driver you have to restart the JBoss 5.1 server. How does everyone else handle such situations in a production environment?
Upgrading database drivers is not something you want to do on a running server. Your connection pools will all be using the "old" driver - there's no sensible way to make that switch without a restart.
If downtime is important to you, then you should be using more than one server in a cluster, and perform rolling upgrades/restarts on each one.
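As a concrete example of such a rolling driver upgrade on JBoss 5.1, assuming two nodes behind a load balancer (the paths and driver jar names below are placeholders):

    # On one node at a time, after draining it from the load balancer:
    JBOSS_HOME=/opt/jboss-5.1.0.GA

    # Swap the driver under the shared lib directory (remove the old jar first
    # so two driver versions don't end up on the classpath)
    rm "$JBOSS_HOME/common/lib/old-jdbc-driver.jar"
    cp new-jdbc-driver.jar "$JBOSS_HOME/common/lib/"

    # Restart this node so the connection pools pick up the new driver
    "$JBOSS_HOME/bin/shutdown.sh" -S
    "$JBOSS_HOME/bin/run.sh" -c default -b 0.0.0.0 &

    # Put the node back behind the load balancer, then repeat on the next node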