This evening I noticed my staging server ran out of disk space.
While investigating, I saw that each time I deploy my LoopBack app with StrongLoop Process Manager, it installs a brand-new copy of the app in a new folder.
After deploying 20 times I have 20 versions, each taking up 140 MB.
I assume those folders make it easy to switch between versions, but I cannot figure out how to do that with strong-pm, or whether I can specify how many versions should be kept, etc.
How do versioning and rollback work in StrongLoop Process Manager, and where can I find documentation?
At the moment there is no true "rollback" mechanism in strong-pm. The closest you can get is to deploy a previously deployed git commit, which will re-use the previous deployment that matches that commit's hash.
Related
We have a J2EE-based application; basically it is a small e-commerce app that runs globally (across multiple time zones). Whenever we have to deploy a patch it takes around 3 hours (DB backup, DB changes, Java changes, QA smoke testing). I know that is too long; I want to bring this deployment time down to less than 30 minutes.
A brief overview of the application infrastructure: we have two JBoss servers and a single DB, with a load balancer configured in front of both JBoss servers. It is not a clustered environment.
Currently what we do:
Bring down both JBoss servers and the DB
Take a DB backup
Make the DB changes and run some scripts
Make the Java changes and run patches
The above steps take around 2 hours. Then QA tests for one hour, and then we bring the servers back up.
Can you suggest a better approach to achieve this? My main question: when we have multiple JBoss servers and a single DB, how do we make deployment smooth?
One approach I've heard Netflix uses, though I haven't had a chance to use it myself:
Make all of your DB schema changes both forward and backward compatible with the current version of software running, and the one you are about to deploy. Make the new software version continue to write any data the old version needs. Hopefully this is a minimal set.
Back up your running DB (most DBs don't require downtime for backups), and deploy your database schema updates at least a week prior to your software deploy.
Once your DB changes have burnt in and seem to be bug-free with the current running version, reconfigure your load balancer to point to only one instance of your JBoss servers. Deploy your updated software to the other instance and have QA smoke test it offline while the first server continues to serve production requests.
When QA is happy with the results, point the LB to just the offline JBoss server (with the new software). When that comes online, update the software on the newly offline JBoss server, and have QA smoke test if desired. If successful, point the LB to both JBoss instances.
If QA finds major bugs, and a quick bug fix and "roll-forward" is not possible, roll back to the previous version of the deployed software. Since your schema and new code is backward compatible, you won't have lost data.
On your next deploy, remove any garbage from your schema (like columns unused by the current deploy) in a way that makes it still backward and forward compatible.
Although more complex than your current approach, this approach should reduce your deployment downtime with minimal risk.
I'm hoping someone can confirm whether or not the following scenario is an issue with deploying updates to WordPress sites and, if so, whether you have a solution for how best to manage it.
The basics:
I have a local development WordPress Multisite project for which I use Git and Capistrano to deploy to remote staging and production servers.
Everything EXCEPT the uploads and blogs.dir directories (in wp-content) is under version control. Yes, the WordPress core, themes, plugins, etc. are updated locally, committed, pushed and deployed. This means that I have to log in and activate plugins initially - they are simply installed via the Capistrano deploy.
The databases on development, staging and production are different, and I'm not concerned about trying to sync these up.
My Concern:
Many updates to plugins and the WordPress core also perform updates to the database when doing an auto-update via the admin. I am updating WordPress core and plugins locally on my development install. The code from these updates ends up being committed, pushed and deployed. However, when the code is deployed it is simply adding/deleting/replacing changed files on the staging and production servers. Production and staging are missing any of the updates to the database, since those are usually part of the auto-update process - e.g., deactivate, update, activate (run any database updates).
My Questions:
Is my concern about the production and staging servers having the latest code but missing any database updates required for the latest code accurate?
If so, does anyone have thoughts on how I can modify the Capistrano deploy code to deactivate/reactivate plugins? What about changes in WordPress core, e.g., 3.2 to 3.3?
If Capistrano isn't the tool for this - and I need to do it more "manually" by logging into the admin - is there a maintenance-mode tool/plugin that will somewhat automate the deactivation/activation of the plugins so that any updates upon activation are triggered?
Many Thanks,
Matt
It's important to note that you don't need to activate and deactivate plugins when upgrading the WordPress core from version to version. Here is an explanation from Ryan Boren on why. Depending on the plugin, though, some may have an upgrade process built into their own upgrade - that is, the upgrade of the plugin, not of WordPress. Nonetheless, I'll go through your three questions and answer them as directly as I can.
1. Is my concern about the production and staging servers having the latest code but missing any database updates required for the latest code accurate?
Yes, when updating, if there is a change to the database schema, then WordPress will not function properly unless the new schema exists. When attempting to access the admin side of WordPress, if the db version is lower than your WordPress version expects, it will redirect you to a database upgrade page.
WordPress sets a global called $wp_db_version in the /wp-includes/version.php file and maintains migration scripts to upgrade the database incrementally from each previous version to the next until the version number is up to date, seen here. Here is a simpler list in a FAQ showing how the revision numbers correlate to WordPress versions.
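Roughly, the check works like this (a simplified sketch of what core does on admin requests, not the exact core code):

    <?php
    // Simplified sketch: compare the schema version recorded in the database
    // ('db_version' option) with the version the deployed code expects
    // ($wp_db_version, defined in /wp-includes/version.php).
    require ABSPATH . WPINC . '/version.php';

    if ( (int) get_option( 'db_version' ) !== (int) $wp_db_version ) {
        // The database is older than the code expects, so send the admin to
        // the upgrade page, which runs the incremental migrations.
        wp_redirect( admin_url( 'upgrade.php' ) );
        exit;
    }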
2. If so, does anyone have thoughts on how I can modify Capistrano deploy code to deactivate/reactivate of plugins?
As I said above, you don't typically need to activate/deactivate plugins after core upgrades, unless I suppose the plugin specifically requires that you do so. If the schema changes in WordPress break a plugin, then the plugin developers will need to release a new version. When upgrading that plugin, it will be shut off and restarted, and it's those developers' responsibility to make sure everything that needs to take place does so.
However, you may need to deactivate/activate separately in deployed environments such as yours, since the actual upgrade process is taking place on a different machine, and thus probably against a different database from the one the code will ultimately be used with.
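For context, plugins commonly apply their own database changes either in an activation hook or by checking a stored version on every load; it's the former kind that benefits from a deactivate/activate cycle after a file-only deploy. A hypothetical sketch (the plugin, option and function names are illustrative, not from any real plugin):

    <?php
    /*
    Plugin Name: Example Upgrader (hypothetical)
    */

    define( 'MYPLUGIN_DB_VERSION', '2.0' );

    function myplugin_upgrade_db() {
        $installed = get_option( 'myplugin_db_version', '1.0' );
        if ( version_compare( $installed, MYPLUGIN_DB_VERSION, '<' ) ) {
            // The plugin's own schema changes (dbDelta() calls, etc.) go here.
            update_option( 'myplugin_db_version', MYPLUGIN_DB_VERSION );
        }
    }

    // A plugin that only upgrades in its activation hook will not touch the
    // production database when new files arrive via Capistrano - hence the
    // value of cycling deactivate/activate on the deployed environment.
    register_activation_hook( __FILE__, 'myplugin_upgrade_db' );

    // More defensive plugins also check on every load, which self-heals after
    // a file-only deploy without any reactivation.
    add_action( 'plugins_loaded', 'myplugin_upgrade_db' );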
Perhaps the best thing to do would be to have your deployment script hit a URI of a plugin within WordPress - a plugin you would write that deactivates/activates plugins, or an existing one that already does this.
It's possible some existing plugins might handle parts of what you're looking for, but I take the key component of your question to be automation, and an avoidance of having to log into each environment and upgrade plugins for each one, so developing one yourself that does exactly what you need might be the way to go. Developing such a plugin is feasible if you make use of the tools WordPress already provides.
activate_plugin()
activate_plugins()
deactivate_plugins()
validate_plugin()
Plugin_Upgrader class (maybe)
Look through the whole /wp-admin/includes/plugin.php file to see what you might find useful. Additionally, check out the code that actually handles plugins on the admin side in /wp-admin/plugins.php - just to see how it's done. You may want to stop the deactivation hooks from wiping out plugin configuration for plugins that clean up after themselves, so consider passing $silent as true when deactivating a plugin.
To make this really slick, you'll probably want to grab get_option('active_plugins') to see which plugins were already activated, and only run your script on those (make sure the plugin excludes itself from the process).
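A minimal sketch of what such a plugin's endpoint could look like, under the assumption that your deploy script hits an admin-post URI with a shared secret (all names here are illustrative, not an existing plugin):

    <?php
    /*
    Plugin Name: Deploy Plugin Cycler (hypothetical sketch)
    */

    // Hypothetical endpoint, e.g. /wp-admin/admin-post.php?action=cycle_plugins&key=...
    add_action( 'admin_post_nopriv_cycle_plugins', 'dpc_cycle_plugins' );
    add_action( 'admin_post_cycle_plugins', 'dpc_cycle_plugins' );

    function dpc_cycle_plugins() {
        // Shared secret so only the deploy script can trigger this (illustrative check).
        if ( ! isset( $_GET['key'] ) || 'replace-with-a-long-secret' !== $_GET['key'] ) {
            wp_die( 'Forbidden', '', array( 'response' => 403 ) );
        }

        require_once ABSPATH . 'wp-admin/includes/plugin.php';

        $active = (array) get_option( 'active_plugins', array() );
        $self   = plugin_basename( __FILE__ );

        foreach ( $active as $plugin ) {
            if ( $plugin === $self ) {
                continue; // don't cycle this plugin itself
            }
            if ( is_wp_error( validate_plugin( $plugin ) ) ) {
                continue; // skip plugins whose files are missing or invalid
            }
            // $silent = true keeps deactivation hooks from wiping plugin settings.
            deactivate_plugins( $plugin, true );
            // Re-activating runs each plugin's activation routine, which is where
            // many plugins apply their own database updates.
            activate_plugin( $plugin );
        }

        wp_die( 'Plugins cycled.' );
    }

Your deploy process would then only need to hit that URI (e.g. with curl) as a final step.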
3. What about changes in WordPress, eg, 3.2 to 3.3?
Changes from 3.2 to 3.3 should be thought of as no different from any other set of changes, so everything said here applies.
4. If Capistrano isn't the tool for this - and I need to do it more "manually" by logging into the admin - is there a maintenance mode tool/plugin that will somewhat automate the deactivation/activation of the plugins so any updates upon activation are triggered?
I don't think Capistrano will be doing any of the heavy lifting here - but it's certainly not in the way either. You should just need to be able to hit a URI within the plugin, and that should get things rolling within the application. The important thing is that, obviously, all those functions need to be available, so you can't just run it as an independent script.
I was looking for some insight into what happens to existing workspaces and files that are already checked out by people after an upgrade to TFS 2010. Surprisingly enough, I cannot find any satisfactory information on this. (I am talking about upgrading on new hardware, by the way: a fresh TFS instance with upgraded databases.)
I've checked the TFS installation guide and searched the web; all I could find were upgrade scenarios for the server side. Nobody even mentions what happens to source control clients.
I've created a virtual machine to test the upgrade process. The upgrade was successful, and all my files and workspaces exist on the new server too. The problem is: the new TFS installation has a new instance ID. When I redirected the clients to the new server, the client seemed unable to match files and file states in the workspace with the ones on the new server. This makes me wonder if it will be possible to keep working after the production upgrade.
As I mentioned above, I cannot find anything on this; it would be great if anyone could point me to a paper or blog post about it.
Thanks in advance...
When you do an upgrade, your server ID should stay the same. You may need to change it if you want to clone your environment.
In your test scenario you are creating a clone of the TFS server rather than a straight upgrade.
ChangeServerID
You are probably running into problems because this has been run on your test environment to facilitate it running on the same network as your production TFS server.
All workspaces and shelvesets remain unchanged, and people will be able to continue working immediately. Even checked-out files are OK and will be picked up correctly.
I would recommend upgrading the server first, and keep the clients as 2008 (using the Forward Compatibility Pack), and then upgrading the clients to 2010 as and when the projects are upgraded.
We will be embarking on an application development project (.NET 3.5) for a large organization. As we started thinking about the upgrades we would be rolling out across machines, we began looking at options like ClickOnce.
What we need is a push model: as long as the client machine is connected to the network, the server can send updates. I believe ClickOnce is a pull model (although by specifying a minimum version we can kind of push). Also, ClickOnce downloads complete files only; it cannot download just the changes (byte differences) within files.
Can anyone point me to a better tool that can be used here? Better strategies, if any, are also welcome; we are at a very early stage of the project.
I don't have a definitive answer on better options, but I've used ClickOnce and can offer some advice.
There are several update options with ClickOnce (before starting, after starting, check every time, check every X hours/days/weeks, etc.). You can also throw those out and write code to check for updates. It's not a "push" from the server, but your client could poll for updates, which would be the next best thing. Just remember, the application is going to have to restart after the update to pick up the changes.
ClickOnce only downloads changed files. However, the progress dialog always shows the entire size of the application even if it's only downloading a single file. Everyone worries about that, but it's just a bug with the progress dialog.
Finally, I'm a big fan of keeping it simple. It's really easy to over-think these things and create a monstrosity that was never needed. We went through something similar at my company. We were so worried about users downloading unnecessary bytes, we broke our apps up into more, smaller assemblies. It turned into a nightmare; apps were harder to maintain and performed worse on the client. We finally undid it all and wasted weeks just to end up where we started.
I'm not saying you don't need the features you're asking for, I don't know your scenario. Just educate yourself first and know what you're getting yourself into.
We use ClickOnce at my company (a few hundred geographically dispersed users of the app). By specifying the minimum version we can make sure that every app installation gets updated automatically after deployment. You are right that ClickOnce downloads full files only, but it only downloads the files that have changed since the previous version. If that is still a concern you can break your application into more, smaller assemblies. I think you can also use netmodules, but Visual Studio has no built-in support for that.
In general, ClickOnce has worked well for us.
I am just in the process of implementing such a service on top of my distributed application platform. In essence I have developed a "push" model for corporates that follows these basic principles:
Software upgrades are "managed" from the server, NOT from the client, which is in line with the deployment of corporate software as opposed to user software (this is a very important point)
Software upgrades can be customised per client application on the server, i.e. the server can deploy unique configurations to every client if required
Software upgrades can be deployed to clients at different times, or all at the same time, or any combination of the two
The software upgrade version can be specified per client, i.e. different versions can be deployed to different clients as required
All software upgrades for all clients can be "managed" from a single server, i.e. the software upgrading "service" is consistent across any application, and all applications can utilise the software upgrading "service"
Clients can implement a software upgrade policy of automatic (the application restarts as soon as the upgrade has been downloaded and is available on the client), manual (the application needs to be "sent" a custom "force upgrade" message), or on restart (the application upgrades on shutdown if an upgrade has been downloaded and is available)
All auto-upgrading functionality is transparent to any running applications as this is all performed in autonomous background threads and all inter-process communication and file transfer is handled by my framework
In essence this now allows me (or will allow me when I have tidied a few things up and thoroughly tested the implementation) to manage the version of any application developed by me from a central server after it has been initially installed, without any client intervention.
There seems to be a problem with ClickOnce deployments.
The manifest file is processed on the client machine, and there's a check to see whether a new version is available. If a new version is available, it gets copied over to the client machine. BUT the old version remains.
This can be a problem. If the application is upgraded on a regular basis, it ends up occupying a large and continually growing amount of disk space. This could be a problem at a workplace where multiple users are all logged on to the same Citrix server.
Is there a straightforward solution to ClickOnce not cleaning up after itself? Is there some setting that I'm missing?
Later Edit
This question actually states something that's incorrect. In reality ClickOnce upgrades only leave the previous version behind, and versions before that are cleaned up. I'll leave the question here (as opposed to deleting it) as this is a misunderstanding that others could have as well.
According to Microsoft, ClickOnce does clean up after itself; however, it will always leave the previous version behind to enable rollback functionality.
See http://www.sayedhashimi.com/PermaLink,guid,520010a7-6ce7-47ec-af0f-a57694bf3d41.aspx for more info.