Affecting application version history - azure-service-fabric

While experimenting with a sample SF app and playing with upgrades and versioning, I've noticed that it keeps a rather long history of versions. Below is a screenshot of my app in SFExplorer. Is there any way to control how much history is retained, or can I cull out versions I'll never use again?
Or should I not even be concerned with this? (even though I am!)

What you're seeing here is application registration. Before you can create an application instance, you have to register the application type and a version. When you upgrade your application, you register a new version of the same application type. This is the PowerShell command that does it (Visual Studio uses this on your behalf when you upgrade through it):
Register-ServiceFabricApplicationType
Over time, you'll see a bunch of versions of your application registered. If you don't want them registered anymore, you can simply unregister them using the corresponding command:
Unregister-ServiceFabricApplicationType -ApplicationTypeName SFDemoType -ApplicationTypeVersion 1.0.2
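If you want to see which versions are registered before you start cleaning up, Get-ServiceFabricApplicationType will list them. A minimal sketch, assuming you're already connected to a local dev cluster on the default endpoint and using the type name from the screenshot:
Connect-ServiceFabricCluster -ConnectionEndpoint localhost:19000
Get-ServiceFabricApplicationType -ApplicationTypeName SFDemoType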
While we have that screenshot in front of us, here are a couple of cool things about application registration:
You can create instances of any registered application type + version at any time with the New-ServiceFabricApplication command:
New-ServiceFabricApplication -ApplicationName fabric:/SFDemo2 -ApplicationTypeName SFDemoType -ApplicationTypeVersion 1.0.7
This means you can do cool things like create side-by-side instances of the same application type but of different versions. Say you want to test out a new version of an application without upgrading an existing instance yet. You can register the new version, but instead of upgrading an existing instance of that application type, you can simply create a new instance of the new version of the application type.
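As a quick sketch (the application names and version numbers here are just illustrative), running two versions side by side would look something like this:
New-ServiceFabricApplication -ApplicationName fabric:/SFDemo -ApplicationTypeName SFDemoType -ApplicationTypeVersion 1.0.2
New-ServiceFabricApplication -ApplicationName fabric:/SFDemo2 -ApplicationTypeName SFDemoType -ApplicationTypeVersion 1.0.7
Both instances run independently; each just needs a unique application name.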
You can "upgrade" a running application instance from any version of an application type to any other version of an application type using the upgrade command:
Start-ServiceFabricApplicationUpgrade -ApplicationName fabric:/SFDemo -ApplicationTypeVersion 1.0.20 -FailureAction Rollback -Monitored
For example, say you just upgraded your application instance from 1.0.15 to 1.0.20. After a while, you find a bug in 1.0.20. You can use the same application upgrade command to "upgrade" back to 1.0.15. In fact, the version strings are just strings - they can be anything you want. You can upgrade from version "banana" to version "Tuesday" if you want!
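In that scenario, the "rollback" is just another monitored upgrade pointing at the earlier version, which works as long as 1.0.15 is still registered:
Start-ServiceFabricApplicationUpgrade -ApplicationName fabric:/SFDemo -ApplicationTypeVersion 1.0.15 -Monitored -FailureAction Rollback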
So yeah, you can unregister old versions if you think you'll never need them again. But it's great to have a version history, because you can actually do interesting stuff with it!

Related

Service Fabric Test-ServiceFabricApplicationPackage PowerShell crash

After upgrading to SDK 2.5.216 and runtime 5.5.216, the Test-ServiceFabricApplicationPackage command works only for a complete package. In the case of a partial app upgrade (where some Pkg folders are removed) it results in "Windows PowerShell has stopped working". I have tested on several computers and several apps. To reproduce:
Create a test app with 2 services and deploy it.
Change the app version and a particular service version.
Create the package and remove the Pkg folder for the service that has no modifications.
Connect to Service Fabric and test with Test-ServiceFabricApplicationPackage -ApplicationPackagePath "..path" -ImageStoreConnectionString "fabric:ImageStore"
Has anybody been able to overcome this issue, or at least seen similar behavior, so I know I'm not alone in the universe?
Thanks!
Alex
Take a look at https://github.com/Azure/service-fabric-issues/issues/259
This is a bug in our code. It happens when a compressed package has been uploaded and provisioned in the cluster. Testing a new version of the application fails because the settings file is not found in the provisioned version.
We have fixed the issue, and the fix will become available in one of our next releases.
Meanwhile, you can skip compression or test the version 2 application package without passing in the image store connection string.
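For example, the second workaround is just the same call without the image store connection string (the package path below is a placeholder):
Test-ServiceFabricApplicationPackage -ApplicationPackagePath "C:\Temp\MyAppV2"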
Apologies for the inconvenience!

Manual rollback of an application / service

I have a Service Fabric application with a few services underneath it. They are all currently sitting at version 1.0.0.
I deploy an update out to the cluster for version 2.0.0. Everything is running fine and the deployment succeeds. Then I notice a very large bug in the new version. Is there a way to manually roll back to version 1.0.0? The only thing I have found is automatic rollback during an upgrade.
Matt's answer is correct, but I'll elaborate a bit on it here.
The key is in understanding the different steps during application deployment:
Copy
Register
Create
Upgrade
Visual Studio rolls these up into single "publish" and "upgrade" operations to make it easy and convenient. But these are actually individual commands in the Service Fabric management API (through PowerShell, C# or HTTP). Let's take a quick look at what these steps are:
Copy:
This just takes your compiled application package and copies it up to the cluster. No big deal.
Register:
This is the important step in your case. Register basically tells the cluster that it can now create instances of your application. Most importantly, you can register multiple versions of the same application. At this point, your applications aren't running yet.
Create:
This is where instances of your registered applications are created and start running.
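As a rough sketch in PowerShell (the paths, image store connection string, and application name below are illustrative, not taken from your cluster), the three steps map onto these commands:
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath C:\Temp\FooTypePkg -ImageStoreConnectionString fabric:ImageStore -ApplicationPackagePathInImageStore FooType
Register-ServiceFabricApplicationType -ApplicationPathInImageStore FooType
New-ServiceFabricApplication -ApplicationName fabric:/FooApp -ApplicationTypeName FooType -ApplicationTypeVersion 1.0.0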
Before we get to upgrade, let's look at what's on your cluster. The first time you go through this deployment process with version 1.0.0 of your application (call it FooType), you'll have just that one type registered:
FooType 1.0.0
Now you're ready to upgrade. You first copy your new application package with a new version (2.0.0) up to the cluster. Then, you register the new version of your application. Now you have two versions of that type registered:
FooType 1.0.0
FooType 2.0.0
Then when you run the upgrade command, Service Fabric takes your instance of 1.0.0 and upgrades it to 2.0.0. If you need to roll it back once the upgrade is complete, you simply use the same upgrade command to "upgrade" the application instance from 2.0.0 back to 1.0.0. You can do this because 1.0.0 is still registered in the cluster. Note that the version numbers are in fact not meaningful to Service Fabric other than that they are different strings. I can use "orange" and "banana" as my version strings if I want.
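In command form, the upgrade and the later "rollback" are the same cmdlet pointed at different registered versions (a sketch, reusing the illustrative fabric:/FooApp name from above):
Start-ServiceFabricApplicationUpgrade -ApplicationName fabric:/FooApp -ApplicationTypeVersion 2.0.0 -Monitored -FailureAction Rollback
# ...and later, to go back to the still-registered 1.0.0:
Start-ServiceFabricApplicationUpgrade -ApplicationName fabric:/FooApp -ApplicationTypeVersion 1.0.0 -Monitored -FailureAction Rollback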
So the key here is that when you do a "publish" from Visual Studio to upgrade your application, it's doing all of these steps: it's copying, registering, and upgrading. In your case, you don't actually want to re-register 1.0.0 because it's already registered on the cluster. You just want to issue the upgrade command again.
For an even longer explanation, see: Blue/Green Deployments with Azure ServiceFabric
Just follow the same upgrade procedure, but targeting your 1.0.0 version instead. "Rollback" is just an "upgrade" to your older version.

Behavior difference between Actor and Service projects in Azure Service Fabric

In an Actor project, the AssemblyVersionAttribute value is used to update the ServiceManifest version, along with the code and config version. There is no such behavior for Service projects.
This updated version is also used to update the ServiceManifestRef's ServiceManifestVersion reference in the ApplicationManifest. While the ApplicationManifest is modified on every build, it doesn't appear that a manually set version within the Service project's ServiceManifest is updated in the ApplicationManifest either.
Is this planned or intended behavior for Service projects?
I'm running Visual Studio 2015 RC, the first preview of the Service Fabric SDK, and 4.0.95-preview1 of the NuGet packages.
Short answer: This behavior difference is temporary as we improve our tooling support for versioning and upgrade.
Slightly longer answer: Part of the original goal of the Service Fabric actor framework was to abstract away the details of manipulating the application and service manifests so that you can truly focus on your business logic. Hence, the SDK includes a tool (called FabActUtil) which is responsible for doing some of that manipulation on your behalf as a post-build step. There is currently no such tool for reliable services projects. We are considering options for reconciling this difference as part of adding upgrade support to Visual Studio. We need to strike a balance between keeping you in control of your versioning scheme and taking care of the chore of cascading your version changes throughout the application as required.

Google App Engine: Deployed Source doesn't have Local updates

I'm working with Google App Engine in Eclipse w/ JSP pages in Windows 7.
I already have an app deployed and working, but I am unable to make changes to it for some reason.
If I make changes and debug locally, my localhost page is showing the changes that I implement.
While I am not getting any errors during deployment, the same changes that work in my local debug are not showing up in the deployed app, so I can't update it.
I thought updating the version number might help, but I had no luck with this.
Any ideas? Thanks.
Are you deploying the same version (as specified in appengine-web.xml) as the default version that is currently serving your app? If not, you'll have to access your new deployment at http://newversion.appname.appspot.com, or change the default version in App Engine to your newly deployed version.
I have had the same problems too, especially when the changes concerned the static pages. Some little things to check:
If you have set an expiration date in your app.yaml, your browser cache could be holding the file.
If it’s specific to the online contents, it could be an intermediary cache (such as a squid server) serving the outdated contents, in which case you’d have to flush the cache to get the new version.
You could start by checking the log on the GAE console to see if the request is received by the server, that would help you debug.
Another trick: if you're being served an outdated version of http://yourapp.appspot.com/index, try passing a dummy argument to force the browser to fetch the new version, for instance: http://yourapp.appspot.com/index?p=1

How to upgrade Wordpress and plugins when deploying using Capistrano?

I'm hoping someone can confirm whether or not the following scenario is an issue when deploying updates to WordPress sites and, if so, whether you have a solution for how best to manage it.
The basics:
I have a local development WordPress Multisite project for which I use Git and Capistrano to deploy to remote staging and production servers.
Everything EXCEPT the uploads and blogs.dir directories (in wp-content) is under version control. Yes, the WordPress core, themes, plugins, etc. are updated locally, committed, pushed and deployed. This means that I have to log in and activate plugins initially - they are simply installed via the Capistrano deploy.
The databases on development, staging and production are different, and I'm not concerned about trying to sync these up.
My Concern:
Many updates to plugins and to the WordPress core also perform updates to the database when run as an auto update via the admin. I am updating WordPress core and plugins locally on my development install. The code for these updates ends up being committed, pushed and deployed. However, when the code is deployed it simply adds/deletes/replaces changed files on the staging and production servers. Production and staging end up missing any of the updates to the database, since those are usually part of the auto update process - e.g., deactivate, update, activate (running any updates to the database).
My Questions:
Is my concern about the production and staging servers having the latest code but missing any database updates required for the latest code accurate?
If so, does anyone have thoughts on how I can modify the Capistrano deploy code to deactivate/reactivate plugins? What about changes in WordPress itself, e.g., 3.2 to 3.3?
If Capistrano isn't the tool for this - and I need to do it more "manually" by logging into the admin - is there a maintenance mode tool/plugin that will somewhat automate the deactivation/activation of the plugins so any updates upon activation are triggered?
Many Thanks,
Matt
It's important to note that you don't need to activate and deactivate plugins when you upgrade the WordPress core from version to version. Here is an explanation from Ryan Boren on why. Depending on the plugin, though, some of them may have an upgrade process built into their own upgrade - that is, the upgrade of the plugin, not of WordPress. Nonetheless, I'll go through your questions and answer them as directly as I can.
1. Is my concern about the production and staging servers having the latest code but missing any database updates required for the latest code accurate?
Yes, when updating, if there is a change to the database schema, then WordPress will not function properly unless the new schema exists. When attempting to access the admin side of WordPress, if the db version is lower than your WordPress version expects, it will redirect you to a database upgrade page.
WordPress sets a global called $wp_db_version in the /wp-includes/version.php file and maintains each of the migration scripts to upgrade the database incrementally from each previous version to the next until the version number is up to date, seen here. Here is a simpler list in a FAQ showing how the revision numbers correlate to WordPress versions.
2. If so, does anyone have thoughts on how I can modify Capistrano deploy code to deactivate/reactivate of plugins?
As I said above, you don't typically need to activate/deactivate plugins after core upgrades, unless I suppose the plugin specifically requires that you do so. If schema changes in WordPress break a plugin, then the plugin developers will need to release a new version. When upgrading that plugin, it will be shut off and restarted, and it's those developers' responsibility to make sure everything that needs to take place does so.
However, you may need to deactivate/activate separately in deployed environments such as yours, since the actual upgrade process takes place on a different machine, and thus probably against a different database from the one it will ultimately be used on.
Perhaps the best thing to do would be to have your deployment script hit a URI handled by a plugin within WordPress - either a plugin you write yourself to deactivate/activate plugins, or an existing one that already does it.
It's possible some existing plugins might handle parts of what you're looking for, but I take the key component of your question to be automation, and an avoidance of having to log into each environment and upgrade plugins for each one, so developing one yourself that does exactly what you need might be the way to go. Developing a plugin is possible if you make use of the tools WordPress already provides.
activate_plugin()
activate_plugins()
deactivate_plugins()
validate_plugin()
Plugin_Upgrader class (maybe)
Look through the whole /wp-admin/includes/plugin.php file to see what you might find useful. Additionally, check out the code that actually handles plugins on the admin side in /wp-admin/plugins.php - just to see how it's done. You may want to stop the deactivate_plugin hooks from wiping out plugin configuration for plugins that clean up after themselves, so consider passing $silent as true when deactivating a plugin.
To make this really slick, you'll probably want to grab get_option('active_plugins') to see which plugins were already activated, and only run your script on those (making sure the plugin excludes itself from the process).
3. What about changes in WordPress, eg, 3.2 to 3.3?
Changes from 3.2 to 3.3 should be thought of as no different from any other set of changes, so everything said here applies.
4. If Capistrano isn't the tool for this - and I need to do it more "manually" by logging into the admin - is there a maintenance mode tool/plugin that will somewhat automate the deactivation/activation of the plugins so any updates upon activation are triggered?
I don't think Capistrano will be doing any of the heavy lifting here - but it's certainly not in the way either. You should just need to hit a URI within the plugin, and that should get things rolling within the application. The important thing is that obviously all those functions need to be available, so you can't just run it as an independent script.