Service Fabric - Any way to prevent initialization of the old version of services during upgrade?

I have noticed that when I upgrade my application containing a stateful service, Service Fabric will, as needed, promote replicas of the old version to primary to clear the path for upgrading a domain.
This becomes problematic when a non-backwards-compatible release is being pushed out. For instance, let's say my services query a database as they are promoted to primary. If I push an older release in which a recently added database column is absent, the database rolls back to that version's schema and the column is removed. Then, during the upgrade, Service Fabric attempts to promote an instance of the pre-deploy version to primary, but that fails because that instance is looking for the column that is no longer present in the database. This causes the whole upgrade to fail.
Is there any way to upgrade the application while avoiding Service Fabric promoting the current (pre-deploy) version of my service to primary?

Related

Is it possible to deploy only Application Parameters (cloud.xml) in Service Fabric without having to change the Application version?

Is there a possibility to deploy only the application parameters (cloud.xml) somehow, without having to redeploy the code or upgrade the application version? For example, after deploying the application on a Service Fabric cluster, there could be a minor thing I need to change in one of the parameters in cloud.xml, and it wouldn't make sense to upgrade the version of the application for such a minor change.
I have combed through various sites but did not find anything suitable.
I'd say no, you can't do this. Here is what the Service Fabric application upgrade documentation says on this:
During an upgrade, Service Fabric compares the new application manifest with the previous version and determines which services in the application require updates. Service Fabric compares the version numbers in the service manifests with the version numbers in the previous version. If a service has not changed, that service is not upgraded.

Redeploy Service Fabric application without version change

I've read about partial upgrades, but they always require changing some part of the application package. I'd like to know if there's a way to redeploy a package without a version change, similar to what VS does when deploying to the dev cluster.
On your local dev cluster, VS simply deletes the application before it starts the re-deployment. You could do the same in your production cluster; however, this results in downtime, since the application is not accessible during that time.
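For completeness, here's roughly what that delete-and-recreate approach looks like with the Service Fabric PowerShell cmdlets. This is only a sketch; the cluster endpoint, application name, type name, version, and paths are all placeholders:

```powershell
# Connect to the cluster (endpoint is a placeholder).
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster:19000"

# Remove the running application instance (this is where the downtime starts).
Remove-ServiceFabricApplication -ApplicationName "fabric:/MyApp" -Force

# Unregister the type so the same version string can be registered again.
Unregister-ServiceFabricApplicationType -ApplicationTypeName "MyAppType" `
    -ApplicationTypeVersion "1.0.0" -Force

# Copy, register, and recreate with the unchanged version.
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath ".\pkg\Release" `
    -ImageStoreConnectionString "fabric:ImageStore" `
    -ApplicationPackagePathInImageStore "MyAppType"
Register-ServiceFabricApplicationType -ApplicationPathInImageStore "MyAppType"
New-ServiceFabricApplication -ApplicationName "fabric:/MyApp" `
    -ApplicationTypeName "MyAppType" -ApplicationTypeVersion "1.0.0"
```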
What's the reason why you wouldn't want to use the regular monitored upgrade? It has many advantages, like automatic rollbacks and so on.
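For comparison, once a new version is copied and registered, a monitored upgrade with automatic rollback is a single cmdlet (application name and version string are illustrative):

```powershell
# Monitored upgrade: Service Fabric walks the upgrade domains, checks health,
# and rolls back on its own if the health checks fail.
Start-ServiceFabricApplicationUpgrade -ApplicationName "fabric:/MyApp" `
    -ApplicationTypeVersion "2.0.0" -Monitored -FailureAction Rollback
```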

Manual rollback of an application / service

I have a Service Fabric application with a few services underneath it. They are all currently sitting at version 1.0.0.
I deploy an update out to the cluster for version 2.0.0. Everything is running fine and the deployment succeeds. Then I notice a very large bug in the new version. Is there a way to manually roll back to version 1.0.0? The only thing I have found is automatic rollback during an upgrade.
Matt's answer is correct, but I'll elaborate a bit on it here.
The key is in understanding the different steps during application deployment:
Copy
Register
Create
Upgrade
Visual Studio rolls these up into single "publish" and "upgrade" operations to make it easy and convenient. But these are actually individual commands in the Service Fabric management API (through PowerShell, C# or HTTP). Let's take a quick look at what these steps are:
Copy:
This just takes your compiled application package and copies it up to the cluster. No big deal.
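If you're doing this step by hand, it's a single cmdlet (package path and image store path are placeholders; on newer SDKs the image store connection string can often be omitted):

```powershell
# Copy the compiled package up to the cluster's image store.
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath ".\pkg\Release" `
    -ImageStoreConnectionString "fabric:ImageStore" `
    -ApplicationPackagePathInImageStore "FooType"
```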
Register:
This is the important step in your case. Register basically tells the cluster that it can now create instances of your application. Most importantly, you can register multiple versions of the same application. At this point, your applications aren't running yet.
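Continuing the sketch from the previous step:

```powershell
# Tell the cluster about the package. After this, multiple versions of
# FooType can be registered side by side, none of them running yet.
Register-ServiceFabricApplicationType -ApplicationPathInImageStore "FooType"
```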
Create:
This is where instances of your registered applications are created and start running.
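And to create an instance (the application name fabric:/Foo is a placeholder):

```powershell
# Create a running instance of the registered type at a given version.
New-ServiceFabricApplication -ApplicationName "fabric:/Foo" `
    -ApplicationTypeName "FooType" -ApplicationTypeVersion "1.0.0"
```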
Before we get to upgrade, let's look at what's on your cluster. The first time you go through this deployment process with version 1.0.0 of your application (call it FooType), you'll have just that one type registered:
FooType 1.0.0
Now you're ready to upgrade. You first copy your new application package with a new version (2.0.0) up to the cluster. Then, you register the new version of your application. Now you have two versions of that type registered:
FooType 1.0.0
FooType 2.0.0
Then when you run the upgrade command, Service Fabric takes your instance of 1.0.0 and upgrades it to 2.0.0. If you need to roll it back once the upgrade is complete, you simply use the same upgrade command to "upgrade" the application instance from 2.0.0 back to 1.0.0. You can do this because 1.0.0 is still registered in the cluster. Note that the version numbers are in fact not meaningful to Service Fabric other than that they are different strings. I can use "orange" and "banana" as my version strings if I want.
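In PowerShell, both directions are the same cmdlet; a rollback is just an upgrade whose target happens to be the older registered version:

```powershell
# Upgrade 1.0.0 -> 2.0.0.
Start-ServiceFabricApplicationUpgrade -ApplicationName "fabric:/Foo" `
    -ApplicationTypeVersion "2.0.0" -Monitored -FailureAction Rollback

# "Roll back" 2.0.0 -> 1.0.0: the same command, targeting the older version.
# This works because 1.0.0 is still registered in the cluster.
Start-ServiceFabricApplicationUpgrade -ApplicationName "fabric:/Foo" `
    -ApplicationTypeVersion "1.0.0" -Monitored -FailureAction Rollback
```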
So the key here is that when you do a "publish" from Visual Studio to upgrade your application, it's doing all of these steps: it's copying, registering, and upgrading. In your case, you don't actually want to re-register 1.0.0 because it's already registered on the cluster. You just want to issue the upgrade command again.
For an even longer explanation, see: Blue/Green Deployments with Azure ServiceFabric
Just follow the same upgrade procedure, but targeting your 1.0.0 version instead. "Rollback" is just an "upgrade" to your older version.

How does a production deployment for a cloud based software happen?

Let's take the example of a heavily used cloud-based application.
When a deployment happens, let's say users are online.
Won't the server require a stop and start after the deploy? How is service continuity maintained?
How are ongoing user sessions and unsaved data carried over post-deploy?
How is the risk managed? (Let's say an issue comes up after the deploy and you need to revert to the older version; now imagine a user has already worked on the new version and saved some data with it that is not compatible with previous versions.)
Whether the server requires a stop and start after a deploy depends on the technology being used. Any state information that is kept within the server can be externalized (i.e., written to disk) for the purpose of updating the application. Whether this is necessary, and whether it is done, depends on the technology being used.
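As an illustration only (not tied to any particular product), externalizing in-memory state around a restart can be as simple as serializing it to disk before the stop and reading it back after the start. A minimal PowerShell sketch, with entirely made-up state and file names:

```powershell
# Hypothetical sketch: persist in-memory session state across a deploy.
$stateFile = "C:\deploy\sessions.xml"

# Before stopping the server: write the live session table to disk.
$sessions = @{ "user42" = @{ Cart = @("sku-1", "sku-2"); LastSeen = Get-Date } }
$sessions | Export-Clixml -Path $stateFile

# ... deploy the new version and restart the service here ...

# After starting the new version: read the externalized state back in.
$restored = Import-Clixml -Path $stateFile
```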

AWS OpsWorks - updating the app, will it drainstop Apache?

I have an autoscaled environment in production, which is currently chaotic whenever we push a new build to it, so we thought we'd better move to AWS OpsWorks to make the process easier for us.
We can't afford downtime, not now, not ever; even a second of loss while updating a build and perhaps restarting Apache costs a fortune.
We can't possibly afford to just let our machines be terminated by the autoscaling policy when a new update comes in with a new AMI-based EC2 machine. When autoscaling terminates a machine, it doesn't care about the requests still running on that machine; it just shuts it down. What it should do instead is a graceful shutdown, something like a drainstop on Apache, so it can at least finish the work in hand first.
Now that OpsWorks is here, and we are planning to use it to update our builds more automatically, will a new update push run the recipes again? In fact, the paragraph I just read (quoted below) worries me more. Does it mean that it won't update the build automatically on existing instances?
After you have modified the app settings, you must deploy the app. When you first deploy an app, the Deploy recipes download the code and related files to the app server instances, which then run the local copy. If you modify the app in the repository, you must ensure that the updated code and related files are installed on your app server instances. AWS OpsWorks automatically deploys the current app version to new instances when they are started. For existing instances, however, the situation is different:
You must manually deploy the updated app to online instances.
You do not have to deploy the updated app to offline instance store-backed instances, including load-based and time-based instances; AWS OpsWorks automatically deploys the latest app version when they are restarted.
You must restart offline EBS-backed 24/7 instances and manually deploy the app; AWS OpsWorks does not run the Deploy recipes on these instances when they are restarted.
You cannot restart offline EBS-backed load-based and time-based instances, so the simplest approach is to delete the offline instances and add new instances to replace them. Because they are now new instances, AWS OpsWorks will automatically deploy the current app version when they are started.
First of all, let me state that I started looking into OpsWorks only about two weeks ago, so I'm far from being a pro. But here's my understanding of how it works:
We need to differentiate between instances that are instance store backed, and instances that are EBS backed:
The instance store disappears together with the instance once it's shut down. Therefore, bringing it up again starts from zero: it has to download the latest app again and will deploy that.
For EBS-backed instances, the deployed code remains intact (persisted) beyond the lifetime of the instance to which the volume is attached. Therefore, bringing an EBS-backed instance back to life will not update your app automatically; the old version remains deployed.
So your first decision needs to be what instance type to use. It is generally a good idea to have the same version of your app on all instances, so I would suggest going with EBS-backed instances, which will not automatically deploy new versions when booting up. In this case, deploying a new version would mean bringing up brand-new instances that will run the new code automatically (as they are new) and then destroying the old ones. You will have a very short window during which old and new code run side by side.
However, if you prefer to always have the very latest version deployed and can accept discrepancies between individual instances for an extended period of time (e.g. having different app versions deployed depending on when an instance was originally started), then instance store-backed might be your choice. Every time a new instance spins up, the latest and greatest code will be deployed. If you want to update existing ones, just bring up new instances instead and kill the existing ones.
Both strategies should give you the desired effect of zero downtime. The difference is in when and how the latest code is deployed. Combine this with HAProxy to have better control over which servers are used; you can gradually move traffic from old instances to new instances, for example.
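If you do go the manual route for online instances, the deploy can be triggered from the AWS Tools for PowerShell. This is an assumption-laden sketch: I believe the OpsWorks CreateDeployment call surfaces as New-OPSDeployment with a flattened -Command_Name parameter, and the stack, app, and instance IDs below are placeholders:

```powershell
# Run the Deploy recipes on specific online instances of an OpsWorks stack.
# Cmdlet and parameter names per my reading of the AWS Tools for PowerShell.
New-OPSDeployment -StackId "my-stack-id" -AppId "my-app-id" `
    -InstanceId @("instance-1", "instance-2") `
    -Command_Name "deploy"
```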