JBPM workflow patch generation

I have been using jBPM workflow in my project and I have a small question about generating the database patches or SQL statements needed to apply jBPM workflow modifications.
Currently jBPM provides a way to refresh the jBPM tables in the schema when deploying the latest process definitions. But what if my system is already live with a process definition deployed in state X, and I have now modified the process definition file to accommodate change X2? I still need to be able to deploy the delta changes without disrupting the instances of old saved data.
Is it possible to generate only "delta" database scripts for a jBPM process definition modification? And what other good tools can be used to modify process definitions more intuitively?
To restate my problem: jBPM deploy cleans the jBPM tables of the old instances maintained there and then redeploys the latest files; how do I generate the delta without deleting old data? And are there any user-friendly tools for that?
Any help in this regard will be appreciated.

I'm not sure I've understood your issue correctly. jBPM doesn't clean the tables of old process instances when you deploy a new process definition.
When you deploy a new process definition with the same name as an existing one, you get a new version of that process definition.
Existing process instances continue running with the process definition version they were started with, while new process instances take the latest version unless you specify the precise version to be used.
In theory, a process definition can also be modified for running process instances using the API. In doing so, you must take care to make these changes compatible with the current position of those instances in the flow.


Is there any way to move an individual entity from one server to another in Master Data Services?

I have a master data model with some entities, and it is deployed on the production server.
I have now created two more entities on the development server and want to move only these two entities.
If anyone has any idea, please share it with me.
Thanks!
You have two options.
Web app (easiest): on your dev server, go to System Administration. Click on Deployment and create a package. You then deploy this package by going to the production server and following the same steps, but choose deploy instead of create under the 'Deployment' button.
The alternative is to use MDSModelDeploy.exe. You can find it on the server in the Master Data Services configuration folder; generally the path is C:\Program Files\Microsoft SQL Server\130\Master Data Services\Configuration.
I recommend this method, as it gives you more control. You can choose to deploy with data or without, or clone your model. You can read more here: https://learn.microsoft.com/en-us/sql/master-data-services/deploy-a-model-deployment-package-by-using-mdsmodeldeploy
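For reference, a typical session might look like the sketch below. The service name (MDS1), model name, version, and paths are placeholders for your own environment; the verbs are the documented MDSModelDeploy commands.

```powershell
# Placeholder names throughout: adjust service, model, version, and paths.
cd "C:\Program Files\Microsoft SQL Server\130\Master Data Services\Configuration"

# Discover the MDS service name to target
.\MDSModelDeploy.exe listservices

# On the dev server: create a package from the model
# (add -includedata to ship the master data along with the structure)
.\MDSModelDeploy.exe createpackage -service MDS1 -model MyModel -version VERSION_1 -package "C:\Temp\MyModel.pkg"

# On the production server: apply the package as an update to the existing model
.\MDSModelDeploy.exe deployupdate -service MDS1 -package "C:\Temp\MyModel.pkg" -version VERSION_1
```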
I can also recommend considering the ModelPackageEditor when your model starts getting big. It gives you control over exactly what you deploy: entities, views, business rules, and so on.
You need to have a deployment strategy in place, because if your development and production environments are not exactly the same, you will run into deployment errors. This typically happens when, for example, business rules are created on the environment you are deploying to but do not exist on your dev environment. MDS uses copious amounts of IDs, and if the models are not in sync you run into problems.

Automatically Combine/Bundle EF Migration(s)

I'm wondering if it is possible to run some automated tasks either on a Release (web deploy) action, or Branch Merge (TFS) action?
Ideally I would like to set up a process that will automatically combine EF migrations since the last release. I'm still looking into how I would automate this, but I think the first step is hooking into a suitable event.
I haven't set up a build server yet, but I'm guessing that if the above isn't possible, this would be an option for attaching a custom procedure to the MSBuild task?
Alternatively, if anyone has experience automating things like this, I would be happy to hear it. I am the head of development at a web development company and I would like to streamline our current processes by automating some of our standard procedures, and this is something we do over and over again for every development!
I appreciate your time looking at my question, thanks.
VSTS and TFS2015 both support a CI/CD process via their new build and release system. Very flexible and powerful. Check it out!
https://msdn.microsoft.com/Library/vs/alm/Release/getting-started/understand-rm
VS/WebDeploy does support deploying EF migrations with a web application:
https://msdn.microsoft.com/en-us/library/dd394698?f=255&MSPPError=-2147217396#efcfmigrations
This works fine for deploying a small application/system but when you want to deploy a larger system with many components it doesn't work as well. We create MSDeploy packages for each component of the system. For example, this is how we deploy SQL databases:
http://dotnetcatch.chief7.space/2016/02/10/deploying-a-database-project-with-msdeploy/
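If you'd rather apply migrations as an explicit release step instead of at application start, EF6 also ships a command-line runner, migrate.exe, in the EntityFramework NuGet package's tools folder. A minimal sketch of wiring it into a release, where the assembly name, paths, package version, and connection string are all placeholders:

```powershell
# Hypothetical post-deploy step: apply pending EF migrations with migrate.exe.
# Assembly name, paths, package version, and connection string are placeholders.
$binPath = "C:\Deploy\MyWebApp\bin"

# migrate.exe lives in the EntityFramework package's tools folder and must run
# next to the deployed assemblies.
Copy-Item "C:\Build\packages\EntityFramework.6.1.3\tools\migrate.exe" $binPath

& "$binPath\migrate.exe" MyWebApp.dll `
    /startUpDirectory="$binPath" `
    /connectionString="Server=.;Database=MyAppDb;Integrated Security=True" `
    /connectionProviderName="System.Data.SqlClient" `
    /verbose

if ($LASTEXITCODE -ne 0) { throw "EF migration step failed" }
```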

TFS Intranet Automated Deploy Strategy

I have introduced branching/merging to my team and have talked before about how it would be great to automatically build and deploy code checked into the staging/master branches, but I'm a junior dev, not very ops-y.
The trouble I'm having is that we create intranet applications and host them on our own VMs, which we have access to, but we also have load balancing, which is causing me grief!
I can get a build to automate (well, I haven't got all the bugs figured out but I'm working my way through them) - and I can even get the build to automatically create a zip file ready for deployment.
Is it possible to configure several servers for deployment?
I.e.:
1) I check in some code to stage
Then, automatically:
2) The code builds
3) The build completes, and the unit tests run and pass
4) The code is packaged into a .zip
5) The .zip is deployed across the three load-balancing servers (all with the same file path)
It may be worth noting that our TFS server currently runs Visual Studio, so the code is built on the same server where it is stored, but this is not the server we run live code from.
Any help or tutorials specific to my setup would be GREATLY appreciated, I really want to turn this departments releasing strategies around!
I am going to address only the deployment aspect. There are a lot of different ways that this can be handled, such as:
Customizing the build template
Writing custom .Net code and inserting it into the build template (which would also involve customizing the template)
Creating a Batch or PowerShell script set to run after the build completes (a minimal sketch of this option follows this list)
Using a separate tool such as Octopus Deploy or Release Management to handle the deployments
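To make the scripted option concrete, here is a minimal PowerShell sketch of a post-build deployment to the three load-balanced servers from the question; the server names, drop share, and destination path are all assumptions:

```powershell
# Minimal sketch: push the build's zip to each load-balanced server and unzip it.
# Server names, drop share, and destination path are placeholders.
$servers = "WEB01", "WEB02", "WEB03"
$package = "\\BUILDSRV\Drops\MyApp\MyApp.zip"   # zip produced by the build
$appPath = "Apps\MyApp"                         # same file path on every server

foreach ($server in $servers) {
    $dest = "\\$server\c$\$appPath"
    New-Item -ItemType Directory -Path $dest -Force | Out-Null
    Copy-Item $package $dest -Force
    # Expand-Archive needs PowerShell 5+; older hosts need another unzip mechanism
    Expand-Archive -Path "$dest\MyApp.zip" -DestinationPath $dest -Force
}
```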
The first thing you need to do is separate the build and deployment steps in your head. While they are tightly coupled in your model, they are two totally different tasks that need to be handled in different ways.
The second thing is to stop thinking like a developer when it comes to the deployment portion. While there will likely be a programmatic solution, you'll need to identify the manual steps first.
You stated that you're not very ops-y, by which I assume you mean you're more of a developer than a systems analyst. If that is the case, then the third thing you'll need to do is get someone who is involved in operations, such as your current release team.
There are then three major things that need to be done:
EVERYTHING needs to be standardized. If you can't standardize something, then standardize the way that it's non-standard. For example: you have a bulk list of servers to deploy to, and you need to figure out which ones to deploy to based on their names, which can be anything. In that case, put a rule in place that all QA servers must have QA in their name, User Acceptance servers UAT, Production servers PROD, and so on.
Figure out how you're going to communicate from the build to the deployment: which builds are going to be deployed, to which servers, and where the code is going to be picked up from.
You need to document every manual step, every exception to those steps, and every exception to those exceptions.
Once you have all those pieces in place, you then need to go through each manual step and automate it, whether that's through Batch, PowerShell, or a custom-built application. Once you have all the steps automated, you'll have both the build and deploy pieces complete.
After you're able to execute a single "manual" automated deployment to a single environment, you're then ready to figure out how you want to run it for multiple environments. This can range from an XML file that is iterated through to simply calling the same command multiple times with different parameters.
A quick summary of how I've done this at my current job (where using a third-party deployment tool was not an option):
Created a tool using .NET WinForms to allow us to "manually" run automated builds. (We use the interface to determine the input parameters, and the custom classes under the hood do all the heavy lifting. These custom classes live in a separate project that builds to its own DLL, which also allows us to test tweaks and changes to the process in a testing environment before rolling them out to our production build server.)
Set up an XML file for each environment (QA, UAT, Prod, etc.) that contains all of the servers that need to be deployed to in that environment, including destination paths, scheduled tasks, and Windows Services.
Customized the TFS build template to include the custom classes created for the tool, which read the XML file and iterate through each server entry to perform the deployments (a simplified sketch of this XML-driven loop follows).
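A simplified PowerShell sketch of that XML-driven loop, with an invented file layout (the real schema, paths, and service names would be your own):

```powershell
# Invented example of one environment file, e.g. C:\Deploy\Environments\QA.xml:
# <environment name="QA">
#   <server host="QAWEB01" destinationPath="c$\Apps\MyApp">
#     <windowsService>MyAppService</windowsService>
#   </server>
# </environment>

$config = [xml](Get-Content "C:\Deploy\Environments\QA.xml")

foreach ($server in $config.environment.server) {
    # Copy the build output to this server's destination path
    $dest = "\\$($server.host)\$($server.destinationPath)"
    Copy-Item "\\BUILDSRV\Drops\MyApp\*" $dest -Recurse -Force

    # Restart each Windows Service listed for this server
    foreach ($svcName in @($server.windowsService)) {
        Invoke-Command -ComputerName $server.host -ScriptBlock {
            param($name) Restart-Service -Name $name
        } -ArgumentList $svcName
    }
}
```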
I'm more than happy to help with more specific examples and assistance. I look at things a bit differently than most people, and that helps when it comes to release management.

Behavior difference between Actor and Service projects in Azure Service Fabric

In an Actor project, the AssemblyVersionAttribute value is used to update the ServiceManifest version, along with the code and config version. There is no such behavior for Service projects.
This updated version is also used to update the ServiceManifestRef's ServiceManifestVersion reference in the ApplicationManifest. While the ApplicationManifest is modified on every build, it doesn't appear that a manually set version in the Service project's ServiceManifest is propagated to the ApplicationManifest either.
Is this planned or intended behavior for Service projects?
I'm running Visual Studio 2015 RC, the first preview of the Service Fabric SDK, and 4.0.95-preview1 of the NuGet packages.
Short answer: This behavior difference is temporary as we improve our tooling support for versioning and upgrade.
Slightly longer answer: Part of the original goal of the Service Fabric actor framework was to abstract away the details of manipulating the application and service manifests so that you can truly focus on your business logic. Hence, the SDK includes a tool (called FabActUtil) which is responsible for doing some of that manipulation on your behalf as a post-build step. There is currently no such tool for reliable services projects. We are considering options for reconciling this difference as part of adding upgrade support to Visual Studio. We need to strike a balance between keeping you in control of your versioning scheme and taking care of the chore of cascading your version changes throughout the application as required.
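To make the behavior concrete, the effect of the actor post-build step is roughly the following. This is an illustration of the idea, not FabActUtil's actual code; the file paths assume the default project layout, the manifest element names follow the standard Service Fabric schema, and a single code/config package is assumed.

```powershell
# Rough illustration only (not FabActUtil itself): cascade the assembly version
# into the service manifest, then into the application manifest's ServiceManifestRef.
$asm     = (Resolve-Path "bin\Release\MyActorService.exe").Path   # placeholder path
$version = [Reflection.AssemblyName]::GetAssemblyName($asm).Version.ToString()

$svcPath = Resolve-Path "PackageRoot\ServiceManifest.xml"
$svc = [xml](Get-Content $svcPath)
$svc.ServiceManifest.Version               = $version   # service manifest version
$svc.ServiceManifest.CodePackage.Version   = $version   # code package version
$svc.ServiceManifest.ConfigPackage.Version = $version   # config package version
$svc.Save($svcPath)

$appPath = Resolve-Path "..\MyApp\ApplicationPackageRoot\ApplicationManifest.xml"
$app = [xml](Get-Content $appPath)
$ref = $app.ApplicationManifest.ServiceManifestImport.ServiceManifestRef |
       Where-Object { $_.ServiceManifestName -eq $svc.ServiceManifest.GetAttribute("Name") }
$ref.ServiceManifestVersion = $version
$app.Save($appPath)
```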

How does a production deployment for cloud-based software happen?

Let's take the example of a heavily used cloud-based software product.
When a deployment happens, let's say users are online.
Won't the server require a stop & start after the deploy? How is service continuity maintained?
How will ongoing user sessions and unsaved data be carried over after the deploy?
How is the risk managed? (Let's say an issue comes up after the deploy and you need to revert to the older version; now imagine a user has already worked on the new version and saved some data with it that is not compatible with previous versions.)
Whether the server requires a stop & start after a deploy depends on the technology being used. Any state information that is kept within the server can be externalized (i.e. written to disk) for the purpose of updating the application; whether this is necessary, and whether it is done, again depends on the technology.
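For example, one common way to keep the service available during a deploy is a rolling deployment behind the load balancer, updating one node at a time. A rough sketch, where the load-balancer calls and all names are placeholders you would replace with your vendor's API:

```powershell
# Placeholder functions: swap in your load balancer's real drain/enable API.
function Disable-LoadBalancerNode($name) { Write-Host "draining $name" }
function Enable-LoadBalancerNode($name)  { Write-Host "re-enabling $name" }

$servers = "APP01", "APP02", "APP03"   # invented node names

foreach ($server in $servers) {
    Disable-LoadBalancerNode $server   # stop sending new traffic to this node

    # Deploy the new version to this node only (placeholder paths)
    Copy-Item "\\BUILDSRV\Drops\MyApp\*" "\\$server\c$\Apps\MyApp" -Recurse -Force

    # Health-check before returning the node to rotation; abort the rollout on failure
    $ok = (Invoke-WebRequest "http://$server/health" -UseBasicParsing).StatusCode -eq 200
    if (-not $ok) { throw "$server failed its health check; rollout stopped" }

    Enable-LoadBalancerNode $server
}
```

Because only one node is out of rotation at a time, users on the other nodes never see an outage; sticky sessions or an external session store handle continuity for users whose node is being drained.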