What is the difference between "AppFabric Workflow Services" and "Workflow Manager 1.0"?
Both are used to host workflows. To me, Workflow Manager looks good because it is scalable: we can create a workflow hosting farm using multiple servers.
Will "Workflow Manager" replace "AppFabric workflow"? For a new project, what should we select?
This is a tough one.
AppFabric Workflow Services (actually WCF workflow services) are hosted in WorkflowServiceHost, but to be honest, we can see that AppFabric workflow hosting is not really evolving much. Especially in combination with the BizTalk tools (adapter & mapper) through BizTalk AppFabric Connect, it is nice for building some things.
Workflow Manager is the technology that was shipped with SharePoint Server 2013, together with Service Bus for Windows Server. To be honest, it is a V1, but this will probably be the technology that will be evolved (especially since SharePoint is the biggest customer of this technology ;))
The nice thing about Workflow Manager is that it is built to be cloud-ready (isolation, scalability, security...). You also have the concept of the Trusted Surface (http://msdn.microsoft.com/en-us/library/windowsazure/jj193509(v=azure.10).aspx), which allows you to sandbox customizations.
So, my bet would be: if your product/platform is a long-term thing, go for Workflow Manager, but live with the V1 concepts, or ignore the Trusted Surface sandboxing.
If you are building for the shorter term, stick with AppFabric.
Hope this helps
Jurgen Willis answered this question when announcing the release of Workflow Manager 1.0 (http://blogs.msdn.com/b/workflowteam/archive/2012/10/24/announcing-the-release-of-workflow-manager-1-0.aspx).
A major difference between them is that AppFabric (for workflows) is meant for hosting workflow services based on WorkflowServiceHost (WFSH). That means the workflows in AppFabric are all services and expect to be invoked as services, consuming and exposing WCF SOAP services.
Workflow Manager, on the other hand, can host any type of workflow, including services. You can have workflows that do not receive or send any messages and only perform DB transactions.
Some follow-up I found.
AppFabric is going to be discontinued according to this:
http://blogs.msdn.com/b/appfabric/archive/2015/04/02/windows-server-appfabric-1-1-ends-support-4-2-2016.aspx
And SharePoint Server 2016 relies on AppFabric:
https://redmondmag.com/articles/2015/05/12/sharepoint-2016-and-infopath.aspx
Workflow Manager 1.0 was shipped with SharePoint Server 2013, as mentioned previously in this thread. Does that mean that Workflow Manager is also discontinued, or will it come as a version 2.0 when SharePoint Server 2016 is released? Any other information about where all this is going is very welcome.
The question:
will "Workflow manager" replace "appfabric workflow"? for new project
what to select?
still seems unanswered to me.
Windows Workflow Foundation is such a great and potent framework, and it is troublesome if you don't have an on-premises host system like AppFabric that you can rely on.
Sam Vanhoutte is right:
The main con of Workflow Manager is that it really is a V1 product. The two main issues I ran into when using it were:
Workflows hosted in Workflow Manager are expected to be declarative: adding your own custom code can be tricky, and the documentation is not extensive.
Workflow Manager does not let you easily force persistence of workflow state. There is some mention that Delay activities will persist state; however, the Persist activity is explicitly not supported. I have run into cases while building workflows where the same activity is executed multiple times, either because of a problem in the hosting environment configuration or because an exception in a custom code activity crashes the host instead of suspending the workflow, as it does when using AppFabric.
If you have the time to put in to learn the platform and deal with V1 issues, I would definitely choose Workflow Manager. If you have experience with hosting in AppFabric, be prepared for significant differences.
Windows Fabric or Service Fabric is what is used to form the Service Bus cluster ring. Service Fabric is used in the Service Bus 1.1 (with TLS 1.2 support) version; the previous versions use Windows Fabric.
AppFabric is not used by Workflow Manager; it is used by SharePoint.
What is the best way to achieve DevOps with XPages?
Multiple developers working as a team
On-premises servers [Dev, QA, Prod]; can we replicate to Bluemix?
Source control
Automated testing of the UI / application
Unit testing business logic with a testing framework
Automated deployment
IDE/Tools: Domino Designer; are there other ways?
Note: views are used when the data is in an NSF; otherwise the data is in the cloud or SQL. No forms or other classic Notes design elements.
What are your approaches to this?
This is a high-level overview of the topics required to attempt what you're describing. I'm breezing past lots of details, so please search them out; I've tried to reference the supporting documentation, blog posts, etc. of others that I'm currently aware of. If anyone has anything good to add, I'm happy to add it in.
There are several components involved with what you're describing, generally amounting to:
SCM workflow
building the app (NSF)
deploying the built app to a Domino server
Everything else, such as release workflow through a QA/QC environment, is secondary to the primary steps above. I'll outline what I'm currently doing, attempting to highlight where I'm working on improving the process.
1. SCM Workflow
This can be incredibly opinionated and will depend a lot on how your team does/wants to use source control with your deployment / release process. Below I'll touch on performing tests, conceptually, during/around the build step.
I've switched from a fairly generic SCM server implementation to a GitLab instance. Even running a CE instance is pretty fantastic with their CI runner capabilities. Previously, I had a Jenkins CI instance performing about the same tasks, but I had to bake more "workflow" into the Jenkins task, whereas now most of that logic is in a unified script, referenced from a config file (.gitlab-ci.yml). This is similar to how a Travis CI or other similar CI config file works.
This config calls some additional helper scripts, but ultimately revolves around an adapted version of Egor Margineanu's PowerShell script, which invokes the headless DDE build task.
2. Building an NSF from Source
I've blogged about my general build process, with my previous Jenkins CI implementation. I followed the blogging of Cameron Gregor and Martin Pradny for this. Ultimately, you need to:
configure a Windows environment with Domino Designer
set up Domino Designer to import from ODP (disable export), ensuring Build Automatically is enabled
the notes.ini will need a flag of DESIGNER_AUTO_ENABLED=true
the Jenkins CI or GitLab CI runner (or other) will need to run as the logged in user, not a Windows service; this allows it to invoke the "headless dde" command correctly, since it runs in the background as opposed to a true headless invocation
ensure that Domino Designer can start without prompting for a user's password
My blog post covers additional topics such as flagging the build as a success or failure by scanning the output logs for failure markers. It also touches on how to submit the code to a SonarQube instance.
Ref: IBM Notes/Domino App Dev Wiki page on headless designer
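To give a feel for the wrapper the CI runner calls, here is a minimal PowerShell sketch. The Designer path, command-file format, and log markers below are assumptions from my own setup; check them against the headless designer documentation and the referenced script before using them:

```powershell
# Minimal sketch of a headless Domino Designer build wrapper.
# Paths, the command-file contents, and the log markers are assumptions;
# adapt them to your Notes/Designer install and the headless designer docs.

$designerExe = 'C:\IBM\Notes\designer.exe'        # assumed install path
$commandFile = 'C:\build\dde-build-command.txt'   # assumed command file consumed by headless DDE
$logFile     = 'C:\build\logs\headless-build.log' # assumed log location

# Write the build instruction (ODP location and target NSF are placeholders).
Set-Content -Path $commandFile -Value 'true,importandbuild,C:\build\odp,myapp.nsf'

# Launch Designer headlessly and wait for it to finish.
$proc = Start-Process -FilePath $designerExe -ArgumentList $commandFile -PassThru
$proc.WaitForExit()

# Flag the build result for the CI runner by scanning the output log.
if (Select-String -Path $logFile -Pattern 'Build failed' -Quiet) {
    Write-Error 'Headless DDE build reported a failure.'
    exit 1
}
Write-Output 'Headless DDE build succeeded.'
exit 0
```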
Testing
Any additional testing or other workflow considerations (e.g.- QA/QC approval) should go around the build phase, depending on how you set up your SCM workflow. A lot of the implementation will revolve around the specifics of your setup. A general idea is to allow/prevent deployment based on the outcome of the build + test phase.
Bluemix Concerns
IBM Bluemix, the only PaaS that runs IBM XPages applications, will require some additional consideration, such as:
their Git deploy process will only accept a pre-built NSF
the NSF must be signed by the account owner's Bluemix ID
Ref:
- IBM XPages on Bluemix
- Bluemix Docs: Building XPages apps for the Bluemix Runtime
3. Deploy
To Bluemix
If you're looking to deploy an XPages app to run on Bluemix, you would want to either ensure your headless build runs with the Bluemix ID, or is at least signed with it, and then deploy it for a production push either via a git connection or the cf/bluemix command line utility. Bluemix's receive hooks handle all the rest of the deployment concerns, such as starting/stopping the server instance, etc.
To an On-Premises Server
A user ID with appropriate level credentials needs to perform the work of either performing a design replace/refresh or stopping a dev/test/staging server, performing the file copy of the .nsf, then starting it back up. I've heard rumors of Cameron Gregor making use of a plugin to Domino Designer to perform the operations needed for OSGi plugin development, which sounds pretty useful. As most of my Domino application development is almost purely NSF based, I'm focusing more on an approach of deploying to a staging/dev/test server, which I can then perform a design task on to do the needed refresh/replace; closer to the "normal" Domino way of doing things.
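As a rough illustration of that file-copy approach, here is a hedged PowerShell sketch intended to run on the staging Domino box itself. The service name, drop share, and data directory paths are placeholders and will differ per environment:

```powershell
# Rough sketch: pull a freshly built NSF onto a staging Domino server.
# The Windows service name, drop share, and file locations are assumptions.

$builtNsf    = '\\buildserver\drops\myapp\myapp.nsf'  # assumed CI drop location
$targetNsf   = 'C:\IBM\Domino\data\apps\myapp.nsf'    # assumed Domino data directory
$serviceName = 'Lotus Domino Server'                  # assumed; check the actual service name

Stop-Service -Name $serviceName -ErrorAction Stop
try {
    Copy-Item -Path $builtNsf -Destination $targetNsf -Force
}
finally {
    Start-Service -Name $serviceName
}

# The design refresh/replace itself is then run against the target database
# (e.g. the Domino "design" server task or the Administrator client), outside this script.
```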
Summary
Again, there are a lot of moving pieces involved here, some of which get rather opinionated rather quickly. For example, I'm currently virtualizing my build machine, so I can spin up a couple of copies of it as virtual machines, allowing for more than one build at a time. If there are major gaps in the process, let me know and I'll fill in what I can.
In an Actor project, the AssemblyVersionAttribute value is used to update the ServiceManifest version, along with the code and config version. There is no such behavior for Service projects.
This updated version is also used to update the ServiceManifestRef's ServiceManifestVersion reference in the ApplicationManifest. While the ApplicationManifest is modified on every build, it doesn't appear that a manually set version within the Service project's ServiceManifest is updated in the ApplicationManifest either.
Is this planned or intended behavior for Service projects?
I'm running Visual Studio 2015 RC, the first preview of the Service Fabric SDK, and 4.0.95-preview1 of the NuGet packages.
Short answer: This behavior difference is temporary as we improve our tooling support for versioning and upgrade.
Slightly longer answer: Part of the original goal of the Service Fabric actor framework was to abstract away the details of manipulating the application and service manifests so that you can truly focus on your business logic. Hence, the SDK includes a tool (called FabActUtil) which is responsible for doing some of that manipulation on your behalf as a post-build step. There is currently no such tool for reliable services projects. We are considering options for reconciling this difference as part of adding upgrade support to Visual Studio. We need to strike a balance between keeping you in control of your versioning scheme and taking care of the chore of cascading your version changes throughout the application as required.
I'm looking for best practices for continuous delivery of Windows services.
Currently we have a set of PowerShell scripts that uninstall, reboot, and install updates, but error handling is tricky. We are reviewing System Center, but are there any other options available for deploying a Windows service?
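To give an idea, the scripts are roughly shaped like this (a trimmed sketch; the service name and paths are placeholders):

```powershell
# Trimmed sketch of our current approach; service name and paths are placeholders.
$serviceName = 'MyCompany.MyService'
$binPath     = 'C:\Services\MyService\MyService.exe'
$dropPath    = '\\buildserver\drops\MyService\latest'

# Stop and remove the existing service if it is present.
$existing = Get-Service -Name $serviceName -ErrorAction SilentlyContinue
if ($existing) {
    Stop-Service -Name $serviceName -ErrorAction Stop
    sc.exe delete $serviceName | Out-Null
}

# Copy the new binaries and recreate the service.
Copy-Item -Path "$dropPath\*" -Destination (Split-Path $binPath) -Recurse -Force
New-Service -Name $serviceName -BinaryPathName $binPath -StartupType Automatic | Out-Null
Start-Service -Name $serviceName

# Error handling is the tricky part: any failure above should roll back, or at least
# leave the box in a known state, which is what we are trying to improve.
```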
We've been using Presto since Dec 2011, and have done over 1,000 deployments. Most of what we deploy are Windows services.
What's nice is that we set up our apps and servers in Presto, then we can repeatedly deploy, to any server (or multiple servers at once), by just hitting a button. Presto will copy our official release binaries, update all of the items in our app config files, create and start the service, etc...
So, if you have an application that has 30 manual steps to deploying it, you can enter these steps in Presto, then it's done automatically for you after that.
It's worth a look: http://presto.codeplex.com/
Your most basic and generally accepted best option comes from this thread, which basically links to a Microsoft support article on creating an installer for the Windows service.
Newbie to automated Azure deployment here! I have the happy task of automating our deployment to the cloud. I have done some reading and discovered that the two main tools are MSBuild and PowerShell. Could anyone tell me why I would use one over the other, or indeed whether there are any better ways to automate the deployment? Keep in mind that my main concern is performance, and I need this deployment to be as fast as possible.
Any insight would be most welcome.
I'm a fan of using PowerShell for deployments. It's pretty quick to set up, and the script can be pretty straightforward.
MSBuild can be great too. I use MSBuild from TFS Team Build to kick off a PowerShell script to do the deployment. Works like a champ.
A good starting point would be http://blogs.msdn.com/b/tomholl/archive/2011/12/06/automated-build-and-deployment-with-windows-azure-sdk-1-6.aspx. This blog does a great job of showing you how to build and deploy with Team Build.
If you don't want/need the Team Build and MSBuild part, then just look at his PowerShell script. That covers the basics of getting a deployment from your dev environment to Windows Azure.
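If it helps, here is a minimal sketch of the kind of commands involved, assuming the classic (Service Management) Azure PowerShell cmdlets; the script in the blog may differ slightly by SDK version, and the subscription, service name, and file paths below are placeholders:

```powershell
# Minimal sketch using the classic Azure Service Management cmdlets.
# Subscription name, cloud service name, and file paths are placeholders.

Import-AzurePublishSettingsFile 'C:\secure\mysubscription.publishsettings'
Select-AzureSubscription -SubscriptionName 'My Subscription'

$serviceName = 'my-cloud-service'
$package     = 'C:\build\MyApp.cspkg'
$config      = 'C:\build\ServiceConfiguration.Cloud.cscfg'

# Create a new deployment in the staging slot (swap to production once verified).
New-AzureDeployment -ServiceName $serviceName `
                    -Slot 'Staging' `
                    -Package $package `
                    -Configuration $config `
                    -Label ("Build " + (Get-Date -Format 'yyyyMMdd-HHmm'))

# Promote staging to production by swapping the deployment slots.
Move-AzureDeployment -ServiceName $serviceName
```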
You should use Web Deploy; it only takes about a minute to deploy a fix. See these links:
http://blogs.msdn.com/b/cloud/archive/2011/04/19/enabling-web-deploy-for-windows-azure-web-roles-with-visual-studio.aspx
http://channel9.msdn.com/Blogs/funkyonex/Speed-Up-Azure-Deployments-with-the-New-Web-Deployment-Feature
At SplendidCRM, we had a similar need to automate deployments to Azure, but as our need was to service our live customers, we had to develop using C#. We had been watching Azure for many years, but it was not until they provided a DNS service that it made sense to make the move. Using the Azure Resource Manager (ARM) libraries, we were able to automate VM creation, SQL database creation, and DNS name creation. In addition to the Microsoft documentation for ARM, we found it particularly useful to be able to get the Microsoft source code for the PowerShell scripts that wrap ARM, because the documentation does not always provide a complete set of settings.
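For illustration, a rough sketch of the same kind of provisioning via the AzureRM PowerShell cmdlets that wrap ARM (we worked in C#, but the calls map closely); the resource group name, location, and template paths are placeholders:

```powershell
# Rough sketch using the AzureRM PowerShell cmdlets that wrap the same ARM APIs.
# Resource group name, location, and template file paths are placeholders.

Login-AzureRmAccount

$resourceGroup = 'splendid-demo-rg'
$location      = 'East US'

New-AzureRmResourceGroup -Name $resourceGroup -Location $location -Force

# Deploy an ARM template describing the VM, SQL database, and DNS entries.
New-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroup `
                                   -TemplateFile 'C:\deploy\azuredeploy.json' `
                                   -TemplateParameterFile 'C:\deploy\azuredeploy.parameters.json'
```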
In the end, we decided to release the Azure deployment code as part of a new Ultimate edition that combines order and customer management with software deployment.
I am setting up an automated deployment environment for a number of decoupled services that are in active development. While I am comfortable with the automated deployment/configuration management aspect, I am looking for strategies on how best to structure the deployment environment to make things a bit easier for developers. Some things to take into consideration:
Developers are generally building web applications, web services, and daemons -- all of which talk to one another over HTTP, sockets, etc.
The developers may not have everything running on their local machine, but they still need to be able to quickly do end-to-end testing by pointing their machine at the environment.
My biggest concern with continuous deployment is that we have a large team, and I do not want to constantly be restarting services while developers are working locally against those remote servers. On the flip side, delaying deployments to this development environment makes integration testing much more difficult.
Can you recommend a strategy that you have used in this situation in the past that has worked well?
Continuous integration doesn't have to mean continuous deployment. You can compile/unit test/etc. the code "continuously" throughout the day without deploying it and only deploy at night. This is often a good idea anyway - to deploy at night or on demand - since people may be integration testing during the day and wouldn't want the codebase to change out from under them.
Consider how much of the software developers can test locally. If a lot, they shouldn't need the environment constantly. If not a lot, it would be good to set up mocks/stubs so that much more can be tested on a local server. Then the deployed environment is only needed for true integration testing and doesn't need to be updated constantly throughout the day.
I'd suggest setting up a CI server (Hudson?) and using it to control all deployments to both your QA and production servers. This forces you to automate all aspects of deployment and ensures that there are no ad hoc restarts of the system by developers.
I'd further suggest that you consider publishing your build output to a repository manager like Nexus, Artifactory, or Archiva. That way, deployment scripts can retrieve any version of a previous build. The use of a repository manager would also enable your QA team to certify a release prior to its deployment to production.
Finally, consider one of the emerging deployment automation tools. Tools like Chef, Puppet, and ControlTier can be used to further version-control the configuration of your infrastructure.
I agree with Mark's suggestion to use Hudson for build automation. We have seen successful continuous deployment projects that use Nolio ASAP (http://www.noliosoft.com) to automatically deploy the application once the build is ready. As stated, Chef, Puppet, and the like are good for middleware installations and configurations, but when you need to continuously release new application versions, a platform such as Nolio ASAP, which is application-centric, is better suited.
You should have your best IT operations folks create and approve the application release processes, and then provide an interface for the developers to run these processes on approved environments.