I am new to Bonita and still trying to wrap my head around workflow management. I have a Maven project with a service layer in its architecture that does CRUD. I am exploring how the service layer can call, or be called from, the process definition/instance/variables to access the database. Can anyone give me some insight?
From Bonita to your service layer you can use:
Connectors: they are associated with tasks when creating process definitions and are executed when a process instance reaches the task.
Event handlers: they are associated with specific Bonita Engine events (such as creating a new process instance, initializing a step...). They can be used to trigger code execution that should apply to several deployed processes.
From your service layer to Bonita you can use:
Engine APIs using the Java client library (see the sketch after this list)
REST API
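For the Engine API option, here is a minimal sketch of a service-layer call into a remote Bonita Engine over HTTP, using the bonita-client Java library. The server URL, the credentials (Bonita's demo user), and the process/variable names are placeholders for illustration:

    import java.util.HashMap;
    import java.util.Map;

    import org.bonitasoft.engine.api.ApiAccessType;
    import org.bonitasoft.engine.api.LoginAPI;
    import org.bonitasoft.engine.api.ProcessAPI;
    import org.bonitasoft.engine.api.TenantAPIAccessor;
    import org.bonitasoft.engine.bpm.data.DataInstance;
    import org.bonitasoft.engine.session.APISession;
    import org.bonitasoft.engine.util.APITypeManager;

    public class BonitaClientSketch {
        public static void main(String[] args) throws Exception {
            // Point the client library at a remote Bonita Engine over HTTP.
            Map<String, String> params = new HashMap<>();
            params.put("server.url", "http://localhost:8080"); // placeholder URL
            params.put("application.name", "bonita");
            APITypeManager.setAPITypeAndParams(ApiAccessType.HTTP, params);

            LoginAPI loginAPI = TenantAPIAccessor.getLoginAPI();
            APISession session = loginAPI.login("walter.bates", "bpm"); // demo credentials
            try {
                ProcessAPI processAPI = TenantAPIAccessor.getProcessAPI(session);
                // Start an instance of a deployed process definition (names are placeholders).
                long definitionId = processAPI.getProcessDefinitionId("MyProcess", "1.0");
                long instanceId = processAPI.startProcess(definitionId).getId();
                // Read a process variable declared in that definition.
                DataInstance data = processAPI.getProcessDataInstance("myVariable", instanceId);
                System.out.println(data.getName() + " = " + data.getValue());
            } finally {
                loginAPI.logout(session);
            }
        }
    }

Your CRUD service layer could wrap calls like these, or conversely be invoked from a connector attached to a task.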
I want the ability for clients to create their own stateless services and upload/publish them to join an existing cluster. Is this doable? I understand that I need to update the application manifests dynamically, but I am not sure how, or whether this is possible programmatically without side effects on the Service Fabric runtime processes.
The workflow is to upload the code (zipped file maybe or whatever) via an API gateway.
The first thing to keep in mind is that you do not deploy individual services to a Service Fabric cluster. You deploy applications, which can contain one or more services.
So the key question to ask is whether you need the new code to be integrated with an existing application type or not. It sounds like what you're trying to do is just enable multiple clients to deploy independent applications on a shared Service Fabric cluster, in which case you would not be modifying existing application types, but deploying entirely new ones.
Thus, you would need your API gateway to dynamically generate application and service manifests, combine them with the client-provided code to create an application package, then copy, register, and create those applications in the cluster. As far as the Service Fabric runtime is concerned, this looks no different than if you had deployed an application type built and packaged in Visual Studio. Processes running existing applications are not impacted.
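As an illustration only, a gateway could script those copy/register/create steps with the standard Service Fabric PowerShell cmdlets. In this hypothetical Java sketch the package path, type name, version, and application name are placeholders, and depending on your SDK version Copy-ServiceFabricApplicationPackage may also require an -ImageStoreConnectionString:

    public class ClientAppDeployer {
        // Runs the three deployment steps in one PowerShell session; the
        // manifests are assumed to have already been generated and combined
        // with the client-provided code under packagePath.
        public static void deploy(String packagePath, String appType,
                                  String version, String appName) throws Exception {
            String script = String.join("; ",
                "Connect-ServiceFabricCluster",
                "Copy-ServiceFabricApplicationPackage -ApplicationPackagePath " + packagePath
                    + " -ApplicationPackagePathInImageStore " + appType,
                "Register-ServiceFabricApplicationType -ApplicationPathInImageStore " + appType,
                "New-ServiceFabricApplication -ApplicationName fabric:/" + appName
                    + " -ApplicationTypeName " + appType
                    + " -ApplicationTypeVersion " + version);
            Process p = new ProcessBuilder("powershell.exe", "-Command", script)
                    .inheritIO().start();
            if (p.waitFor() != 0) {
                throw new IllegalStateException("Service Fabric deployment failed");
            }
        }

        public static void main(String[] args) throws Exception {
            deploy("C:\\packages\\ClientAppPkg", "ClientAppType", "1.0.0", "ClientApp1");
        }
    }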
I see these both in Bluemix, but what is the difference between them?
Cloud Foundry and OpenWhisk are two Bluemix compute models that a developer can use to power an application's workload.
I'll give a very high-level summary of both services and when I would use them...
Cloud Foundry
IBM Bluemix was originally based on Cloud Foundry's open technology. It is a cloud computing platform as a service that supports the full application lifecycle, from initial development through all testing stages to deployment.
Cloud Foundry has a CLI program called cf, which is the primary tool for interacting with Bluemix (Bluemix also provides a web GUI for this).
Cloud Foundry introduces the concepts of Organizations, which contain Spaces; you can think of Spaces as workspaces. Different Spaces typically correspond to different lifecycle stages of an application.
Cloud Foundry introduces the concepts of Services and Applications. A Cloud Foundry service usually performs a particular function (like a database service), and an application usually has services, and their keys, bound to it.
OpenWhisk
OpenWhisk is a new distributed, event-driven compute model developed by IBM.
It has a distributed, automatically scaling, serverless architecture that executes application logic in response to events.
OpenWhisk also has a CLI program called wsk which can be used to run your code snippets, or actions, on OpenWhisk.
OpenWhisk introduces the concepts of Triggers, Actions, and Rules.
Triggers are a class of events emitted by event sources.
Actions encapsulate the actual code to be executed. They support multiple language bindings, including Node.js, Swift, and arbitrary binary programs encapsulated in Docker containers (an example follows below this list). Actions can invoke any part of an open ecosystem, including existing Bluemix services for analytics, data, cognitive, or any other third-party service.
Rules are an association between a trigger and an action.
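To make Actions concrete: OpenWhisk has added more language runtimes over time beyond the ones listed above, and assuming the Java runtime is available, an action is just a class with a static main(JsonObject) method using Gson types. The class and parameter names here are illustrative:

    import com.google.gson.JsonObject;

    public class Hello {
        // The OpenWhisk Java runtime invokes this static method with the
        // invocation parameters and returns the JsonObject as the result.
        public static JsonObject main(JsonObject args) {
            String name = args.has("name")
                    ? args.getAsJsonPrimitive("name").getAsString()
                    : "world";
            JsonObject response = new JsonObject();
            response.addProperty("greeting", "Hello " + name + "!");
            return response;
        }
    }

You would package this as a jar and manage it with the wsk CLI, e.g. wsk action create hello hello.jar --main Hello, then wsk action invoke hello --result --param name stranger.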
Cloud Foundry vs. OpenWhisk
So the question remains: when should you use Cloud Foundry, and when should you use OpenWhisk?
In my limited experience using OpenWhisk, here are my thoughts. I like to think of OpenWhisk as an easily implementable, automatically scaling architecture that application developers can use without needing much prior knowledge of backend development. I think of Cloud Foundry as sitting lower in the software stack; it might give you more customization, but it will likely take more skill and knowledge to set up.
I would use Cloud Foundry if I...
Was a backend & application developer.
Had experience creating and connecting services together.
Needed functionality that just might not be possible using OpenWhisk.
I would use OpenWhisk if I...
Was an application developer.
Didn't want to worry about a server.
Didn't want to learn different programming languages, etc., just to figure out how to set up my server.
Really wanted to focus on developing my application and have the backend just work.
Hope that helped.
CloudFoundry is a PaaS (Platform-as-a-Service) offering, which means, in a nutshell, that it hosts the platform for your application to run on. Examples of such a platform include Node.js or a JVM.
OpenWhisk is a serverless platform. The term FaaS (Function-as-a-Service) seems to be emerging as well. You upload code, which is executed once an event happens. That event can be anything, ranging from a simple HTTP request to a change happening in your database.
The fundamental difference between the two is the mode of operation. PaaS means you are still running a server process: a long-running process that listens for events and executes your logic when an event happens. All the rest of the time, the process sits idle, still requiring CPU cycles and memory just to listen for events.
In serverless, the platform takes on the burden of listening for events. Once an event happens, your code is instantiated and executed, then shut down afterwards, so it no longer requires any resources. That also explains why OpenWhisk actions have a time limit of five minutes: the platform is not meant to run long-lived actions.
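To illustrate the PaaS side of that contrast, here is a minimal long-running server process in plain JDK Java (the endpoint and port are arbitrary). This process keeps listening, and consuming memory, even when no requests arrive, which is exactly the burden a serverless platform takes off your hands:

    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    import com.sun.net.httpserver.HttpServer;

    public class PaasStyleServer {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/hello", exchange -> {
                byte[] body = "Hello".getBytes();
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            // The process now stays alive indefinitely, occupying memory while
            // idle; a FaaS action would instead be instantiated per event and
            // shut down afterwards.
            server.start();
        }
    }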
Disclaimer: both platforms support a lot more than I described here; I tried to keep it to the most substantial difference between the two.
Here is the current scenario:
1. I have an existing web application that uses Play.
2. I need to create an actor system with an HTTP interface that will reuse some libraries from #1.
In order to simplify development (not having to start multiple processes during development, etc.), I was thinking of adding my actor system + HTTP interface to my existing Play application, like:
http://localhost:8080/akka/api/....
I believe I can create a separate Akka thread pool inside application.conf.
Production configuration:
Now, when I push to production, since my Play application will run on multiple web servers, I would have a configuration flag to enable or disable the Akka system in my Play application.
I can then deploy my Akka service to a single server (or more in the future) by enabling it in the configuration, and disable it in the other regular www services.
Is this a good idea, will it work?
If so, how could I enable/disable the akka part of it using a simple flag in my configuration?
From what I have seen, people normally start the Akka system from inside a controller; maybe I can do it in the onStart stage and, if the akka flag is disabled, just skip it during startup?
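Something like this sketch is what I have in mind, using the Typesafe Config and Akka Java APIs; the myapp.akka-service.enabled flag and the class wiring are hypothetical (it could be called from Play's onStart hook, or from an eagerly loaded module in newer Play versions):

    import akka.actor.ActorSystem;
    import com.typesafe.config.Config;
    import com.typesafe.config.ConfigFactory;

    public class AkkaServiceBootstrap {
        private ActorSystem system; // stays null on the plain www servers

        public void onStart() {
            Config config = ConfigFactory.load(); // reads application.conf
            boolean enabled = config.hasPath("myapp.akka-service.enabled")
                    && config.getBoolean("myapp.akka-service.enabled");
            if (enabled) {
                // A dedicated dispatcher/thread pool can be declared under its
                // own path in application.conf and referenced by the actors.
                system = ActorSystem.create("akka-service", config);
                // ... create top-level actors, bind the HTTP interface, etc.
            }
        }

        public void onStop() {
            if (system != null) {
                system.terminate(); // shutdown() on Akka versions before 2.4
            }
        }
    }

On the regular www servers the flag is set to false (or simply omitted), so the Play application starts without the actor system.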
What is the difference between "AppFabric workflow services" and "Workflow Manager 1.0"?
Both are used to host workflows. To me, Workflow Manager looks good because it is scalable; we can create a workflow hosting farm using multiple servers.
Will "Workflow Manager" replace "AppFabric workflow"? For a new project, which should we select?
This is a tough one.
AppFabric Workflow Services (actually WCF workflow services) are hosted in WorkflowServiceHost, but to be honest, we can see that AppFabric workflow hosting is not really evolving much. Especially in combination with the BizTalk tools (adapter & mapper) through BizTalk AppFabric Connect, it is nice for building some things.
Workflow Manager is the technology that shipped with SharePoint Server 2013, together with Service Bus for Windows Server. To be honest, it is a V1, but it will probably be the technology that gets evolved (especially since SharePoint is the biggest customer of this technology ;))
The nice thing about Workflow Manager is that it is built to be cloud-ready (isolation, scalability, security...). You also have the concept of the Trusted Surface (http://msdn.microsoft.com/en-us/library/windowsazure/jj193509(v=azure.10).aspx), which allows you to sandbox customization.
So my bet would be: if your product/platform is a long-term thing, go for Workflow Manager, but live with the V1 concepts, or ignore the Trusted Surface sandboxing.
If you are building for the shorter term, still go for AppFabric.
Hope this helps
Jurgen Willis answered this question when announcing the release of Workflow Manager 1.0 (http://blogs.msdn.com/b/workflowteam/archive/2012/10/24/announcing-the-release-of-workflow-manager-1-0.aspx).
A major difference between them is that AppFabric (for workflows) is intended for hosting workflow services based on WorkflowServiceHost (WFSH). That means the workflows in AppFabric are all services and expect to be invoked as services, consuming and exposing WCF SOAP services.
Workflow Manager, however, can host any type of workflow, including services. You can have workflows that do not receive or send any messages but only perform DB transactions.
Some follow-up I found.
AppFabric is going to be discontinued, according to this:
http://blogs.msdn.com/b/appfabric/archive/2015/04/02/windows-server-appfabric-1-1-ends-support-4-2-2016.aspx
And SharePoint Server 2016 relies on AppFabric:
https://redmondmag.com/articles/2015/05/12/sharepoint-2016-and-infopath.aspx
Workflow Manager 1.0 shipped with SharePoint Server 2013, as mentioned previously in this thread. Does that mean Workflow Manager is also discontinued, or will a version 2.0 come when SharePoint Server 2016 is released? Any other information about where all this is going is very welcome.
The question:
will "Workflow manager" replace "appfabric workflow"? for new project what to select?
still seems unanswered to me.
Windows Workflow Foundation is such a great and potent framework, and it is troublesome if you don't have an on-premises host system like AppFabric to rely on.
Sam Vanhoutte is right:
The cons of Workflow Manager are that it really is a V1 product. The two main issues I ran into when using it were:
Workflows hosted in Workflow Manager are expected to be declarative: adding your own custom code can be tricky, and the documentation is not extensive.
Workflow Manager does not let you easily force persistence of workflow state. There is some mention that Delay activities will persist state; however, the Persist activity is explicitly not supported. I have run into cases while building workflows where the same activity was executed multiple times, either because of a problem in the hosting environment configuration or because an exception in a custom code activity crashed the host instead of suspending the workflow, as it does when using AppFabric.
If you have the time to put in to learn the platform and deal with the V1 issues, I would definitely choose Workflow Manager; if you have experience with hosting in AppFabric, be prepared for significant differences.
Windows Fabric or Service Fabric is what forms the Service Bus cluster ring. Service Fabric is used in the Service Bus 1.1 with TLS 1.2 support version; the previous versions use Windows Fabric.
AppFabric is not used by Workflow Manager; it is used by SharePoint.
I have a scenario where, at system install time, a few services were deployed to the OSGi container, and these services listen for other bundles that provide data and are dynamically installed and uninstalled at runtime.
These data providers do not expose any services and should not even invoke services; my idea is to enable the pre-deployed services to listen for the installation events of these data provider bundles and, if the pattern matches, process and persist the data into the data store.
For example, I have a WidgetService that listens for installation or uninstallation events of Widget data provider bundles, a ShoppingCartService that listens for installation/uninstallation events from ShoppableItem data provider bundles, etc.
This helps me keep the processing and persisting logic centralized, and my data providers need not write any code to have their data processed. All that is expected from the data provider bundles is the service name/id, the service version, the prerequisites, and the data they need to publish.
I have read several articles on OSGi that explain the dynamic pluggability of services and how clients can discover or discard services based on their availability; however, those all describe scenarios where the clients have to be intelligent enough to discover and execute the services they are interested in.
My intention is to make the client completely unaware of any service discovery, or for that matter any code. All the client passes in is the info about the service it is interested in, the dependencies, and the data; the client should be completely dumb.
Is this possible in OSGi? I'm ready to consider this architecture even at the cost of extending a few of the OSGi core framework classes!
I have found a somewhat, maybe remotely, related question on Stack Overflow:
Discovering Bundle MetaData with out installing the bundle
However, I want a hook or an event that will call my respective service when one or more data provider bundles have been installed. These data provider bundles can be interested in any of the services installed in the system. I'm even ready to write a central bundle repository manager/listener of some kind that will listen for any bundle installation and invoke my service facade, which will decide which service to execute based on the metadata provided by the data provider bundle.
I'm just starting with OSGi, so I need a little direction on how to move forward...
I'll be really thankful to you guys/girls :) if you can help me achieve this!
I have a nagging doubt that this may not be readily available in OSGi, and even if that is true, I'm ready to spend time and extend the framework to achieve it. All I need is a few guidelines and a clear direction. Who knows, if OSGi is really lacking this functionality, it would be a very useful add-on to a future OSGi spec.
You might have a look at section 4.7 (Events) of the OSGi Core spec. The Framework raises BundleEvents when there is a change in the lifecycle of a bundle, e.g. when it is installed or uninstalled. What you need to do is implement a BundleListener, which will then receive the events, so your service can react to the changes.
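A minimal sketch of such a listener, registered from a BundleActivator; the Data-Provider-Service manifest header is hypothetical, standing in for whatever metadata your data provider bundles declare:

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.BundleEvent;
    import org.osgi.framework.BundleListener;

    public class DataProviderTracker implements BundleActivator, BundleListener {

        public void start(BundleContext context) {
            context.addBundleListener(this);
        }

        public void stop(BundleContext context) {
            context.removeBundleListener(this);
        }

        public void bundleChanged(BundleEvent event) {
            // Read the (hypothetical) metadata header from the bundle manifest.
            String service = event.getBundle().getHeaders().get("Data-Provider-Service");
            if (service == null) {
                return; // not a data provider bundle, ignore it
            }
            switch (event.getType()) {
                case BundleEvent.INSTALLED:
                    // dispatch to the matching pre-deployed service, e.g. WidgetService
                    break;
                case BundleEvent.UNINSTALLED:
                    // clean up whatever was persisted for this provider
                    break;
            }
        }
    }

Your central facade would then map the header value to the right pre-deployed service, keeping the provider bundles completely passive, exactly as you describe.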
I have described a design pattern that I call "OSGi Mediator", which may be a solution to your problem.
The items you want to mediate would only be required to register with the service registry; all the dependencies could be managed by your mediator implementation.