
Is Windows Workflow Foundation compliant with the WfMC Standard?
http://www.wfmc.org/wfmc-standards-framework.html

You are mixing different concepts and so your question doesn't make sense.
XPDL, just like BPEL and BPMN among other standards, is no more than a notation developed to represent workflows as text (usually XML) or graphically, using diagrams.
That being said, WF4 is a .NET API which sits below those standards, allowing you to implement any chosen standard, such as the WfMC standard XPDL.
WF4, also when compared with WF3, is a highly flexible and extensible API which gives you the freedom, at least in theory, to implement every type of workflow you can imagine, with more or less code depending on the task you want to achieve. It supports scenarios ranging from human workflows (a case where WF4 is really good: for workflows that can take days, weeks or even months, its persistence implementation is almost transparent to the developer) to system-centric workflows (for example, small workflows that can be called as web services). Services like workflow monitoring are also easily implemented.
All this comes with a workflow designer which is implemented natively within VS 2010, can be rehosted in any .NET application just like any other UI control, and translates those workflows to XAML automatically.
I hope you can now see the difference between the two concepts, because you can't really expect WF4 to follow any workflow definition standard when it is just an API.

Related

Implementing SOA with RESTful service and application APIs?

At the moment we have one huge API which is used by our backoffice, our frontend, and also our public API.
This causes me a lot of headaches because when building new endpoints I find a lot of application specific logic in the code which I don't necessarily want to include in my endpoint. For example, the code to create a user might contain code to send a welcome email, but because that's not needed for the backoffice endpoint I will then need to add a new endpoint without that logic.
I was thinking about a large refactor to break our code base into a number of smaller, highly specific service APIs, then building a set of small application APIs on top of those.
So for example, an application endpoint to create a new user might do something like this after the refactor:
customerService.createCustomer();
paymentService.chargeCard();
emailService.sendWelcomeEmail();
The application and service APIs will be entirely separate code bases (perhaps a separate code base per service), and they may also be built using different languages. They will only interact through REST API calls. They will be on the same local network, so latency shouldn't be a huge issue.
Is this a bad idea? I've never seen/worked on a codebase which has separated the two before, so perhaps there is a better architecture to achieve the flexibility and maintainability I'm looking for?
Advice, links, or comments would all be appreciated.
Your idea of making multiple, well-defined services is sound, and really it is the best way to approach this. Going with a purely micro-services approach, however trendy it might seem, proves to be overkill more often than not. This is why I'd just redesign the existing API/services properly and follow the solid SOA design principles below. Good resources can be found on both serviceorientation.com and soapatterns.org; I've always used them as references in my career.
Consider what types of services you need
(image from serviceorientation.com)
Entity services are generally your Client and Payment services, i.e. services centered around an entity in your domain. They should be business-agnostic and reusable in all scenarios. They can sometimes be called by clients directly, if that is sufficient for the clients' needs, and they can be called by Task services.
Utility services contain logic you're likely to reuse in other services, but are generally not called by the clients directly. Rather, they'd be called by Task and Entity services. An example might be a Transliteration service.
Task services combine and reuse Entity and Utility services into meaningful tasks. Most often they are not that agnostic; they do implement some specific business logic. They have meaningful business operations and they are what clients mostly call (see the sketch below).
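
As a rough illustration of how a Task service might compose Entity and Utility services over REST, here is a minimal sketch; the service names, URLs and JSON shapes are hypothetical, not part of the original design:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical Task service operation ("register customer") composed from
// Entity services (customer, payment) and a Utility service (email).
public class RegisterCustomerTask {

    private final HttpClient http = HttpClient.newHttpClient();

    public void registerCustomer(String customerJson, String cardJson) throws Exception {
        // 1. Entity service: create the customer record.
        post("http://customer-service/customers", customerJson);
        // 2. Entity service: charge the card for the first payment.
        post("http://payment-service/charges", cardJson);
        // 3. Utility service: send the welcome email (not called by clients directly).
        post("http://email-service/welcome-emails", customerJson);
    }

    private void post(String url, String body) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() >= 300) {
            throw new IllegalStateException("Call to " + url + " failed: " + response.statusCode());
        }
    }
}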
Principles to follow when redesigning
I strongly recommend going over this cheat sheet and making sure everything there is covered when you do your redesign. It's great help.
In general, you should make sure that:
Each service has a common context and follows the separation of concerns principle, e.g. the Clients service is only for client-related operations, and so on.
Each of the Entity and Utility services is business-agnostic and basic enough that it can be reused in multiple scenarios and contexts without being changed. The contract must be simple: CRUD plus only the common operations that make sense in most usage scenarios.
Services follow a common data model - make sure all the data structures you use are used uniformly across services, in order to prevent the need for integration efforts in the future and to let clients combine services freely. If you need to receive a customer that another service returns, this should happen without the need for transformation.
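
To make the common-data-model point concrete, one way (a sketch only; the field names are hypothetical) is to keep the canonical representation in a small shared schema or library that every service uses on the wire:

import java.util.Objects;

// Hypothetical canonical Customer representation shared by the customer,
// payment and email services, so a Customer returned by one service can be
// passed to another without transformation.
public record Customer(
        String id,
        String fullName,
        String email,
        String defaultPaymentMethodId) {

    public Customer {
        Objects.requireNonNull(id, "id is required");
        Objects.requireNonNull(email, "email is required");
    }
}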
OK, but where to put the non-agnostic logic?
Now, you have multiple options for abstracting business logic whenever you need complex business functionality. Which one you choose depends on your scenario:
Leave the logic to the clients, and let them combine your simplified services.
If there is business logic that is commonly implemented in multiple of your applications and has the potential to be reused heavily, you can implement a composite service that reuses multiple existing underlying services and exposes that logic.
Service composability, and concerns about the communication overhead of multiple API calls
Well, this is an age-old question: should you make multiple API calls when they will probably create some communication overhead? The answer is that it depends on how complex your scenario is, how much reuse you expect, and how flexible you want to be. Also, is speed critical, and to what extent? In Service Oriented Architecture, though, this is a very common approach: reuse your existing services and combine them in new configurations as needed. Yes, it does add some overhead, but I've seen implementations in very complex environments, for example telecoms, where thanks to the use of ESB solutions, message queues, etc., the overhead is negligible compared to the benefits. Here is a common architecture approach (image from serviceorientation.com):
The mandatory legacy refactoring heads-up
More often than not, changing the established contract for multiple existing client systems is a messy business and could very well lead to lots of refactoring and hunting for needle-in-a-haystack functionality buried somewhere deep in the (possibly legacy) code. Business logic might be dispersed everywhere. So make sure you're ready and have the control, time and will to lead this battle.
Hope this helps
Is this a bad idea?
No, but this is too big an overall question to be able to provide very specific advice.
I'd like to separate this into 3 areas:
Approach
Design
Technology
Working backwards, the Technology is the final and most specific part; it totally depends on what your current environment is (platforms, skills) and will (hopefully) be reasonably self-evident to you once the other parts are in progress.
The Design that you outlined above seems like a good end-state - having multiple, specific, focused APIs, each with their own responsibility. Again, the details of the design will depend on your and your organization's skills and on the existing platforms that you have. E.g. if you are already using TIBCO and have a lot invested (licenses, platforms, tools, people), then leveraging some of their published patterns/designs/templates makes sense; but (probably) not if you don't already have TIBCO exposure.
In the abstract, REST API services seem like a good starting point: there are a lot of tools and platforms at every level of the system for security, deployment, monitoring, scalability, etc. If you are NGINX users, they have a lot of (platform-independent) thoughts on how to do this on the NGINX blog, including some smart thinking on scalability and performance. If you are more adventurous and have a smart, eager team, take a look at event-driven architecture - see this.
Approach (or Process) is the key thing here. Ultimately, this is a refactoring, though your description of "a large refactor" does scare me a little; put that way, it sounds like you are talking about a big-bang change and calling it refactoring. Perhaps it is just language, but what's in my mind would be "an evolution of the 'one huge API' into multiple, specific, focused APIs (by refactoring the architecture)". One place to start is Martin Fowler; while his book is about refactoring software, the principles and approach are the same, just at a higher level. Indeed, he talks about just this here.
IBM talk about refactoring to microservices and make it sound easy to do in one step, but it never is (outside the lab).
You have an existing API serving multiple internal and external clients. I suggest that you'll want to keep this interface stable for these clients - separate the refactoring of the implementation from the additional concerns of liaising with and coordinating external systems/groups. My high-level starting approach would be:
identify a small (3-7) number of related methods on the API
ideally if a significant, limited-scope change is needed anyway with these methods, that is good - business value with the code change
design/specify a new stand-alone API specifically for these methods
at first, clone the existing model/naming/style
code a new service just for these
with proper automated CI/CD testing and deployment practices
with associated monitoring
modify the existing API so that calls to these methods redirect to the new service (see the sketch after this list)
perhaps have a run-time switch to change between the old implementation and the new implementation
remove the old implementation from codebase
capture issues, assumptions and problems along the way
the first pass will involve a lot of learning about what works and doesn't.
then repeat the process over & over, incorporating improvements each time.
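
To make the redirect step concrete, here is a minimal sketch of what the wrapper inside the existing API might look like; the class and method names, the URL and the feature-flag mechanism are all hypothetical:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical wrapper inside the existing monolithic API: calls to the
// migrated method are redirected to the new stand-alone service, guarded by
// a run-time switch so the old implementation can be restored instantly.
public class CreateUserEndpoint {

    private final HttpClient http = HttpClient.newHttpClient();
    private final LegacyUserManager legacy;          // existing implementation being strangled out
    private final boolean useNewUserService =        // hypothetical run-time switch
            Boolean.parseBoolean(System.getenv().getOrDefault("USE_NEW_USER_SERVICE", "false"));

    public CreateUserEndpoint(LegacyUserManager legacy) {
        this.legacy = legacy;
    }

    public String createUser(String userJson) throws Exception {
        if (!useNewUserService) {
            return legacy.createUser(userJson);      // old code path, unchanged
        }
        // New code path: delegate to the extracted user service.
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://user-service/users"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(userJson))
                .build();
        return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }

    // Placeholder for the old implementation's interface.
    interface LegacyUserManager {
        String createUser(String userJson) throws Exception;
    }
}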
At some point in the future, when appropriate due to other business-driven needs, the API published to the back-end, front-end and/or public clients can change, but that is a whole different project.
As you can see, if the API is huge (1,000 methods => 140 releases) this is a many-months process, and having a reasonably frequent release schedule is important. And there may be no value in improving code that works reliably and never changes, so a (potentially) large portion of the existing API may remain, just wrapped by a new API.
Other considerations:
public API? Maybe a new version (significant changes) will be needed sooner than the internal APIs
focus on the methods/services used by it
what parts/services change the most (have the most enhancement requests approved)
these are the bits most likely to change, and could benefit most from a better process/architecture
what are future plans for change and where would the API be impacted
e.g. change to user management, change to payment processors, change to fulfilment systems
e.g. new business plans (new products/services)
consider affected methods in the API
Also see:
Using Microservices for Legacy System Modernization
Migrating From a Monolith to APIs and Microservices
Break the Monolith! Loosely Coupled Architecture Brings DevOps Success
From the CEO’s Desk: Application Modernization – Assess, Strategize, Modernize!
Microservices Architecture As A Large-Scale Refactoring Tool
Probably the biggest four pieces of advice that I can give are:
think refactoring: small changes that don't affect function
think agile: small increments that are valuable, testable, achievable
think continuous: have a vision for where you will (eventually) get to, then work the process continuously
script & automate the processes from code, documentation, testing, deployment, monitoring...
improving it every time!
you have an application/API that works - keep it working!
That is always the first priority (you just need to work to carve-out time/budget for maintenance)
Not a bad idea at all.
Also, what you are looking for is a microservices architecture, and with that the question becomes how you break your system into well-defined services.
We use Domain-Driven Design to break our system into microservices, together with the Lagom framework, which allows every service to live in a different code base and supports event-driven architecture between microservices.
Now let's look at your problem at a lower level: you said one endpoint contains code that creates a user and sends an email, another just creates a user, and there might be other code as well.
First we need to understand what types of code you are writing:
Domain object logic (e.g. the User object): what parameters are valid, and so on. This should be independent of the service endpoint and should be encapsulated in one class, such as a User class; in Domain-Driven Design terms we call it an Aggregate.
Business reactions (e.g. send an email on user creation): using event-driven architecture, these kinds of logic are separated into process managers or sagas, which in most cases can act conditionally (for example, send a welcome email for users created externally but not for users created internally) by carrying extra data in the event. A sketch follows below.
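
Here is a minimal sketch of that separation; the event fields, the process manager and the EmailService are hypothetical and not tied to Lagom or any particular framework:

import java.util.function.Consumer;

// Hypothetical domain event carrying extra data ("source") that lets
// subscribers react conditionally.
record UserCreated(String userId, String email, String source) {} // source: "external" or "internal"

// Hypothetical process manager / saga: it reacts to the event instead of the
// create-user endpoint sending the email itself.
class WelcomeEmailProcessManager implements Consumer<UserCreated> {

    private final EmailService emailService;

    WelcomeEmailProcessManager(EmailService emailService) {
        this.emailService = emailService;
    }

    @Override
    public void accept(UserCreated event) {
        // Only externally created users get a welcome email.
        if ("external".equals(event.source())) {
            emailService.sendWelcomeEmail(event.email());
        }
    }

    interface EmailService {
        void sendWelcomeEmail(String address);
    }
}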
Also, with the way you are doing it currently, how are you handling transactions across services?

SOAP for distributed transaction

I have been reading about the differences between REST and SOAP. I see in many posts that SOAP is a better choice for distributed transactional resources.
Please give me a practical example of SOAP being used for distributed transaction.
SOAP has been the main player for many years inside enterprise applications simply because there was no alternative. REST came later.
Since SOAP is a protocol, it is easier to build tools around it, because you always know how it behaves (i.e. as the protocol defines). For this reason, and because it is a mature technology, a lot of other specifications were built around it to cover almost any use one might have for SOAP. See a list here. There are, of course, some for transactional semantics as well. If you use SOAP with a technology like Java or C# (which are heavyweight champions in the enterprise applications field), then these transactional specifications are already implemented in the framework or libraries, and you just use them.
REST, on the other hand, is an architectural style for building applications. It's harder to limit it to a set of specifications; you can implement it in many ways. It also goes somewhat against "the way of the SOAP" by staying away from creating new standards or specifications and instead reusing those of the web. For this reason, there are no specs or tools to help you with transactional RESTful services; you have to build your own.
So when your application is built from self-contained web services, these services need to cooperate to produce the application's outcome, and you need a distributed transaction to guarantee that the outcome is consistent (all operations succeed or none do), then it is (more) practical to go for the technology that has the better tooling to support it.
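
As a hedged illustration of what that looks like in practice: in a Java EE container with WS-AtomicTransaction enabled on both the client and the called services, the container can propagate the transaction context in the SOAP headers, so the two calls below would commit or roll back together. The port types and operations are hypothetical, and the exact configuration is vendor-specific (WebSphere, WebLogic, etc.):

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.xml.ws.WebServiceRef;

// Sketch of a distributed transaction spanning two SOAP services via
// WS-AtomicTransaction: if either call fails, the coordinator rolls back
// both participants.
@Stateless
public class TravelBookingBean {

    // Hypothetical JAX-WS port interfaces; in practice these are generated
    // from the services' WSDLs.
    public interface FlightServicePort { void reserveSeat(String customerId, String flightNo); }
    public interface PaymentServicePort { void chargeCard(String customerId, long amountCents); }

    @WebServiceRef
    private FlightServicePort flightService;

    @WebServiceRef
    private PaymentServicePort paymentService;

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void bookTrip(String customerId, String flightNo, long amountCents) {
        // Both SOAP calls enlist in one atomic transaction when WS-AT is configured.
        flightService.reserveSeat(customerId, flightNo);
        paymentService.chargeCard(customerId, amountCents);
    }
}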

Enterprise application framework supporting DDD

I spent a short time studying Habanero and I found it a good approach for building enterprise applications in a really short period of time.
The pattern which Habanero uses is "Active Record", as its developers say.
My questions are:
Is there any similar framework to Habanero which fully supports Domain-Driven Design by modelling aggregate roots, entities and value objects?
Is it the right decision to use such tools in big organizations?
Is it worth training our team on such a tool?
thank you
Framework support for Domain-Driven Design is quite different from frameworks supporting data-driven applications. Such a framework should increase the productivity of developers who work with a ubiquitous language that evolves with the business and that is learned from a domain expert.
Developers should not have to face concepts like aggregates, roots and value objects in the framework's API, because these are just modelling concepts, conceptual tools, ways to ease the development process. Thus a framework exposing abstract classes or interfaces named AggregateRoot, Entity or ValueObject is fundamentally broken: it doesn't provide any real value to an application, just useless indirections.
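
To illustrate the point (a hypothetical example, not taken from any particular framework): a domain class should read in the ubiquitous language of the business rather than inherit from framework types.

import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

// Hypothetical domain class: it is an aggregate root in DDD terms, but nothing
// in its code says so; it extends no AggregateRoot base class and speaks only
// the ubiquitous language of ordering.
public class Order {

    private final List<OrderLine> lines = new ArrayList<>();

    public void add(String productCode, int quantity, BigDecimal unitPrice) {
        if (quantity <= 0) {
            throw new IllegalArgumentException("quantity must be positive");
        }
        lines.add(new OrderLine(productCode, quantity, unitPrice));
    }

    public BigDecimal total() {
        return lines.stream()
                .map(l -> l.unitPrice().multiply(BigDecimal.valueOf(l.quantity())))
                .reduce(BigDecimal.ZERO, BigDecimal::add);
    }

    // A value object, again without any framework marker interface.
    public record OrderLine(String productCode, int quantity, BigDecimal unitPrice) {}
}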
However:
There are a few frameworks designed to support Domain-Driven Design, listed here. Moreover, I'm developing one myself, based on previous experiences that worked very well.
It depends, obviously. For example, we used all of Epic's modeling patterns with success.
We used some "home made" frameworks too, and some of them proved to really increase productivity. However, such frameworks (when useful) always have steep learning curves, and it depends very much on how reliable the software has to be and what the developers' skills are.
It depends on the framework, on the complexity of the business (if you don't need a domain expert to understand it, you don't need DDD) and on the developers, too. I have seen success stories and huge failures with different frameworks in different contexts. I've also given a conference talk on the topic (you can see the slides here).

Are there any workflow engines in existence that don't use BPMN and BPEL?

Our business is planning on building a rather large business application with about 2000 or so users.
Many objects in the system require a mildly complex series of approvals, notifications, etc.
For various reasons, our company has decided to reject formal use of BPMN or BPEL. What I am looking for is a workflow engine that I can pass these objects to as a means of facilitating, tracking, and managing the state of these objects. We are implementing this project using EJB 3.1 with a WebSphere AS.
Am I correct in my understanding of a workflow engine? Everything seems tied to BPMN or BPEL...am I just missing something here as to why most solutions seem to implement BPMN or BPEL? Some advice would be wonderful!
Workflow engines typically take an active role in an enterprise architecture. They execute a declarative process model, which is basically a directed graph consisting of nodes, which represent activities or tasks, and edges, which represent the control flow between those activities. Edges can be annotated with conditions to allow for conditional branching and merging. There are several modelling languages around, like YAWL, XPDL, jPDL, BPEL and BPMN 2.0, which sit on top of these abstract concepts and add some syntactic, visual and functional sugar, but only the latter are official industry standards. This is important to avoid vendor lock-in and to make models interchangeable (at least to a certain extent) and supportable by experts and different tools.

At runtime, process instances are created based on a process model and are executed according to the control flow defined by the model. So the engine actively navigates from one activity to the next and thus "orchestrates" your business logic. The main difference between BPMN 2.0 and BPEL is that BPEL is tightly coupled to web services, i.e. the business functions invoked by activities are supposed to be exposed as web services. So if you want to orchestrate WS-* services, BPEL is still the best choice, since BPMN 2.0 lacks well-defined and standardized bindings to concrete service implementations. In any case, I'd strongly recommend using one of the standardized languages, since they are broadly accepted in industry and well supported by various vendors and open source communities.
I tried to explain this in more detail because I was not entirely sure what you mean by "facilitating, tracking, and managing the state of these objects". It sounds a little as if you are more interested in passively monitoring an object's state changes than in actively controlling state changes with a workflow engine. If that assumption is right, then perhaps an abstract state machine would fit your needs better.
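
For illustration, a minimal sketch of such a state machine for an approval flow, independent of any engine or notation; the states and allowed transitions are hypothetical:

import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

// Hypothetical state machine tracking the approval state of a business object,
// with no workflow engine or BPMN/BPEL model behind it.
public class ApprovalStateMachine {

    public enum State { DRAFT, SUBMITTED, APPROVED, REJECTED }

    private static final Map<State, Set<State>> ALLOWED = new EnumMap<>(State.class);
    static {
        ALLOWED.put(State.DRAFT, EnumSet.of(State.SUBMITTED));
        ALLOWED.put(State.SUBMITTED, EnumSet.of(State.APPROVED, State.REJECTED));
        ALLOWED.put(State.APPROVED, EnumSet.noneOf(State.class));
        ALLOWED.put(State.REJECTED, EnumSet.of(State.DRAFT)); // allow rework
    }

    private State current = State.DRAFT;

    public State current() {
        return current;
    }

    public void transitionTo(State next) {
        if (!ALLOWED.get(current).contains(next)) {
            throw new IllegalStateException(current + " -> " + next + " is not allowed");
        }
        current = next; // here you would also fire notifications, audit entries, etc.
    }
}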
Take a look at jBPM5; it provides a very flexible core that allows you to build your own domain-specific language on top of it. Right now the language provided is BPMN2, but you can easily add your own.
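
For orientation only, a minimal sketch of starting a BPMN2 process with the jBPM5-era API; the resource name and process id are hypothetical, and exact packages may differ between versions:

import org.drools.KnowledgeBase;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;

public class StartProcessExample {
    public static void main(String[] args) {
        // Load a BPMN2 process definition from the classpath (hypothetical file name).
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newClassPathResource("approval-process.bpmn2"),
                     ResourceType.BPMN2);
        KnowledgeBase kbase = kbuilder.newKnowledgeBase();

        // Create a session and start an instance of the process (hypothetical process id).
        StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
        ksession.startProcess("com.example.approval");
        ksession.dispose();
    }
}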
Cheers
We are building a product that has a migration path for BPMN 2.0 but does not, internally, use BPMN. We believe checklists are much easier to use in real-time workflows than flowcharts. It still, however, has rules/triggers/conditionals and more, so it's a tool that effectively models processes as "checklists on steroids":
Check it out at http://tallyfy.com

What are the major industry standard Automated Testing Frameworks?

I'm working on establishing automated testing practices and test suites in an organization. A peer is telling me that we "should use a framework". To me, a framework is any set of code and/or other tool that helps you create something.
My peer seems to be suggesting that there are industry standard automated testing frameworks.
I've seen the following patterns in designing Test systems before:
Data Driven
Keyword Driven
Model Driven
Query Driven
My counterpart includes "Modular" as one of these. Because of my background in Software Engineering, I hear the word "Modular" and think of modular programming (as opposed to object-oriented, aspect-oriented or procedural programming)... a way of organizing code, not a methodology or framework type in and of itself.
I've seen the wikipedia definition for "Modular Automation" and it looks the same as the programming paradigm.
What am I missing? What can I read to get on the same page as my counterpart? Is it me or him who doesn't understand something? I have over a decade of software engineering experience; my counterpart has been in QA for nearly that long. He's not able to cite references. I've searched Google for 6 hours now trying to learn about this "Modular Framework" and can't find a technical example, nothing beyond the standard programming paradigm (i.e. organize code into modules).
It turns out the major industry-standard designs for automated testing are:
Data Driven
Keyword Driven
Model Driven
Query Driven
Additionally, "hybrid" approaches are used. These are approaches in which more than one of the above designs are used.
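
As a quick illustration of the first of those designs: a data-driven test keeps the test logic in one script and feeds it from an external data set. A minimal JUnit 5 sketch, where the login(...) helper and the credentials are hypothetical; in a real suite the rows would usually come from a CSV file, spreadsheet or database:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

// Data-driven design: one test script, many data rows.
class LoginDataDrivenTest {

    @ParameterizedTest
    @CsvSource({
            "alice,correct-password,true",
            "alice,wrong-password,false",
            "bob,correct-password,false"
    })
    void loginBehavesAsExpected(String user, String password, boolean expected) {
        assertEquals(expected, login(user, password));
    }

    // Hypothetical system-under-test hook; in practice this would drive the
    // application UI or API.
    private boolean login(String user, String password) {
        return "alice".equals(user) && "correct-password".equals(password);
    }
}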
In a number of places on the web (including Wikipedia), "Modularity Driven" test case design is mistakenly referred to as if it were one of the automated test case design strategies listed above. The definition of this term appears to have more to do with the organizational aspects of coding than with the way in which one drives an automated test. "Modularity Driven" automated testing is a misnomer; in other words, there is no such thing. The term "modular" describes the programming paradigm being used.
The modular aspect of a test is in its organization, storing code in modules as opposed to other programming paradigms like OOP, or Procedural, etc.
I have heard Modular Automation also referred to as Component-Based Test Case Design. HP is a big player in this space; they came up with a product called Business Process Testing.
It consists of:
•Reusable business components
•Business components converted into business process tests
Business components are reusable units that perform a specific task in a business process (for example, Add to Cart).
A business process test is a scenario comprising business components (for example, Place an Order).
In HP's Quality Center the Business Components module enables you to create and manage reusable business components.
Then the Test Plan module enables you to drag and drop the components into business process tests, and debug the components.