WF4 workflow versions when the service contract changes

I just successfully implemented a WF4 "versioning" system using WCF's Routing Service. I had a version1 workflow service to which I added a new Decision activity and saved it as a version2 service. So now I have two endpoints (with identical service contracts, i.e. all Receive activities are the same for both services) and a router that checks the content of a message (a "versionId" string on the object that all of my Receives accept as an argument) to decide which endpoint to hit.
My question is: while this works fine when no changes are made to the service contract, how do I handle the need to add or remove methods from my service contract and create a version3 service? My original thought was that when I add the service reference to my client, I use the latest workflow service's endpoint to get the latest service contract. Then, in the config file, I change the endpoint I connect to so that it points at the router's endpoint. But this won't work if v1 and v2 have a different contract than v3. My proxy will have v3's methods and forget all about v1 and v2.
Any ideas on how to handle this? Should I create an actual service contract interface in my workflow solution (instead of just supplying a ServiceContractName in my Receive activities)?

If the WCF contract changes, your client will need to be aware of the additional operations and when to call them. In some applications I have used the active bookmarks from the persistence store (each bookmark contains the WCF operation name) to let the client app adapt to the workflow dynamically, enabling or disabling UI controls based on the enabled bookmarks. The client will still have to be updated when new operations are added in a new version of the workflow.

When WCF was young, I heard a few voices arguing that endpoint versioning (for web services, that is) should be accomplished by using a folder structure. I never got to the point of trying it out myself, but just analyzing the consequences of such a strategy, it strikes me as a splendid solution. I have no production experience of WCF, but I am about to launch a rather comprehensive solution using version 4.0 of .NET (ASP.NET, WCF, WF...), and at this stage I would argue that using a folder structure to separate versions of endpoints is a good solution.
The essence of such a strategy is to never change or remove the contract of an endpoint (a specific version) until you are 100% sure that it is no longer used. As your services evolve, you just add new contracts and endpoints. This could lead to code duplication if one is not as structured a developer as one should be, but by introducing a service facade the duplication becomes insignificant.
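The idea is technology-agnostic. As a rough illustration only (a minimal Java/JAX-RS sketch with hypothetical names, not anything from the question), the versioned endpoints stay frozen, thin contracts that all delegate to one facade:

import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;

// Shared facade holding the one real implementation; the versioned
// endpoint classes below stay as thin, frozen contracts.
class OrderFacade {
    String placeOrder(String item, int quantity) {
        // ... single shared implementation ...
        return "ok";
    }
}

@Path("/api/v1/order")
class OrderEndpointV1 {
    private final OrderFacade facade = new OrderFacade();

    @POST
    public String place(@QueryParam("item") String item) {
        return facade.placeOrder(item, 1); // v1 never knew about quantity
    }
}

@Path("/api/v2/order")
class OrderEndpointV2 {
    private final OrderFacade facade = new OrderFacade();

    @POST
    public String place(@QueryParam("item") String item,
                        @QueryParam("quantity") int quantity) {
        return facade.placeOrder(item, quantity); // v2 adds quantity
    }
}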

I have been through the same situation. You can maintain versions with the help of a custom implementation: save each Workflow Service URL in a database and invoke whichever one you need. This post explains how a client can call a WF service by its URL:
http://rajeevkumarsrivastava.blogspot.in/2014/08/consume-workflow-service-45-from-client.html
Hope this helps

Related

How to record api calls to mock in SwaggerHub?

I want to mock API calls from my application, and host the mock, so my tests can work without calls to the real API. There is a service called restbird which does exactly that, but it is far from ideal for me. If you want to collaborate you have to host the service yourself. It also has some bugs, like not displaying the history of calls, or sending server errors for no reason. I want a service more robust than this one.
The only service that I think might be a good fit is SwaggerHub; it seems robust, it has virtual servers, and overall it is very popular. The only problem is that I cannot find a way to record API calls from my application. So how can I record API calls for SwaggerHub?
There is currently no functionality within SwaggerHub itself to record API calls made from the Swagger UI module within the tool. This is a limitation of the open-source Swagger UI tool.
What I can recommend is using the Swagger Inspector tool. Swagger Inspector can be used to make API calls from a client, save both the request and the response, and even generate an OpenAPI file for you based on the requests/responses. If you create an account and sign in, you can even save your API calls to a collection to use later.
Swagger Inspector: https://inspector.swagger.io/builder
It may also be worth considering using ReadyAPI's Virtualization module to handle this use case. With ReadyAPI Virtualization you can record transactions from a browser, build mock services from the recorded transaction or an existing API definition, and then host the mock service using VirtServer.
ReadyAPI is part of SmartBear's API lifecycle products, so there are integrations between the two tools. For instance, you can port APIs from SwaggerHub into ReadyAPI directly, and you can use mock services built in ReadyAPI to do dynamic mocking in SwaggerHub.
You can find more information about ReadyAPI Virtualization here: https://smartbear.com/product/ready-api/api-virtualization/
I realise this is a very late response to this thread, but hopefully this information comes in handy.

How to manage the logic behind API versioning?

I want to identify what might be considered a best practice for URI versioning of APIs, with regard to the logic of the back-end implementation.
Let's say we have a java application with the following API:
http://.../api/v1/user
Request:
{
"first name": "John",
"last name": "Doe"
}
After a while, we need to add 2 more mandatory fields to the user API:
http://.../api/v2/user
Request:
{
"first name": "John",
"last name": "Doe",
"age": 20,
"address": "Some address"
}
We are using separate DTOs for each version, one having two fields and the other having four.
We have only one entity in the application, but my question is how we should handle the logic, as a best practice? Is it OK to handle this in only one service?
If the two new fields "age" and "address" were not mandatory, this would not be considered a breaking change; but since they are, I am thinking there are a few options:
use only one manager/service in the business layer for all user API versions (but the complexity of the code in that single manager will grow a lot over time and it will become hard to maintain)
use only one manager for all user API versions plus a translator class that makes older API versions compatible with the new ones (sketched after this question)
a new manager/service in the business layer for each user API version
If I use only one manager for all user API versions and put the constraints/validations there, V2 will work, but V1 will throw an exception because those fields are not present.
I know that versioning is a big topic, but I could not find a specific answer on the web until now.
My intuition says that having a single manager for all user API versions will result in methods that have nothing to do with clean code. I am also thinking that any change added with a new version should be as loosely coupled as possible, because that makes it easier to deprecate older methods and remove them in time.
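To make the translator option (the second bullet) concrete, here is a minimal, hypothetical Java sketch; the class names and the null placeholders are illustrative assumptions, and version-aware validation rules would still be needed for the mandatory fields:

// Hypothetical DTOs, one per API version, as described above.
class UserV1Dto {
    String firstName;
    String lastName;
}

class UserV2Dto {
    String firstName;
    String lastName;
    Integer age;
    String address;
}

// Translator that adapts a v1 request to the latest shape so a single
// service only ever sees UserV2Dto. What "missing" means for the newer
// mandatory fields is a domain decision; nulls here are placeholders.
class UserV1Translator {
    UserV2Dto toLatest(UserV1Dto v1) {
        UserV2Dto latest = new UserV2Dto();
        latest.firstName = v1.firstName;
        latest.lastName = v1.lastName;
        latest.age = null;       // not supplied by v1 callers
        latest.address = null;   // validation must tolerate this for v1
        return latest;
    }
}

// Single manager/service working against the one entity.
class UserService {
    void createUser(UserV2Dto user) {
        // shared validation and persistence live here only
    }
}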
You are correct in your belief that versioning with APIs is a contentious issue.
You are also making a breaking change and so incrementing the version of your API is the correct decision (w.r.t. semver).
Ideally your backend code will be under version control (e.g. GitHub). In this case you can safely consider V1 to be a specific commit in your repository. This is the code that has been deployed and is serving traffic for V1. You can then continue making changes to your code as you see fit. At some point you will have added some new breaking changes and decide to mark a specific commit as V2. You can then deploy V2 alongside V1. When you decide to deprecate V1 you can simply stop serving its traffic.
You'll need some method of ensuring only V1 traffic goes to the V1 backend and V2 to the V2 backend. Generally this is done with a reverse proxy; popular choices include NGINX and Apache. Any capable reverse proxy will let you direct requests based on the path, so that a request prefixed by /api/v1 is routed to Backend1 and one prefixed by /api/v2 to Backend2.
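If your stack is Java end to end, the same path-based routing can also be expressed in code rather than NGINX/Apache config. A sketch assuming Spring Cloud Gateway, with placeholder hostnames and ports:

import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class VersionRoutingConfig {

    // Requests under /api/v1/** go to the V1 deployment, /api/v2/** to V2.
    // The backend-v1/backend-v2 hostnames are placeholders.
    @Bean
    public RouteLocator versionRoutes(RouteLocatorBuilder builder) {
        return builder.routes()
                .route("api-v1", r -> r.path("/api/v1/**").uri("http://backend-v1:8080"))
                .route("api-v2", r -> r.path("/api/v2/**").uri("http://backend-v2:8080"))
                .build();
    }
}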
Hopefully this model will help keep your code clean: the master branch in your repository only needs to deal with the most recent API. If you need to make changes to older API versions this can be done with relative ease: branch off the V1 commit, make your changes, and then define the HEAD of that modified branch as the 'new' V1.
A couple of assumptions about your backend have been made for this answer that you should be aware of. Firstly, your backend can be scaled horizontally. For example, this means that if you interact with a database then the multiple versions of your API can all safely access the database concurrently. Secondly, that you have the resources to deploy replica backends.
Hopefully that explanation makes sense; but if not any questions send them my way!
If you're able to entertain code changes to your existing API, then you can refer to this link. Also, the links mentioned at the bottom of that post direct you to the respective GitHub source code, which can be helpful in case you decide to introduce the code changes after some trial and error.
The mentioned approach (using @JsonView) basically prevents one from introducing multiple DTOs of a single entity for the same or multiple clients. Eventually, one can also refrain from introducing a new version of the API each and every time new fields are added to an existing API.
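For reference, the approach looks roughly like this (a sketch using Jackson's @JsonView with Spring MVC; the view names and endpoints are made up for illustration):

import com.fasterxml.jackson.annotation.JsonView;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

class Views {
    static class V1 {}
    static class V2 extends V1 {} // the V2 view includes all V1 fields
}

class UserDto {
    @JsonView(Views.V1.class) public String firstName;
    @JsonView(Views.V1.class) public String lastName;
    @JsonView(Views.V2.class) public Integer age;
    @JsonView(Views.V2.class) public String address;
}

@RestController
class UserController {

    @JsonView(Views.V1.class) // response carries only the two V1 fields
    @GetMapping("/api/v1/user")
    public UserDto userV1() { return sampleUser(); }

    @JsonView(Views.V2.class) // response carries all four fields
    @GetMapping("/api/v2/user")
    public UserDto userV2() { return sampleUser(); }

    private UserDto sampleUser() {
        UserDto u = new UserDto();
        u.firstName = "John"; u.lastName = "Doe";
        u.age = 20; u.address = "Some address";
        return u;
    }
}

Note that @JsonView shapes responses from a single DTO; mandatory request fields across versions still need their own version-aware validation.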

What are some methods to document contracts between microservices?

In an HTTP-driven microservices architecture, each service might have a number of public endpoints that return JSON, for example, to a client or an API gateway intermediary. These services could also accept POSTs with JSON bodies of a certain shape, or query strings of a certain shape, etc.
What are some good options for documenting or programmatically keeping track of these "contracts" between services? For example, if service A's /getThing endpoint has been refactored to return different data, is there a documentation tool or methodology that would facilitate updating the API gateway to adapt to this change?
For programmatic management of contracts, if you are using the spring-cloud stack then you should look into spring-cloud-contract. It lets you keep track of the latest version of the contracts for your REST endpoints, and if any change occurs in an API endpoint it will notify you by breaking the contract and failing the test cases built around it.
Say, for example, service A's /getThing endpoint is refactored to return different data: all services calling this endpoint will then fail at build time of your project.
However, this methodology won't facilitate updating the API gateway to adapt to the change, as there may be different logic you want to perform for every new version of your endpoints.
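For a flavour of what such a contract looks like, here is a sketch assuming Spring Cloud Contract's Java DSL; the /getThing endpoint and the response body are the hypothetical example from the question:

import java.util.function.Supplier;
import org.springframework.cloud.contract.spec.Contract;

public class ShouldReturnThing implements Supplier<Contract> {
    @Override
    public Contract get() {
        return Contract.make(c -> {
            c.description("service A must keep /getThing's agreed response shape");
            c.request(r -> {
                r.method(r.GET());
                r.url("/getThing");
            });
            c.response(r -> {
                r.status(r.OK());
                r.headers(h -> h.header("Content-Type", "application/json"));
                // If service A changes this shape, consumer builds fail.
                r.body("{\"id\": 1, \"name\": \"thing\"}");
            });
        });
    }
}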
You can also create REST Docs snippets using these endpoint contracts; check out Spring REST Docs snippets. You can also use Swagger for documenting your endpoints.
For NodeJS, check here.

ServiceStack/TypeScript: The typescript-ref ignores namespaces (thus causing duplicates)

I am learning NativeScript + Angular2 with ServiceStack in C# as the backend.
For the app, I generated TypeScript classes using the typescript-ref command, for use with the JsonServiceClient in ServiceStack:
>typescript-ref http://192.168.0.147:8080 RestApi
It all looked sweet and dandy, until I discovered that it seems to ignore that the ServiceStack services and response DTOs are in different namespaces on the .NET side.
I have different branches of services, where the handlers for each service might differ slightly between the branches. This works well in ServiceStack; the Login and the other handlers work just as expected.
So, on the app/NativeScript/Angular2 side, I used typescript-ref and generated restapi.dtos.ts. The problem is that it skips the namespace difference and just creates duplicate classes instead (as seen in VSCode).
The backend web service in ServiceStack is built in this "branched" fashion so I don't have to start different services on different ports, but can rather gather all of them on one port and keep things simple and clear.
Can my problem be remedied?
You should never rely on .NET namespaces in your public-facing service contract. Namespaces are supported in .NET clients but not in any other language, which is why each DTO needs to be uniquely named.
In general your DTOs should be unique within your entire SOA boundary, so that there's only one Test DTO which maps to a single schema definition. This ensures that when it's sent through a Service Gateway, resolved through Service Discovery, published to an MQ server, etc., it only maps to a single DTO contract.

why SOAP without WSDL?

Is there a good reason to deploy or consume a SOAP service without using a WSDL "file"?
Explanation:
I'm in a situation where a third party has created a SOAP service that does not follow the very WSDL file they have also created. I think I am forced to ignore the WSDL file in order to consume this service, so I'm researching how to do that.
What I am really wondering is why it is even possible to do this? What is the intention?
Is it designed so that we can use poor services made by poor programmers? Surely there must be a better reason. I almost wish it wasn't possible. Then I could demand they write it properly.
The WSDL is supposed to be a public document that describes the SOAP service, i.e. it describes the signatures of all the methods available in the service.
Of course there may be service providers who want to expose a service to certain consumers, but who don't want to make the signature of the service public, if only to make it a little bit harder for people they don't want using the service to find it or attempt to use it. The signature of the services might expose some private information about the schema of their data for example.
But I don't see any excuse for writing a WSDL that doesn't match the service. I would worry that if they can't get the WSDL right what is the quality of the service going to be like?
To answer the other question: yes, you can consume the service without the WSDL. If you are using Visual Studio, for example, you could have VS build a proxy for you based on the incorrect WSDL and then tweak it to match the correct service method signatures. You just need to make sure the data contracts and method contracts in your proxy match the actual service's data contracts and method contracts.
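Outside of .NET the same point holds: under the hood a SOAP call is just an XML envelope posted over HTTP, so a client can be written directly against the wire format with no WSDL at all. A minimal Java sketch using the SAAJ API (bundled up to Java 8, a separate dependency on later JDKs); the operation name, namespace and endpoint URL are hypothetical placeholders you would take from the service's actual behaviour, not its broken WSDL:

import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPConnection;
import javax.xml.soap.SOAPConnectionFactory;
import javax.xml.soap.SOAPMessage;

public class RawSoapCall {
    public static void main(String[] args) throws Exception {
        // Build the SOAP envelope by hand; no WSDL is involved anywhere.
        SOAPMessage request = MessageFactory.newInstance().createMessage();
        request.getSOAPBody()
               .addChildElement("GetThing", "ns", "http://example.com/service")
               .addChildElement("thingId")
               .addTextNode("42");
        request.saveChanges();

        // Post it to the endpoint and dump the raw response for inspection.
        SOAPConnection connection = SOAPConnectionFactory.newInstance().createConnection();
        SOAPMessage response = connection.call(request, "http://example.com/soap/endpoint");
        response.writeTo(System.out);
        connection.close();
    }
}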