Consuming REST server methods from BizTalk Server using the WCF-WebHttp adapter

I'm using VS 2019 and BTS 2020 Developer edition. I need to implement a scenario in which BizTalk sits between the client and a REST server (implemented in ASP.NET Core), and the client sends requests to BizTalk exactly as it would to the REST server. The aim is to practice the BizTalk WCF-WebHttp adapters (both receive and send). My idea is to handle all the API requests and methods in a single receive location, send port, and orchestration. How can I achieve this? The reason I'm using an orchestration is to map and do other processing on the messages later.
Is this idea wrong? Should I instead create individual send ports/receive locations for every API method?
Is there any relation between the operation name of the logical port in the orchestration and the operation name in the WCF-WebHttp adapter's URL mapping (<Operation Name="SomeName" ... />)? (I want to route everything to one single orchestration and handle all methods there.)
How should I design the desired orchestration? I have tried a 'Decide' shape (with rules like msg_input(BTS.Operation) == "SomeName") to separate the different requests identified by the URL mapping in the receive location, and this step worked, but is it the correct approach? Beyond that, I don't have any idea how to arrange the shapes so the orchestration starts correctly, and I don't know how to send requests from the rule branches to the send port within the orchestration.
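For reference, the URL mapping mentioned above has the following shape (the operation names and URL templates here are just examples); the Name of whichever <Operation> matches the incoming request is what the adapter promotes into the BTS.Operation context property that the Decide rules test:

    <BtsHttpUrlMapping>
      <Operation Name="GetCustomer"    Method="GET"    Url="/customers/{id}" />
      <Operation Name="AddCustomer"    Method="POST"   Url="/customers" />
      <Operation Name="DeleteCustomer" Method="DELETE" Url="/customers/{id}" />
    </BtsHttpUrlMapping>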
I would also appreciate hearing any other suggestions for approaching this problem from a different perspective.

Related

Can sockets replace HTTP requests? (sockets vs. HTTP)

Creating a user, adding a record to a collection in the DB, updating some data, etc.
All of these we regularly do with HTTP requests against a REST API.
Think about making an event bus the server instead of a REST API.
In that approach, creating a user would be an event named "CreateUser" instead of the REST endpoint POST /users.
In response to any action performed on the event bus, it would re-emit a follow-up event telling anybody who needs to know that the action was done.
If, for example, someone is viewing the vehicles collection and another user edits one of the columns or adds a new vehicle, the change is reflected immediately to whoever is viewing it online.
My question is whether approaches like the one I mentioned above exist, whether there are formal names for them, whether it is good practice, and whether you know of anyone who uses it regularly, a framework, or the like. Can a socket.io server handle high workloads and behave like an HTTP server?
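A minimal socket.io sketch of the event-bus idea described above (the event names and payload shapes are made up):

    // server.ts
    import { Server } from "socket.io";

    const io = new Server(3000);

    io.on("connection", (socket) => {
      // "CreateUser" replaces the REST endpoint POST /users
      socket.on("CreateUser", (data) => {
        const user = { id: Date.now(), name: data.name }; // pretend we persisted it
        // Re-emit so every connected viewer sees the change immediately
        io.emit("UserCreated", user);
      });
    });

    // client.ts
    import { io } from "socket.io-client";

    const socket = io("http://localhost:3000");
    socket.emit("CreateUser", { name: "alice" });
    socket.on("UserCreated", (user) => console.log("user was created:", user));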
You can use websockets for this; they provide a bidirectional channel between client and server to send messages across. You will have to catch and parse the messages on each end yourself, as there is no additional protocol on top of them.
They don't hold state, so there is no knowledge of who is looking at what, or who got what. You could send the same update message to all connected clients and leave it to the client to use it or not.
You would have to reprogram your client code and the API endpoints, because it's a different way of doing things; in exchange, it can also do server push.
I have no idea about frameworks though, as I always use websockets without one. Websockets are fast, but server behaviour at high workloads depends on the implementation, and I only have experience with the websocket server I wrote myself. I suppose the performance of socket.io can easily be googled.
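A minimal sketch of the broadcast pattern described above, assuming Node/TypeScript and the ws package (the message shape is something you define and parse yourself):

    import { WebSocketServer, WebSocket } from "ws";

    const wss = new WebSocketServer({ port: 8080 });

    wss.on("connection", (ws) => {
      ws.on("message", (raw) => {
        // No protocol on top of websockets: we define our own message format.
        const msg = JSON.parse(raw.toString());
        // Send the same update to all connected clients and leave it to each
        // client to use it or not.
        const update = JSON.stringify({ type: "update", payload: msg });
        for (const client of wss.clients) {
          if (client.readyState === WebSocket.OPEN) client.send(update);
        }
      });
    });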

Wrap an event-based system with a REST API

I'm designing a system that uses a microservices architecture with event-based communication (using Google Cloud Pub/Sub).
Each of the services is listening for and publishing messages, so between the services everything works great.
On top of that, I want to provide a REST API that users can use without breaking the event-based approach. However, if I have an endpoint that triggers event X, how will I send the response to the user? Does it make sense to create a subscriber for a "ProcessXComplete" event and then return 200 OK?
For example:
I have the following microservices:
Service A
Service B
Frontend Service - REST Endpoints
I want to send the request "POST /posts" - this request is sent to the frontend service.
The frontend service should trigger "NewPostEvent."
Both Service A and Service B will listen to this event and do something.
So far, so good, but here is where things are starting to get messy for me.
Now I want to return to the user that made the request a valid response indicating that the operation completed.
How can I know that all services have finished their tasks, and how do I create the handler that returns this response?
Does it even make sense to go this way, or is there a better design that implements both event-based communication between services and a REST API?
What you're describing is absolutely one of the challenges of event-based programming and how eventual-consistency (and lack of atomicity) coordinates with essentially synchronous UI/UX.
It generally does make sense to have an EventXComplete event. Our microservices publish events on completion of anything that could potentially fail, so there are lots of ServiceA.EventXSuccess events flowing through the queues. I'm not familiar with Google Cloud Pub/Sub specifically, but in messaging systems generally, publishing messages with few (or no) subscribers costs little extra compute power. So we tend to over-articulate service status by default; it's easy to come back later and tone down messaging as needed. In fact, some of our newer services have messaging verbosity configurable via an Admin API.
The Frontend Service (which here is probably considered a Gateway Service or Facade Layer) has taken on the responsibility of being a responsive backing for your UI, so it needs to, in fact, BE responsive. In this example, I'd expect it to persist the User's POST request, return a 200 response and then update its local copy of the request based on events it's subscribed to from ServiceA and ServiceB. It also needs to provide a mechanism (events, email, webhook, gRPC, etc.) to communicate from the Frontend Service back to any UI if failure happens (maybe even if success happens). Which communication you use depends on how important and time-sensitive the notification is. A good example of this is getting an email from Amazon saying billing has failed on an Order you placed. They let you know via email within a few minutes, but they don't make you wait for the ExecuteOrderBilling message to get processed in the UI.
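A minimal sketch of that gateway behaviour, assuming Node/TypeScript with Express and the Google Cloud Pub/Sub client (the topic name, subscription name, and payload shapes are all made up):

    import express from "express";
    import { PubSub } from "@google-cloud/pubsub";
    import { randomUUID } from "crypto";

    const pubsub = new PubSub();
    const app = express();
    app.use(express.json());

    // In-memory status store; a real gateway would persist this.
    const requests = new Map<string, string>();

    // Accept the POST, publish NewPostEvent, respond immediately.
    app.post("/posts", async (req, res) => {
      const requestId = randomUUID();
      requests.set(requestId, "pending");
      await pubsub.topic("new-post").publishMessage({
        json: { requestId, post: req.body },
      });
      res.status(202).json({ requestId, status: "pending" });
    });

    // Update the local copy when downstream services report completion.
    pubsub.subscription("new-post-complete").on("message", (msg) => {
      const { requestId } = JSON.parse(msg.data.toString());
      requests.set(requestId, "complete");
      msg.ack();
    });

    app.listen(3000);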
Connecting Microservices to the UI has been one of the most challenging aspects of our particular journey; avoiding tight coupling of models/data structures, UI workflows that are independent of microservice process flows, and perhaps the toughest one for us: authorization. These are the hidden dark-sides of this distributed architecture pattern, but they too can be overcome. Some experimentation with your particular system is likely required.
It really depends on your business case. If the REST service drops a message into a message queue, then after dropping the message we simply return a reference ID that the client can poll to check progress.
E.g. a flight search, where your system has to call hundreds of backend services to show you flight deals. The search API drops the message into the queue, saves it in the database with a reference ID, and returns the same ID to the client. Once the workers are done with the message, they update the reference in the DB with the results; meanwhile your client polls (or, preferably, uses websockets) to update the UI with the results.
The idea is that you can't block the request; keeping everything async makes the system scalable.
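A compact sketch of that reference-ID flow in TypeScript (the endpoint paths are invented, and db and queue are hypothetical helpers standing in for your real database and message-queue clients):

    import express from "express";
    import { randomUUID } from "crypto";
    import { db, queue } from "./infra"; // hypothetical helpers

    const app = express();
    app.use(express.json());

    // Accept the search, enqueue the work, hand back a reference ID.
    app.post("/flights/search", async (req, res) => {
      const refId = randomUUID();
      await db.saveSearch(refId, { status: "in-progress" });
      await queue.publish("flight-search", { refId, query: req.body });
      res.status(202).json({ refId });
    });

    // The client polls this until the workers have written the results.
    app.get("/flights/search/:refId", async (req, res) => {
      res.json(await db.loadSearch(req.params.refId));
    });

    // Client side: poll every couple of seconds until the results arrive.
    async function pollResults(refId: string) {
      for (;;) {
        const r = await fetch(`/flights/search/${refId}`).then((x) => x.json());
        if (r.status !== "in-progress") return r;
        await new Promise((done) => setTimeout(done, 2000));
      }
    }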

Delphi REST client/server (WebBroker) + database + simultaneous client requests

I'm new to REST development and I'm creating a simple REST API so that clients can request database values. I have used the "Delphi Web Server Application" project wizard (the one that uses TIdHTTPWebBrokerBridge and the WebModule where you create the different 'Actions'). It works fine and I can make requests from client(s).
The server WebModule contains an FDConnection and some FDQuery components to run the database (MySQL) queries, and each Action executes a specific query with specific params obtained through request params.
The client app uses TRESTClient, TRESTRequest, and TRESTResponse components to send/receive the data.
For example:
the client requests some values for a specific user from the server, sending "user = user1" and "passwd = ***" as request params.
the server executes the query "select * from xxx where user = user1 and passwd = ..." and sends the response to the client.
Every query is "user-specific".
OK, it works, but now I have some misgivings due to my ignorance of how REST/WebBroker works.
What if thousands of requests are made at a time? Could the server respond with incorrect data because the FDQuery cursor is on another record?
Or does WebBroker create the query for each request without any problem?
Is it better to create the FDQuery at runtime for each request and destroy it after the request completes?
I made a simple test yesterday, running three instances of the client application and sending 300 requests to the server (100 from each client) simultaneously, and it worked, receiving correct data, but I don't know whether that is enough of a guarantee.
Is this (Delphi Web Server Application) the correct way to create the server? What are the differences with DataSnap?
Any advice?
In the DataSnap architecture (there are several flavours, but they all have a common architecture), the "server" makes one copy of the ServerMethodsUnit for each client connection. This is with the ServerClass.LifeCycle set to Session. Therefore, each client will be able to execute a server method and have the result returned to it, independently of what any other client may be requesting.
In your case, each ServerMethodsUnit will have its own FDConnection, FDQuery and so on; whether you place design-time components there or instantiate them at runtime, the consequences are the same.
The limit here will be the hardware that the DataSnap/WebBroker application is running on (network bandwidth, RAM, hard drive speed, etc.).
DataSnap (REST, DBX, standalone, ISAPI, Apache, Linux) is, in my opinion, a sound basis for client/server development.

Send data from Fuse, or a topic, to JBoss BPM Suite

I would like to send all data received by Fuse on a specific topic to a business process in BPM Studio. Is there any way?
Example:
I send a value to 'testTopic' in Fuse. Then Fuse sends this value to a business process (or the business process retrieves it), and the business process does things based on the value received, like sending another value to another topic.
Is something of this kind possible?
Yes, it most definitely is possible, although you would need to route from 'testTopic' to one of the JMS queues that jBPM can listen on and transform the message to reflect a valid jBPM command. The general principle is described in the documentation at http://docs.jboss.org/jbpm/v6.0/userguide/jBPMRemoteAPI.html#d0e12149. The real power becomes clear when you look at all the jBPM commands you can send in the packages
org.drools.core.command.runtime.process (Maven: org.drools:drools-core)
and
org.jbpm.services.task.commands (Maven: org.jbpm:jbpm-human-task-core).
When talking from the outside world, it would typically be necessary to identify a correlationKey in the process, which is basically the "business key" that uniquely identifies a process instance, e.g. an 'ApplicationNumber' for an application process. It can then be used to identify which process instance you want to signal/abort/etc.
Since you are working in Fuse, you should probably also consider routing that message to the jBPM REST API described at http://docs.jboss.org/jbpm/v6.0/userguide/jBPMRemoteAPI.html#d0e10088. This may simplify your code a bit because it is a more synchronous API. The drawback, however, is that a REST-over-HTTP invocation typically does not respect the local transaction.
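A sketch of such a route in the Camel XML DSL used by Fuse (the host, deployment ID, and process ID below are placeholders; check the Remote API documentation linked above for the exact paths and payloads):

    <route id="topicToJbpm">
      <from uri="activemq:topic:testTopic"/>
      <!-- Transform the payload here into the shape the jBPM endpoint expects -->
      <setHeader headerName="CamelHttpMethod">
        <constant>POST</constant>
      </setHeader>
      <to uri="http4://localhost:8080/jbpm-console/rest/runtime/com.example:myProject:1.0/process/myProject.myProcess/start"/>
    </route>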

Delivering different kinds of protocols in a SOA architecture

I have a project currently in production that delivers some web services using the REST approach. Now I need to deliver some of these web services in SOAP too (meaning I will need to deliver some of the same web services in SOAP, and others slightly different ones), so I ask you:
Should I incorporate the SOAP stack (libraries, configuration files, ...) into the existing project, building another layer that delivers the data in envelopes (some people call it an "anti-corruption layer")?
Should I build another project that shares just the canonical model (turning it into a shared library)?
... Or how do you proceed in similar situations?
Please consider that our ideal target is a SOA architecture.
Thanks.
In our projects we have a facade layer which exposes the services and maps to business entities, and a business layer where the business logic is run.
So to add a SOAP end point for an existing service, we just create a new facade and call in to the same business logic.
In many cases it is even simpler: since we use WCF, we can have an HTTP SOAP endpoint for external clients and a binary TCP/IP endpoint for internal clients. The new endpoint can be added by changing the configuration, without any need to change the code.
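A sketch of what that looks like in a WCF configuration file (the service and contract names are made up); adding the second endpoint is purely a configuration change:

    <system.serviceModel>
      <services>
        <service name="MyCompany.OrderService">
          <!-- SOAP over HTTP for external clients -->
          <endpoint address="http://localhost:8000/orders"
                    binding="basicHttpBinding"
                    contract="MyCompany.IOrderService" />
          <!-- Binary TCP/IP for internal clients: same code, different endpoint -->
          <endpoint address="net.tcp://localhost:8001/orders"
                    binding="netTcpBinding"
                    contract="MyCompany.IOrderService" />
        </service>
      </services>
    </system.serviceModel>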
The way I think about an SOA system, you have messages and pub/sub. The message is the interface. Getting those messages into and out of the system is an implementation detail. I create an endpoint that accepts a raw message document (more REST-like, but not really REST) as well as an endpoint that accepts the message as a single parameter to a SOAP call. The code that processes the incoming message is a separate concern from the HTTP endpoint enablement.
You can use an ESB for this, where the ESB receives the SOAP messages and sends REST requests to the back end. WSO2 ESB provides this functionality; please look at this sample [1].
[1] http://wso2.org/project/esb/java/4.0.0/docs/samples/proxy_samples.html#Sample152