Sending data from SAP S/4HANA to SAP Data Hub on an event trigger - sap-cloud-platform

I want to transfer my data from SAP S/4HANA to SAP Data Hub whenever, let's say, a purchase order gets created. This seems like a message queue problem, but I am not sure what to use.
So whenever an order is created, S/4HANA sends a notification, which can be received by an SAP Enterprise Messaging queue, but from that queue I am not sure how to send the data to Data Hub. Can Data Hub provide an SAP EM consumer that will get the message from the EM queue? I can see Data Hub has a Kafka consumer, but does it support SAP Enterprise Messaging?

Related

Sync data from REST and WebSockets

We are building a chat application similar to Messenger. This is the required behavior:
User logs in
User should see the last N messages, and should be able to load older messages
New messages should be appended as well
My solution:
I would like to use WebSockets for this purpose in combination with REST. My idea was that the client application decides by message ID which messages it needs. So REST will be used for the initial fetch of messages and for fetching older messages.
New messages will be received via WebSockets.
A possible issue I should handle:
The application subscribes to the WebSocket channel for new messages and sends a request for old messages without an initial message ID.
There is a chance that a new message arrives after the GET request is issued and is stored in the DB.
The client application has already subscribed to the WebSocket channel, so the message is received via WebSockets.
The GET request does not know about this message and fetches the last N messages, which include the new message, so the client application ends up with a duplicate record and has to filter it out.
Can you advise whether there is an elegant way to handle this case? Thank you.
I would approach your task with the following in mind:
The client application should know only the topic to listen to, not the ID of the message from which to start listening.
It is up to the server to decide what to return (even time should always be tracked server-side).
The WebSocket is used as a transport for STOMP (simply to not reinvent the wheel). The WebSocket connection could be opened once the client application is loaded, not when it enters the "listen for messages" state. Topic subscription, however, should be performed only when necessary.
You can always send a GET request and initiate a STOMP subscription simultaneously (almost simultaneously, with a delay of a couple of nanoseconds), and they should always be processed in different promises. But I would align them in the following way: first, the STOMP subscription is initiated, and a specific on-subscription message carrying the timestamp of the start of the subscription is delivered; second, a REST request fetches the previous 10-100 messages for the topic prior to that timestamp (received via STOMP).
Getting the last 10 messages (those prior to the subscription moment) could be delivered via either the REST or the STOMP approach: you can always react to a subscription event on your server side and deliver client-specific messages.
Regarding the problem of multiple identical messages from different "data channels", it is easily resolvable: your client (hopefully not jQuery, but rather Angular, React, Vue, or anything else) will store all the data in a single collection in a controller, and filtering and checking by message ID so that only unique entries are stored is easy.
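A minimal sketch of this subscribe-first-then-fetch flow in TypeScript, assuming a @stomp/stompjs client; the topic name, the REST endpoint, and the server's "subscribed" confirmation frame carrying a timestamp are all hypothetical:

```typescript
import { Client, IMessage } from '@stomp/stompjs';

interface ChatMessage { id: string; sentAt: number; text: string; }

// A single collection keyed by message id: duplicates arriving via both
// REST and STOMP are silently dropped here.
const messages = new Map<string, ChatMessage>();
function store(msg: ChatMessage): void {
  if (!messages.has(msg.id)) messages.set(msg.id, msg);
}

const client = new Client({ brokerURL: 'wss://example.com/ws' }); // hypothetical broker URL

client.onConnect = () => {
  client.subscribe('/topic/chat.general', (frame: IMessage) => {
    const payload = JSON.parse(frame.body);
    if (payload.type === 'subscribed') {
      // Hypothetical confirmation frame: the server acknowledges the
      // subscription and includes its own timestamp. Only now do we ask
      // REST for the history that precedes that server-side moment.
      fetch(`/api/topics/chat.general/messages?before=${payload.timestamp}&limit=100`)
        .then((res) => res.json())
        .then((history: ChatMessage[]) => history.forEach(store));
    } else {
      store(payload.message as ChatMessage); // a live message delivered over STOMP
    }
  });
};

client.activate();
```

Because both the history fetch and live delivery funnel through the same id-keyed map, the duplicate described in the question is dropped automatically.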
But if your system will produce hundreds of thousands of messages per second, I guess HTTP-based protocols are not the right choice.

How to choose which events are sent to the user on an occasionally connected system using CQRS with event sourcing?

I am building a web app where users can edit and share notes. Users are connected to notes with roles (owner, read, read-write). This is an occasionally connected system, so I chose to do the syncing using CQRS and event sourcing. Following Greg Young's presentation [36:20 - 38:40], the flow would be as follows:
Client makes changes while offline.
Client connects to the Internet.
The "store and forward" sends the events that occurred while the client was offline.
Client compares the local events with the received events and does a merge, deciding which commands to keep, then updates the local view model.
Client sends the stored commands (created offline) to the server.
Server executes the commands and generates events that are stored in event store.
"store and forward" holds the events each user is interested in, until the users come back online.
The question is: how does the "store and forward" decide which events should be sent to each user?
Obviously, sending all events would compromise the security of other users.
Since your client knows which aggregates it displays, it can just tell the backend: "hey, are there events for aggregateIds [...] since [timestamp]?".
This is how the reSolve framework keeps the UI reactive: the client subscribes to events for a particular aggregateId and receives them in real time via WebSockets.
So one answer to your question could be: let the user ask for the events (aggregateIds) he is interested in.
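A minimal sketch of that idea in TypeScript; the endpoint, the event shape, and the authorization helper are all hypothetical, and this is not the actual reSolve API:

```typescript
interface StoredEvent {
  aggregateId: string;
  type: string;
  timestamp: number;
  payload: unknown;
}

// Client side: ask only for the aggregates currently displayed, starting
// from the last timestamp the client has already seen.
async function fetchEventsSince(aggregateIds: string[], since: number): Promise<StoredEvent[]> {
  const res = await fetch('/api/events', {            // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ aggregateIds, since }),
  });
  return res.json();
}

// Server side: never trust the requested ids blindly. Intersect them with
// the note aggregates this user may read (owner / read / read-write), so
// events belonging to other users are never leaked.
function authorizeAggregates(requested: string[], readableByUser: Set<string>): string[] {
  return requested.filter((id) => readableByUser.has(id));
}
```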

Do we have webhooks/push notifications available for SuccessFactors?

Is a webhook available for SAP SuccessFactors?
Is there any webhook where I can get a notification when an operation happens on an entity, e.g. when an object is inserted into the entity?
Yes, there is built-in functionality called Intelligent Services (it can be found in SuccessFactors under "Intelligent Service Center (ISC)").
There you can subscribe to different events (only the ones provided in the standard; no custom hooks are possible). The subscription results in an Integration Center flow, where data can be passed via different protocols to a web service of your choice.
You can also configure a "business rule" with an intelligent service as a starting point.

What data goes into a message between distributed applications?

I am trying to implement queues in our microservice architecture, specifically AWS SNS/SQS.
For example, I have this scenario.
After an order is created, the Orders MS raises an OrderCreated event, and this event publishes a message to the AWS OrderCreated SNS topic. The SQS queue InvoiceCreate is subscribed to the OrderCreated SNS topic and will get this message.
Everything makes sense so far. The Invoicing MS listens to the InvoiceCreate queue and retrieves all new messages; it should then create an invoice, but my question is: with what data?
a) Contact the Order MS (to request the order data relevant for creating the invoice). If unable to do so, the message is left in the queue until the Invoicing MS is able to collect the relevant data.
b) The published message should contain all the relevant data needed to create an invoice.
If choosing (a), the Invoicing MS will not be decoupled and will depend on the Order MS, but on the other hand it can collect additional data beyond what is packed with the original message.
If choosing (b), since the OrderCreated event and the OrderCreated SNS topic don't really know who will use the message data (i.e. OrderCreated could also be used to perform different actions), I am confused about how to decide precisely what data should go into this message.
Our architecture is set up more like your option (b). To use your example, the Order service would publish its OrderCreated event and attach, as a payload, most (or even all) of the order information in the payload section of the message. We format the message and payload as JSON for compatibility, but you can do whatever works for you.
In some cases we don't publish all the info, just specific fields for the added/edited entity; it depends on the service and the sensitivity of the information. So long as you only ever add fields to a message (and don't remove any), you are honoring the contract and aren't really tightly coupled to it.
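As a rough illustration (not our actual code), publishing such a message with the AWS SDK v3 for JavaScript could look like this; the topic ARN, the order shape, and the envelope fields (version, payload) are made up for this sketch:

```typescript
import { SNSClient, PublishCommand } from '@aws-sdk/client-sns';

const sns = new SNSClient({ region: 'us-east-1' });

interface Order {
  orderId: string;
  customerId: string;
  lines: { sku: string; qty: number; price: number }[];
}

async function publishOrderCreated(order: Order): Promise<void> {
  await sns.send(new PublishCommand({
    TopicArn: 'arn:aws:sns:us-east-1:123456789012:OrderCreated', // hypothetical ARN
    Message: JSON.stringify({
      event: 'OrderCreated',
      version: 1,        // only ever add fields, never remove, to keep the contract
      payload: order,    // most (or all) of the order, so consumers need no call-back
    }),
    MessageAttributes: { // lets SQS subscribers filter without parsing the body
      event: { DataType: 'String', StringValue: 'OrderCreated' },
    },
  }));
}
```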
Again, to your example, the InvoiceService could get its information from one or more of several options:
Pull it directly from the OrderCreated message if you include everything needed (see the consumer sketch after this list)
Pull what it can from the OrderCreated message, publish an InvoiceStarted event that triggers the OrderService (and/or others) to send it an OrderInvoiceComplete message with the rest of the details it needs
Keep a local copy of whatever key data it needs - populated by subscribing to other events - so that it can combine OrderCreated data with some local data to flesh out an invoice
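A minimal sketch of that first option from the consumer side, where the Invoicing service builds the invoice entirely from the message payload; the queue URL and createInvoice are hypothetical, and this assumes the default SNS-to-SQS envelope (raw message delivery disabled):

```typescript
import { SQSClient, ReceiveMessageCommand, DeleteMessageCommand } from '@aws-sdk/client-sqs';

const sqs = new SQSClient({ region: 'us-east-1' });
const queueUrl = 'https://sqs.us-east-1.amazonaws.com/123456789012/InvoiceCreate'; // hypothetical

async function pollOnce(): Promise<void> {
  const { Messages } = await sqs.send(new ReceiveMessageCommand({
    QueueUrl: queueUrl,
    MaxNumberOfMessages: 10,
    WaitTimeSeconds: 20,                              // long polling
  }));
  for (const msg of Messages ?? []) {
    const envelope = JSON.parse(msg.Body!);           // SNS wraps the payload in its own envelope
    const { payload } = JSON.parse(envelope.Message); // the OrderCreated body published above
    createInvoice(payload);                           // no call back to the Order service needed
    await sqs.send(new DeleteMessageCommand({ QueueUrl: queueUrl, ReceiptHandle: msg.ReceiptHandle! }));
  }
}

declare function createInvoice(order: unknown): void; // application-specific
```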
It's best to avoid the InvoiceService responding to a message by making a call directly back to the OrderService - this is a pretty tight coupling that can be avoided by simply messaging back if you have to.
So, there are lots of options. I personally prefer the technique of putting all data that might be useful into the messages when things are created/updated and letting consuming services decide what to use/ignore. For our scenario, that works well but we have only a few well-contained clients accessing our services so there may be more secure ways to do it that aren't relevant for us.

Storing and retrieving messages in Openfire server according to thread id

We are implementing a chat application using Openfire as the server. The clients are on various mobile platforms plus a web interface, for which we are using XMPPFramework for iOS, Smack for Android, and Strophe.js for the web.
Following http://xmpp.org/extensions/xep-0085.html#bizrules-threads, we are sending messages with different thread values to start new threads. But we noticed that when archiving through the Monitoring plugin available for Openfire, details like the thread are not stored. On the server, the auto-archive mode is on.
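For reference, this is roughly what our web client sends (a minimal sketch using the Strophe.js $msg builder; the recipient and thread value are placeholders):

```typescript
import { $msg } from 'strophe.js';

// connection is an already-authenticated Strophe.Connection.
function sendThreadedMessage(
  connection: { send(stanza: unknown): void },
  to: string, text: string, threadId: string,
): void {
  const stanza = $msg({ to, type: 'chat' })
    .c('body').t(text).up()    // the message body
    .c('thread').t(threadId);  // the <thread/> element we want the archive to keep
  connection.send(stanza);
}
```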
As mentioned here, manual archiving is not yet implemented in this plugin: http://www.igniterealtime.org/projects/openfire/plugins/monitoring/readme.html
Has anybody been successful in storing messages according to thread IDs in Openfire, and also retrieving that information when showing the chat history on the clients?