Reading old messages from a queue using Informatica Intelligent Cloud Services

I have created two processes (process A and process B), and I have a TestQueue and a TestTopic.
Process A pushes messages into the TestQueue.
Process B reads messages from the TestQueue and sends them to the TestTopic. It is an event-based process: whenever a message enters the TestQueue, the process is triggered, reads the message from the TestQueue, and pushes it to the TestTopic.
Problem statement:
First, I disable process B and push a message into the TestQueue (using process A); for example, the message is TEST1. Then I enable process B and push another message into the TestQueue; for example, the message is TEST2.
But process B only reads the latest message, TEST2; it cannot pick up the old message, TEST1, which was pushed to the TestQueue while process B was disabled. I want both messages to be read and pushed to the TestTopic.
Does anyone have any idea how to proceed?
Note: the queue and topic are created in Azure Service Bus, I am using an AMQP connection to access them, and all of these steps are built in Cloud Application Integration (Informatica Intelligent Cloud Services).
Design of Process B
Process B start properties
Process B is event-based: whenever a message is loaded into the TestQueue, this process is triggered automatically.
Assignment step image
The formula in Any is:

util:parseJSON(util:toJSON(
    <AMQPMessage>
        <body>
        {
            <PO-AMQP>
                <Id>{$input.event[1]/body/body/PO-AMQP/Id/text()}</Id>
                <Name>{$input.event[1]/body/body/PO-AMQP/Name/text()}</Name>
            </PO-AMQP>
        }
        </body>
    </AMQPMessage>))
Service step assignment image
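For reference, Azure Service Bus itself retains queued messages until a receiver settles them, so TEST1 should still be sitting in the TestQueue. Below is a minimal sketch, outside of IICS, using the azure-servicebus Python SDK (the connection string is a placeholder) that drains whatever is currently in the queue:

    from azure.servicebus import ServiceBusClient

    # Placeholder connection string; in practice this comes from the
    # Service Bus namespace's shared access policy.
    CONN_STR = "Endpoint=sb://..."

    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        # A receiver attached to the queue gets all pending messages,
        # including those enqueued while no listener was running.
        with client.get_queue_receiver(queue_name="TestQueue", max_wait_time=5) as receiver:
            for msg in receiver:
                print(str(msg))                 # TEST1 and TEST2 both arrive
                receiver.complete_message(msg)  # settle, removing it from the queue

If a plain receiver like this sees both messages, the missing TEST1 is likely a question of how the IICS event trigger subscribes, not of Service Bus discarding data.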

Related

Architecture for ML jobs platform

I'm building a platform to run ML jobs.
Jobs will be started from an interface.
I'm making a service for each type of job. Sometimes a service S1 might need to first make a request to another service S2 and get its output before running its own job.
Each service is split into two Kubernetes deployments:
one that pulls messages from a topic, checks them, and persists them to a database (D1)
one that reads requests from the database, runs the actual job, updates the request's state in the database, and then answers the client (D2)
Here is the flow:
interface generates a PubSub message to a topic T1
D1 pulls the message from T1 and persists a request to the database
D2 sees the new request in the database, runs the job, updates the request's state in the database, and answers the client
To answer the client, D2 has two options:
push a message to a Pub/Sub topic T2 that the client continuously checks. An id is passed in both request and response so that only the right client can pull it from the topic.
use a callback provided by the client to make a POST request
What do you think about this architecture? Does the usage of Pub/Sub make sense? Also, does it make sense to split each service into two deployments (one that deals with the request, one that runs the actual job)?
interface generates a PubSub message to a topic T1; D1 pulls the message from T1 and persists a request to a database
If there's only one database, I'm not sure I see much advantage in using a topic (implying pub/sub). Another approach would be to use a queue: the interface creates jobs into the queue, then you can have any number of workers processing it. Depending on the situation you may not even need the database at all - if all the data needed can be in the message in the queue.
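To illustrate the queue-plus-workers shape, here is a sketch collapsed into a single process with Python's standard library (a real deployment would use a managed queue; handle_job is a hypothetical stand-in for the ML work):

    import queue
    import threading

    jobs = queue.Queue()

    def handle_job(job):
        # Hypothetical stand-in for the actual ML job.
        print(f"{threading.current_thread().name} ran job {job['id']}")

    def worker():
        while True:
            job = jobs.get()      # each job goes to exactly one worker
            try:
                handle_job(job)
            finally:
                jobs.task_done()  # ack: mark the job as processed

    # Any number of workers can compete over the same queue.
    for n in range(3):
        threading.Thread(target=worker, name=f"w{n}", daemon=True).start()

    for i in range(10):           # the "interface" enqueues jobs
        jobs.put({"id": i})

    jobs.join()                   # block until every job is processed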
use a callback provided by the client to make a POST request
That's better if you can do it, on the assumption that there's only one consumer for the event; pub/sub is more for broadcasting out to multiple consumers. Polling works but is really inefficient and has limits on how much it can scale.
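The callback option is small on D2's side; a sketch assuming the client registered a callback_url with its original request (using the requests library):

    import requests

    def notify_client(callback_url, job_id, result):
        # D2 answers the client directly once the job finishes,
        # so no client-side polling loop is needed.
        resp = requests.post(
            callback_url,
            json={"job_id": job_id, "state": "done", "result": result},
            timeout=10,
        )
        resp.raise_for_status()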
Also does it make sense to split each service into 2 deployments (1 that deals with the request, 1 that runs the actual job)?
Having separate deployables makes sense if they are built by different teams, have a different release cadence, or need to be scaled out independently; otherwise it may not be necessary.

MSMQ transactional or recoverable

I have a question about MSMQ. If I use a non-transactional queue and send a message to it with the recoverable parameter, the message is stored on disk and is safe in case of a problem. But if I want to pull a message from a non-transactional queue, is there some mechanism that keeps the message safely in the queue in case of a problem (server error, database down, ...)?
For various reasons I don't want to use a transactional queue. Thanks a lot for any response.
You could implement a peek-then-receive process to simulate a transaction, as sketched after these steps:
1. Peek the message to get its content.
2. Use the content as you wish.
3. If step 2 completes, Receive the message to effectively delete it.
4. If step 2 fails, execute cleanup code and go to step 1.
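A sketch of that loop using the MSMQ COM API from Python via pywin32; the queue path is a placeholder and process_body/cleanup are hypothetical stand-ins for steps 2 and 4 (note this assumes a single reader, since another reader could otherwise receive a different message than the one peeked):

    import win32com.client

    MQ_RECEIVE_ACCESS = 1
    MQ_DENY_NONE = 0

    def process_body(body):
        ...  # hypothetical: whatever "use the content" means for you

    def cleanup():
        ...  # hypothetical cleanup before retrying

    qinfo = win32com.client.Dispatch("MSMQ.MSMQQueueInfo")
    qinfo.PathName = r".\private$\myqueue"  # placeholder queue path
    q = qinfo.Open(MQ_RECEIVE_ACCESS, MQ_DENY_NONE)

    while True:
        msg = q.Peek()               # step 1: read without removing
        try:
            process_body(msg.Body)   # step 2: use the content
        except Exception:
            cleanup()                # step 4: clean up, then retry from peek
            continue
        q.Receive()                  # step 3: success, now actually remove it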

Master/Slave pattern on Google Cloud using Pub/Sub

We want to build a master/slave pattern on Google Cloud.
We planned to use Pub/Sub for that (similar to the JMS pattern), letting each worker grab a task from the queue and ack when done.
But it seems a subscriber can't get messages that were sent before it started.
And we're not sure how to make sure each message will be processed by a single 'slave'.
Is there a way to do it? Or another mechanism on Google Cloud for that?
As far as I understand the master/slave pattern, the slaves do the tasks in parallel and the master harvests the results. I'd create a topic for queuing the tasks, and a single subscription attached to this topic, so that all the slaves use that subscription to fetch tasks.
I'd also create another topic/subscription pair for publishing results from the slaves, which the master harvests. Alternatively, the results can be stored in a shared datastore like Cloud Datastore.
You can do this by creating a single subscription which is then used by all the slaves. The Pub/Sub service delivers a new message only once to a given subscription, so you can be sure that a given message will be processed by only one slave.
You should also adjust the acknowledgement deadline appropriately so that delivery retries don't happen; if a retry happens, it will result in multiple slaves getting the same message.
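A minimal sketch of the shared-subscription setup with the google-cloud-pubsub Python client (project and subscription names are placeholders; handle_task is hypothetical):

    from google.cloud import pubsub_v1

    PROJECT = "my-project"      # placeholder
    SUBSCRIPTION = "tasks-sub"  # the single subscription shared by all slaves

    def handle_task(data):
        print("processing", data)  # stand-in for the real slave work

    def callback(message):
        handle_task(message.data)
        message.ack()  # ack within the deadline to avoid redelivery

    subscriber = pubsub_v1.SubscriberClient()
    sub_path = subscriber.subscription_path(PROJECT, SUBSCRIPTION)

    # Every slave runs this same code; Pub/Sub distributes messages
    # across the subscribers attached to the one subscription.
    future = subscriber.subscribe(sub_path, callback=callback)
    future.result()  # block and keep pulling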

NServiceBus 4.03: when the queue doesn't exist, the message is sent to the Transactional Dead Letter Queue

I have a Distributor/Worker model.
Machine A - Distributor
Machine B - Worker
When worker B tries to send a message to the Distributor using a wrong queue name, the message is put into the Transactional Dead Letter Queue.
I was expecting the message to be delivered to the error queue.
This is the correct behavior.
NServiceBus uses the error queue when the processing of an incoming message fails.
This is not the same as trying to send a message to a queue which does not exist.
There may be an exception to this if the message send is performed from inside a handler, though I have not tested that scenario.

MSMQ multiple readers

This is my proposed architecture: process A creates items and adds them to queue A on the local machine, and I plan to have multiple instances of a Windows service (running on different machines) reading from queue A. Each Windows service would read a set of messages and then process that batch.
What I want to make sure of is that a particular message will not get processed multiple times (by the different Windows services). Does MSMQ guarantee single delivery by default?
Should I make the queue transactional, or would a regular queue suffice?
If you need to make sure that a message is delivered only once, you would want to use a transactional queue. Note, however, that when a service reads a message from the queue it is removed from the queue and can only be received once either way; the transaction protects you when processing fails after the receive, as sketched below.
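A sketch of a transactional receive using the MSMQ COM API via pywin32 (the queue path is a placeholder, process is hypothetical, and I'm assuming the transaction object can be passed positionally to Receive); if the transaction aborts, the message goes back on the queue for another service instance:

    import win32com.client

    MQ_RECEIVE_ACCESS = 1
    MQ_DENY_NONE = 0

    def process(body):
        ...  # hypothetical batch/message processing

    qinfo = win32com.client.Dispatch("MSMQ.MSMQQueueInfo")
    qinfo.PathName = r".\private$\queueA"  # placeholder; must be a transactional queue
    q = qinfo.Open(MQ_RECEIVE_ACCESS, MQ_DENY_NONE)

    dispenser = win32com.client.Dispatch("MSMQ.MSMQTransactionDispenser")
    tx = dispenser.BeginTransaction()
    try:
        msg = q.Receive(tx)   # receive inside the transaction
        process(msg.Body)
        tx.Commit()           # only now is the message permanently removed
    except Exception:
        tx.Abort()            # message returns to the queue for redelivery
        raise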