I'm looking for a method to broadcast a message using a queue in FreeRTOS. I have come up with different ideas, but each one has a different problem.
What I have:
The item type for the queue is a struct with an attribute indicating whether the message is a broadcast or is intended for a specific task.
A broadcaster task that writes messages to the queue.
A queue manager task that peeks at the queue for new messages; if a message has a specific destination it resumes that task, and if it is a broadcast it resumes all tasks.
For the receiver tasks I came up with these ideas:
If I use the receive function xQueueReceive, only the first waiting task will read the message and remove it from the queue, so the other tasks will not be able to read that broadcast message. On the other hand, it is the perfect way to deliver a directed message (a message for a specific task).
If I use the peek function xQueuePeek, the message is never removed from the queue unless I also call xQueueReceive, which is redundant (peek and receive in the same task is ugly coding), and I can't use any other delete function because that would reset the whole queue. Peeking does solve the directed-message case, though. To make broadcasts work, I would have to give each receiver task a priority and let only the lowest-priority task call xQueueReceive to remove the message, and every receiver would suspend itself after peeking or reading so it doesn't read the same message twice (I'm not sure what to do about the queue manager task, because I can't suspend it and it will keep being notified about the message until the last task has received it). But then the whole system has to wait for that low-priority task to run before the message is removed, and any new message arriving in the meantime will not be read in real time.
I'm still thinking about other methods, such as using an extra queue or a separate queue for each receiver task, but I'm not sure yet which method is best, and I don't know whether there is another way to broadcast a message without using a queue at all.
I should mention that this program is not for a specific project; I'm just trying to use queues in different ways. I have already found another post about broadcasting a message, but it was for a specific problem that was solved without using a queue. I just want to send "this is a broadcast message" to the queue and have every receiver task be able to read it exactly once.
Thank you.
Event groups are the only broadcast mechanism in FreeRTOS. You could use an event group to unblock all the tasks that should read from the queue with xQueuePeek(), then use xEventGroupSync() to know when every task has read the data, at which point the data can be removed.
My task is the following:
I am monitoring time synchronization events from a third-party measuring device. This time synchronization is a bit flaky, so I want to detect when synchronization stops and issue an alarm.
For this, I am producing the synchronization events to a Kafka topic. I have three different events going on:
Synchronization request
Synchronization successful
Synchronization failed because other device did not respond
So, what I want to do:
When a request is received and nothing else arrives within a certain amount of time, I want to issue a "timeout" alarm
When a request is received and a success event arrives within the timeout period, I still want to issue a "timeout" alarm if no new request arrives within the timeout after that
When a failure event arrives, I want to issue the "other device did not respond" alarm
I am currently in the process of setting up a Kafka Streams application, and I need to store the state in case this application crashes (it should not, but I want to be sure), so I set it up as follows:
val builder = new StreamsBuilder
val storeBuilder = Stores.keyValueStoreBuilder(
  Stores.persistentKeyValueStore("timesync-alarms"),
  Serdes.String(),
  logEntrySerde)
builder.addStateStore(storeBuilder)
val eventStream = builder.stream(sourceTopic, Consumed.`with`(Serdes.String(), logEntrySerde))
Now, I am stuck. What I basically think I need is a flatMap function on the eventStream that, whenever an event arrives:
Queries the store for the last event that was processed
Decides if an alarm is to be raised
Updates the store with the currently-received event
Produces the alarm, if any
So, how do I achieve steps 1 and 3 here? Or am I conceptually wrong and have to do it differently?
I don't think you need to use a state store directly. You can create two streams, one with sync request events and the second with sync responses (success, fail), and join them:
requestStream.outerJoin(responseStream,
    (leftVal, rightVal) -> ...,
    JoinWindows.of(timeout), ...);
In the case of a timeout, rightVal is null.
If you want to send alarms to a separate topic, you can simply filter the joined stream and write all failures (error responses and timeouts) to that topic. Otherwise you can use the peek() method and trigger some action inside it. Here is a simple example: https://github.com/djarza/football-events/blob/master/football-ui/src/main/java/org/djar/football/ui/projection/StatisticsPublisher.java
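As an illustration only, here is roughly how the join could look in Scala, building on the eventStream from the question; isRequest, isFailure, timeoutMillis and alarmTopic are made-up placeholders, and it assumes requests and responses share the same message key (for example a device id):

import org.apache.kafka.common.serialization.Serdes
import org.apache.kafka.streams.kstream.{JoinWindows, Joined, Produced}

// Split the single event stream into sync requests and sync responses.
val requestStream  = eventStream.filter((_: String, e: LogEntry) => isRequest(e))
val responseStream = eventStream.filter((_: String, e: LogEntry) => !isRequest(e))

// Outer-join requests with the responses that arrive inside the timeout window.
// A request with no matching response (response == null) means a timeout;
// a failure response means the "other device did not respond" alarm.
val alarms = requestStream.outerJoin(
  responseStream,
  (request: LogEntry, response: LogEntry) =>
    if (response == null) "timeout"
    else if (isFailure(response)) "other device did not respond"
    else null, // successful sync inside the window: no alarm
  JoinWindows.of(timeoutMillis),
  Joined.`with`(Serdes.String(), logEntrySerde, logEntrySerde))

// Keep only the real alarms and write them to a separate alarm topic.
alarms
  .filter((_: String, alarm: String) => alarm != null)
  .to(alarmTopic, Produced.`with`(Serdes.String(), Serdes.String()))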
I want to trigger, at the exact same time and via message receipt, some processing in different actors. Considering that my actors' mailboxes may be heavily stacked, what would be the best method to implement this?
I'm assuming you want the actors to read the messages at the same time. This, of course, is not possible (while an actor is processing a message it cannot be disturbed).
But you can make sure that your trigger message is the next message they will take from the mailbox. This can be achieved by using a priority mailbox, for example this one: http://doc.akka.io/api/akka/snapshot/index.html#akka.dispatch.UnboundedStablePriorityMailbox
The messages in the mailbox will be sorted by priority. If you give your trigger messages the highest priority, they will be processed first.
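For illustration, a minimal Scala sketch of this approach; the Trigger message, the TriggerPriorityMailbox class and the "prio-mailbox" configuration name are all made up for the example:

import akka.actor.{Actor, ActorSystem, Props}
import akka.dispatch.{PriorityGenerator, UnboundedStablePriorityMailbox}
import com.typesafe.config.{Config, ConfigFactory}

case object Trigger

// Mailbox that sorts Trigger ahead of everything already waiting;
// "stable" means messages of equal priority keep their FIFO order.
class TriggerPriorityMailbox(settings: ActorSystem.Settings, config: Config)
  extends UnboundedStablePriorityMailbox(
    PriorityGenerator {
      case Trigger => 0 // highest priority
      case _       => 1 // everything else
    })

object PriorityMailboxExample extends App {
  // Register the mailbox under a name; use the fully qualified class name
  // if the mailbox class lives in a package.
  val conf = ConfigFactory.parseString(
    """prio-mailbox {
      |  mailbox-type = "TriggerPriorityMailbox"
      |}""".stripMargin).withFallback(ConfigFactory.load())

  val system = ActorSystem("demo", conf)

  class Worker extends Actor {
    def receive = {
      case Trigger => println(s"${self.path.name}: trigger handled before the backlog")
      case msg     => println(s"${self.path.name}: $msg")
    }
  }

  val worker = system.actorOf(Props(new Worker).withMailbox("prio-mailbox"), "worker")
  (1 to 1000).foreach(i => worker ! s"work-$i") // heavily stacked mailbox
  worker ! Trigger // jumps ahead of the queued work messages

  Thread.sleep(2000)
  system.terminate()
}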
So, I built this small example of a ZeroMQ pipeline architecture because I'll end up having to do something similar very soon and I'm trying to grasp the pipeline concept the right way.
https://gist.github.com/2765708
Right now, this is completely asynchronous. The controller dispatches a batch of tasks to various workers, which, in turn, send a message to the sink. The controller and sink are fixed parts of my architecture, while workers are dynamic. That's perfect.
However, I would like to know when the workers have finished working on all their tasks. In that example I do know the number of messages, but that won't be true in real-life situations; I might have 100 messages or 10,000. So how can the sink or the controller know when the workers have finished working on their tasks? I have to perform some actions that depend on the completion of the jobs sent to the workers.
I wanted to expand on #bjlaub's answer. It started as a comment but I was typing too much. I agree with the concept of acknowledgment, but believe it can originate in multiple places.
There are multiple approaches to this communication and it all depends on the behavior you are after in the system.
First, you can either send out messages from the workers as they finish each task, or from the sink as it receives each result. Right now I am not addressing the type of socket, only the act of communicating. I believe it is much more efficient to send it from the sink, as you would only need one connection back to the controller instead of one for each worker. The sink does not need to know how many total tasks there are, only that it fires off a message after each result it receives. The controller can determine how many to expect since it was the submission point and knew when it had exhausted its submissions (the count).
Now regardless of whether you have the message sent from the worker or the sink, you can use different socket types. If you want the controller to completely block until all work is done, then you can have it be a push/pull until it receives X messages (the message content can be anything; it's just a trigger).
This may be limiting if the controller wants to be able to do other work while these tasks are happening. If so, you could maybe use pub/sub, and let the controller subscribe to being notified as tasks complete, and asynchronously maintain a count until the total has been satisfied.
And finally, maybe you have the situation where you want the controller to ask the sink for a status when you deem fit. You can have a req/rep pattern for the controller to ask the sink how many requests it has received on demand.
I'm sure one of these patterns will fit your specific needs.
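For example, a minimal Scala sketch of the first approach (the sink notifying the controller), using the JeroMQ binding (org.zeromq:jeromq); the ports, endpoints and message contents are made up for illustration, and the two objects would run as separate processes:

import org.zeromq.ZMQ

// Sink process: receives results from the workers and sends one small
// notification per result back to the controller over a single connection.
object Sink extends App {
  val ctx = ZMQ.context(1)

  val results = ctx.socket(ZMQ.PULL)   // workers PUSH their results here
  results.bind("tcp://*:5558")

  val notify = ctx.socket(ZMQ.PUSH)    // single reverse link to the controller
  notify.connect("tcp://localhost:5559")

  while (true) {
    val result = results.recv(0)       // receive one result
    // ... aggregate/store the result ...
    notify.send("done", 0)             // content is irrelevant, it is only a signal
  }
}

// Controller process: it dispatched taskCount tasks, so it blocks here until it
// has pulled the same number of notifications, then moves on.
object Controller extends App {
  val taskCount = 100                  // known at submission time
  val ctx = ZMQ.context(1)

  val done = ctx.socket(ZMQ.PULL)
  done.bind("tcp://*:5559")

  (1 to taskCount).foreach(_ => done.recv(0))
  println(s"all $taskCount tasks completed")
}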
One idea (disclaimer: I have very little experience w/ 0MQ!):
Set up an "acknowledgment" pipeline in the reverse direction. Since the controller presumably knows how many tasks it has dispatched to the workers (e.g. the number of times it called send), it can use a PULL socket to receive a small message (an integer for example) from each worker indicating the completion of the task. The worker process dispatches its completed result to the sink, and at the same time sends the acknowledgement back to the controller. Once the controller collects the right number of acknowledgements, it can do whatever post-processing is necessary before farming out the next set of work.
You could also push this downstream to the sink, but you would need to notify the sink of the total number of work units to expect before farming them out to the workers.
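A sketch of this variant, where each worker sends the acknowledgement itself (same Scala/JeroMQ assumptions and made-up ports as above, plus an extra acknowledgement endpoint):

import org.zeromq.ZMQ

// Worker process: pulls a task, pushes the result downstream to the sink and,
// at the same time, pushes a tiny acknowledgement upstream to the controller.
object Worker extends App {
  val ctx = ZMQ.context(1)

  val tasks = ctx.socket(ZMQ.PULL)
  tasks.connect("tcp://localhost:5557")   // tasks from the controller

  val sink = ctx.socket(ZMQ.PUSH)
  sink.connect("tcp://localhost:5558")    // results to the sink

  val acks = ctx.socket(ZMQ.PUSH)
  acks.connect("tcp://localhost:5560")    // acknowledgements back to the controller

  while (true) {
    val task = new String(tasks.recv(0))
    val result = task.reverse             // stand-in for the real work
    sink.send(result, 0)
    acks.send("1", 0)                     // one ack per completed task
  }
}

// The controller binds a PULL socket on tcp://*:5560 and, after dispatching its
// batch, simply receives that many acknowledgements before farming out more work
// (the same counting loop as in the controller sketch above).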
I have a Microsoft Message Queue that gets populated with messages. If there is a problem with the processing of a message, I would like to retry it, but I do not want to retry it immediately.
Is there a way to add a delay to a message in MSMQ so that it is not available for a certain amount of time?
The alternative is to have another queue (a retry queue) and read that queue every 15 minutes, but I would rather not do this.
What you are looking for is "Poison Message Handling" (even if it's not the message's fault but a temporary environment problem).
There are lots of articles on that. Here are some:
Poison Message Handling in MSMQ 3.0
Poison Message Handling in MSMQ 4.0
Surviving poison messages in MSMQ
In short: you have to move them to a retry queue.
I've seen some code recently that handles this in the exception logic: the code has a built-in retry step that attempts the operation again after a delay. It fails, waits a specific amount of time, then tries again.
Essentially it retries recursively a set number of times, lengthening the delay each time. Fairly neat, and there's no reason to have another queue. It uses a lot of generics and delegates to execute the methods. I don't know whether something like this could be done in your case, but I suspect you would still want another queue to handle messages that can never be delivered.
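The pattern itself is independent of MSMQ; here is a minimal sketch of it in Scala, where the attempt count, the delays and the processMessage handler are made up for illustration:

import scala.annotation.tailrec
import scala.util.{Failure, Success, Try}

object Retry {
  // Run op; on failure wait, double the delay and try again, up to `attempts` times.
  @tailrec
  def withBackoff[A](attempts: Int, delayMs: Long)(op: => A): Try[A] =
    Try(op) match {
      case success @ Success(_)       => success
      case Failure(_) if attempts > 1 =>
        Thread.sleep(delayMs)
        withBackoff(attempts - 1, delayMs * 2)(op)
      case failure                    => failure // out of attempts: give up
    }
}

// Usage: retry the handler up to 5 times, starting with a 1-second delay.
// If it still fails, the message can then be moved to a retry/dead-letter queue.
// Retry.withBackoff(attempts = 5, delayMs = 1000) { processMessage(msg) }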
I'm thinking of creating a generic message queue to handle various inter-process messages. (WCF is not an option at this point.) So, rather than have 10-15 different queues for specific messages, I'd have one queue that is a catch-all.
Obviously, sending messages to this queue is not a problem. Each recipient would listen to the queue for new messages and then 'peek' at them, but I'm looking for a clean/efficient way to do this. By clean, I mean a method that does not require every recipient to read the body of every message.
Use System.Messaging.Message.AppSpecific (an integer property) to specify the recipient; listeners can check it when peeking, without reading the message body.