Please describe how to use a priority queue to implement a queue

I don't understand this question...
Please describe how to use a priority queue to implement a queue.
Do I simply assign the priority as the time of entrance? And since a queue is FIFO, would I min-prioritize so that the oldest time comes first?

Using the time as the priority key is one way to do it. Be careful, though, to use a time that doesn't change externally. You wouldn't want to be using local time when it comes time to set your clocks back an hour during the Daylight Saving Time switch.
You could also start an integer counter at 0, and increment it with every item you add to the queue.
In theory, you could just give every item equal priority, but in practice that might end up acting like a stack. It depends on how your priority queue implementation treats equal items. If the implementation is a binary heap, for example, it could insert the equal item as the new smallest item. So you'd end up with LIFO.
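For illustration, here is a minimal sketch of the counter approach in Java (PriorityBackedQueue and its methods are hypothetical names, not from any library):

import java.util.PriorityQueue;

// FIFO queue backed by a priority queue: an ever-increasing counter is the
// priority key, so the oldest entry always has the smallest key.
class PriorityBackedQueue<T> {
    private record Entry<E>(long seq, E value) {}

    private final PriorityQueue<Entry<T>> heap =
            new PriorityQueue<>((a, b) -> Long.compare(a.seq(), b.seq()));
    private long counter = 0;

    public void enqueue(T value) {
        heap.add(new Entry<>(counter++, value));
    }

    public T dequeue() {
        Entry<T> head = heap.poll();   // smallest counter, i.e. the oldest item
        return head == null ? null : head.value();
    }
}

Because the counter never repeats, no two items ever compare equal, which sidesteps the tie-breaking behavior described above.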

Related

Temporary adjustment of delay time

I have the following problem which I am unable to solve:
I have a situation where a security point (added as a Delay block) takes a 15-minute break every half hour. After the break, the security guards work faster until the queue is shorter than 10 people.
I wanted to model this as follows: a statechart with delay.set_capacity(0) after 30 minutes and delay.set_capacity(1) again after the 15-minute break. For the increased speed after the break, I added an additional state with the condition queue.size() > 10, and now I want to set the action such that the delay time changes from exponential(1/10) to exponential(1/5) as long as queue.size() > 10.
Does anyone have experience with which function to use in the action box? Or would you suggest a different function?
Since you are using, or at least want to use, a statechart, I would suggest the following design: composite states inside the working state to indicate whether the security agent is working fast or at normal speed, and a message transition to move it from one state to the next.
It is advisable to use a message transition and trigger it as needed instead of a conditional transition, which gets checked on every change inside the agent and can therefore be computationally expensive.
I assume you have already implemented the correct capacity settings in the on-enter actions of the working and breaking states.
Now you simply need to send the message every time an agent enters the queue and every time it exits the delay block, and of course, set the delay time based on the state of the statechart.
See the screenshot below, followed by a sketch of the delay-time expression.
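As a sketch, assuming the composite states are named workingNormal and workingFast and keeping the rates from the question, the Delay block's delay time expression could look like:

// Delay time expression: pick the service rate based on the active state
statechart.isStateActive(workingFast)
        ? exponential(1.0 / 5.0)    // sped-up service while the queue is long
        : exponential(1.0 / 10.0)   // normal service rate

The message transitions can then be triggered from the queue's "on enter" and the delay's "on exit" actions, e.g. with statechart.fireEvent("checkQueue") (the message text is an assumption).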

Is there a way to specify infinite allowed lateness in Apache Beam?

I'm using fixed windows to batch data by event time in order to send it to an external API efficiently (batches of 60 seconds), accumulation mode is set to DISCARDING because it doesn't matter if late data is sent to the external API without the previous data.
Is it possible to specify an infinite allowed lateness, so late data is never discarded?
It is definitely possible: you can set the allowed lateness to a very high Duration (for instance, Duration.standardDays(36500)). On the other hand, doing so would result in your state growing indefinitely, which might not be what you want. Every open window (that is, every window ever seen) will have at least one timer, called the GC timer, set for the end of the window plus the allowed lateness. Every timer has to be kept in state, and therefore the size of your state will grow over time.
If you do not need batching based on event-time, it might be a better option to use GroupIntoBatches, which should not suffer from this problem (you don't need to set allowed lateness and the size of your state will not grow).
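For reference, here is a sketch of both options with the Beam Java SDK, assuming a PCollection<KV<String, String>> named input (the element types, batch size, and late-firing trigger are placeholders):

import org.apache.beam.sdk.transforms.GroupIntoBatches;
import org.apache.beam.sdk.transforms.windowing.AfterPane;
import org.apache.beam.sdk.transforms.windowing.AfterWatermark;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.KV;
import org.joda.time.Duration;

// Option 1: fixed windows with a "practically infinite" allowed lateness.
// Mind the state-growth caveat above: one GC timer per window ever seen.
input.apply(Window.<KV<String, String>>into(FixedWindows.of(Duration.standardSeconds(60)))
        .triggering(AfterWatermark.pastEndOfWindow()
                .withLateFirings(AfterPane.elementCountAtLeast(1)))
        .withAllowedLateness(Duration.standardDays(36500))
        .discardingFiredPanes());

// Option 2: batch per key without event-time windowing; no allowed lateness needed.
input.apply(GroupIntoBatches.<String, String>ofSize(100));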

Anylogic Mean rate lost customer

I have a model where the queue's capacity limit is 2. I also set up a SelectOutput block before the queue, as customers who can't enter the queue are lost. Now I want to find out the mean rate at which customers are lost. How can I do that?
Many thanks!
Create a variable leftLastMin of type int.
In the "on exit (false)" code of the SelectOutput, write leftLastMin++;
Now, create a recurring event that triggers every minute. Here, you can easily do something with the number lost in the last minute (traceln, store it into a dataset or a statistics object...), as sketched below.
Also, reset your counter with leftLastMin = 0 so it is ready for the next minute.
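The event's action could look like this sketch in Java (lostPerMinute is a hypothetical Statistics object; adapt the names to your model):

// Recurring event action, fired every minute
lostPerMinute.add(leftLastMin);                 // record this minute's count
traceln("Customers lost in the last minute: " + leftLastMin);
leftLastMin = 0;                                // reset for the next minute

The mean loss rate is then available as lostPerMinute.mean(), in customers per minute.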

What is the correct operation of a CANopen inhibit timer?

I understand that the operation of a CANopen inhibit timer is to ensure a minimum time between successive transmissions of the same message, but the specification does not make it clear what to do if the data changes during the inhibit time (and the transmission is on change-of-state). Should I buffer the data and transmit it when the inhibit timer expires, or discard it and wait for a change after the timer has expired?
My assumption would be that, since it is not clearly defined, I can choose whichever approach I want, but I'd appreciate the input of any experienced architects or developers on this.
Thanks.
You're correct that the inhibit time is simply the minimum time between consecutive CAN frames with the same CAN-ID. The standard does not specify the behavior for multiple events during the inhibit time window, because it depends on the situation.
For services like NMT, EMCY and perhaps LSS, you'd want to buffer the messages and send them later. In this case the inhibit time is simply a means to help slow (or badly programmed) devices handle short bursts of messages. I've seen devices that could only handle 3 CAN frames at once, so the inhibit time is often necessary, but you would not want such devices to miss messages.
For event-driven Transmit-PDOs, it depends on what the PDO represents. If you use it to track state, it might make sense to drop events during the inhibit window. They're invalidated by subsequent events anyway. To ensure you always emit the latest state, you can store the most recent event and transmit it once the inhibit time has elapsed, or use the event-timer to ensure you're never too far behind. I've used this strategy in the past for analog inputs where line noise would sometimes cause event bursts.
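A sketch of that latest-state strategy in Java (all names are mine, and the actual frame transmission is left abstract):

// Event-driven TPDO with an inhibit window: send immediately when allowed,
// otherwise keep only the most recent data and flush it when the timer expires.
class InhibitedTpdo {
    private final long inhibitMs;          // inhibit time in milliseconds
    private long lastTxMs = Long.MIN_VALUE;
    private byte[] pending;                // latest untransmitted data, if any

    InhibitedTpdo(long inhibitMs) { this.inhibitMs = inhibitMs; }

    // Called on every change-of-state event.
    synchronized void onEvent(byte[] data, long nowMs) {
        if (nowMs - lastTxMs >= inhibitMs) {
            transmit(data, nowMs);         // outside the inhibit window
        } else {
            pending = data;                // overwrite: older events are stale
        }
    }

    // Called when the inhibit timer expires.
    synchronized void onInhibitExpired(long nowMs) {
        if (pending != null) {
            transmit(pending, nowMs);
            pending = null;
        }
    }

    private void transmit(byte[] data, long nowMs) {
        lastTxMs = nowMs;
        // ... hand the frame to the CAN driver here ...
    }
}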
If you use PDOs to track events (or state changes), you'd be better off buffering them so no events get lost. However, this can introduce potentially unbounded delays if the event period is shorter than the inhibit time.
For the products we're working on at Lely (dairy farm robots), we actually prefer to use SYNC-driven PDOs instead. It results in a much more predictable CAN bus load. And we don't have to track state at the receiver side because we receive a full update on every SYNC. However, the receiver is always one SYNC period behind the transmitter, so this may not be appropriate for your use case.

Time since a value was zero

I have an application that consumes work to do from an AWS topic. Work is added several times a day and my application quickly consumes it and the queue length goes back to 0. I am able to produce a metric for the length of the queue.
I would like a metric for the time since the length of queue was last zero. Any ideas how to get started?
Assuming a queue_size gauge that records the size of the queue, you can define a recording rule like this:
# Timestamp of the most recent `queue_size == 0` sample; else propagate the previous value
- record: last_empty_queue_timestamp
  expr: timestamp(queue_size == 0) or last_empty_queue_timestamp
Then you can compute the time since the last time the queue was empty as simply as:
timestamp(queue_size) - last_empty_queue_timestamp
Note however that because this is a gauge (and because of the limitations of sampling), you may end up with weird results. E.g. if one work item is added every minute, your sampling interval is one minute and you sample exactly after the work items have been added, your queue may never (or very rarely) appear empty from the point of view of Prometheus. If that turns out to be an issue (or simply a concern) you may be better off having your application export a metric that is the last timestamp when something was added to an empty queue (basically what the recorded rule attempts to compute).
Similar to Alin's answer; upon revisiting this problem I found this from the Prometheus documentation:
https://prometheus.io/docs/practices/instrumentation/#timestamps,-not-time-since
If you want to track the amount of time since something happened, export the Unix timestamp at which it happened - not the time since it happened. With the timestamp exported, you can use the expression time() - my_timestamp_metric to calculate the time since the event, removing the need for update logic and protecting you against the update logic getting stuck.
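Following that advice, the application can export the timestamp directly; here is a sketch using the Prometheus Java simpleclient (the metric name is an assumption):

import io.prometheus.client.Gauge;

// Unix timestamp of the last time the queue drained to zero
static final Gauge lastEmpty = Gauge.build()
        .name("queue_last_empty_timestamp_seconds")
        .help("Unix timestamp at which the queue size last reached zero.")
        .register();

// Call this wherever the application observes the queue become empty:
void onQueueEmpty() {
    lastEmpty.setToCurrentTime();
}

The time since the queue was last empty is then simply time() - queue_last_empty_timestamp_seconds in PromQL.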