What are differential instructions in a PLC?

I'm using an Omron CP1L PLC and programming it with CX-Programmer. I'm having a hard time understanding what exactly "Differential Instructions" are. From the documentation:
With differentiated instructions, execution results for instructions
are reflected in Condition Flags only when the execution condition is met,
and results for a previous rung (rather than execution results for
the differentiated instruction) will be reflected in Condition Flags
in the next cycle. You must therefore be aware of what Condition Flags
will do in the next cycle if execution results for differentiated
instructions are to be used.
My understanding is: an instruction always executes when its condition is met, and of course, if a Condition Flag carries the ON/OFF state from a previous rung's instruction, the instructions on the next rung that depend on it will execute. So I completely fail to grasp the point of the explanation in the documentation, and I see no difference between the two:
(A) Without using differential
(B) Using differential

What the manual is warning you about is that, in the incorrect case, Instruction A will be executed only once after C becomes true (it is a differentiated instruction), but the execution of Instruction B depends on the state of the condition flag set by Instruction A. If A executes only once, then that condition flag is only valid for the current PLC scan. Subsequent PLC scans with C satisfied will NOT execute differentiated instruction A, but MAY still execute differentiated instruction B, if some previous rung performs a comparison operation and sets the global condition flag TRUE.
If you understand the danger of global variables, this is basically the same thing. Some flags in PLC logic are global flags shared by certain instructions. They remain valid only immediately after the instruction that sets them executes, and they change each time such an instruction runs on different data. In the incorrect case, an unguarded rung is left depending on a global condition flag from an operation that is NOT guaranteed to have just executed.
In the correct case the execution condition is differentiated instead of the instruction. When C becomes true it drives a [DIFU D]. This makes D true for the next PLC scan ONLY: D will only EVER be true for one PLC scan each time C goes from FALSE to TRUE. This guarantees that Instruction A (which generates the Condition Flag value) is executed exactly once, and furthermore that it is guaranteed to have executed every time the rung containing Instruction B, which consumes the Condition Flag, is evaluated.
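The one-scan behaviour of DIFU can be sketched in ordinary code. Below is a minimal Python simulation (not PLC code, just the semantics): D is the DIFU output, and it goes ON for exactly one scan per rising edge of C.

```python
def simulate_scans(c_inputs):
    """Simulate the DIFU pattern across PLC scans: D goes ON for
    exactly one scan each time C transitions OFF -> ON."""
    prev_c = False
    d_history = []
    for c in c_inputs:          # one iteration = one PLC scan
        d = c and not prev_c    # DIFU: rising-edge detection on C
        d_history.append(d)
        prev_c = c
    return d_history

# C stays ON for three scans, drops, then rises again:
print(simulate_scans([False, True, True, True, False, True]))
# -> [False, True, False, False, False, True]
```

Because D is true for exactly one scan, anything gated by D (Instruction A and, on the same scan, Instruction B reading A's fresh Condition Flag) runs exactly once per rising edge of C.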
Edit: Problematic execution flow. The state of the CF is RANDOM (more precisely: uncontrolled!) unless we have just performed a comparison operation. Every other comparison operation in the entire program will alter its value each time it executes, anywhere in the program!
Scan    C     Instruction A       CF(=)        Instruction B
 #1     OFF   N/E                 RANDOM       N/E
 #2     ON    EXECUTES, = TRUE    TRUE         EXECUTES      // desired
 #3     ON    N/E                 RANDOM (T)   N/E
 #4     ON    N/E                 RANDOM (F)   N/E
 #5     ON    N/E                 RANDOM (T)   EXECUTES!!    // UNDESIRED
Here, so long as C remains ON, Instruction B will execute every time the CF switches from FALSE to TRUE due to comparison operations in other areas of the program. This is not desired: we only want Instruction B to execute if Instruction A has executed and has returned CF(=) as TRUE.
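The corruption of the shared flag is easy to reproduce outside a PLC. Below is a toy Python model (illustrative names, not PLC code): Instruction A runs once on the rising edge of C, but every other comparison in the program rewrites the same global CF, so Instruction B can fire again even though A never re-executed.

```python
class PLC:
    """Toy model of the shared 'Equals' Condition Flag: every comparison
    instruction, anywhere in the program, rewrites the same global flag."""
    def __init__(self):
        self.cf_equals = False

    def compare(self, a, b):
        self.cf_equals = (a == b)    # any comparison clobbers the CF

plc = PLC()
b_fired = []
prev_c = False
# Each tuple is one scan: (state of C, unrelated comparison run elsewhere)
scans = [(False, None), (True, (1, 1)), (True, (1, 2)), (True, (5, 5))]
for c, other in scans:
    if c and not prev_c:
        plc.compare(7, 7)            # Instruction A: runs ONCE, sets CF TRUE
    if other:
        plc.compare(*other)          # some other rung rewrites the same CF
    b_fired.append(c and plc.cf_equals)  # Instruction B keys off the global CF
    prev_c = c

print(b_fired)   # -> [False, True, False, True]: B fires again in the
                 # last scan although Instruction A never re-executed
```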

Is it safe to mark a Postgres function as parallel safe if it uses `set local`?

I'm using something like this as part of a query in my application in order to authenticate users:
set local jwt.claims.sub to 'ffad81a1-cc4e-4370-b4bc-1453975a4e8d';
With the above, is it safe to access that value in another function that is marked as parallel safe?
I'm executing a transaction which first makes the set local call, then performs a select query. The function that I want to mark as parallel safe is actually used as part of my row level security logic, so it executes during the select query and all within the same transaction that started with the set local call.
The documentation states
... should be labeled as parallel restricted if they access temporary tables, client connection state, cursors, prepared statements, or miscellaneous backend-local state which the system cannot synchronize in parallel mode (e.g., setseed cannot be executed other than by the group leader because a change made by another process would not be reflected in the leader).
https://www.postgresql.org/docs/12/sql-createfunction.html
I'm wondering whether this use of set local counts as client connection state? Hopefully not, as parallel safe makes a dramatic difference for some functions and I would love to enable it.
A function that uses SET or SET LOCAL should be marked PARALLEL RESTRICTED, otherwise the worker processes that work on the query don't agree on the setting, which can lead to trouble.
You could refactor your code and run SET LOCAL outside the function, before you run the SQL statement that uses the function. As long as they run in the same transaction, it shouldn't make a difference.
Alternatively, you can use
ALTER FUNCTION ... SET parameter = value;
In that case, the changed parameter value is only in effect during function execution and does not leak out of the function, so it should be safe to use PARALLEL SAFE.
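For instance, using the setting from the question (the function name and signature here are hypothetical), the per-function form would look roughly like:

```sql
-- Hypothetical function name/signature; the value set this way only
-- applies while the function executes and is restored on exit.
ALTER FUNCTION my_rls_check(uuid)
    SET jwt.claims.sub = 'ffad81a1-cc4e-4370-b4bc-1453975a4e8d';
```

Note that the value is fixed when you run the ALTER, so this form only fits settings that do not have to change per request.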

Holding agents before a select output element to avoid default port

I've been trying to model a scenario but still cannot find the best way to do it. The scenario is as follows:
Agents arrive at a point where they need to choose one of three paths. Each path is a delay with capacity 1. If the first path already has an agent in it (in the delay block), then the 1st condition is not met and the agent tries the second port. In the second port, if the delay block is available it can proceed, otherwise it checks the third. If all are busy, then the agent should wait in a queue before the select output.
To model this process, I used the following sequence:
Queue > Hold > Select Output 5 > 1 Delay element of capacity 1 after each of the three first output ports of the select output
The condition for the select output is for example "Delay1.size() == 0" then for the second port "Delay2.size() == 0", etc.
Then, I created a function that checks whether every delay.size() == 1; if so, the hold element is set to blocked, to avoid having agents go through the select output's default port. The function is called in the "On enter" and "On exit" fields of all the blocks.
Despite the above, agents still go through the default port, which means the hold element is not working as intended.
Is there a more efficient way to model the described scenario? Thank you!
Well, you are not actually blocking your Hold element at all, hence agents will walk through anytime :-)
There are many ways to handle such a situation.
You could replace the Delay with a Wait element instead. Whenever an agent leaves one of your Delay blocks, you unblock the Hold.
Whenever an agent passes the Hold, you block it, but only if all 3 paths are currently busy.
Should do the trick
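That bookkeeping can be sketched in plain Python (this is only the logic, not the AnyLogic API; all names are made up):

```python
class Gate:
    """Toy model: three paths of capacity 1 behind a Hold element."""
    def __init__(self, n_paths=3):
        self.paths = [False] * n_paths   # True while the path is occupied
        self.blocked = False             # state of the Hold

    def on_pass_hold(self):
        """Agent passed the Hold: seize the first free path, then
        block the Hold if that filled the last free slot."""
        i = self.paths.index(False)      # the select-output condition
        self.paths[i] = True
        if all(self.paths):
            self.blocked = True

    def on_exit_path(self, i):
        """Agent left path i: free the slot and unblock the Hold."""
        self.paths[i] = False
        self.blocked = False

g = Gate()
for _ in range(3):
    g.on_pass_hold()
print(g.blocked)      # True: all three paths busy, nobody reaches default
g.on_exit_path(1)
print(g.blocked)      # False: a slot opened, the next agent may pass
```

The key point is that blocking happens only on the two events that can change the answer (an agent passing the Hold, an agent leaving a path), rather than being re-evaluated on every block.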

Cloud Dataflow: Once trigger not working

I have a Dataflow pipeline reading from an unbounded source. My window size is 10 hours, and I am trying to test my trigger using a TestStream. My trigger should emit an early result if the element count reaches at least 2 for the same key within a window. I have the following trigger to achieve this:
input.apply(Window.into(FixedWindows.of(Duration.standardHours(12)))
        .triggering(AfterWatermark.pastEndOfWindow()
            .withEarlyFirings(AfterPane.elementCountAtLeast(2)))
        // a non-default trigger also needs allowed lateness and an accumulation mode
        .withAllowedLateness(Duration.ZERO)
        .discardingFiredPanes())
    .apply(Count.perElement());
We also tried:
Repeatedly.forever(AfterPane.elementCountAtLeast(2)).orFinally(AfterWatermark.pastEndOfWindow())
I expect early firings when asserting the result; however, I don't get all the results in
PAssert.that(pipeline).inWindow(..)..
What am I doing wrong? Also, running the same test repeatedly yields different results, meaning different values are returned from the trigger.
Triggering is non-deterministic. It will give you an early firing some time after the trigger condition is satisfied. It will then give you another early firing some time after the trigger condition is satisfied again.
The actual choice to emit after the trigger is determined by the runner. If you are using a batch runner, it may wait until all the data is available. How much input are you expecting for each key/window? Which runner are you using?

Exit from executing the remaining rules in Drools Decision Table

We have a scenario to be implemented in a Decision Table: stop executing the remaining rules once a certain rule successfully executes its action part. Suppose I have 50 rules, and the 5th rule says the insurance claim is invalid, so we mark the claim as invalid on the object; then there is no need to execute the remaining rules. How can this be achieved? Please suggest.
You can
retract the fact under evaluation, after setting invalid to true, on that rule's RHS,
throw an exception (ugly, ugly),
run the session using fireUntilHalt and call the halt method on the session on that rule's RHS; here you'll also need a very low salience rule (added in a .drl file) to call halt in case the fact passes all decision table rules.
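The low-salience safety net for the last option could look roughly like this in a .drl file (a sketch only: the Claim fact and its valid field are hypothetical names; drools.halt() stops a session that was started with fireUntilHalt):

```
// Sketch: with a very low salience, this rule fires only after every
// decision-table rule has had its chance, then ends the session.
rule "halt after decision table"
    salience -1000
when
    Claim()
then
    drools.halt();
end
```

The decision-table rule that sets the claim invalid would call drools.halt() on its own RHS, so the session stops either way.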

Simultaneously incrementing the program counter and loading the Instruction register

In my Computer Architecture lectures, I was told that the IR load and the PC increment are done in parallel. However, surely the order has an effect on which instruction is loaded.
If PC = 0 and the IR is loaded before the PC is incremented, the IR will hold the instruction that was at address 0.
However, if PC = 0 and the PC is incremented before the IR is loaded, the IR will hold the instruction at address 1.
So surely they can't be done simultaneously and the order must be defined?
You're not taking into account the wonders of flip-flops. The exact implementation depends, of course, on your specific design, but it's perfectly possible to read the value currently latched in some register or latch while at the same time preparing a different value to be stored there, as long as you know these values are independent (there's also the possibility of a "bypass" in more sophisticated designs, but that's beside the point here).
In this case, you'd be reading the current value of the PC (and using it to fetch the code from memory, or cache, or whatever) while preparing the next value (e.g. PC+4, or some branch target if you know it). This is how pipelines work.
Generally speaking, you either have enough time to do some work within the same cycle (incrementing the PC and using it for the code fetch), in which case both fit in the same pipestage, or, if you can't make it in time, you break these serial activities into two pipestages so that they can be done in "parallel": one of them then belongs to the next operation flowing through the pipe, so there's no longer a dependency (aside from corner cases like branches or bubbles).
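The edge-triggered behaviour described above can be sketched in a few lines of Python: within a cycle, all the logic reads the old latched values, and the clock edge then commits every new value at once, so the IR load and the PC increment both see PC = 0.

```python
# Minimal sketch of edge-triggered register semantics: combinational
# logic reads the CURRENT latched values; the clock edge then commits
# all next-state values simultaneously.

def clock_cycle(state, memory):
    pc = state["pc"]
    next_pc = pc + 1                 # increment computed from the OLD pc
    next_ir = memory[pc]             # fetch ALSO uses the OLD pc
    return {"pc": next_pc, "ir": next_ir}   # both commit at the clock edge

memory = ["insn@0", "insn@1", "insn@2"]
state = {"pc": 0, "ir": None}
state = clock_cycle(state, memory)
print(state)   # {'pc': 1, 'ir': 'insn@0'} -- IR holds the instruction at 0
```

Because nothing writes a register until the end of the cycle, the "order" question dissolves: both updates are functions of the same old state.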