How to prevent division by zero in ladder (PLC)? - plc

I have to make a circuit to prevent dividing a number by zero. I tried to put a condition in front of the division block, but it did not work. What could I do to solve that? Below is the non-functioning circuit that I tried. I am using the RSLogix Micro 500 emulator on my computer to simulate Allen-Bradley PLCs.

It appears you are testing N7:2 for EQUality with 0? Hence, it will only execute the DIV if N7:2 EQUals 0. Should it be a "NOT EQUAL" (NEQ) box instead?

As franji1 stated, you must check N7:2 for NOT EQUal (NEQ) to 0!
You could set up this logic in only one rung, since it only executes the division if the condition is satisfied. Just remember, ladder logic (LAD) in Rockwell controllers is read from left to right, top to bottom.
Regards.

Your original logic looks good. Just change the EQU comparison instruction in rung #2 to a NEQ instruction, and it should work like a champ.
Another preventative rung I add to all my RSLogix 500 programs, as the very last rung to be evaluated, is an OTU S:5/0. This prevents math overflows from faulting your PLC.

You simply have to add some logic to avoid it. Personally, when I divide by zero, I usually like the output of the function to be zero.
So you simply set up the two opposite cases: if your divisor is equal to zero, move zero into the result; if your divisor is not zero, perform the division.
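For reference, here is a minimal C-style sketch of those two opposite cases (the variable names are illustrative, not the asker's actual N7 addresses):

/* Guarded division: execute the DIV only when the divisor is nonzero,
   otherwise move 0 into the result. */
int dividend = 100, divisor = 0, result;

if (divisor != 0) {       /* NEQ rung conditions the DIV instruction */
    result = dividend / divisor;
} else {                  /* EQU rung conditions a MOV of 0 into the result */
    result = 0;
}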

Related

Stopping a run/simulation in Netlogo when the system reaches 95% equilibrium

Is there a way to stop a run in NetLogo when the system reaches equilibrium? The ticks just keep running, and I would like my simulation to stop automatically when it is near equilibrium, i.e. when the system is close to 95% of equilibrium.
I have a similar situation where I've taken two approaches. When there is a variable whose stability indicates quasi-equilibrium, I keep a history of that variable in a list and check the standard deviation of the most recent n values. If that standard deviation falls below a given value, I stop the simulation. For you, that variable might be concentration. Of course, if the variable is moving smoothly in one direction you might get a false positive, so as a check I've also regressed the most recent values to see if the slope is "close" to zero. I'd be hard pressed to put a percentage on closeness; that will depend on the situation. But something like this might do.
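A rough sketch of both checks in C (the window length n and the thresholds sd_tol and slope_tol are hypothetical tuning values; in NetLogo you would keep the history in a list and do the same arithmetic each tick):

#include <math.h>
#include <stdbool.h>
#include <stddef.h>

/* Returns true when the last n values of the monitored variable are both
   quiet (small standard deviation) and not drifting (small regression slope). */
static bool near_equilibrium(const double *history, size_t n,
                             double sd_tol, double slope_tol) {
    double mean = 0.0;
    for (size_t i = 0; i < n; i++)
        mean += history[i];
    mean /= (double)n;

    /* Standard deviation of the recent window. */
    double var = 0.0;
    for (size_t i = 0; i < n; i++) {
        double d = history[i] - mean;
        var += d * d;
    }
    double sd = sqrt(var / (double)n);

    /* Least-squares slope against the sample index, to catch a window that
       is quiet but still moving steadily in one direction. */
    double tbar = (n - 1) / 2.0, num = 0.0, den = 0.0;
    for (size_t i = 0; i < n; i++) {
        num += (i - tbar) * (history[i] - mean);
        den += (i - tbar) * (i - tbar);
    }
    double slope = num / den;

    return sd < sd_tol && fabs(slope) < slope_tol;
}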
Hope this helps,
Charles

Display an Error message when signal is less than a value in Simulink

I want to display an error message when the signal reaches a certain value, or simply when it reaches 0. I've used the If block and the Relational Operator block, but it does not work for me.
You're most likely checking if the signal is exactly zero, which with floating point arithmetic is almost always a very bad thing to do.
Rather, you want to check that the absolute value of the signal is less than some small tolerance. More than that, you almost certainly need to check whether the average of the signal over the past n time points (where you choose n) is less than the tolerance.
You might also consider using something like the Static Gap block from the Model Verification library.
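A minimal C sketch of that check (tol and n are hypothetical tuning values; in Simulink the same logic would be built from blocks such as Abs and the Relational Operator rather than code):

#include <math.h>
#include <stdbool.h>
#include <stddef.h>

/* Treat the signal as "zero" when the mean of the last n samples is within
   a small tolerance of zero, instead of testing for exact equality. */
static bool near_zero(const double *recent, size_t n, double tol) {
    double mean = 0.0;
    for (size_t i = 0; i < n; i++)
        mean += recent[i];
    mean /= (double)n;
    return fabs(mean) < tol;
}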

How do I compare two signals whose edges are almost in the same place?

I am verifying part of a design which generates pulses with precisely timed edges. I have a basic behavioral model which produces an output which is similar, but not exactly the same as the design. The differences between the two are smaller than the precision needed for the design, so my model is good enough. The problem is: how do I do a comparison between these two signals?
I tried:
assert(out1 == out1_behav);
But that fails since the two signals have edges which happen 1ps apart. The design only requires that the edges be placed with 100ps precision, so I want a pass in this situation.
I thought about using a specify block with $delay() timing checks, however this causes me other problems since I need to run with +no_timing_checks to keep my ram models from failing in this RTL sim.
Is there a simple way to check that these edges are "almost" the same?
With the design requirement for the signals to match within 100ps, you could add compare logic with a 100ps transition delay to act as a filter.
bit match;
// transition (inertial) delay filters out mismatch pulses shorter than 100ps
assign #100ps match = (out1 == out1_behav);
// deferred immediate assertion flags any mismatch that survives the filter
always @* assert #0 (match == 1);
Verilog has different ways of assigning delay: transition and transport. Transition delays control the rise, fall, and indeterminate/high-Z timing. They can act as a filter if a driving signal produces a pulse shorter than the delay. Transport delays will always follow the driving signal with a time shift. When the delays are large, transition and transport will look the same.
assign #delay transition = driver;           // transition (inertial) delay
always @(driver) transport <= #delay driver; // transport delay
example: http://www.edaplayground.com/s/6/878, click the run button to see the waveform.
If you are using ModelSim/Questa, you can still use +notimingchecks, and then use the tcl command tcheck_set to turn on individual timing checks, like $fullskew.
Otherwise you will have to write a behavioral block that records the timestamps of the rising and falling edges of the two signals and checks the absolute value of the difference.

Getting around floating point error with logarithms?

I'm trying to write a basic digit counter (an integer is inputted and the number of digits of that integer is outputted) for positive integers. This is my general formula:
dig(x) := Math.floor(Math.log(x,10))
I tried implementing the equivalent of dig(x) in Ruby, and found that when I was computing dig(1000) I was getting 2 instead of 3 because Math.log was returning 2.9999999999999996 which would then be truncated down to 2. What is the proper way to handle this problem? (I'm assuming this problem can occur regardless of the language used to implement this approach, but if that's not the case then please explain that in your answer).
To get an exact count of the number of digits in an integer, you can do the usual thing: (in C/C++, assuming n is non-negative)
int digits = 0;
while (n > 0) {
n = n / 10; // integer division, just drops the ones digit and shifts right
digits = digits + 1;
}
I'm not certain but I suspect running a built-in logarithm function won't be faster than this, and this will give you an exact answer.
I thought about it for a minute and couldn't come up with a way to make the logarithm-based approach work with any guarantees, and almost convinced myself that it is probably a doomed pursuit in the first place because of floating point rounding errors, etc.
Following The Art of Computer Programming, Volume 2, we will eliminate one bit of error before the floor function is applied by adding that one bit back in.
Let x be the result of log and then do x += x / 0x10000000 for a single precision floating point number (C's float). Then pass the value into floor.
This is guaranteed to be the fastest (assuming you have the answer in numerical form) because it uses only a few floating point instructions.
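For illustration, a small C sketch of that nudge-before-floor idea applied to the asker's example (shown here in double precision, since Ruby's Math.log works in doubles; the constant 0x10000000 follows the description above and should be treated as illustrative):

#include <math.h>
#include <stdio.h>

/* Reproduce the asker's formula log(x)/log(10), which yields
   2.9999999999999996 for x = 1000, then nudge the value up by about
   x * 2^-28 before flooring. */
static int dig(double x) {
    double y = log(x) / log(10.0);
    y += y / 0x10000000;   /* push values sitting just below an integer over it */
    return (int)floor(y);
}

int main(void) {
    printf("dig(1000) = %d\n", dig(1000.0));   /* prints 3 instead of 2 */
    return 0;
}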
Floating point is always subject to roundoff error; that's one of the hazards you need to be aware of, and actively manage, when working with it. The proper way to handle it, if you must use floats, is to figure out what the expected amount of accumulated error is and allow for that in comparisons and printouts: round off appropriately, compare for whether the difference is within that range rather than comparing for equality, et cetera.
There is no exact binary-floating-point representation of simple things like 1/10th, for example.
(As others have noted, you could rewrite the problem to avoid using the floating-point-based solution entirely, but since you asked specifically about working log() I wanted to address that question; apologies if I'm off target. Some of the other answers provide specific suggestions for how you might round off the result. That would "solve" this particular case, but as your floating operations get more complicated you'll have to continue to allow for roundoff accumulating at each step and either deal with the error at each step or deal with the cumulative error -- the latter being the more complicated but more accurate solution.)
If this is a serious problem for an application, folks sometimes use scaled fixed point instead (running financial computations in terms of pennies rather than dollars, for example). Or they use one of the "big number" packages which computes in decimal rather than in binary; those have their own round-off problems, but they round off more the way humans expect them to.
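As a tiny illustration of the scaled fixed-point idea (the price and tax rate here are made up):

/* Keep money as integer cents so amounts like 0.10 are represented exactly. */
long price_cents = 1999;                                /* $19.99 */
long tax_cents   = (price_cents * 825 + 5000) / 10000;  /* 8.25% tax, rounded to the nearest cent */
long total_cents = price_cents + tax_cents;             /* 2164 cents = $21.64 */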

Simulink: PID Controller - difference between back-calculation and clamping for anti-windup?

I need to implement an anti-windup (output limitation) for my PID controller. Simulink offers two options: back-calculation and clamping (documentation), which seem to deliver equal results. I know what back-calculation is doing mathematically. It requires defining the back-calculation gain Kb. This gain depends on how long my controller is saturated, and is therefore actually a dynamic value (because I may have a high variation of saturation times). Do you see a way to control this value? (In this case it would probably be necessary to build my own PID controller, as shown in the documentation above or in the picture below.)
Which brings me to the question: what is clamping actually doing? And what are the other differences? Which one is faster, and which one is more robust against stiff slopes? Does anybody have experience using both?
Not sure if this fully answers the question, but the PID Controller documentation page explains a bit more about clamping:
clamping: Stops integration when the sum of the block components exceeds the output limits and the integrator output and block input have the same sign. Resumes integration when the sum of the block components exceeds the output limits and the integrator output and block input have opposite sign.
The clamping circuit implements the logic necessary to determine whether integration continues.
If you select the clamping option and look under the mask, you can probably see the details of the clamping circuit.
In addition to am304's answer, there are some more things to consider.
Clamping
Clamping will always work. It detects integrator overflow and sets the integral path of the PID controller to zero to avoid windup, using a simple switch.
Clamping is a commonly used anti-windup method, especially in digital control systems. In serious applications, however, there is also forward clamping involved, which evaluates the controller input as well. This mechanism must be implemented manually.
Back Calculation
Back-calculation depends heavily on the back-calculation coefficient Kb. If you don't know how to actually calculate the parameter Kb, don't use back-calculation. This method calculates the difference between the actual controller output and the saturated output and subtracts it from the I-gain path, amplified by Kb.
In most cases the default value Kb = 1 will lead to worse results than clamping; it is even possible that it has no effect at all. Kb should be calculated based on the sampling time or, in case a D-gain is involved, based on the D- and I-gains. Appropriate literature should be consulted to calculate the coefficient. Back-calculation with a properly set coefficient enables better dynamics than clamping!
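For reference, a minimal discrete-time C sketch of the two schemes for a PI controller (the gains, limits, and sample time are placeholders, and the D path and filtering are left out for brevity):

#include <stdbool.h>

typedef struct {
    double Kp, Ki, Kb;        /* proportional, integral, back-calculation gains */
    double Ts;                /* sample time */
    double out_min, out_max;  /* output saturation limits */
    double integ;             /* integrator state */
} pi_ctrl;

static double sat(double u, double lo, double hi) {
    return u < lo ? lo : (u > hi ? hi : u);
}

/* Clamping (conditional integration): freeze the integrator while the output
   is saturated and the integrator output and block input have the same sign,
   as in the quoted documentation. */
static double pi_step_clamping(pi_ctrl *c, double error) {
    double u_unsat = c->Kp * error + c->integ;
    double u = sat(u_unsat, c->out_min, c->out_max);
    bool saturated = (u_unsat != u);
    bool same_sign = (c->integ > 0.0) == (error > 0.0);
    if (!(saturated && same_sign))
        c->integ += c->Ki * c->Ts * error;
    return u;
}

/* Back-calculation: the difference between saturated and unsaturated output,
   scaled by Kb, is fed back into the integrator so it unwinds while the
   output is limited. */
static double pi_step_backcalc(pi_ctrl *c, double error) {
    double u_unsat = c->Kp * error + c->integ;
    double u = sat(u_unsat, c->out_min, c->out_max);
    c->integ += c->Ts * (c->Ki * error + c->Kb * (u - u_unsat));
    return u;
}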