I'm learning SystemVerilog and today my lecturer warned us against accidentally introducing memory into combinational systems. He used the following code as an example of this:
module gate(output logic y, input logic a);
  always_comb
    if (a)
      y = '1;
endmodule
However, I don't understand why this presents a problem. As far as I can see, this is just a simple buffer. In what way does this code introduce memory into the system?
At the beginning of the simulation, while a == 1'b0, y simply holds its initial value, because nothing ever assigns it ('x in simulation). If later a == 1'b1, then y becomes '1. What value do you expect y to have when a subsequently returns to 1'b0?
The answer to this question is that it will retain its previous value: '1. That is not the behaviour of combinational logic, whose output by definition depends only on the current state of the inputs, not on their previous states. In order to implement the behaviour you have described, the synthesiser needs a component with state, with storage, with memory. It fulfils this requirement by inferring a latch.
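One standard fix, sketched below on the assumption that a pure buffer-like output was intended, is to assign y on every path through the always_comb block, so no state needs to be retained:

module gate(output logic y, input logic a);
  always_comb
    if (a)
      y = '1;
    else       // cover every path: nothing is remembered
      y = '0;  // now y == a, purely combinational
endmodule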
I am using the FixedRotation component and get a division by zero error. This happens in a translated expression of the form
var = nominator/fixedRotation.R_rel_inv.T[1,3]
because T[1,3] is 0 for the chosen parameters:
n={0,1,0}
angle=180 deg.
It seems that OpenModelica keeps the variable symbolic and tries to be generic, but in this case that leads to a division by zero because it chooses to put T[1,3] in the denominator.
What modification tells the compiler that T[1,3] should be evaluated at compile time, as if its value were hard-coded? Internally, R_rel in fixedRotation is not defined with Evaluate=true...
Should I use a custom version of this block? (When I copy-paste the source code into a new model and set the parameters R_rel and R_rel_inv to Evaluate=true, the simulation works without division by zero.)...
BUT is there a modifier to specify from outside that a parameter shall have Evaluate=true, without the need to create a new model?
Any other way to prevent division by zero?
Try propagating the parameter at a higher level and setting annotation(Evaluate=true) on it.
For example:
model A
  parameter Real a = 1;
end A;

model B
  parameter Real aPropagated = 2 annotation(Evaluate=true);
  A Ainstance(a=aPropagated);
end B;
I don't understand how the Evaluate annotation helps here. The denominator is obviously zero, and that is what actually needs to be handled.
There are various ways to deal with a division by zero, e.g. returning a particular value for that case, or adding a small offset to the denominator; you can find examples of both in the Modelica Standard Library. You can also consider the physical meaning of the equation and handle the case accordingly.
Since the denominator depends on a parameter, you can also add an assert() to warn the user that the parameter value is invalid.
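For illustration only (the model name, parameter names, and the eps threshold below are my own assumptions, not library code), an assert guarding such a division could look like this:

model GuardedDivision
  parameter Real denom = 1 "Denominator that depends on user parameters";
  parameter Real eps = 1e-10 "Guard threshold (assumed value)";
  Real var;
equation
  assert(abs(denom) > eps,
    "The chosen parameters make the denominator (nearly) zero",
    AssertionLevel.error);
  var = 1/denom;
end GuardedDivision;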
Btw. R_rel_inv is protected and should therefore not be used; use R_rel instead. Also, when dealing with rotation matrices, using the functions from Modelica.Mechanics.MultiBody.Frames is preferable.
And whether to use the stock component or your own implementation is a matter of preference: the stock component is maintained by the community, while your own version is entirely in your hands.
I have a constrained nonlinear optimization problem, "A". Inside the computation is an om.Group which I'll call "B" that requires a nonlinear solve. Whether "B" finds a solution or crashes seems to depend on its initial conditions. So far I've found that some of the initial conditions given to "B" are inconsistent with the constraints on "A", and that this seems to be contributing to its propensity for crashing. The constraints on "A" can be computed before "B".
If the objective of "A" could be computed before "B" then I would put "A" in its own group and have it pass its known-good solution to "B". However, the objective of "A" can only be computed as a result of the converged solution of "B". Is there a way to tell OpenMDAO or the optimizer (right now I'm using ScipyOptimizerDriver and the SLSQP method) that when it chooses a new point in design-variable space, it should check that the constraints of "A" hold before proceeding to "B"?
A slightly simpler example (without the complication of an initial guess) might be:
There are two design variables 0 < x1 < 1, 0 < x2 < 1.
There is a constraint that x2 >= x1.
Minimize f(sqrt(x2 - x1), x1) where f crashes if given imaginary inputs. How can I make sure that the driver explores the design space without giving f a bad input?
I have two proposed solutions; which one is best is highly problem-dependent. You can either raise an AnalysisError or use numerical clipping.
import numpy as np
import openmdao.api as om


class SafeComponent(om.ExplicitComponent):

    def setup(self):
        self.add_input('x1')
        self.add_input('x2')
        self.add_output('y')

    def compute(self, inputs, outputs):
        x1 = inputs['x1']
        x2 = inputs['x2']
        # the question computes sqrt(x2 - x1), so x1 > x2 is the bad case
        diff = x2 - x1

        ######################################################
        # option 1: raise an error, which causes the
        # optimizer line search to backtrack
        ######################################################
        # if diff < 0:
        #     raise om.AnalysisError('invalid inputs: x1 > x2')

        ######################################################
        # option 2: use numerical clipping
        ######################################################
        if diff < 0:
            diff = 0.

        outputs['y'] = np.sqrt(diff)


# build the model
prob = om.Problem()
prob.model.add_subsystem('sc', SafeComponent(), promotes=['*'])
prob.setup()

# x1 > x2 deliberately violates the constraint, to exercise the clipping
prob['x1'] = 20
prob['x2'] = 10

prob.run_model()
print(prob['y'])
Option 1: raise an AnalysisError
Some optimizers are set up to handle this well. Others are not.
As of V3.7.0, the OpenMDAO wrappers for SLSQP from scipy and pyoptsparse, and the SNOPT/IPOPT wrappers from pyoptsparse all handle AnalysisErrors gracefully.
When the error is raised, execution stops and the optimizer registers a failed case. It backtracks along the line search a bit to try to get out of the situation. It will usually try a few steps backwards, but at some point it will give up. So the success of this approach depends a bit on why you ended up in the bad part of the space and how hard the gradients are pushing you back into it.
This solution works very well with fully analytic derivatives, because (most) gradient-based optimizers will only ever ask for function evaluations along a line-search operation. That means that, as long as a clean point is found, you're always able to compute derivatives at that point as well.
If you're using finite differences, however, a line search could end right near the error condition without violating it (e.g. x1=.9999999, x2=1). Then, during the FD step to compute derivatives, you might trip the error condition and raise the error. The optimizer is not going to be able to recover from this; errors during FD steps will effectively kill the whole optimization.
So, for this reason, I never recommend the AnalysisError approach if you're using FD.
Option 2: Numerical Clipping
If your optimizer wrapper does not have the ability to handle an AnalysisError, you can try some numerical clipping instead: add a filter to your calculations to keep the values numerically safe. However, you obviously need to use this very carefully. You should at least add an additional constraint that forces the optimizer to stay away from the error condition at the converged point (e.g. x2 >= x1).
One important note: if you provide analytic derivatives, include the clipping in them!
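For the clipped sqrt above, that might look like the sketch below (my illustration, not part of the original answer; it assumes self.declare_partials('y', ['x1', 'x2']) was added to setup):

    def compute_partials(self, inputs, partials):
        diff = inputs['x2'] - inputs['x1']
        if diff <= 0:
            # inside the clipped region the output is constant,
            # so the reported derivatives must be zero as well
            partials['y', 'x1'] = 0.0
            partials['y', 'x2'] = 0.0
        else:
            d = 0.5 / np.sqrt(diff)
            partials['y', 'x1'] = -d  # d sqrt(x2 - x1) / d x1
            partials['y', 'x2'] = d   # d sqrt(x2 - x1) / d x2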
Sometimes the optimizer just wants to pass through the bad region on its way to the answer. In that case the simple clipping shown here is probably fine. Other times it wants to ride the constraint (be sure you add that constraint!!!), and then you probably want a more smoothly varying kind of clipping. In other words, don't use a simple if-condition: round the corner a bit, and perhaps make the value approach 0 asymptotically from a small positive value. That way you have a C1-continuous function and the derivatives won't go to exactly 0 for these inputs.
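As an illustration of that kind of smoothing (a softplus-style guard; the function and the eps value are my choices, not from the original answer):

import numpy as np

def smooth_clip(diff, eps=1e-3):
    # C1-continuous stand-in for max(diff, 0): tends to diff for
    # diff >> eps and decays asymptotically to a tiny positive value
    # for diff << 0, so sqrt() never sees a negative input and its
    # derivative never becomes exactly zero.
    # logaddexp is used instead of log(1 + exp(...)) to avoid overflow.
    return eps * np.logaddexp(0.0, diff / eps)

# usage inside compute():
# outputs['y'] = np.sqrt(smooth_clip(x2 - x1))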
I tried some simple Modelica code using if:
model thermostat1
  parameter Real T0 = 10;
  Real T(start=T0);
equation
  if T < 73 then
    der(T) = -T + 80;
  else
    der(T) = -T + 50;
  end if;
end thermostat1;
The simulation stops the moment T reaches 73.
Why can't the simulation continue with the new equation (der(T) = -T + 50)?
And how can I fix it?
Thank you.
What happens in your model is called chattering: events are triggered at a very high frequency.
In your case this is caused by the condition T<73 combined with the definitions of the derivatives. Let me try to explain what happens in the model you have provided:
1. The temperature rises until it reaches 73.
2. The condition in the if-statement turns false, so the temperature starts to fall.
3. That immediately makes the condition true again, so the temperature rises.
4. That makes the condition false again, so the temperature falls.
5. Go to 3.
The condition T<73 therefore changes at a very high frequency, creating many events that have to be handled by the solver. As a result the simulation makes almost no progress in time (which is what you observe as "the simulation stops").
There are multiple ways to solve this problem. One is to add a hysteresis to the model, which I did in the code below. For the hysteresis part I used the code from Modelica.Blocks.Logical.Hysteresis. This makes the Boolean variable upperLimit (which replaces your T<73) change only when the temperature rises above T_High or falls below T_Low. I chose this solution because it seems reasonable for a thermostat.
model thermostat1
  parameter Real T0 = 10;
  parameter Real T_High = 74;
  parameter Real T_Low = 72;
  Boolean upperLimit "Output of hysteresis block";
  Real T(start=T0);
equation
  upperLimit = not pre(upperLimit) and T > T_High or pre(upperLimit) and T >= T_Low;
  if upperLimit then
    der(T) = -T + 50;
  else
    der(T) = -T + 80;
  end if;
end thermostat1;
In the resulting simulation, T rises to T_High and then oscillates between T_Low and T_High instead of chattering at 73.
More general information can be found e.g. at http://book.xogeny.com/behavior/discrete/decay/
I have read the book "Digital Design and Computer Architecture" by David Harris, and I have a question about the SystemVerilog examples in it. After the introduction of parameterized modules, i.e. the #(parameter ...) construct, this construct is used in almost every example.
For example, the "subtractor" module from this book:
module subtractor #(parameter N = 8)
                   (input  logic [N-1:0] a, b,
                    output logic [N-1:0] y);
  assign y = a - b;
endmodule
What is the reason for using N in this code?
Can't we just write the following?
input  logic [7:0] a, b,
output logic [7:0] y);
Moreover, such parameters are used in almost every example further on in the book, but as far as I can see there is no reason for them: we could set the number of bits directly in the square brackets without the additional "parameters".
So what is the reason for this style of coding?
The use of parameters serves a number of purposes.
It is always better programming practice to use a symbolic name associated with a value than to use a literal value directly. DATA_WIDTH instead of N would have been a more appropriate example. This documents the meaning of the value.
When a change to that value is needed, you have a single place to make that change, and less chance that you'll miss a change, or change an unintended value.
The use of parameters allows you to re-use the same code in many different places by creating a template and then overriding the parameter values as needed.
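For example (my sketch, not one of the book's examples), the same subtractor template can be reused at two different widths simply by overriding N per instance:

module top(input  logic [15:0] p, q,
           input  logic [3:0]  r, s,
           output logic [15:0] wideDiff,
           output logic [3:0]  narrowDiff);
  // one template, two widths
  subtractor #(.N(16)) sub16(.a(p), .b(q), .y(wideDiff));
  subtractor #(.N(4))  sub4(.a(r), .b(s), .y(narrowDiff));
endmodule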
I have three looped operations O1, O2, O3, each triggered by an IF statement, and the operation with the largest value in flag = [F1 F2 F3] has the highest priority to run.
How can I switch between operations depending on the value of that flag? The flag value for each operation varies with time.
For simplicity, operation 1 runs first, and by the end of its loop its flag value will be the lowest, so operation 2 or 3 should run next. For this example, at t=0: F1=5, F2=3 and F3=1.
The over-simplified pseudo-code for what I'm trying to achieve:
while 1
    find the largest flag value using [v, index] = max(flag)
    run the operation with the highest flag value
    ...loop back...
end
I am not sure how the flag values should be compared between operations, which is why I hope someone can shed some light on the issue.
EDIT
Currently, all operations are written in one MATLAB file, and each is triggered by an IF statement. The operations run one after the other, in whatever order they appear in the file. I want to avoid that and trigger them depending on the flag value instead.
If your operations are functions (a little hard to tell from the question), then make a cell array of function handles, where fun1 is the name of one of your actual functions:
handles = {@fun1, @fun2, @fun3};
Then you can just use the index returned from your max call to pick the correct function from the array, passing any arguments with the following syntax:
handles{index}(args)
Using the style above makes the solution scalable, so you don't need a stack of if statements that require maintenance whenever the number of operations grows. If the functions are really simple, you can always use lambdas (anonymous functions, in MATLAB speak).
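A minimal runnable sketch of that idea (the three operations and their flag updates are placeholders I made up):

% placeholder operations: each returns an updated flag vector
op1 = @(flag) flag - [2 0 0];   % running O1 lowers F1
op2 = @(flag) flag - [0 2 0];
op3 = @(flag) flag - [0 0 2];
handles = {op1, op2, op3};

flag = [5 3 1];                  % initial priorities from the question
for step = 1:9                   % bounded loop instead of while 1, for the demo
    [~, index] = max(flag);      % pick the highest-priority operation
    flag = handles{index}(flag);
    fprintf('step %d ran O%d, flag = [%g %g %g]\n', step, index, flag);
end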
However, if you have a limited number of simple operations that are not likely to expand, you may choose to use a switch statement in your while loop instead. It conveys your intention better than a stack of if statements.
while 1
    [~, index] = max(flag);
    switch index
        case 1
            operation1;
            flag = [x y z];
        case 2
            operation2;
            flag = [x y z];
        otherwise
            operation3;
            flag = [x y z];
    end
end