Usage of the $past macro in SystemVerilog to check that a signal was high - system-verilog

I am a beginner in SystemVerilog.
I want to check, on the falling edge of a signal, whether it has been high for the past 'n' cycles. Usage of ##n cycles doesn't work for me.
logic x,y;
x & y -> ##2 $past(y) -> $fell(y); this doesn't seem to be working
With the condition x & y, what I am checking is: at the falling edge of 'y', the signal 'y' was high for the past 2 cycles after the condition x & y is met.

Hi and welcome to SVA.
In my answer I shall assume that you have defined a clock and are using it in your definition of "falling edge".
There are a few issues with your code and problem description. I only enumerate these to help with issues in the future:
- $past is not a macro but a system function
- You are not using the correct SVA implication operator. "->" is a blocking event trigger. The overlapping implication operator I guess you are after is |->
- ##2 $past(y) will actually insert a delay of 2 cycles, and then check that the past value of y was high. Really, you are checking that y is high one cycle after your initial trigger.
I am also not quite sure what your trigger condition is meant to be - x && y will spawn a property evaluation thread whenever both x and y are high; it won't specifically trigger on a negedge of y.
In the following code I attempt to code up the SVA to your spec as I understood it. You can use a simple function to ensure that y was high for the preceding n cycles. Feel free to replace $fell(y) with any trigger as required.
function bit y_high_preceding_n_cycles(int n);
  for (int i = 1; i < n; i++) begin
    // if y wasn't high i cycles ago, just return 0
    if (!$past(y, i, , @(posedge clk))) return 0;
  end
  return 1;
endfunction
prop_label: assert property (@(posedge clk) $fell(y) |-> y_high_preceding_n_cycles(n));
This will check that, on detection of $fell(y), y was high for the preceding n cycles. Note that the i==1 iteration of the for loop is by definition redundant: the trigger is $fell(y), so $past(y) == 1 is guaranteed to hold (assuming no Xs).
Hope this helps.

Related

If a sequence occurs then a subsequence occurs within it in System-Verilog assertions

I want to say:
"if the sequence A occurs then the sequence B occurs within that sequence". How can I do this?
I would have thought I could use the assertion:
assert property (@(posedge clk) (A |-> (B within A)));
but this doesn't seem to work for my example.
I've read that:
The linear sequence is said to match along a finite interval of consecutive clock ticks provided the first boolean expression evaluates to true at the first clock tick, the second boolean expression evaluates to true at the second clock tick, and so forth, up to and including the last boolean expression evaluating to true at the last clock tick.
but I suspect that the clock tick passed to the other side of the |-> is the last clock tick, when I want it to be the first.
My particular example is an accumulator which I expect to overflow if I add enough positive numbers, so I want A = (input == 1)[*MAX_SIZE] and B = (output == 0); here B is a sequence of length 1, and I don't know if that causes problems.
I'm very new to SystemVerilog, so it may be that some other part of my code is going wrong, but I've not seen this example done anywhere.
You are correct that the consequent in the |-> operator is started once A has already matched. What you want is to look into the past: "once I've seen A, have I seen B within A?".
You could use the triggered property of a sequence to do this:
sequence b_within_a;
  @(posedge clk)
    B within A;
endsequence
assert property (@(posedge clk) A |-> b_within_a.triggered);
The b_within_a sequence will match exactly at the end of A (provided, of course, that B also happened within it), which is when the triggered property will evaluate to 1.
Note that the b_within_a sequence has its clock defined specifically. This is a requirement from the LRM, otherwise you won't be allowed to call triggered on it.

Branch prediction and performance

I'm reading a book about computer architecture and I'm on this chapter talking about branch prediction.
There is this little exercise that I'm having a hard time wrapping my head around.
Consider the following inner for loop
for (j = 0; j < 2; j++)
{
    for (i = 10; i > 0; i = i-1)
        x[i] = x[i] + s;
}
Inner loop:
Loop:   L.D     F0, 0(R1)
        ADD.D   F4, F0, F2
        S.D     F4, 0(R1)
        DADDUI  R1, R1, -8
        BNE     R1, R3, Loop
Assume register F2 holds the scalar s, R1 holds the address of x[10], and R3 is pre-computed so that the loop ends when i == 0.
a) How would a predictor that alternates between taken/not taken perform?
---- Since the loop is only executed 2 times, I think that the alternating prediction would harm the performance in this case (?), with 1 misprediction.
b) Would a 1-bit branch prediction buffer improve performance (compare to a)? Assume the first prediction is "not taken", and no other branches map to this entry.
---- Assuming the first prediction is "not taken", and a 1-bit predictor inverts the bit if the prediction is wrong, it will go NT/T/T. Does that make it have the same performance as problem a), with 1 misprediction?
c) Would a 2-bit branch prediction buffer improve performance (compare to a)? Assume the first prediction is "not taken", and no other branches map to this entry.
---- 2-bit branch prediction starting with "not taken". As I remember, 2-bit prediction changes only after it misses twice, so this prediction will go NT/NT/T/T. Therefore its performance will be worse compared to a), with 1 misprediction.
That was my attempt to answer the problems. Can anyone explain to me if my answer is right/wrong in more detail please? Thanks.
Since the loop is only executed 2 times
You mean the outer-loop conditional, the one you didn't show asm for? I'm only answering part of the question for now, in case this confusion was your main issue. Leave a comment if this wasn't what had you confused.
The conditional branch at the bottom of the inner loop is executed 20 times, with this pattern: 9xT, 1xNT, 9xT, 1xNT. An alternating predictor there would be wrong about 50% of the time (40% or 60% of the 20 executions, depending on whether it started right or wrong).
It's the outer loop that only runs twice: T,NT. The whole inner loop runs twice.
The outer loop branch would either be predicted perfectly or terribly, depending on whether the alternating prediction started with T or with NT.
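If it helps to see the numbers, here is a small MATLAB sketch (MATLAB purely for convenience) that replays the inner-loop branch stream (9xT, 1xNT, twice) through the three predictors discussed above. The initial states and the "predict taken when the counter is 2 or 3" convention are the common textbook ones and may differ from what your book assumes:
outcomes = repmat([ones(1,9) 0], 1, 2);        % 1 = taken, 0 = not taken (20 executions)
n = numel(outcomes);
% (a) alternating predictor, starting with "taken"
predAlt = mod(0:n-1, 2) == 0;                  % T, NT, T, NT, ...
missAlt = sum(predAlt(:) ~= outcomes(:));
% (b) 1-bit predictor, initial prediction "not taken"
state1 = 0;  miss1 = 0;
for k = 1:n
    if state1 ~= outcomes(k), miss1 = miss1 + 1; end
    state1 = outcomes(k);                      % remember the last outcome
end
% (c) 2-bit saturating counter, starting at "strongly not taken" (0)
state2 = 0;  miss2 = 0;                        % states 0..3, predict taken if >= 2
for k = 1:n
    if (state2 >= 2) ~= outcomes(k), miss2 = miss2 + 1; end
    state2 = max(0, min(3, state2 + 2*outcomes(k) - 1));
end
[missAlt miss1 miss2]                          % mispredictions out of 20 executions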

How to extend the range of a variable in idl

I'd like to use k to count the number of times the for loop is executed. It would be billions of times, and I tried long64, but after some time k became negative. Is there any other way to do this?
Sorry, I think I gave a wrong description. My code is a 3-level nested for loop, and each level is calculating 2*256^3 numbers; once the value is equal to 0, I'd like to make k+=k. In the end I set print, 'k=', k, and while IDL was running I found k went from positive to negative. I used a cluster to do the computation, so it didn't take very long.
My guess is that you are not really using a long64 for k. The type for the loop variable in a for loop comes from the start value. For example, in this case:
k = 0LL
for k = 0, n - 1 do ...
k is an int (16-bit) because 0 is an int. You probably want something like:
for k = 0LL, n - 1LL do begin ...

Matlab linprog constraints: how to stop charge and discharge storage at same time?

I'm having issues with my MATLAB linprog code. The objective function is the overall cost for the 24-hour period, considering only the fuel costs of the boiler.
Purpose of simulation:
Optimisation of the charge/discharge behaviour of a Thermal Energy Storage (TES) for 24h operation of a system consisting of a boiler, a heat demand, and the TES. The price of gas is time-varying.
Problem:
If the TES is ideal (efficiency = 100%), I have no constraint that stops the system from charging and discharging at the same time. I CANNOT use one variable to describe charge and discharge; I do need them separated.
At the moment I have the following constraints to describe the min/max charge/discharge rates (and of course some others):
maxChargeThermalTES>=ChargeThermalTES<=0
maxDischargeThermalTES >= DischargeThermalTES <=0
Is it possible to realise the following logical rule within the constraints of linprog?
if ChargeThermalTES<0,
DischargeThermalTES=0
end
All approaches, e.g. with a binary variable (to describe whether the system is charging or discharging), do not work, as the binary variable always depends on the output of the optimisation.
You cannot enforce such a logical rule in pure linear programming.
However, what you can do is the following :
1) Solve your linear program without this constraint. Get the optimal cost of your objective function (let's name it OldCost).
2) Then change your linear program this way:
- add a constraint: the old objective function must stay between OldCost * (1-Epsilon) and OldCost * (1+Epsilon)
- the new objective function to minimize is ChargeThermalTES + DischargeThermalTES.
Cheers
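A minimal MATLAB sketch of this two-pass idea, with made-up problem data: nVar, idxC, idxD, epsilon and the placeholder bounds are all assumptions, not the poster's model. With the sign convention above (ChargeThermalTES <= 0), "minimise charge plus discharge" is read here as minimising total throughput, i.e. DischargeThermalTES - ChargeThermalTES.
% Sketch only: stand-in data, not the real 24h model.
nVar = 72;  idxC = 1:24;  idxD = 25:48;       % hypothetical variable layout
f  = rand(nVar, 1);                           % placeholder fuel-cost objective
A  = [];  b = [];  Aeq = [];  beq = [];       % the real model constraints go here
lb = -10*ones(nVar, 1);  ub = 10*ones(nVar, 1);
ub(idxC) = 0;  lb(idxD) = 0;                  % Charge <= 0, Discharge >= 0
% Pass 1: original problem
[x1, OldCost] = linprog(f, A, b, Aeq, beq, lb, ub);
% Pass 2: keep the cost near OldCost, minimise charge/discharge throughput
epsilon = 1e-3;
A2 = [A;  f';  -f'];                          % OldCost*(1-eps) <= f'*x <= OldCost*(1+eps)
b2 = [b;  OldCost*(1+epsilon);  -OldCost*(1-epsilon)];   % (swap the bounds if OldCost < 0)
f2 = zeros(nVar, 1);
f2(idxD) = 1;                                 % + DischargeThermalTES
f2(idxC) = -1;                                % - ChargeThermalTES (= |charge|)
x2 = linprog(f2, A2, b2, Aeq, beq, lb, ub);
The second pass picks, among the near-optimal solutions, one with as little simultaneous charging and discharging as possible.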
Yes, it is possible. You can add your if-then condition using one 0-1 binary variable and big-M; note that the binary variable turns the problem into a mixed-integer linear program, so you would solve it with intlinprog rather than linprog.
To realize the Logical rule:
if ChargeThermalTES<0,
DischargeThermalTES=0
end
Condition: If ChargeThermalTES<0, then DischargeThermalTES=0
Let's introduce a binary variable y
So we can encode the condition as follows: let y = 1 mean "discharging is allowed" and y = 0 mean "DischargeThermalTES must be 0".
First, tie y to the charge variable:
ChargeThermalTES >= -M (1 - y)
If y = 1, this forces ChargeThermalTES >= 0 (no charging); if y = 0, it is non-binding.
Then split the "equal to 0" requirement on the discharge into two inequalities:
DischargeThermalTES <= M y
DischargeThermalTES >= -M y
If y = 1, the above two are essentially non-binding.
If y = 0, they force DischargeThermalTES to become 0.
So whenever ChargeThermalTES < 0, the first constraint forces y = 0, and the other two then force DischargeThermalTES = 0, which is exactly your logical rule.
So with the following constraints combined, you can enforce your logical constraint in the (now mixed-integer) linear program:
ChargeThermalTES >= -M (1 - y)
DischargeThermalTES <= M y
DischargeThermalTES >= -M y
y in {0,1} binary, M a sufficiently large number.
Hope that helps.
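As a rough illustration of how these constraints could be handed to MATLAB, here is a minimal single-time-step sketch using intlinprog. The objective coefficients, the charge/discharge bounds and the value of M are made-up placeholders, not values from the question.
% Decision variables x = [ChargeThermalTES; DischargeThermalTES; y]
% Sign convention as in the question: charging makes ChargeThermalTES negative.
M = 1e3;                        % big-M, larger than any feasible |charge| or |discharge|
f = [1; 1; 0];                  % placeholder objective coefficients
intcon = 3;                     % the 3rd variable (y) is integer (binary via its bounds)
% A*x <= b encodes the three big-M constraints:
A = [ -1   0   M;               % ChargeThermalTES >= -M*(1-y)
       0   1  -M;               % DischargeThermalTES <=  M*y
       0  -1  -M ];             % DischargeThermalTES >= -M*y
b = [ M; 0; 0 ];
lb = [-50; 0; 0];               % hypothetical limits: charge in [-50,0], discharge in [0,50]
ub = [  0; 50; 1];
x = intlinprog(f, intcon, A, b, [], [], lb, ub);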

How to generate random matlab vector with these constraints

I'm having trouble creating a random vector V in Matlab subject to the following set of constraints: (given parameters N,D, L, and theta)
The vector V must be N units long
The elements must have an average of theta
No 2 successive elements may differ by more than +/-10
D == sum(L*cosd(V-theta))
I'm having the most problems with the last one. Any ideas?
Edit
Solutions in other languages or equation form are equally acceptable. Matlab is just a convenient prototyping tool for me, but the final algorithm will be in java.
Edit
From the comments and initial answers I want to add some clarifications and initial thoughts.
I am not seeking a 'truly random' solution from any standard distribution. I want a pseudo randomly generated sequence of values that satisfy the constraints given a parameter set.
The system I'm trying to approximate is a chain of N links of link length L where the end of the chain is D away from the other end in the direction of theta.
My initial insight here is that theta can be removed from consideration until the end, since (2) in essence adds theta to every element of a 0 mean vector V (shifting the mean to theta) and (4) simply removes that mean again. So, if you can find a solution for theta=0, the problem is solved for all theta.
As requested, here is a reasonable range of parameters (not hard constraints, but typical values):
5<N<200
3<D<150
L==1
0 < theta < 360
I would start by creating a "valid" vector. That should be possible - say, calculate it so that every entry has the same value.
Once you have that vector, I would apply some transformations to "shuffle" it. "Rejection sampling" is the keyword - if a shuffle would violate one of your rules, you just don't do it.
As transformations I can come up with:
- switch two entries
- modify the value of one entry and modify a second one to keep the 4th condition (theoretically you could just perturb two entries until the condition is fulfilled - but the chance that this happens is quite low)
But maybe you can find some more.
Do this reasonably often and you get a "valid" random vector. Theoretically you should be able to reach all valid vectors - practically, you could construct several different "start" vectors so it won't take that long.
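A minimal MATLAB sketch of the first transformation (switching two entries) with rejection, assuming theta = 0 as per the questioner's own observation and assuming you already have some valid start vector V0. The function name and step count are illustrative choices, and the second, value-modifying transformation is not implemented here:
function V = shuffle_valid_vector(V0, nSteps)
% V0 must already satisfy all four constraints (with theta = 0).
% Swapping two entries leaves the mean and sum(L*cosd(V)) unchanged,
% so only the "+/-10 between successive elements" rule has to be re-checked.
    V = V0;
    N = numel(V);
    for k = 1:nSteps
        idx  = randperm(N, 2);          % pick two positions at random
        Vtry = V;
        Vtry(idx) = Vtry(idx([2 1]));   % propose: swap the two entries
        if all(abs(diff(Vtry)) <= 10)
            V = Vtry;                   % accept
        end                             % otherwise reject and keep V
    end
end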
Here's a way of doing it. It is clear that not all combinations of theta, N, L and D are valid. It is also clear that you're trying to simulate random objects that are quite complex. You will probably have a hard time showing anything useful with respect to these vectors.
The series you're trying to simulate seems similar to a Wiener process, so I started with that; you can start with anything that is random yet reasonable. I then use that as a starting point for an optimization that tries to satisfy conditions 2, 3 and 4. The closer your initial value is to a valid vector (satisfying all your conditions), the better the convergence.
function series = generate_series(D, L, N, theta)
    s(1) = theta;
    for i = 2:N,
        s(i) = s(i-1) + randn(1,1);
    end
    f = @(x) objective(x, D, L, N, theta);
    q = optimset('Display','iter','TolFun',1e-10,'MaxFunEvals',Inf,'MaxIter',Inf);
    [sf, val] = fminunc(f, s, q);
    val
    series = sf;

function value = objective(s, D, L, N, theta)
    a = abs(mean(s) - theta);
    b = abs(D - sum(L*cos(s - theta)));
    c = 0;
    for i = 2:N,
        u = abs(s(i) - s(i-1));
        if u > 10,
            c = c + u;
        end
    end
    value = a^2 + b^2 + c^2;
It seems like you're trying to simulate something very complex/strange (a path of a given curvature?); see the questions by other commenters. You will still have to use your domain knowledge to connect D and L with a reasonable mu and sigma for the Wiener process to act as an initialization.
So based on your new requirements, it seems like what you're actually looking for is an ordered list of random angles, with a maximum change in angle of 10 degrees (which I first convert to radians), such that the link length, the number of links, and the distance and direction from start to end are as specified?
Simulate an initial guess. It will not satisfy the D and theta constraints (i.e. the specified D and the specified theta):
angles = zeros(N, 1);
for link = 2:N
    angles(link) = angles(link - 1) + (rand() - 0.5)*2*(10*pi/180);
end
Use a genetic algorithm (or another optimization) to adjust the angles based on the following cost function:
dx = sum(L*cos(angles));
dy = sum(L*sin(angles));
D = sqrt(dx^2 + dy^2);
theta = atan2(dy, dx);
the cost is now just the difference between the vector given by my D and theta above and the vector given by the specified D and theta (i.e. the inputs).
You will still have to enforce the max change of 10 degrees rule; perhaps that should just make the cost function enormous if it is violated? Perhaps there is a cleaner way to specify sequence constraints in optimization algorithms (I don't know of one).
I feel like if you can find the right optimization with the right parameters this should be able to simulate your problem.
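To make the above concrete, here is a small self-contained MATLAB sketch of that cost function with a penalty for the +/-10 degree rule. It uses fminsearch instead of a genetic algorithm (ga from the Global Optimization Toolbox could be dropped in instead), theta is assumed to be given in radians here, and the weights, tolerances and function names are all illustrative choices:
function angles = fit_chain(N, L, D, theta)
% Find N link angles whose chain ends D away in direction theta,
% with successive angles differing by at most 10 degrees.
    maxStep = 10*pi/180;
    a0 = cumsum([0; (rand(N-1,1) - 0.5)*2*maxStep]);   % random-walk initial guess
    cost = @(a) chain_cost(a, L, D, theta, maxStep);
    opts = optimset('MaxFunEvals', 1e5, 'MaxIter', 1e5);
    angles = fminsearch(cost, a0, opts);
end

function c = chain_cost(a, L, D, theta, maxStep)
    dx = sum(L*cos(a));
    dy = sum(L*sin(a));
    target = D*[cos(theta); sin(theta)];      % required end point
    c = norm([dx; dy] - target)^2;            % distance to the required end point
    viol = max(abs(diff(a)) - maxStep, 0);    % violations of the +/-10 degree rule
    c = c + 1e6*sum(viol.^2);                 % heavy penalty for violations
end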
You don't give us a lot of detail to work with, so I'll assume the following:
random numbers are to be drawn from [-127+theta +127-theta]
all random numbers will be drawn from a uniform distribution
all random numbers will be of type int8
Then, for the first 3 requirements, you can use this:
N = 1e4;
theta = 40;
diffVal = 10;
g = @() randi([intmin('int8')+theta intmax('int8')-theta], 'int8') + theta;
V = [g(); zeros(N-1, 1, 'int8')];
for ii = 2:N
    V(ii) = g();
    while abs(V(ii)-V(ii-1)) >= diffVal
        V(ii) = g();
    end
end
inline the anonymous function for more speed.
Now, the last requirement,
D == sum(L*cos(V-theta))
is a bit of a strange one...cos(V-theta) is a specific way to re-scale the data to the [-1 +1] interval, which the multiplication with L will then scale to [-L +L]. On first sight, you'd expect the sum to average out to 0.
Indeed, the expected value of cos(x) when x is a random variable from a uniform distribution over a full period [0 2*pi] is 0 (whereas the expected value of |cos(x)| is 2/pi, see here for example). Ignoring for the moment the fact that our limits are different from [0 2*pi], the expected value of sum(L*cos(V-theta)) reduces to 0, while that of sum(L*|cos(V-theta)|) would be the constant 2*N*L/pi.
How you can force this to equal some other constant D is beyond me...can you perhaps elaborate on that a bit more?
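For what it's worth, here is a quick Monte Carlo check of those expectations (a throwaway sketch, using uniformly distributed angles over a full period rather than the int8-limited range above):
x = 2*pi*rand(1e6, 1);   % uniform angles over a full period
mean(cos(x))             % close to 0
mean(abs(cos(x)))        % close to 2/pi
2/pi                     % about 0.6366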