What is the benefit of automatic variables in SystemVerilog?

I'm looking for the benefits of "automatic" in SystemVerilog.
I have seen the "automatic" factorial example, but I can't get through it. Does anyone know why we use "automatic"?

Traditionally, Verilog has been used for modelling hardware at the RTL and gate level abstractions. Since both RTL and gate level abstractions are static/fixed (non-dynamic), Verilog supported only static variables. So, for example, any reg or wire in Verilog would be instantiated/mapped at the beginning of simulation and would remain mapped in the simulation memory till the end of simulation. As a result, you can take a dump of any wire/reg as a waveform, and the reg/wire would have a value from the beginning till the end, since it is always mapped. From a programmer's perspective, such variables are termed static. In the C/C++ world, to declare such a variable you would use the storage class specifier static. In Verilog, every variable is implicitly static.
Note that until the advent of SystemVerilog, Verilog supported only static variables. Even though Verilog also supported some constructs for modelling at the behavioural abstraction, the support was limited by the absence of an automatic storage class.
automatic (called auto in the software world) storage class variables are mapped on the stack. When a function is called, all the local (non-static) variables declared in the function are mapped to individual locations on the stack. Since such variables exist only on the stack, they cease to exist as soon as the execution of the function is complete and the stack correspondingly shrinks.
Amongst other advantages, one possibility that this storage class enables is recursive functions. In the Verilog world, a function cannot be re-entrant. Recursive (or re-entrant) functions do not serve any useful purpose in a world where the automatic storage class is not available. To understand this, you can imagine a re-entrant function as a function which dynamically makes multiple recursive instantiations of itself. Each instance gets its automatic variables mapped on the stack. As we progress into the recursion, the stack grows and each function gets to make its computations using its own set of variables. When the function calls return, the computed values are collated and a final result is made available. With only static variables, each function call would store its variable values at the same common locations, thus erasing any benefit of having multiple calls (instantiations).
Coming to the factorial algorithm, it is relatively easy to conceptualize factorial as a recursive algorithm. In maths we write factorial(n) = n * factorial(n-1). So you need to calculate factorial(n-1) in order to know factorial(n). Note that the recursion cannot complete without a terminating case, which in the case of factorial is n = 1.
function automatic int factorial;
  input int n;
  if (n > 1)
    factorial = factorial(n - 1) * n;
  else
    factorial = 1;
endfunction
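As a quick sanity check (this call site is not part of the original example, just a minimal sketch):

initial begin
  // 4! = 24; each recursive call gets its own copy of n on the stack
  $display("factorial(4) = %0d", factorial(4));
end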
Without the automatic storage class, since all variables in a function would be mapped to fixed locations, when we call factorial(n-1) from inside factorial(n), the recursive call would overwrite the variables in the caller's context. In the factorial function defined in the above code snippet, if we do not specify the storage class as automatic, both n and the result factorial would be overwritten by the recursive call to factorial(n-1). As a result, the variable n would successively be overwritten as n-1, n-2, n-3 and so on till we reach the terminating condition of n = 1. The terminating recursive call to factorial would have a value of 1 assigned to n, and when the recursion unwinds, factorial(n-1) * n would evaluate to 1 at each stage.
With the automatic storage class, each recursive function call gets its own place in memory (actually on the stack) to store variable n. As a result, successive calls to factorial do not overwrite the caller's n, and when the recursion unwinds we get the right value for factorial(n), namely n*(n-1)*(n-2)*...*1.
Note that it is possible to define factorial using iteration as well, and that can be done without the use of the automatic storage class, as sketched below. But in many cases, recursion makes it possible for the user to code algorithms in a more intuitive fashion.
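For comparison, here is a hedged sketch of an iterative version (factorial_iter is a made-up name); it works even with static storage because only one activation of the function exists at a time:

function int factorial_iter;
  input int n;
  int acc, i; // static storage is fine here: only one call is active at a time
  acc = 1;
  for (i = 2; i <= n; i++)
    acc = acc * i;
  factorial_iter = acc;
endfunction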

I propose one example below (using fork...join_none):
Ex.1 (not using automatic): the output will be "4 4 4 4", because the spawned threads only run after the loop has exited, by which time i holds its final value of 4, stored in a single static memory location. This may be a bug in your code.
initial begin
  for (int i = 0; i <= 3; i++)
    fork
      $write("%d ", i);
    join_none
end
Ex.2 (using automatic): the output will be "0 1 2 3", because in each loop iteration the value of i is copied to k, and fork...join_none spawns a thread with that value of k (each iteration allocates a separate memory location for k: k0, k1, k2, k3):
initial begin
  for (int i = 0; i <= 3; i++)
    fork
      automatic int k = i;
      $write("%d ", k);
    join_none
end

Another example is the use of fork...join_none inside a for loop.
Without automatic, the fork...join_none inside the for loop will not work correctly, because every spawned thread would share a single copy of the loop variable:
for (int i = 0; i < `SOME_VALUE; i++) begin
  automatic int id = i;
  fork
    some_task(id); // placeholder for the task/function using the id above
    ...
  join_none
end

Related

Calling Julia macro with runtime-dependent argument

I would like to call a macro in a Julia package (@defNLExpr in JuMP) using an argument that is runtime dependent. The argument is an expression that depends on the runtime parameter n. The only way I can think of doing this is something like the following:
macro macro1(x)
    y = length(x.args)
    return esc(:(k = $y - 1))
end

macro macro2(n)
    x = "0"
    for i = 1:n
        x = "$x+$i"
    end
    x = parse(x)
    return :(@macro1($x))
end
n = rand(1:3)
println(n)
if n == 1
    @macro2(1)
elseif n == 2
    @macro2(2)
elseif n == 3
    @macro2(3)
else
    error("expected n in 1:3")
end
println(k)
Here I have assumed that my runtime n will always be in the range 1-3. I use macro2 to build up all possible expressions for these different possible values of n, and call the external macro (which I have replaced by the simplified macro1 here) for each of them. The calls to macro1 are in if statements, so only the correct one (determined from the value of n at runtime) will actually be executed.
Although this seems to work, is there a more efficient way of achieving this?
It seems that you might be looking for eval? Be aware that it should be used with care though, and that it is not very fast, since it has to invoke the compiler each time it is called.
If it's a limitation to you that it evaluates the expression in global scope, there are some ways to work around that.
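For illustration, a minimal eval sketch under the same assumption that the expression depends only on the runtime value n (the expression built here is just a stand-in for the real JuMP one):

n = rand(1:3)
ex = Expr(:call, :+, 0, (1:n)...)   # builds e.g. :(0 + 1 + 2) for n = 2
k = eval(ex)                        # note: evaluated in global scope
println(k)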

How to pass a variable value to a macro in SystemVerilog?

I think the question sums it up pretty well as to what I want: passing the value of a variable to a macro in SystemVerilog.
For example, what I want:
Say, there are 4 signals by the name of abc_X_def and I want to initialize all of them to 0.
So, without macros:
abc_0_def = 4'b0000;
abc_1_def = 4'b0000;
abc_2_def = 4'b0000;
abc_3_def = 4'b0000;
Now, the code that I have written is having a problem:
`define set_value(bit) abc_``bit``_def = 4'b0000
for (int i = 0; i < 4; i++) begin
  `set_value(i);
end
The error is that it's trying to find the signal abc_i_def which is obviously wrong. Just wondering if it's possible to pass the actual value of the variable 'i' to the macro.
The preprocessor directives are evaluated by the preprocessor and modify the code that is presented to the compiler.
The for loop is a verilog construct that is evaluated by the compiler.
So your preprocessor is not evaluating the for loop. It sees:
`define set_value(bit) abc_``bit``_def = 4'b0000
[some verilog]
`set_value(i);
[some verilog]
So 'i' is just i. It doesn't become a number until compilation.
Why don't you use local params with generate, so the local params are created in a loop at elaboration as the for loop is unrolled?
This is one of the many places that macros are a problem. Generate is a problem in other places (like when you want to control port lists).
I dug into this a bit more. Parameters and local parameters inside a generate are created as localparams in the scope of the generate. See here: System Verilog parameters in generate block. I had to get back to work before I could test it.
I would just use code and populate an array. This compiles in VCS:
module veritest
#(
  parameter MAX_X = 5,
            MAX_Y = 3,
            MAX_Z = 2
)
(); // empty port list

logic [4:0] abc_def [1:MAX_X][1:MAX_Y][1:MAX_Z];

always @*
begin
  for (integer z = 1; z < (MAX_Z+1); z = z + 1)
    for (integer y = 1; y < (MAX_Y+1); y = y + 1)
      for (integer x = 1; x < (MAX_X+1); x = x + 1)
      begin
        abc_def[x][y][z] = 4'b0000;
      end
end
endmodule
Since you said the naming convention is out of your control, you should consider using another tool to generate the Verilog for you.
You can have the code generated by your preferred programming language and then use an `include statement in your Verilog file, or you can go the embedded route.
Perl has EP3: http://search.cpan.org/~gspivey/Text-EP3-Verilog-1.00/Verilog.pm
Ruby has eRuby: http://www.tutorialspoint.com/ruby/eruby.htm
I'm sure something similar exists for other languages too, such as Python, Tcl, JavaScript, etc.
The concept is the same, just a difference in the embedded language and the tool used for conversion.
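As a hedged illustration of the code-generation route (plain Python rather than EP3 or eRuby; the file name abc_init.svh is made up), a few lines are enough to emit the unrolled assignments:

# generate_init.py - writes the unrolled assignments to a file
with open("abc_init.svh", "w") as f:
    for i in range(4):
        f.write("abc_%d_def = 4'b0000;\n" % i)

The generated file can then be pulled into the Verilog source with `include "abc_init.svh".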

Is it possible to have a compiler which optimizes a = func(a)? [closed]

Say I have an object of type A. Consider this case for any function of the type A -> A (i.e. one that takes an object of type A and returns another object of type A):
foo = func(foo)
Here, the simplest case would be for the result of func(foo) to be copied into foo.
Is it possible to optimize this so that foo gets modified in place by func?
There are no constraints on the language used. What I want to know is what constraints and properties the language must have to enable such an optimization. Are there any existing languages which perform such an optimization?
Example (in pseudocode):

type Matrix = List<List<int>>

Matrix rotate90Deg(Matrix x):
    Matrix result(x.columns, x.rows)  # assume a constructor taking the number of rows and columns
    for (int i = 0; i < x.rows; i++):
        for (int j = 0; j < x.columns; j++):
            result[i][j] = x[j][i]
    return result

Matrix a = [[1,2,3],[4,5,6],[7,8,9]]
a = rotate90Deg(a)
Here, is it possible to optimize the code so that it doesn't allocate memory for a new matrix (result), and instead just modifies the original matrix passed in?
First of all, you have to realize that some operations inherently cannot be computed in place. Matrix-matrix multiplication is an example of this, and rotate90Deg would fall under this category, since such an operation is actually a matrix multiplication by the appropriate multiplication matrix.
Now as for your example: you actually coded up a matrix transpose function. Matrix transpose can be done in place since you are swapping pairs of numbers, but I doubt that any compiler can automatically detect this and optimize it for you. Indeed, there are many, many tricks one can use to make matrix transpose cache-friendly and gain huge performance increases. Nevertheless, with a naive implementation, you will almost certainly end up with something very similar to what Aditya Kumar describes in his answer.
As I have foreshadowed by using the word "naive" earlier, programmers can coax the compiler to inline lots and lots of things in extremely optimized ways through advanced templating and other meta-programming techniques. (At least in C++, and maybe other languages that allow you to overload operator =.) For anyone interested in a case study of how this is done and what is involved, take a look at the Eigen matrix library, and how it handles a simple operation like u = v + w; where the three variables are all matrices of floats. Following is a brief overview of the key points.
A naive implementation would overload operator+ to return a temporary and operator= to copy that temporary to the result. Of course, in C++11 it is pretty easy to avoid the final copy during assignment by way of move constructors, but you will still have unnecessary temporaries if you have something a little more complex with multiple operators on the right-hand side, like u = 3.15f * u.transposed() + 5.0f;, since each operator/method would return a temporary, and that temporary would have to be looped over in order to process the next operator.
Long story short: rather than performing each operation when the corresponding function call occurs, Eigen makes the calls return a templated functor of sorts which merely describes the operation that needs to take place, and all the actual work ends up happening in operator=, thus enabling the compiler to emit a single, inlined loop for traversing the data only once and doing the operation truly in place.
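To make this concrete, here is a heavily simplified sketch of the expression-template technique. It is not Eigen's actual code or API; Vec and AddExpr are made-up names, and a real library adds size checks, more operators, and aliasing handling:

#include <cstddef>
#include <vector>

// A lightweight node that merely *describes* an addition; no work is
// done and no temporary vector is allocated when operator+ is called.
template <typename L, typename R>
struct AddExpr {
    const L& l;
    const R& r;
    float operator[](std::size_t i) const { return l[i] + r[i]; }
    std::size_t size() const { return l.size(); }
};

struct Vec {
    std::vector<float> data;
    explicit Vec(std::size_t n) : data(n) {}
    float  operator[](std::size_t i) const { return data[i]; }
    float& operator[](std::size_t i)       { return data[i]; }
    std::size_t size() const { return data.size(); }

    // All the actual work happens here: one fused loop that writes
    // directly into this object, with no intermediate temporaries.
    template <typename L, typename R>
    Vec& operator=(const AddExpr<L, R>& e) {
        for (std::size_t i = 0; i < size(); ++i) data[i] = e[i];
        return *this;
    }
};

// Build expression nodes; nesting yields AddExpr<AddExpr<...>, Vec>, etc.
template <typename L, typename R>
AddExpr<L, R> operator+(const L& l, const R& r) { return {l, r}; }

int main() {
    Vec u(3), v(3), w(3);
    v[0] = 1.0f; w[0] = 2.0f;
    u = v + w;     // a single loop, evaluated in place inside operator=
    u = v + w + w; // still one loop: the expression tree is fused at compile time
    return 0;
}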
Yes, it is possible, and this optimization is provided by at least C++ compilers (via inlining).
To explain the optimization a little bit.
e.g.
foo_t foo;
foo = func(foo); // #1
foo_t func(foo_t foo1) {
    foo_t new_foo;
    // operate on new_foo by using foo1
    return new_foo;
}
There are three instances of foo_t being made:
foo is copied and passed as foo1 to func
new_foo is created.
new_foo is assigned to foo by copying the contents of new_foo into foo.
All the three copies can be eliminated provided there are some invariants.
foo (the argument passed to the function) is never used later with its original value. This is equivalent to saying that foo is 'dead' at line #1, which is established here because foo is reassigned.
the object new_foo in function func has a lifetime that does not extend beyond that of func. This is also established here: the way new_foo is created, it will be on the stack, and the lifetime of objects on the stack is the same as that of the function in which they were created.
In C++ it can be achieved using inlining the function func. After inlining, the code basically will look like this.
foo_t foo;
foo_t new_foo;
// operate on new_foo by using foo
foo = new_foo;
Although C++ provides inlining as a language feature, almost any optimizing compiler does inlining these days.
Now it depends on what kind of operations you perform on new_foo and foo whether this extra new_foo will be optimized away or not. For some data types it is trivial (the compiler can do a 'copy propagation' followed by 'dead code elimination' to remove new_foo completely).

What does the term "memoize" imply?

Comparing the terms "memoize" and "cache", and in reading Wikipedia's memoization entry, do people agree that using the term "memoize" implies:
The memoized result is kept in the process's memory; in other words, it isn't stored in memcached.
One only "memoizes" functions, as in mathematical functions, e.g. Fibonacci, not values that may change over time, e.g. the number of registered users on the web site?
If you're doing anything other than the above, then one is just caching a result?
I'm not sure, but my understanding is that memoization requires that, given a function y = f(u), f be deterministic (that is, for a given u, y must always be the same), in order that the results of f may be stored.
Caching to me seems to be more of a problem of determining what pieces of data are accessed frequently, and keeping those data in fast storage.
The former is deterministic while the latter is stochastic.
I believe that memoizing a function allows you to locally cache the result of a function for a given set of arguments. It's almost like:
function f(a, b, c) {
  if (a == 1 && b == 2 && !c) {
    return 5;
  } else if (a == 1 && b == 3 && !c) {
    return 17.2;
  } /* ... etc ... */
  // some horribly long/complex/expensive calculation
  return result;
}
but with the initial huge "if" block being handled automatically and far more efficiently, and being added to as the function is called with different arguments.
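A generic wrapper that handles this automatically might look like the following sketch (assuming the arguments can be serialized into a lookup key and the wrapped function is deterministic):

function memoize(fn) {
  var cache = {};
  return function () {
    // serialize the arguments into a lookup key
    var key = JSON.stringify(Array.prototype.slice.call(arguments));
    if (!(key in cache)) {
      cache[key] = fn.apply(this, arguments);
    }
    return cache[key];
  };
}

var slowSquare = function (n) { /* pretend this is expensive */ return n * n; };
var fastSquare = memoize(slowSquare);
fastSquare(4); // computed
fastSquare(4); // served from the cache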
Note that you can only memoize a function that is deterministic and has no side effects. This means that the function's result can only depend on its inputs, and it cannot change anything when it is run.
So in short, memoization is function-local caching in very specific circumstances, and so it's a specialisation of regular caching.
From my understanding, yes, memoization is caching, used to speed up a time-critical program such as one that calculates a number sequence (e.g. the Fibonacci sequence).
It is debatable as the terms can loosely be used interchangeably.
To me, the sole implication of 'memoize' would be that the 'caching' is of previous inputs rather than precomputed tables. That is, memoization is a process by which a function remembers its own return values.

Constants in MATLAB

I've come into ownership of a bunch of MATLAB code and have noticed a bunch of "magic numbers" scattered about the code. Typically, I like to make those constants in languages like C, Ruby, PHP, etc. When Googling this problem, I found that the "official" way of having constants is to define functions that return the constant value. Seems kludgey, especially because MATLAB can be finicky when allowing more than one function per file.
Is this really the best option?
I'm tempted to use / make something like the C Preprocessor to do this for me. (I found that something called mpp was made by someone else in a similar predicament, but it looks abandoned. The code doesn't compile, and I'm not sure if it would meet my needs.)
Matlab has constants now. The newer (R2008a+) "classdef" style of Matlab OOP lets you define constant class properties. This is probably the best option if you don't require back-compatibility to old Matlabs. (Or, conversely, is a good reason to abandon back-compatibility.)
Define them in a class.
classdef MyConstants
    properties (Constant = true)
        SECONDS_PER_HOUR = 60*60;
        DISTANCE_TO_MOON_KM = 384403;
    end
end
Then reference them from any other code using dot-qualification.
>> disp(MyConstants.SECONDS_PER_HOUR)
3600
See the Matlab documentation for "Object-Oriented Programming" under "User Guide" for all the details.
There are a couple minor gotchas. If code accidentally tries to write to a constant, instead of getting an error, it will create a local struct that masks the constants class.
>> MyConstants.SECONDS_PER_HOUR
ans =
        3600
>> MyConstants.SECONDS_PER_HOUR = 42
MyConstants =
    SECONDS_PER_HOUR: 42
>> whos
  Name             Size   Bytes  Class   Attributes
  MyConstants      1x1      132  struct
  ans              1x1        8  double
But the damage is local. And if you want to be thorough, you can protect against it by calling the MyConstants() constructor at the beginning of a function, which forces Matlab to parse it as a class name in that scope. (IMHO this is overkill, but it's there if you want it.)
function broken_constant_use
    MyConstants(); % "import" to protect assignment
    MyConstants.SECONDS_PER_HOUR = 42 % this bug is a syntax error now
The other gotcha is that classdef properties and methods, especially statics like this, are slow. On my machine, reading this constant is about 100x slower than calling a plain function (22 usec vs. 0.2 usec, see this question). If you're using a constant inside a loop, copy it to a local variable before entering the loop. If for some reason you must use direct access of constants, go with a plain function that returns the value.
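For example, a minimal sketch of that hoisting tip, using the MyConstants class from above:

secsPerHour = MyConstants.SECONDS_PER_HOUR; % one slow classdef access
total = 0;
for h = 1:24
    total = total + secsPerHour;            % fast local-variable reads
end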
For the sake of your sanity, stay away from the preprocessor stuff. Getting that to work inside the Matlab IDE and debugger (which are very useful) would require deep and terrible hacks.
I usually just define a variable with an UPPER_CASE name and place it near the top of the file. But you have to take the responsibility of not changing its value.
Otherwise you can use MATLAB classes to define named constants.
MATLAB doesn't have an exact const equivalent. I recommend NOT using global for constants - for one thing, you need to make sure they are declared everywhere you want to use them. I would create a function that returns the value(s) you want. You might check out this blog post for some ideas.
You might find some of the answers to How do I create enumerated types in MATLAB? useful. But in short, no, there is not a one-line way of specifying variables whose value shouldn't change after initial setting in MATLAB.
Any way you do it, it will still be somewhat of a kludge. In past projects, my approach to this was to define all the constants as global variables in one script file, invoke the script at the beginning of program execution to initialize the variables, and include "global MYCONST;" statements at the beginning of any function that needed to use MYCONST, as sketched below. Whether or not this approach is superior to the "official" way of defining a function to return a constant value is a matter of opinion that one could argue either way. Neither way is ideal.
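A minimal sketch of that pattern (the file names and the value are made up):

% initConstants.m -- script, run once at program start
global MYCONST;
MYCONST = 42;

% anyFunction.m -- every function that uses the constant re-declares it
function y = anyFunction(x)
global MYCONST;
y = x + MYCONST;
end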
My way of dealing with constants that I want to pass to other functions is to use a struct:
% Define constants
params.PI = 3.1416;
params.SQRT2 = 1.414;
% Call a function which needs one or more of the constants
myFunction( params );
It's not as clean as C header files, but it does the job and avoids MATLAB globals. If you wanted the constants all defined in a separate file (e.g., getConstants.m), that would also be easy:
params = getConstants();
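where getConstants.m might be as simple as this sketch:

function params = getConstants()
% getConstants - one central place for the named constants
params.PI = 3.1416;
params.SQRT2 = 1.414;
end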
Don't call a constant using myClass.myconst without creating an instance first! Unless speed is not an issue. I was under the impression that the first call to a constant property would create an instance and then all future calls would reference that instance (Properties with Constant Values), but I no longer believe that to be the case. I created a very basic test function of the form:
tic;
for n = 1:N
    a = myObj.field;
end
t = toc;
With classes defined like:
classdef TestObj
    properties
        field = 10;
    end
end

or:

classdef TestHandleObj < handle
    properties
        field = 10;
    end
end

or:

classdef TestConstant
    properties (Constant)
        field = 10;
    end
end
I ran these for different kinds of objects, handle objects, nested objects, etc. (as well as assignment operations). Note that these were all scalars; I didn't investigate arrays, cells, or chars. For N = 1,000,000 my results (for total elapsed time) were:
Access(s)  Assign(s)  Type of object/call
0.0034     0.0042     'myObj.field'
0.0033     0.0042     'myStruct.field'
0.0034     0.0033     'myVar'                             // plain old workspace evaluation
0.0033     0.0042     'myNestedObj.obj.field'
0.1581     0.3066     'myHandleObj.field'
0.1694     0.3124     'myNestedHandleObj.handleObj.field'
29.2161    -          'TestConstant.const'                // call directly to class (supposed to be faster)
0.0034     -          'myTestConstant.const'              // create an instance of TestConstant
0.0051     0.0078     'TestObj > methods'                 // calls get and set methods that loop internally
0.1574     0.3053     'TestHandleObj > methods'           // get and set methods (internal loop)
I also created a Java class and ran a similar test:
12.18      17.53      'jObj.field > in matlab for loop'
0.0043     0.0039     'jObj.get and jObj.set loop N times internally'
The overhead in calling the Java object is high, but within the object, simple access and assign operations happen as fast as regular matlab objects. If you want reference behavior to boot, Java may be the way to go. I did not investigate object calls within nested functions, but I've seen some weird things. Also, the profiler is garbage when it comes to a lot of this stuff, which is why I switched to manually saving the times.
For reference, the Java class used:
public class JtestObj {
    public double field = 10;

    public double getMe() {
        double N = 1000000;
        double val = 0;
        for (int i = 1; i < N; i++) {
            val = this.field;
        }
        return val;
    }

    public void setMe(double val) {
        double N = 1000000;
        for (int i = 1; i < N; i++) {
            this.field = val;
        }
    }
}
On a related note, here's a link to a table of NIST constants: ascii table and a matlab function that returns a struct with those listed values: Matlab FileExchange
I use a script with simple constants in capitals and include the script in other scripts that need them.
LEFT = 1;
DOWN = 2;
RIGHT = 3;
% etc.
I do not mind about these not being constant. If I write "LEFT = 3" then I would be plain stupid, and there is no cure against stupidity anyway, so I do not bother.
But I really hate the fact that this method clutters up my workspace with variables that I would never have to inspect. And I also do not like to use something like "turn(MyConstants.LEFT)", because this makes longer statements a zillion characters wide, making my code unreadable.
What I would need is not a variable but a possibility to have real pre-compiler constants. That is: strings that are replaced by values just before executing the code. That is how it should be. A constant should not have to be a variable. It is only meant to make your code more readable and maintainable. MathWorks: PLEASE, PLEASE, PLEASE. It can't be that hard to implement this...