How can I produce random single-precision floating-point numbers in SystemVerilog?

I'm trying to verify the behavior of a floating-point multiplier using the Universal Verification Methodology (UVM), and I have an issue.
The problem is that I want to generate random single-precision floating-point numbers. This is not possible directly, so I decided to generate two random 32-bit vectors like the following:
rand logic [31:0] A;
rand logic [31:0] B;
The issue is that we may produce many bit patterns that are not valid numbers in IEEE 754 notation.
My question is: how can I put the correct constraints on these numbers, and what are those constraints?

SystemVerilog allows randomization of integer/logic/bit vector values, not real (floating-point) variables; the rationale is that generating random fractional values would slow down the constraint solver. What you may want to do instead is constrain a random int variable and multiply it by a real number for your use case.
class temp;
  rand int multi_x;
  real w, y;
endclass

module mytest;
  temp tt = new();
  initial begin
    tt.w = 3.2345;
    for (int j = 0; j < 4; j++) begin
      void'(tt.randomize());  // use the class's built-in randomize() to randomize multi_x
      tt.y = tt.w * real'(tt.multi_x);
      $display("Value tt.y = %f, val tt.w = %f, val tt.multi_x = %0d",
               tt.y, tt.w, tt.multi_x);
    end
  end
endmodule
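As for the constraint question itself: a 32-bit pattern fails to be a valid finite IEEE 754 number only when its exponent field A[30:23] is all ones (infinities and NaNs); a zero exponent with a nonzero mantissa is a subnormal, which you may also want to exclude. In SystemVerilog this amounts to a constraint like A[30:23] != 8'hFF. Here is a small Python sketch of the same bit-level filter, for illustration only (the helper name is mine, not from the post):

import random
import struct

def random_finite_f32_bits(allow_subnormals=False):
    # Mirrors the constraint you would put on a rand logic [31:0] vector:
    # exponent field != 0xFF rules out infinities and NaNs; optionally,
    # a zero exponent with a nonzero mantissa (a subnormal) is excluded too.
    while True:
        bits = random.getrandbits(32)
        exponent = (bits >> 23) & 0xFF
        mantissa = bits & 0x7FFFFF
        if exponent == 0xFF:
            continue  # Inf (mantissa == 0) or NaN (mantissa != 0)
        if not allow_subnormals and exponent == 0 and mantissa != 0:
            continue
        return bits

bits = random_finite_f32_bits()
value = struct.unpack('<f', struct.pack('<I', bits))[0]  # reinterpret bits as a float
print(f"0x{bits:08x} -> {value}")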

How to convert values to N bits of resolution in MATLAB?

My computer uses 32 bits of resolution by default. I'm writing a script that involves taking measurements with a multimeter that has N bits of resolution. How do I convert the values to that resolution?
For example, if I have an RNG that gives 1000 values
nums = randn(1,1000);
and I use an N-bit multimeter to read those values, how would I get the values to reflect that?
I currently have
meas = round(nums,N-1);
but it's giving me N digits, not N bits. The original random numbers are unbounded, but the resolution of the multimeter is the limitation; how to implement the limitation is what I'm looking for.
Edit I: I'm talking about the resolution of measurement, not the bounds of the numbers. The original values are unbounded. The accuracy of the measured values should be limited by the resolution.
Edit II: I revised the question to try to be a bit clearer.
randn doesn’t produce bounded numbers. Let’s say you are producing 32-bit integers instead:
nums = randi([0,2^32-1],1,n);
To drop the bottom 32-N bits, simply divide by an appropriate value and round (or take the floor):
nums = round(nums/(2^(32-N)));
Do note that we only use floating-point arithmetic here; the values are integer-valued but not actually of an integer type. You can do a similar operation using actual integer types if you need that.
Also, obviously, N should be lower than 32. You cannot invent new bits. If N is larger, the code above will add zero bits at the bottom of the number.
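To make the bit-dropping concrete, here is the same step in Python (the value 0xDEADBEEF and N = 8 are just arbitrary examples):

x = 0xDEADBEEF                      # a 32-bit sample
N = 8                               # bits of resolution to keep
step = 2 ** (32 - N)                # size of one quantization step
floored = (x // step) * step        # truncate the bottom 32-N bits
rounded = round(x / step) * step    # or round to the nearest step
print(hex(floored), hex(rounded))   # 0xde000000 0xdf000000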
With a multimeter, it is likely that the range is something like -M V to M V with a constant resolution, and you can configure M by selecting the range.
This is fixed-point math. My answer will not use MATLAB's fixed-point toolbox because I don't have it available; if you have it, you could use it to write simpler code.
You can generate the integer values with the intended resolution, then rescale them to the intended range.
F = 2^N - 1               % maximum integer value
X = randi([0,F],100,1)
X*2*M/F - M               % rescale: divide by the integer range, multiply by the intended range, then offset by the intended minimum
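The same simulation in Python/NumPy terms (the 12-bit and ±10 V parameters below are illustrative assumptions, not from the question):

import numpy as np

N, M = 12, 10.0                    # assumed: a 12-bit meter on a +/-10 V range
rng = np.random.default_rng(0)
nums = rng.standard_normal(1000)   # unbounded "true" values

F = 2 ** N - 1                     # maximum integer code
clipped = np.clip(nums, -M, M)     # the meter saturates outside its range
codes = np.round((clipped + M) / (2 * M) * F)   # integer codes 0..F
meas = codes / F * 2 * M - M       # back to volts, now limited to N bits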

Easier method to compute minimal perfect hash?

I have smallish(?) sets (ranging in count from 0 to 100) of unsigned 32-bit integers. For a given set, I want to come up with minimal parameters that describe a minimal(istic) perfect hash of that set. A high-level version of the code I used to experiment with the idea ended up something like:
def murmur(key, seed=0x0):
    # Implements 32-bit murmur3 hash...
    return theHashedKey
sampleInput = [18874481, 186646817, 201248225, 201248705, 201251025, 201251137, 201251185, 184472337, 186649073, 201248625, 201248721, 201251041, 201251153, 184473505, 186649089, 201248657, 201251009, 201251057, 201251169, 186646818, 201248226, 201248706, 201251026, 201251138, 201251186, 186649074, 201248626, 201248722, 201251042, 201251154, 186649090, 201248658, 201251010, 201251058, 201251170]
for seed in range(11111):  # arbitrary upper seed limit
    for modulus in range(len(sampleInput), 10000):  # a modulus smaller than the set can never be perfect
        hashSet = set(murmur(x, seed=seed) % modulus for x in sampleInput)
        if len(hashSet) == len(sampleInput):
            print('minimal modulus', modulus, 'for seed', seed)
            break
This is just basic pseudocode for a two-axis brute-force search. By keeping track of the different values, I can find seed and modulus pairs that give a perfect hash and then select the pair with the smallest modulus.
It seems to me that there should be a more elegant/deterministic way to come up with these values, but that's where my math skills overflow.
I'm experimenting in Python right now, but ultimately want to implement something in C on a small embedded platform.
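For reference, a self-contained Python sketch of the same brute-force search. The single-block murmur3_32 below is an assumption on my part (any 32-bit murmur3 would do; this one handles exactly one 4-byte key, which is all a 32-bit integer needs). Iterating moduli in the outer loop means the first hit is already the minimal modulus, so no bookkeeping across seeds is needed:

def murmur3_32(key, seed=0):
    # 32-bit MurmurHash3 of a single 4-byte little-endian block.
    c1, c2 = 0xCC9E2D51, 0x1B873593
    k = (key * c1) & 0xFFFFFFFF
    k = ((k << 15) | (k >> 17)) & 0xFFFFFFFF   # rotate left 15
    k = (k * c2) & 0xFFFFFFFF
    h = seed ^ k
    h = ((h << 13) | (h >> 19)) & 0xFFFFFFFF   # rotate left 13
    h = (h * 5 + 0xE6546B64) & 0xFFFFFFFF
    h ^= 4                                     # input length in bytes
    h ^= h >> 16                               # final avalanche mix
    h = (h * 0x85EBCA6B) & 0xFFFFFFFF
    h ^= h >> 13
    h = (h * 0xC2B2AE35) & 0xFFFFFFFF
    h ^= h >> 16
    return h

def find_perfect_hash(keys, max_seed=11111, max_modulus=10000):
    # Smallest modulus first, then seeds, so the first hit is minimal.
    for modulus in range(len(keys), max_modulus):
        for seed in range(max_seed):
            if len({murmur3_32(k, seed) % modulus for k in keys}) == len(keys):
                return seed, modulus
    return None

print(find_perfect_hash(sampleInput))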

What is the meaning of the point (.) in a division operation in MATLAB?

What does this code mean?
k = round(Q/12. + Q/123.)-1;
I couldn't understand why that point (.) is needed.
The code is from an RSA implementation, as part of calculating a coprime number.
The decimal point does not do anything here. It is probably the result of someone porting the code from another language with different data type conventions.
As hbaderts said, in Matlab the default numeric type is double precision; other numeric types must be explicitly set. You can test this yourself:
>> x = 123;
>> whos x
  Name      Size            Bytes  Class     Attributes
  x         1x1                 8  double
You will often see the dot (.) immediately preceding the division, multiplication, or power sign (./, .*, .^); there it means an elementwise operation. Here, though, the dot follows the digits, so it is simply the decimal point of the literals 12. and 123. (that is, 12.0 and 123.0).
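A quick Python/NumPy illustration of the difference (the MATLAB equivalents are shown in the comments):

import numpy as np

print(12. == 12.0)   # True: the trailing dot is just a decimal point

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
print(A @ B)   # matrix product      (MATLAB: A*B)
print(A * B)   # elementwise product (MATLAB: A.*B)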

Fortran 90: reading an array of real numbers

I have a list of real data in a file. The data looks like this:
25.935
25.550
24.274
29.936
23.122
27.360
28.154
24.320
28.613
27.601
29.948
29.367
I wrote Fortran 90 code to read this data into an array, as below:
PROGRAM autocorr
  implicit none
  INTEGER, PARAMETER :: TRUN=4000, TCOR=1800
  real, dimension(TRUN) :: angle
  real :: temp, temp2, average1, average2
  integer :: i, j, p, q, k, count1, t, count2
  REAL, DIMENSION(0:TCOR) :: ACF
  !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
  open(100, file="fort.64", status="old")
  do k = 1, TRUN
    read(100,*) angle(k)
  end do
Then, when I print again to see the values, I get
25.934999
25.549999
24.274000
29.936001
23.122000
27.360001
28.153999
24.320000
28.613001
27.601000
29.948000
29.367001
32.122002
33.818001
21.837000
29.283001
26.489000
24.010000
27.698000
30.799999
36.157001
29.034000
34.700001
26.058001
29.114000
24.177000
25.209000
25.820999
26.620001
29.761000
May I know why the values now show six decimal places?
How can I avoid this effect so that it doesn't affect the calculation results?
I appreciate any help.
Thanks
You don't show the statement you use to write the values out again. I suspect, therefore, that you've used Fortran's list-directed output, something like this
write(output_unit,*) angle(k)
If you have done this, you have surrendered control of how many digits the program displays to the compiler. That's what the use of * in place of an explicit format means: the standard says that the compiler can use any reasonable representation of the number.
What you are seeing, therefore, is your numbers displayed with 8 significant figures, which is about what single-precision floating-point numbers provide. If you want to display the numbers with only 3 digits after the decimal point, you could write
write(output_unit,'(f8.3)') angle(k)
or some variation thereof.
You've declared angle to be of type real; unless you've overridden the default with a compiler flag, this means that you are using single-precision IEEE 754 floating-point numbers (on anything other than an exotic computer). Bear in mind, too, that most real (in the mathematical sense) numbers do not have an exact representation in floating-point, and that the single-precision approximation to the exact number 25.935 is likely to be 25.934999; the other numbers you print seem to be the floating-point approximations of the numbers your program reads.
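You can check this outside Fortran; a quick Python round-trip through a 32-bit float shows exactly which value gets stored:

import struct

# Round-trip 25.935 through a 32-bit float, as Fortran's default real does.
stored = struct.unpack('<f', struct.pack('<f', 25.935))[0]
print(stored)            # 25.934999465942383: the nearest single-precision value
print(f"{stored:.6f}")   # 25.934999, matching the list-directed output above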
If you really want to compute your results with a lower precision, then you are going to have to employ some clever programming techniques.

Questions about using Accelerate.framework

I had a few questions about the Accelerate framework.
What is the difference between Single-Precision Float, Single-Precision Complex, Double-Precision Float, and Double-Precision Complex? And what should I be using for a simple struct like:
struct vector
{
    float x;
    float y;
    float z;
};
Also, can someone explain what each of the arguments to this sample function means?
void cblas_cdotc_sub(
    const int N,
    const void *X,
    const int incX,
    const void *Y,
    const int incY,
    void *dotc
);
Apple's descriptions are a little unclear to me. What do they mean by the length N? Is that the size of the vector in bytes, or the actual spatial length of the vector?
Complex variables are two-dimensional quantities, normally treated as the real and imaginary parts of complex numbers in arithmetic/math operations.
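For reference, under the general BLAS conventions the c in cblas_cdotc_sub means single-precision complex, N is the number of complex elements (not bytes, and not a spatial length), and incX/incY are element strides. A few lines of Python can model what the routine computes, the conjugated dot product:

# dotc = sum over i of conj(X[i*incX]) * Y[i*incY], for N complex elements
def cdotc(N, X, incX, Y, incY):
    return sum(X[i * incX].conjugate() * Y[i * incY] for i in range(N))

X = [1 + 2j, 3 + 4j]
Y = [5 + 6j, 7 + 8j]
print(cdotc(2, X, 1, Y, 1))   # (70-8j)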
IEEE single and double floats allow differing amounts of binary precision (roughly, the number of significant digits free of rounding error): very approximately 7 digits for single, around twice that for double, plus a wider exponent range as well.
But single-precision arithmetic runs a lot faster on current iOS devices than double does (unlike in the Simulator, where the two may run at about the same speed).
Apple's descriptions may require some basic knowledge of C data types, arrays and structures, and the mathematical theory of complex variables. I'd start by reading some books on basic C programming and numerical algorithms in C.