Any Software to convert float to any-precision FPU? [or Matlab solution] - matlab

I want to convert a lot of float numbers to a multiple-precision floating-point format like '0x4049000000.....', perform calculations, change the precision, perform calculations again, and so on...
I know the theory (as described here), but I want software, an online tool, or a Matlab solution to convert 6000+ numbers in the range 0.0 to 51.0 to a floating-point format (like IEEE single precision).
Any Suggestions?
Note: I need custom precision, where I can specify the number of digits of the Mantissa and the Exponent.
EDIT: It is also called Radix-Independent Floating-Point as described here and here
2nd EDIT: IEEE single precision convert and IEEE double precision convert are examples. You enter any float number, e.g. 3.1454, and you get its IEEE (single- or double-precision) float value in binary/hex.

A quick look at the VHDL-2008 floating point library float_pkg shows that it instantiates a generic package with generic parameters set to match IEEE single precision floats.
package float_pkg is new IEEE.float_generic_pkg (...)
You should find this library as part of your simulator installation, wherever you usually look for the standard libraries such as numeric_std. On my system it is at /opt/ghdl-0.32/lib/gcc/x86_64-unknown-linux-gnu/4.9.2/vhdl/src/ieee2008/float_pkg.vhdl - if you can't find it on your system, it's available online; a search for "VHDL 2008 float package" ought to get you there.
You can instantiate this generic package (using float_pkg as an example) for any precision you like (within reasonable limits).
A quick look at IEEE.float_generic_pkg shows that it declares functions to_float and to_real, I'd like to think they have the obvious behaviour.
So the answer is ... yes.
-- real to float
function to_float (
  arg                  : REAL;
  size_res             : UNRESOLVED_float;
  constant round_style : round_type := float_round_style;  -- rounding option
  constant denormalize : BOOLEAN    := float_denormalize)  -- use IEEE extended FP
  return UNRESOLVED_float;
and
-- float to real
function to_real (
  arg                  : UNRESOLVED_float;              -- floating point input
  constant check_error : BOOLEAN := float_check_error;  -- check for errors
  constant denormalize : BOOLEAN := float_denormalize)  -- use IEEE extended FP
  return REAL;
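If you only need the raw IEEE bit patterns (as in the 3.1454 example from the question) without running a simulator, the same single- and double-precision conversions can be done with a short Python sketch using the standard struct module; the helper names here are my own:

```python
import struct

def float_to_hex32(x):
    # pack as big-endian IEEE-754 single precision, show the bytes as hex
    return struct.pack('>f', x).hex()

def float_to_hex64(x):
    # same, but IEEE-754 double precision
    return struct.pack('>d', x).hex()

print(float_to_hex32(3.1454))  # single-precision bit pattern
print(float_to_hex64(3.1454))  # double-precision bit pattern
```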

float_pkg should be all you need. It gives you some predefined floating-point types (e.g., IEEE single and double precision), plus the ability to define custom floating-point values with an arbitrary number of bits for the fraction and the exponent.
Based on your numeric example, here is some code to convert from a real value to single, double, and quadruple precision floats.
library ieee;
use ieee.float_pkg.all;

entity floating_point_demo is
end;

architecture example of floating_point_demo is
begin
  process
    variable real_input_value: real := 49.0215463456;
    variable single_precision_float: float32;
    variable double_precision_float: float64;
    variable quadruple_precision_float: float128;
  begin
    single_precision_float := to_float(real_input_value, single_precision_float);
    report to_string(single_precision_float);
    report to_hstring(single_precision_float);

    double_precision_float := to_float(real_input_value, double_precision_float);
    report to_string(double_precision_float);
    report to_hstring(double_precision_float);

    quadruple_precision_float := to_float(real_input_value, quadruple_precision_float);
    report to_string(quadruple_precision_float);
    report to_hstring(quadruple_precision_float);

    wait;
  end process;
end;
The example above uses types float32, float64, and float128 from float_pkg. However, you can achieve the same effect using objects of type float, whose size can be defined at their declarations:
variable single_precision_float: float(8 downto -23);
variable double_precision_float: float(11 downto -52);
variable quadruple_precision_float: float(15 downto -112);
To convert from a float to a real value, you can use the to_real() function:
-- To print the real value of a float object:
report to_string(to_real(quadruple_precision_float));
-- To convert from a float and assign to a real:
real_value := to_real(quadruple_precision_float);
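Absent a simulator, you can also sanity-check what a custom exponent/fraction split looks like bit by bit. Here is a minimal Python sketch of the packing step (my own helper, not part of float_pkg); it handles only normal numbers and ignores subnormals, infinities, NaN, and out-of-range exponents:

```python
import math

def encode_float(value, exp_bits, frac_bits):
    # Pack `value` into a sign/exponent/fraction word with the given
    # field widths. Sketch only: normal numbers, round-to-nearest.
    if value == 0.0:
        return 0
    sign = 0
    if value < 0:
        sign = 1
        value = -value
    m, e = math.frexp(value)  # value = m * 2**e, with 0.5 <= m < 1
    e -= 1                    # rewrite as 1.f * 2**e
    m *= 2.0                  # now 1.0 <= m < 2.0
    bias = (1 << (exp_bits - 1)) - 1
    frac = round((m - 1.0) * (1 << frac_bits))
    if frac == (1 << frac_bits):  # rounding carried into the exponent
        frac = 0
        e += 1
    return (sign << (exp_bits + frac_bits)) | ((e + bias) << frac_bits) | frac

# IEEE single precision is exp_bits=8, frac_bits=23:
print(hex(encode_float(49.0, 8, 23)))  # 0x42440000
```

The same call with, say, exp_bits=5 and frac_bits=10 gives the half-precision layout, mirroring what float_pkg's float(4 downto -10) would hold.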

Related

Why doesn't 'd0 extend the full width of the signal (as '0 does)?

Using SystemVerilog and Modelsim SE 2020.1, I was surprised to see the following behavior.
bus_address is a 64-bit signal: input logic [63:0] bus_address
Using '0:
.bus_address ('0),
Using 'd0:
.bus_address ('d0),
Riviera-Pro 2020.04 (too buggy, we gave up using it and we are in a dispute with Aldec):
'd0:
'0:
Investigation/Answer:
11.3.3 Using integer literals in expressions: An unsized, based integer (e.g., 'd12, 'sd12)
5.7.1 Integer literal constants:
The number of bits that make up an unsized number (which is a simple
decimal number or a number with a base specifier but no size
specification) shall be at least 32. Unsized unsigned literal
constants where the high-order bit is unknown ( X or x ) or
three-state ( Z or z ) shall be extended to the size of the expression
containing the literal constant.
That was tricky; I thought it would set all the other bits to 0, as '0 does.
I hope the spec's authors will think harder before defining such nonsensical behaviors.
This problem has more to do with port connections with mismatched sizes than anything to do with numeric literals. It's just that the issue does not present itself when using the fill literals, because a fill literal automatically sizes itself, eliminating the port width mismatch.
The problem you see exists whether you use literals or other signals like in this example:
module top;
  wire [31:0] a = 0;
  dut d(a);
endmodule

module dut(input wire [63:0] p1);
  initial $strobeb(p1);
endmodule
According to section 23.3.3.7 Port connections with dissimilar net types (net and port collapsing), the nets a and p1 might get merged into a single 64-bit net, but only the lower 32-bits remain driven, or 64'hzzzzzzzz00000000.
If you change the port connection to a sized literal, dut d(32'b0);, you see the same behavior 64'hzzzzzzzz00000000.
Now let's get back to the unsized numeric literal 'd0. Unsized is a misnomer: all numbers have a size. It's just that the size is implicit and never the size you want it to be. 😮 How many people write {'b1,'b0,'b1,'b0} thinking they've just written the same thing as 4'b1010? This is actually illegal per the LRM, but some tools silently interpret it as {32'b1,32'b0,32'b1,32'b0}.
Just never use an unsized literal.
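To see how far off the implicit sizing puts you, model each unsized 'b1/'b0 as a 32-bit field, as the tools above do: the concatenation then occupies 128 bits, with ones at bit 96 and bit 32, nowhere near 4'b1010. A quick Python sketch of that arithmetic (the widths are the point, not the language):

```python
# Each unsized literal self-sizes to (at least) 32 bits, so the
# concatenation {'b1,'b0,'b1,'b0} behaves like {32'b1,32'b0,32'b1,32'b0}:
# a 128-bit value with bit 96 and bit 32 set.
interpreted = (1 << 96) | (0 << 64) | (1 << 32) | 0
intended = 0b1010  # what the author meant: 4'b1010

print(hex(interpreted))
print(interpreted.bit_length())  # 97 significant bits, not 4
```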

Function of $clog2(N) in Mojo IDE

I am a beginner in this, but I was wondering what exactly is the function of $clog2(N) in general? Some websites say that it is the number of address bits needed for a memory of size N and not the number of bits needed to express the value N. What does that mean?
IEEE Std 1800-2012 § 20.8.1 Integer math functions
The system function $clog2 shall return the ceiling of the log base 2 of the argument (the log rounded up to an integer value). The argument can be an integer or an arbitrary sized vector value. The argument shall be treated as an unsigned value, and an argument value of 0 shall produce a result of 0.
This system function can be used to compute the minimum address width necessary to address a memory of a given size or the minimum vector width necessary to represent a given number of states.
For example:
integer result;
result = $clog2(n);
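The distinction the question asks about can be seen with a direct Python translation of $clog2 (ceiling of log base 2, with an argument of 0 producing 0):

```python
def clog2(n):
    # ceiling of log2(n); per the LRM, an argument of 0 returns 0
    return 0 if n <= 1 else (n - 1).bit_length()

# Address width for a memory of size N is clog2(N):
# a 256-entry memory has addresses 0..255, which fit in 8 bits.
print(clog2(256))           # 8
# But representing the *value* 256 itself takes one more bit.
print((256).bit_length())   # 9
```

That is why $clog2(N) is described as the number of address bits for a memory of size N rather than the number of bits needed to express the value N: the largest address is N-1, not N.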

fortran90 reading array with real numbers

I have a list of real data in a file. The data looks like this:
25.935
25.550
24.274
29.936
23.122
27.360
28.154
24.320
28.613
27.601
29.948
29.367
I wrote Fortran 90 code to read this data into an array, as below:
PROGRAM autocorr
  implicit none
  INTEGER, PARAMETER :: TRUN = 4000, TCOR = 1800
  real, dimension(TRUN) :: angle
  real :: temp, temp2, average1, average2
  integer :: i, j, p, q, k, count1, t, count2
  REAL, DIMENSION(0:TCOR) :: ACF
  !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
  open(100, file="fort.64", status="old")
  do k = 1, TRUN
    read(100,*) angle(k)
  end do
Then, when I print again to see the values, I get
25.934999
25.549999
24.274000
29.936001
23.122000
27.360001
28.153999
24.320000
28.613001
27.601000
29.948000
29.367001
32.122002
33.818001
21.837000
29.283001
26.489000
24.010000
27.698000
30.799999
36.157001
29.034000
34.700001
26.058001
29.114000
24.177000
25.209000
25.820999
26.620001
29.761000
May I know why the values now have 6 decimal places?
How can I avoid this effect so that it doesn't affect the calculation results?
Appreciate any help.
Thanks
You don't show the statement you use to write the values out again. I suspect, therefore, that you've used Fortran's list-directed output, something like this
write(output_unit,*) angle(k)
If you have done this, you have surrendered control of how many digits the program displays to the compiler. That's what the use of * in place of an explicit format means: the standard says that the compiler can use any reasonable representation of the number.
What you are seeing, therefore, is your numbers displayed with 8 significant figures, which is about what single-precision floating-point numbers provide. If you wanted to display the numbers with only 3 digits after the decimal point, you could write
write(output_unit,'(f8.3)') angle(k)
or some variation thereof.
You've declared angle to be of type real; unless you've overridden the default with a compiler flag, this means that you are using single-precision IEEE754 floating-point numbers (on anything other than an exotic computer). Bear in mind too that most real (in the mathematical sense) numbers do not have an exact representation in floating-point, and that the single-precision approximation to the exact number 25.935 is likely to be 25.934999; the other numbers you print seem to be the floating-point approximations to the numbers your program reads.
If you really want to compute your results with a lower precision, then you are going to have to employ some clever programming techniques.
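The "extra digits" are not introduced by the read; they are the nearest single-precision values to the numbers in the file. A quick check in Python (emulating 4-byte float storage with the standard struct module) shows the value actually stored for 25.935:

```python
import struct

def as_float32(x):
    # Round-trip through 4-byte IEEE single precision -- the storage
    # format of a default Fortran `real` on most compilers.
    return struct.unpack('f', struct.pack('f', x))[0]

print(as_float32(25.935))  # the stored value, approximately 25.934999
```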

Getting double precision in fortran 90 using intel 11.1 compiler

I have a very large code, written in Fortran, that sets up and iteratively solves a system of non-linear partial differential equations. I need all variables to be double precision. In the additional module that I have written for the code, I declare all variables as double precision, but my module still uses variables from the old source code that are declared as type real. So my question is: what happens when a single-precision variable is multiplied by a double-precision variable in Fortran? Is the result double precision if the variable used to store the value is declared as double precision? And what if a double-precision value is multiplied by a constant without the "D0" at the end? Can I just set a compiler option in Intel 11.1 to make all reals and constants double precision?
So my question is, what happens when a single-precision variable is multiplied by a double precision variable in fortran? The single-precision value is promoted to double precision and the operation is done in double precision.
Is the result double precision if the variable used to store the value is declared as double precision? Not necessarily. The right-hand side is an expression that doesn't "know" about the precision of the variable on the left-hand side, into which it will be stored. If you have Double = SingleA * SingleB (using names to indicate the types), the calculation will be performed in single precision, then converted to double for storage. This will NOT gain extra precision for the calculation!
And what if a double precision value is multiplied by a constant without the "D0" at the end? This is just like the first question: the constant will be promoted to double precision and the calculation done in double precision. However, the constant itself is stored in single precision, so even if you write down as many digits as a double-precision constant would hold, the stored value cannot represent that accuracy. For example, DoubleVar * 3.14159265359 will be calculated in double precision, but will be something approximating DoubleVar * 3.14159 done in double precision.
If you want the compiler to retain many digits in a constant, you must specify the precision of the constant. The Fortran 90 way to do this is to define your own real kind with whatever precision you need, e.g., to require at least 14 decimal digits:
integer, parameter :: DoubleReal_K = selected_real_kind (14)
real (DoubleReal_K) :: A
A = 5.0_DoubleReal_K
A = A * 3.14159265359_DoubleReal_K
The Fortran standard is very specific about this; other languages are like this, too, and it's really what you'd expect. If an expression contains an operation on two floating-point variables of different precisions, then the expression is of the type of the higher-precision operand. E.g.,
(real variable) + (double variable) -> (double)
(double variable)*(real variable) -> (double)
(double variable)*(real constant) -> (double)
etc.
Now, if you are storing the result in a lower-precision floating-point variable, it'll get down-converted again. But if you are storing it in a variable of the higher precision, it'll maintain its precision.
If there are any cases where you're concerned that a single-precision floating-point variable is causing a problem, you can force it to be converted to double precision using the DBLE() intrinsic:
DBLE(real variable) -> double
If you write a number in the form 0.1D0, it will be treated as a double-precision number; if you just write 0.1, precision is lost in the conversion.
Here is an example:
program main
  implicit none
  real(8) a, b, c
  a = 0.2D0
  b = 0.2
  c = 0.1*a
  print *, a, b, c
end program
When compiled with
ifort main.f90
I get results:
0.200000000000000 0.200000002980232 2.000000029802322E-002
When compiled with
ifort -r8 main.f90
I get results:
0.200000000000000 0.200000000000000 2.000000000000000E-002
If you use the IBM XLF compiler, the equivalent is
xlf -qautodbl=dbl4 main.f90
Jonathan Dursi's answer is correct - the other part of your question was if there was a way to make all real variables double precision.
You can accomplish this with the ifort compiler by using the -i8 (for integers) and -r8 (for reals) options. I'm not sure if there is a way to force the compiler to interpret literals as double precision without specifying them as such (e.g. by changing 3.14159265359 to 3.14159265359D0); we ran into this issue a while back.
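The 0.2 example above can be reproduced outside Fortran: widening the single-precision approximation of 0.2 to double precision preserves the single-precision error, which is exactly what b = 0.2 did without -r8. A Python sketch, using the standard struct module to emulate the single-precision store:

```python
import struct

def through_float32(x):
    # Store x in IEEE single precision, then widen back to double --
    # the same thing assigning the single-precision literal 0.2 to a
    # real(8) variable does.
    return struct.unpack('f', struct.pack('f', x))[0]

a = 0.2                   # like a = 0.2D0: double precision all the way
b = through_float32(0.2)  # like b = 0.2 without -r8
print(a)  # 0.2
print(b)  # 0.20000000298023224
```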

iPhone/Obj-C: Why does converting float to int via (int)(float * 100) not work?

In my code, I am using float to do currency calculations, but the rounding has yielded undesired results, so I am trying to convert it all to int. With as little change to the infrastructure as possible, in my init function, I did this:
-(id)initWithPrice:(float)p
{
    if ((self = [super init])) {
        [self setPrice:(int)(p * 100)];
    }
    return self;
}
I multiply by 100 because in the main section the values are given as .xx, to 2 decimals. One abnormality I notice is that for the float 1.18, the int comes out as 117. Does anyone know why it does that? The float stays 1.18; I expect the int equivalent in cents to be 118.
Thanks.
Floating point is always a little imprecise. With IEEE floating-point encoding, powers of two can be represented exactly (like 4, 2, 1, 0.5, 0.25, 0.125, 0.0625, ...), but numbers like 0.1 are always an approximation (just try representing it as a finite sum of powers of 2).
Your (int) cast will truncate whatever comes in, so if p*100 resolves to 117.9999995 due to this imprecision, it will become 117 instead of 118.
A better solution is to use something like roundf on p*100. Better still, go upstream and fully convert the program to fixed-point math using integers.
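The truncation is easy to reproduce: the closest single-precision value to 1.18 sits slightly below 1.18, so p*100 lands just under 118, the (int) cast chops it to 117, and rounding first gives 118. A Python sketch emulating the float arithmetic with the standard struct module:

```python
import struct

def f32(x):
    # emulate storing x in a C `float` (IEEE single precision)
    return struct.unpack('f', struct.pack('f', x))[0]

p = f32(1.18)
print(p * 100)         # just under 118
print(int(p * 100))    # truncates to 117 -- the bug in the question
print(round(p * 100))  # 118 -- what roundf(p*100) gives
```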