Integer to real conversion function

Is there a common conversion function to convert an integer type object to a real type in VHDL?
This is for a testbench, so synthesizability is a non-issue.

You can convert integer to real as follows:
signal i: integer;
signal R: Real;
...
R <= Real(i);
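For context, here is a minimal sketch of the same conversion used inside a testbench expression; the count/average signals are purely illustrative, not from the question:
signal count   : integer := 0;
signal average : real    := 0.0;
...
average <= real(count) / 2.0;  -- real(...) is the type conversion, then real arithmetic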


"operator symbol not allowed for generic subprogram" from Ada

I want to write a subprogram in Ada that adds up an array's elements.
The subprogram "Add_Data" has 3 parameters:
first parameter = generic type array (array of INTEGER or array of REAL)
second parameter = INTEGER (size of array)
third parameter = generic type sum (array of INTEGER -> sum will be INTEGER, array of REAL -> sum will be REAL)
I wrote it on ideone.com.
(For now I just want to see the result with an array of INTEGER; after that, I will test with an array of REAL.)
With Ada.Text_IO; Use Ada.Text_IO;
With Ada.Integer_Text_IO; Use Ada.Integer_Text_IO;

procedure test is
   generic
      type T is private;
      type Tarr is array (INTEGER range <>) of T;
      --function "+" (A,B : T) return T;
      --function "+" (A, B : T) return T is
      --begin
      --   return (A+B);
      --end "+";
   procedure Add_Data(X : in Tarr; Y : in INTEGER; Z : in out T);

   procedure Add_Data(X : in Tarr; Y : in INTEGER; Z : in out T) is
      temp  : T;
      count : INTEGER;
   begin
      count := 1;
      loop
         temp := temp + X(count); -- <- This is the problem.
         count := count + 1;
         if count > Y then
            exit;
         end if;
      end loop;
      Z := temp;
   end Add_Data;

   type intArray is array (INTEGER range <>) of INTEGER;
   intArr : intArray := (1=>2, 2=>10, 3=>20, 4=>30, 5=>8);
   sum    : INTEGER;

   procedure intAdd is new Add_Data(Tarr=>intArray, T=>INTEGER);
begin
   sum := 0;
   intAdd(intArr, 5, sum);
   put (sum);
end test;
When I don't overload operator "+", it produces this error:
"There is no applicable operator "+" for private type "T" defined."
What can I do about this?
If a generic’s formal type is private, then nothing in the generic can assume anything about the type except that it can be assigned (:=) and that it can be compared for equality (=) and inequality (/=). In particular, no other operators (e.g. +) are available in the generic unless you provide them.
The way to do that is
generic
type T is private;
with function "+" (L, R : T) return T is <>;
This tells the compiler that (a) there is a function "+" which takes two T’s and returns a T; and (b) if the actual T has an operator "+" which matches that profile, to allow it as the default.
So, you could say
procedure intAdd is new Add_Data (T => Integer, ...
or, if you didn’t feel like using the default,
procedure intAdd is new Add_Data (T => Integer, "+" => "+", ...
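Putting it together, a minimal sketch of the generic declaration and instantiation might look like this (identifiers follow the question; the body is the one from the question, which now compiles because "+" on T is visible):
generic
   type T is private;
   type Tarr is array (Integer range <>) of T;
   with function "+" (L, R : T) return T is <>;
procedure Add_Data (X : in Tarr; Y : in Integer; Z : in out T);

procedure intAdd is new Add_Data (T => Integer, Tarr => intArray);
-- Integer's predefined "+" is taken as the default thanks to "is <>"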
In addition to not knowing how to declare a generic formal subprogram (Wright has shown how to do this for functions), your code has a number of other issues that, if addressed, might help you move from someone who thinks in another language and translates it into Ada to someone who actually uses Ada. Presuming that you want to become such a person, I will point some of these out.
You declare your array types using Integer range <>. It's more common in Ada to use Positive range <>, because people usually refer to positions starting from 1: 1st, 2nd, 3rd, ...
Generics are used for code reuse, and in real life, such code is often used by people other than the original author. It is good practice not to make unstated assumptions about the values clients will pass to your operations. You assume that, for Y > 0, for all I in 1 .. Y => I in X'range, and for Y < 1, 1 in X'range. While this is true for the values you use, it's unlikely to be true for all uses of the procedure. For example, when an array is used as a sequence, as it is here, the indices are immaterial, so it's more natural to write your array aggregate as (2, 10, 20, 30, 8). If I do that, Intarr'First = Integer'First and Intarr'Last = Integer'First + 4, both of which are negative. Attempting to index this with 1 will raise Constraint_Error.
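For instance (a hedged illustration of that point, not code from the question):
Intarr : constant intArray := (2, 10, 20, 30, 8);
-- Intarr'First = Integer'First, Intarr'Last = Integer'First + 4,
-- so Intarr (1) raises Constraint_Error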
Y is declared as Integer, which means that zero and negative values are acceptable. What does it mean to pass -12 to Y? Ada's subtypes help here; if you declare Y as Positive, trying to pass non-positive values to it will fail.
Z is declared mode in out, but the input value is not referenced. This would be better as mode out.
Y is not needed. Ada has real arrays; they carry their bounds around with them as X'First, X'Last, and X'Length. Trying to index an array outside its bounds is an error (no buffer overflow vulnerabilities are possible). The usual way to iterate over an array is with the 'range attribute:
for I in X'range loop
This ensures that I is always a valid index into X.
Temp is not initialized, so it will typically hold "stack junk". You should expect to get different results for different calls with the same inputs.
Instead of
if count > Y then
exit;
end if;
it's more usual to write exit when Count > Y;
Since your procedure produces a single, scalar output, it would be more natural for it to be a function:
generic -- Sum
   type T is private;
   Zero : T;
   type T_List is array (Positive range <>) of T;
   with function "+" (Left : T; Right : T) return T is <>;
function Sum (X : T_List) return T;

function Sum (X : T_List) return T is
   Result : T := Zero;
begin -- Sum
   Add_All : for I in X'range loop
      Result := Result + X (I);
   end loop Add_All;

   return Result;
end Sum;
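A possible instantiation and call, sketched to match the question's data (the names Int_List and Int_Sum are mine):
type Int_List is array (Positive range <>) of Integer;
function Int_Sum is new Sum (T => Integer, Zero => 0, T_List => Int_List);

List  : constant Int_List := (2, 10, 20, 30, 8);
Total : constant Integer  := Int_Sum (List);  -- 70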
HTH

Why can't a Chisel UInt(32.W) take an unsigned number whose 32nd bit happens to be 1?

UInt is defined as an unsigned integer type, but in this case it seems like the MSB is still treated as a sign bit. The most relevant Q&A is Chisel UInt negative value error, which works out a workaround but not the "why". Could you enlighten me about the "why"?
UInt seems to be defined in chisel3/chiselFrontend/src/main/scala/chisel3/core/Bits.scala, but I cannot understand the details. Is UInt derived from Bits, and is Bits derived from Scala's Int?
The simple answer is that this is due to how Scala evaluates things.
Consider an example like
val x = 0xFFFFFFFF.U
This statement causes an error.
UInt literals are represented internally by BigInts, but 0xFFFFFFFF specifies an Int value, and 0xFFFFFFFF is equivalent to the Int value -1.
That Int value -1 is converted to BigInt -1, and -1.U is illegal because the .U literal-creation method will not accept negative values.
Adding an L suffix fixes this because 0xFFFFFFFFL is a positive Long value.
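For instance (a small sketch; the value names are illustrative):
val y = 0xFFFFFFFFL.U          // Long literal 4294967295, so this is a legal UInt literal
val z = 0xFFFFFFFFL.U(32.W)    // same value with an explicit 32-bit width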
The issue is that Scala only has signed integers; it does not have an unsigned integer type. From the REPL:
scala> 0x9456789a
res1: Int = -1806272358
Thus, Chisel only sees the negative number. UInts obviously cannot be negative so Chisel reports an error.
You can always cast from an SInt to a UInt if you want the raw 2's complement representation of a negative number interpreted as a UInt, e.g.
val a = -1.S(32.W).asUInt
assert(a === "xffffffff".U)

Passing Delphi 2D dynamic array to C DLL

I am trying to use a DLL written in C, wrapping a Matlab DLL.
The function is defined in C as:
int wmlfLevel1(double* input2D, int size, char* message, double** output2d)
and my Delphi function is defined as:
func: function (input2d: pDouble; size: integer; messag: PAnsiChar; output2d: ppDouble): integer; cdecl;
I have defined these types to pass a matrix to the DLL:
type
TDynArrayOfDouble = array of double;
type
T2DDynArrayOfDouble = array of TDynArrayOfDouble;
type
ppDouble = ^pDouble;
and am calling the DLL function like this:
var
in2d: T2DDynArrayOfDouble;
size: integer;
msg: PAnsiChar;
out2d: T2DDynArrayOfDouble;
begin
size:= 3;
SetLength(in2d, size, size);
SetLength(out2d, size, size);
in2d[0][0]:= 1; ... // init matrix values
func(pDouble(in2d), size, msg, ppdouble(out2d));
The problem is that the output matrix is full of incorrect values (it should contain the input matrix multiplied by 2).
What have I missed?
I can successfully call this DLL function using a static array with the following code:
type
T2DArrayOfDouble = array [0..2, 0..2] of double;
type
pT2DArrayOfDouble = ^T2DArrayOfDouble;
type
ppT2DArrayOfDouble = ^pT2DArrayOfDouble;
func: function (input2d: pT2DArrayOfDouble; size: integer; messag: PAnsiChar; output2d: ppT2DArrayOfDouble): integer; cdecl;
...
var
in2d: T2DArrayOfDouble;
size: integer;
msg: PAnsiChar;
out2d: pT2DArrayOfDouble;
begin
size:= High(in2d) + 1;
in2d[0][0]:= 1; ... // init matrix values
func(@in2d, size, msg, @out2d);
double* input2D
That's a pointer to an array of double. But you pass it
in2d: T2DDynArrayOfDouble
which is a pointer to an array of pointers to arrays of double.
In other words, you are passing double** to a parameter that expects double*.
Solve the problem by changing
in2d: T2DDynArrayOfDouble
to
in2d: TDynArrayOfDouble
Obviously you'll have to change the SetLength call to match, as well as the code that populates the array (see the sketch below). Although one wonders if there isn't a deeper problem, given that the argument name suggests that, contrary to its type, it is a two-dimensional array.
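A hedged sketch of what the input side might then look like (the loop and values are illustrative; handling of the output parameter is left as in the original call and is not addressed here):
var
  in2d: TDynArrayOfDouble;   // one flat block of size*size doubles
  size, r, c: integer;
begin
  size := 3;
  SetLength(in2d, size * size);
  for r := 0 to size - 1 do
    for c := 0 to size - 1 do
      in2d[r * size + c] := 1.0;  // illustrative values; row-major layout assumed by the C side
  // the first argument then really is a double*:
  //   func(pDouble(in2d), size, msg, ...);
end;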

How to pass multiple data types as parameters to a function?

For example, I want to implement an inc function:
FUNCTION inc RETURNS INT (INPUT-OUTPUT i AS INT, AddExpression AS INT):
i = i + AddExpression.
END FUNCTION.
to use it like this:
inc(tt-data.qty,1).
I couldn't find how to overload my function for the DEC data type, or how to combine both in one. If possible, I also want my function to deal with CHAR, a kind of ADD-ENTRY. Maybe these basic functions are already implemented by someone? Something like the STLib on OEHive.
Plain old user-defined functions can only have a single signature. Your function definition is a bit "off". You are using an input-output parameter (which isn't "wrong" but it is odd) and you aren't returning a value -- which is wrong. It should look like this:
function inc returns integer ( input-output i as integer, addExpression as integer ):
i = i + addExpression.
return i.
end.
Procedures have somewhat more relaxed data-type rules and will do some type conversions automatically (such as an implied decimal to integer conversion). This would, for example, support passing a decimal that gets automatically rounded:
procedure z:
define input-output parameter i as integer no-undo.
define input parameter x as integer.
i = i + x.
return.
end.
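For example, a call like this (a sketch; the variable name is mine) passes a decimal literal to the integer parameter and relies on the automatic rounding:
define variable qty as integer no-undo initial 5.
run z ( input-output qty, input 1.6 ).  /* 1.6 is rounded to 2, so qty ends up 7 */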
You can overload method signatures if you create your function as a method of a class.
Something along these lines (untested):
class X:

   method public integer inc( input-output i as integer, input addExpression as integer ):
      i = i + addExpression.
      return i.
   end.

   method public integer inc( input-output i as integer, input addExpression as character ):
      i = i + integer( addExpression ).
      return i.
   end.

end.
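And a hedged usage sketch for those overloads (untested, like the class above; the variable names are mine):
define variable helper as class X no-undo.
define variable qty as integer no-undo initial 5.
helper = new X().
helper:inc( input-output qty, 1 ).     /* integer overload:   qty = 6  */
helper:inc( input-output qty, "10" ).  /* character overload: qty = 16 */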

How to read/write full 32 or 64 bits of an int or bigint bitmasked field in TSQL

Setting the 32nd and 64th bits is tricky.
32-bit Solution:
I got it to work for 32-bit fields. The trick is to cast the return value of the POWER function to binary(4) before casting it to int. If you try to cast directly to int, without first casting to binary(4), you will get an arithmetic overflow exception when operating on the 32nd bit (index 31). Also, you must ensure the expression passed to POWER is of a sufficiently large type (e.g. bigint) to store the maximum return value (2^31), or the POWER function will throw an arithmetic overflow exception.
CREATE FUNCTION [dbo].[SetIntBit]
(
    @bitfieldvalue int,
    @bitindex int, --(0 to 31)
    @bit bit --(0 or 1)
)
RETURNS int
AS
BEGIN
    DECLARE @bitmask int = CAST(CAST(POWER(CAST(2 as bigint), @bitindex) as binary(4)) as int);
    RETURN
        CASE
            WHEN @bit = 1 THEN (@bitfieldvalue | @bitmask)
            WHEN @bit = 0 THEN (@bitfieldvalue & ~@bitmask)
            ELSE @bitfieldvalue --NO CHANGE
        END
END
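A couple of quick checks of the function on the boundary bit (assuming it is created in the current database; the results follow from two's-complement int arithmetic):
select dbo.SetIntBit(0, 31, 1);   -- -2147483648 (0x80000000)
select dbo.SetIntBit(-1, 31, 0);  --  2147483647 (0x7FFFFFFF)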
64-bit Problem:
I was going to use a similar approach for 64-bit fields, however I'm finding that the POWER function is returning inaccurate values, despite using the decimal(38) type for the expression/return value.
For example: "select POWER(CAST(2 as decimal(38)), 64)" returns 18446744073709552000 (only the first 16 digits are accurate) rather than the correct value of 18446744073709551616. And even though I'd only raise 2 to the 63rd power, that result is still inaccurate.
The documentation of the POWER function indicates that "Internal conversion to float can cause loss of precision if either the money or numeric data types are used." (note that numeric type is functionally equivalent to decimal type).
I think the only way to handle 64-bit fields properly is to operate on their 32-bit halves, but that involves an extra check on the @bitindex parameter to see which half I need to operate on. Are there any built-in functions or better ways to explicitly set those final bits in 32-bit and 64-bit bitmasked fields in TSQL?
64-bit Solution:
So far, the simplest solution I can come up with to my own question is to add a special case for the problematic computation of the bitmask for the 64th bit (i.e. 2^63), where that bitmask value is hardcoded so that it does not have to be computed by POWER. POWER computes 2^62 and smaller values accurately as far as I can see.
CREATE FUNCTION [dbo].[SetBigIntBit]
(
    @bitfieldvalue bigint,
    @bitindex int, --(0 to 63)
    @bit bit --(0 or 1)
)
RETURNS bigint
AS
BEGIN
    DECLARE @bitmask bigint = CASE WHEN @bitindex = 63 THEN CAST(0x8000000000000000 as bigint)
                                   ELSE CAST(CAST(POWER(CAST(2 as bigint), @bitindex) as binary(8)) as bigint)
                              END;
    RETURN
        CASE
            WHEN @bit = 1 THEN (@bitfieldvalue | @bitmask)
            WHEN @bit = 0 THEN (@bitfieldvalue & ~@bitmask)
            ELSE @bitfieldvalue --NO CHANGE
        END
END
EDIT: Here's some code to test the above function...
declare @bitfield bigint = 0;
print @bitfield;
declare @bitindex int;

set @bitindex = 0;
while @bitindex < 64
begin
    set @bitfield = tutor.dbo.SetBigIntBit(@bitfield, @bitindex, 1);
    print @bitfield;
    set @bitindex = @bitindex + 1;
end

set @bitindex = 0;
while @bitindex < 64
begin
    set @bitfield = tutor.dbo.SetBigIntBit(@bitfield, @bitindex, 0);
    print @bitfield;
    set @bitindex = @bitindex + 1;
end