Internal representation of Enumeration Types in NuSMV/NuXMV - smt

Why is there a significant performance drop when representing a 16-bit signed integer variable as an interval (-32768..32767) compared to fixed-length bit arrays?
Inspecting the pre-processed NuSMV/NuXMV model, one can observe that the interval types are converted to enumerations.
The BDD statistics, however, do not show any relevant information.

Related

SystemVerilog: Data types and display of default size of data type

How can I display the size of a 'real' (or 'float') in system verilog?
$bits can display the size of int, shortint, longint, time, integer, etc., but cannot do the same for a real.
You cannot select individual bits of a real number, nor is there any other construct that requires knowing the number of bits in a real number. So SystemVerilog does not need to provide a way to tell you.
real is not a synthesizable Verilog type. It is intended for testbenches or analog calculations, not for design. Therefore it has no bit size associated with it.
However, from the LRM:
The real data type is the same as a C double. The shortreal data type is the same as a C float. The realtime declarations shall be treated synonymously with real declarations and can be used interchangeably. Variables of these three types are collectively referred to as real variables.
And there is a function which converts real to bits:
$realtobits converts values from a real type to a 64-bit vector representation of the real number.
and corresponding
$bitstoreal converts a bit pattern created by $realtobits to a value of the real type.
So, you can assume that the size of real is 64 bits after conversion to bits.
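To make that concrete, here is a small C sketch of the same round trip (C rather than SystemVerilog, since the LRM defines real as a C double; assumes the usual IEEE-754 64-bit double):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    double r = 3.14;
    uint64_t bits;
    memcpy(&bits, &r, sizeof bits);    /* analogue of $realtobits: double -> 64-bit vector */
    printf("bits = 0x%016llx\n", (unsigned long long)bits);

    double back;
    memcpy(&back, &bits, sizeof back); /* analogue of $bitstoreal: 64-bit vector -> double */
    printf("round trip intact: %d\n", r == back);
    return 0;
}

The round trip is exact because nothing is lost: all 64 bits of the double are preserved in the vector.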

Portability of auto kind/type conversions in numerical operations in Fortran

According to the Fortran standard, if the operands of a numeric operation have different data kind/types, then the resulting value has a kind/type determined by the operand with greater decimal precision. Before the operation is evaluated, the operand with the lower decimal precision is first converted to the higher-precision kind/type.
Now, the use of a high-precision data kind/type implies there is accuracy to a certain level of significant digits, but kind/type conversion does not seem to guarantee such things [1]. For this reason, I avoid mixing single- and double-precision reals.
But does this mean that automatic kind/type conversions should be avoided at all costs? For example, I would not hesitate to write x = y**2 where both x and y are reals (of the same kind), but the exponent is an integer.
Let us limit the scope of this question to the result of a single operation between two operands. We are not considering the outcome of equations with operations between multiple values where other issues might creep in.
Let us also assume we are using a portable type/kind system. For example, in the code below selected_real_kind is used to define the kind assigned to double-precision real values.
Then, I have two questions regarding numerical expressions with type/kind conversions between two operands:
Is it "portable", in practice? Can we expect the same result for an operation that uses automatic type/kind conversion from different compilers?
Is it "accurate" (and "portable") if the lower-precision operands are limited to integers or whole-number reals? To make this clear, can we always assume that 0==0.0d0, 1==1.0d0, 2==2.0d0, ... , for all compilers? And if so, then can we always assume that simple expressions such as (1 - 0.1230d0) == (1.0d0 - 0.1230d0) are true, and therefore the conversion is both accurate and portable?
To provide a simple example, would automatic conversion from an integer to a double-precision real, as shown in the code below, be accurate and/or portable?
program main
    implicit none
    ! Portable double-precision kind: at least 15 decimal digits
    integer, parameter :: dp = selected_real_kind(p=15)
    ! Does the automatically converted integer 42 behave like 42.0_dp?
    print *, ((42 - 0.10_dp) == (42.0_dp - 0.10_dp))
end program main
I have tested with gfortran and ifort, using different operands and operations, but have yet to see anything to cause concern as long as I limit the conversions to integers or whole-number reals. Am I missing anything here, or just revealing my non-CS background?
[1] According to these Intel Fortran docs (for example), integers converted to a real type have their fractional part filled with zeros. For the conversion of a single-precision real to a higher-precision real, the additional fraction bits of the converted higher-precision operand are filled with zeros. So, for example, when a single-precision real operand with a non-zero fractional part (such as 1.2) is converted to a double, the conversion does not automatically increase the accuracy of the value: 1.2 does not become 1.2000000000000000d0 but instead becomes something like 1.2000000476837158d0. How much this actually matters probably depends on the application.
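A quick C illustration of the footnote's point (assuming IEEE-754 single and double precision): widening a single-precision 1.2 to double keeps the single-precision error, while small integers convert exactly.

#include <stdio.h>

int main(void) {
    float  f = 1.2f;   /* nearest float to 1.2 is 1.20000004768371582... */
    double d = f;      /* widening preserves that value; no accuracy is gained */
    printf("%.17g\n", d);          /* prints 1.2000000476837158 */
    printf("%d\n", 42 == 42.0);    /* integers this small convert exactly: prints 1 */
    return 0;
}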

Simulink data types

I'm reading an IMU on an Arduino board with an S-function block in Simulink, using double or single data types, though I only need two decimal places of precision ("xyz.ab"). I want to improve performance by changing data types, and I wonder:
Is there a way to reduce the precision to two decimals in the S-function block, or by adding/using other conversion blocks/code in Simulink, aside from using the Fixed-Point Tool?
For true fixed-point transfer, the Fixed-Point Toolbox is the most general answer, as stated in Phil's comment.
However, to avoid toolbox use, you could also devise your own fixed-point integer format and add a block that takes a floating-point input and converts it into an integer format (and vice versa on the output).
E.g. if you know the range is -327.68 < var < 327.67, you could just define your float as an int16 divided by 100. In a MATLAB Function block you would then just say
y=int16(u*100.0);
to convert the input to the S-function.
On the output it would be a reversal
y=double(u)/100.0;
(EML/MATLAB Function code can be avoided by using multiply, divide, and convert blocks.)
However, be mindful of the bits available, and note that the scaling operations (*, /) are done on the floating-point value rather than the integer.
For signed types, 2^(nrOfBits-1)-1 shows the largest magnitude you can represent; for unsigned types (uint8/16/32) the range is 0 to 2^(nrOfBits)-1. Then you use the scaling to fit the representable integers onto your floating-point range. The scaled range divided by 2^nrOfBits tells you what the resolution will be (how large the steps are).
You will need to scale the variables correspondingly on the Arduino side as well when you go to an integer interface of this type. (I'm assuming you have access to that code - if not it'd be hard to use any other interface than what is already provided)
Note that the intXX(doubleVar*scale) will always truncate the values to integer. If you need proper rounding you should also include the round function, e.g.:
int16(round(doubleVar*scale));
You don't need to use a base-10 scale; any scaling and offset can be used, but it's easier to make out the numbers manually if you keep to base 10 (e.g. 0.1, 10.0, 100.0, 1000.0, etc.).
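To make the whole round trip concrete, here is a small C sketch of the same scheme (the scale factor of 100 and the sample reading are assumed values; the Arduino side would mirror the same scaling):

#include <stdio.h>
#include <stdint.h>
#include <math.h>

int main(void) {
    const double scale = 100.0;
    double u = 123.456;                       /* raw sensor reading */
    int16_t y = (int16_t)lround(u * scale);   /* encode with rounding -> 12346 */
    double back = (double)y / scale;          /* decode on the other side -> 123.46 */
    printf("encoded=%d decoded=%.2f\n", y, back);
    printf("range %.2f .. %.2f, resolution %.2f\n",
           INT16_MIN / scale, INT16_MAX / scale, 1.0 / scale);
    return 0;
}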
As a final note, if the Arduino code interface is floating point (single/double) and can't be changed to an integer type, you will not get any speedup from rounding decimals, since the full floating-point value is what will be transferred anyway. Even if you do manage to reduce the data a bit using integers, I suspect this might not give a huge speedup unless you transfer large amounts of data; the interface code will have a comparatively large overhead anyway.
Good luck with your project!

Rationale for CBOR negative integers

I am confused as to why CBOR chooses to encode negative integers as unsigned binary numbers with the value defined as -1 minus the unsigned value, instead of e.g. regular two's complement representation. Is there an obvious advantage that I'm missing, apart from increased negative range (which, IMO, is of questionable value weighed against increased complexity)?
Advantages:
There's only one allowed encoding type for each integer value, so all encoders will emit consistent output. If the encoders use the shortest encoding for each value as recommended by the spec, they'll emit identical output.
Picking the shortest numeric field is easier for non-negative numbers than for signed negative numbers, and CBOR aims for tiny IoT devices to readily transmit data.
It fits twice as many values into each integer encoding field width, thus making the data more compact. (It'd be yet more compact if the integer encodings didn't overlap, but that'd be notably more complicated.)
It can handle twice as large a negative value before needing the bignum extension.
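For concreteness, here is a minimal C sketch of that mapping (a hypothetical encoder, showing only the 1- to 3-byte encodings; a real encoder continues the pattern with 4- and 8-byte arguments):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Encode n as CBOR major type 0 (non-negative) or major type 1
 * (negative, stored as the unsigned value -1 - n). Returns bytes written. */
static size_t cbor_encode_int(int64_t n, uint8_t *buf) {
    uint8_t major = 0;
    uint64_t u;
    if (n >= 0) {
        u = (uint64_t)n;
    } else {
        major = 1u << 5;           /* major type 1 in the top 3 bits */
        u = (uint64_t)(-1 - n);    /* the "-1 minus value" rule */
    }
    if (u < 24) {                  /* value fits in the initial byte */
        buf[0] = major | (uint8_t)u;
        return 1;
    }
    if (u <= UINT8_MAX) {          /* additional info 24: 1-byte argument */
        buf[0] = major | 24; buf[1] = (uint8_t)u;
        return 2;
    }
    if (u <= UINT16_MAX) {         /* additional info 25: 2-byte argument */
        buf[0] = major | 25; buf[1] = (uint8_t)(u >> 8); buf[2] = (uint8_t)u;
        return 3;
    }
    return 0;  /* 4- and 8-byte cases omitted in this sketch */
}

int main(void) {
    uint8_t buf[3];
    size_t len = cbor_encode_int(-500, buf);  /* -500 -> 39 01 F3 */
    for (size_t i = 0; i < len; i++) printf("%02X ", buf[i]);
    printf("\n");
    return 0;
}

Because -1 encodes as 0, -256 as 255, and so on, each field width covers exactly as many negative values as major type 0 covers non-negative ones, which is where the doubled range comes from.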

How can I decide what data type I must use in any programming language?

My English is not good, so I apologize for it.
I have a little experience with Java and C++, but there is a problem: I only use int for integer numbers and double for decimal numbers. There are many other types like float, long int, etc. Is there a specific way to decide which one I should use?
It depends purely on the size of the data and, of course, its type. For example, if you have a very large number that cannot fit within a machine word (typically mapped to an int type), you would choose long, and so forth.
For a small number I would go with char (since it occupies one byte in C/C++), or short if the number is greater than 255 but less than 32,768 (65,535 for unsigned short), etc.
And all of these again depend on the programming language.
Be sure to check your programming language reference for the limits.
Hope that helps.
Different numerical data types are used for different value ranges. Which range applies to which data type depends on the language you are using and the platform where the program is compiled and run.
For example, a byte data type uses 1 byte of storage and can store numbers from 0 to 255. A word data type usually uses 2 bytes of storage and can store numbers from 0 to 65,535. Then you get int - here the number of bytes varies, but it is often 4 bytes, with values from -2^31 to 2^31-1 - and so on. In C/C++ there are also the qualifiers signed and unsigned, which are not present in Java.
With float/double, not only the range of numbers but also the precision (the number of significant decimal digits that can be stored) is one of the deciding factors. With double you can store many more significant digits than with float (single precision).
On the whole, the decision will be based on what data you need to store, how much memory you're willing to allocate, and what platform you're running on. Check your language documentation for more details; the Java documentation, for example, describes its primitive data types.
You must first check the type of data you want to store against the data types provided by that programming language. Then, very importantly, you must check the range of that data type.
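A quick way to see those ranges in C/C++ is to print the macros from <limits.h> and <float.h> (the values in the comments are typical, not guaranteed by the standard):

#include <stdio.h>
#include <limits.h>
#include <float.h>

int main(void) {
    /* Integer ranges vary by platform; these macros give the actual limits. */
    printf("char : %d .. %d\n", CHAR_MIN, CHAR_MAX);   /* often -128 .. 127 */
    printf("short: %d .. %d\n", SHRT_MIN, SHRT_MAX);   /* often -32768 .. 32767 */
    printf("int  : %d .. %d\n", INT_MIN, INT_MAX);
    printf("long : %ld .. %ld\n", LONG_MIN, LONG_MAX);
    /* For floating point, precision matters as much as range. */
    printf("float digits : %d\n", FLT_DIG);            /* typically 6 */
    printf("double digits: %d\n", DBL_DIG);            /* typically 15 */
    return 0;
}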