Signed AVX-512 VNNI multiplication - x86-64

I am trying to use AVX-512 VNNI instructions to perform signed int8 multiplication. It appears the instructions only support multiplying a signed int8 operand by an unsigned int8 operand. I wonder what the best way is to multiply two signed int8 operands. Range-shifting one of the int8 operands to unsigned and then shifting the result back works, but for the purpose of this question doesn't count.
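For reference, the workaround the question rules out would look roughly like the following C intrinsics sketch (dpbssd is a made-up helper name; VPDPBUSD via _mm512_dpbusd_epi32 computes unsigned u8 × signed s8, so one operand is biased by +128 and the correction term subtracted afterwards):

#include <immintrin.h>

// Sketch: signed s8 x signed s8 dot product built on _mm512_dpbusd_epi32.
// Biasing a by +128 (a ^ 0x80) makes it unsigned; dpbusd then accumulates
// sum((a+128)*b), so subtract the 128*sum(b) term per 32-bit lane.
__m512i dpbssd(__m512i acc, __m512i a, __m512i b) {
    const __m512i bias = _mm512_set1_epi8((char)0x80);
    __m512i a_biased = _mm512_xor_si512(a, bias);             // a + 128 as u8
    __m512i sum      = _mm512_dpbusd_epi32(acc, a_biased, b); // acc + sum((a+128)*b)
    __m512i corr     = _mm512_dpbusd_epi32(_mm512_setzero_si512(), bias, b);
    return _mm512_sub_epi32(sum, corr);                       // remove 128*sum(b)
}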

Related

Kademlia XOR Distance as an Integer

The Kademlia paper mentions using the XOR of the NodeIDs interpreted as an integer. Let's pretend my NodeID1 is aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d and my NodeID2 is ab4d8d2a5f480a137067da17100271cd176607a1. What's the appropriate way to interpret these as integers for comparing NodeID1 and NodeID2? Would I convert them into BigInts and XOR those two BigInts? I saw that in one implementation. Could I also just convert each NodeID into decimal and XOR those values?
I found this question but I'm trying to better understand exactly how this works.
Note: This isn't for implementation, I'm just trying to understand how the integer interpretation works.
For a basic Kademlia implementation you only need two bit-arithmetic operations on the IDs: XOR and comparison. For both cases the ID conceptually is a 160-bit unsigned integer with overflow, i.e. modulo 2^160 arithmetic. It can be decomposed into a 20-byte array or a 5×u32 array, assuming correct endianness conversion in the latter case. The most common endianness for network protocols is big-endian, so byte 0 will contain the most significant 8 bits out of 160.
The XOR and the comparisons can then be applied on a subunit-by-subunit basis: XOR is just an XOR over all the bytes, and the comparison is a lexicographic byte-array comparison, as in the sketch below.
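A minimal C sketch of that byte-wise approach, assuming a big-endian 20-byte ID (node_id, xor_distance, and id_cmp are illustrative names, not from any particular implementation):

#include <stdint.h>
#include <string.h>

typedef struct { uint8_t b[20]; } node_id;  // 160-bit ID, byte 0 most significant

// XOR distance: XOR each byte independently.
static node_id xor_distance(const node_id *x, const node_id *y) {
    node_id d;
    for (int i = 0; i < 20; i++) d.b[i] = x->b[i] ^ y->b[i];
    return d;
}

// Compare as 160-bit unsigned integers: with big-endian byte order,
// memcmp's lexicographic byte comparison matches the numeric ordering.
static int id_cmp(const node_id *x, const node_id *y) {
    return memcmp(x->b, y->b, sizeof x->b);
}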
Using bigint library functions is probably sufficient for an implementation, but not optimal, because bigints carry size and signedness overhead compared to implementing the necessary bit-twiddling on fixed-size arrays.
A more complete implementation may also need some additional arithmetic and utility functions.
Could I also just convert each NodeID into decimal and XOR those values?
Considering the size of the numbers, a decimal representation is not particularly useful. For the human reader, hexadecimal or the individual bits are more useful, and computers operate on binary and practically never on decimal.

Rationale for CBOR negative integers

I am confused as to why CBOR chooses to encode negative integers as unsigned binary numbers with the value defined as -1 minus the unsigned value, instead of e.g. a regular two's complement representation. Is there an obvious advantage that I'm missing, apart from the increased negative range (which, IMO, is of questionable value weighed against the increased complexity)?
Advantages:
There's only one allowed encoding type for each integer value, so all encoders will emit consistent output. If the encoders use the shortest encoding for each value as recommended by the spec, they'll emit identical output.
Picking the shortest numeric field is easier for non-negative numbers than for signed negative numbers, and CBOR aims for tiny IoT devices to be able to transmit data readily.
It fits twice as many values into each integer encoding field width, thus making the data more compact. (It'd be yet more compact if the integer encodings didn't overlap, but that'd be notably more complicated.)
It can handle twice as large a negative value before needing the bignum extension.
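To make the "-1 minus the unsigned value" rule concrete, here is a small C sketch of just the value mapping (header and length handling omitted; cbor_encode_negative and cbor_decode_negative are illustrative names):

#include <stdint.h>

// CBOR major type 1 stores a negative integer n as the unsigned value -1 - n.
uint64_t cbor_encode_negative(int64_t n) {  // requires n < 0
    return (uint64_t)(-1 - n);              // e.g. -1 -> 0, -500 -> 499
}

int64_t cbor_decode_negative(uint64_t u) {  // valid only for u <= INT64_MAX
    return -1 - (int64_t)u;                 // e.g. 499 -> -500
}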

LReal vs Real Data Types

In PLC Structured Text, what is the main difference between the LReal and Real data types? Which would you use to replace a double or a float when converting from a C-based language to Structured Text on a PLC?
LReal is a double-precision real (floating-point) variable, a 64-bit value, whereas Real is a single-precision real (floating-point) variable made from a 32-bit value. So an LReal stores more, which makes LReal the counterpart of a double and Real the counterpart of a float. The other thing to keep in mind is that, depending on the PLC, it will convert a Real into an LReal for calculations. Also, an LReal carries roughly 15-16 significant decimal digits, whereas a Real carries roughly 7. So if you need more precision than a Real offers I would recommend LReal, but otherwise I would stick with Real, because with LReals you may have to convert from an Integer to a Real to an LReal; staying with Real saves you a step.
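Since Real and LReal correspond to the same IEEE 754 formats as C's float and double, this small C sketch illustrates the precision difference described above:

#include <stdio.h>

int main(void) {
    float  r  = 0.1f;  // Real-equivalent: 32-bit, ~7 significant decimal digits
    double lr = 0.1;   // LReal-equivalent: 64-bit, ~15-16 significant digits
    printf("Real:  %.17f\n", r);   // prints 0.10000000149011612
    printf("LReal: %.17f\n", lr);  // prints 0.10000000000000001
    return 0;
}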

Is float16 supported in MATLAB?

Does MATLAB support float16 operations? If so, how do I convert a double matrix to float16? I am doing an arithmetic operation on a large matrix where a 16-bit floating-point representation is sufficient. Representing it with the double datatype takes 4 times more memory.
Is your matrix full? Otherwise, try sparse -- it saves a lot of memory if there are lots of zero-valued elements.
AFAIK, float16 is not supported. The lowest you can go with a float datatype is single, which is a 32-bit datatype:
A = single( rand(50) );
You could multiply by a constant and cast to int16, but you'd lose precision.
The numeric classes that MATLAB supports out of the box are the following:
int8
int16
int32
int64
uint8
uint16
uint32
uint64
single (32-bit float)
double (64-bit float)
plus the complex data type. So no 16-bit floats, unfortunately.
On the MathWorks File Exchange, there seems to be a half-precision float library. It requires MEX, however.
This might be an old question, but I found it while searching for a similar problem (half precision in MATLAB).
Things seem to have changed over time:
https://www.mathworks.com/help/fixedpoint/ref/half.html
Half precision now seems to be supported natively by MATLAB.

NSInteger types

I would like to know what the difference is between Integer 16, Integer 32, and Integer 64, and the difference between a signed integer and an unsigned integer (NSInteger and NSUInteger).
I'm not sure exactly what types you mean by "Integer 16", "Integer 32", and "Integer 64", but normally, those numbers refer to the size in bits of the integer type.
The difference between a signed and an unsigned integer is the range of values it can represent. For example, a two's-complement signed 16-bit integer can represent numbers between -32,768 and 32,767. An unsigned 16-bit integer can represent values between 0 and 65,535.
For most computers in use today, a signed integer of width n can represent the values [-2^(n-1), 2^(n-1)) and an unsigned integer of width n can represent the values [0, 2^n).
NSInteger and NSUInteger are Apple's custom integer data types. The former is signed while the latter is unsigned. On 32-bit builds NSInteger is typedef'd as an int, while on 64-bit builds it's typedef'd as a long. NSUInteger is typedef'd as an unsigned int for 32-bit and an unsigned long for 64-bit. Signed types cover the range [-2^(n-1), 2^(n-1)) where n is the bit width, and unsigned types cover the range [0, 2^n).
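Based on that description, the typedefs look roughly like this (a sketch of what Foundation's NSObjCRuntime.h does; the actual guard conditions in Apple's header are slightly more involved):

#if __LP64__
typedef long NSInteger;            // 64-bit builds
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;             // 32-bit builds
typedef unsigned int NSUInteger;
#endif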
When coding a single, self-contained program, using NSInteger or NSUInteger is considered best practice for future-proofing against platform bit changes. It is not best practice when dealing with fixed-size data, such as binary file formats or networking, because there the required field widths are defined in advance and stay constant regardless of the platform's bitness. This is where the fixed-size types defined in stdint.h (i.e., uint8_t, uint16_t, uint32_t, etc.) come into use.
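For example, a hypothetical on-the-wire header built from stdint.h types keeps the same field widths whether the program is compiled for 32-bit or 64-bit:

#include <stdint.h>

struct wire_header {       // hypothetical example for illustration
    uint8_t  version;      // exactly 8 bits on every platform
    uint16_t msg_type;     // exactly 16 bits
    uint32_t payload_len;  // exactly 32 bits
};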
Unsigned vs signed integer -
Unsigned is usually used where variables aren't allowed to take negative values. For example, while looping through an array, it's always more readable if the array subscript variable is an unsigned int that loops up to the length of the array, as in the sketch below.
On the other hand, if the variable can hold negative values too, then declare it as a signed int. Integer variables are signed by default.
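A minimal C sketch of that unsigned loop index (sum, arr, and len are illustrative names):

#include <stddef.h>

// Loop an unsigned subscript up to the array length.
long sum(const int *arr, size_t len) {
    long total = 0;
    for (size_t i = 0; i < len; i++)
        total += arr[i];
    return total;
}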
Have a look at the Foundation data types. NSInteger and NSUInteger are typedefs for int and unsigned int.
From Wikipedia:
In computing, signed number representations are required to encode negative numbers in binary number systems
which means that you normally have to use a bit to encode the sign, thus reducing the range of numbers you can represent.