NSInteger types - iPhone

I would like to know what the difference is between Integer 16, Integer 32, and Integer 64, and the difference between a signed integer and an unsigned integer (NSInteger and NSUInteger).

I'm not sure exactly what types you mean by "Integer 16", "Integer 32", and "Integer 64", but normally, those numbers refer to the size in bits of the integer type.
The difference between a signed and an unsigned integer is the range of values it can represent. For example, a two's-complement signed 16-bit integer can represent numbers between -32,768 and 32,767. An unsigned 16-bit integer can represent values between 0 and 65,535.
For most computers in use today, a signed integer of width n can represent the values [-2^(n-1), 2^(n-1)) and an unsigned integer of width n can represent the values [0, 2^n).

NSInteger and NSUInteger are Apple's platform-dependent integer types. The first is signed while the latter is unsigned. On 32-bit builds NSInteger is typedef'd as an int, while on 64-bit builds it's typedef'd as a long; NSUInteger is typedef'd as an unsigned int for 32-bit and an unsigned long for 64-bit. Signed types cover the range [-2^(n-1), 2^(n-1) - 1] and unsigned types cover the range [0, 2^n - 1], where n is the width in bits.
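For reference, the definitions in Apple's NSObjCRuntime.h look roughly like this (an abridged sketch; the exact preprocessor conditions vary between SDK versions):

#if __LP64__
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif

So on a 64-bit build both types are 64 bits wide, and on a 32-bit build both are 32 bits wide.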
When coding a single, self-contained program, using NSInteger or NSUInteger is considered best practice, since it future-proofs the code against changes in the platform's word size. It is not best practice when dealing with fixed-size data, such as binary file formats or network protocols, because there the required field widths are defined up front and stay constant regardless of the platform. This is where the fixed-size types defined in stdint.h (e.g., uint8_t, uint16_t, uint32_t) come in.
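A minimal C sketch of the fixed-size case, assuming a hypothetical on-disk record format (the struct and field names here are made up for illustration):

#include <stdint.h>

/* The field widths are part of the file format, so fixed-size types
   are used instead of NSInteger/int, whose widths vary by platform. */
typedef struct {
    uint32_t magic;        /* always 4 bytes, on every platform */
    uint16_t version;      /* always 2 bytes */
    uint16_t flags;        /* always 2 bytes */
    uint64_t payload_len;  /* always 8 bytes */
} RecordHeader;

(A real format would also pin down byte order and struct padding, but the type choice is the point here.)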

Unsigned vs signed integer -
Unsigned types are usually used where a variable is never allowed to hold negative numbers. For example, when looping over an array it is clearer and safer to make the subscript variable an unsigned int and loop up to the length of the array, since a valid index can never be negative.
On the other hand, if a variable can hold negative numbers too, declare it as a signed int. Integer variables are signed by default.
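A minimal C sketch of that convention, using size_t, the standard unsigned type for object sizes and array indices:

#include <stddef.h>

/* A valid array index is never negative, so an unsigned
   type documents and enforces that at the type level. */
double sum(const double *values, size_t count) {
    double total = 0.0;
    for (size_t i = 0; i < count; i++) {
        total += values[i];
    }
    return total;
}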

Have a look at the Foundation data types. NSInteger and NSUInteger are typedefs for int and unsigned int (or, on 64-bit builds, long and unsigned long).
From Wikipedia:
"In computing, signed number representations are required to encode negative numbers in binary number systems."
This means that you normally have to use one bit to encode the sign, thus reducing the range of numbers you can represent.
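You can see the halved positive range directly with the 16-bit fixed-width types; a small C sketch:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* The same 16 bits, interpreted with and without a sign bit. */
    printf("int16_t:  %d to %d\n", INT16_MIN, INT16_MAX);  /* -32768 to 32767 */
    printf("uint16_t: 0 to %d\n", UINT16_MAX);             /* 0 to 65535 */
    return 0;
}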

Related

Decoding Arbitrary-Length Values Using a Fixed Block Size?

Background
In the past I've written an encoder/decoder for converting an integer to/from a string using an arbitrary alphabet; namely this one:
abcdefghjkmnopqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ23456789
Lookalike characters are excluded, so 1, I, l, O, and 0 are not present in this alphabet. This was done for user convenience, to make values easier to read and to type out.
As mentioned above, my previous project, python-ipminify, converts a 32-bit IPv4 address to a string using an alphabet similar to the above, but excluding upper-case characters. In my current undertaking, I don't have the constraint of excluding upper-case characters.
I wrote my own Python implementation for this project, using the excellent question and answer here on how to build a URL shortener.
I have published a stand-alone example of the logic here as a Gist.
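For context, the core of such an integer-to-string encoder is just repeated division by the alphabet size; a minimal C sketch using the alphabet quoted above (the function name is mine, not from the Gist):

#include <stdint.h>
#include <stddef.h>

static const char ALPHABET[] =
    "abcdefghjkmnopqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ23456789";
#define BASE ((uint64_t)(sizeof(ALPHABET) - 1))

/* Encode a 64-bit value; out must hold at least 13 chars. */
void encode_u64(uint64_t value, char *out) {
    char buf[16];
    size_t i = 0, j = 0;
    do {                                   /* repeated divmod by the base */
        buf[i++] = ALPHABET[value % BASE];
        value /= BASE;
    } while (value > 0);
    while (i > 0) out[j++] = buf[--i];     /* digits come out reversed */
    out[j] = '\0';
}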
Problem
I'm now writing a performance-critical implementation of this in a compiled language, most likely Rust, but I'd need to port it to other languages as well. I also have to accept an arbitrary-length array of bytes, rather than an arbitrary-width integer as in Python.
I suppose that as long as I use an unsigned integer and a consistent endianness, I could treat the byte array as one long arbitrary-precision unsigned integer and do division over it, though I'm not sure how performance will scale with that. I'd hope that arbitrary-precision unsigned integer libraries would use vector instructions where possible, but I'm not sure how that works when the input size in bits is not evenly divisible by a supported operand width, e.g. 8, 16, 32, 64, 128, 256, or 512 bits.
I have also considered breaking up the byte array into 256-bit (32 byte) blocks and using SIMD instructions (I only need to support x86_64 on recent CPUs) directly to operate on larger unsigned integers, but I'm not exactly sure how to deal with size % 32 != 0 blocks; I'd probably need to zero-pad, but I'm not clear on how I would know when to do this during decoding, i.e. when I don't know the underlying length of the source value, only that of the decoded value.
Question
If I go the arbitrary-width unsigned integer route, I'd essentially be at the mercy of the library author, which is probably fine; I'd imagine these libraries are fairly well optimized to vectorize as much as possible.
If I try to go the block route, I'd probably zero-pad any remaining bits in the block if the input length was not divisible by the block size during encoding. However, would it even be possible to decode such a value without knowing the decoded value size?
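To make the first option concrete, here is a C sketch of one step of schoolbook long division over the byte array (assuming big-endian order, per the consistent-endianness premise above); an arbitrary-precision divmod by a small base boils down to exactly this, except that real bignum libraries work with machine-word limbs rather than single bytes:

#include <stdint.h>
#include <stddef.h>

/* Divide the big-endian byte array in place by `base`;
   the return value is the remainder, i.e. the next output digit. */
static unsigned divmod_bytes(uint8_t *num, size_t len, unsigned base) {
    unsigned rem = 0;
    for (size_t i = 0; i < len; i++) {
        unsigned acc = rem * 256 + num[i];
        num[i] = (uint8_t)(acc / base);
        rem = acc % base;
    }
    return rem;
}

Repeating this until the array is all zero yields the digits least-significant first, which makes the whole encode O(n^2) in the input length; that quadratic cost, more than vectorization, is usually what dominates. Note also that leading zero bytes vanish under this treatment, so a length-preserving scheme has to encode them separately, as Base58 does by mapping each leading zero byte to a reserved prefix character; that convention is also what lets a decoder recover the original size.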

Signed AVX512 VNNI multiplication

I am trying to use AVX512 VNNI instructions to perform signed int8 multiplication. It appears that the instructions only support a signed int8 operand multiplied with an unsigned int8 operand. I wonder what the best way is to multiply two signed int8 operands. Biasing one of the int8 operands into unsigned range and then correcting the result afterwards works, but for the purposes of this question doesn't count.
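One possible alternative, sketched below in C intrinsics: VNNI also provides VPDPWSSD, which takes both operands as signed int16, so you can sign-extend the int8 inputs to int16 first and stay signed throughout. This is a sketch, not a drop-in answer (it requires AVX512BW for the widening plus AVX512_VNNI, and the function name is mine):

#include <immintrin.h>

/* acc[lane] += pairwise sums of a[i]*b[i], for 32 signed int8 pairs. */
static inline __m512i dot_s8_s8(__m512i acc, __m256i a8, __m256i b8) {
    __m512i a16 = _mm512_cvtepi8_epi16(a8);     /* sign-extend 32 x i8 to i16 */
    __m512i b16 = _mm512_cvtepi8_epi16(b8);
    return _mm512_dpwssd_epi32(acc, a16, b16);  /* VPDPWSSD: signed i16 dot */
}

The trade-off is throughput: each VPDPWSSD consumes 32 byte-products instead of the 64 that VPDPBUSD handles, which is why the bias-and-correct trick is usually preferred when it is allowed.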

Signed number representations in Verilog

What methods exist for signed number representation?
How do you know which signed number representation is used for the application?
For example, IEEE 754 allows you to represent 1.3444E-15 and 1.3444E+15, i.e. a very small and a very large number, from a single signed representation of the exponent. The IEEE 754 exponent field uses a biased exponent representation (see page 7). Similarly, which other methods exist?
For Verilog, integers use 2's complement and real numbers use IEEE 754. This applies to constants that you use to initialize a reg or assign to a wire, and to the built-in operators. Actual regs/wires are only a bunch of bits, and it's your design that determines what format numbers are stored in.
Forget about the types. Just use a bit vector and interpret the bits as a float:
wire [31:0] bits;
wire        sign;
wire [7:0]  exp;
wire [22:0] mantissa;
assign sign     = bits[31];
assign exp      = bits[30:23];
assign mantissa = bits[22:0];

LReal vs Real data types

In PLC Structured Text, what is the main difference between an LReal and a Real data type? Which would you use to replace a double or a float when converting from a C-based language to Structured Text on a PLC?
LReal is a double-precision floating-point type stored as a 64-bit signed value, whereas Real is a single-precision floating-point type stored as a 32-bit signed value. So LReal corresponds to a C double and Real to a C float. Keep in mind that, depending on the PLC, a Real may be promoted to an LReal for calculations. An LReal also holds about 15 significant decimal digits, versus about 9 for a Real. So if you need more than 9 digits of precision I would recommend LReal, but otherwise I would stick with Real, because with LReal you may have to convert from an Integer to a Real to an LReal, and staying with Real saves that step.
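For a concrete feel of that precision gap, a quick C sketch (float corresponds to Real, double to LReal):

#include <stdio.h>

int main(void) {
    float  r  = 123456789.0f;  /* single precision, like Real  */
    double lr = 123456789.0;   /* double precision, like LReal */
    printf("Real : %.1f\n", r);   /* prints 123456792.0 - precision lost */
    printf("LReal: %.1f\n", lr);  /* prints 123456789.0 - exact */
    return 0;
}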

Is there unsigned Double in Swift?

Forgive me if this is a silly question; I am a self-taught programmer. If there is an unsigned Int for storing large whole numbers, shouldn't there be an unsigned Double for storing large floating-point numbers, too?
According to the Swift Standard Library Reference, if a Double (aka Float64) does not give you enough precision, you can use a Float80. But I have to wonder what it is that you are trying to store that exceeds the capabilities of a Double.
That's because floating-point formats, of which Double is one, don't have unsigned variants. And just because you can use unsigned with some types, like Int, doesn't mean it has to exist for others too.
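The sign is simply one fixed bit of the IEEE-754 layout, so there is no unsigned variant to opt into. A small C sketch that inspects that bit (the same field-layout idea as the Verilog snippet above, but for 64-bit doubles):

#include <stdint.h>
#include <string.h>
#include <stdio.h>

int main(void) {
    double d = -3.14;
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);       /* reinterpret the 64 bits */
    int sign = (int)(bits >> 63);         /* bit 63 is the sign bit  */
    printf("sign bit of %f is %d\n", d, sign);  /* prints 1 */
    return 0;
}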