Does converting UInt8 (or similar types) to Int counter the purpose of UInt8? - swift

I'm storing many of the integers in my program as UInt8, which has a 0-255 range of values. Later on I will be summing many of them, and the result will need to be stored in an Int. Does the conversion from UInt8 to Int that I have to do before adding the values defeat the purpose of using a smaller data type to begin with? I feel it would be faster to just use Int and accept the larger memory footprint. Why go for UInt8 when I then face many conversions that cost speed and memory anyway? Is there something I'm missing, or should smaller data types really only be used with other small data types?

You are talking about a few bytes per variable when storing as UInt8 instead of Int. These data types were conceived very early in the history of computing, when memory was measured in the low KBs. Even the Apple Watch has 512MB.
Here's what Apple says in the Swift Book:
Unless you need to work with a specific size of integer, always use Int for integer values in your code. This aids code consistency and interoperability. Even on 32-bit platforms, Int can store any value between -2,147,483,648 and 2,147,483,647, and is large enough for many integer ranges.
I use UInt8, UInt16 and UInt32 mainly in code that deals with C. And yes, converting back and forth is a pain in the neck.
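For a rough sense of what that conversion looks like, here is a minimal sketch (the sample values are made up) of summing UInt8 readings into an Int:
let samples: [UInt8] = [250, 200, 130]
let total = samples.reduce(0) { $0 + Int($1) }   // widen each element before adding
print(total)   // 580
The widening itself is cheap; the cost is mostly syntactic, because Swift won't mix UInt8 and Int in arithmetic without an explicit conversion.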

Related

Decoding Arbitrary-Length Values Using a Fixed Block Size?

Background
In the past I've written an encoder/decoder for converting an integer to/from a string using an arbitrary alphabet; namely this one:
abcdefghjkmnopqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ23456789
Lookalike characters are excluded, so 1, I, l, O, and 0 are not present in this alphabet. This was done for user convenience, to make a value easier to read and to type out.
As mentioned above, my previous project, python-ipminify, converts a 32-bit IPv4 address to a string using an alphabet similar to the one above, but excluding upper-case characters. In my current undertaking, I don't have the constraint of excluding upper-case characters.
I wrote my own Python implementation for this project based on the excellent question and answer here on how to build a URL shortener.
I have published a stand-alone example of the logic here as a Gist.
Problem
I'm now writing a performance-critical implementation of this in a compiled language, most likely Rust, but I'd need to port it to other languages as well. I'm also having to accept an arbitrary-length array of bytes, rather than an arbitrary-width integer, as is the case in Python.
I suppose that as long as I use an unsigned integer and use consistent endianness, I could treat the byte array as one long arbitrary-precision unsigned integer and do division over it, though I'm not sure how performance will scale with that. I'd hope that arbitrary-precision unsigned integer libraries would try to use vector instructions where possible, but I'm not sure how this would work when the input length does not match a specific instruction length, i.e. when the input size in bits is not evenly divisible by supported instructions, e.g. 8, 16, 32, 64, 128, 256, 512 bits.
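To make the repeated-division idea concrete, here is a minimal sketch in Swift (illustrative only, not the asker's implementation): the byte array is treated as one big-endian arbitrary-precision unsigned integer and long-divided by the alphabet size. Leading zero bytes would still need separate handling.
let alphabet = Array("abcdefghjkmnopqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ23456789")
func encode(_ bytes: [UInt8]) -> String {
    var digits: [Int] = []          // remainders, least significant first
    var quotient = bytes            // most significant byte first
    while quotient.contains(where: { $0 != 0 }) {
        var remainder = 0
        var next: [UInt8] = []
        for byte in quotient {      // schoolbook long division by the alphabet size
            let value = remainder * 256 + Int(byte)
            next.append(UInt8(value / alphabet.count))
            remainder = value % alphabet.count
        }
        digits.append(remainder)
        quotient = Array(next.drop(while: { $0 == 0 }))   // drop leading zeros so the loop shrinks
    }
    return String(digits.reversed().map { alphabet[$0] })
}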
I have also considered breaking up the byte array into 256-bit (32 byte) blocks and using SIMD instructions (I only need to support x86_64 on recent CPUs) directly to operate on larger unsigned integers, but I'm not exactly sure how to deal with size % 32 != 0 blocks; I'd probably need to zero-pad, but I'm not clear on how I would know when to do this during decoding, i.e. when I don't know the underlying length of the source value, only that of the decoded value.
Question
If I'm going the arbitrary unsigned integer width route, I'd essentially be at the mercy of the library author, which is probably fine; I'd imagine that these libraries would be fairly optimized to vectorize as much as possible.
If I try to go the block route, I'd probably zero-pad any remaining bits in the block if the input length was not divisible by the block size during encoding. However, would it even be possible to decode such a value without knowing the decoded value size?

Float and Double network byte order

The Swift standard library includes a bigEndian property on integer types (such as Int, UInt, UInt8, UInt64, Int64, etc.) that can be used to convert them from host byte order (which might presumably be anything, but realistically will be big or little endian) to network byte order (which is big endian). There are some good SO answers referring to this, and a particularly complete one is here.
However, I've not found a good resource that covers arranging a Float (32-bit) or Double (64-bit) value in network byte order. Given that these types don't have a bigEndian property, I'm wondering if there is some subtlety involved? (The linked question does discuss floating-point types, but I'm not sure it definitely covers all the details that might be relevant.)
Specifically, I want to handle the 64 bit Double floating point type. I'd like a solution that will work on any platform where Swift is available.
Thank you.
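For what it's worth, the usual approach (a minimal sketch, assuming IEEE 754 Doubles and current Swift method names; not taken from any answer in the linked thread) is to reinterpret the value's bit pattern as a UInt64, which does have bigEndian:
func networkBytes(of value: Double) -> [UInt8] {
    // Byte-swap the IEEE 754 bit pattern, then copy out its raw bytes.
    return withUnsafeBytes(of: value.bitPattern.bigEndian) { Array($0) }
}
func double(fromNetworkBytes bytes: [UInt8]) -> Double? {
    guard bytes.count == MemoryLayout<UInt64>.size else { return nil }
    // Rebuild the UInt64 from big-endian bytes, then reinterpret it as a Double.
    let bits = bytes.reduce(UInt64(0)) { ($0 << 8) | UInt64($1) }
    return Double(bitPattern: bits)
}
Float works the same way through its UInt32 bitPattern.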

Simulink data types

I'm reading an IMU on an Arduino board through an S-function block in Simulink, using double or single data types, although I only need 2 decimals of precision ("xyz.ab"). I want to improve performance by changing data types, and I wonder:
is there a way to decrease the precision to 2 decimals in the S-function block, or by adding/using other conversion blocks/code in Simulink, aside from using the fixed-point tool?
For true fixed-point transfer, the fixed-point toolbox is the most general answer, as stated in Phil's comment.
However, to avoid toolbox use, you could also devise your own fixed-point integer format and add a block that takes a floating-point input and converts it into an integer format (and vice versa on the output).
E.g. if you know the range is -327.68 < var < 327.67, you could just define your float as an int16 divided by 100. In a MATLAB function block you would then just say
y=int16(u*100.0);
to convert the input to the S-function.
On the output it would be a reversal
y=double(u)/100.0;
(EML/MATLAB function code can be avoided by using multiply, divide and convert blocks.)
However, be mindful of the bits available, and note that the scaling operations (*, /) are done on the floating-point side rather than on the integer side.
2^(nrOfBits-1)-1 shows you the largest value you can represent once the sign is accounted for. For the unsigned types uint8/16/32 the range is 0 to 2^(nrOfBits)-1. You then use the scaling to fit the representable range into the floating-point range you actually use. The scaled range divided by 2^nrOfBits tells you what the resolution will be (how large the steps are).
You will need to scale the variables correspondingly on the Arduino side as well when you go to an integer interface of this type. (I'm assuming you have access to that code - if not it'd be hard to use any other interface than what is already provided)
Note that the intXX(doubleVar*scale) will always truncate the values to integer. If you need proper rounding you should also include the round function, e.g.:
int16(round(doubleVar*scale));
You don't need to use a base-10 scale; any scaling and offsets can be used, but it's easier to make out numbers manually if you keep to base 10 (e.g. 0.1, 10.0, 100.0, 1000.0, etc.).
As a final note, if the Arduino code interface is floating point (single/double) and can't be changed to an integer type, you will not get any speedup from rounding decimals, since the full floating-point value is what will be transferred anyway. Even if you do manage to reduce the data a bit using integers, I suspect this might not give a huge speedup unless you transfer large amounts of data. The interface code will have a comparatively large overhead anyway.
Good luck with your project!

What's the correct number type for financial variables in Swift?

I am used to programming in Java, where the BigDecimal type is the best for storing financial values, since there are ways to specify rounding rules over the calculations.
In the latest Swift version (2.1 at the time this post was written), which native type best supports correct calculations and rounding for financial values? Is there any equivalent to Java's BigDecimal, or anything similar?
You can use NSDecimal or NSDecimalNumber for base-10 decimal arithmetic.
See more on NSDecimalNumber's reference page.
If you are concerned about storing, for example, $1.23 in a float or double and about the potential inaccuracies you will get from floating-point precision errors, that is, if you actually want to stick to integer amounts of cents or pence (or whatever else), then use an integer to store your value and use the pence/cent as your unit instead of pounds/dollars. You will then be 100% accurate when dealing in integer amounts of pence/cents, and it's easier than using a class like NSDecimalNumber. The display of that value is then purely a presentation issue.
If however you need to deal with fractions of a pence/cent, then NSDecimalNumber is probably what you want.
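As an illustration of both suggestions (a minimal sketch in current Swift syntax; the Swift 2-era method names were longer, and the rounding settings are just example choices):
import Foundation
// 1) Integer minor units: store whole cents, format only for display.
let totalCents = 1_23 + 4_56              // $1.23 + $4.56
print(Double(totalCents) / 100)           // 5.79
// 2) NSDecimalNumber with an explicit rounding behaviour (banker's rounding, 2 places).
let handler = NSDecimalNumberHandler(roundingMode: .bankers, scale: 2,
                                     raiseOnExactness: false, raiseOnOverflow: false,
                                     raiseOnUnderflow: false, raiseOnDivideByZero: false)
let price = NSDecimalNumber(string: "1.23")
let taxed = price.multiplying(by: NSDecimalNumber(string: "1.0825"), withBehavior: handler)
print(taxed)                              // 1.33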
I recommend looking into how classes like this actually work, and how floating-point numbers work too, because understanding this will help you to see why precision errors arise and what the precision limits of a class like NSDecimalNumber are, why it's better for storing decimal numbers, and why floats are good at storing numbers like 17/262144 (i.e. where the denominator is a power of two) but can't store 1/100, etc.
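The power-of-two point is easy to check for yourself:
print(Double(17) / 262144 == 0.000064849853515625)   // true: the denominator is 2^18, so the value is exact
print(0.1 + 0.2 == 0.3)                              // false: 1/10 and 3/10 have no exact binary form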

How can I decide what data type I must use in any programming language?

My English is not good, so I apologize for it.
I have a little experience with Java and C++, but there is a problem: I only use int for integer numbers and double for decimal numbers. There are many other types like float, long int, etc. Is there a specific way to decide which one I must use?
It purely depends on the size of the data and, of course, on its type. For example, if you have a very large number that cannot fit within the size of a machine word (typically mapped to an int[eger] type), then you would choose long, and so forth.
For a small number I would go with char (since it occupies one byte in C/C++), or short if the number doesn't fit in a byte but fits in 16 bits (up to 32,767 signed, or 65,535 unsigned), etc.
And all of these again depend on the programming language.
Be sure to check your programming language reference for the limits.
Hope that helps.
Different numerical data types are used for different value ranges. What range applies to what data type depends on the language you are using and the operating system, where the program is compiled/run.
For example, a byte data type uses 1 byte of storage and can store numbers from 0 to 255. A word data type usually uses 2 bytes of storage and can store numbers from 0 to 65,535. Then you get int, where the number of bytes varies, but it is often 4 bytes with values from -2^31 to 2^31-1, and so on. In C/C++ there are also the qualifiers signed and unsigned, which are not present in Java.
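As a quick illustration of those limits (shown in Swift here; Java and C/C++ expose the same values through constants such as Integer.MAX_VALUE or INT_MAX):
print(UInt8.min, UInt8.max)     // 0 255 (the byte range above)
print(UInt16.min, UInt16.max)   // 0 65535 (the unsigned word range)
print(Int32.min, Int32.max)     // -2147483648 2147483647
print(Int.min, Int.max)         // whatever the platform word size allows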
With float/double, not only the range of numbers but also the precision (the number of decimal places that can be stored) will be one of the deciding factors. With double you can store a lot more decimal places than with float.
On the whole, the decision will be based on what data you need to store in it, how much memory you're willing to allocate and what platform you're running on. Check your language documentation for more details. For example, this page describes primitive data types in java.
First check the type of data you want to store against the data types provided in that programming language. Then, very importantly, check the range of that data type...