I see NSInteger is used quite often and the typedef for it on the iPhone is a long, so technically I could use it when I expect int(64) values. But should I be more explicit and use something like int64_t or long directly? What would be the downside of just using long?
IIRC, long on the iPhone/ARM is 32 bits. If you want a guaranteed 64-bit integer, you should (indeed) use int64_t.
Integer Data Types Sizes
short - ILP32: 2 bytes; LP64: 2 bytes
int - ILP32: 4 bytes; LP64: 4 bytes
long - ILP32: 4 bytes; LP64: 8 bytes
long long - ILP32: 8 bytes; LP64: 8 bytes
It may be useful to know that:
The compiler defines the __LP64__ macro when compiling for the 64-bit runtime.
NSInteger is a typedef of long so it will be 32-bits in a 32-bit environment and 64-bits in a 64-bit environment.
When converting to 64-bit you can simply replace all your ints and longs with NSInteger and you should be good to go.
Important: pay attention to data alignment. LP64 uses natural alignment for all integer data types, but ILP32 uses 4-byte alignment for all integer data types whose size is 4 bytes or greater.
You can read more about 32 to 64 bit conversion in the Official 64-Bit Transition Guide for Cocoa Touch.
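To make the table concrete, here is a minimal C sketch you can compile for either architecture; the printed sizes follow the ILP32/LP64 columns above, and the __LP64__ check uses the macro mentioned in the notes:

#include <stdio.h>

int main(void) {
    /* ILP32 (32-bit): int = 4, long = 4, long long = 8.
       LP64  (64-bit): int = 4, long = 8, long long = 8. */
    printf("short:     %zu bytes\n", sizeof(short));
    printf("int:       %zu bytes\n", sizeof(int));
    printf("long:      %zu bytes\n", sizeof(long));
    printf("long long: %zu bytes\n", sizeof(long long));

#ifdef __LP64__
    printf("compiled for the 64-bit (LP64) runtime\n");
#else
    printf("compiled for the 32-bit (ILP32) runtime\n");
#endif
    return 0;
}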
Answering your questions:
How should I declare a long in Objective-C? Is NSInteger appropriate?
You can use either long or NSInteger but NSInteger is more idiomatic IMHO.
But should I be more explicit and use something like int64_t or long directly?
If you expect consistent 64-bit sizes, neither long nor NSInteger will do; you'll have to use int64_t (as Wevah said).
What would be the downside of just using long?
It's not idiomatic and you may have problems if Apple rolls out a new architecture again.
If you need a type of known specific size, use the type that has that known specific size: int64_t.
If you need a generic integer type and the size is not important, go ahead and use int or NSInteger.
NSInteger's width depends on whether you are compiling for 32-bit or 64-bit. It's defined as long for 64-bit and for iPhone, and as int for 32-bit.
So on iPhone the width of NSInteger is the same as the width of a long, which is compiler dependent. Most compilers make long the same width as the native word, i.e. 32 bits on 32-bit architectures and 64 bits on 64-bit architectures.
Given the uncertainty over the width of NSInteger, I use it only for variables passed to Cocoa APIs where NSInteger is specified. If I need a fixed-width type, I go for the ones defined in stdint.h. If I don't care about the width, I use the built-in C types.
If you want to declare something as long, declare it as long. Be aware that long can be 32 or 64 bit, depending on the compiler.
If you want to declare something to be as efficient as possible and big enough to count items, use NSInteger or NSUInteger. Note that both can be 32 or 64 bits, and can actually be different underlying types (int or long) depending on the compiler, which in some cases protects you from mixing up types.
If you want exactly 32 or 64 bits and nothing else, use int32_t, uint32_t, int64_t or uint64_t. Be aware that either type can be unnecessarily inefficient on some compilers.
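To make that advice concrete, a small sketch in plain C (the variable names are only illustrative):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Fixed width: identical on every architecture. */
    int64_t  fileSize = 5000000000LL;    /* needs more than 32 bits */
    uint32_t checksum = 0xDEADBEEFu;

    /* Word-sized: 32 bits on ILP32, 64 bits on LP64.
       NSInteger behaves the same way, being a typedef of long in the 64-bit runtime. */
    long itemCount = 42;

    printf("%lld %u %ld\n", (long long)fileSize, checksum, itemCount);
    return 0;
}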
I am in the process of converting some iOS code (Swift) to Android (Kotlin) that is used to control a Bluetooth (BLE) device.
I believe there are some differences between Swift and Kotlin around unsigned Ints etc., but I can't seem to get the same output.
iOS code: outputs 13365
print ("IMPORTANT \(fourthData as NSData)") // Value is 0x3534
var fourth = Int32()
_ = Swift.withUnsafeMutableBytes(of: &fourth, { fourthData.copyBytes(to: $0) } )
print ("IMPORTANT \(fourth)") // 13365
Android code: output is 13620
@ExperimentalUnsignedTypes // just to make it clear that the experimental unsigned types are used
fun ByteArray.toHexString() = asUByteArray().joinToString("") { it.toString(16).padStart(2, '0') }
Log.i("Decode", fourthData.toHexString()) // 3534
Log.i("Decode", "${fourthData.toHexString().toUInt(16)}") //13620
I have tried Int, UInt, BigInteger and Long. What am I missing?
As commenters already pointed out, values 13620 and 13365 are 0x3534 and 0x3435 respectively. In other words, the values differ by the byte ordering.
The decimal 32-bit number 13620 is equal to 0x00003534 in hexadecimal, so it is represented by the four bytes 00-00-35-34.
However, computers don't necessarily store the value in that order. The two commonly used representations are Big Endian and Little Endian. Big Endian stores the bytes in the natural order 00-00-35-34, while Little Endian reverses them to 34-35-00-00.
Java (Kotlin) always uses the Big Endian representation for everything. On the other hand, if you just take the memory layout of an Int in Swift, you get the machine representation. The machine representation is usually Little Endian, but that can differ between architectures. You should always be careful when directly reinterpreting numeric values as bytes or vice versa.
In this specific case, if you are sure the Data contains a Big Endian integer, you can use Int32(bigEndian: fourth) to convert the value from Big Endian to the machine representation.
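The same byte-order effect can be reproduced in plain C; a minimal sketch using the two bytes from the question (0x35, 0x34):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* The two bytes received from the device, as shown in the question. */
    const uint8_t bytes[2] = { 0x35, 0x34 };

    /* Copy the raw bytes into the low half of a 32-bit integer,
       much like Swift's withUnsafeMutableBytes/copyBytes does. */
    int32_t machineOrder = 0;
    memcpy(&machineOrder, bytes, sizeof bytes);

    /* On a little-endian CPU (x86, most ARM configurations) this prints 13365 (0x3435). */
    printf("machine order: %d\n", machineOrder);

    /* Interpreting the same bytes as big-endian gives 13620 (0x3534),
       which is what Kotlin's toUInt(16) on the hex string produces. */
    int32_t bigEndian = (bytes[0] << 8) | bytes[1];
    printf("big endian:    %d\n", bigEndian);
    return 0;
}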
The CODESYS documentation says
The result of the difference between two pointers is of type DWORD, even on 64-bit platforms, when the pointers are 64-bit pointers.
From this, I guessed that pointers in CODESYS are 32-bit on x86 platforms and 64-bit on x64 platforms. Is this true?
I tried running CODESYS_Control_Win_V3 and CODESYS_Control_Win_V3 x64 in simulation mode (CODESYS 3.5 SP16) and in both cases the pointers were 64-bit, but I don't have a real x86 PLC (only x64), so I can't verify this on a real device. Could somebody with an x86 PLC test this and give me their results?
EDIT: Strangely enough, I have two separate projects open, and in both I tried ptr := ptr + {some DINT variable};. In one I get the warning Implicit conversion from signed Type 'DINT' to unsigned Type 'LWORD', while in the other I get Implicit conversion from signed Type 'DINT' to unsigned Type 'DWORD'.
EDIT2: I tried this in a test project:
p: POINTER TO STRING := ADR(str);
pp: POINTER TO POINTER TO STRING := ADR(p);
sizep: DINT := SIZEOF(p); // evaluates to 8
sizepp: DINT := SIZEOF(pp); // evaluates to 8
Does that mean they are always 8 bytes?
The size of a pointer is 4 bytes on a 32-bit runtime and 8 bytes on a 64-bit runtime.
The sentence you found in the documentation just says that the compiler expects a DWORD when you take the difference of two pointers.
Meaning, you will get that warning when you try to do something like this:
diTest := pTest - pTest2;
diTest being a DINT and pTest and pTest2 being two pointers.
It also means you may lose some information if you assign the difference of two pointers to a DWORD on 64-bit systems.
In fact, you will lose 4 bytes: a DWORD is 4 bytes long, while pointers on 64-bit systems are 8 bytes long.
In order to store the addresses of your pointers in a cross-platform way, use the PVOID type, which is 4 bytes on 32-bit and 8 bytes on 64-bit systems. PVOID is available in the CAA Types library.
Alternatively, you can use __XWORD, as PVOID is an alias of __XWORD, which is converted into LWORD on 64-bit platforms and DWORD on 32-bit platforms.
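For readers more used to C, the analogous portable choice there is uintptr_t for storing addresses and ptrdiff_t for pointer differences; a hedged sketch of the same idea (this is C, not Structured Text, and only illustrates the concept):

#include <inttypes.h>
#include <stddef.h>
#include <stdio.h>

int main(void) {
    int buffer[16];
    int *start = &buffer[0];
    int *end   = &buffer[16];            /* one past the end, legal for arithmetic */

    /* uintptr_t is wide enough for an address on both 32-bit and 64-bit targets,
       playing roughly the same role as PVOID/__XWORD. */
    uintptr_t address = (uintptr_t)start;

    /* ptrdiff_t is the natural type for a pointer difference; squeezing it into a
       32-bit type on a 64-bit system could truncate, just like DWORD above. */
    ptrdiff_t distance = end - start;

    printf("address: 0x%" PRIxPTR ", distance: %td elements\n", address, distance);
    return 0;
}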
What is the equivalent of Java's long datatype in Dart? Should int or long be used?
In Java:
long: The long data type is a 64-bit two's complement integer. The signed long has a minimum value of -2^63 and a maximum value of 2^63-1. In Java SE 8 and later, you can use the long data type to represent an unsigned 64-bit long, which has a minimum value of 0 and a maximum value of 2^64-1. Use this data type when you need a range of values wider than those provided by int. The Long class also contains methods like compareUnsigned, divideUnsigned etc to support arithmetic operations for unsigned long.
In Dart:
int
Integer values no larger than 64 bits, depending on the platform. On the Dart VM, values can be from -2^63 to 2^63 - 1. Dart that’s compiled to JavaScript uses JavaScript numbers, allowing values from -2^53 to 2^53 - 1.
So you can use int in Dart as the equivalent of long in Java, but beware of the caveat when compiling to JavaScript.
On a system where both long and int are 4 bytes, which is best and why?
typedef unsigned long u32;
or
typedef unsigned int u32;
note: uint32_t is not an option
Nowadays every platform has stdint.h, or its C++ equivalent cstdint, which defines uint32_t. Please use the standard type rather than creating your own.
http://pubs.opengroup.org/onlinepubs/7999959899/basedefs/stdint.h.html
http://www.cplusplus.com/reference/cstdint/
http://msdn.microsoft.com/en-us/library/hh874765.aspx
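For completeness, a minimal sketch of using the standard type directly (plain C; C++ would use <cinttypes>/<cstdint> instead):

#include <inttypes.h>
#include <stdio.h>

int main(void) {
    uint32_t counter = UINT32_MAX;       /* exactly 32 bits on every conforming platform */
    printf("%" PRIu32 " (%zu bytes)\n", counter, sizeof counter);
    return 0;
}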
The size will be the same for both, so it depends only on your use.
If you need to store decimal values, use long.
A better and more complete answer is here:
https://stackoverflow.com/questions/271076/what-is-the-difference-between-an-int-and-a-long-in-c/271132
Edit: I'm not sure about decimal with long; if someone can confirm, thanks.
Since you said the standard uint32_t is not an option: long and int are both correct on 32-bit machines, but I'd say
typedef unsigned int u32;
is a little better, because on two popular 64-bit data models (LLP64 and LP64), int is still 32-bit, while long could be 32-bit or 64-bit. See 64-bit data models.
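If you are stuck with the typedef, a compile-time check keeps it honest; a minimal sketch assuming a C11 (or C++11 static_assert) toolchain:

#include <stdio.h>

typedef unsigned int u32;   /* int is 32 bits under ILP32, LLP64 and LP64 */

/* Fail the build immediately if that assumption is ever wrong. */
_Static_assert(sizeof(u32) == 4, "u32 must be exactly 4 bytes");

int main(void) {
    u32 value = 0xFFFFFFFFu;             /* largest 32-bit unsigned value */
    printf("%u (%zu bytes)\n", value, sizeof value);
    return 0;
}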
I'm working on an iPhone app. I want to parse a series of numbers from a string. However, intValue is acting really strangely.
I have a string with the value 1304287200000.
I then place the intValue of that into an NSInteger, and lo and behold, my integer is for some reason assigned the value of 2147483647.
What gives?
The data type int is a 32-bit numeric value with a range of approximately ±2 billion. 1304287200000 is well outside that range.
You need to skip int in favor of long long, which is a 64-bit type and covers your need. A more human-readable and explicit name for the 64-bit type is int64_t.
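To see the limit concretely, a small C sketch with the value from the question (intValue itself is an NSString method, so the equivalent arithmetic is shown in plain C):

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const char *s = "1304287200000";

    /* A 64-bit parse works; the value fits comfortably in int64_t / long long. */
    int64_t wide = strtoll(s, NULL, 10);
    printf("64-bit parse: %" PRId64 "\n", wide);       /* 1304287200000 */

    /* The value is far above INT32_MAX, which is why intValue clamps to 2147483647. */
    printf("INT32_MAX:    %d\n", INT32_MAX);
    printf("fits in 32 bits? %s\n", wide <= INT32_MAX ? "yes" : "no");
    return 0;
}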
What you are getting back is INT32_MAX, because the parsed value overflows the int type. This is explained in the NSString documentation. Try longLongValue instead; LLONG_MAX should be big enough.
int is 32-bit, so the maximum value it can hold is 2,147,483,647. Try longLongValue instead.
Your number exceeds the integer limits.