Best practice for typedef of uint32 - typedef

On a system where both long and int are 4 bytes, which is best and why?
typedef unsigned long u32;
or
typedef unsigned int u32;
note: uint32_t is not an option

Nowadays every platform has stdint.h or its C++ equivalent cstdint, which defines uint32_t. Please use the standard type rather than creating your own.
http://pubs.opengroup.org/onlinepubs/7999959899/basedefs/stdint.h.html
http://www.cplusplus.com/reference/cstdint/
http://msdn.microsoft.com/en-us/library/hh874765.aspx
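For illustration, a minimal sketch of what using the standard header looks like (the variable names are only examples):

#include <stdint.h>

uint32_t counter = 0u;            /* guaranteed to be exactly 32 bits wide */
uint32_t mask    = UINT32_MAX;    /* 4294967295 */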

The size will be the same for both, so it depends only on your use.
If you need to store decimal values, use long.
A better and more complete answer is here:
https://stackoverflow.com/questions/271076/what-is-the-difference-between-an-int-and-a-long-in-c/271132
Edit: I'm not sure about decimal values with long; if someone can confirm, thanks.

Since you said the standard uint32_t is not an option, and both long and int are correct (32-bit) on your machine, I'd say
typedef unsigned int u32;
is a little better, because on the two popular 64-bit data models (LLP64 and LP64) int is still 32-bit, while long may be 32-bit or 64-bit. See 64-bit data models.
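If you do have to roll your own typedef, a small sketch like the one below (assuming a C11 compiler, since it relies on _Static_assert) catches the case where the chosen base type is not 32 bits wide on some platform:

#include <limits.h>

typedef unsigned int u32;

/* Compilation fails if u32 is not 32 bits wide on this platform. */
_Static_assert(sizeof(u32) * CHAR_BIT == 32, "u32 must be 32 bits wide");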

Related

NSCoder for unsigned integers with Swift

In my Swift app, I really need to store a UInt32 value and retrieve it using NSCoder.
The issue is, there are methods for signed integers:
coder.decodeInteger()
coder.decodeInt32()
coder.decodeInt64()
but not for unsigned integers.
Also, casting an Int32 with a negative value to a UInt32 does not seem to work.
Am I missing some point?
The tool you want is init(bitPattern:). This is the same as C-style casting on integers (which is how you use these methods in ObjC when you need unsigned ints). It just reinterprets the bits.
UInt32(bitPattern: coder.decodeInt32())
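For comparison, a sketch of the C-style cast the answer refers to; on Apple's two's-complement platforms the round trip preserves the bit pattern (the variable names are illustrative):

#include <stdint.h>

uint32_t original = 0xFFFFFFFFu;         /* too large for a signed 32-bit value */
int32_t  encoded  = (int32_t)original;   /* the signed value that gets stored */
uint32_t decoded  = (uint32_t)encoded;   /* same idea as UInt32(bitPattern:) */
/* decoded == original: only the interpretation of the bits changes. */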

Why do Swift's malloc/MemoryLayout.size take/return signed integers?

public func malloc(_ __size: Int) -> UnsafeMutableRawPointer!
@frozen public enum MemoryLayout<T> {
public static func size(ofValue value: T) -> Int
...
When in C, malloc/sizeof take/return size_t, which is unsigned?
Isn't Swift calling libc under the hood?
EDIT: is this the reason why? https://qr.ae/pvFOQ6
They are basically trying to get away from C's legacy?
Yes, it's calling the libc functions under the hood.
The StdlibRationales.rst document in the Swift repo explains why it imports size_t as Int:
Converging APIs to use Int as the default integer type allows users to write fewer explicit type conversions.
Importing size_t as a signed Int type would not be a problem for 64-bit platforms. The only concern is about 32-bit platforms, and only about operating on array-like data structures that span more than half of the address space. Even today, in 2015, there are enough 32-bit platforms that are still interesting, and x32 ABIs for 64-bit CPUs are also important. We agree that 32-bit platforms are important, but the usecase for an unsigned size_t on 32-bit platforms is pretty marginal, and for code that nevertheless needs to do that there is always the option of doing a bitcast to UInt or using C.

POINTER bit size

The CODESYS documentation says
The result of the difference between two pointers is of type DWORD, even on 64-bit platforms, when the pointers are 64-bit pointers.
From this, I guessed that pointers in CODESYS are 32-bit on x86 platforms and 64-bit on x64 platforms. Is this true?
I tried running CODESYS_Control_Win_V3 and CODESYS_Control_Win_V3 x64 in simulation mode (CODESYS 3.5 SP16) and in both cases the pointers were 64-bit, but I don't have a real x86 PLC (only x64) so I can't verify this on a real device. Could somebody with an x86 PLC test this and give me their results?
EDIT: Strangely enough, I have 2 separate projects open, and in both I tried ptr := ptr + {some DINT variable};. In one I get the warning Implicit conversion from signed Type 'DINT' to unsigned Type 'LWORD', while in the other I get Implicit conversion from signed Type 'DINT' to unsigned Type 'DWORD'.
EDIT2: I tried this in a test project:
p: POINTER TO STRING := ADR(str);
pp: POINTER TO POINTER TO STRING := ADR(p);
sizep: DINT := SIZEOF(p); // evaluates to 8
sizepp: DINT := SIZEOF(pp); // evaluates to 8
Does that mean they are always 8 bytes?
The size of a pointer is 4 bytes on a 32-bit runtime and 8 bytes on a 64-bit runtime.
The sentence you found in the documentation just says that the compiler expects a DWORD when you take the difference of two pointers.
Meaning, you will get that warning when you try to do something like this:
diTest := pTest - pTest2;
diTest being a DINT and pTest and pTest2 being two pointers.
It also means you may lose information if you store the result of a pointer difference in a DWORD on 64-bit systems.
In fact you will lose 4 bytes:
a DWORD is 4 bytes long, and pointers on 64-bit systems are 8 bytes long.
In order to store the addresses of your pointers in a cross-platform way, use the PVOID type, which is 4 bytes on 32-bit and 8 bytes on 64-bit systems. PVOID is available in the CAA Types library.
Alternatively, you can use __XWORD, as PVOID is an alias of __XWORD, which resolves to LWORD on 64-bit platforms and DWORD on 32-bit platforms.

About using Bob Jenkins' perfect hash library

When I use Bob Jenkins's perfect hash package, after building the "perfect" binary I cannot even get the example to pass with "./perfect < sample_input"; it always warns "fatal error: Cannot perfect hash: cannot build tab[]". Has anyone met this issue before? Are there any other reliable perfect hash libraries or packages? Thanks in advance!
Jenkins' perfect hash library is linked below:
http://burtleburtle.net/bob/hash/perfect.html
I faced the same problem with gcc and clang under 64 bit Linux and found the reason:
The type definitions of the 4-byte types ub4 and sb4 must be changed in standard.h from
typedef unsigned long int ub4;
typedef signed long int sb4;
to
typedef unsigned int ub4;
typedef signed int sb4;
or might be defined as aliases to types from stdint.h (uint32_t and int32_t).
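In other words, a sketch of the stdint.h-based alternative (keeping the rest of standard.h unchanged):

#include <stdint.h>

typedef uint32_t ub4;   /* exactly 4 bytes, regardless of the platform's long/int model */
typedef int32_t  sb4;   /* signed 4-byte counterpart */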

How should I declare a long in Objective-C? Is NSInteger appropriate?

I see NSInteger is used quite often and the typedef for it on the iPhone is a long, so technically I could use it when I expect int(64) values. But should I be more explicit and use something like int64_t or long directly? What would be the downside of just using long?
IIRC, long on the iPhone/ARM is 32 bits. If you want a guaranteed 64-bit integer, you should (indeed) use int64_t.
Integer Data Types Sizes
short - ILP32: 2 bytes; LP64: 2 bytes
int - ILP32: 4 bytes; LP64: 4 bytes
long - ILP32: 4 bytes; LP64: 8 bytes
long long - ILP32: 8 bytes; LP64: 8 bytes
It may be useful to know that:
The compiler defines the __LP64__ macro when compiling for the 64-bit runtime.
NSInteger is a typedef of long so it will be 32-bits in a 32-bit environment and 64-bits in a 64-bit environment.
When converting to 64-bit you can simply replace all your ints and longs with NSInteger and you should be good to go.
Important: pay attention to data alignment. LP64 uses natural alignment for all integer data types, but ILP32 uses 4-byte alignment for all integer data types with a size equal to or greater than 4 bytes.
You can read more about 32 to 64 bit conversion in the Official 64-Bit Transition Guide for Cocoa Touch.
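As a quick sanity check, a plain C sketch you can compile for both runtimes to see the table above in action (the output strings are just examples):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    printf("int: %zu, long: %zu, long long: %zu, int64_t: %zu\n",
           sizeof(int), sizeof(long), sizeof(long long), sizeof(int64_t));
#if __LP64__
    printf("built for the 64-bit (LP64) runtime\n");    /* long and NSInteger are 8 bytes */
#else
    printf("built for the 32-bit (ILP32) runtime\n");   /* long and NSInteger are 4 bytes */
#endif
    return 0;
}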
Answering your questions:
How should I declare a long in Objective-C? Is NSInteger appropriate?
You can use either long or NSInteger but NSInteger is more idiomatic IMHO.
But should I be more explicit and use something like int64_t or long directly?
If you expect consistent 64-bit sizes, neither long nor NSInteger will do; you'll have to use int64_t (as Wevah said).
What would be the downside of just using long?
It's not idiomatic and you may have problems if Apple rolls out a new architecture again.
If you need a type of known specific size, use the type that has that known specific size: int64_t.
If you need a generic integer type and the size is not important, go ahead and use int or NSInteger.
NSInteger's length depends on whether you are compiling for 32-bit or 64-bit. It's defined as long for 64-bit and for iPhone, and as int for 32-bit.
So on iPhone the length of NSInteger is the same as the length of a long, which is compiler dependent. Most compilers make long the same length as the native word, i.e. 32 bits on 32-bit architectures and 64 bits on 64-bit architectures.
Given the uncertainty over the width of NSInteger, I use it only for variables that go through Cocoa APIs where NSInteger is specified. If I need a fixed-width type, I go for the ones defined in stdint.h. If I don't care about the width, I use the C built-in types.
If you want to declare something as long, declare it as long. Be aware that long can be 32 or 64 bit, depending on the compiler.
If you want to declare something to be as efficient as possible, and big enough to count items, use NSInteger or NSUInteger. Note that both can be 32 or 64 bits and can actually be different underlying types (int or long) depending on the compiler, which protects you from mixing up types in some cases.
If you want 32 or 64 bits and nothing else, use int32_t, uint32_t, int64_t, uint64_t. Be aware that either type can be unnecessarily inefficient on some compilers.