POINTER bit size - CODESYS

The CODESYS documentation says
The result of the difference between two pointers is of type DWORD, even on 64-bit platforms, when the pointers are 64-bit pointers.
From this, I guessed that pointers in CODESYS are 32-bit on x86 platforms and 64-bit on x64 platforms. Is this true?
I tried running CODESYS_Control_Win_V3 and CODESYS_Control_Win_V3 x64 in simulation mode (CODESYS 3.5 SP16) and in both cases the pointers were 64-bit, but I don't have a real x86 PLC (only x64), so I can't verify this on a real device. Could somebody with an x86 PLC test this and give me their results?
EDIT: Strangely enough, I have 2 separate projects open, and in both I tried ptr := ptr + {some DINT variable};. In one I get the warning Implicit conversion from signed Type 'DINT' to unsigned Type 'LWORD', while in the other I get Implicit conversion from signed Type 'DINT' to unsigned Type 'DWORD'.
EDIT2: I tried this in a test project:
p: POINTER TO STRING := ADR(str);
pp: POINTER TO POINTER TO STRING := ADR(p);
sizep: DINT := SIZEOF(p); // evaluates to 8
sizepp: DINT := SIZEOF(pp); // evaluates to 8
Does that mean they are always 8 bytes?

The size of a pointer is 4 bytes on a 32-bit runtime and 8 bytes on a 64-bit runtime.
The sentence you found in the documentation just says that the compiler expects a DWORD when you take the difference of two pointers.
Meaning, you will get that warning when you try to do something like this:
diTest := pTest - pTest2;
where diTest is a DINT and pTest and pTest2 are two pointers.
This also means you may lose information if you assign the difference of two pointers to a DWORD on a 64-bit system.
In fact, you will lose 4 bytes:
a DWORD is 4 bytes long, while pointers on 64-bit systems are 8 bytes long.
In order to store the addresses of your pointers in a cross-platform way, use the PVOID type, which is 4 bytes on 32-bit and 8 bytes on 64-bit systems. PVOID is available in the CAA Types library.
Alternatively, you can use __XWORD directly, as PVOID is an alias of __XWORD, which is compiled to LWORD on 64-bit platforms and DWORD on 32-bit platforms.
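The same truncation hazard can be sketched in plain C (an analogy only, not CODESYS code; here uintptr_t plays the role of PVOID/__XWORD and uint32_t the role of DWORD):
#include <stdint.h>
#include <stdio.h>
int main(void)
{
    int buf[2];
    /* Pointer-sized integer: 8 bytes on a 64-bit build, 4 bytes on a 32-bit build */
    uintptr_t full = (uintptr_t)&buf[1];
    /* Truncating to 32 bits is what storing the value in a DWORD would do on x64 */
    uint32_t truncated = (uint32_t)full;
    printf("pointer-sized: %zu bytes, DWORD-sized: %zu bytes\n",
           sizeof full, sizeof truncated);  /* e.g. 8 vs 4 on a 64-bit runtime */
    return 0;
}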

Related

Swift to Kotlin puzzle

I am in the process of converting some iOS code (Swift) to Android (Kotlin) that is used to control a Bluetooth device (BLE).
I believe there are some differences between Swift and Kotlin around unsigned Ints etc., but I can't seem to get the same output.
iOS code: outputs 13365
print ("IMPORTANT \(fourthData as NSData)") // Value is 0x3534
var fourth = Int32()
_ = Swift.withUnsafeMutableBytes(of: &fourth, { fourthData.copyBytes(to: $0) } )
print ("IMPORTANT \(fourth)") // 13365
Android code: outputs 13620
@ExperimentalUnsignedTypes // just to make it clear that the experimental unsigned types are used
fun ByteArray.toHexString() = asUByteArray().joinToString("") { it.toString(16).padStart(2, '0') }
Log.i("Decode", fourthData.toHexString()) // 3534
Log.i("Decode", "${fourthData.toHexString().toUInt(16)}") //13620
I have tried Int, UInt, BigInteger and Long. What am I missing?
As commenters already pointed out, values 13620 and 13365 are 0x3534 and 0x3435 respectively. In other words, the values differ by the byte ordering.
The decimal 32-bit number 13620 is equal to 0x00003534 in hexadecimal, so it is represented by the four bytes 00-00-35-34.
However, computers usually don't store the value in that order. Two commonly used representations are Big Endian and Little Endian. Big Endian stores the bytes in the natural order 00-00-35-34, while Little Endian stores the least significant byte first, so the same value is laid out as 34-35-00-00.
Java (and therefore Kotlin) always uses Big Endian representation for everything. On the other hand, if you just take the memory layout of an Int in Swift, you get the machine representation. The machine representation is usually Little Endian, but that can differ between architectures. You should always be careful when directly reinterpreting numeric values as bytes or vice versa.
In this specific case, if you are sure the Data contains a Big Endian integer, you can use Int32(bigEndian: fourth) to convert the value from Big Endian to the machine representation.
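A minimal C sketch of the two interpretations (hypothetical helper names read_be16/read_le16, just to show the arithmetic; not the Swift or Kotlin APIs used above):
#include <stdint.h>
#include <stdio.h>
/* Interpret two bytes as a 16-bit value in each byte order */
static unsigned read_be16(const uint8_t *b) { return (unsigned)((b[0] << 8) | b[1]); }
static unsigned read_le16(const uint8_t *b) { return (unsigned)((b[1] << 8) | b[0]); }
int main(void)
{
    const uint8_t data[2] = { 0x35, 0x34 };          /* the bytes in fourthData */
    printf("big endian:    %u\n", read_be16(data));  /* 0x3534 = 13620, the Kotlin result */
    printf("little endian: %u\n", read_le16(data));  /* 0x3435 = 13365, the Swift result */
    return 0;
}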

IEC61131-3 directly represented variables: data width and datatype

Directly represented variables (DRV) in IEC61131-3 languages include in their "addresses" a data-width specifier: X for 1 bit, B for byte, W for word, D for dword, etc.
Furthermore, when a DRV is declared, an IEC data type is specified, as for any variable (BYTE, WORD, INT, REAL...).
I'm not sure how these two things are related. Are they orthogonal or not? Can one define a REAL variable with a W (word) address? What would be the expected result?
A book says:
Assigning a data type to a flag or I/O address enables the programming
system to check whether the variable is being accessed correctly. For
example, a variable declared by AT %QD3 : DINT; cannot be
inadvertently accessed with UINT or REAL.
which does not make things clearer for me. Take for example this fragment (recall that W means Word, i.e. 16 bits, and both DINT and REAL correspond to 32 bits):
X AT %MW3 : DINT;
Y AT %MD4.1 : DINT;
Z AT %MD4.1 : REAL;
The first line maps a 32-bit IEC variable to a 16-bit location. Is this legal? Would the write/read be equivalent to a "cast", or what?
The other lines declare two 32-bit IEC variables of different types that point to the same address (I guess this should be legal). What is the expected result when reading or writing?
Like everything in the PLC world, it's all vendor- and model-specific, unfortunately.
The Siemens compiler would not let you declare a REAL at an address with a bit component like MD4.1; it allowed only MD4, and the data width had to be a double word (MB4 was not allowed).
Reading would not be equivalent to a cast. For example, you declare MW2 as an integer and copy some value there; the PLC stores the integer in, let's say, two's complement format. Later in the program you read MD2 as a REAL. The PLC does not try to convert the integer to a real; it just blindly reads the bytes and treats them as a REAL, regardless of what was saved or declared there. There is no automatic casting.
This is how things worked in Siemens S7 PLCs. But you have to be very careful, since each vendor does things its own way.
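The "blind read" can be mimicked in plain C (an analogy, not IEC 61131-3 code): copying the bytes of a 32-bit integer into a float relabels the bits without converting the value.
#include <stdint.h>
#include <string.h>
#include <stdio.h>
int main(void)
{
    int32_t di = 1234;              /* like writing a DINT to a marker address */
    float re;                       /* like reading the same address as REAL */
    memcpy(&re, &di, sizeof re);    /* same 4 bytes, no numeric conversion */
    printf("%d reinterpreted as float: %g\n", (int)di, re);  /* a tiny denormal, not 1234.0 */
    return 0;
}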

Best pratice for typedef of uint32

On a system where both long and int are 4 bytes, which is best and why?
typedef unsigned long u32;
or
typedef unsigned int u32;
note: uint32_t is not an option
Nowadays every platform has stdint.h or its C++ equivalent cstdint, which defines uint32_t. Please use the standard type rather than creating your own.
http://pubs.opengroup.org/onlinepubs/7999959899/basedefs/stdint.h.html
http://www.cplusplus.com/reference/cstdint/
http://msdn.microsoft.com/en-us/library/hh874765.aspx
The size will be the same for both, so it depends only on your use.
If you need to store decimal values, use long.
A better and more complete answer is here:
https://stackoverflow.com/questions/271076/what-is-the-difference-between-an-int-and-a-long-in-c/271132
Edit: I'm not sure about decimal values with long; if someone can confirm, thanks.
Since you said the standard uint32_t is not an option, and long and int are both correct on 32-bit machines, I'll say
typedef unsigned int u32;
is a little better, because on two popular 64-bit machine data models (LLP64 and LP64), int is still 32-bit, while long could be 32-bit or 64-bit. See 64-bit data models
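If stdint.h really is unavailable, one hedged way to keep the typedef honest is to let the preprocessor pick the type from the platform limits and to assert the width at compile time (a sketch, not a complete portability layer):
#include <limits.h>
#if UINT_MAX == 0xFFFFFFFFu
typedef unsigned int u32;    /* int is 32-bit on ILP32, LP64 and LLP64 */
#elif ULONG_MAX == 0xFFFFFFFFu
typedef unsigned long u32;   /* fall back to long where int is narrower */
#else
#error "no 32-bit unsigned type found"
#endif
/* Compile-time width check; with C11 you could use _Static_assert instead */
typedef char u32_must_be_4_bytes[(sizeof(u32) == 4) ? 1 : -1];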

count(*) type compatibility error with Database.PostgreSQL.Simple?

The error is
*** Exception: Incompatible {errSQLType = "int8", errHaskellType = "Int", errMessage = "types incompatible"}
It looks like any value returned by count(*) in the query must be converted into Integer rather than Int. If I change those specific variables to type Integer, the queries work.
But this error wasn't being raised on another machine with exactly the same code. The first machine was 32-bit and the other one 64-bit. That's the only difference I could discern.
Does anyone have any insight into what is going on?
The PostgreSQL count() function returns a bigint type; see
http://www.postgresql.org/docs/9.2/static/functions-aggregate.html
A bigint is 8 bytes;
see http://www.postgresql.org/docs/9.2/static/datatype-numeric.html
The Haskell Int is only guaranteed to cover roughly the range ±2^29, which implies it may be only a 4-byte integer.
http://www.haskell.org/ghc/docs/latest/html/libraries/base/Data-Int.html
It is therefore normal that PostgreSQL, or its API, will not do an implicit narrowing conversion that loses precision.
So use the Haskell Int64 type, or cast count(*) to integer in the query.
As documented in the FromField module, postgresql-simple will only do client-side conversions between numerical types when there isn't any possibility of overflow or loss of precision. Note especially the list of types in the haddocks for the instance FromField Int: "int2, int4, and if compiled as 64-bit code, int8 as well. This library was compiled as 32-bit code." The latter part of that comment is of course specific to the build that hackage itself performs.
On 32-bit platforms, Int is a 32-bit integer, and on 64-bit platforms, Int is a 64-bit integer. If you use Int32 you'll get the same exception. You can use Int64 or the arbitrary-precision Integer type to avoid this problem on both kinds of platform.
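For intuition, the narrowing that the library refuses to do silently looks like this in plain C (an analogy only; postgresql-simple itself is Haskell and raises Incompatible instead of truncating):
#include <stdint.h>
#include <stdio.h>
int main(void)
{
    int64_t big_count = 3000000000LL;        /* a count(*) result larger than 2^31-1 */
    int32_t narrowed  = (int32_t)big_count;  /* silent narrowing corrupts the value */
    printf("%lld -> %d\n", (long long)big_count, (int)narrowed);  /* typically 3000000000 -> -1294967296 */
    return 0;
}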

How should I declare a long in Objective-C? Is NSInteger appropriate?

I see NSInteger is used quite often and the typedef for it on the iPhone is a long, so technically I could use it when I expect int64 values. But should I be more explicit and use something like int64_t or long directly? What would be the downside of just using long?
IIRC, long on the iPhone/ARM is 32 bits. If you want a guaranteed 64-bit integer, you should (indeed) use int64_t.
Integer Data Types Sizes
short - ILP32: 2 bytes; LP64: 2 bytes
int - ILP32: 4 bytes; LP64: 4 bytes
long - ILP32: 4 bytes; LP64: 8 bytes
long long - ILP32: 8 bytes; LP64: 8 bytes
It may be useful to know that:
The compiler defines the __LP64__ macro when compiling for the 64-bit runtime.
NSInteger is a typedef of long, so it will be 32 bits in a 32-bit environment and 64 bits in a 64-bit environment.
When converting to 64-bit you can simply replace all your ints and longs with NSInteger and you should be good to go.
Important: pay attention to data alignment. LP64 uses natural alignment for all integer data types, but ILP32 uses 4-byte alignment for all integer data types with a size equal to or greater than 4 bytes.
You can read more about 32 to 64 bit conversion in the Official 64-Bit Transition Guide for Cocoa Touch.
Answering your questions:
How should I declare a long in Objective-C? Is NSInteger appropriate?
You can use either long or NSInteger but NSInteger is more idiomatic IMHO.
But should I be more explicit and use something like int64_t or long directly?
If you expect consistently 64-bit sizes, neither long nor NSInteger will do; you'll have to use int64_t (as Wevah said).
What would be the downside of just using long?
It's not idiomatic and you may have problems if Apple rolls out a new architecture again.
If you need a type of known specific size, use the type that has that known specific size: int64_t.
If you need a generic integer type and the size is not important, go ahead and use int or NSInteger.
NSInteger's length depends on whether you are compiling for 32-bit or 64-bit. It's defined as long for 64-bit and iPhone, and as int for 32-bit.
So on iPhone the length of NSInteger is the same as the length of a long, which is compiler-dependent. Most compilers make long the same length as the native word, i.e. 32 bits on 32-bit architectures and 64 bits on 64-bit architectures.
Given the uncertainty over the width of NSInteger, I use it only for variables passed to the Cocoa API where NSInteger is specified. If I need a fixed-width type, I go for the ones defined in stdint.h. If I don't care about the width, I use the C built-in types.
If you want to declare something as long, declare it as long. Be aware that long can be 32 or 64 bit, depending on the compiler.
If you want to declare something to be as efficient as possible, and big enough to count items, use NSInteger or NSUInteger. Note that both can be 32 or 64 bits, and can actually be different underlying types (int or long), depending on the compiler, which protects you from mixing up types in some cases.
If you want 32 or 64 bits and nothing else, use int32_t, uint32_t, int64_t, uint64_t. Be aware that either type can be unnecessarily inefficient on some compilers.
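To see the widths from the table above on a given build, a tiny C program is enough (the comments assume an LP64 build such as 64-bit iOS; an ILP32 build prints 4 for long):
#include <stdio.h>
#include <stdint.h>
int main(void)
{
    printf("int:       %zu bytes\n", sizeof(int));        /* 4 on ILP32 and LP64 */
    printf("long:      %zu bytes\n", sizeof(long));       /* 4 on ILP32, 8 on LP64 */
    printf("long long: %zu bytes\n", sizeof(long long));  /* 8 on both */
    printf("int64_t:   %zu bytes\n", sizeof(int64_t));    /* always 8 */
    return 0;
}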