I'm having trouble communicating position coordinates to a set of motors I have connected on the network. I can send a string just fine, and receive text back from the motor, but I can't seem to send it an int value.
Using NSLog I have determined that the actual value I'm sending is correct; however, I suspect my method of sending it via the output stream is wrong. Any ideas?
My code for sending a 64-bit int value:
uint64_t rawInt = m1;
rawInt <<= 16;
rawInt |= m2;
NSData *buffer = [NSData dataWithBytes: &rawInt length:8];
[outputStream write: [buffer bytes] maxLength:[buffer length]];
Well, your code looks kind of funny. Anyway, for sending a binary integer over the network, you have to know the endianness: the receiver expects either little-endian or big-endian integers.
Here's the code for both:
uint64_t myInt = 42;
uint64_t netInt = CFSwapInt64HostToLittle(myInt); // use this for little endian
uint64_t netInt = CFSwapInt64HostToBig(myInt); // use this for big endian
[outputStream write:(uint8_t*)&netInt maxLength:sizeof(netInt)];
You're probably seeing an endianness problem. Intel CPUs are little-endian, meaning that the least significant byte in a multi-byte value is stored first, the most significant byte last. Reading an 8-byte number from memory and putting it on the network without doing anything will cause the bytes to go out in that order.
Most network protocols (including all internet protocols) expect big-endian data, so the most significant byte appears on the network first and the least significant byte last. Does your network protocol expect that?
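If you were doing the same thing from Swift rather than Objective-C, the equivalent would look roughly like this (only a sketch; it writes to an in-memory stream purely so the snippet is self-contained, a network stream works the same way):
import Foundation
// Open a stream to write to. In the real app this would be the network stream.
let outputStream = OutputStream(toMemory: ())
outputStream.open()
let value: UInt64 = 42
let netValue = value.bigEndian   // or value.littleEndian, whichever the receiver expects
// Write the raw 8 bytes of the already-swapped value.
let written = withUnsafeBytes(of: netValue) { raw in
    outputStream.write(raw.baseAddress!.assumingMemoryBound(to: UInt8.self), maxLength: raw.count)
}
outputStream.close()   // written == 8 on success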
I am in the process of converting some iOS code (Swift) to Android (Kotlin) that is used to control a Bluetooth device (BLE).
I believe there are some differences between Swift and Kotlin around unsigned Ints, etc., but I can't seem to get the same output.
iOS code: outputs 13365
print ("IMPORTANT \(fourthData as NSData)") // Value is 0x3534
var fourth = Int32()
_ = Swift.withUnsafeMutableBytes(of: &fourth, { fourthData.copyBytes(to: $0) } )
print ("IMPORTANT \(fourth)") // 13365
Android code: outputs 13620
@ExperimentalUnsignedTypes // just to make it clear that the experimental unsigned types are used
fun ByteArray.toHexString() = asUByteArray().joinToString("") { it.toString(16).padStart(2, '0') }
Log.i("Decode", fourthData.toHexString()) // 3534
Log.i("Decode", "${fourthData.toHexString().toUInt(16)}") //13620
I have tried Int, UInt, BigInteger, and Long. What am I missing?
As commenters already pointed out, values 13620 and 13365 are 0x3534 and 0x3435 respectively. In other words, the values differ by the byte ordering.
The decimal 32-bit number 13620 is equal to 0x00003534 in hexadecimal, therefore it will be represented by the four bytes 00-00-35-34.
However, computers usually don't store the value in that order. Two commonly used representations are Big Endian and Little Endian. Big Endian stores the bytes in the natural order 00-00-35-34, but Little Endian has the bytes swapped to 34-35-00-00 (least significant byte first).
Java (Kotlin) always uses the Big Endian representation for everything. On the other hand, if you just take the memory layout of an Int in Swift, you get the machine representation. The machine representation is usually Little Endian, but that can differ between architectures. You should always be careful when directly reinterpreting numeric values as bytes or vice versa.
In this specific case, if you are sure the Data contains a Big Endian integer, you can use Int32(bigEndian: fourth) to swap the values from Big Endian to machine representation.
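To make that concrete, here is a minimal Swift sketch (the 4-byte payload below is made up for illustration; since the question's fourthData is only two bytes, the matching fixed-width type there would really be Int16/UInt16):
import Foundation
// Four big-endian bytes for 13620 (0x00003534).
let payload = Data([0x00, 0x00, 0x35, 0x34])
// Copy the raw bytes into an Int32, much as the question does...
var raw = Int32(0)
_ = withUnsafeMutableBytes(of: &raw) { payload.copyBytes(to: $0.bindMemory(to: UInt8.self)) }
// ...then reinterpret them as a big-endian value. This prints 13620 on any architecture.
let value = Int32(bigEndian: raw)
print(value)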
Academically Natured Question:
How can the byte data of an Int32 with the value of -1 be cast into a UInt32? (Can Swift do that?)
Understanding:
I know that -1 isn't a value that can be represented by an unsigned integer, as UInts only hold integers above -1.
However, I also know that both Int32 and UInt32 take up the same amount of space (4 bytes = 32 bits). That byte space should be usable for either type, regardless of whether it represents the same value... which it obviously wouldn't.
Conclusion:
There should be some simple way to take the raw bit data of an Int32 and use it for a UInt32...
Cast the variable via bitPattern (thanks Jthora). Plenty of help here on SO for this:
Converting signed to unsigned in Swift
Int to UInt (and vice versa) bit casting in Swift
For 32 bits, 0xFFFFFFFF => -1 when signed, or 4294967295 when unsigned.
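A minimal Swift sketch of that bit-pattern reinterpretation:
// The 32 bits are reused unchanged; only the interpretation changes.
let signed: Int32 = -1
let unsigned = UInt32(bitPattern: signed)    // 4294967295 (0xFFFFFFFF)
let roundTrip = Int32(bitPattern: unsigned)  // -1 again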
I am struggling with a very simple thing:
I receive some IDs in an HTTP request as strings. I know they represent 64-bit integer ID numbers.
How can I convert them to 64-bit integers (NSNumber or NSInteger)?
Functions like:
[nsstring integerValue],
[nsstring intValue]
seem to be limited to 32 bits (max value: 2147483647).
Any hints?
A data model with 64-bit integer properties compiles fine, so the iPhone supports such numbers, for goodness' sake.
There must be some simple conversion method; this is so common in devices that talk over HTTP.
Have you tried longLongValue?
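For example (Swift here, but it is the same NSString method you would call from Objective-C; the value is just Int64.max as a sample ID):
import Foundation
// longLongValue parses the full 64-bit range, unlike intValue/integerValue on 32-bit devices.
let idString = "9223372036854775807"
let idValue: Int64 = (idString as NSString).longLongValue   // 9223372036854775807
let asNumber = NSNumber(value: idValue)                     // wrap it if an NSNumber is needed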
If they have a fixed length, you could cut them after every 9th digit, take the intValue of the cut parts, and recombine the values with a factor of 10^9. Surely not the most elegant way, but it should work.
I see NSInteger is used quite often, and the typedef for it on the iPhone is a long, so technically I could use it when I expect int64 values. But should I be more explicit and use something like int64_t or long directly? What would be the downside of just using long?
IIRC, long on the iPhone/ARM is 32 bits. If you want a guaranteed 64-bit integer, you should (indeed) use int64_t.
Integer Data Type Sizes
short - ILP32: 2 bytes; LP64: 2 bytes
int - ILP32: 4 bytes; LP64: 4 bytes
long - ILP32: 4 bytes; LP64: 8 bytes
long long - ILP32: 8 bytes; LP64: 8 bytes
It may be useful to know that:
The compiler defines the __LP64__ macro when compiling for the 64-bit runtime.
NSInteger is a typedef of int in a 32-bit environment and of long in a 64-bit environment, so it will be 32 bits on 32-bit and 64 bits on 64-bit.
When converting to 64-bit you can simply replace all your ints and longs with NSInteger and you should be good to go.
Important: pay attention to the alignment of data. LP64 uses natural alignment for all integer data types, but ILP32 uses 4-byte alignment for all integer data types with a size equal to or greater than 4 bytes.
You can read more about 32 to 64 bit conversion in the Official 64-Bit Transition Guide for Cocoa Touch.
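If you want to sanity-check those sizes on whatever you're actually building for, here's a quick Swift sketch:
// Prints the byte sizes on the architecture this runs on.
print(MemoryLayout<Int32>.size)   // 4 under both ILP32 and LP64
print(MemoryLayout<Int>.size)     // 4 on 32-bit, 8 on 64-bit (like NSInteger, Int tracks the word size)
print(MemoryLayout<Int64>.size)   // 8 everywhere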
Answering your questions:
How should I declare a long in Objective-C? Is NSInteger appropriate?
You can use either long or NSInteger but NSInteger is more idiomatic IMHO.
But should I be more explicit and use something like int64_t or long directly?
If you expect consistently 64-bit sizes, neither long nor NSInteger will do; you'll have to use int64_t (as Wevah said).
What would be the downside of just using long?
It's not idiomatic and you may have problems if Apple rolls out a new architecture again.
If you need a type of known specific size, use the type that has that known specific size: int64_t.
If you need a generic integer type and the size is not important, go ahead and use int or NSInteger.
NSInteger's length depends on whether you are compiling for 32-bit or 64-bit: it's defined as long for 64-bit and as int for 32-bit.
So on iPhone the length of NSInteger ends up the same as the length of a long, which is compiler dependent. Most compilers make long the same length as the native word, i.e. 32 bits on 32-bit architectures and 64 bits on 64-bit architectures.
Given the uncertainty over the width of NSInteger, I use it only for variables passed to or from the Cocoa API where NSInteger is specified. If I need a fixed-width type, I go for the ones defined in stdint.h. If I don't care about the width, I use the built-in C types.
If you want to declare something as long, declare it as long. Be aware that long can be 32 or 64 bit, depending on the compiler.
If you want to declare something to be as efficient as possible, and big enough to count items, use NSInteger or NSUInteger. Note that both can be 32 or 64 bits, and can actually be different underlying types (int or long), depending on the compiler, which protects you from mixing up types in some cases.
If you want 32 or 64 bits and nothing else, use int32_t, uint32_t, int64_t, uint64_t. Be aware that either width can be unnecessarily inefficient on some compilers.
We have a Java EE server and servlets providing data to mobile clients (first Java ME, and soon iPhone). The servlet writes out data using the following code:
DataOutputStream dos = new DataOutputStream(out);
dos.writeInt(someInt);
dos.writeUTF(someString);
... and so on
This data is returned to the client as bytes in the HTTP response body, to reduce the number of bytes transferred.
In the iPhone app, the response payload is loaded into an NSData object. Now, after spending hours and hours trying to figure out how to read the data in the Objective-C application, I'm almost ready to give up, as I haven't found any good way to read the data into NSInteger and NSString (corresponding to the protocol above).
Would anyone have any pointers on how to read data from a binary protocol written by a Java app? Any help is greatly appreciated!
Thanks!
You'll have to do the demarshalling yourself; fortunately, it's fairly straightforward. Java's DataOutputStream class writes integers in big-endian (network) format. So, to demarshall the integer, we grab 4 bytes and unpack them into a 4-byte integer.
For UTF-8 strings, DataOutputStream first writes a 2-byte value indicating the number of bytes that follow. We read that in, and then read the subsequent bytes. Then, to decode the string, we can use the NSString method initWithBytes:length:encoding: as so:
NSData *data = ...; // this comes from the HTTP request
int length = [data length];
const uint8_t *bytes = (const uint8_t *)[data bytes];
if(length < 4)
; // oops, handle error
// demarshall the big-endian integer from 4 bytes; assembling the bytes
// explicitly like this already yields a host-order value, so no further
// ntohl() byte swap is needed afterwards
uint32_t myInt = ((uint32_t)bytes[0] << 24) | ((uint32_t)bytes[1] << 16) | ((uint32_t)bytes[2] << 8) | (uint32_t)bytes[3];
// advance to next datum
bytes += 4;
length -= 4;
// demarshall the string length
if(length < 2)
; // oops, handle error
uint16_t myStringLen = (uint16_t)((bytes[0] << 8) | bytes[1]);
// as above, the explicit byte assembly is already in host order, so no ntohs() is needed
bytes += 2;
length -= 2;
// make sure we actually have as much data as we say we have
if(myStringLen > length)
myStringLen = (uint16_t)length;
// demarshall the string
NSString *myString = [[NSString alloc] initWithBytes:bytes length:myStringLen encoding:NSUTF8StringEncoding];
bytes += myStringLen;
length -= myStringLen;
You can (and probably should) write functions to demarshall, so that you don't have to repeat this code for every field you want to demarshall. Also, be extra careful about buffer overflows. You're handling data sent over the network, which you should always distrust. Always verify your data, and always check your buffer lengths.
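For anyone doing the same thing from Swift, here is a sketch of what such helpers might look like (the type and method names are made up for illustration, and it assumes the same big-endian, length-prefixed layout that DataOutputStream produces):
import Foundation

// Reads big-endian integers and writeUTF-style strings out of a Data, front to back.
struct BigEndianReader {
    let data: Data
    var offset = 0

    // Read a big-endian 4-byte integer, or nil if not enough bytes remain.
    mutating func readInt32() -> Int32? {
        guard data.count - offset >= 4 else { return nil }
        let raw = data.subdata(in: offset..<offset + 4)
            .reduce(UInt32(0)) { ($0 << 8) | UInt32($1) }   // assemble the big-endian bytes
        offset += 4
        return Int32(bitPattern: raw)
    }

    // Read a 2-byte big-endian length prefix followed by that many UTF-8 bytes.
    mutating func readUTF() -> String? {
        guard data.count - offset >= 2 else { return nil }
        let length = data.subdata(in: offset..<offset + 2)
            .reduce(0) { ($0 << 8) | Int($1) }
        offset += 2
        guard data.count - offset >= length else { return nil }
        let string = String(data: data.subdata(in: offset..<offset + length), encoding: .utf8)
        offset += length
        return string
    }
}

// Usage:
// var reader = BigEndianReader(data: responseData)
// let someInt = reader.readInt32()
// let someString = reader.readUTF()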
The main thing is to understand the binary data format itself. It doesn't matter what it's written in, so long as you know what the bytes mean.
As such, the docs for DataOutputStream are your best bet. They specify everything (hopefully) about what the binary data will look like.
Next, I would try to basically come up with a class on the iPhone which will read the same format into appropriate data structures. I don't know Objective-C at all, but I'm sure it can't be too hard to read 4 bytes, know that the first byte is the most significant, and do the appropriate bit-twiddling to get the right kind of integer. (Basically: read a byte, shift it left 8 bits, read the next byte and add it into the result, shift the whole lot left 8 bits, and so on.) There may well be more efficient ways of doing it, but get something that works first. When you've got unit tests around it all, you can move on to optimising it.
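For instance, that byte-at-a-time assembly might look like this (a Swift sketch with made-up bytes):
// Shift the accumulator left 8 bits and add the next byte, most significant byte first.
let bytes: [UInt8] = [0x00, 0x00, 0x35, 0x34]
let assembled = bytes.reduce(UInt32(0)) { ($0 << 8) | UInt32($1) }
let value = Int32(bitPattern: assembled)   // 13620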
Don't forget that Objective-C is just C in a pretty dress--and C excels at this kind of bit-grovelling. To a large extent, you should be able to just define a C struct that looks like your data and cast the pointer to your data into a pointer to that struct. Now, exactly which types to use, and if you need to byte-swap anything, will depend on how Java constructs this stream; that's what you'll need to spend time with Java's documentation for.
Fundamentally, though, this is a design smell. You're having this problem because you made assumptions about your client platform that are no longer valid. If it's an option, I'd recommend you offer a second, more portable interface to the same functions (just adding "WithXML" wrappers or something should suffice). This will save you time if you ever end up porting to another platform that doesn't use Java.
Carl, if you really can't change what the server provides, have a look at this class. It should be the pointer you are looking for. That said: the idea of using native Java serialization as a transport format does not sound like a good one. My first choice would have been JSON. If that's still too big, I would probably use something like Thrift or Protocol Buffers. They also provide binary serialization, but in a cross-language manner. (There is also the oldie ASN.1, but that's painful.)