ObjC getting 64 bit number from stream - iphone

I am successfully passing a 64-bit number from an objC client to a Java client, but am unable to send one to an objC client.
Java Code
/*
* Retrieve a double (64-bit) number from the stream.
*/
private double getDouble() throws IOException
{
byte[] buffer = getBytes(8);
long bits =
((long)buffer[0] & 0x0ff) |
(((long)buffer[1] & 0x0ff) << 8) |
(((long)buffer[2] & 0x0ff) << 16) |
(((long)buffer[3] & 0x0ff) << 24) |
(((long)buffer[4] & 0x0ff) << 32) |
(((long)buffer[5] & 0x0ff) << 40) |
(((long)buffer[6] & 0x0ff) << 48) |
(((long)buffer[7] & 0x0ff) << 56);
return Double.longBitsToDouble(bits);
}
objC code
/*
* Retrieve a double (64-bit) number from the stream.
*/
- (double)getDouble
{
NSRange dblRange = NSMakeRange(0, 8);
char buffer[8];
[stream getBytes:buffer length:8];
[stream replaceBytesInRange:dblRange withBytes:NULL length:0];
long long bits =
((long long)buffer[0] & 0x0ff) |
(((long long)buffer[1] & 0x0ff) << 8) |
(((long long)buffer[2] & 0x0ff) << 16) |
(((long long)buffer[3] & 0x0ff) << 24) |
(((long long)buffer[4] & 0x0ff) << 32) |
(((long long)buffer[5] & 0x0ff) << 40) |
(((long long)buffer[6] & 0x0ff) << 48) |
(((long long)buffer[7] & 0x0ff) << 56);
NSNumber *tempNum = [NSNumber numberWithLongLong:bits];
NSLog(@"\n***********\nsizeof long long %d \n tempNum: %@\nbits %lld",sizeof(long long), tempNum, bits);
return [tempNum doubleValue];
}
the result of NSLog is
sizeof long long 8
tempNum: -4616134021117358511
bits -4616134021117358511
The number should be -1.012345.
The problem is that I am converting the Java code directly to objC in the getDouble func. My middleware already takes the endian issues into account. The simple solution, if the target is little endian, is:
- (double)getDouble
{
NSRange dblRange = NSMakeRange(0, 8);
double swapped;
[stream getBytes:&swapped length:8];
[stream replaceBytesInRange:dblRange withBytes:NULL length:0];
return swapped;
}
Thanks all for input - got a lot of experience and a little understanding from this exercise.

A double and a long long are not the same thing. A long long represents an integer, which has no fractional portion, and a double represents a floating-point number, which has a fractional portion. These two types have completely different ways of representing their values in memory. That is to say, if you were to look at the bits for a long long representing the number 4000 and compare them to the bits for a double representing the number 4000, they would be different.
So as Wevah notes, the first step is for you to use the proper double type, and the correct %f formatter in your call to NSLog().
I would add, though, that you also need to be careful to get your bytes in the native order for the machine your C code is running on. For a detailed description of what I'm referring to, see http://en.wikipedia.org/wiki/Endianness. The short version is that different processors may represent numbers in different ways in memory, and you need to ensure in your code that once you get a pile of bytes from the network, you put them in the right order for your processor before you attempt to interpret them as a number.
Luckily, this is a solved issue, and is easily accounted for by using the CFConvertFloat64SwappedToHost() function from CoreFoundation:
[stream getBytes:buffer length:8];
[stream replaceBytesInRange:dblRange withBytes:NULL length:0];
double myDouble = CFConvertFloat64SwappedToHost(*(CFSwappedFloat64 *)buffer);
NSNumber *tempNum = [NSNumber numberWithDouble:myDouble];
NSLog(@"\n***********\nsizeof double %d \n tempNum: %@\nvalue %f",sizeof(double), tempNum, myDouble);
return [tempNum doubleValue];

You probably want to convert it to a double (possibly/probably via a union; see Jonathan's comment) and use the %f specifier.
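To make that concrete, here is a minimal sketch in plain C (my own illustration, not the poster's code) of the union/memcpy reinterpretation: the 64 bits assembled from the stream are the IEEE 754 bit pattern of the double, so they have to be reinterpreted rather than converted as an integer value, which is exactly what Double.longBitsToDouble does in the Java version.

#include <stdint.h>
#include <string.h>

/* Reinterpret the assembled 64-bit pattern as a double.
   memcpy (or a union) copies the bits; converting the integer value,
   as numberWithLongLong/doubleValue does, gives the wrong result. */
double bits_to_double(uint64_t bits)
{
    double d;
    memcpy(&d, &bits, sizeof d);
    return d;
}

/* Union variant, as hinted at in the answer above. */
double bits_to_double_union(uint64_t bits)
{
    union { uint64_t i; double d; } u;
    u.i = bits;
    return u.d;
}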


EditorGuiLayout.MaskField issue with large enums

I'm working on an input system that would allow the user to translate input mappings between different input devices and operating systems and potentially define their own.
I'm trying to create a MaskField for an editor window where the user can select from a list of RuntimePlatforms, but selecting individual values results in multiple values being selected.
Mainly for debugging I set it up to generate an equivalent enum RuntimePlatformFlags that it uses instead of RuntimePlatform:
[System.Flags]
public enum RuntimePlatformFlags: long
{
OSXEditor=(0<<0),
OSXPlayer=(0<<1),
WindowsPlayer=(0<<2),
OSXWebPlayer=(0<<3),
OSXDashboardPlayer=(0<<4),
WindowsWebPlayer=(0<<5),
WindowsEditor=(0<<6),
IPhonePlayer=(0<<7),
PS3=(0<<8),
XBOX360=(0<<9),
Android=(0<<10),
NaCl=(0<<11),
LinuxPlayer=(0<<12),
FlashPlayer=(0<<13),
LinuxEditor=(0<<14),
WebGLPlayer=(0<<15),
WSAPlayerX86=(0<<16),
MetroPlayerX86=(0<<17),
MetroPlayerX64=(0<<18),
WSAPlayerX64=(0<<19),
MetroPlayerARM=(0<<20),
WSAPlayerARM=(0<<21),
WP8Player=(0<<22),
BB10Player=(0<<23),
BlackBerryPlayer=(0<<24),
TizenPlayer=(0<<25),
PSP2=(0<<26),
PS4=(0<<27),
PSM=(0<<28),
XboxOne=(0<<29),
SamsungTVPlayer=(0<<30),
WiiU=(0<<31),
tvOS=(0<<32),
Switch=(0<<33),
Lumin=(0<<34),
BJM=(0<<35),
}
In this linked screenshot, only the first 4 options were selected. The integer next to "Platforms: " is the mask itself.
I'm not a bitwise wizard by a large margin, but my assumption is that this occurs because EditorGUILayout.MaskField returns a 32bit int value, and there are over 32 enum options. Are there any workarounds for this or is something else causing the issue?
The first thing I've noticed is that all the values inside that enum are the same, because you are shifting the value 0 to the left (which is always 0). You can observe this by logging your values with this script.
// Shifts the value 0 to the left, printing "0" 36 times.
for(int i = 0; i < 36; i++){
Debug.Log(System.Convert.ToString((0 << i), 2));
}
// Shifts the value 1 to the left; note that the int shift count wraps around past 31.
for(int i = 0; i < 36; i++){
Debug.Log(System.Convert.ToString((1 << i), 2));
}
The reason inheriting from long does not work on its own is bit shifting. Check out this example I found about the issue:
UInt32 x = ....;
UInt32 y = ....;
UInt64 result = (x << 32) + y;
The programmer intended to form a 64-bit value from two 32-bit ones by shifting 'x' by 32 bits and adding the most significant and the least significant parts. However, as 'x' is a 32-bit value at the moment when the shift operation is performed, shifting by 32 bits will be equivalent to shifting by 0 bits, which will lead to an incorrect result.
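To see the failure mode in isolation, here is a minimal sketch of the cast-before-shift fix in C (my own illustration, not from the quoted source): the operand has to be widened before the shift so the arithmetic happens in 64 bits.

#include <stdint.h>

uint64_t combine(uint32_t x, uint32_t y)
{
    /* (x << 32) would be performed on a 32-bit operand and is
       useless/undefined; casting first gives the intended result. */
    return ((uint64_t)x << 32) | (uint64_t)y;
}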
So in the enum you should likewise cast the value being shifted (or use a long literal), like this:
public enum RuntimePlatformFlags : long {
OSXEditor = (1 << 0),
OSXPlayer = (1 << 1),
WindowsPlayer = (1 << 2),
OSXWebPlayer = (1 << 3),
// With literals.
tvOS = (1L << 32),
Switch = (1L << 33),
// Or with casts.
Lumin = ((long)1 << 34),
BJM = ((long)1 << 35),
}

Bitshifting and sign

Let me start with the problem:
def word(byte1 : Byte, byte2 : Byte, byte3 : Byte, byte4: Byte) : Int = {
((byte4 << 0)) | ((byte3 << 8)) | ((byte2 << 16)) | ((byte1 << 24))
}
The goal here is pretty simple. Given 4 bytes, pack them in to an Int.
The code above does not work because it appears the shift operator tries to preserve the sign. For example, this:
word(0xFA.toByte, 0xFB.toByte, 0xFC.toByte, 0xFD.toByte).formatted("%02X")
Produces FFFFFFFD when I would have expected FAFBFCFD.
Making the problem smaller:
0xFE.toByte << 8
produces -512 (0xFFFFFE00) rather than 0xFE00, because 0xFE.toByte is -2 and it is sign-extended to an Int before the shift.
How can I do a shift without the sign issues?
AND the bytes with 0xFF to undo the effects of sign extension before the shift:
((byte4 & 0xFF) << 0) | ((byte3 & 0xFF) << 8) | ...
Your suspicion is correct and @user2357112 answers your question.
Now, you can use ByteBuffer as a clean alternative:
import java.nio.ByteBuffer
def word(byte1 : Byte, byte2 : Byte, byte3 : Byte, byte4: Byte) : Int =
ByteBuffer.wrap(Array(byte1, byte2, byte3, byte4)).getInt

Is there a better way to detect endianness in .NET than BitConverter.IsLittleEndian?

It would be nice if the .NET framework's BitConverter class simply provided methods that explicitly returned an array of bytes in the requested endianness.
I've done some functions like this in other code, but is there a shorter more direct way? (efficiency is key since this concept is used a TON in various crypto and password derivation contexts, including PBKDF2, Skein, HMAC, BLAKE2, AES and others)
// convert an unsigned int into an array of bytes BIG ENDIAN
// per the spec section 5.2 step 3 for PBKDF2 RFC2898
static internal byte[] IntToBytes(uint i)
{
byte[] bytes = BitConverter.GetBytes(i);
if (!BitConverter.IsLittleEndian)
{
return bytes;
}
else
{
Array.Reverse(bytes);
return bytes;
}
}
I also see that others struggle with this question, and I haven't seen a good answer yet :( How to deal with 'Endianness'
The way I convert between integers and byte[] is by using bitshifts with fixed endianness. You don't need to worry about host endianness with such code. When you care that much about performance, you should avoid allocating a new array each time.
In my crypto library I use:
public static UInt32 LoadLittleEndian32(byte[] buf, int offset)
{
return
(UInt32)(buf[offset + 0])
| (((UInt32)(buf[offset + 1])) << 8)
| (((UInt32)(buf[offset + 2])) << 16)
| (((UInt32)(buf[offset + 3])) << 24);
}
public static void StoreLittleEndian32(byte[] buf, int offset, UInt32 value)
{
buf[offset + 0] = (byte)value;
buf[offset + 1] = (byte)(value >> 8);
buf[offset + 2] = (byte)(value >> 16);
buf[offset + 3] = (byte)(value >> 24);
}
With big endian you just need to change the shift amounts or the offsets:
public static void StoreBigEndian32(byte[] buf, int offset, UInt32 value)
{
buf[offset + 3] = (byte)value;
buf[offset + 2] = (byte)(value >> 8);
buf[offset + 1] = (byte)(value >> 16);
buf[offset + 0] = (byte)(value >> 24);
}
If you're targeting .net 4.5 it can be useful to mark these methods with [MethodImpl(MethodImplOptions.AggressiveInlining)].
Another performance tip for crypto is avoiding arrays as much as possible. Load the data from the array at the beginning of the function, then run everything using local variables and only in the very end you copy back to the array.
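As a rough sketch of that last tip (in C rather than C#, and with a made-up toy transform, so treat it as an illustration of the pattern only): load the words once, do all the work on locals, and write back once at the end.

#include <stdint.h>

static uint32_t load_le32(const uint8_t *buf)
{
    return (uint32_t)buf[0]
         | ((uint32_t)buf[1] << 8)
         | ((uint32_t)buf[2] << 16)
         | ((uint32_t)buf[3] << 24);
}

static void store_le32(uint8_t *buf, uint32_t v)
{
    buf[0] = (uint8_t)v;
    buf[1] = (uint8_t)(v >> 8);
    buf[2] = (uint8_t)(v >> 16);
    buf[3] = (uint8_t)(v >> 24);
}

/* toy_mix is not a real cipher; it only shows loading into locals,
   working on locals, then storing back once. */
void toy_mix(uint8_t block[8])
{
    uint32_t a = load_le32(block);
    uint32_t b = load_le32(block + 4);

    for (int i = 0; i < 8; i++) {
        a += b;
        b = (b << 7) | (b >> 25);   /* 32-bit rotate left by 7 */
        b ^= a;
    }

    store_le32(block, a);
    store_le32(block + 4, b);
}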

UILabel Convert Unicode(Japanese) and display

After hours of research I gave up.
I receive text data from a WebService. In some cases the text is in Japanese, and the WS returns a Unicode-escaped version of it. For example: \U00e3\U0082\U008f
I know that this is a Japanese char.
I am trying to display this Unicode char or string inside a UILabel.
Since the simple setText method doesn't display the correct chars, I used this (copied) routine:
unichar unicodeValue = (unichar) strtol([[[p innerData] valueForKey:@"title"] UTF8String], NULL, 16);
char buffer[2];
int len = 1;
if (unicodeValue > 127) {
buffer[0] = (unicodeValue >> 8) & (1 << 8) - 1;
buffer[1] = unicodeValue & (1 << 8) - 1;
len = 2;
} else {
buffer[0] = unicodeValue;
}
[[cell title] setText:[[NSString alloc] initWithBytes:buffer length:len encoding:NSUTF8StringEncoding] ];
But no success: the UILabel is empty.
I know that one way could be to convert the chars to hex and then from hex to a String... is there a simpler way?
SOLVED
First you must be sure that your server is sending UTF8 and not UNICODE CODE POINTS. The only way I found is to json_encode strings which contain UNICODE chars.
Then, on iOS, unescape the string as described in this link: Using Objective C/Cocoa to unescape unicode characters, ie \u1234
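For what it's worth, the reason the hand-rolled buffer in the question shows nothing is that copying the raw high and low bytes of a code point does not produce valid UTF-8, so NSUTF8StringEncoding rejects it. A rough C sketch of what a real UTF-8 encoding of a BMP code point looks like (illustration only, not the solution used above):

#include <stdint.h>
#include <stddef.h>

/* Encode a BMP code point (U+0000..U+FFFF, excluding surrogates) as UTF-8.
   Returns the number of bytes written to out (1 to 3). */
size_t utf8_encode_bmp(uint16_t cp, uint8_t out[3])
{
    if (cp < 0x80) {                 /* 1 byte: 0xxxxxxx */
        out[0] = (uint8_t)cp;
        return 1;
    } else if (cp < 0x800) {         /* 2 bytes: 110xxxxx 10xxxxxx */
        out[0] = (uint8_t)(0xC0 | (cp >> 6));
        out[1] = (uint8_t)(0x80 | (cp & 0x3F));
        return 2;
    } else {                         /* 3 bytes: 1110xxxx 10xxxxxx 10xxxxxx */
        out[0] = (uint8_t)(0xE0 | (cp >> 12));
        out[1] = (uint8_t)(0x80 | ((cp >> 6) & 0x3F));
        out[2] = (uint8_t)(0x80 | (cp & 0x3F));
        return 3;
    }
}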

Convert 16bit colour to 32bit

I've got an 16bit bitmap image with each colour represented as a single short (2 bytes), I need to display this in a 32bit bitmap context. How can I convert a 2 byte colour to a 4 byte colour in C++?
The input format contains each colour in a single short (2 bytes).
The output format is 32bit RGB. This means each pixel has 3 bytes I believe?
I need to convert the short value into RGB colours.
Excuse my lack of knowledge of colours, this is my first adventure into the world of graphics programming.
Normally a 16-bit pixel is 5 bits of red, 6 bits of green, and 5 bits of blue data. The minimum-error solution (that is, for which the output color is guaranteed to be as close a match to the input colour) is:
red8bit = (red5bit << 3) | (red5bit >> 2);
green8bit = (green6bit << 2) | (green6bit >> 4);
blue8bit = (blue5bit << 3) | (blue5bit >> 2);
To see why this solution works, let's look at a red pixel. Our 5-bit red is some fraction fivebit/31. We want to translate that into a new fraction eightbit/255. Some simple arithmetic:
fivebit   eightbit
------- = --------
  31        255
Yields:
eightbit = fivebit * 8.226
Or closely (note the squiggly ≈):
eightbit ≈ (fivebit * 8) + (fivebit * 0.25)
That operation is a multiply by 8 and a divide by 4. Owch - both operations that might take forever on your hardware. Lucky thing they're both powers of two and can be converted to shift operations:
eightbit = (fivebit << 3) | (fivebit >> 2);
The same steps work for green, which has six bits per pixel, but you get an accordingly different answer, of course! The quick way to remember the solution is that you're taking the top bits off of the "short" pixel and adding them on at the bottom to make the "long" pixel. This method works equally well for any data set you need to map up into a higher resolution space. A couple of quick examples:
five bit space    eight bit space    error
00000             00000000            0%
11111             11111111            0%
10101             10101010            0.02%
00111             00111001           -1.01%
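Putting the three expressions above together, here is a small C helper (my own sketch of the bit-replication method just described, assuming the usual 5-6-5 layout):

#include <stdint.h>

/* Expand an RGB565 pixel to 8-bit-per-channel values by taking the top
   bits of each channel and replicating them at the bottom. */
void rgb565_to_rgb888(uint16_t pixel, uint8_t *r8, uint8_t *g8, uint8_t *b8)
{
    uint8_t r5 = (pixel >> 11) & 0x1F;   /* rrrrr........... */
    uint8_t g6 = (pixel >> 5)  & 0x3F;   /* .....gggggg..... */
    uint8_t b5 =  pixel        & 0x1F;   /* ...........bbbbb */

    *r8 = (uint8_t)((r5 << 3) | (r5 >> 2));
    *g8 = (uint8_t)((g6 << 2) | (g6 >> 4));
    *b8 = (uint8_t)((b5 << 3) | (b5 >> 2));
}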
Common formats include BGR0, RGB0, 0RGB, 0BGR. In the code below I have assumed 0RGB. Changing this is easy: just modify the shift amounts in the last line.
unsigned long rgb16_to_rgb32(unsigned short a)
{
/* 1. Extract the red, green and blue values */
/* from rrrr rggg gggb bbbb */
unsigned long r = (a & 0xF800) >> 11;
unsigned long g = (a & 0x07E0) >> 5;
unsigned long b = (a & 0x001F);
/* 2. Convert them to 0-255 range:
There is more than one way. You can just shift them left:
to 00000000 rrrrr000 gggggg00 bbbbb000
r <<= 3;
g <<= 2;
b <<= 3;
But that means your image will be slightly dark and
off-colour as white 0xFFFF will convert to F8,FC,F8
So instead you can scale by multiply and divide: */
r = r * 255 / 31;
g = g * 255 / 63;
b = b * 255 / 31;
/* This ensures 31/31 converts to 255/255 */
/* 3. Construct your 32-bit format (this is 0RGB): */
return (r << 16) | (g << 8) | b;
/* Or for BGR0:
return (r << 8) | (g << 16) | (b << 24);
*/
}
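A quick sanity check of that function (hypothetical usage, with the results worked out from the code above):

unsigned long white = rgb16_to_rgb32(0xFFFF); /* 31,63,31 scales to 255,255,255 -> 0x00FFFFFF */
unsigned long red   = rgb16_to_rgb32(0xF800); /* 31,0,0   scales to 255,0,0     -> 0x00FF0000 */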
Multiply the three (four, when you have an alpha layer) values by 16 - that's it :)
You have a 16-bit color and want to make it a 32-bit color. This gives you four times four bits, which you want to convert to four times eight bits. You're adding four bits, but you should add them to the right side of the values. To do this, shift them left by four bits (multiply by 16). Additionally, you could compensate a bit for the inaccuracy by adding 8 (the four added bits can hold 0-15, so adding the midpoint 8 compensates on average).
Update This only applies to colors that use 4 bits for each channel and have an alpha channel.
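For a single 4-bit channel, that suggestion boils down to something like this (my own sketch, assuming 4 bits per channel as the update says):

#include <stdint.h>

/* Expand a 4-bit channel (0-15) to 8 bits by shifting left 4 and
   adding the midpoint 8 of the new low nibble: 0 -> 8, 15 -> 248. */
uint8_t expand4(uint8_t c4)
{
    return (uint8_t)((c4 << 4) | 8);
}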
There are some questions about the model, like: is it HSV or RGB? If you wanna ready-fire-aim, I'd try this first.
#include <stdint.h>
uint32_t convert(uint16_t _pixel)
{
uint32_t pixel;
pixel = (uint32_t)_pixel;
return ((pixel & 0xF000) << 16)
| ((pixel & 0x0F00) << 12)
| ((pixel & 0x00F0) << 8)
| ((pixel & 0x000F) << 4);
}
This maps 0xRGBA -> 0xRRGGBBAA, or possibly 0xHSVA -> 0xHHSSVVAA, but it won't do 0xHSVA -> 0xRRGGBBAA.
I'm here long after the fight, but I actually had the same problem with ARGB color instead, and none of the answers were truly right for that. Keep in mind that this answer addresses a slightly different situation, where we want to do this conversion:
AAAARRRRGGGGBBBB -> AAAAAAAARRRRRRRRGGGGGGGGBBBBBBBB
If you want to keep the same ratio of your color, you simply have to do a cross-multiplication: You want to convert a value x between 0 and 15 to a value between 0 and 255: therefore you want: y = 255 * x / 15.
However, 255 = 15 * 17, and 17 is itself 16 + 1: you now have y = 16 * x + x.
Which is actually the same as doing a four-bit shift to the left and then adding the value again (or, more visually, duplicating the value: 0b1101 becomes 0b11011101).
Now that you have this, you can compute your whole number by doing:
a = v & 0b1111000000000000
r = v & 0b111100000000
g = v & 0b11110000
b = v & 0b1111
return b | b << 4 | g << 4 | g << 8 | r << 8 | r << 12 | a << 12 | a << 16
Moreover, as the lower bits won't have much effect on the final color, and if exactness isn't necessary, you can gain some performance by simply multiplying each component by 16:
return b << 4 | g << 8 | r << 12 | a << 16
(All the left-shift amounts look odd because we did not bother doing a right shift first.)
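Written out in C with explicit types, the exact duplication above becomes (a sketch equivalent to the expression given, extracting each nibble first instead of shifting it in place):

#include <stdint.h>

uint32_t argb4444_to_argb8888(uint16_t v)
{
    uint32_t a = (v >> 12) & 0xF;
    uint32_t r = (v >> 8)  & 0xF;
    uint32_t g = (v >> 4)  & 0xF;
    uint32_t b =  v        & 0xF;

    /* duplicate each nibble, e.g. 0b1101 becomes 0b11011101 */
    return (((a << 4) | a) << 24)
         | (((r << 4) | r) << 16)
         | (((g << 4) | g) << 8)
         |  ((b << 4) | b);
}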