Convert 16-bit colour to 32-bit - iPhone

I've got a 16-bit bitmap image with each colour represented as a single short (2 bytes), and I need to display it in a 32-bit bitmap context. How can I convert a 2-byte colour to a 4-byte colour in C++?
The input format contains each colour in a single short (2 bytes).
The output format is 32-bit RGB. This means each pixel has 3 bytes, I believe?
I need to convert the short value into RGB colours.
Excuse my lack of knowledge of colours, this is my first adventure into the world of graphics programming.

Normally a 16-bit pixel is 5 bits of red, 6 bits of green, and 5 bits of blue data. The minimum-error solution (that is, the one for which the output colour is guaranteed to be as close a match to the input colour as possible) is:
red8bit = (red5bit << 3) | (red5bit >> 2);
green8bit = (green6bit << 2) | (green6bit >> 4);
blue8bit = (blue5bit << 3) | (blue5bit >> 2);
To see why this solution works, let's look at a red pixel. Our 5-bit red is some fraction fivebit/31. We want to translate that into a new fraction eightbit/255. Some simple arithmetic:

fivebit   eightbit
------- = --------
  31        255
Yields:
eightbit = fivebit * 8.226
Or closely (note the squiggly ≈):
eightbit ≈ (fivebit * 8) + (fivebit * 0.25)
That operation is a multiply by 8 and a divide by 4. Ouch - both operations that might take forever on your hardware. Lucky thing they're both powers of two and can be converted to shift operations:
eightbit = (fivebit << 3) | (fivebit >> 2);
The same steps work for green, which has six bits per pixel, but you get an accordingly different answer, of course! The quick way to remember the solution is that you're taking the top bits off of the "short" pixel and adding them on at the bottom to make the "long" pixel. This method works equally well for any data set you need to map up into a higher resolution space. A couple of quick examples:
five bit space    eight bit space    error
00000             00000000             0%
11111             11111111             0%
10101             10101101            +0.10%
00111             00111001            -0.23%
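To make the trick concrete for a whole pixel, here is a small, self-contained C++ sketch for an RGB565 value; the function and parameter names are just illustrative, not from the answer above.

#include <cstdint>

// Expand one RGB565 pixel (rrrrrggg gggbbbbb) to 8-bit-per-channel values
// by replicating the top bits of each channel into the newly opened low bits.
void rgb565_to_rgb888(uint16_t pixel, uint8_t &r8, uint8_t &g8, uint8_t &b8)
{
    uint8_t r5 = (pixel >> 11) & 0x1F;
    uint8_t g6 = (pixel >> 5)  & 0x3F;
    uint8_t b5 =  pixel        & 0x1F;

    r8 = (r5 << 3) | (r5 >> 2);   // 0 -> 0, 31 -> 255
    g8 = (g6 << 2) | (g6 >> 4);   // 0 -> 0, 63 -> 255
    b8 = (b5 << 3) | (b5 >> 2);
}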

Common formats include BGR0,
RGB0, 0RGB, 0BGR. In the code below I have assumed 0RGB. Changing this
is easy, just modify the shift amounts in the last line.
unsigned long rgb16_to_rgb32(unsigned short a)
{
    /* 1. Extract the red, green and blue values
       from rrrr rggg gggb bbbb */
    unsigned long r = (a & 0xF800) >> 11;
    unsigned long g = (a & 0x07E0) >> 5;
    unsigned long b = (a & 0x001F);

    /* 2. Convert them to the 0-255 range.
       There is more than one way. You can just shift them left:
       to 00000000 rrrrr000 gggggg00 bbbbb000

           r <<= 3;
           g <<= 2;
           b <<= 3;

       But that means your image will be slightly dark and
       off-colour, as white 0xFFFF will convert to F8,FC,F8.
       So instead you can scale by multiply and divide: */
    r = r * 255 / 31;
    g = g * 255 / 63;
    b = b * 255 / 31;
    /* This ensures 31/31 converts to 255/255 */

    /* 3. Construct your 32-bit format (this is 0RGB): */
    return (r << 16) | (g << 8) | b;

    /* Or for BGR0:
       return (r << 8) | (g << 16) | (b << 24);
    */
}

Multiply the three (four, when you have an alpha channel) values by 16 - that's it :)
You have a 16-bit colour and want to make it a 32-bit colour. That gives you four times four bits, which you want to convert to four times eight bits. You're adding four bits per channel, and you should add them on the right-hand (low) side of each value. To do this, shift each value left by four bits (multiply by 16). Additionally, you could compensate a little for the inaccuracy by adding 8: the four bits you are adding can represent 0-15, so adding the average of 8 splits the difference.
Update: this only applies to colours that use 4 bits for each channel and have an alpha channel.
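A minimal C++ sketch of that idea, assuming a single 4-bit channel value as input (the function name is mine, just for illustration):

#include <cstdint>

// Expand one 4-bit channel (0-15) to 8 bits by shifting left four places
// and adding 8 to sit in the middle of the 16-value gap.
uint8_t expand4_with_offset(uint8_t c4)
{
    return (uint8_t)((c4 << 4) + 8);   // 0 -> 8, 15 -> 248 (not exactly 255)
}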

There are some questions about the colour model, like: is it HSV or RGB?
If you want to go "ready, fire, aim", I'd try this first.
#include <stdint.h>

uint32_t convert(uint16_t _pixel)
{
    uint32_t pixel;
    pixel = (uint32_t)_pixel;
    return ((pixel & 0xF000) << 16)
         | ((pixel & 0x0F00) << 12)
         | ((pixel & 0x00F0) << 8)
         | ((pixel & 0x000F) << 4);
}
This maps 0xRGBA into the 0xRRGGBBAA byte positions (each 4-bit channel lands in the high nibble of its own byte, with the low nibble left at zero), or possibly 0xHSVA into 0xHHSSVVAA, but it won't do 0xHSVA -> 0xRRGGBBAA.

I'm here long after the fight, but I actually had the same problem with ARGB colour instead, and none of the answers are truly right. Keep in mind that this answer covers a slightly different situation, where we want to do this conversion:
AAAARRRRGGGGBBBB => AAAAAAAARRRRRRRRGGGGGGGGBBBBBBBB
If you want to keep the same ratio for your colour, you simply have to do a cross-multiplication: you want to convert a value x between 0 and 15 to a value between 0 and 255, therefore you want y = 255 * x / 15.
However, 255 = 15 * 17, and 17 is itself 16 + 1: you now have y = 16 * x + x.
That is the same as doing a four-bit shift to the left and then adding the value again (or, more visually, duplicating the nibble: 0b1101 becomes 0b11011101).
Now that you have this, you can compute your whole number by doing:
a = v & 0b1111000000000000
r = v & 0b0000111100000000
g = v & 0b0000000011110000
b = v & 0b0000000000001111

return b | b << 4 | g << 4 | g << 8 | r << 8 | r << 12 | a << 12 | a << 16
Moreover, as the lower bits won't have much effect on the final colour, and if exactness isn't necessary, you can gain some performance by simply multiplying each component by 16:
return b << 4 | g << 8 | r << 12 | a << 16
(All the left-shift amounts look odd because we did not bother doing a right shift first.)
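Wrapped up as a self-contained C++ function (the name is just illustrative), the exact version might look like this, using the fact that duplicating a nibble is the same as multiplying by 0x11:

#include <cstdint>

// AAAARRRRGGGGBBBB -> AAAAAAAARRRRRRRRGGGGGGGGBBBBBBBB.
// Multiplying a nibble by 0x11 duplicates it, so 0x0 -> 0x00 and 0xF -> 0xFF.
uint32_t argb4444_to_argb8888(uint16_t v)
{
    uint32_t a = (v >> 12) & 0xF;
    uint32_t r = (v >> 8)  & 0xF;
    uint32_t g = (v >> 4)  & 0xF;
    uint32_t b =  v        & 0xF;

    return ((a * 0x11u) << 24) | ((r * 0x11u) << 16) | ((g * 0x11u) << 8) | (b * 0x11u);
}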


EditorGUILayout.MaskField issue with large enums

I'm working on an input system that would allow the user to translate input mappings between different input devices and operating systems and potentially define their own.
I'm trying to create a MaskField for an editor window where the user can select from a list of RuntimePlatforms, but selecting individual values results in multiple values being selected.
Mainly for debugging I set it up to generate an equivalent enum RuntimePlatformFlags that it uses instead of RuntimePlatform:
[System.Flags]
public enum RuntimePlatformFlags: long
{
OSXEditor=(0<<0),
OSXPlayer=(0<<1),
WindowsPlayer=(0<<2),
OSXWebPlayer=(0<<3),
OSXDashboardPlayer=(0<<4),
WindowsWebPlayer=(0<<5),
WindowsEditor=(0<<6),
IPhonePlayer=(0<<7),
PS3=(0<<8),
XBOX360=(0<<9),
Android=(0<<10),
NaCl=(0<<11),
LinuxPlayer=(0<<12),
FlashPlayer=(0<<13),
LinuxEditor=(0<<14),
WebGLPlayer=(0<<15),
WSAPlayerX86=(0<<16),
MetroPlayerX86=(0<<17),
MetroPlayerX64=(0<<18),
WSAPlayerX64=(0<<19),
MetroPlayerARM=(0<<20),
WSAPlayerARM=(0<<21),
WP8Player=(0<<22),
BB10Player=(0<<23),
BlackBerryPlayer=(0<<24),
TizenPlayer=(0<<25),
PSP2=(0<<26),
PS4=(0<<27),
PSM=(0<<28),
XboxOne=(0<<29),
SamsungTVPlayer=(0<<30),
WiiU=(0<<31),
tvOS=(0<<32),
Switch=(0<<33),
Lumin=(0<<34),
BJM=(0<<35),
}
In this linked screenshot, only the first 4 options were selected. The integer next to "Platforms: " is the mask itself.
I'm not a bitwise wizard by a large margin, but my assumption is that this occurs because EditorGUILayout.MaskField returns a 32-bit int value, and there are over 32 enum options. Are there any workarounds for this, or is something else causing the issue?
The first thing I've noticed is that all values inside that enum are the same (zero), because you are shifting the literal 0 to the left. You can observe this by logging your values with this script.
// Shifts the value 0 to the left, printing "0" 36 times.
for(int i = 0; i < 36; i++){
Debug.Log(System.Convert.ToString((0 << i), 2));
}
// Shifts the value 1 to the left. Note that the results wrap around once i reaches 32, because the literal 1 is a 32-bit int.
for(int i = 0; i < 36; i++){
Debug.Log(System.Convert.ToString((1 << i), 2));
}
The reason inheriting from long does not work on its own is the bit shifting itself. Check out this example I found about the issue:
UInt32 x = ....;
UInt32 y = ....;
UInt64 result = (x << 32) + y;
The programmer intended to form a 64-bit value from two 32-bit ones by shifting 'x' by 32 bits and adding the most significant and the least significant parts. However, as 'x' is a 32-bit value at the moment when the shift operation is performed, shifting by 32 bits will be equivalent to shifting by 0 bits, which will lead to an incorrect result.
So you should also make the shifted value a long, either with a literal suffix or a cast, like this:
public enum RuntimePlatformFlags : long {
OSXEditor = (1 << 0),
OSXPlayer = (1 << 1),
WindowsPlayer = (1 << 2),
OSXWebPlayer = (1 << 3),
// With literals.
tvOS = (1L << 32),
Switch = (1L << 33),
// Or with casts.
Lumin = ((long)1 << 34),
BJM = ((long)1 << 35),
}

Find most significant bit in Swift

I need to find the value (or position) of the most significant bit (MSB) of an integer in Swift.
Eg:
Input number: 9
Input as binary: 1001
MS value as binary: 1000 -> (which is 8 in decimal)
MS position as decimal: 3 (because 1<<3 == 1000)
Many processors (Intel, AMD, ARM) have instructions for this. In C, these are exposed. Are these instructions similarly available in Swift through a library function, or would I need to implement some bit twiddling?
The value is more useful in my case.
If a position is returned, then the value can be easily derived by a single shift.
Conversely, computing position from value is not so easy unless a fast Hamming Weight / pop count function is available.
You can use the flsl() function ("find last set bit, long"):
let x = 9
let p = flsl(x)
print(p) // 4
The result is 4 because flsl() and the related functions number the bits starting at 1, the least significant bit.
On Intel platforms you can use the _bit_scan_reverse intrinsic,
in my test in a macOS application this translated to a BSR
instruction.
import _Builtin_intrinsics.intel
let x: Int32 = 9
let p = _bit_scan_reverse(x)
print(p) // 3
You can use the properties leadingZeroBitCount and trailingZeroBitCount to find the most significant bit and the least significant bit.
For example,
let i: Int = 95
let lsb = i.trailingZeroBitCount
let msb = Int.bitWidth - 1 - i.leadingZeroBitCount
print("i: \(i) = \(String(i, radix: 2))") // i: 95 = 1011111
print("lsb: \(lsb) = \(String(1 << lsb, radix: 2))") // lsb: 0 = 1
print("msb: \(msb) = \(String(1 << msb, radix: 2))") // msb: 6 = 1000000
If you look at the disassembly (ARM Mac) in LLDB for the least significant bit code, it uses rbit to reverse the bits and then clz to count the leading zeros. (ARM Reference)
** 15 let lsb = i.trailingZeroBitCount
0x100ed947c <+188>: rbit x9, x8
0x100ed9480 <+192>: clz x9, x9
0x100ed9484 <+196>: mov x10, x9
0x100ed9488 <+200>: str x10, [sp, #0x2d8]

Bitwise operations in Swift, reading values from beacon data

I need some help interpreting a formula. This is from the documentation of a beacon I am experimenting with. I have written it in Swift but I can't get it to work. No matter the values the temperature variable ends up as 0.
From documentation:
The major ID broadcasts the most significant 8 bits of the humidity and the most significant 8 bits of the temperature, and the
minor ID broadcasts the next 2 bits of temperature (for a total of the 10 most significant bits) and the 14 least significant bits
of the minor ID as the really Minor configured by user.
So the humidity is 8 bits in total, and the temperature is 10 bits in total.
Example:
So the humidity:
uint16_t Humidity = Major(As Hex value) & 0xFF00;
The temperature:
uint16_t temperature = ((Major(As Hex value) & 0x00FF) << 8) & ((Minor(As Hex value) & 0xC000) >> 8);
The really Minor:
uint16_t Real Minor = Minor(As Hex value) & 0x03FF;
This is what I came up with, and it seems correct, but the result from the last bitwise AND returns 0:
let majorAnd = UInt16(beacon.major) & 0x00FF
let majorShift = majorAnd << 8
let minorAnd = UInt16(beacon.minor) & 0xC000
let minorShift = minorAnd >> 8
let temperatureResult = majorShift & minorShift
Your problem is here:
let temperatureResult = majorShift & minorShift
replace it with:
let temperatureResult = majorShift | minorShift
Bitwise AND & is only going to give a result when there are bits in common between the two operands. In your case they are mutually exclusive, so you should combine them with bitwise OR | instead.
There is also a problem with the way you are shifting the values. Here is the corrected solution:
let majorAnd = UInt16(beacon.major) & 0x00FF
let majorShift = majorAnd << 2 // make space for the last 2 bits
let minorAnd = UInt16(beacon.minor) & 0xC000
let minorShift = minorAnd >> 14 // shift off the unwanted 14 bits
let temperatureResult = majorShift | minorShift
You'll need to shift your humidity as well:
let humidity = (UInt16(beacon.major) & 0xFF00) >> 8 // parentheses needed: >> binds tighter than &
In the two shift right >> cases above, as a shortcut, you can skip the masking because those bits are being tossed anyway:
let minorShift = UInt16(beacon.minor) >> 14
let humidity = UInt16(beacon.major) >> 8
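Putting the corrected pieces together, a C-style sketch of the whole decode might look like the following; it assumes major and minor are the raw 16-bit values with the layout from the documentation quoted above, and the function names are just illustrative:

#include <cstdint>

// Assumed layout from the documentation quoted above:
// major = [humidity: 8 bits][temperature high 8 bits]
// minor = [temperature low 2 bits][remaining 14 bits]
uint16_t humidity_from(uint16_t major)
{
    return major >> 8;                        // top 8 bits of major
}

uint16_t temperature_from(uint16_t major, uint16_t minor)
{
    uint16_t high8 = (major & 0x00FF) << 2;   // make room for the last 2 bits
    uint16_t low2  = minor >> 14;             // top 2 bits of minor
    return high8 | low2;                      // 10-bit temperature reading
}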

iPhone SDK << meaning?

Hi, another silly simple question. I have noticed that certain typedefs in Apple's frameworks use the symbol "<<". Can anyone tell me what that means?
enum {
UIViewAutoresizingNone = 0,
UIViewAutoresizingFlexibleLeftMargin = 1 << 0,
UIViewAutoresizingFlexibleWidth = 1 << 1,
UIViewAutoresizingFlexibleRightMargin = 1 << 2,
UIViewAutoresizingFlexibleTopMargin = 1 << 3,
UIViewAutoresizingFlexibleHeight = 1 << 4,
UIViewAutoresizingFlexibleBottomMargin = 1 << 5
};
typedef NSUInteger UIViewAutoresizing;
Edit: Alright, so I now understand how and why you would use the left bit-shift; my next question is, how would I test to see if the value had a certain trait, using an if/then statement or a switch/case?
This is a way to create constants that would be easy to mix. For example you can have an API to order an ice cream and you can choose any of vanilla, chocolate and strawberry flavours. You could use booleans, but that’s a bit heavy:
- (void) iceCreamWithVanilla: (BOOL) v chocolate: (BOOL) ch strawberry: (BOOL) st;
A nice trick to solve this is using numbers, where you can mix the flavours using simple adding. Let’s say 1 for vanilla, 2 for chocolate and 4 for strawberry:
- (void) iceCreamWithFlavours: (NSUInteger) flavours;
Now if the number has its rightmost bit set, it’s got vanilla flavour in it, another bit stands for chocolate and the third bit from right is strawberry. For example vanilla + chocolate would be 1+2=3 decimal (011 in binary).
The bitshift operator x << y takes the left number (x) and shifts its bits y times. It’s a good tool to create numeric constants:
1 << 0 = 001 // vanilla
1 << 1 = 010 // chocolate
1 << 2 = 100 // strawberry
Voila! Now when you want a view with flexible left margin and flexible right margin, you can mix the flags using bitwise OR: FlexibleRightMargin | FlexibleLeftMargin → 1<<2 | 1<<0 → 100 | 001 → 101. On the receiving end the method can mask the interesting bit using bitwise AND:
// 101 & 100 = 100 or 4 decimal, which boolifies as YES
BOOL flexiRight = givenNumber & FlexibleRightMargin;
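To answer the edit directly: you test a flag with bitwise AND inside an if statement; a switch isn't a natural fit, because a combined mask isn't a single constant. A small C-style sketch with illustrative flag values analogous to the UIViewAutoresizing ones:

// Illustrative flag values analogous to the enum above.
enum {
    FlexibleLeftMargin  = 1 << 0,
    FlexibleWidth       = 1 << 1,
    FlexibleRightMargin = 1 << 2
};

void checkFlags(unsigned mask)
{
    if (mask & FlexibleRightMargin) {
        // the flexible-right-margin bit is set
    }
    if ((mask & FlexibleWidth) == 0) {
        // the flexible-width bit is not set
    }
}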
Hope that helps.
The << means that all bits in the expression on the left side are shifted left by the amount on the right side of the operator
so 1 << 1 means:
0001 becomes 0010 (those are binary numbers)
another example:
0001 0100 << 2 = 0101 0000
Most of the time a shift left by one is the same as multiplying by 2 (and shifting left by n multiplies by 2^n).
Exception:
when high bits are set and you shift them left (in a 16-bit integer, 1000 0000 0000 0000 << 1), they will be discarded or wrapped around (I don't know how it is done in each language).
It's a bit shift.
In C-inspired languages, the left and right shift operators are "<<" and ">>", respectively. The number of places to shift is given as the second argument to the shift operator.
Bit Shift!!!
For example
500 >> 4 = 31,
Original:  111110100
1st Shift: 011111010
2nd Shift: 001111101
3rd Shift: 000111110
4th Shift: 000011111 which equals 31.
Same as
500/16 = 31
500/2^4 = 31
Bitwise shift left. For more info see the Wikipedia article.

Three boolean values saved in one tinyint

Probably a simple question, but I seem to be suffering from programmer's block. :)
I have three boolean values: A, B, and C. I would like to save the state combination as an unsigned tinyint (max 255) into a database and be able to derive the states from the saved integer.
Even though there are only a limited number of combinations, I would like to avoid hard-coding each state combination to a specific value (something like if A=true and B=true has the value 1).
I tried assigning values to the variables (A=1, B=2, C=3) and then adding them, but I can't differentiate between, for example, A and B being true versus only C being true.
I am stumped but pretty sure that it is possible.
Thanks
Binary maths, I think. Choose values that are powers of 2 (1, 2, 4, 8, etc.), then you can use the bitwise AND operator & to determine each state.
Say A = 1, B = 2, C = 4
00000111 => A, B and C => 7
00000101 => A and C => 5
00000100 => C => 4
Then to determine them:
if( val & 4 ) // same as if (C)
if( val & 2 ) // same as if (B)
if( val & 1 ) // same as if (A)
if((val & 4) && (val & 2) ) // same as if (C and B)
No need for a state table.
Edit: to reflect a comment.
If the tinyint has a maximum value of 255, you have 8 bits to play with and can store 8 boolean values in there.
Binary math, as others have said.
Encoding:
myTinyInt = A*1 + B*2 + C*4 (assuming you convert A, B, C to 0 or 1 beforehand)
Decoding:
bool A = (myTinyInt & 1) != 0 (& is the bitwise AND operator in many languages)
bool B = (myTinyInt & 2) != 0
bool C = (myTinyInt & 4) != 0
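A minimal C++ sketch of that encode/decode (the function names are just illustrative):

#include <cstdint>

// Pack three booleans into one byte for the tinyint column, and unpack again.
uint8_t encodeFlags(bool a, bool b, bool c)
{
    return (a ? 1 : 0) | (b ? 2 : 0) | (c ? 4 : 0);
}

void decodeFlags(uint8_t packed, bool &a, bool &b, bool &c)
{
    a = (packed & 1) != 0;
    b = (packed & 2) != 0;
    c = (packed & 4) != 0;
}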
I'll add that you should find a way to avoid magic numbers. You can build masks into named constants using the left (logical/bit) shift, where the shift amount is the bit position of the flag of interest in the bit field. (Wow... that makes almost no sense.) An example in C++ would be:
#include <cstdint>

enum Flags {
    kBitMask_A = (1 << 0),
    kBitMask_B = (1 << 1),
    kBitMask_C = (1 << 2),
};

void example()
{
    uint8_t byte = 0;           // byte = 0b00000000
    byte |= kBitMask_A;         // Set A,   byte = 0b00000001
    byte |= kBitMask_C;         // Set C,   byte = 0b00000101
    if (byte & kBitMask_A) {    // Test A, (0b00000101 & 0b00000001) is non-zero
        byte &= ~kBitMask_A;    // Clear A, byte = 0b00000100
    }
}
In any case, I would recommend looking for Bitset support in your favorite programming language. Many languages will abstract the logical operations away behind normal arithmetic or "test/set" operations.
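For instance, C++'s std::bitset hides the masking and shifting behind set/test calls; a small sketch (the function name is just illustrative):

#include <bitset>
#include <cstdint>

// Pack A, B, C with std::bitset instead of hand-rolled masks.
uint8_t packWithBitset(bool a, bool b, bool c)
{
    std::bitset<3> flags;               // bit 0 = A, bit 1 = B, bit 2 = C
    flags.set(0, a);
    flags.set(1, b);
    flags.set(2, c);
    return (uint8_t)flags.to_ulong();   // 0-7, fits comfortably in a tinyint
}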
Need to use binary...
A = 1,
B = 2,
C = 4,
D = 8,
E = 16,
F = 32,
G = 64,
H = 128
This means A + B = 3 but C = 4, so you'll never have two conflicting values. I've listed the maximum you can have for a single byte: 8 values (or bits).