I'm working on an input system that would allow the user to translate input mappings between different input devices and operating systems and potentially define their own.
I'm trying to create a MaskField for an editor window where the user can select from a list of RuntimePlatforms, but selecting individual values results in multiple values being selected.
Mainly for debugging, I set it up to generate an equivalent enum, RuntimePlatformFlags, that it uses instead of RuntimePlatform:
[System.Flags]
public enum RuntimePlatformFlags : long
{
    OSXEditor = (0 << 0),
    OSXPlayer = (0 << 1),
    WindowsPlayer = (0 << 2),
    OSXWebPlayer = (0 << 3),
    OSXDashboardPlayer = (0 << 4),
    WindowsWebPlayer = (0 << 5),
    WindowsEditor = (0 << 6),
    IPhonePlayer = (0 << 7),
    PS3 = (0 << 8),
    XBOX360 = (0 << 9),
    Android = (0 << 10),
    NaCl = (0 << 11),
    LinuxPlayer = (0 << 12),
    FlashPlayer = (0 << 13),
    LinuxEditor = (0 << 14),
    WebGLPlayer = (0 << 15),
    WSAPlayerX86 = (0 << 16),
    MetroPlayerX86 = (0 << 17),
    MetroPlayerX64 = (0 << 18),
    WSAPlayerX64 = (0 << 19),
    MetroPlayerARM = (0 << 20),
    WSAPlayerARM = (0 << 21),
    WP8Player = (0 << 22),
    BB10Player = (0 << 23),
    BlackBerryPlayer = (0 << 24),
    TizenPlayer = (0 << 25),
    PSP2 = (0 << 26),
    PS4 = (0 << 27),
    PSM = (0 << 28),
    XboxOne = (0 << 29),
    SamsungTVPlayer = (0 << 30),
    WiiU = (0 << 31),
    tvOS = (0 << 32),
    Switch = (0 << 33),
    Lumin = (0 << 34),
    BJM = (0 << 35),
}
In this linked screenshot, only the first 4 options were selected. The integer next to "Platforms: " is the mask itself.
I'm not a bitwise wizard by a large margin, but my assumption is that this occurs because EditorGUILayout.MaskField returns a 32-bit int value, and there are over 32 enum options. Are there any workarounds for this, or is something else causing the issue?
The first thing I've noticed is that all values inside that enum are the same, because you are shifting the value 0 to the left, and 0 shifted by any amount is still 0. You can observe this by logging your values with this script:
// Shifts the value 0 to the left, printing "0" 36 times.
for (int i = 0; i < 36; i++) {
    Debug.Log(System.Convert.ToString((0 << i), 2));
}
// Shifts the value 1 to the left. Note that a plain int 1 wraps once
// i reaches 32 (C# masks the shift count to 5 bits for int), so use 1L
// to print values all the way up to 2^35.
for (int i = 0; i < 36; i++) {
    Debug.Log(System.Convert.ToString((1L << i), 2));
}
The reason specifying long as the underlying type does not fix this on its own is bit shifting. Check out this example I found about the issue:
UInt32 x = ....;
UInt32 y = ....;
UInt64 result = (x << 32) + y;
The programmer intended to form a 64-bit value from two 32-bit ones by shifting 'x' by 32 bits and adding the most significant and the least significant parts. However, as 'x' is a 32-bit value at the moment when the shift operation is performed, shifting by 32 bits will be equivalent to shifting by 0 bits, which will lead to an incorrect result.
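Here's a quick C# illustration of that pitfall (the values are arbitrary, just for demonstration):
uint x = 0x12345678, y = 0x9ABCDEF0;
ulong wrong = ((ulong)(x << 32)) + y; // x << 32 == x here: the shift count wraps to 0
ulong right = ((ulong)x << 32) + y;   // widen to 64 bits first, then shift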
So you should also make the value being shifted a long, either with the L literal or a cast, like this:
[System.Flags]
public enum RuntimePlatformFlags : long {
    OSXEditor = (1 << 0),
    OSXPlayer = (1 << 1),
    WindowsPlayer = (1 << 2),
    OSXWebPlayer = (1 << 3),
    // ...
    // With literals.
    tvOS = (1L << 32),
    Switch = (1L << 33),
    // Or with casts.
    Lumin = ((long)1 << 34),
    BJM = ((long)1 << 35),
}
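With the 1L shifts in place, a quick sanity check (a minimal sketch using Unity's Debug.Log) shows that each member now occupies its own bit:
// Combine two flags and verify the mask and membership.
RuntimePlatformFlags mask = RuntimePlatformFlags.OSXEditor | RuntimePlatformFlags.Switch;
Debug.Log((long)mask);                                // 8589934593 (2^0 + 2^33)
Debug.Log(mask.HasFlag(RuntimePlatformFlags.Switch)); // True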
How do I convert a string to an integer in JavaScript?
The simplest way would be to use the native Number function:
var x = Number("1000")
If that doesn't work for you, then there are the parseInt, unary plus, parseFloat with floor, and Math.round methods.
parseInt()
var x = parseInt("1000", 10); // You want to use radix 10
// so you get a decimal number even with a leading 0 in an old browser
// (IE 8, Firefox 20, Chrome 22 and older)
Unary plus
If your string is already in the form of an integer:
var x = +"1000";
floor()
If your string is or might be a float and you want an integer:
var x = Math.floor("1000.01"); // floor() automatically converts string to number
Or, if you're going to be using Math.floor several times:
var floor = Math.floor;
var x = floor("1000.01");
parseFloat()
If you're the type who forgets to put the radix in when you call parseInt, you can use parseFloat and round it however you like. Here I use floor.
var floor = Math.floor;
var x = floor(parseFloat("1000.01"));
round()
Interestingly, Math.round (like Math.floor) will do a string to number conversion, so if you want the number rounded (or if you have an integer in the string), this is a great way, maybe my favorite:
var round = Math.round;
var x = round("1000"); // Equivalent to rounding "1000" to 0 decimal places
Try the parseInt function:
var number = parseInt("10");
But there is a problem. If you try to convert "010" using the parseInt function, older engines detect it as an octal number and return 8. So, you need to specify a radix (from 2 to 36). In this case, base 10.
parseInt(string, radix)
Example:
var result = parseInt("010", 10) == 10; // Returns true
var result = parseInt("010") == 10; // Returns false
Note that parseInt ignores bad data after parsing anything valid.
This GUID will parse as 51:
var result = parseInt('51e3daf6-b521-446a-9f5b-a1bb4d8bac36', 10) == 51; // Returns true
There are two main ways to convert a string to a number in JavaScript. One way is to parse it and the other way is to change its type to a Number. All of the tricks in the other answers (e.g., unary plus) involve implicitly coercing the type of the string to a number. You can also do the same thing explicitly with the Number function.
Parsing
var parsed = parseInt("97", 10);
parseInt and parseFloat are the two functions used for parsing strings to numbers. Parsing will stop silently if it hits a character it doesn't recognise, which can be useful for parsing strings like "92px", but it's also somewhat dangerous, since it won't give you any kind of error on bad input, instead you'll get back NaN unless the string starts with a number. Whitespace at the beginning of the string is ignored. Here's an example of it doing something different to what you want, and giving no indication that anything went wrong:
var widgetsSold = parseInt("97,800", 10); // widgetsSold is now 97
It's good practice to always specify the radix as the second argument. In older browsers, if the string started with a 0, it would be interpreted as octal if the radix wasn't specified which took a lot of people by surprise. The behaviour for hexadecimal is triggered by having the string start with 0x if no radix is specified, e.g., 0xff. The standard actually changed with ECMAScript 5, so modern browsers no longer trigger octal when there's a leading 0 if no radix has been specified. parseInt understands radixes up to base 36, in which case both upper and lower case letters are treated as equivalent.
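A few illustrations of those radix rules:
parseInt("0xff");   // 255: the 0x prefix triggers hexadecimal when no radix is given
parseInt("08", 10); // 8: an explicit radix is safe in every browser
parseInt("z", 36);  // 35: radixes up to 36 treat the letters a-z (either case) as digits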
Changing the Type of a String to a Number
All of the other tricks mentioned above that don't use parseInt involve implicitly coercing the string into a number. I prefer to do this explicitly:
var cast = Number("97");
This has different behavior to the parse methods (although it still ignores whitespace). It's more strict: if it doesn't understand the whole of the string, then it returns NaN, so you can't use it for strings like 97px. Since you want a primitive number rather than a Number wrapper object, make sure you don't put new in front of the Number function.
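For example:
Number("97px");       // NaN: the whole string must be numeric
Number("  97  ");     // 97: surrounding whitespace is still ignored
parseInt("97px", 10); // 97: the parse methods stop at the first bad character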
Obviously, converting to a Number gives you a value that might be a float rather than an integer, so if you want an integer, you need to modify it. There are a few ways of doing this:
var rounded = Math.floor(Number("97.654")); // other options are Math.ceil, Math.round
var fixed = Number("97.654").toFixed(0); // rounded rather than truncated; note that toFixed returns a string
var bitwised = Number("97.654")|0; // do not use for large numbers
Any bitwise operator (here I've done a bitwise or, but you could also do double negation as in an earlier answer or a bit shift) will convert the value to a 32-bit integer, and most of them will convert to a signed integer. Note that this will not do what you want for large integers. If the integer cannot be represented in 32 bits, it will wrap.
~~"3000000000.654" === -1294967296
// This is the same as
Number("3000000000.654")|0
"3000000000.654" >>> 0 === 3000000000 // unsigned right shift gives you an extra bit
"300000000000.654" >>> 0 === 3647256576 // but still fails with larger numbers
To work correctly with larger numbers, you should use the rounding methods
Math.floor("3000000000.654") === 3000000000
// This is the same as
Math.floor(Number("3000000000.654"))
Bear in mind that coercion understands exponential notation and Infinity, so 2e2 is 200 rather than NaN, while the parse methods don't.
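For instance:
Number("2e2");            // 200: exponential notation is understood
parseInt("2e2", 10);      // 2: parsing stops at the "e"
Number("Infinity");       // Infinity
parseInt("Infinity", 10); // NaN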
Custom
It's unlikely that either of these methods do exactly what you want. For example, usually I would want an error thrown if parsing fails, and I don't need support for Infinity, exponentials or leading whitespace. Depending on your use case, sometimes it makes sense to write a custom conversion function.
Always check that the output of Number or one of the parse methods is the sort of number you expect. You will almost certainly want to use isNaN to make sure the number is not NaN (usually the only way you find out that the parse failed).
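As an illustration, a strict custom parser might look like this (the name and the exact rules are just one possible choice):
// Accepts only an optional sign followed by digits; throws on anything else.
function strictInt(str) {
    if (!/^[+-]?\d+$/.test(str)) throw new Error("Not an integer: " + str);
    return Number(str);
}
strictInt("97");   // 97
strictInt("97px"); // throws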
parseInt() and + are different
parseInt("10.3456") // returns 10
+"10.3456" // returns 10.3456
Fastest
var x = "1000"*1;
Test
Here is a little comparison of speed (macOS only)... :)
For Chrome, 'plus' and 'mul' are fastest (>700,000,000 ops/sec); 'Math.floor' is slowest. For Firefox, 'plus' is slowest (!) and 'mul' is fastest (>900,000,000 ops/sec). In Safari, 'parseInt' is fastest and 'number' is slowest (but the results are quite similar, >13,000,000 and <31,000,000). So Safari is more than 10x slower than the other browsers at casting a string to an int. The winner is 'mul' :)
You can run it in your browser via this link:
https://jsperf.com/js-cast-str-to-number/1
I also tested var x = ~~"1000";. On Chrome and Safari, it is a little bit slower than var x = "1000"*1 (<1%), and on Firefox it is a little bit faster (<1%).
I use this way of converting string to number:
var str = "25"; // String
var number = str*1; // Number
So, when multiplying by 1, the value does not change, but JavaScript automatically returns a number.
But this should only be used if you are sure that str is a number (or can be represented as one); otherwise it will return NaN, not a number.
You can create simple function to use, e.g.,
function toNumber(str) {
return str*1;
}
Try parseInt.
var number = parseInt("10", 10); //number will have value of 10.
I love this trick:
~~"2.123"; //2
~~"5"; //5
The double bitwise negative drops off anything after the decimal point AND converts it to a number format. I've been told it's slightly faster than calling functions and whatnot, but I'm not entirely convinced.
Another method I just saw (in a question about the JavaScript >>> operator, which is a zero-fill right shift): shifting a number by 0 with this operator converts it to a uint32, which is nice if you also want it unsigned. Again, this converts to an unsigned integer, which can lead to strange behavior if you use a signed number.
"-2.123" >>> 0; // 4294967294
"2.123" >>> 0; // 2
"-5" >>> 0; // 4294967291
"5" >>> 0; // 5
In JavaScript, you can do the following:
parseInt
parseInt("10.5") // Returns 10
Multiplying with 1
var s = "10";
s = s*1; // Returns 10
Using the unary operator (+)
var s = "10";
s = +s; // Returns 10
Using a bitwise operator
(Note: It starts to break after 2140000000. Example: ~~"2150000000" = -2144967296)
var s = "10.5";
s = ~~s; // Returns 10
Using Math.floor() or Math.ceil()
var s = "10";
s = Math.floor(s) || Math.ceil(s); // Returns 10
Please see the below examples. They will help answer your question.
Example                 Result
parseInt("4")           4
parseInt("5aaa")        5
parseInt("4.33333")     4
parseInt("aaa")         NaN (means "Not a Number")
parseInt yields only the integer it finds at the start of the string, not the rest of the string.
Beware if you use parseInt to convert a float in scientific notation!
For example:
parseInt("5.6e-14")
will result in
5
instead of
0
Also as a side note: MooTools has the function toInt(), which can be used on any native string, float, or integer.
"2".toInt() // 2
"2px".toInt() // 2
(2).toInt() // 2
We can use +(stringOfNumber) instead of using parseInt(stringOfNumber).
Example: +("21") returns the number 21, just like parseInt("21").
We can use this unary "+" operator for parsing floats too...
To convert a string into an integer, I recommend using parseFloat and not parseInt. Here's why:
Using parseFloat:
parseFloat('2.34cms') //Output: 2.34
parseFloat('12.5') //Output: 12.5
parseFloat('012.3') //Output: 12.3
Using parseInt:
parseInt('2.34cms') //Output: 2
parseInt('12.5') //Output: 12
parseInt('012.3') //Output: 12
As you may have noticed, parseInt discards the values after the decimal point, whereas parseFloat lets you work with floating-point numbers and is hence more suitable if you want to retain the fractional part. Use parseInt if and only if you are sure that you want the integer value.
There are many ways in JavaScript to convert a string to a number value... All are simple and handy. Choose whichever works for you:
var num = Number("999.5"); //999.5
var num = parseInt("999.5", 10); //999
var num = parseFloat("999.5"); //999.5
var num = +"999.5"; //999.5
Also, any Math operation converts them to number, for example...
var num = "999.5" / 1; //999.5
var num = "999.5" * 1; //999.5
var num = "999.5" - 1 + 1; //999.5
var num = "999.5" - 0; //999.5
var num = Math.floor("999.5"); //999
var num = ~~"999.5"; //999
My preferred way is the + sign, which is an elegant way to convert a string to a number in JavaScript.
Try str - 0 to convert a string to a number.
> str = '0'
> str - 0
0
> str = '123'
> str - 0
123
> str = '-12'
> str - 0
-12
> str = 'asdf'
> str - 0
NaN
> str = '12.34'
> str - 0
12.34
Here are two links to compare the performance of several ways to convert string to int
https://jsperf.com/number-vs-parseint-vs-plus
http://phrogz.net/js/string_to_number.html
Here is an easy solution:
let myNumber = "123" | 0;
An even simpler one:
let myNumber = +"123";
In my opinion, no answer covers all edge cases, since parsing a float should result in an error.
function parseInteger(value) {
    if (value === '') return NaN; // Number('') would be 0
    const number = Number(value);
    return Number.isInteger(number) ? number : NaN;
}
parseInteger("4") // 4
parseInteger("5aaa") // NaN
parseInteger("4.33333") // NaN
parseInteger("aaa"); // NaN
The easiest way would be to use + like this
const strTen = "10"
const numTen = +strTen // string to number conversion
console.log(typeof strTen) // string
console.log(typeof numTen) // number
I actually needed to "save" a string as an integer, for a binding between C and JavaScript, so I convert the string into an integer value:
/*
Examples:
int2str( str2int("test") ) == "test" // true
int2str( str2int("t€st") ) // "t¬st", because "€".charCodeAt(0) is 8364, will be AND'ed with 0xff
Limitations:
maximum 4 characters, so it fits into an integer
*/
function str2int(the_str) {
    var ret = 0;
    var len = the_str.length;
    if (len >= 1) ret += (the_str.charCodeAt(0) & 0xff) << 0;
    if (len >= 2) ret += (the_str.charCodeAt(1) & 0xff) << 8;
    if (len >= 3) ret += (the_str.charCodeAt(2) & 0xff) << 16;
    if (len >= 4) ret += (the_str.charCodeAt(3) & 0xff) << 24;
    return ret;
}
function int2str(the_int) {
    var tmp = [
        (the_int & 0x000000ff) >>> 0,
        (the_int & 0x0000ff00) >>> 8,
        (the_int & 0x00ff0000) >>> 16,
        (the_int & 0xff000000) >>> 24 // unsigned shift avoids sign-extending a high top byte
    ];
    var ret = "";
    for (var i = 0; i < 4; i++) {
        if (tmp[i] == 0)
            break;
        ret += String.fromCharCode(tmp[i]);
    }
    return ret;
}
String to Number in JavaScript:
Unary + (most recommended)
+numStr is easy to use and has better performance compared with others
Supports both integers and decimals
console.log(+'123.45') // => 123.45
Some other options:
Parsing Strings:
parseInt(numStr) for integers
parseFloat(numStr) for both integers and decimals
console.log(parseInt('123.456')) // => 123
console.log(parseFloat('123')) // => 123
JavaScript Functions
Math functions like round(numStr), floor(numStr), ceil(numStr) for integers
Number(numStr) for both integers and decimals
console.log(Math.floor('123')) // => 123
console.log(Math.round('123.456')) // => 123
console.log(Math.ceil('123.454')) // => 124
console.log(Number('123.123')) // => 123.123
Arithmetic Operators
The basic arithmetic coercions: +numStr, numStr-0, 1*numStr, numStr*1, and numStr/1
All support both integers and decimals
Be cautious about numStr+0. It returns a string.
console.log(+'123') // => 123
console.log('002'-0) // => 2
console.log(1*'5') // => 5
console.log('7.7'*1) // => 7.7
console.log('3.3'/1) // => 3.3
console.log('123.123'+0, typeof ('123.123' + 0)) // => 123.1230 string
Bitwise Operators
Double tilde, ~~numStr, or left shift by 0, numStr << 0
Supports only integers, not decimals
console.log(~~'123') // => 123
console.log('0123'<<0) // => 123
console.log(~~'123.123') // => 123
console.log('123.123'<<0) // => 123
function parseIntSmarter(str) {
    // parseInt is bad because it returns 22 for "22thisendsintext".
    // Number() returns NaN if the string ends in non-numbers,
    // but it returns 0 for empty or whitespace-only strings.
    return isNaN(Number(str)) ? NaN : parseInt(str, 10);
}
You can use the plus operator.
For example:
var personAge = '24';
var personAge1 = +personAge;
Then you can check the new variable's type with typeof personAge1, which is number.
Summing the multiplication of digits with their respective power of ten:
i.e.: 123 = 100 + 20 + 3 = 1*100 + 2*10 + 3*1 = 1*(10^2) + 2*(10^1) + 3*(10^0)
function atoi(array) {
    // Expects an array of digit values, e.g. [1, 2, 3] for 123.
    // Use exp = (length - i - 1); the other option would be
    // to reverse the array.
    // Multiply array[i] * 10^exp and sum.
    let sum = 0;
    for (let i = 0; i < array.length; i++) {
        let exp = array.length - (i + 1);
        let value = array[i] * Math.pow(10, exp);
        sum += value;
    }
    return sum;
}
The safest way to ensure you get a valid integer:
let integer = (parseInt(value, 10) || 0);
Examples:
// Example 1 - Invalid value:
let value = null;
let integer = (parseInt(value, 10) || 0);
// => integer = 0
// Example 2 - Valid value:
let value = "1230.42";
let integer = (parseInt(value, 10) || 0);
// => integer = 1230
// Example 3 - Invalid value:
let value = () => { return 412 };
let integer = (parseInt(value, 10) || 0);
// => integer = 0
Another option is to double XOR the value with itself: each ^ coerces its operands to 32-bit integers, so the fractional part is dropped.
var i = 12.34;
console.log('i = ' + i);
console.log('i ⊕ i ⊕ i = ' + (i ^ i ^ i));
This will output:
i = 12.34
i ⊕ i ⊕ i = 12
I only added one plus (+) before the string, and that was the solution!
+"052254" // 52254
Number()
Number(" 200.12 ") // Returns 200.12
Number("200.12") // Returns 200.12
Number("200") // Returns 200
parseInt()
parseInt(" 200.12 ") // Return 200
parseInt("200.12") // Return 200
parseInt("200") // Return 200
parseInt("Text information") // Returns NaN
parseFloat()
It returns the first number it finds:
parseFloat("200 400") // Returns 200
parseFloat("200") // Returns 200
parseFloat("Text information") // Returns NaN
parseFloat("200.10") // Returns 200.1
Math.floor()
Rounds a number down to the nearest integer:
Math.floor(" 200.12 ") // Returns 200
Math.floor("200.12") // Returns 200
Math.floor("200") // Returns 200
function doSth(){
var a = document.getElementById('input').value;
document.getElementById('number').innerHTML = toNumber(a) + 1;
}
function toNumber(str){
return +str;
}
<input id="input" type="text">
<input onclick="doSth()" type="submit">
<span id="number"></span>
This (probably) isn't the best solution for parsing an integer, but if you need to "extract" one, for example:
"1a2b3c" -> 123
"198some text2hello world!30" -> 198230
// ...
this would work (only for integers):
var str = '3a9b0c3d2e9f8g'
function extractInteger(str) {
var result = 0;
var factor = 1
for (var i = str.length; i > 0; i--) {
if (!isNaN(str[i - 1])) {
result += parseInt(str[i - 1]) * factor
factor *= 10
}
}
return result
}
console.log(extractInteger(str))
Of course, this would also work for parsing an integer, but would be slower than other methods.
You could also parse integers with this method and return NaN if the string isn't a number, but I don't see why you'd want to since this relies on parseInt internally and parseInt is probably faster.
var str = '3a9b0c3d2e9f8g'
function extractInteger(str) {
var result = 0;
var factor = 1
for (var i = str.length; i > 0; i--) {
if (isNaN(str[i - 1])) return NaN
result += parseInt(str[i - 1]) * factor
factor *= 10
}
return result
}
console.log(extractInteger(str))
I have an enum:
typedef NS_ENUM(NSInteger, Node) {
    NodeTop,
    NodeLeft,
    NodeBottom,
    NodeRight,
};
and a property:
@property Node node;
Now in my controller I'm assigning node multiple values using the pipe operator:
node = NodeTop | NodeLeft | NodeBottom | NodeRight;
(Q-1: does node hold 0000-, 0011-style values built from NodeTop, NodeLeft, etc., or simply the final result of ORing top | left | bottom | right?)
This way, NSLog(@"%d", node); gives the result 1.
Now, if node contains 0001, I want to left-shift it by 1, so I tried
node << 1;
which changes node's value from 1 to 2, but does not really seem to change 0001 to 0010?
In short, I want node to have a value like 0001, and later on I want to shift its value like:
0010
0100
1000
0001
0010
...
Any help? Let me know if the question is not clear!
You can use NS_OPTIONS to create a bit mask where each value is defined as 1<<n. This NSHipster post explains the usage of both NS_ENUM and NS_OPTIONS.
typedef NS_OPTIONS(NSInteger, Node) {
NodeTop = 1 << 0, // 1
NodeLeft = 1 << 1, // 2
NodeBottom = 1 << 2, // 4
NodeRight = 1 << 3 // 8
};
That way when you write
NodeTop | NodeBottom
it will be the same as
...0001 | ...0100 = ...0101
And shifting the bit will move it to the next option
Node node = NodeBottom<<1; // = NodeRight
since ...0100 << 1 == ...1000.
which changes node value 1 to 2 but is it does not seem really changing 0001 to 0010?
It does. 0010 binary is 2 decimal.
You can only bitwise OR these values together in a meaningful way if you make each value correspond to a different bit, e.g.
typedef NS_ENUM(NSInteger, Node) {
NodeTop = 1,
NodeLeft = 2,
NodeBottom = 4,
NodeRight = 8
};
node = NodeTop; // 0b0001 = 0x01 = 1
node <<= 1; // 0b0010 = 0x02 = 2 = NodeLeft
node <<= 1; // 0b0100 = 0x04 = 4 = NodeBottom
Well, as for the circular bit shifting, I think the best you can do is something like this (quickly thrown together C code):
enum test {
test1 = 1,
test2 = 2,
test4 = 4,
test8 = 8
};
static enum test val(enum test input) {
while(input >= 16) //Or whatever max bit you want
input = input >> 4; //Be sure to shift by the appropriate number
return input;
}
Then when you run this code:
enum test foo = test1;
foo = val(foo << 5);
foo will be test2
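If the loop bothers you, a branchless rotate within the low four bits does the same job; here's a sketch (the helper name is mine):
// Rotate left within a 4-bit field, so test8 wraps around to test1.
static enum test rotl4(enum test v) {
    return (enum test)(((v << 1) | (v >> 3)) & 0xF);
}
foo = rotl4(foo); // test1 -> test2 -> test4 -> test8 -> test1 -> ...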
I've got a 16-bit bitmap image with each colour represented as a single short (2 bytes). I need to display this in a 32-bit bitmap context. How can I convert a 2-byte colour to a 4-byte colour in C++?
The input format contains each colour in a single short (2 bytes).
The output format is 32-bit RGB. This means each pixel has 3 bytes, I believe?
I need to convert the short value into RGB colours.
Excuse my lack of knowledge of colours, this is my first adventure into the world of graphics programming.
Normally a 16-bit pixel is 5 bits of red, 6 bits of green, and 5 bits of blue data. The minimum-error solution (that is, for which the output color is guaranteed to be as close a match to the input colour) is:
red8bit = (red5bit << 3) | (red5bit >> 2);
green8bit = (green6bit << 2) | (green6bit >> 4);
blue8bit = (blue5bit << 3) | (blue5bit >> 2);
To see why this solution works, let's look at a red pixel. Our 5-bit red is some fraction fivebit/31. We want to translate that into a new fraction eightbit/255. Some simple arithmetic:
fivebit eightbit
------- = --------
31 255
Yields:
eightbit = fivebit * 8.226
Or closely (note the squiggly ≈):
eightbit ≈ (fivebit * 8) + (fivebit * 0.25)
That operation is a multiply by 8 and a divide by 4. Ouch; both operations might take forever on your hardware. Luckily they're both powers of two and can be converted to shift operations:
eightbit = (fivebit << 3) | (fivebit >> 2);
The same steps work for green, which has six bits per pixel, but you get an accordingly different answer, of course! The quick way to remember the solution is that you're taking the top bits off of the "short" pixel and adding them on at the bottom to make the "long" pixel. This method works equally well for any data set you need to map up into a higher resolution space. A couple of quick examples:
five bit space eight bit space error
00000 00000000 0%
11111 11111111 0%
10101 10101010 0.02%
00111 00111001 -1.01%
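Putting the pieces together, the whole conversion might look like this (a sketch assuming RGB565 input and a 0RGB output layout; the function name is illustrative):
#include <stdint.h>

uint32_t rgb565_to_rgb888(uint16_t p)
{
    uint32_t r5 = (p >> 11) & 0x1F; /* top 5 bits */
    uint32_t g6 = (p >> 5) & 0x3F;  /* middle 6 bits */
    uint32_t b5 = p & 0x1F;         /* bottom 5 bits */

    /* Replicate the top bits at the bottom of each widened channel. */
    uint32_t r8 = (r5 << 3) | (r5 >> 2);
    uint32_t g8 = (g6 << 2) | (g6 >> 4);
    uint32_t b8 = (b5 << 3) | (b5 >> 2);

    return (r8 << 16) | (g8 << 8) | b8; /* 0RGB */
}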
Common formats include BGR0, RGB0, 0RGB, and 0BGR. In the code below I have assumed 0RGB. Changing this is easy: just modify the shift amounts in the last line.
unsigned long rgb16_to_rgb32(unsigned short a)
{
/* 1. Extract the red, green and blue values */
/* from rrrr rggg gggb bbbb */
unsigned long r = (a & 0xF800) >> 11;
unsigned long g = (a & 0x07E0) >> 5;
unsigned long b = (a & 0x001F);
/* 2. Convert them to 0-255 range:
There is more than one way. You can just shift them left:
to 00000000 rrrrr000 gggggg00 bbbbb000
r <<= 3;
g <<= 2;
b <<= 3;
But that means your image will be slightly dark and
off-colour as white 0xFFFF will convert to F8,FC,F8
So instead you can scale by multiply and divide: */
r = r * 255 / 31;
g = g * 255 / 63;
b = b * 255 / 31;
/* This ensures 31/31 converts to 255/255 */
/* 3. Construct your 32-bit format (this is 0RGB): */
return (r << 16) | (g << 8) | b;
/* Or for BGR0:
return (r << 8) | (g << 16) | (b << 24);
*/
}
Multiply the three (four, when you have an alpha layer) values by 16 - that's it :)
You have a 16-bit color and want to make it a 32-bit color. This gives you four times eight bits instead of four times four bits. You're adding four bits to each channel, but you should add them on the right side of the values. To do this, shift each channel by four bits (multiply by 16). Additionally, you can compensate a bit for the inaccuracy by adding 8 (the added 4 bits can represent 0-15, and 8 is the average).
Update This only applies to colors that use 4 bits for each channel and have an alpha channel.
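A sketch of that idea for a single 4-bit channel (the helper name is mine):
#include <stdint.h>

/* Shift left by 4 (multiply by 16) and add the midpoint 8 to compensate
   for the lost low bits. Note that 15 maps to 248, not 255: that is the
   inaccuracy this method accepts. */
uint8_t expand4(uint8_t c4)
{
    return (uint8_t)((c4 << 4) + 8);
}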
There are some questions about the model, like: is it HSV or RGB?
If you want to ready-fire-aim, I'd try this first.
#include <stdint.h>
uint32_t convert(uint16_t _pixel)
{
uint32_t pixel;
pixel = (uint32_t)_pixel;
return ((pixel & 0xF000) << 16)
| ((pixel & 0x0F00) << 12)
| ((pixel & 0x00F0) << 8)
| ((pixel & 0x000F) << 4);
}
This maps 0xRGBA -> 0xR0G0B0A0 (or possibly 0xHSVA -> 0xH0S0V0A0), placing each 4-bit channel in the high nibble of its byte, but it won't do 0xHSVA -> 0xRRGGBBAA.
I'm here long after the fight, but I actually had the same problem with ARGB color instead, and none of the answers are truly right. Keep in mind that this answer addresses a slightly different situation, where we want to do this conversion:
AAAARRRRGGGGBBBB -> AAAAAAAARRRRRRRRGGGGGGGGBBBBBBBB
If you want to keep the same ratio of your color, you simply have to do a cross-multiplication: you want to convert a value x between 0 and 15 to a value between 0 and 255, therefore you want y = 255 * x / 15.
However, 255 = 15 * 17, and 17 is 16 + 1, so you now have y = 16 * x + x.
This is actually the same as doing a four-bit shift to the left and then adding the value again (or, more visually, duplicating the nibble: 0b1101 becomes 0b11011101).
Now that you have this, you can compute your whole number by doing:
a = v & 0b1111000000000000  // alpha, bits 12-15
r = v & 0b111100000000      // red, bits 8-11
g = v & 0b11110000          // green, bits 4-7
b = v & 0b1111              // blue, bits 0-3
return b | b << 4 | g << 4 | g << 8 | r << 8 | r << 12 | a << 12 | a << 16
Moreover, as the lower bits won't have much effect on the final color, and if exactitude isn't necessary, you can gain some performance by simply multiplying each component by 16:
return b << 4 | g << 8 | r << 12 | a << 16
(All the left-shift amounts look odd because we did not bother shifting each component down to bit 0 first.)
I have been working on another UTF-8 parser as a personal exercise. My implementation works quite well and rejects most malformed sequences (replacing them with U+FFFD), but I can't seem to figure out how to implement rejection of overlong forms. Could anyone tell me how to do so?
Pseudocode:
let w = 0, // the number of continuation bytes pending
    c = 0, // the codepoint currently being constructed
    b,     // the current byte from the source stream

    valid(c) = (
        (c < 0x110000) &&
        ((c & 0xFFFFF800) != 0xD800) &&
        ((c < 0xFDD0) || (c > 0xFDEF)) &&
        ((c & 0xFFFE) != 0xFFFE))

for each b:
    if b < 0x80:
        if w > 0: // premature ending to multi-byte sequence
            append U+FFFD to output string
            w = 0
        append U+b to output string
    else if b < 0xc0:
        if w == 0: // unwanted continuation byte
            append U+FFFD to output string
        else:
            c |= (b & 0x3f) << (--w * 6)
            if w == 0: // done
                if valid(c):
                    append U+c to output string
    else if b < 0xfe:
        if w > 0: // premature ending to multi-byte sequence
            append U+FFFD to output string
        w = (b < 0xe0) ? 1 :
            (b < 0xf0) ? 2 :
            (b < 0xf8) ? 3 :
            (b < 0xfc) ? 4 : 5;
        c = (b & ((1 << (6 - w)) - 1)) << (w * 6); // ugly monstrosity
    else:
        append U+FFFD to output string

if w > 0: // end of stream and we're still waiting for continuation bytes
    append U+FFFD to output string
If you save the number of bytes you'll need (i.e., a second copy of the initial value of w), you can compare the UTF-32 value of the codepoint (I think you are calling it c) with the number of bytes that were used to encode it. You know that:
U+0000 - U+007F 1 byte
U+0080 - U+07FF 2 bytes
U+0800 - U+FFFF 3 bytes
U+10000 - U+1FFFFF 4 bytes
U+200000 - U+3FFFFFF 5 bytes
U+4000000 - U+7FFFFFFF 6 bytes
(and I hope I have done the right math on the left column! Hex math isn't my strong point :-) )
Just as a sidenote: I think there are some logic/formatting errors. In the if b < 0x80 branch, with if w > 0, what happens when w == 0 (so, for example, when you are decoding A)? And shouldn't you reset c when you find an illegal codepoint?
Once you have the decoded character, you can tell how many bytes it should have had if properly encoded just by looking at the highest bit set.
If the highest set bit's position is <= 7, the UTF-8 encoding requires 1 octet.
If the highest set bit's position is <= 11, the UTF-8 encoding requires 2 octets.
If the highest set bit's position is <= 16, the UTF-8 encoding requires 3 octets.
etc.
If you save the original w and compare it to these values, you'll be able to tell if the encoding was proper or overlong.
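A sketch of that in C (the helper names are mine):
#include <stdint.h>

/* Position of the highest set bit, 1-based; 0 for c == 0. */
static int highest_bit(uint32_t c)
{
    int n = 0;
    while (c) { n++; c >>= 1; }
    return n;
}

/* Octets required by UTF-8: <=7 bits -> 1, <=11 -> 2, <=16 -> 3, ... */
static int octets_needed(uint32_t c)
{
    int bits = highest_bit(c);
    if (bits <= 7) return 1;
    if (bits <= 11) return 2;
    if (bits <= 16) return 3;
    if (bits <= 21) return 4;
    if (bits <= 26) return 5;
    return 6;
}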
I had initially thought that if, at any point after decoding a byte, w > 0 && c == 0, you have an overlong form. However, it's more complicated than that, as Jan pointed out. The simplest answer is probably to have a table like xanatos's, only rejecting anything longer than 4 bytes:
if c < 0x80 && len > 1 ||
   c < 0x800 && len > 2 ||
   c < 0x10000 && len > 3 ||
   len > 4:
    append U+FFFD to output string
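For completeness, the same test as a small C function (assuming len is the total number of bytes in the sequence, i.e. the saved initial w plus one):
#include <stdbool.h>
#include <stdint.h>

/* True if codepoint c used more bytes than necessary, or more than
   the 4 bytes modern UTF-8 allows. */
bool is_overlong(uint32_t c, int len)
{
    return (c < 0x80 && len > 1) ||
           (c < 0x800 && len > 2) ||
           (c < 0x10000 && len > 3) ||
           len > 4;
}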