CString written as an unsigned char - vc90

I am building an application to read from and write to a microchip's memory.
I need to pass an unsigned char array that has 4 fixed bytes and 2 variable bytes, so supposing I want to read memory bank 0004, I would pass:
unsigned char sRequest[32]={0};
sRequest[0] = 0x02; //FIXED
sRequest[1] = 0x1F; //FIXED
sRequest[2] = 0x0A; //FIXED
sRequest[3] = 0x20; //FIXED
sRequest[4] = 0x00; //VARIABLE
sRequest[5] = 0x04; //VARIABLE
I want to put 2 CEdit boxes for the user to input that variable memory bank, so the user would write 0x00 in the first CEdit and 0x04 in the second one.
So my question is: how can I translate each of these inputs into an unsigned char so I can set it in my sRequest variable?
thanks in advance
(Thanks Dave - I mistyped bytes as bits; fixed now.)

Actually, I wouldn't do that with free-form text entry at all - it would be too much of a bother to do validation (being inherently lazy). If there's a chance to restrict what your user is able to give you as early as possible, you should take it :-)
I would provide drop-down boxes for the values, one per nybble, so that the quantity to select from is not too onerous. Something like (and with apologies for my graphical skills):
+---+-+ +---+-+ +---+-+ +---+-+
Enter value 0x | 0 |V| | 0 |V| | 0 |V| | 4 |V|
+---+-+ +---+-+ +---+-+ +---+-+
|>0<|
| 1 |
| 2 |
| 3 |
: : :
| E |
| F |
+---+
For the values in the listbox, I would set the item data to be 0 through 15 inclusive (for the items 0 through F).
That way, you could get a single byte value with something like:
byte04 = listboxA.GetItemData(listboxA.GetCurSel()) * 16 +
         listboxB.GetItemData(listboxB.GetCurSel());
byte05 = listboxC.GetItemData(listboxC.GetCurSel()) * 16 +
         listboxD.GetItemData(listboxD.GetCurSel());
If you must use a less restrictive input method, C++ provides:
int stoi (const string& str, size_t *idx = 0, int base = 10);
(and stol/stoul for signed and unsigned longs) to allow you to convert C++ strings to integral types.
For your purposes, you'll probably have to detect and strip a leading 0x (along with possibly leading/trailing spaces and so forth), which is why I suggest the restrictive route as the better option.
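If you do go the free-form route anyway, here is a minimal sketch of the parsing (the helper name is mine; it assumes the edit-box text has already been copied into a std::string, and it needs a C++11 compiler for std::stoi - on vc90 itself you would substitute strtoul):

#include <cstddef>
#include <stdexcept>
#include <string>

// Hypothetical helper: parse one edit box's text (e.g. "04" or "0x04")
// into a single byte, rejecting anything that isn't exactly one hex byte.
unsigned char parseByteField(std::string text)
{
    // Trim surrounding spaces and strip an optional 0x/0X prefix.
    while (!text.empty() && text.front() == ' ') text.erase(0, 1);
    while (!text.empty() && text.back() == ' ') text.pop_back();
    if (text.size() > 1 && text[0] == '0' && (text[1] == 'x' || text[1] == 'X'))
        text.erase(0, 2);

    std::size_t used = 0;
    int value = std::stoi(text, &used, 16); // throws std::invalid_argument on junk
    if (used != text.size() || value < 0 || value > 0xFF)
        throw std::out_of_range("expected one hex byte, e.g. \"04\"");
    return static_cast<unsigned char>(value);
}

// Usage: sRequest[4] = parseByteField(textFromFirstEdit);
//        sRequest[5] = parseByteField(textFromSecondEdit);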

Converting ASCII decimal

I receive two equivalent strings from my database depending on whether I ask for it in binary or text format.
Binary is hexadecimal... 4d4d002a0000100801010101010101...(134916 characters)
Text is (I think ASCII decimal)... \x3464346430303261303030... (269832 characters)
I can convert the hexadecimal version into a byte array and ultimately an NSData (67458 bytes):
// Inside a String extension (Swift 2-era syntax); decodes each pair
// of hex digits into one byte.
let data = NSMutableData(capacity: self.characters.count / 2)
for var index = self.startIndex; index < self.endIndex; index = index.advancedBy(2) {
    let byteString = self.substringWithRange(Range<String.Index>(start: index, end: index.advancedBy(2)))
    let byteUInt = UInt8(strtoul(byteString, nil, 16))
    data?.appendBytes([UInt8]([byteUInt]), length: 1)
}
But I am having no such luck with the text version. Tried parsing it a million different ways and I can't come up with an equivalent conversion.
If it matters, the database is PostgreSQL v9.5 and the data in text format is returned as a null-terminated character string (char *).
Any insight would be greatly appreciated.
It appears that "ASCII representation" is a hex encoding of the hex encoding, so you should be able to produce a proper result by applying the same conversion twice:
34 | 64 | 34 | 64 | 30 | 30 | 32 | 61 | 30 | 30 | 30 |  -- Original
 4 |  d |  4 |  d |  0 |  0 |  2 |  a |  0 |  0 |  0 |  -- ASCII conversion
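In C++ terms (a sketch rather than the asker's Swift; the helper names are mine), the two-pass idea looks like this:

#include <cstdint>
#include <string>
#include <vector>

// Decode a hex string such as "4d4d002a..." into raw bytes.
std::vector<std::uint8_t> hexDecode(const std::string &hex)
{
    std::vector<std::uint8_t> out;
    out.reserve(hex.size() / 2);
    for (std::size_t i = 0; i + 1 < hex.size(); i += 2)
        out.push_back(static_cast<std::uint8_t>(
            std::stoul(hex.substr(i, 2), nullptr, 16)));
    return out;
}

// The text form is hex-of-hex: the first pass yields the ASCII
// characters of the inner hex string ("4d4d002a..."), so decode again.
std::vector<std::uint8_t> doubleHexDecode(const std::string &text)
{
    std::vector<std::uint8_t> firstPass = hexDecode(text);
    std::string inner(firstPass.begin(), firstPass.end());
    return hexDecode(inner);
}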

PowerShell formatting numbers by variables

I have integer values: 3 60 150 1500 and float values 1.23354, 1.234, 1.234567...
I calculate the number of digits of the biggest integer:
$nInt = [System.Math]::Ceiling([math]::log10($maxInt))
# nInt = 4
and, in a similar way, the largest number of decimal places among the float variables: $nDec = 6
How can I format the output so that all integers have the same string length, with leading spaces?
|1500
| 500
| 60
| 3
And all floats with the same string length as well?
1.234567|
1.23354 |
1.234 |
The | is just there to mark my 'point of measure'.
Of course I have to choose a character set where all characters have the same pixel width.
I am thinking of formatting via "{0:n}" or $int.ToString(""), but I can't see how to use them for this.
Try PadLeft or PadRight. For example, for your integers:
$maxInt.ToString().PadLeft($nInt, ' ')
The floats work the same way with PadRight.

Convert 16bit colour to 32bit

I've got an 16bit bitmap image with each colour represented as a single short (2 bytes), I need to display this in a 32bit bitmap context. How can I convert a 2 byte colour to a 4 byte colour in C++?
The input format contains each colour in a single short (2 bytes).
The output format is 32bit RGB. This means each pixel has 3 bytes I believe?
I need to convert the short value into RGB colours.
Excuse my lack of knowledge of colours, this is my first adventure into the world of graphics programming.
Normally a 16-bit pixel is 5 bits of red, 6 bits of green, and 5 bits of blue data. The minimum-error solution (that is, the one for which the output colour is guaranteed to be as close a match as possible to the input colour) is:
red8bit = (red5bit << 3) | (red5bit >> 2);
green8bit = (green6bit << 2) | (green6bit >> 4);
blue8bit = (blue5bit << 3) | (blue5bit >> 2);
To see why this solution works, let's look at a red pixel. Our 5-bit red is some fraction fivebit/31. We want to translate that into a new fraction eightbit/255. Some simple arithmetic:
fivebit / 31 = eightbit / 255
Yields:
eightbit = fivebit * 8.226
Or closely (note the squiggly ≈):
eightbit ≈ (fivebit * 8) + (fivebit * 0.25)
That operation is a multiply by 8 and a divide by 4. Ouch - both operations that might take forever on your hardware. Luckily they're both powers of two, so they can be converted to shift operations:
eightbit = (fivebit << 3) | (fivebit >> 2);
The same steps work for green, which has six bits per channel, but you get an accordingly different answer, of course (worked just after the examples below)! The quick way to remember the solution is that you're taking the top bits off the "short" pixel and adding them back on at the bottom to make the "long" pixel. This method works equally well for any data set you need to map up into a higher-resolution space. A couple of quick examples:
five-bit space    eight-bit space    error
00000             00000000            0%
11111             11111111            0%
10101             10101101           +0.15%
00111             00111001           -1.01%
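For concreteness, here is the green channel worked through the same way (my own arithmetic, following the red derivation above):
sixbit / 63 = eightbit / 255
eightbit = sixbit * 4.048
eightbit ≈ (sixbit * 4) + (sixbit / 16)
eightbit = (sixbit << 2) | (sixbit >> 4);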
Common formats include BGR0, RGB0, 0RGB, and 0BGR. In the code below I have assumed 0RGB. Changing this is easy: just modify the shift amounts in the last line.
unsigned long rgb16_to_rgb32(unsigned short a)
{
    /* 1. Extract the red, green and blue values */
    /* from rrrr rggg gggb bbbb */
    unsigned long r = (a & 0xF800) >> 11;
    unsigned long g = (a & 0x07E0) >> 5;
    unsigned long b = (a & 0x001F);
    /* 2. Convert them to the 0-255 range:
       There is more than one way. You can just shift them left,
       to 00000000 rrrrr000 gggggg00 bbbbb000:
           r <<= 3;
           g <<= 2;
           b <<= 3;
       But that means your image will be slightly dark and
       off-colour, as white 0xFFFF would convert to F8,FC,F8.
       So instead you can scale by multiply and divide: */
    r = r * 255 / 31;
    g = g * 255 / 63;
    b = b * 255 / 31;
    /* This ensures 31/31 converts to 255/255 */
    /* 3. Construct your 32-bit format (this is 0RGB): */
    return (r << 16) | (g << 8) | b;
    /* Or for BGR0:
       return (r << 8) | (g << 16) | (b << 24);
    */
}
Multiply the three (four, when you have an alpha layer) values by 16 - that's it :)
You have a 16-bit colour and want to make it a 32-bit colour. That gives you four times four bits, which you want to turn into four times eight bits. You're adding four bits to each channel, and you should add them on the right side of the values. To do this, shift each value left by four bits (multiply by 16). Additionally, you could compensate a bit for the inaccuracy by adding 8 (the four added bits can represent 0-15, so adding the average of 8 splits the difference).
Update: this only applies to colours that use 4 bits for each channel and have an alpha channel.
There are some questions about the model - is it HSV or RGB?
If you want to go 'ready, fire, aim', I'd try this first.
#include <stdint.h>

uint32_t convert(uint16_t _pixel)
{
    uint32_t pixel = (uint32_t)_pixel;
    return ((pixel & 0xF000) << 16)
         | ((pixel & 0x0F00) << 12)
         | ((pixel & 0x00F0) << 8)
         | ((pixel & 0x000F) << 4);
}
This maps 0xRGBA -> 0xRRGGBBAA, or possibly 0xHSVA -> 0xHHSSVVAA, but it won't do 0xHSVA -> 0xRRGGBBAA.
I'm here long after the fight, but I actually had the same problem, with ARGB colour instead, and none of the answers are truly right. Keep in mind that this answer addresses a slightly different situation, where we want to do this conversion:
AAAARRRRGGGGBBBB -> AAAAAAAARRRRRRRRGGGGGGGGBBBBBBBB
If you want to keep the same ratio for your colour, you simply have to do a cross-multiplication: you want to convert a value x between 0 and 15 to a value between 0 and 255, therefore you want y = 255 * x / 15.
However, 255 = 15 * 17, and 17 = 16 + 1, so you now have y = 16 * x + x,
which is actually the same as a four-bit shift to the left followed by adding the value again (or, more visually, duplicating the nibble: 0b1101 becomes 0b11011101).
Now that you have this, you can compute your whole number by doing:
a = v & 0b1111000000000000
r = v & 0b111100000000
g = v & 0b11110000
b = v & 0b1111
return b | b << 4 | g << 4 | g << 8 | r << 8 | r << 12 | a << 12 | a << 16
Moreover, as the lower bits won't have much effect on the final colour, and if exactness isn't necessary, you can gain some performance by simply multiplying each component by 16:
return b << 4 | g << 8 | r << 12 | a << 16
(All the left-shift amounts look odd because we did not bother shifting the components right first.)
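A compact C++ restatement of the above, as a sketch (the function name is mine):

#include <cstdint>

// Expand AAAARRRRGGGGBBBB to AAAAAAAARRRRRRRRGGGGGGGGBBBBBBBB by
// duplicating each nibble: x * 17 == (x << 4) | x maps 0..15 onto 0..255.
std::uint32_t argb4444_to_argb8888(std::uint16_t v)
{
    std::uint32_t a = (v >> 12) & 0xF;
    std::uint32_t r = (v >> 8) & 0xF;
    std::uint32_t g = (v >> 4) & 0xF;
    std::uint32_t b = v & 0xF;
    return (a * 17) << 24 | (r * 17) << 16 | (g * 17) << 8 | (b * 17);
}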

ObjC getting 64 bit number from stream

I am successfully passing a 64-bit number from an Objective-C client to a Java client, but am unable to send one to an Objective-C client.
Java Code
/*
* Retrieve a double (64-bit) number from the stream.
*/
private double getDouble() throws IOException
{
    byte[] buffer = getBytes(8);
    long bits =
        ((long)buffer[0] & 0x0ff) |
        (((long)buffer[1] & 0x0ff) << 8) |
        (((long)buffer[2] & 0x0ff) << 16) |
        (((long)buffer[3] & 0x0ff) << 24) |
        (((long)buffer[4] & 0x0ff) << 32) |
        (((long)buffer[5] & 0x0ff) << 40) |
        (((long)buffer[6] & 0x0ff) << 48) |
        (((long)buffer[7] & 0x0ff) << 56);
    return Double.longBitsToDouble(bits);
}
objC code
/*
* Retrieve a double (64-bit) number from the stream.
*/
- (double)getDouble
{
    NSRange dblRange = NSMakeRange(0, 8);
    char buffer[8];
    [stream getBytes:buffer length:8];
    [stream replaceBytesInRange:dblRange withBytes:NULL length:0];
    long long bits =
        ((long long)buffer[0] & 0x0ff) |
        (((long long)buffer[1] & 0x0ff) << 8) |
        (((long long)buffer[2] & 0x0ff) << 16) |
        (((long long)buffer[3] & 0x0ff) << 24) |
        (((long long)buffer[4] & 0x0ff) << 32) |
        (((long long)buffer[5] & 0x0ff) << 40) |
        (((long long)buffer[6] & 0x0ff) << 48) |
        (((long long)buffer[7] & 0x0ff) << 56);
    NSNumber *tempNum = [NSNumber numberWithLongLong:bits];
    NSLog(@"\n***********\nsizeof long long %zu \n tempNum: %@\nbits %lld", sizeof(long long), tempNum, bits);
    return [tempNum doubleValue];
}
The result of the NSLog is:
sizeof long long 8
tempNum: -4616134021117358511
bits -4616134021117358511
The number should be -1.012345.
The problem is that I was trying to convert the Java code directly to Objective-C in the getDouble function. My middleware already takes the endian issues into account. The simple solution, if the target is little-endian:
- (double)getDouble
{
    NSRange dblRange = NSMakeRange(0, 8);
    double swapped;
    [stream getBytes:&swapped length:8];
    [stream replaceBytesInRange:dblRange withBytes:NULL length:0];
    return swapped;
}
Thanks all for the input - I got a lot of experience and a little understanding from this exercise.
A double and a long long are not the same thing. A long long represents an integer, which has no fractional portion, while a double represents a floating-point number, which has one. These two types have completely different ways of representing their values in memory. That is to say, if you were to look at the bits of a long long representing the number 4000 and compare them to the bits of a double representing the number 4000, they would be different.
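You can see this for yourself with a few lines of C++ (my own illustration, not part of the original code):

#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    long long asInt = 4000;
    double asDouble = 4000.0;
    std::uint64_t intBits, dblBits;

    // Copy the raw object representations out so we can print them.
    std::memcpy(&intBits, &asInt, sizeof intBits);
    std::memcpy(&dblBits, &asDouble, sizeof dblBits);

    std::printf("long long 4000: 0x%016llx\n", (unsigned long long)intBits);
    std::printf("double    4000: 0x%016llx\n", (unsigned long long)dblBits);
    // Prints 0x0000000000000fa0 versus 0x40af400000000000.
}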
So as Wevah notes, the first step is for you to use the proper double type, and the correct %f formatter in your call to NSLog().
I would add, though, that you also need to be careful to get your bytes in the native order for the machine your C code is running on. For a detailed description of what I'm referring to, see http://en.wikipedia.org/wiki/Endianness. The short version is that different processors may represent numbers in different ways in memory, and you need to ensure in your code that once you get a pile of bytes from the network, you put them in the right order for your processor before you attempt to interpret them as a number.
Luckily, this is a solved issue, and is easily accounted for by using the CFConvertFloat64SwappedToHost() function from CoreFoundation:
[stream getBytes:buffer length:8];
[stream replaceBytesInRange:dblRange withBytes:NULL length:0];
/* CFConvertFloat64SwappedToHost takes a CFSwappedFloat64, not a double */
double myDouble = CFConvertFloat64SwappedToHost(*(CFSwappedFloat64 *)buffer);
NSNumber *tempNum = [NSNumber numberWithDouble:myDouble];
NSLog(@"\n***********\nsizeof double %zu \n tempNum: %@\nbits %f", sizeof(double), tempNum, myDouble);
return [tempNum doubleValue];
You probably want to convert it to a double (possibly/probably via a union; see Jonathan's comment) and use the %f format specifier.
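A sketch of that union trick (the function name is mine; note that in C++, memcpy is the strictly portable way to do this, while the union is the classic C idiom):

double bitsToDouble(long long bits)
{
    union {
        long long i;
        double d;
    } u;
    u.i = bits; // store the raw 64-bit pattern...
    return u.d; // ...and read it back as a double, bits unchanged
}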

iPhone SDK << meaning?

Hi, another silly simple question: I have noticed that certain typedefs in Apple's frameworks use the symbol "<<" - can anyone tell me what that means?
enum {
    UIViewAutoresizingNone                 = 0,
    UIViewAutoresizingFlexibleLeftMargin   = 1 << 0,
    UIViewAutoresizingFlexibleWidth        = 1 << 1,
    UIViewAutoresizingFlexibleRightMargin  = 1 << 2,
    UIViewAutoresizingFlexibleTopMargin    = 1 << 3,
    UIViewAutoresizingFlexibleHeight       = 1 << 4,
    UIViewAutoresizingFlexibleBottomMargin = 1 << 5
};
typedef NSUInteger UIViewAutoresizing;
Edit: Alright, so I now understand how and why you would use the left bit-shift; my next question is, how would I test whether the value has a certain trait, using an if/then statement or a switch/case?
This is a way to create constants that are easy to mix. For example, you could have an API to order an ice cream and choose any of the vanilla, chocolate and strawberry flavours. You could use booleans, but that's a bit heavy:
- (void) iceCreamWithVanilla: (BOOL) v chocolate: (BOOL) ch strawberry: (BOOL) st;
A nice trick to solve this is using numbers, where you can mix the flavours using simple adding. Let’s say 1 for vanilla, 2 for chocolate and 4 for strawberry:
- (void) iceCreamWithFlavours: (NSUInteger) flavours;
Now if the number has its rightmost bit set, it’s got vanilla flavour in it, another bit stands for chocolate and the third bit from right is strawberry. For example vanilla + chocolate would be 1+2=3 decimal (011 in binary).
The bitshift operator x << y takes the left number (x) and shifts its bits y times. It’s a good tool to create numeric constants:
1 << 0 = 001 // vanilla
1 << 1 = 010 // chocolate
1 << 2 = 100 // strawberry
Voila! Now when you want a view with a flexible left margin and a flexible right margin, you can mix the flags using bitwise OR: FlexibleRightMargin | FlexibleLeftMargin → 1<<2 | 1<<0 → 100 | 001 → 101. On the receiving end, the method can mask out the interesting bit using bitwise AND:
// 101 & 100 = 100 or 4 decimal, which boolifies as YES
BOOL flexiRight = givenNumber & FlexibleRightMargin;
Hope that helps.
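To answer the edit in the question (if/then versus switch/case): test each flag with a bitwise AND, as in the last line above; a switch doesn't fit, because the value can be any combination of bits. A self-contained C++ sketch with stand-in constants (the names mirror, but are not, Apple's):

#include <cstdio>

enum {
    FlexibleLeftMargin  = 1 << 0,
    FlexibleWidth       = 1 << 1,
    FlexibleRightMargin = 1 << 2
};

int main()
{
    unsigned mask = FlexibleLeftMargin | FlexibleRightMargin; // 101 in binary

    if (mask & FlexibleRightMargin)  // bit 2 is set
        std::printf("flexible right margin\n");
    if (!(mask & FlexibleWidth))     // bit 1 is not set
        std::printf("width is fixed\n");
}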
The << operator means that all bits in the expression on the left side are shifted left by the amount given on the right side of the operator,
so 1 << 1 means:
0001 becomes 0010 (those are binary numbers)
another example:
0001 0100 << 2 = 0101 0000
Most of the time, a shift left is the same as a multiplication by 2.
Exception:
when high bits are set and you shift them left (e.g., in a 16-bit integer, 1000 0000 0000 0000 << 1), they will be discarded or wrapped around (how exactly depends on the language).
It's a bit shift.
In C-inspired languages, the left and right shift operators are "<<" and ">>", respectively. The number of places to shift is given as the second argument to the shift operator.
Bit shift!
For example, 500 >> 4 = 31:
Original:   111110100
1st shift:  011111010
2nd shift:  001111101
3rd shift:  000111110
4th shift:  000011111, which equals 31.
The same as:
500 / 16 = 31
500 / 2^4 = 31
Bitwise shift left. For more info see the Wikipedia article.