I do not understand why
char test = '\032';
evaluates to 26 in decimal.
'\032' seems to be interpreted as octal, but I want it to be treated as a decimal number.
I think I am confused with the character encoding.
Can anybody clarify this for me and give me a hint on how to convert it the way I want it?
In C, '\octal-digit' begins an octal-escape-sequence. There is no decimal-escape-sequence.
Code could simply use:
char test = 32;
To assign the value of 32 to a char, code has many options:
// octal escape sequence
char test1 = '\040'; // \ and then 1, 2 or 3 octal digits
char test2 = '\40';
// hexadecimal escape sequence
char test3 = '\x20'; // \x and then 1 or more hexadecimal digits
// integer decimal constant
char test4 = 32; // 1-9 and then 0 or more decimal digits
// integer octal constant
char test5 = 040; // 0 and then 0 or more octal digits
char test6 = 0040;
char test7 = 00040;
// integer hexadecimal constant
char test8 = 0x20; // 0x or 0X and then 1 or more hexadecimal digits
char test9 = 0X20;
// universal-character-name
// (caution: the C standard disallows UCNs below 00A0 other than $, @ and `,
// so many compilers reject these two)
char testA = '\u0020'; // \u & 4 hex digits
char testB = '\U00000020'; // \U & 8 hex digits
// character constant
char testC = ' '; // When the character set is ASCII
The syntax you are using (a backslash followed by octal digits) is an octal escape sequence. To use decimal, you can just do:
char test = (char)32;
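A quick way to confirm the behavior described above (a minimal sketch, not part of either answer) is to print both forms:
#include <stdio.h>

int main(void)
{
    char octal_escape = '\032';   /* octal escape: 032 octal == 26 decimal */
    char decimal_value = 32;      /* plain decimal integer constant        */

    printf("'\\032' has the value %d\n", octal_escape);   /* prints 26 */
    printf("32 has the value %d\n", decimal_value);       /* prints 32 */
    return 0;
}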
Related
I'm declaring the following constants and printing their values, but when I print they always print in their decimal form. How do I get them to print in their originally defined form?
let binaryInteger = 0b10001
let octalInteger = 0o21
let hexadecimalInteger = 0x11
print("Binary Integer: \(binaryInteger) and in binary \(binaryInteger)")
print("Octal Integer: \(octalInteger) and in octal \(octalInteger)")
print("Hexadecimal Integer: \(hexadecimalInteger) and in hexadecimal \(hexadecimalInteger)")
The output I'd like to see for the first print statement would be:
Binary Integer: 17 and in binary 10001.
What do I need to put in front of the \( to get it to print in binary/octal/hex?
You have to convert the integers to the desired String representation manually:
print("Binary Integer: \(binaryInteger) and in binary \(String(binaryInteger, radix: 2))")
print("Octal Integer: \(octalInteger) and in octal \(String(octalInteger, radix: 8))")
print("Hexadecimal Integer: \(hexadecimalInteger) and in hexadecimal \(String(hexadecimalInteger, radix: 16))")
Of course, you might also create a custom String interpolation to be able to use something like the following:
print("Hexadecimal Integer: \(hexadecimalInteger) and in hexadecimal \(hexadecimalInteger, radix: 16)")
See an example here
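For instance, a minimal sketch of such an interpolation overload (my own assumption about what the linked example looks like, not taken from it):
extension String.StringInterpolation {
    // Allows \(value, radix: n) inside string literals
    mutating func appendInterpolation(_ value: Int, radix: Int) {
        appendLiteral(String(value, radix: radix))
    }
}

print("Hexadecimal Integer: \(hexadecimalInteger) and in hexadecimal \(hexadecimalInteger, radix: 16)")
// Hexadecimal Integer: 17 and in hexadecimal 11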
The basic answer is given by Silthan; I would also wrap it in a structure, which helps to preserve the original format:
enum Radix: Int {
    case binary = 2
    case octal = 8
    case hexa = 16

    var prefix: String {
        switch self {
        case .binary:
            return "0b"
        case .octal:
            return "0o"
        case .hexa:
            return "0x"
        }
    }
}

struct PresentableNumber<T> where T: BinaryInteger & CustomStringConvertible {
    let number: T
    let radix: Radix

    var originalPresentation: String {
        return radix.prefix + String(number, radix: radix.rawValue)
    }
}

extension PresentableNumber: CustomStringConvertible {
    var description: String {
        return number.description
    }
}
This way you can define and print your numbers like this:
let binaryInteger = PresentableNumber(number: 0b10001, radix: .binary)
let octalInteger = PresentableNumber(number: 0o21, radix: .octal)
let hexadecimalInteger = PresentableNumber(number: 0x11, radix: .hexa)
print("Binary Integer: \(binaryInteger) and in binary \(binaryInteger.originalPresentation)")
print("Octal Integer: \(octalInteger) and in octal \(octalInteger.originalPresentation)")
print("Hexadecimal Integer: \(hexadecimalInteger) and in hexadecimal \(hexadecimalInteger.originalPresentation)")
The output:
Binary Integer: 17 and in binary 0b10001
Octal Integer: 17 and in octal 0o21
Hexadecimal Integer: 17 and in hexadecimal 0x11
I am trying to get a two-character hex value from an integer:
let hex = String(format:"%2X", 0)
print ("hex = \(hex)")
hex = "0"
How can I format the String so that the result always has 2 characters? In this case I would want
hex = "00"
You can add a zero-padding flag to the format string: the 0 means pad with zeros and the 2 is the minimum field width:
let hex = String(format:"%02X", 0)
Result:
let hex = String(format:"%02X", 0) // 00
let hex = String(format:"%02X", 15) // 0F
let hex = String(format:"%02X", 16) // 10
I have been trying to create an iPhone app that sends telnet commands. However, what is puzzling me is that the sizes of certain strings are so different, particularly when they include \n or \r. I have listed a few examples below. Please assist.
const char *a = "play 25\n";
int sizeBitA1 = sizeof(a); // 8 units
int sizeBitA2 = sizeof("play 25\n"); // 9 units
const char *b = "\r\n";
int sizeBitB1 = sizeof(b); // 8 units
int sizeBitB2 = sizeof("\r\n"); // 3 units
The following code snippet illustrates all the options of using string constants, arrays and pointers, and sizeof and strlen.
#include <string.h>   /* for strlen */

const char *a = "play\n";
const char at[] = "play\n";
int sizeBitA1 = sizeof(a); // 8 bytes == size of a pointer
int sizeBitA2 = sizeof("play\n"); // 6 bytes, including the trailing '\0'
int sizeBitA3 = sizeof(at); // 6 bytes, including the trailing '\0'
int sizeBitA4 = strlen(a); // 5 bytes, excluding the trailing '\0'
int sizeBitA5 = strlen("play\n"); // 5 bytes, excluding the trailing '\0'
int sizeBitA6 = strlen(at); // 5 bytes, excluding the trailing '\0'
const char *b = "\r\n";
const char bt[] = "\r\n";
int sizeBitB1 = sizeof(b); // 8 bytes == size of a pointer
int sizeBitB2 = sizeof("\r\n"); // 3 bytes, including the trailing '\0'
int sizeBitB3 = sizeof(bt); // 3 bytes, including the trailing '\0'
int sizeBitB4 = strlen(b); // 2 bytes, excluding the trailing '\0'
int sizeBitB5 = strlen("\r\n"); // 2 bytes, excluding the trailing '\0'
int sizeBitB6 = strlen(bt); // 2 bytes, excluding the trailing '\0'
sizeof returns the size of the datatype (at compile-time). But you're probably interested in the length of the string. For that purpose, you should use strlen.
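To tie this back to sending telnet commands: when the command is actually written to the socket, the byte count should come from strlen, not from sizeof applied to a pointer. A minimal sketch, assuming an already-connected socket descriptor sockfd (the socket setup is not part of the original answer):
#include <string.h>
#include <sys/socket.h>

void send_command(int sockfd)
{
    const char *cmd = "play 25\r\n";
    /* strlen(cmd) is the number of bytes in the command, excluding the '\0';
       sizeof(cmd) would only be the size of the pointer. */
    send(sockfd, cmd, strlen(cmd), 0);
}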
I have the following
NSString *timeString = [NSString stringWithFormat:@"%i", time];
NSLog(@"Timestring is %@", timeString);
NSLog(@"the character at 1 is %d", [timeString characterAtIndex:1]);
and get the following in the print outs
Timestring is 59
the character at 1 is 57
If I print out characterAtIndex:0 it prints out
Timestring is 59
the character at 0 is 53
I think it is printing out the char representation of the number.
How could I do this so that I can extract both digits from e.g. 60 and use the 6 and the 0 to set an image, e.g. @"%d.png"?
The format specifier %d makes NSLog treat the corresponding value as an integer, so in your case the char value is treated as an integer and its integer value is printed. To output the actual character, use the %c specifier:
NSLog(@"the character at 1 is %c", [timeString characterAtIndex:1]);
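For the image part of the question, one option is to take each digit as a one-character substring and build the file name from it. A minimal sketch, assuming images named 0.png through 9.png exist in the bundle (the loop and the imageNamed: calls are illustrative, not from the original answer):
for (NSUInteger i = 0; i < timeString.length; i++) {
    // e.g. @"6" and @"0" for the string @"60"
    NSString *digit = [timeString substringWithRange:NSMakeRange(i, 1)];
    NSString *imageName = [NSString stringWithFormat:@"%@.png", digit];
    UIImage *image = [UIImage imageNamed:imageName]; // assumes 0.png ... 9.png exist
    // ... assign `image` to the appropriate image view
}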
157,453796 = hex 18068A
157,455093 = hex 180697
71,5037 = hex E91D00
71,506104 = hex E93500
71,507103 = hex E93F00
0 = hex 000000
I know exactly what it is not: IEEE 754.
The following depends on the byte-order of your processor architecture and thus can't be read back on every system:
#include <algorithm>  // for std::copy

double f = 10020.2093;
char acz[sizeof(double) + 1] = {'\0'};  // buffer for the raw bytes of f
std::copy((char*)&f, (char*)&f + sizeof(double), acz);