Hi, I am trying to convert an octal number to decimal in Swift. What would be the easiest way to do this?
From Octal to Decimal
There is a specific Int initializer for this:

let octal = 10
if let decimal = Int(String(octal), radix: 8) {
    print(decimal) // 8
}
From Decimal to Octal
let decimal = 8
if let octal = Int(String(decimal, radix: 8)) {
    print(octal) // 10
}
Note 1: Please pay attention: the parentheses are different in the two code snippets.
Note 2: The Int initializer can fail for string representations of numbers with more exotic radixes. Please read the comment by @AMomchilov below.
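For example, a minimal sketch of the failure case: "8" is not a valid digit in base 8, so the initializer returns nil.

let invalid = Int("8", radix: 8)
print(invalid as Any) // nil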
You can also convert from octal to decimal easily because Swift supports octal syntax natively: you write 0o before the octal number.
let number = 0o10
print(number) // it prints the number 8 in decimal
Integer Literals
Integer literals represent integer values of unspecified precision. By default, integer literals are expressed in decimal; you can specify an alternate base using a prefix. Binary literals begin with 0b, octal literals begin with 0o, and hexadecimal literals begin with 0x.
Here is the reference in the documentation.
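A quick sketch of those prefixes; all four constants below hold the same value, seventeen:

let decimalInteger = 17          // no prefix: decimal
let binaryInteger = 0b10001      // 17 in binary notation
let octalInteger = 0o21          // 17 in octal notation
let hexadecimalInteger = 0x11    // 17 in hexadecimal notation
print(decimalInteger, binaryInteger, octalInteger, hexadecimalInteger) // 17 17 17 17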
I hope it helps you
Related
I am learning the Dart programming language for Flutter. In the int class, what does the word radix mean? Please explain this to me. Thanks
Sometimes we have to work with strings in a radix number format. Dart's int.parse() method also supports converting a string into a number with a radix in the range 2..36:
For example, we convert a Hex string into int:
var n_16 = int.parse('FF', radix: 16);
The output of the code is 255.
Using the radix parameter we can also convert binary numbers into decimal numbers, like this:

var decimal = int.parse('1001001', radix: 2); // 73
I'm trying to create a string from hex values in an array, but whenever a hex in the array starts with a zero it disappears in the resulting string as well.
I use String(value:radix:uppercase) to create the string.
An example:
Here's an array: [0x12345678, 0x12345678, 0x12345678, 0x12345678].
Which gives me the string: 12345678123456781234567812345678 (32 characters)
But the following array: [0x02345678, 0x12345678, 0x02345678, 0x12345678] (notice that I replaced two 1's with zeroes).
Gives me the string: 234567812345678234567812345678 (30 characters)
I'm not sure why it removes the zeroes. I know the value is correct; how can I format it to keep the zero if it was there?
The number 0x01234567 is really just 0x1234567. Leading zeros in number literals don't mean anything (except in languages like C, where a leading 0 marks an octal literal; Swift uses the 0o prefix for that).
Instead of using String(value:radix:uppercase), use String(format:):
let num = 0x1234567
let str = String(format: "%08X", num)
Explanation of the format:
The 0 means to pad the left end of the string with zeros as needed.
The 8 means you want the result to be at least 8 characters long.
The X means you want the number converted to uppercase hex. Use x if you want lowercase hex.
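Putting this together for the arrays in the question (a sketch, using the second array from above): mapping every element through String(format:) and joining the results keeps the leading zeros.

let values = [0x02345678, 0x12345678, 0x02345678, 0x12345678]
let joined = values.map { String(format: "%08X", $0) }.joined()
print(joined) // 02345678123456780234567812345678 (32 characters)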
I am creating an iPhone app and I need to convert a single-digit number into an integer.
My code has a variable called char that has the type Character, but I need to be able to do math with it. Therefore I think I need to convert it to a string first; however, I cannot find a way to do that.
In the latest Swift versions (at least in Swift 5) there is a more straightforward way of converting Character instances. Character has the property wholeNumberValue, which tries to convert a character to Int and returns nil if the character does not represent an integer.
let char: Character = "5"
if let intValue = char.wholeNumberValue {
print("Value is \(intValue)")
} else {
print("Not an integer")
}
With a Character you can create a String. And with a String you can create an Int.
let char: Character = "1"
if let number = Int(String(char)) {
// use number
}
The String middleman type conversion isn’t necessary if you use the unicodeScalars property of Swift 4.0’s Character type.
let myChar: Character = "3"
myChar.unicodeScalars.first!.value - Unicode.Scalar("0")!.value // 3: UInt32
This uses a trick commonly seen in C code of subtracting the value of the char '0' literal to convert from ASCII values to decimal values. See this site for the conversions: https://www.asciitable.com
Also, there are some force-unwraps in my answer. To avoid those, you can validate that you have a decimal digit with CharacterSet.decimalDigits and/or use optional binding around the first property, as in the sketch below. You can also subtract 48 directly rather than converting '0' through Unicode.Scalar.
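Here is a sketch of that safer variant, validating with CharacterSet.decimalDigits (which requires importing Foundation) and using optional binding instead of force-unwraps:

import Foundation

let digitChar: Character = "3"
if let scalar = digitChar.unicodeScalars.first,
   scalar.isASCII, CharacterSet.decimalDigits.contains(scalar) {
    let digit = scalar.value - 48 // 48 is the ASCII value of "0"
    print(digit) // 3
}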
Could someone explain why I cannot use int() to convert an integer number represented as a string in scientific notation into a Python int?
For example this does not work:
print int('1e1')
But this does:
print int(float('1e1'))
print int(1e1) # Works
Why does int not recognise the string as an integer? Surely it's as simple as checking the sign of the exponent?
Behind the scenes, a number in scientific notation is always represented internally as a float. The reason is the varying number range: an integer only maps to a fixed range of values, say 2^32 of them, whereas the scientific representation, like the floating-point representation, consists of a significand and an exponent. You can look up further details at https://en.wikipedia.org/wiki/Floating_point.
You cannot cast a string holding a scientific number representation to an integer directly.
print int(1e1) # Works
Works because 1e1 as a number is already a float.
>>> type(1e1)
<type 'float'>
Back to your question: we want to get an integer from a float or from a scientific-notation string. Details: https://docs.python.org/2/reference/lexical_analysis.html#integers
>>> int("13.37")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for int() with base 10: '13.37'
For float or scientific representations, you have to go through float as an intermediate step.
Very Simple Solution
print(int(float('1e1')))
Steps:
1. First, convert the scientific-notation string to a float.
2. Convert that float value to an int.
3. Great, you finally get the int data type.
Enjoy.
Because in Python (at least in 2.x, since I do not use Python 3.x), int() behaves differently on strings and on numeric values. If you pass in a string, then Python will try to parse it as a base-10 int:
int ("077")
>> 77
But if you pass in a valid numeric value, then Python will first evaluate it according to its base and type and then convert it to a base-10 int. So Python first interprets 077 as a base-8 literal, evaluates it to base 10, and int() just returns it:
int (077) # Leading 0 defines a base 8 number.
>> 63
077
>> 63
So, int('1e1') will try to parse '1e1' as a base-10 string and will raise a ValueError. But 1e1 is a numeric value (a mathematical expression):
1e1
>> 10.0
So int() handles it as a numeric value: Python first evaluates 1e1 to the float 10.0, and int() then converts that float to an integer.
So, when calling int() with a string value, you must be sure that the string is a valid base-10 integer value.
int(float('1e+001')) will work.
And, as others have mentioned, 1e1 by itself is already a float.
How do I convert a 64-bit integer represented as a decimal string into a hex string?
I need to do it in Perl on a system that doesn't support quads.
use Math::BigInt;

my $decimal_string = '123456789123456789';
my $hex_string = Math::BigInt->new($decimal_string)->as_hex(); # "0x1b69b4bacd05f15"
use bignum; # alternative: this pragma makes numeric constants arbitrary-precision (Math::BigInt) automatically