Hexadecimal to decimal number conversion

I have to convert a hexadecimal number to decimal but don't know how. In the AutoIt documentation, some constants are defined by assigning them hexadecimal values.
I believe 0x00200000 hexadecimal should equal 8192 decimal, but converters return 2097152. I also have to convert another hex value (0x00000200), and converters seem to get that one wrong too. How do I convert these correctly?
When I use the definition $WS_EX_CLIENTEDGE (or a hexadecimal value), it doesn't work; if I use an integer, I believe it will work.

As per Documentation - Language Reference - Datatypes:
In AutoIt there is only one datatype called a Variant. A variant can
contain numeric or string data and decides how to use the data
depending on the situation it is being used in.
Issuing:
ConsoleWrite(0x00200000 & @LF)
demonstrates the stated behavior. Use Int() if an explicit conversion is required:
#region Hex2Dec
Global Const $dBin1 = 0x00200000 ; hexadecimal literal, stored as the number 2097152
Global Const $iInt1 = Int($dBin1) ; explicit conversion to integer
ConsoleWrite($iInt1 & @LF)
#endregion
#region Dec2Hex
Global Const $iInt2 = 8192
Global Const $dBin2 = Hex($iInt2) ; hexadecimal string representation
ConsoleWrite('0x' & $dBin2 & @LF)
#endregion
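For reference, the underlying arithmetic: 0x00200000 is 2 * 16^5 = 2,097,152 in decimal, while 8,192 in decimal is 0x2000 in hexadecimal.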
Related functions include:
Int()
Number()
String()
StringToBinary()
StringToASCIIArray()
StringFromASCIIArray()
Binary()
BinaryToString()
Ptr()
HWnd()


Getting character ASCII value as an Integer in Swift

I have been trying to get a character's ASCII code as an Int so that I can modify it and change the character by doing some math. However, I am finding it difficult because I get conversion errors between the different integer types and can't seem to find an answer.
var n: Character = pass[I] // using the string protocol extension
if n.isASCII
{
    var tempo: Int = Int(n.asciiValue)
    temp += (tempo | key) // key and temp are of type Int
}
In Swift, a Character is not necessarily an ASCII character. It would, for example, make no sense to return the ASCII value of "🪂", which requires a larger Unicode encoding. This is why the asciiValue property is an optional UInt8, annotated UInt8?.
The simplest solution
Since you checked yourself that the character isASCII, you can safely go for an unconditional unwrap with !:
var tempo: Int = Int(n.asciiValue!) // <--- just change this line
A more elegant alternative
You could also take advantage of optional binding, which uses the fact that the optional is nil when there is no ASCII value (i.e. n was not an ASCII character):
if let tempo = n.asciiValue // binds only if there is an ASCII value
{
    temp += (Int(tempo) | key)
}

Count the scale of a given decimal

How can I count the scale of a given decimal in PowerShell?
$a = 0.0001
$b = 0.000001
Casting $a to a string and returning $a.Length gives a result of 6...I need 4.
I thought there'd be a decimal or math function but I haven't found it and messing with a string seems inelegant.
There's probably a better mathematical way, but I'd find the decimal places like this:
$a = 0.0001
$decimalPlaces = ("$a" -split '\.')[-1].TrimEnd('0').Length
Basically, split the string on the . character and get the length of the last string in the array. Wrapping $a in double quotes implicitly calls .ToString() with an invariant culture (you could expand this to $a.ToString([CultureInfo]::InvariantCulture)), making this method of determining the number of decimal places culture-invariant.
.TrimEnd('0') is used because, if $a were sourced from a string rather than a proper number type, trailing zeroes could be included that should not count as decimal places. However, if you want the scale rather than just the used decimal places, leave .TrimEnd('0') off, like so:
$decimalPlaces = ("$a" -split '\.')[-1].Length
mclayton helpfully linked to this answer to a related C# question in a comment, and the solution there can indeed be adapted to PowerShell, if working with or conversion to type [decimal] is acceptable:
# Define $a as a [decimal] literal (suffix 'd')
# This internally records the scale (number of decimal places) as specified.
$a = 0.0001d
# [decimal]::GetBits() allows extraction of the scale from the
# internal representation:
[decimal]::GetBits($a)[-1] -shr 16 -band 0xFF # -> 4, the number of decimal places
The System.Decimal.GetBits method returns an array of internal bit fields whose last element contains the scale in bits 16 - 23 (8 bits, even though the max. scale allowed is 28), which is what the above extracts.
Note: A PowerShell number literal that is a fractional number without the d suffix - e.g., 0.0001 becomes a [double] instance, i.e. a double-precision binary floating-point number.
PowerShell automatically converts [double] to [decimal] values on demand, but do note that there can be rounding errors due to the differing internal representations, and that [double] can store larger numbers than [decimal] can (although not accurately).
A [decimal] literal - one with suffix d (note that C# uses suffix m) - is parsed with a scale exactly as specified, so that applying the above to 0.000d and 0.010d yields 3 in both cases; that is, the trailing zeros are meaningful.
This does not apply if you (implicitly) convert from [double] instances such as 0.000 and 0.010, for which the above yields 0 and 2, respectively.
A string-based solution:
To offer a more concise (also culture-invariant) alternative to Bender The Greatest's helpful answer:
$a = 0.0001
("$a" -replace '.+\.').Length # -> 4, the number of decimal places
Caveat: This solution relies on the default string representation of a [double] number, which need not match the original input format; for instance, .0100, when stringified later, becomes '0.01'; however, as discussed above, you can preserve trailing zeros if you start with a [decimal] literal: .0100d stringifies to '0.0100' (input number of decimals preserved).
"$a", uses an expandable string (PowerShell's string interpolation) to create a culture-invariant string representation of the number so as to ensure that the string representation uses . as the decimal mark.
In effect, PowerShell calls $a.ToString([cultureinfo]::InvariantCulture) behind the scenes.[1].
By contrast, .ToString() (argument-less) applies the rules of the current culture, and in some cultures it is , - not . - that is used as the decimal mark.
Caveat: If you use just $a as the LHS of -replace, $a is implicitly stringified, in which case you - curiously - get culture-sensitive behavior, as with .ToString() - see this GitHub issue.
-replace '.+\.' effectively removes all characters up to and including the decimal point from the input string, and .Length counts the characters in the resulting string - the number of decimal places.
[1] Note that casts from strings in PowerShell also use the invariant culture (effectively, ::Parse($value, [cultureinfo]::InvariantCulture) is called), so in order to parse a culture-local string representation you'll need to use the ::Parse() method explicitly; e.g., [double]::Parse('1,2'), not [double] '1,2'.

Converting number in scientific notation to int

Could someone explain why I cannot use int() to convert an integer represented as a string in scientific notation into a Python int?
For example this does not work:
print int('1e1')
But this does:
print int(float('1e1'))
print int(1e1) # Works
Why does int not recognise the string as an integer? Surely it's as simple as checking the sign of the exponent?
Behind the scenes, a number in scientific notation is always represented as a float internally. The reason is the varying number range, since an integer only maps to a fixed value range, say 2^32 values. The scientific representation is similar to the floating-point representation, with a significand and an exponent. You can look up further details at https://en.wikipedia.org/wiki/Floating_point.
You cannot cast a scientific-notation string to an integer directly.
print int(1e1) # Works
Works because 1e1 as a number is already a float.
>>> type(1e1)
<type 'float'>
Back to your question: we want to get an integer from a float or a scientific-notation string. Details: https://docs.python.org/2/reference/lexical_analysis.html#integers
>>> int("13.37")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for int() with base 10: '13.37'
For float or scientific-notation representations, you have to take the intermediate step through float.
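A minimal sketch of that intermediate step, wrapped in a small helper (the name parse_sci_int is just for illustration and not part of the original answer):
def parse_sci_int(text):
    # Go through float first, then truncate to int; very large values
    # may lose precision in the float step.
    return int(float(text))

print(parse_sci_int('1e1'))    # 10
print(parse_sci_int('2.5e3'))  # 2500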
Very Simple Solution
print(int(float('1e1')))
Steps:
1. First convert the scientific-notation string to a float.
2. Convert that float value to an int.
3. You finally get the int data type.
Enjoy.
Because in Python (at least in 2.x, since I do not use Python 3.x), int() behaves differently on strings and numeric values. If you pass a string, Python will try to parse it as a base-10 int:
int("077")
>> 77
But if you pass a valid numeric value, Python will interpret it according to its base and type and convert it to a base-10 int. In the example below, Python first interprets 077 as a base-8 literal and converts it to base 10, and int() just returns it:
int(077) # A leading 0 defines a base-8 number in Python 2.
>> 63
077
>> 63
So int('1e1') will try to parse '1e1' as a base-10 string and will raise a ValueError. But 1e1 (without quotes) is a numeric value, a mathematical expression:
1e1
>> 10.0
So int() handles it as a numeric value: Python first evaluates 1e1 to the float 10.0, and int() then converts that float to an integer.
So when calling int() with a string value, you must be sure the string is a valid base-10 integer literal.
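To illustrate the difference between string parsing and numeric conversion, here is a small sketch (not from the original answer); int()'s optional base argument is standard Python:
print(int("077"))         # 77  - strings are parsed as base 10 by default
print(int("077", 8))      # 63  - an explicit base makes int() parse octal
print(int(float("1e1")))  # 10  - scientific-notation strings need the float step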
int(float(1e+001)) will work.
However, as others have mentioned, 1e1 is already a float.

Need code for removing all unicode characters in vb6

I need code for removing all Unicode characters from a VB6 string.
If this is UTF-16 text (as normal VB6 String values all are) and you can ignore the issue of surrogate pairs, then this is fairly quick and reasonably concise:
' Compacts Text in place, keeping only characters with code points <= &H7F (ASCII).
Private Sub DeleteNonAscii(ByRef Text As String)
    Dim I As Long
    Dim J As Long
    Dim Char As String

    I = 1
    For J = 1 To Len(Text)
        Char = Mid$(Text, J, 1)
        'AscW() returns a signed Integer, so mask to get the unsigned code point.
        If (AscW(Char) And &HFFFF&) <= &H7F& Then
            Mid$(Text, I, 1) = Char
            I = I + 1
        End If
    Next
    Text = Left$(Text, I - 1)
End Sub
This includes the workaround for the unfortunate choice VB6 made of returning a signed 16-bit Integer from the AscW() function. It should have been a Long for symmetry with ChrW$(), but it is what it is.
It should beat the pants off any regular-expression library in clarity, maintainability, and performance. If better performance is required for truly massive amounts of text, then SAFEARRAY or CopyMemory stunts could be used.
Public Shared Function StripUnicodeCharactersFromString(ByVal inputValue As String) As String
    Return Regex.Replace(inputValue, "[^\u0000-\u007F]", String.Empty)
End Function
VB6 - I'm not sure whether
sRTF = "\u" & CStr(AscW(char))
will work? You could do this for all char values above 127.
StrConv is the command for converting strings.
StrConv Function
Returns a Variant (String) converted as specified.
Syntax
StrConv(string, conversion, LCID)
The StrConv function syntax has these named arguments:
Part         Description
string       Required. String expression to be converted.
conversion   Required. Integer. The sum of values specifying the type of conversion to perform. 128 is Unicode to the local code page (or whatever the optional LCID is).
LCID         Optional. The LocaleID, if different than the system LocaleID. (The system LocaleID is the default.)

Objective-C Decimal to Base 16 Hex conversion

Does anyone have a code snippet or a class that will take a long long and turn it into a 16-character hex string?
I'm looking to turn data like this
long long decimalRepresentation = 1719886131591410351;
and turn it into this
//Base 16 Hex Output: 17DE435307A07300
The %x format specifier doesn't want to work for me:
NSLog(@"Hex: %x", decimalRepresentation);
//console: "Hex: 7a072af"
As you can see that's not even close. Any help is truly appreciated!
%x prints an unsigned integer in hexadecimal representation, and sizeof(long long) != sizeof(unsigned). See e.g. "Data Type Size and Alignment" in the 64-bit transitioning guide.
Use the ll length modifier (that's two lower-case Ls) to get the desired output (use %llX if you want upper-case hex digits):
NSLog(@"%llx", myLongLong);