Convert byte array (hex) to signed int - PowerShell

I am trying to convert a (variable-length) hex string to a signed integer (I need either positive or negative values).
[int16], [int32] and [int64] seem to work fine with 2-, 4- and 8-byte hex strings, but I'm stuck with 3-byte strings; there is no [int24] type in PowerShell.
Here's what I have now (snippet):
$start = $mftdatarnbh.Substring($DataRunStringsOffset+$LengthBytes*2+2,$StartBytes*2) -split "(..)"
[array]::reverse($start)
$start = -join $start
if ($StartBytes * 8 -le 16) { $startd = [int16]"0x$($start)" }
elseif ($StartBytes * 8 -in (17..48)) { $startd = [int32]"0x$($start)" }
else { $startd = [int64]"0x$($start)" }
With the above code, a $start value of "D35A71" gives 13851249 instead of -2925967. I tried to figure out a way to implement two's complement but got lost. Is there an easy way to do this right?
Thank you in advance
Edit: Basically, I think I need to implement something like this:
int num = (sbyte)array[0] << 16 | array[1] << 8 | array[2];
as seen here.
Just tried this:
$start = "D35A71"
[sbyte]"0x$($start.Substring(0,2))" -shl 16 -bor "0x$($start.Substring(2,2))" -shl 8 -bor "0x$($start.Substring(4,2))"
but it doesn't seem to produce the correct result :-/

To parse your hex-number string as a negative number you can use [bigint] (System.Numerics.BigInteger):
# Since the most significant hex digit has a 1 as its most significant bit
# (is >= 0x8), it is parsed as a NEGATIVE number.
# To force unconditional interpretation as a positive number, prepend '0'
# to the input hex string.
PS> [bigint]::Parse('D35A71', 'AllowHexSpecifier')
-2925967
You can cast the resulting [bigint] instance back to an [int] (System.Int32).
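For example:
PS> [int] [bigint]::Parse('D35A71', 'AllowHexSpecifier')
-2925967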
Note:
The result is a negative number, because the most significant hex digit of the hex input string is >= 0x8, i.e. has its high bit set.
To force [bigint] to unconditionally interpret a hex input string as a positive number, prepend 0.
The internal two's-complement representation of a resulting negative number is formed at byte boundaries, so a given hex number with an odd number of digits (i.e. whose first hex digit is a "half byte") has the missing half byte filled with 1-bits.
Therefore, a hex-number string whose most significant digit is >= 0x8 (and that therefore parses as a negative number) yields the same number as prepending one or more Fs (0xF == 1111) to it, as the examples below show.
See the docs for details about the parsing logic.
Examples:
# First digit (7) is < 8 (high bit NOT set) -> positive number
[bigint]::Parse('7FF', 'AllowHexSpecifier') # -> 2047
# First digit (8) is >= 8 (high bit IS SET) -> negative number
[bigint]::Parse('800', 'AllowHexSpecifier') # -> -2048
# Prepending additional 'F's to a number that parses as
# a negative number yields the *same* result
[bigint]::Parse('F800', 'AllowHexSpecifier') # -> -2048
[bigint]::Parse('FF800', 'AllowHexSpecifier') # -> -2048
# ...
# Starting the hex-number string with '0'
# *unconditionally* makes the result a *positive* number
[bigint]::Parse('0800', 'AllowHexSpecifier') # -> 2048
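Alternatively, if you'd rather avoid [bigint], here is a minimal sketch of manual two's-complement sign extension, assuming $start already holds the byte-reversed hex string from your snippet:
$start = 'D35A71'
$bits  = $start.Length * 4                     # 6 hex digits -> 24 bits
[long]$value = [Convert]::ToInt64($start, 16)  # unsigned interpretation: 13851249
# If the sign bit (here: bit 23) is set, subtract 2^24 to sign-extend.
if ($value -band ([long]1 -shl ($bits - 1))) {
    $value -= [long]1 -shl $bits
}
$value                                         # -> -2925967
This works for any whole number of bytes, so it also covers the 3-byte [int24] case directly.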


Numbers change inside variables for PowerShell

I am literally just trying to convert a string with week and day information to numbers and store them as variables, yet something really funky is happening. As of now, I have tested this behavior on 4 PCs, and in PowerShell 5 and 7 it's happening all over the place.
$UP_Down = "6w0d"
[int]$weeks = if ($Up_Down -match "w"){$Up_Down[$($Up_Down.IndexOf('w')-1)]}Else{0}
[int]$days = if ($Up_Down -match "d"){$Up_Down[$Up_Down.IndexOf('d')-1]}Else{0}
[int]$totaldays = (7 * $weeks) + $days
Now, the data in the initial variable is obviously 6 weeks and 0 days, which I have to convert to 42 total days (this is just an example; it's happening regardless of combination).
However, the following are the funky results I get, which I have elaborated with Write-Output:
Weeks if-statement result by itself: 6
Weeks variable result: 54
Days if-statement result by itself: 0
Days variable result: 48
Totaldays variable result: 426
The problem occurs regardless of which numeric datatype I use.
Ironically, the variables have the correct value if I do NOT assign a datatype to them, BUT
the moment it hits (7 * $weeks), even if $weeks is correct, the output value is 426, and remember, no [int] etc. anywhere.
What am I doing wrong?
The problem is that you are not converting the number inside the string ("6") to the number 6, but the character '6' to its value according to the underlying character encoding scheme, which is 54 in the case of ASCII. Same with the day: '0' has a value of 48, and 7 * 54 + 48 = 426.
See the difference:
PS C:\Users\name> [int]"6"[0]
54
PS C:\Users\name> [int]"6"
6
When extracting an element of the string through indexing with [0], you get a character instead of a string of length 1. A cast to int will then return the ASCII value (Unicode code point) of this character.
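The fix is therefore to turn the indexed character back into a string before the [int] cast; for example:
[int] [string] "6w0d"[0]   # -> 6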
You're indexing ([...]) into string $Up_Down, which means you're returning a single character, i.e. a [char] (System.Char) instance.
Casting a [char] to [int] yields its Unicode code point ("ASCII value"), not the digit that the character happens to represent.
For instance, the character 6 is Unicode character DIGIT SIX with code point U+0036; 0036 is the hexadecimal form of the numeric code point, and the decimal form of hexadecimal 0x36 is 54.
PS> [int] "6w0d"[0]
54 # !! Same as: [int] [char] "6"
To interpret the character as a digit, you need an intermediate [string] cast:
PS> [int] [string] "6w0d"[0]
6 # OK - a string is parsed as expected; same as: [int] "6"
If you cast a string rather than char to [int], PowerShell effectively calls System.Int32.Parse behind the scenes as follows: [int]::Parse($string, [cultureinfo]::InvariantCulture).
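For example, the following two expressions are equivalent:
PS> [int] '6'
6
PS> [int]::Parse('6', [cultureinfo]::InvariantCulture)
6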
Note that PowerShell has no char literals - unlike in C#, '...' quoting also produces strings (verbatim ones), and [int] '6' yields integer 6, just like [int] "6" does.
Conversely, you need an explicit [char] cast to convert a single-character string literal to a [char]; e.g., [char] '6'; a multi-character string would cause the cast to fail.
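For example:
PS> [char] '6'         # OK: single-character string -> [char]
6
PS> [int] [char] '6'   # its Unicode code point
54
# [char] '66' would fail, because the string isn't exactly one character long.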
The solution in the context of your command:
[int]$weeks = if ($Up_Down -match "w"){[string] $Up_Down[$Up_Down.IndexOf('w')-1]} Else {0}
[int]$days = if ($Up_Down -match "d"){[string] $Up_Down[$Up_Down.IndexOf('d')-1]} Else {0}
However, I suggest solving the problem differently:
[int] $totalDays = 0
if ($UP_Down -match '^(?:(?<weeks>\d+)w)?(?:(?<days>\d+)d)?$') {
    [int] $weeks, [int] $days = $Matches.weeks, $Matches.days
    $totalDays = 7 * $weeks + $days
} # else: string wasn't in expected format.
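A sample run with the question's input:
$UP_Down = '6w0d'
if ($UP_Down -match '^(?:(?<weeks>\d+)w)?(?:(?<days>\d+)d)?$') {
    [int] $weeks, [int] $days = $Matches.weeks, $Matches.days
    7 * $weeks + $days   # -> 42
}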
others have shown you why the problem hit you, so this is just an alternate way to get the total day count. [grin]
what it does ...
fakes reading in a text file of Week/Day codes
when ready to use real data, remove the entire #region/#endregion block and use Get-Content.
iterates thru the list
splits on the w
trims away the trailing d
assigns the resulting strings to the two [int] variables on the left of the =
this forces the two number strings to become number objects.
calcs the total days
displays the week/day code, week count, day count, and total days
the code ...
#region >>> fake reading in a list of week/day codes
# in real life use Get-Content
$WD_List = @'
6w0d
3w3d
0w1d
66w6d
9w1d
'@ -split [System.Environment]::NewLine
#endregion >>> fake reading in a list of week/day codes
foreach ($WL_Item in $WD_List)
{
    [int]$WeekCount, [int]$DayCount = $WL_Item.Split('w').TrimEnd('d')
    $TotalDays = ($WeekCount * 7) + $DayCount
    $WL_Item
    $WeekCount
    $DayCount
    $TotalDays
    '=' * 20
}
the output ...
6w0d
6
0
42
====================
3w3d
3
3
24
====================
0w1d
0
1
1
====================
66w6d
66
6
468
====================
9w1d
9
1
64
====================

Count the scale of a given decimal

How can I count the scale of a given decimal in Powershell?
$a = 0.0001
$b = 0.000001
Casting $a to a string and returning $a.Length gives a result of 6...I need 4.
I thought there'd be a decimal or math function but I haven't found it and messing with a string seems inelegant.
There's probably a better mathematical way, but I'd find the decimal places like this:
$a = 0.0001
$decimalPlaces = ("$a" -split '\.')[-1].TrimEnd('0').Length
Basically, split the string on the . character and get the length of the last string in the array. Wrapping $a in double-quotes implicitly calls .ToString() with the invariant culture (you could expand this as $a.ToString([CultureInfo]::InvariantCulture)), making this method of determining the number of decimal places culture-invariant.
.TrimEnd('0') is used because, if $a were sourced from a string rather than a proper number type, trailing zeroes could be included that should not count as decimal places. However, if you want the scale and not just the used decimal places, leave off .TrimEnd('0'), like so:
$decimalPlaces = ("$a" -split '\.')[-1].Length
mclayton helpfully linked to this answer to a related C# question in a comment, and the solution there can indeed be adapted to PowerShell, if working with or conversion to type [decimal] is acceptable:
# Define $a as a [decimal] literal (suffix 'd')
# This internally records the scale (number of decimal places) as specified.
$a = 0.0001d
# [decimal]::GetBits() allows extraction of the scale from the
# internal representation:
[decimal]::GetBits($a)[-1] -shr 16 -band 0xFF # -> 4, the number of decimal places
The System.Decimal.GetBits method returns an array of internal bit fields whose last element contains the scale in bits 16-23 (8 bits, even though the max. scale allowed is 28), which is what the above extracts.
Note: A PowerShell number literal that is a fractional number without the d suffix - e.g., 0.0001 - becomes a [double] instance, i.e. a double-precision binary floating-point number.
PowerShell automatically converts [double] to [decimal] values on demand, but do note that there can be rounding errors due to the differing internal representations, and that [double] can store larger numbers than [decimal] can (although not accurately).
A [decimal] literal - one with suffix d (note that C# uses suffix m) - is parsed with a scale exactly as specified, so that applying the above to 0.000d and 0.010d yields 3 in both cases; that is, the trailing zeros are meaningful.
This does not apply if you (implicitly) convert from [double] instances such as 0.000 and 0.010, for which the above yields 0 and 2, respectively.
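For example:
# A [decimal] literal records the scale exactly as specified, trailing zeros included:
[decimal]::GetBits(0.010d)[-1] -shr 16 -band 0xFF          # -> 3
# An (implicit) conversion from [double] keeps only the digits needed:
[decimal]::GetBits([decimal] 0.010)[-1] -shr 16 -band 0xFF # -> 2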
A string-based solution:
To offer a more concise (also culture-invariant) alternative to Bender The Greatest's helpful answer:
$a = 0.0001
("$a" -replace '.+\.').Length # -> 4, the number of decimal places
Caveat: This solution relies on the default string representation of a [double] number, which need not match the original input format; for instance, .0100, when stringified later, becomes '0.01'; however, as discussed above, you can preserve trailing zeros if you start with a [decimal] literal: .0100d stringifies to '0.0100' (input number of decimals preserved).
"$a", uses an expandable string (PowerShell's string interpolation) to create a culture-invariant string representation of the number so as to ensure that the string representation uses . as the decimal mark.
In effect, PowerShell calls $a.ToString([cultureinfo]::InvariantCulture) behind the scenes.[1].
By contrast, .ToString() (argument-less) applies the rules of the current culture, and in some cultures it is , - not . - that is used as the decimal mark.
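For example, this sketch assumes the de-DE culture (which uses , as the decimal mark) and runs as a single statement so the culture change stays in effect:
& {
    [cultureinfo]::CurrentCulture = 'de-DE'
    (0.5).ToString()   # -> '0,5': culture-sensitive
    "$(0.5)"           # -> '0.5': culture-invariant
}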
Caveat: If you use just $a as the LHS of -replace, $a is implicitly stringified, in which case you - curiously - get culture-sensitive behavior, as with .ToString() - see this GitHub issue.
-replace '.+\.' effectively removes all characters up to and including the decimal point from the input string, and .Length counts the characters in the resulting string - the number of decimal places.
[1] Note that casts from strings in PowerShell also use the invariant culture (effectively, ::Parse($value, [cultureinfo]::InvariantCulture) is called), so that in order to parse a culture-local string representation you'll need to use the ::Parse() method explicitly; e.g., [double]::Parse('1,2'), not [double] '1,2'.

PowerShell: Convert unique string to unique int

Is there a method for converting unique strings to unique integers in PowerShell?
I'm using a PowerShell function as a service bus between two APIs;
the first API produces unique codes, e.g. HG44X10999 (varchars), but the second API, which consumes the first as input, will only accept integers. I only care about keeping them unique.
I have looked at $string.GetHashCode(), but this produces negative integers and also changes between builds. Get-Hash | $string -encoding ASCII obviously outputs varchars too.
Other examples on SO refer to converting a string of numeric characters to integers, i.e. $string = 123, but I can't find a way of quickly computing an int from an alphanumeric string.
The Fowler-Noll-Vo hash function seems well-suited for your purpose, as it can produce a 32-bit hash output.
Here's a simple implementation in PowerShell (the offset basis and prime are taken from the Wikipedia reference table for 32-bit outputs):
function Get-FNVHash {
    param(
        [string]$InputString
    )
    # FNV prime and offset basis chosen for 32-bit output
    # See https://en.wikipedia.org/wiki/Fowler–Noll–Vo_hash_function
    [uint32]$FNVPrime = 16777619
    [uint32]$offset = 2166136261
    # Convert the string to a byte array; you may want to change the encoding based on your input
    $bytes = [System.Text.Encoding]::UTF8.GetBytes($InputString)
    # Copy offset as initial hash value
    [uint32]$hash = $offset
    foreach($octet in $bytes)
    {
        # Apply XOR, multiply by prime and mod with max output size
        $hash = $hash -bxor $octet
        $hash = $hash * $FNVPrime % [System.Math]::Pow(2,32)
    }
    return $hash
}
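Note: [System.Math]::Pow(2,32) returns a [double], and the intermediate product $hash * $FNVPrime can exceed 2^53, the largest integer a [double] represents exactly, so the low-order bits may deviate from canonical FNV-1a implementations; the function is still deterministic, which is what matters here. If bit-exact FNV-1a output is desired, one possible tweak (an untested sketch; it may change the sample output below) is to keep the modulo in 64-bit integer arithmetic:
$hash = $hash * $FNVPrime % 0x100000000   # integer modulo; no floating-point rounding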
Now you can repeatably produce distinct integers from the input strings:
PS C:\> Get-FNVHash HG44X10999
1174154724
If the target API only accepts positive signed 32-bit integers you can change the modulus to [System.Math]::Pow(2,31) (doubling the chance of collisions, to approx. 1 in 4300 for 1000 distinct inputs).
For further insight into this simple approach, see this page on FNV and have a look at this article exploring short string hashing.

Converting a hex to string in Swift formatted to keep the same number of digits

I'm trying to create a string from hex values in an array, but whenever a hex value in the array starts with a zero, that zero disappears in the resulting string.
I use String(value:radix:uppercase) to create the string.
An example:
Here's an array: [0x13245678, 0x12345678, 0x12345678, 0x12345678].
Which gives me the string: 12345678123456781234567812345678 (32 characters)
But the following array: [0x02345678, 0x12345678, 0x02345678, 0x12345678] (notice that I replaced two 1's with zeroes).
Gives me the string: 234567812345678234567812345678 (30 characters)
I'm not sure why it removes the zeroes. I know the value is correct; how can I format it to keep the zero if it was there?
The number 0x01234567 is really just 0x1234567. Leading zeros in number literals don't mean anything (unless you are using the leading 0 for octal number literals).
Instead of using String(value:radix:uppercase), use String(format:).
let num = 0x1234567
let str = String(format: "%08X", num)
Explanation of the format:
The 0 means to pad the left end of the string with zeros as needed.
The 8 means you want the result to be 8 characters long.
The X means you want the number converted to uppercase hex. Use x if you want lowercase hex.

xor between two numbers (after hex to binary conversion)

I don't know why there is an error in this code:
hex_str1 = '5'
bin_str1 = dec2bin(hex2dec(hex_str1))
hex_str2 = '4'
bin_str2 = dec2bin(hex2dec(hex_str2))
c=xor(bin_str1,bin_str2)
The value of c is not correct when I transform the hex to binary and apply the xor function. But when I use arrays, the value of c is correct. That code is:
e=[1 1 1 0];
f=[1 0 1 0];
g=xor(e,f)
What is the mistake in my first code when XOR-ing the hex-to-binary values? Can anyone help me find the solution?
Your mistake is applying xor on two strings instead of actual numerical arrays.
For the xor command, logical "0"s are represented by actual zero elements. Any non-zero elements are interpreted as logical "1"s.
When you apply xor on two strings, the numerical value of each character (element) is its ASCII value. From xor's point of view, the zeroes in your string are not really zeroes, but simply non-zero values (being equal to the ASCII value of the character '0'), which are interpreted as logical "1"s. The bottom line is that in your example you're xor-ing 111b and 111b, and so the result is 0.
The solution is to convert your strings to logical arrays:
num1 = (bin_str1 == '1');
num2 = (bin_str2 == '1');
c = xor(num1, num2);
To convert the result back into a string (of a binary number), use this:
bin_str3 = sprintf('%d', c);
... and to a hexadecimal string, add this:
hex_str3 = dec2hex(bin2dec(bin_str3));
It is really helpful, and gives me the correct conversion when forming an HMAC value in MATLAB...
but note that in MATLAB you cannot convert a binary string longer than 52 characters using the bin2dec() function, and similarly hex2dec() cannot take a hexadecimal string longer than 13 characters.