PowerShell: generate a hexadecimal list - powershell

I am looking to generate a list of hex values using PowerShell. For instance, 0..100 creates an array of the numbers 0-100.
How would I go about creating an array of values from 000-FFF?

You can use the following expression:
0..0xfff|% ToString X3
Where:
0 is zero.
.. is the range operator.
0xfff is an integer literal for 4095 in hexadecimal form.
| is the pipeline operator.
% is an alias for the ForEach-Object cmdlet.
ToString is the name of the method to call.
X3 is the argument for the method. It is a standard numeric format string that means: format as a hexadecimal number with at least three digits.
So:
0..0xfff creates an array of the numbers 0-4095.
| passes the elements of the array to the next command.
% ToString X3 formats each number as hexadecimal with at least three digits.
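Spelled out in full, the same pipeline is (equivalent, just more explicit):
0..0xfff | ForEach-Object { $_.ToString('X3') }
# -> 000, 001, 002, ..., FFE, FFF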

Related

Count the scale of a given decimal

How can I count the scale of a given decimal in PowerShell?
$a = 0.0001
$b = 0.000001
Casting $a to a string and returning $a.Length gives a result of 6...I need 4.
I thought there'd be a decimal or math function but I haven't found it and messing with a string seems inelegant.
There's probably a better mathematical way, but I'd find the decimal places like this:
$a = 0.0001
$decimalPlaces = ("$a" -split '\.')[-1].TrimEnd('0').Length
Basically, split the string on the . character and get the length of the last string in the array. Wrapping $a in double quotes implicitly calls .ToString() with the invariant culture (you could expand this as $a.ToString([CultureInfo]::InvariantCulture)), making this method of determining the number of decimal places culture-invariant.
.TrimEnd('0') is used because, if $a were sourced from a string rather than a proper number type, trailing zeroes could be included that should not count as decimal places. However, if you want the scale rather than just the used decimal places, leave .TrimEnd('0') off, like so:
$decimalPlaces = ("$a" -split '\.')[-1].Length
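To illustrate the difference, here is a quick check with a hypothetical value sourced from a string with trailing zeros:
$a = '0.000100'
("$a" -split '\.')[-1].TrimEnd('0').Length # -> 4, the used decimal places
("$a" -split '\.')[-1].Length              # -> 6, the scale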
mclayton helpfully linked to this answer to a related C# question in a comment, and the solution there can indeed be adapted to PowerShell, if working with or conversion to type [decimal] is acceptable:
# Define $a as a [decimal] literal (suffix 'd')
# This internally records the scale (number of decimal places) as specified.
$a = 0.0001d
# [decimal]::GetBits() allows extraction of the scale from
# the internal representation:
[decimal]::GetBits($a)[-1] -shr 16 -band 0xFF # -> 4, the number of decimal places
The System.Decimal.GetBits method returns an array of internal bit fields whose last element contains the scale in bits 16 - 23 (8 bits, even though the max. scale allowed is 28), which is what the above extracts.
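As a quick sanity check of that layout, you can make the scale byte visible with a hex format string (an illustrative sketch, not part of the original answer):
$flags = [decimal]::GetBits(0.0001d)[-1] # the last (flags) element
'{0:X8}' -f $flags                       # -> 00040000: scale 4 sits in bits 16-23
($flags -shr 16) -band 0xFF              # -> 4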
Note: A PowerShell number literal that is a fractional number without the d suffix (e.g., 0.0001) becomes a [double] instance, i.e. a double-precision binary floating-point number.
PowerShell automatically converts [double] to [decimal] values on demand, but do note that there can be rounding errors due to the differing internal representations, and that [double] can store larger numbers than [decimal] can (although not accurately).
A [decimal] literal - one with suffix d (note that C# uses suffix m) - is parsed with a scale exactly as specified, so that applying the above to 0.000d and 0.010d yields 3 in both cases; that is, the trailing zeros are meaningful.
This does not apply if you (implicitly) convert from [double] instances such as 0.000 and 0.010, for which the above yields 0 and 2, respectively.
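The contrast is easy to verify (a sketch; the cast in the second line forces the [double]-to-[decimal] conversion):
[decimal]::GetBits(0.010d)[-1] -shr 16 -band 0xFF          # -> 3 ([decimal] literal)
[decimal]::GetBits([decimal] 0.010)[-1] -shr 16 -band 0xFF # -> 2 (converted [double])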
A string-based solution:
To offer a more concise (also culture-invariant) alternative to Bender The Greatest's helpful answer:
$a = 0.0001
("$a" -replace '.+\.').Length # -> 4, the number of decimal places
Caveat: This solution relies on the default string representation of a [double] number, which need not match the original input format; for instance, .0100, when stringified later, becomes '0.01'; however, as discussed above, you can preserve trailing zeros if you start with a [decimal] literal: .0100d stringifies to '0.0100' (input number of decimals preserved).
"$a", uses an expandable string (PowerShell's string interpolation) to create a culture-invariant string representation of the number so as to ensure that the string representation uses . as the decimal mark.
In effect, PowerShell calls $a.ToString([cultureinfo]::InvariantCulture) behind the scenes.[1]
By contrast, .ToString() (argument-less) applies the rules of the current culture, and in some cultures it is , - not . - that is used as the decimal mark.
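A hypothetical illustration, assuming a comma-decimal culture such as de-DE (run as a single script; an interactive console may reset the culture after each prompt):
[cultureinfo]::CurrentCulture = 'de-DE'
$a = 0.0001
"$a"          # -> 0.0001 (invariant culture: '.' as decimal mark)
$a.ToString() # -> 0,0001 (current culture: ',' as decimal mark)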
Caveat: If you use just $a as the LHS of -replace, $a is implicitly stringified, in which case you - curiously - get culture-sensitive behavior, as with .ToString() - see this GitHub issue.
-replace '.+\.' effectively removes all characters up to and including the decimal point from the input string, and .Length counts the characters in the resulting string - the number of decimal places.
[1] Note that casts from strings in PowerShell also use the invariant culture (effectively, ::Parse($value, [cultureinfo]::InvariantCulture) is called), so that in order to parse a culture-local string representation you'll need to use the ::Parse() method explicitly; e.g., [double]::Parse('1,2'), not [double] '1,2'.
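For example (illustrative only; the second line passes an explicit culture to parse a comma-decimal format):
[double] '1.2'                                # -> 1.2 (invariant culture)
[double]::Parse('1,2', [cultureinfo] 'de-DE') # -> 1.2 (German decimal comma)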

PowerShell: Convert unique string to unique int

Is there a method for converting unique strings to unique integers in PowerShell?
I'm using a PowerShell function as a service bus between two APIs.
The first API produces unique codes, e.g. HG44X10999 (varchars), but the second API, which will consume the first's output, will only accept integers. I only care about keeping them unique.
I have looked at $string.GetHashCode(), but this produces negative integers and also changes between builds. Get-Hash | $string -encoding ASCII obviously outputs varchars too.
Other examples on SO refer to converting a string of numeric characters to an integer, i.e. $string = 123, but I can't find a way of quickly computing an int from an alphanumeric string.
The Fowler-Noll-Vo hash function seems well-suited for your purpose, as it can produce a 32-bit hash output.
Here's a simple implementation in PowerShell (the offset basis and initial prime are taken from the Wikipedia reference table for 32-bit outputs):
function Get-FNVHash {
    param(
        [string]$InputString
    )

    # Initial prime and offset basis chosen for 32-bit output
    # See https://en.wikipedia.org/wiki/Fowler–Noll–Vo_hash_function
    [uint32]$FNVPrime = 16777619
    [uint32]$offset = 2166136261

    # Convert the string to a byte array; you may want to change this
    # based on the input collation
    $bytes = [System.Text.Encoding]::UTF8.GetBytes($InputString)

    # Copy the offset basis as the initial hash value
    [uint32]$hash = $offset

    foreach ($octet in $bytes) {
        # XOR with the byte, multiply by the prime,
        # and reduce modulo the maximum output size
        $hash = $hash -bxor $octet
        $hash = $hash * $FNVPrime % [System.Math]::Pow(2,32)
    }
    return $hash
}
Now you can repeatably produce distinct integers from the input strings:
PS C:\> Get-FNVHash HG44X10999
1174154724
If the target API only accepts positive signed 32-bit integers, you can change the modulus to [System.Math]::Pow(2,31), doubling the chance of collisions (to approx. 1 in 4300 for 1000 distinct inputs).
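A sketch of that variant (Get-FNVHash31 is a hypothetical name; the body is the function above with only the modulus changed):
function Get-FNVHash31 {
    param(
        [string]$InputString
    )
    [uint32]$FNVPrime = 16777619
    [uint32]$offset = 2166136261
    $bytes = [System.Text.Encoding]::UTF8.GetBytes($InputString)
    [uint32]$hash = $offset
    foreach ($octet in $bytes) {
        $hash = $hash -bxor $octet
        # 2^31 instead of 2^32 keeps every result in the positive signed-int range
        $hash = $hash * $FNVPrime % [System.Math]::Pow(2,31)
    }
    return [int]$hash
}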
For further insight into this simple approach, see this page on FNV, and have a look at this article exploring short-string hashing.

Read a specific character from a cell array of strings

I have a cell array of dimensions 1x6, like this:
A = {'25_2.mat','25_3.mat','25_4.mat','25_5.mat','25_6.mat','25_7.mat'};
I want to read, for example from A{1}, the number after the '_', i.e. 2 in my example.
Using cellfun, strfind and str2double
out = cellfun(@(x) str2double(x(strfind(x,'_')+1:strfind(x,'.')-1)),A)
How does it work?
This code finds the index of the character one position after the occurrence of '_'; let's call it start_index. It then finds the index of the character one position before the occurrence of the '.' character; let's call it end_index. It then retrieves all the characters between start_index and end_index, and finally converts those characters to numbers using str2double.
Sample Input:
A = {'2545_23.mat','2_3.mat','250_4.mat','25_51.mat','25_6.mat','25_7.mat'};
Output:
>> out
out =
23 3 4 51 6 7
You can access the contents of a cell by using curly braces {...}. Once you have access to the contents, you can use indexing to access the elements of the string as you would with a normal array. For example:
test = {'25_2.mat', '25_3.mat', '25_4.mat', '25_5.mat', '25_6.mat', '25_7.mat'}
character = test{1}(4);
If your string length is variable, you can use strfind to find the index of the character you want.
Assuming the numbers are non-negative integers after the _ sign: use a regular expression with lookbehind, and then convert from string to number:
numbers = cellfun(@(x) str2num(x{1}), regexp(A, '(?<=\_)\d+', 'match'));

PARI: similar function to Integer.parseInt()

I want to convert the text "hello" to ASCII decimal in PARI/GP. After that, I will concatenate the values.
I initialize a Vecsmall("hello"), and then run a loop to concatenate the ASCII decimal values.
I want to multiply this concatenated value by certain values, but it is currently a string. In Java, there is Integer.parseInt() to convert a string to an int. I wonder if there is a similar function in PARI/GP?
v = Vecsmall("hello");
text = "";
for (i = 1, length(v), text = Str(text, v[i]));
\\ is there any function similar to Integer.parseInt(text) in PARI?
You can use eval
eval(text)
or else a combination of Vecsmall and fromdigits, which is faster:
fromdigits(apply(n->n-48, Vecsmall(text)))

xor between two numbers (after hex to binary conversion)

I don't know why there is an error in this code:
hex_str1 = '5'
bin_str1 = dec2bin(hex2dec(hex_str1))
hex_str2 = '4'
bin_str2 = dec2bin(hex2dec(hex_str2))
c=xor(bin_str1,bin_str2)
The value of c is not correct when I transform the hex to binary and use the xor function. But when I use arrays, the value of c is correct. That code is:
e=[1 1 1 0];
f=[1 0 1 0];
g=xor(e,f)
What is the mistake in my first code when XOR-ing the hex-to-binary values? Can anyone help me find the solution?
Your mistake is applying xor on two strings instead of actual numerical arrays.
For the xor command, logical "0"s are represented by actual zero elements. Any non-zero elements are interpreted as logical "1"s.
When you apply xor on two strings, the numerical value of each character (element) is its ASCII value. From xor's point of view, the zeroes in your string are not really zeroes, but simply non-zero values (being equal to the ASCII value of the character '0'), which are interpreted as logical "1"s. The bottom line is that in your example you're xor-ing 111b and 111b, and so the result is 0.
The solution is to convert your strings to logical arrays:
num1 = (bin_str1 == '1');
num2 = (bin_str2 == '1');
c = xor(num1, num2);
To convert the result back into a string (of a binary number), use this:
bin_str3 = sprintf('%d', c);
... and to a hexadecimal string, add this:
hex_str3 = dec2hex(bin2dec(bin_str3));
This is really helpful, and gives me the correct conversion when forming an HMAC value in MATLAB.
Note, however, that in MATLAB you cannot convert a binary string of more than 52 characters using the bin2dec() function, and similarly hex2dec() cannot take a hexadecimal string of more than 13 characters.