Checking the 13th bit (shadow setting) from a registry item (REG_BINARY)? - PowerShell

Using the following PowerShell code:
$RegConnect = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey([Microsoft.Win32.RegistryHive]"CurrentUser", "$env:COMPUTERNAME")
$RegCursors = $RegConnect.OpenSubKey("Control Panel\Desktop", $true)
$myVal = $RegCursors.GetValue("UserPreferencesMask")
write-output $myVal
$RegCursors.Close()
$RegConnect.Close()
It returns:
190
30
7
128
18
1
0
0
From the MS help on UserPreferencesMask, the bit I'm after is the 13th, Cursor shadow.
Bit 13: Cursor shadow (default: 1)
Cursor shadow is enabled. This effect only appears if the system has a color depth of more than 256 colors.
How can I extract the boolean for the current mouse shadow from this?
Here are the values in the on and off states.
on = "UserPreferencesMask"=hex:be,3e,07,80,12,01,00,00
off = "UserPreferencesMask"=hex:be,1e,07,80,12,01,00,00

It looks like you're adding 32 or 0x20 to the second byte to turn it on:
$myval[1] += 32 # on
$myval[1] -= 32 # off
Bitwise, "or" to set, "and" with "bit complement (not)" to unset.
0x1e -bor 0x20 # on
62
0x3e -band -bnot 0x20 # off
30
Maybe you could make a flags enum for all the settings, but you'd have to convert the byte array to one large int.
EDIT: Oh, if you just want to check that a bit is set:
$shadowbit = 0x20
if (0x3e -band $shadowbit ) { 'yep' } else { 'nope' } # 32
yep
if (0x1e -band $shadowbit ) { 'yep' } else { 'nope' } # 0
nope
See also How do you set, clear, and toggle a single bit?
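For reference, the set/clear/toggle operations from that question map directly onto PowerShell's operators; a quick sketch using the 0x20 shadow bit from above:
$mask = 0x20                    # bit 5 of byte 1, i.e. absolute bit 13
$b = 0x1e
$b = $b -bor $mask              # set    -> 0x3e
$b = $b -band (-bnot $mask)     # clear  -> 0x1e
$b = $b -bxor $mask             # toggle -> 0x3e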
EDIT:
I went a little overboard. Having this in preparation:
[Flags()] enum UserPreferencesMask {
ActiveWindowTracking = 0x1
MenuAnimation = 0x2
ComboBoxAnimation = 0x4
ListBoxSmoothScrolling = 0x8
GradientCaptions = 0x10
KeyboardCues = 0x20
ActiveWindowTrackingZOrder = 0x40
HotTracking = 0x80
Reserved8 = 0x100
MenuFade = 0x200
SelectionFade = 0x400
ToolTipAnimation = 0x800
ToolTipFade = 0x1000
CursorShadow = 0x2000 # 13
Reserved14 = 0x4000
Reserved15 = 0x8000
Reserved16 = 0x10000
Reserved17 = 0x20000
Reserved18 = 0x40000
Reserved19 = 0x80000
Reserved20 = 0x100000
Reserved21 = 0x200000
Reserved22 = 0x400000
Reserved23 = 0x800000
Reserved24 = 0x1000000
Reserved25 = 0x2000000
Reserved26 = 0x4000000
Reserved27 = 0x8000000
Reserved28 = 0x10000000
Reserved29 = 0x20000000
Reserved30 = 0x40000000
UIEffects = 0x80000000 # 31
}
You can do:
$myVal = get-itemproperty 'HKCU:\Control Panel\Desktop' UserPreferencesMask |
% UserPreferencesMask
$b = [bitconverter]::ToInt32($myVal,0)
'0x{0:x}' -f $b
0x80073e9e
[UserPreferencesMask]$b
MenuAnimation, ComboBoxAnimation, ListBoxSmoothScrolling,
GradientCaptions, HotTracking, MenuFade, SelectionFade,
ToolTipAnimation, ToolTipFade, CursorShadow, Reserved16, Reserved17,
Reserved18, UIEffects
[UserPreferencesMask]$b -band 'CursorShadow'
CursorShadow
if ([UserPreferencesMask]$b -band 'CursorShadow') { 'yes' }
yes
Note that 3 undocumented reserved bits are already in use in my Windows 10. This is with "Show shadows under mouse pointer" checked under Performance Options (Advanced system settings) in the Control Panel.
Or, keeping it simple without the enum:
$b = [bitconverter]::ToInt32($myVal,0) # 4 bytes from reg_binary to int
if ($b -band [math]::pow(2,13)) { 'cursor shadow' }
I've noticed that the registry entry is actually 8 bytes long, but bringing in all 8 bytes doesn't change the answer, even if some of those extra bits are set in Windows 10.
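For example, repeating the check over all 8 bytes of the question's "on" value (a sketch; ToInt64 instead of ToInt32):
$on  = [byte[]](0xbe,0x3e,0x07,0x80,0x12,0x01,0x00,0x00)
$b64 = [bitconverter]::ToInt64($on, 0)
'0x{0:x}' -f $b64                        # 0x11280073ebe - the extra bytes only add bits above 31
[bool]($b64 -band [math]::pow(2,13))     # True, same answer as the 4-byte version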

To find the specific bit:
You need to calculate the byte index (starting from 0) by dividing the absolute bit index by 8:
[math]::floor(13 / 8) → byte 1 for the absolute bit 13
*Note: as @mklement0 pointed out, you can't use [Int] for this as it doesn't round down, see: division and rounding
Then calculate the relative bit index in that byte by finding the remainder (modulo) of the division:
$BitIndex - 8 * $ByteIndex → 13 - (8 * 1) = 5
*Note: I am not using the arithmetic remainder operator (%) because of the modulo operation with negative numbers issue, and I used [math]::floor rather than [math]::truncate for rounding. This way the function also supports negative bit indices, with -1 referring to the most significant bit (see the extra test below).
Then create a byte mask from the relative bit:
[Byte][math]::pow(2, <relative bit>) → 25 = 32 (20h)
And finally mask (-bAnd) the concerned byte:
[Bool]($ByteArray[$ByteIndex] -bAnd $ByteMask) → 62 bAnd 32 = 32 (true), 30 bAnd 32 = 0 (false)
To make this clearer, I have put this in a Test-Bit function:
Function Test-Bit([Int]$BitIndex, [Byte[]]$ByteArray) {
    $ByteIndex = [math]::floor($BitIndex / 8)                        # which byte holds the bit
    $ByteMask = [Byte][math]::pow(2, ($BitIndex - 8 * $ByteIndex))   # mask for that bit within the byte
    [Bool]($ByteArray[$ByteIndex] -bAnd $ByteMask)                   # nonzero means the bit is set
}
*Note: the Test-Bit function is based on little-endian byte order (see: Endianness)
Test:
Test-Bit 13 ([Byte[]](0xbe, 0x3e, 0x07, 0x80, 0x12, 0x01, 0x00, 0x00)) # True
Test-Bit 13 ([Byte[]](0xbe, 0x1e, 0x07, 0x80, 0x12, 0x01, 0x00, 0x00)) # False
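And a quick check of the negative-index support mentioned in the note above (using a hypothetical two-byte array, where bit -1 is the most significant bit of the last byte):
Test-Bit -1 ([Byte[]](0x00, 0x80)) # True
Test-Bit -1 ([Byte[]](0x00, 0x7f)) # False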
Specific to your question:
$RegConnect = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey([Microsoft.Win32.RegistryHive]"CurrentUser", "$env:COMPUTERNAME")
$RegCursors = $RegConnect.OpenSubKey("Control Panel\Desktop", $true)
$MyVal = $RegCursors.GetValue("UserPreferencesMask")
$State = Test-Bit 13 $MyVal
If ($State) {
    # Cursor shadow is enabled
} Else {
    # Cursor shadow is disabled
}
$RegCursors.Close()
$RegConnect.Close()

Related

How to convert from 8-byte hex to real

I'm currently working on a file converter. I've never done anything using binary file reading before. There are many converters available for this file type (GDSII to text), but none in Swift that I can find.
I've gotten all the other data types working (2-byte int, 4-byte int), but I'm really struggling with the real data type.
From a spec document:
http://www.cnf.cornell.edu/cnf_spie9.html
Real numbers are not represented in IEEE format. A floating point number is made up of three parts: the sign, the exponent, and the mantissa. The value of the number is defined to be (mantissa) × 16^(exponent). If "S" is the sign bit, "E" is exponent bits, and "M" are mantissa bits then an 8-byte real number has the format
SEEEEEEE MMMMMMMM MMMMMMMM MMMMMMMM
MMMMMMMM MMMMMMMM MMMMMMMM MMMMMMMM
The exponent is in "excess 64" notation; that is, the 7-bit field shows a number that is 64 greater than the actual exponent. The mantissa is always a positive fraction greater than or equal to 1/16 and less than 1. For an 8-byte real, the mantissa is in bits 8 to 63. The decimal point of the binary mantissa is just to the left of bit 8. Bit 8 represents the value 1/2, bit 9 represents 1/4, and so on.
I've tried implementing something similar to what I've seen in Python or Perl, but each language has features that Swift doesn't have, and the type conversions get very confusing.
This is one method I tried, based on Perl. Doesn't seem to get the right value. Bitwise math is new to me.
var sgn = 1.0
let andSgn = 0x8000000000000000 & bytes8_test
if( andSgn > 0) { sgn = -1.0 }
// var sgn = -1 if 0x8000000000000000 & num else 1
let manta = bytes8_test & 0x00ffffffffffffff
let exp = (bytes8_test >> 56) & 0x7f
let powBase = sgn * Double(manta)
let expPow = (4.0 * (Double(exp) - 64.0) - 56.0)
var testReal = pow( powBase , expPow )
Another I tried:
let bitArrayDecode = decodeBitArray(bitArray: bitArray)
let valueArray = calcValueOfArray(bitArray: bitArrayDecode)
var exponent:Int16
//calculate exponent
if(negative){
exponent = valueArray - 192
} else {
exponent = valueArray - 64
}
//calculate mantessa
var mantissa = 0.0
//sgn = -1 if 0x8000000000000000 & num else 1
//mant = num & 0x00ffffffffffffff
//exp = (num >> 56) & 0x7f
//return math.ldexp(sgn * mant, 4 * (exp - 64) - 56)
for index in 0...7 {
//let mantaByte = bytes8_1st[index]
//mantissa += Double(mantaByte) / pow(256.0, Double(index))
let bit = pow(2.0, Double(7-index))
let scaleBit = pow(2.0, Double( index ))
var mantab = (8.0 * Double( bytes8_1st[1] & UInt8(bit)))/(bit*scaleBit)
mantissa = mantissa + mantab
mantab = (8.0 * Double( bytes8_1st[2] & UInt8(bit)))/(256.0 * bit * scaleBit)
mantissa = mantissa + mantab
mantab = (8.0 * Double( bytes8_1st[3] & UInt8(bit)))/(256.0 * bit * scaleBit)
mantissa = mantissa + mantab
}
let real = mantissa * pow(16.0, Double(exponent))
UPDATE:
The following part seems to work for the exponent. It returns -9 for the data set I'm working with, which is what I expect.
var exp = Int16((bytes8 >> 56) & 0x7f)
exp = exp - 65 //change from excess 64
print(exp)
var sgnVal = 0x8000000000000000 & bytes8
var sgn = 1.0
if(sgnVal == 1){
sgn = -1.0
}
For the mantissa, though, I can't get the calculation correct somehow.
The data set:
3d 68 db 8b ac 71 0c b4
38 6d f3 7f 67 5e f6 ec
I think it should return 1e-9 for exponent and 0.0001
The closest I've gotten is real Double 0.0000000000034907316148746757
var bytes7 = Array<UInt8>()
for (index, by) in data.enumerated(){
if(index < 4) {
bytes7.append(by[0])
bytes7.append(by[1])
}
}
for index in 0...7 {
mantissa += Double(bytes7[index]) / (pow(256.0, Double(index) + 1.0 ))
}
var real = mantissa * pow(16.0, Double(exp));
print(mantissa)
END OF UPDATE.
Also doesn't seem to produce the correct values. This one was based on a C file.
If anyone can help me out with an English explanation of what the spec means, or any pointers on what to do I would really appreciate it.
Thanks!
According to the doc, this code returns the 8-byte Real data as Double.
extension Data {
    func readUInt64BE(_ offset: Int) -> UInt64 {
        var value: UInt64 = 0
        _ = Swift.withUnsafeMutableBytes(of: &value) {bytes in
            copyBytes(to: bytes, from: offset..<offset+8)
        }
        return value.bigEndian
    }
    func readReal64(_ offset: Int) -> Double {
        let bitPattern = readUInt64BE(offset)
        let sign: FloatingPointSign = (bitPattern & 0x80000000_00000000) != 0 ? .minus: .plus
        let exponent = (Int((bitPattern >> 56) & 0x00000000_0000007F)-64) * 4 - 56
        let significand = Double(bitPattern & 0x00FFFFFF_FFFFFFFF)
        let result = Double(sign: sign, exponent: exponent, significand: significand)
        return result
    }
}
Usage:
//Two 8-byte Real data taken from the example in the doc
let data = Data([
//1.0000000000000E-03
0x3e, 0x41, 0x89, 0x37, 0x4b, 0xc6, 0xa7, 0xef,
//1.0000000000000E-09
0x39, 0x44, 0xb8, 0x2f, 0xa0, 0x9b, 0x5a, 0x54,
])
let real1 = data.readReal64(0)
let real2 = data.readReal64(8)
print(real1, real2) //->0.001 1e-09
Another example from "UPDATE":
//0.0001 in "UPDATE"
let data = Data([0x3d, 0x68, 0xdb, 0x8b, 0xac, 0x71, 0x0c, 0xb4, 0x38, 0x6d, 0xf3, 0x7f, 0x67, 0x5e, 0xf6, 0xec])
let real = data.readReal64(0)
print(real) //->0.0001
Please remember that Double has only a 52-bit significand (mantissa), so this code loses some significant bits of the original 8-byte Real. I'm not sure whether that is an issue for you or not.
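To see the excess-64 arithmetic spelled out on the first documented example (3e 41 89 37 4b c6 a7 ef = 1.0E-03), here is a rough sketch, written in PowerShell purely as a calculator; the Swift code above does the same thing with integer bit operations:
$bytes = [byte[]](0x3e, 0x41, 0x89, 0x37, 0x4b, 0xc6, 0xa7, 0xef)
$sign  = if ($bytes[0] -band 0x80) { -1.0 } else { 1.0 }   # top bit of byte 0
$exp   = ($bytes[0] -band 0x7f) - 64                       # excess-64: 0x3e - 64 = -2
$mant  = 0.0
for ($i = 1; $i -lt 8; $i++) {
    $mant += $bytes[$i] / [math]::Pow(256, $i)             # bytes 1-7 form a base-256 fraction
}
$sign * $mant * [math]::Pow(16, $exp)                      # -> 0.001 (0.256 * 16^-2)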

Convert hex string to a base36 string

In AWS each instance has an ID that looks something like this: i-1234567890abcdef; the last 16 characters are a hexadecimal number. I would like to treat the "1234567890abcdef" part as a hex number and convert it to base36, so a-z0-9. This way I can use it as the computer's name and not go over the 15-character limit. How is that done in PowerShell?
Converting the input from hex is easy enough: skip the first two characters, and convert to UInt64:
[convert]::ToUInt64($text.Substring(2), 16)
but PowerShell (.NET) has no built-in way to convert to base 36. You'll need to implement it yourself, e.g. this code taken from https://ss64.com/ps/syntax-base36.html and adjusted for larger numbers:
function convertTo-Base36
{
    [CmdletBinding()]
    param (
        [parameter(ValueFromPipeline=$true, HelpMessage="Integer number to convert")]
        [uint64]$DecimalNumber
    )
    $alphabet = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    do
    {
        $remainder = ($DecimalNumber % 36)
        $char = $alphabet.Substring($remainder, 1)
        $base36Num = "$char$base36Num"
        $DecimalNumber = ($DecimalNumber - $remainder) / 36
    }
    while ($DecimalNumber -gt 0)
    $base36Num
}
Then:
$x='i-1234567890abcdef'
$hexPart = $x.Substring(2)
$decimal = [convert]::ToUInt64($hexPart, 16)
convertTo-Base36 $decimal
# -> 9YS742MX86WF
or:
[convert]::ToUInt64('i-1234567890abcdef'.Substring(2), 16) | Convertto-Base36

perl: build big/little endian bitfields and output them with a single pack

I'm building binary data in Perl.
This binary data is based on a C structure, and is used on 32- and 64-bit, big- and little-endian systems.
The difficult part is the bitfield in the FORMAT structure. It is laid out differently in memory on little- and big-endian architectures.
I am currently making the bitfields like this:
struct FORMAT
{
    void * X;
    void * Y;
    void * Z;
    unsigned int size : 26;
    unsigned int type : 6;
    /* <4 bytes of padding on 64-bit targets> */
};
my $size = 0x3C;
my $type = 0x05;
=> big endian
print pack("L>", $size << 6) | pack("L>", $type); # 00 00 0f 05
=> little endian
print pack("L<", $size) | pack("L<", $type << 26); # 3c 00 00 14
But I would like to have the above in a single pack format (and thus not call pack two times, OR the results, and then print them).
Eventually I want to print an entire FORMAT record in one shot.
pack("Q>3L>L>", X, Y, Z, ???, 0) #64bit big endian
I have about 500k FORMAT records that I need to write out, and calling pack() multiple times for each record is too costly.
You could use XS. This will handle endianness and compiler-specific alignment (padding) for you.
use strict;
use warnings;
use Inline C => <<'__EOC__';
typedef struct {
    void* X;
    void* Y;
    void* Z;
    unsigned int size: 26;
    unsigned int type: 6;
} FORMAT;
SV* pack_FORMAT(UV X, UV Y, UV Z, unsigned int size, unsigned int type) {
    FORMAT format;
    format.X = INT2PTR(void*, X);
    format.Y = INT2PTR(void*, Y);
    format.Z = INT2PTR(void*, Z);
    format.size = size;
    format.type = type;
    return newSVpvn((char *)&format, sizeof(format));
}
}
__EOC__
my $size = 0x3C;
my $type = 0x05;
my $packed = pack_FORMAT(0, 0, 0, $size, $type);
printf("%v02X\n", $packed);
That said
pack("L>", $size << 6) | pack("L>", $type)
pack("L<", $size) | pack("L<", $type << 26)
can be written as
pack("L>", ($size << 6) | $type)
pack("L<", $size | ($type << 26))

Expression for setting lowest n bits that works even when n equals word size

NB: the purpose of this question is to understand Perl's bitwise operators better. I know of ways to compute the number U described below.
Let $i be a nonnegative integer. I'm looking for a simple expression E<$i>[1] that will evaluate to the unsigned int U, whose $i lowest bits are all 1's, and whose remaining bits are all 0's. E.g. E<8> should be 255. In particular, if $i equals the machine's word size (W), E<$i> should equal ~0.[2]
The expressions (1 << $i) - 1 and ~(~0 << $i) both do the right thing, except when $i equals W, in which case they both take on the value 0, rather than ~0.
I'm looking for a way to do this that does not require computing W first.
EDIT: OK, I thought of an ugly, plodding solution:
$i < 1 ? 0 : do { my $j = 1 << $i - 1; $j < $j << 1 ? ( $j << 1 ) - 1 : ~0 }
or
$i < 1 ? 0 : ( 1 << ( $i - 1 ) ) < ( 1 << $i ) ? ( 1 << $i ) - 1 : ~0
(Also impractical, of course.)
[1] I'm using the strange notation E<$i> as shorthand for "expression based on $i".
[2] I don't have a strong preference at the moment for what E<$i> should evaluate to when $i is strictly greater than W.
On systems where eval($Config{nv_overflows_integers_at}) >= 2**($Config{ptrsize}*8) (which excludes ones that use double-precision floats and 64-bit ints),
2**$i - 1
On all systems,
( int(2**$i) - 1 )|0
When i<W, int will convert the NV into an IV/UV, allowing the subtraction to work on systems with the precision of NVs is less than the size of UVs. |0 has no effect in this case.
When i≥W, int has no effect, so the subtraction has no effect. |0 therefore overflows, in which case Perl returns the largest integer.
I don't know how reliable that |0 behaviour is. It could be compiler-specific. Don't use this!
use Config qw( %Config );
$i >= $Config{uvsize}*8 ? ~0 : ~(~0 << $i)
Technically, the word size is looked up, not computed.
Fun challenge!
use Devel::Peek qw[Dump];
for my $n (8, 16, 32, 64) {
Dump(~(((1 << ($n - 1)) << 1) - 1) ^ ~0);
}
Output:
SV = IV(0x7ff60b835508) at 0x7ff60b835518
REFCNT = 1
FLAGS = (PADTMP,IOK,pIOK)
IV = 255
SV = IV(0x7ff60b835508) at 0x7ff60b835518
REFCNT = 1
FLAGS = (PADTMP,IOK,pIOK)
IV = 65535
SV = IV(0x7ff60b835508) at 0x7ff60b835518
REFCNT = 1
FLAGS = (PADTMP,IOK,pIOK)
IV = 4294967295
SV = IV(0x7ff60b835508) at 0x7ff60b835518
REFCNT = 1
FLAGS = (PADTMP,IOK,pIOK,IsUV)
UV = 18446744073709551615
Perl compiled with:
ivtype='long', ivsize=8, nvtype='double', nvsize=8
The documentation on the shift operators in perlop has an answer to your problem: use bigint;.
From the documentation:
Note that both << and >> in Perl are implemented directly using << and >> in C. If use integer (see Integer Arithmetic) is in force then signed C integers are used, else unsigned C integers are used. Either way, the implementation isn't going to generate results larger than the size of the integer type Perl was built with (32 bits or 64 bits).
The result of overflowing the range of the integers is undefined because it is undefined also in C. In other words, using 32-bit integers, 1 << 32 is undefined. Shifting by a negative number of bits is also undefined.
If you get tired of being subject to your platform's native integers, the use bigint pragma neatly sidesteps the issue altogether:
print 20 << 20; # 20971520
print 20 << 40; # 5120 on 32-bit machines,
# 21990232555520 on 64-bit machines
use bigint;
print 20 << 100; # 25353012004564588029934064107520

How many bits do I need to store A*B + C?

I was wondering about this:
If A and B are 16-bit numbers and C is 8-bit, how many bits would I need to store the result of A*B + C? 32 or 33?
And what if C were a 16-bit number? What then?
I would appreciate if I got answers with an explanation of the hows and whys.
Why don't you just take the maximum value for each register, and check the result?
If all registers are unsigned:
0xFFFF * 0xFFFF + 0xFF = 0xFFFE0100 // 32 bits are enough
0xFFFF * 0xFFFF + 0xFFFF = 0xFFFF0000 // 32 bits are enough
If all registers are signed, the largest-magnitude operands are 0x8000 = -32768, and 0x8000 * 0x8000 = 0x40000000 is positive (negative * negative = positive). Adding register C changes that only slightly, so the result is smaller than in the unsigned case, but you would still require 32 bits in order to store it.
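If you want to double-check those worst cases, the arithmetic is easy to reproduce (a throwaway sketch, using PowerShell simply as a 64-bit calculator):
'0x{0:X}' -f ([long]0xFFFF * 0xFFFF + 0xFF)     # 0xFFFE0100 - fits in 32 bits
'0x{0:X}' -f ([long]0xFFFF * 0xFFFF + 0xFFFF)   # 0xFFFF0000 - fits in 32 bits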