Write bytes to file, bytes shifted - PowerShell

I have my bytes stored as string values like this in the file D:\source.txt:
208
203
131
132
148
128
128
128
128
128
I just want to read them and store them in another file.
I am quite new to PowerShell, so I wrote a program like this:
$bytes = New-Object System.Collections.ArrayList
foreach($line in [System.IO.File]::ReadLines("D:\source.txt"))
{
    [void]$bytes.Add([System.Convert]::ToByte($line));
}
[System.IO.File]::WriteAllBytes("D:\target.zip",[Byte[]]$bytes.ToArray());
So from my understanding it should read each string value, convert it to a byte, store it in the ArrayList, convert the ArrayList to a byte array, and write that to the file.
And everything seems fine; even if I echo [Byte[]]$bytes.ToArray() I see the correct values.
But the resulting file is corrupted, and when I check it byte by byte I see the following values:
-48
-53
-125
-124
-108
-128
-128
-128
-128
-128
It seems like WriteAllBytes shifts my byte values by 128, but why and where?
I am not very experienced with PowerShell, and I can't find anything related in the documentation.
Can you suggest how I can correct this?
Thanks for any info.

Thanks, I actually found the problem. The cause of the corruption was an incorrect library method for converting from Java bytes (values from -128...127) to unsigned PowerShell bytes. In the hex editor I was looking at the signed int8 representation, which corresponds; when checked in PowerShell (as unsigned values) the bytes are shown correctly. Thanks for the help.
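For reference, the values seen in the hex viewer are just the signed (int8) view of the same bytes. Here is a small PowerShell sketch of the correspondence (the 256 offset is the standard signed/unsigned wrap, not something from the original program):
# A byte written as 208 (0xD0) reads back as -48 when interpreted as a signed int8:
# signed = unsigned - 256 for values >= 128, e.g. 208 -> -48, 203 -> -53.
foreach ($unsigned in 208, 203, 131, 132, 148, 128) {
    $signed = if ($unsigned -ge 128) { $unsigned - 256 } else { $unsigned }
    "{0,3} (0x{0:X2}) -> {1,4}" -f $unsigned, $signed
}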

Related

Python: send UDP message

I have the following challenge sending a UDP packet:
The packet is 40 bytes long, where all fields are constant except a counter and a checksum.
header='\xaf\x18\x25\x25'
data = 'ABCDEFGHIGKLMNOPQRTSUVXYZ0123456'
i=1
#do some checksum calculation and store result into the checksum variable
message=header + chr(i) + data + chr(checksum >>8) + chr(checksum & 0xFF)
sock.sendto(message.encode('utf-8'), (DST_IP, int(DST_PORT)))
However, looking at Wireshark, I can see that the message is 43 bytes: there is a 0xC2 at the first location instead of the actual first header byte, and 0xC3 and 0xC2 before the checksum's MSB and LSB (which are the 3 extra bytes).
Any suggestion what the issue is and how to fix it?
Changing the encoding solved the issue: UTF-8 encodes every code point above U+007F as a two-byte sequence (hence the extra 0xC2/0xC3 lead bytes), while an 8-bit encoding such as 'charmap' keeps each code point 0..255 as a single byte.
sock.sendto(message.encode('charmap'), (DST_IP, int(DST_PORT)))
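The effect is easy to reproduce with the .NET encoding classes as well; for example, from PowerShell (a quick sketch, with 0xAF standing in for any byte value above 0x7F):
# UTF-8 turns code points above U+007F into two bytes (0xC2/0xC3 lead byte + continuation),
# while Latin-1 keeps one byte per code point.
[System.Text.Encoding]::UTF8.GetBytes([string][char]0xAF)                        # 194 175
[System.Text.Encoding]::GetEncoding('ISO-8859-1').GetBytes([string][char]0xAF)   # 175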

PowerShell: Translate Octet String (SNMP) Output to Hex (MAC address)

So I will briefly explain the environment:
I need to work on a Win2k8 Server with PowerShell 4.0.
I want to get some information using SNMP (the printer type and the printer MAC address):
$SNMP = new-object -ComObject olePrn.OleSNMP
$SNMP.open($P_IP,"public",2,3000)
$PType = $SNMP.get(".1.3.6.1.2.1.25.3.2.1.3.1")
$PMac = $SNMP.get(".1.3.6.1.2.1.2.2.1.6.2")
echo $PType
echo $PMac
So, the Output looks like this (as an example):
$PType = HP Officejet Pro 251dw Printer
$PMac =  ÓÁÔ*
So, first of all I started to check whether I used the right OID, using the command line tool from SnmpSoft Company. There, the output looked fine:
OID=.1.3.6.1.2.1.2.2.1.6.2
Type=OctetString
Value= A0 D3 C1 D4 2A 95 ....*.
Alright, so I checked what datatype this OID value has: it's an octet string. Next, I searched for ways to transform this octet string value into some readable hex, so far without any progress. I tried to transform it into bytes this way:
$bytes = [System.Text.Encoding]::Unicode.GetBytes($PMac)
[System.Text.Encoding]::ASCII.GetString($bytes)
echo $bytes
But the output just confuses me:
160
0
211
0
193
0
212
0
42
0
34
32
I tried to interpret this output without any success. Google can't help me anymore because by now I don't even understand how or what to search for.
So here I am, hoping to get some help or advice on how to change the output of this query into something readable.
It's an encoding problem.
1.3.6.1.2.1.2.2.1.6 is the interface physical address, so I would expect the value to be the MAC address of the interface. Your command line result begins with A0-D3-C1, which is an HP MAC address range, so it's consistent. Your printer's MAC address must be A0 D3 C1 D4 2A 95? You didn't state that, so you're leaving me to guess.
I suspect that $PMac is supposed to be a [byte[]] (byte array), but the output is converting it to a string and PowerShell's output system is interpreting it as characters.
Example:
PS C:\> [byte[]]$bytes = 0xa0, 0xd3, 0xc1, 0xd4, 0x2a, 0x95
PS C:\> [System.Text.Encoding]::Default.GetString($bytes)
 ÓÁÔ*•
You probably need to do something like this:
$MAC = [System.Text.Encoding]::Default.GetBytes($PMac) | ForEach-Object {
    $_.ToString('X2')
}
$MAC = $MAC -join '-'
You may want to use [System.Text.Encoding]::ASCII.GetBytes($PMac) instead, since raw SNMP is supposed to use ASCII encoding. I've no idea what olePrn.OleSNMP uses.
You might also look at one of the SNMP PowerShell modules on the PowerShell Gallery. That will be much easier than dealing with COM objects in PowerShell.
I also came across this page on #SNMP's handling of OCTET STRING. #SNMP is a .NET SNMP library, and OCTET STRING appears to be the underlying type for this OID. The page describes some of the difficulties of working with this particular object type in .NET. You could also use this library to develop your own cmdlets in PowerShell; it's available through NuGet.
The output you got is very nearly your expected MAC address
160 0 211 0 193 0 212 0 42 0 34 32
160 is decimal for hexadecimal 0xA0
211 is 0xD3
193 is 0xC1
The additional zeros between the bytes were added by the Unicode.GetBytes call (UTF-16 encodes each character as two bytes), which I don't think you need to use here.
I suspect you'll need to read $PMac as an array of bytes, then do a hexadecimal string conversion for each byte. This is probably not the most elegant, but may get the job done:
[byte[]] $arrayOfBytes = @(160,211,193)
[string] $hexString = ''
foreach ($b in $arrayOfBytes) {
    $hexString += [convert]::ToString($b,16)
    $hexString += ' '
}
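Putting the pieces together, here is a minimal end-to-end sketch (assuming olePrn.OleSNMP returns the octet string as a one-character-per-byte string that survives the round trip through the system default ANSI code page, which matches the behaviour shown above; $P_IP is assumed to hold the printer's IP):
# Hypothetical end-to-end sketch: query the interface physical address via SNMP
# and format it as a hex MAC string.
$SNMP = New-Object -ComObject olePrn.OleSNMP
$SNMP.open($P_IP, 'public', 2, 3000)
$PMac = $SNMP.get('.1.3.6.1.2.1.2.2.1.6.2')
$macBytes = [System.Text.Encoding]::Default.GetBytes($PMac)
$mac = ($macBytes | ForEach-Object { $_.ToString('X2') }) -join '-'
$mac   # e.g. A0-D3-C1-D4-2A-95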

Different byte length of base64 decryption

I am trying to decode this base64 string using VB.NET
System.Convert.FromBase64String("AgBgVvBR0apvj88GZFp/0ontNtFIcsJoVTachX30kURDlK010Mv9/yv1yLXXr4mqII5z2Hzx9FlGxA==")
And it returns 58 bytes. If I convert from Base64 on any online base64 decode program I get 32 bytes..??
What am I doing wrong?
Your base64 string is 80 characters. Removing the two = padding characters, you get 78 base64 characters, each representing 6 bits.
The decoded length is therefore 78*6/8 = 58.5, rounded down to 58 bytes, so your code is producing the correct output.
The online tools you're using are probably trying to decode the result as UTF-8 or printable ASCII characters (which your input is not); that's why you're seeing fewer bytes in the output.
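For a quick sanity check outside VB.NET, the same System.Convert API can be called from PowerShell (a small sketch using the string from the question):
# Decode the base64 string with the same .NET API and verify the byte count.
$b64 = "AgBgVvBR0apvj88GZFp/0ontNtFIcsJoVTachX30kURDlK010Mv9/yv1yLXXr4mqII5z2Hzx9FlGxA=="
[System.Convert]::FromBase64String($b64).Length      # 58
[math]::Floor($b64.TrimEnd('=').Length * 6 / 8)      # 78 * 6 / 8 = 58.5 -> 58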

Scala- How can I read some specific bytes from a file?

I'd like to encrypt a text (about 1 MB) and I'm using the maximum RSA key length (4096 bits). However, the key seems too short. As I googled, I learned that the maximum size of text that RSA can encrypt is 8 bytes shorter than the length of the key. Thus, I can only encrypt 501 bytes at a time. So I decided to divide my text into 2093 arrays (1024*1024/501 ≈ 2092.1). The question is: how can I pour the first 501 bytes into the first array in Scala? Can anyone help me out with this?
I can't comment on whether your cryptographic approach is okay. (I don't know, but would rely on libraries written and vetted by more knowledgeable cryptographers if I were in your shoes. I'm not sure why you chose 501, which is 11 bytes, not 8, shorter than 512.)
But chunking your arrays into fixed-size blocks should be easy. Just use the grouped function of Array.
val text : String = ???
val bytes = text.getBytes( scala.io.Codec.UTF8.charSet ) // lots of ways to do this
val blocks = bytes.grouped( 501 )
blocks will be an Iterator[Array[Byte]], each array 501 bytes long except for the last (which may be shorter).

Can someone explain Encoding.Unicode.GetBytes("hello") for me?

My code:
string input1;
input1 = Console.ReadLine();
Console.WriteLine("byte output");
byte[] bInput1 = Encoding.Unicode.GetBytes(input1);
for (int x = 0; x < bInput1.Length; x++)
Console.WriteLine("{0} = {1}", x, bInput1[x]);
outputs:
104
0
101
0
108
0
108
0
111
0
for the input "hello"
Is there a reference to the character map where I can make sense of this?
You should read "The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)" at http://www.joelonsoftware.com/articles/Unicode.html
You can find a list of all Unicode characters at http://www.unicode.org but don't expect to be able to read the files there without learning a lot about text encoding issues.
At http://www.unicode.org/charts/ you can find all the Unicode code charts. http://www.unicode.org/charts/PDF/U0000.pdf shows that the code point for 'h' is U+0068. (Another great tool for viewing this data is BabelMap.)
The exact details of UTF-16 encoding can be found at http://unicode.org/faq/utf_bom.html#6 and http://www.ietf.org/rfc/rfc2781.txt. In short, U+0068 is encoded (in UTF-16LE) as 0x68 0x00. In decimal, this is the first two bytes you see: 104 0.
The other characters are encoded similarly.
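If you want to reproduce the mapping interactively, the same .NET encoding can be called from PowerShell (a quick sketch, not part of the original program):
# Encoding.Unicode is UTF-16LE: each character of "hello" becomes two bytes,
# the low byte of its code point first, then the high byte (0 for these letters).
[System.Text.Encoding]::Unicode.GetBytes("hello")   # 104 0 101 0 108 0 108 0 111 0
[int][char]'h'                                      # 104 = 0x68 (U+0068)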
Finally, a great reference (when trying to understand the various Unicode specifications), apart from the Unicode Standard itself, is the Unicode Glossary.