I have already seen this question and answer: Use WinDbg to Write Contents of Managed Byte[] to File. However, the answer given there writes all of the object's bytes to the file (method table pointer, array length and the array content), while I want to write only the array content to the file.
For example, I created a byte array of length 8192:
var bytes = new Byte[8192];
and examined it in WinDbg:
0:034> !do 0x0143fd1c
Name: System.Byte[]
MethodTable: 5ce54944
EEClass: 5cb8af1c
Size: 8204(0x200c) bytes
Array: Rank 1, Number of elements 8192, Type Byte
Element Type:System.Byte
Content: .0.................:.i......$...,x"!.a_.h#......66..vx.4...P.R?...M
Fields:
None
0:034> dd 0x0143fd1c
0143fd1c 5ce54944 00002000 0a0d300a 16460a0d
0143fd2c 957bd993 1f92335c 79a2d058 72455ef6
0143fd3c cc16c7b1 05b18e14 5b1df595 0fb5dbd8
0143fd4c 629a16c6 0edb5c9a 6ede4110 5d5da54e
0143fd5c 4638143a efcad6db 060935f1 a9a48285
0143fd6c e414cff0 8aeaae92 f169b93a f80bd6de
0143fd7c 9a9824d1 22782ccd 5f610c21 0f2368b4
0143fd8c ae09d410 083636c3 0b787616 101ab234
0:034> !da 0143fd1c
Name: System.Byte[]
MethodTable: 5ce54944
EEClass: 5cb8af1c
Size: 8204(0x200c) bytes
Array: Rank 1, Number of elements 8192, Type Byte
Element Methodtable: 5ce525ec
[0] 0143fd24
[1] 0143fd25
.......
So, how do I specify the start offset and output length in the .writemem command? Thanks.
!da gives you the answer.
[0] 0143fd24 <-- Address of first byte here.
Take the address of the first byte and pass it to .writemem along with a file name.
.writemem C:\somefile 143fd24 L0n8192
This command says write to C:\somefile, data starting at 143fd24, continuing for decimal 8192 bytes.
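For what it's worth, the dd output above also shows where that address comes from: on 32-bit .NET the array object starts with a 4-byte method table pointer (5ce54944) followed by a 4-byte length (00002000 = 8192), so the element data begins at the object address + 8. A quick sketch of the arithmetic (Python here, purely illustrative):
object_address = 0x0143FD1C          # the address passed to !do
data_start = object_address + 4 + 4  # skip method table pointer and length field
print(hex(data_start))               # 0x143fd24, matching !da's [0] entry
# hence: .writemem C:\somefile 143fd24 L0n8192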
Is there any reason why when I run the following
var name = "A"
withUnsafeBytes(of: &name, { bytes in
print (bytes)
for byte in bytes.enumerated() {
print (byte.element, byte.offset)
}
})
The last byte is 255?
I expected the bytes to just contain 65 as that is the ASCII code!
That is, byte 0 is 65 (as expected) and byte 15 is 255 (all the rest are zeroed)
Why is byte 15 255?
struct String is an (opaque) structure containing pointers to the actual character storage. Short strings are stored directly in those pointers, which is why in your particular case the byte 65 is printed first.
If you run your code with
var name = "A really looooong string"
then you'll notice that there is no direct connection between the output of the program and the characters of the string.
If the intention is to enumerate the bytes of the UTF-8 representation of the string then
for byte in name.utf8 { print(byte) }
is the correct way.
I understand that you have a hex string and perform SHA256 on it twice and then byte-swap the final hex string. The goal of this code is to find a Merkle Root by concatenating two transactions. I would like to understand what's going on in the background a bit more. What exactly are you decoding and encoding?
import hashlib
transaction_hex = "93a05cac6ae03dd55172534c53be0738a50257bb3be69fff2c7595d677ad53666e344634584d07b8d8bc017680f342bc6aad523da31bc2b19e1ec0921078e872"
transaction_bin = transaction_hex.decode('hex')
hash = hashlib.sha256(hashlib.sha256(transaction_bin).digest()).digest()
hash.encode('hex_codec')
'38805219c8ac7e9a96416d706dc1d8f638b12f46b94dfd1362b5d16cf62e68ff'
hash[::-1].encode('hex_codec')
'ff682ef66cd1b56213fd4db9462fb138f6d8c16d706d41969a7eacc819528038'
transaction_hex is a regular string of lower-case ASCII characters, and the decode() method with the 'hex' argument changes it to a binary string with bytes 0x93, 0xa0, etc. (this works in Python 2; in Python 3 you would use bytes.fromhex() instead). In C it would be an array of unsigned char, of length 64 in this case.
This array/byte string of length 64 is then hashed with SHA256, and the result (another binary string, of size 32) is hashed again. So hash is a string of length 32, or a bytes object of that length in Python 3. Then encode('hex_codec') is a synonym for encode('hex') in Python 2 (in Python 3 neither exists on bytes objects; you would use hash.hex() or binascii.hexlify instead). It outputs an ASCII (lower-case hex) string again, replacing each raw byte (which is just a small integer) with the two-character string that is its hexadecimal representation. So the final line reverses the double hash and outputs it as hexadecimal, in a form I usually call "lowercase hex ASCII".
If I have a System.Int32 at a known memory address of a 64-bit .Net process dump, how can I reference its data in a condition or expression in windbg? For example:
? poi(000000ba2e4de938) == 0n27 displays 0 instead of 1, even though I know that the value at that address is 27 (dt int 000000ba2e4de938 displays 0n27). It does so because poi grabs pointer-sized data and therefore picks up 32 junk bits after the value I am trying to access.
Is there a way to grab data of a certain size to use within expressions? I have so far only found ways to dump the data, but not use it within expressions or conditions.
Short answer: use dwo(...) for 32 bits, qwo(...) for 64 bits and poi(...) for architecture dependent size.
Long answer:
Let's look at an Int32 with SOS first:
0:014> .symfix
0:014> .reload
0:014> .loadby sos clr
0:006> !name2ee *!System.Int32
Module: 000007feecab1000
Assembly: mscorlib.dll
Token: 00000000020000f0
MethodTable: 000007feed1603d0
EEClass: 000007feecb71810
Name: System.Int32
[...]
0:006> !dumpheap -mt 000007feed1603d0
Address MT Size
00000000028376d8 000007feed1603d0 24
0:006> !do 00000000028376d8
Name: System.Int32
MethodTable: 000007feed1603d0
EEClass: 000007feecb71810
Size: 24(0x18) bytes
File: C:\Windows\Microsoft.Net\assembly\GAC_64\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.dll
Fields:
MT Field Offset Type VT Attr Value Name
000007feed1603d0 400055f 8 System.Int32 1 instance 8868 m_value
From that output, you can see that the value (m_value) of the Int32 is at the address of the object + an offset of 8.
An Int32 is 32 bits long, so we need dd (dump DWORD) to look at the memory:
0:006> dd 00000000028376d8+8 L1
00000000`028376e0 000022a4
Convert that into decimal and it'll be what has been displayed by SOS before:
0:006> ? 22a4
Evaluate expression: 8868 = 00000000`000022a4
To use it in a condition, use dwo (DWORD size) instead of poi (pointer size, which is qwo on 64 bits):
0:006> ? dwo(00000000028376d8+8)
Evaluate expression: 8868 = 00000000`000022a4
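To see why the pointer-sized read goes wrong, here is a small illustrative sketch (Python, not WinDbg) that mimics reading 4 versus 8 bytes from little-endian memory holding an Int32 followed by unrelated data:
import struct
# Hypothetical memory: the Int32 value 27, then 4 junk bytes from whatever follows it.
memory = struct.pack("<i", 27) + b"\xde\xad\xbe\xef"
dword = struct.unpack_from("<I", memory, 0)[0]  # what dwo(addr) reads
qword = struct.unpack_from("<Q", memory, 0)[0]  # what poi(addr) reads on 64 bits
print(dword == 27)  # True
print(qword == 27)  # False: the junk landed in the high 32 bits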
I need to convert given text (not in a file) into binary values and store them in a single array that is to be given as input to another function in MATLAB.
Example:
Hi how are you ?
It is to be converted into binary and stored in an array. I have used the dec2bin() function, but I did not succeed in getting the required output.
Sounds a bit like a trick question. In MATLAB, a character array (string) is just a different representation of 16-bit unsigned character codes.
>> str = 'Hi, how are you?'
str =
Hi, how are you?
>> whos str
Name Size Bytes Class Attributes
str 1x16 32 char
Note that the 16 characters occupy 32 bytes, or 2 bytes (16 bits) per character. From the documentation for char:
Valid codes range from 0 to 65535, where codes 0 through 127 correspond to 7-bit ASCII characters. The characters that MATLAB® can process (other than 7-bit ASCII characters) depend upon your current locale setting. To convert characters into a numeric array, use the double function.
Now, you could use double as it recommends to get the character codes into double arrays, but a minimal representation would simply involve uint16:
int16bStr = uint16(str)
To split this into bytes, typecast into 8-bit integers:
typecast(int16bStr,'uint8')
which yields 32 uint8 values (bytes), which are suitable for conversion to binary representation with dec2bin, if you want to see the binary (but these arrays are already binary data).
If you don't expect anything other than ASCII characters, just throw out the extra bits from the start:
>> int8bStr = uint8(str)
int8bStr =
72 105 44 32 104 111 119 32 97 114 101 32 121 111 117 63
>> binStr = reshape(dec2bin(int8bStr.'),1,[])
binStr =
110011101110111001111111111111110000001001001011111011000000 <...snip...>
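The same pipeline is easy to sanity-check outside MATLAB; here is a rough Python analogue (illustrative only, assuming little-endian byte order, which is what typecast uses on typical platforms):
import struct
s = "Hi, how are you?"
codes = [ord(c) for c in s]                     # character codes, like uint16(str)
raw = struct.pack("<%dH" % len(codes), *codes)  # split into bytes, like typecast(...,'uint8')
bits = "".join(format(b, "08b") for b in raw)   # binary digits, like dec2bin
print(bits)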
Since Mongo uses BSON, I am using the BSONDecoder from the Java API to get the BSON document from the Mongo query and print the string output. In the following, a byte[] array stores the bytes of the MongoDB document (when I print the hex values, they are the same as in Wireshark):
byte[] array = byteBuffer.array();
BasicBSONDecoder decoder = new BasicBSONDecoder();
BSONObject bsonObject = decoder.readObject(array);
System.out.println(bsonObject.toString());
I get the following error:
org.bson.BSONException: should be impossible
Caused by: java.io.IOException: unexpected EOF
at org.bson.BasicBSONDecoder$BSONInput._need(BasicBSONDecoder.java:327)
at org.bson.BasicBSONDecoder$BSONInput.read(BasicBSONDecoder.java:364)
at org.bson.BasicBSONDecoder.decodeElement(BasicBSONDecoder.java:118)
at org.bson.BasicBSONDecoder._decode(BasicBSONDecoder.java:79)
at org.bson.BasicBSONDecoder.decode(BasicBSONDecoder.java:57)
at org.bson.BasicBSONDecoder.readObject(BasicBSONDecoder.java:42)
at org.bson.BasicBSONDecoder.readObject(BasicBSONDecoder.java:32)
... 4 more
Looking at the implementation
(https://github.com/mongodb/mongo-java-driver/blob/master/src/main/org/bson/LazyBSONDecoder.java) it looks like the IOException is caught and rethrown in
throw new BSONException( "should be impossible" , ioe );
The above happens for a query to the database (by query I mean that the byte[] array contains all the bytes after the document length). The query itself contains the string "ismaster", or in hex "x10 ismaster x00 x01 x00 x00 x00 x00". I suspect this is the BSON encoding of {isMaster: 1}, but I still do not understand why it fails.
You say:
byte[] array contains all the bytes after the document length
If you are stripping off the first part of the BSON that's returned, you are not passing a valid BSON document to the parser/decoder.
See the BSON spec for details, but in a nutshell, the first four bytes are the total size of the binary document, in little-endian format.
You are getting an exception in code that is basically trying to read an expected number of bytes. It read the first int32 as the length and then tried to parse the rest as BSON elements (and got an exception when it didn't find a valid type in the next byte). Pass it everything you get back from the query, including the document size, and it will work correctly.
This works just fine:
byte[] array = new BigInteger("130000001069734d6173746572000100000000", 16).toByteArray();
BasicBSONDecoder decoder = new BasicBSONDecoder();
BSONObject bsonObject = decoder.readObject(array);
System.out.println(bsonObject.toString());
And produces this output:
{ "isMaster" : 1}
There is something wrong with the bytes in your byteBuffer. Note that you must include the whole document (including the first 4 bytes which are the size).
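For illustration, here is a small sketch (Python, not the Java driver) that walks those same bytes by hand, following the BSON layout described above:
import struct
doc = bytes.fromhex("130000001069734d6173746572000100000000")
size = struct.unpack_from("<i", doc, 0)[0]  # first 4 bytes: total size, little-endian
etype = doc[4]                              # element type byte: 0x10 = int32
end = doc.index(0, 5)                       # element name is NUL-terminated
name = doc[5:end].decode("ascii")
value = struct.unpack_from("<i", doc, end + 1)[0]
print(size, etype, name, value)             # 19 16 isMaster 1 (a trailing 0x00 closes the document)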