I'm trying to write a checksum algorithm using MARIE.js, but I'm stuck on computing the 1's complement.
I saw that other assembly languages have a CMA instruction, but I couldn't find anything like that for MARIE.
So instead I defined G with the value hex 2F and subtracted it to find the checksum byte, but the output is not what I expected.
What did I miss or do wrong?
Input /Takes user input
Store A /Stores user input to A
Input /Takes user input
Store B /Stores user input to B
Input /Takes user input
Store C /Stores user input to C
Input /Takes user input
Store D /Stores user input to D
Load A /Load A to AC
Add B /Add B to AC
Add C /Add C to AC
Add D /Add D to AC
Subt F /Subtract F from the sum of data 1,2,3,4
Store E /Sum of data 1,2,3,4 ignoring carry
Subt G /Subtract G (HEX 2F) from AC
/Add ONE
Output /Print checksum byte
HALT /End program
/Variable declaration
A, HEX 0 /Data 1
B, HEX 0 /Data 2
C, HEX 0 /Data 3
D, HEX 0 /Data 4
E, HEX 0 /Checksum byte
F, HEX 100 /Ignore carry
G, HEX 2F
ONE, DEC 1
Two's complement, -n, is defined as one's complement plus 1, i.e. -n = ~n + 1.
Therefore, since MARIE has subtraction, you can form the two's complement (0 - n), and subtracting 1 from that yields the one's complement, ~n.
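MARIE has no complement instruction, but the identity is easy to sanity-check outside the simulator. A minimal Python sketch (just verifying the arithmetic on an 8-bit checksum byte, not MARIE code):

def ones_complement_8bit(n):
    # One's complement via the identity ~n = (0 - n) - 1,
    # masked to 8 bits to mimic a one-byte word.
    return ((0 - n) - 1) & 0xFF

# Example: the one's complement of HEX 2F (0b00101111) is HEX D0 (0b11010000).
print(hex(ones_complement_8bit(0x2F)))  # 0xd0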
I've got a task from my professor and unfortunately I'm really confused.
The task:
Find a string D1 for which the first 4 bytes of hash(D1) are equal to 0.
So it should look like "0000.....".
As far as I know, we cannot just decrypt a hash, and checking candidates one by one seems like pointless work.
I've got a task from my professor...
Find a string D1 for which the first 4 bytes of hash(D1) are equal to 0. So it should look like "0000....."
As far as I know, we cannot just decrypt a hash, and checking candidates one by one seems like pointless work.
In this case it seems like the work is not really "pointless"; rather, you are doing it because your professor asked you to.
Some commenters have mentioned that you could look at the Bitcoin blockchain as a source of hashes, but this will only work if your hash of interest is the same one used by Bitcoin (double SHA-256!).
The easiest way to figure this out in general is just to brute force it:
Pseudo-code à la Python (made runnable here with SHA-256 standing in for whatever hash function you are actually using):
import hashlib

def hash(data):  # stand-in for your hash of interest; SHA-256 used as an example
    return hashlib.sha256(data).digest()

for x in range(10 * 2**32):          # any number bigger than about 4 billion should work
    x_str = str(x)                   # any old method to generate some bytes to hash should work
    x_bytes = x_str.encode('utf-8')
    hash_bytes = hash(x_bytes)       # assuming hash() returns bytes
    if hash_bytes[0:4] == b'\x00\x00\x00\x00':
        print("Found string: {}".format(x_str))
        break
I wrote a short python3 script, which repeatedly tries hashing random values until it finds a value whose SHA256 hash has four leading zero bytes:
import secrets
import hashlib

while True:
    p = secrets.token_bytes(64)
    h = hashlib.sha256(p).hexdigest()
    if h[0:8] == '00000000':
        break

print('SHA256(' + p.hex() + ')=' + h)
After running for a few minutes (on my ancient Dell laptop), it found a value whose SHA256 hash has four leading zero bytes:
SHA256(21368dc16afcb779fdd9afd57168b660b4ed786872ad55cb8355bdeb4ae3b8c9891606dc35d9f17c44219d8ea778d1ee3590b3eb3938a774b2cadc558bdfc8d4)=000000007b3038e968377f887a043c7dc216961c22f8776bbf66599acd78abf6
The following shell command verifies this result:
echo -n '21368dc16afcb779fdd9afd57168b660b4ed786872ad55cb8355bdeb4ae3b8c9891606dc35d9f17c44219d8ea778d1ee3590b3eb3938a774b2cadc558bdfc8d4' | xxd -r -p | sha256sum
As expected, this produces:
000000007b3038e968377f887a043c7dc216961c22f8776bbf66599acd78abf6
Edit 5/8/21
Optimized version of the script, based on my conversation with kelalaka in the comments below.
import secrets
import hashlib

N = 0
p = secrets.token_bytes(32)
while True:
    h = hashlib.sha256(p).digest()
    N += 1
    if h.hex()[0:8] == '0'*8:
        break
    p = h

print('SHA256(' + p.hex() + ')=' + h.hex())
print('N=' + str(N))
Instead of generating a new random number in each iteration of the loop to use as the input to the hash function, this version of the script uses the output of the hash function from the previous iteration as the input to the hash function in the current iteration. On my system, this quadruples the number of iterations per second. It found a match in 1483279719 iterations in a little over 20 minutes:
$ time python3 findhash2.py
SHA256(69def040a417caa422dff20e544e0664cb501d48d50b32e189fba5c8fc2998e1)=00000000d0d49aaaf9f1e5865c8afc40aab36354bc51764ee2f3ba656bd7c187
N=1483279719
real 20m47.445s
user 20m46.126s
sys 0m0.088s
The sha256 hash of the string $Eo is 0000958bc4dc132ad12abd158073204d838c02b3d580a9947679a6
This was found using the code below, which restricts the string to printable ASCII keyboard characters. It cycles through the hashes of each 1-character string (technically it hashes bytes, not strings), then each 2-character string, then each 3-character string, then each 4-character string (it never had to go to 4 characters, so I'm not 100% sure the math for that part of the function is correct).
The 'limit' value is included to prevent the code from running forever in case a match is not found. This ended up not being necessary, as a match was found in 29970 iterations and the execution time was nearly instantaneous.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from hashlib import sha256

utf8_chars = list(range(0x21,0x7f))

def make_str(attempt):
    if attempt < 94:
        c0 = [attempt%94]
    elif attempt >= 94 and attempt < 8836:
        c2 = attempt//94
        c1 = attempt%94
        c0 = [c2,c1]
    elif attempt >= 8836 and attempt < 830584:
        c3 = attempt//8836
        c2 = (attempt-8836*c3)//94
        c1 = attempt%94
        c0 = [c3,c2,c1]
    elif attempt >= 830584 and attempt < 78074896:
        c4 = attempt//830584
        c3 = (attempt-830584*c4)//8836
        c2 = ((attempt-830584*c4)-8836*c3)//94
        c1 = attempt%94
        c0 = [c4,c3,c2,c1]
    return bytes([utf8_chars[i] for i in c0])

target = '0000'
limit = 1200000
attempt = 0
hash_value = sha256()
hash_value.update(make_str(attempt))
while hash_value.hexdigest()[0:4] != target and attempt <= limit:
    hash_value = sha256()
    attempt += 1
    hash_value.update(make_str(attempt))

t = ''.join([chr(i) for i in make_str(attempt)])
print([t, attempt])
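If you want to double-check the reported string yourself (a quick sketch, assuming Python 3 and hashlib), hash it directly and look at the leading hex digits:

import hashlib

h = hashlib.sha256(b'$Eo').hexdigest()
print(h)
print(h.startswith('0000'))  # True if the first two bytes are zero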
Could someone tell me why characters from the extended ASCII table are being encoded as two hex bytes instead of one? For example:
a = 61
â = C3 A2 (even though it should normally be encoded as E2)
This is "Hex UTF-8 bytes".
U+007F (127) -> 1 Byte
U+07FF (2,047) -> 2 Byte
http://www.ltg.ed.ac.uk/~richard/utf-8.cgi?input=%C3%A2&mode=char
http://unicode.mayastudios.com/examples/utf8.html
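A quick way to see the difference is to encode the same character with both codecs (a minimal Python sketch; 'utf-8' and 'latin-1' are standard library codec names):

text = 'â'                            # U+00E2
print(text.encode('utf-8').hex())     # c3a2 -> two bytes in UTF-8
print(text.encode('latin-1').hex())   # e2   -> one byte in Latin-1 (extended ASCII)
print('a'.encode('utf-8').hex())      # 61   -> plain ASCII stays one byte either way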
I have the task of transferring each letter of a given sequence into an integer vector in MATLAB. For instance, given the input sequence seq = 'TGCA': since we have 4 distinct letters in total, I plan to encode 'A' as '0001', 'T' as '0010', 'G' as '0100' and 'C' as '1000'. The whole sequence can then be encoded as the concatenation of all the encoded (0,1) vectors. So, in this case, the whole sequence would be '0010010010000001'. Any comments would be appreciated. Many thanks.
The idea behind this solution is to define a key, which returns the expected output when compared to the string:
>> key='CGTA'
key =
CGTA
>> key=='A'
ans =
0 0 0 1
>> key=='T'
ans =
0 0 1 0
This basically solves it; now use bsxfun to vectorize:
E = reshape(bsxfun(@eq, key(:), seq(:).'), 1, [])
This outputs a logical vector; if a char result is intended, use:
F = char(reshape(bsxfun(@eq, key(:), seq(:).'), 1, []) + '0')
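For readers who think in Python/NumPy rather than MATLAB, the same broadcast-comparison idea looks roughly like this (a sketch, not part of the original answer; assumes NumPy):

import numpy as np

key = np.array(list('CGTA'))   # one comparison row per output bit
seq = np.array(list('TGCA'))

# Compare every key character with every sequence character (4 x N logical matrix),
# then read the matrix out column by column: one 4-bit code per sequence character.
bits = (key[:, None] == seq[None, :]).astype(int)
encoded = ''.join(str(b) for b in bits.T.ravel())
print(encoded)   # 0010010010000001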
Octave doesn't support containers.Map, so I'm gonna waste 80 rows out of an 84x4 matrix...
codes(['A','T','G','C'],:)=['0001';'0010';'0100';'1000'];
seq = 'ATACAGCTAGGATCA';
encodedSeq=codes(seq,:);
encodedSeq =
0001
0010
0001
1000
0001
0100
1000
0010
0001
0100
0100
0001
0010
1000
0001
or
reshape(encodedSeq,1,[])
ans = 000100100000010000001000110000010000010000100101010001001001
I have a number like this - 778310098 - and I want to read 2 bytes at a time. So I am expecting my output to be 77; 83; 10; 09; 8. I tried the following:
uint16(fread(fileID,inf, 'ubit8')), but the output I get is the ASCII values of the individual digits:
55
55
56
51
49
48
48
57
56
What do I need to do to get the desired output?
To read pairs of ASCII digits from a text file (we tend not to describe text files in bytes, but in characters), use:
[10 1] * (fread(fileID,[2 inf], 'char') - 48)
To read bytes pairwise from a binary file, try
fread(fileID,inf, '*uint16')
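To make the first expression concrete: fread with a [2 inf] size argument fills a two-row matrix column by column, subtracting 48 turns the character codes into digits, and the [10 1] multiplication combines each column into a two-digit number. A rough Python sketch of the same pair-of-digits idea (the file name digits.txt is hypothetical; note that int() drops the leading zero of '09'):

# Read the text '778310098' and take the digits two at a time.
with open('digits.txt') as f:
    text = f.read().strip()

pairs = [int(text[i:i + 2]) for i in range(0, len(text), 2)]
print(pairs)   # [77, 83, 10, 9, 8]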
One method is to convert it to a string, then process the string, then convert it back to an integer. While this may not be particularly elegant or perfect, will this do the trick?
a = 778310098;
b = num2str(a);
for i = 1:2:length(b)
    if i == length(b)             % to handle the case for odd input
        split = str2num(b(i))
    else
        split = str2num(b(i:i+1)) % handle all others
    end
end
How can I convert numbers in the range 1 through 26 to their respective letters of the alphabet?
1 = A
2 = B
...
26 = Z
CHR(#) will give you the ASCII character; you just need to offset the number based on the ASCII table:
e.g. A = 65, so you will need to add 64 to 1:
CHR(64 + #) = A if # is 1
The ASCII code is the numerical representation of a character such as 'a' or 'Z'. By looking at the table, one can see that capital A has a value of 65 and Z has a value of 90. Adding 64 to each value in the range 1-26 will give you the corresponding letter.
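The same idea in Python, for concreteness (Python's chr takes the code point directly, so the offset of 64 does all the work):

def number_to_letter(n):
    # 1 -> 'A', 2 -> 'B', ..., 26 -> 'Z'; 'A' has ASCII code 65, hence the offset of 64.
    return chr(64 + n)

print(number_to_letter(1))    # A
print(number_to_letter(26))   # Z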