Difference between offset and index in addressing modes? - cpu-architecture

From http://en.wikipedia.org/wiki/Addressing_mode:
Indexed absolute
+------+-----+-----+--------------------------------+
| load | reg |index|            address             |
+------+-----+-----+--------------------------------+
(Effective address = address + contents of specified index register)
Note that this is more or less the same as base-plus-offset addressing mode, except that the offset in this case is large enough to address any memory location.
I still don't understand: what is the difference between an offset and an index? And what is the difference between the base-plus-offset addressing mode and the indexed absolute addressing mode?
Thanks.

An offset is an absolute number of bytes. So if address = 0x1000 and offset = 0x100, then the effective address = 0x1000 + 0x100 = 0x1100.
An index is an offset that is multiplied by a constant (the element size). So if address = 0x1000, index = 0x100, and the size of an element = 4, then the effective address = 0x1000 + 0x100*4 = 0x1400. You would use this when indexing into an array of 32-bit values.
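If it helps to see the two calculations side by side, here is a small sketch (plain Python, only modelling the arithmetic; the function names are made up for this example):

def base_plus_offset(address, offset):
    # the offset is a raw byte distance
    return address + offset

def indexed(address, index, element_size):
    # the index counts elements; the hardware scales it by the element size
    return address + index * element_size

print(hex(base_plus_offset(0x1000, 0x100)))  # 0x1100
print(hex(indexed(0x1000, 0x100, 4)))        # 0x1400, an array of 32-bit values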
To me, the address+index example sounds like the x86 LEA instruction:
http://www.cs.virginia.edu/~evans/cs216/guides/x86.html#instructions
With that said, when I read the Wikipedia article, I can't see a difference between "Indexed absolute", "Base plus index", and "Scaled". They look like exactly the same thing, except that the terms "address" and "base" are interchanged. This looks to me like too many authors writing the same thing over again. If this response gets enough upvotes, I'm editing the article. :-)

Why doesn't this bor and bnot expression give the expected result in PowerShell?

To find the last address in an IPv6 subnet, one needs to do a "binary or" and a "binary not" operation.
The article I'm reading (https://www.codeproject.com/Articles/660429/Subnetting-with-IPv6-Part-1-2) describes it like this:
(2001:db8:1234::) | ~(ffff:ffff:ffff::) = 2001:db8:1234:ffff:ffff:ffff:ffff:ffff
Where | is a "binary or" and
~ is a "binary not"
In PowerShell, however, I try it like:
$mask = 0xffffffff
$someOctet = 0x0000
"{0:x4}" -f ($someOctet -bor -bnot ($mask) )
and I get 0000 instead of ffff
Why is this?
The tutorial is applying the NOT to the entire subnet mask, so ff00 inverts to 00ff (and similarly for longer runs of Fs and 0s); you aren't doing that, so you don't get the same results.
The fully expanded calculation that you show is doing this:
1. (2001:0db8:1234:0000:0000:0000:0000:0000) | ~(ffff:ffff:ffff:0000:0000:0000:0000:0000)
2. (2001:0db8:1234:0000:0000:0000:0000:0000) | (0000:0000:0000:ffff:ffff:ffff:ffff:ffff)
3. = 2001:db8:1234:ffff:ffff:ffff:ffff:ffff
Note how, from step 1 to step 2, the NOT inverts the pattern of Fs and 0s, flipping the subnet mask around the boundary between where the prefix ends and where the host part begins.
Then, in step 3, the OR keeps the bits on the left unchanged (the inverted mask contributes only zero bits there, so those numbers are neither zero'd nor ffff'd) and sets every bit on the right to one (ffff'ing the host part, which gives the maximum IP address within that prefix).
In other words, it makes no sense to do this "an octet at a time". This is a whole IP address (or whole prefix) + whole subnet mask operation.
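To make the "whole address at once" point concrete, here is a sketch of the same calculation in Python rather than PowerShell (the standard ipaddress module conveniently gives 128-bit integers):

import ipaddress

prefix = int(ipaddress.IPv6Address('2001:db8:1234::'))
mask   = int(ipaddress.IPv6Address('ffff:ffff:ffff::'))

# NOT the *entire* 128-bit mask, then OR it with the prefix
last = prefix | (~mask & (2**128 - 1))
print(ipaddress.IPv6Address(last))
# 2001:db8:1234:ffff:ffff:ffff:ffff:ffff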
Where the tutorial says:
& (AND), | (OR), ~ (NOT or bit INVERTER): We will use these three bitwise operators in our calculations. I think everybody is familiar -at least from university digital logic courses- and knows how they operate. I will not explain the details here again. You can search for 'bitwise operators' for further information.
If you aren't very familiar with what they do, it would be worth studying them more before trying to apply them to IP subnetting, because you are basically asking why 0 or (not 1) is 0, and the answer is that that's how Boolean logic "or" and "not" work.
Edit for your comment
[math]::pow(2,128) is a lot bigger than [decimal]::maxvalue, so I don't think Decimal will do.
I don't know what a recommended way to do it is, but I imagine if you really wanted to do it all within PowerShell with -bnot you'd have to process it with [bigint] (e.g. [bigint]::Parse('20010db8123400000000000000000000', [System.Globalization.NumberStyles]::HexNumber)).
But more likely, you'd do something more long-winded like:
# parse the address and mask into IP address objects
# which saves you having to expand the short version to the full form yourself
$ip = [ipaddress]::Parse('fe80::1')
$mask = [ipaddress]::Parse('ffff::')
# Convert them into byte arrays, then convert those into BitArrays
$ipBits = [System.Collections.BitArray]::new($ip.GetAddressBytes())
$maskBits = [System.Collections.BitArray]::new($mask.GetAddressBytes())
# ip OR (NOT mask) calculation using BitArray's own methods
$result = $ipBits.Or($maskBits.Not())
# long-winded way to get the resulting BitArray back to an IP
# via a byte array
$byteTemp = [byte[]]::new(16)
$result.CopyTo($byteTemp, 0)
$maxIP = [ipaddress]::new($byteTemp)
$maxIP.IPAddressToString
# fe80:ffff:ffff:ffff:ffff:ffff:ffff:ffff

Scala - How can I read some specific bytes from a file?

I'd like to encrypt a text (about 1 MB), and I use the maximum length of RSA keys (4096 bits). However, the key seems too short. As I googled, I got to know that the maximum size of text that RSA can encrypt is 8 bytes shorter than the length of the key. Thus, I can only encrypt 501 bytes in this way. So I decided to divide my text into 2093 arrays (1024*1024/501 = 2092.1). The question is: how can I pour the first 501 bytes into the first array in Scala? Can anyone help me out?
I can't comment on whether your cryptographic approach is okay. (I don't know, but I would rely on libraries written and vetted by more knowledgeable cryptographers if I were in your shoes. I'm also not sure why you chose 501, which is 11 bytes, not 8, shorter than 512.)
But chunking your arrays into fixed-size blocks should be easy. Just use the grouped function of Array.
val text : String = ???
val bytes = text.getBytes( scala.io.Codec.UTF8.charSet ) // lots of ways to do this
val blocks = bytes.grouped( 501 )
blocks will be an Iterator[Array[Byte]], each array 501 bytes long except for the last (which may be shorter).
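If a sketch in another language helps, the same chunking idea looks like this in Python (purely illustrative; the grouped call above is all you need in Scala):

text = "some 1 MB string"
data = text.encode("utf-8")
blocks = [data[i:i+501] for i in range(0, len(data), 501)]
# every block is 501 bytes long, except possibly the last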

How to truncate a 2's complement output

I have data written into a short data type. The data written is in 2's complement form.
Now, when I try to print the data using %04x, data with MSB=0 is printed fine; for example, if data=740, the print I get is 0740.
But when the MSB=1, I am unable to get a proper print. For example, if data=842, the print I get is fffff842.
I want the data truncated to 4 bytes, so the expected output is f842.
Either declare your data as a type which is 16 bits long, or make sure the printing function uses the right format for a 16-bit value. Or keep your current type, but do a bitwise AND with 0xffff. What you can do depends on the language you're doing it in, really.
But whichever way you go, check your assumptions again. There seem to be a few issues in your question:
2s-complement applies to signed numbers only. There are no negative numbers in your question.
Assuming you mean C's short - it doesn't have to be 16 bits long.
"I get is fffff842 I want the data truncated to 4 bytes" - fffff842 is 4 bytes long. f842 is 2 bytes long.
The 2-byte value 842 does not have the MSB set.
I'm assuming C (or possibly C++) as the language here.
Because of the default argument promotions involved when calling a variable-argument function (such as printf), your use of a short will result in an integer promotion, which states that "if an int can represent all values of the original type (as restricted by the width, for a bit-field), the value is converted to an int".
A short is converted to an int by means of sign-extension, and 0xf842 sign-extended to 32 bits is 0xfffff842.
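You can check the numbers themselves in any language with arbitrary-precision integers; for example, in Python (this just illustrates the arithmetic, not C's promotion rules):

value = 0xf842 - 0x10000        # the 16-bit pattern 0xf842 read as a signed short: -1982
print(hex(value & 0xffffffff))  # 0xfffff842 (sign-extended to 32 bits)
print(hex(value & 0xffff))      # 0xf842 (masked back to 16 bits)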
You can use a bitwise AND to mask off the most significant word:
printf("%04x", data & 0xffff);
You could also add the h length specifier to state that you only want to print an (unsigned) short worth of bits from an int:
printf("%04hx", data);

How to read SNMP OID Output (bits) (hrPrinterDetectedErrorState)

I have a quick question. It's most likely user error, so I apologize before I begin.
I'm trying to set up a threshold for a device so that it will alert us when one of our printers is in a certain state (jammed, out of toner, no paper, etc.). I've found the specific OID that handles this (1.3.6.1.2.1.25.3.5.1.2.1). The specific OID is called hrPrinterDetectedErrorState under the HOST-RESOURCE-MIB. I've verified that I can see the OID via SNMPWALK. My problem is interpreting the data it's spitting out. What I'm reading in the MIB and what I'm seeing via SNMPWALK are different.
Here is the description of oid from the MIB:
"This object represents any error conditions detected
by the printer. The error conditions are encoded as
bits in an octet string, with the following
definitions:
Condition Bit #
lowPaper 0
noPaper 1
lowToner 2
noToner 3
doorOpen 4
jammed 5
offline 6
serviceRequested 7
inputTrayMissing 8
outputTrayMissing 9
markerSupplyMissing 10
outputNearFull 11
outputFull 12
inputTrayEmpty 13
overduePreventMaint 14
Bits are numbered starting with the most significant
bit of the first byte being bit 0, the least
significant bit of the first byte being bit 7, the
most significant bit of the second byte being bit 8,
and so on. A one bit encodes that the condition was
detected, while a zero bit encodes that the condition
was not detected.
This object is useful for alerting an operator to
specific warning or error conditions that may occur,
especially those requiring human intervention."
The odd part is that SNMPWALK says the OID is a Hex-String, while the MIB specifies that it should be an Octet-String. Are the two different? Do I need to convert the data that is output by SNMPWALK somehow to get it to match up with what the MIB is saying?
To test everything, I put the printer into several different "states". I then ran an SNMPWALK on the device to see what the OID output. Here are the results. As you will see, these results don't match up with what the MIB specifies.
Case 1: Opened the toner door
Expected Output based on MIB: 4
SNMP Output: 08
Case 2: Removed Toner & Closed the door
Expected Output based on MIB: 1
SNMP Output: 10
Case 3: Placed 1 sheet of paper and printed a 2 page document. The printer ran out of paper.
Expected Output based on MIB: 0 or 1
SNMP Output: #
I'm confused by this output. I just need to know how to read the OID so I can set up thresholds so that when it sees, for example, a 08, it performs a certain action.
Thanks for your help!
You are reading this wrong. The data you receive back should actually be interpreted as a bit array, where every bit is its own flag for one specific alarm. In your case:
Expected Output based on MIB: 4
SNMP Output: 08
You actually get back the output:
00001000 00000000
The first byte here covers these values:
lowPaper 0
noPaper 1
lowToner 2
noToner 3
doorOpen 4
jammed 5
offline 6
serviceRequested 7
So lowPaper is bit 0, noPaper is bit 1, lowToner is bit 2, and so on. doorOpen is bit 4, and as you can see, that bit is set, indicating that the door is open.
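A quick way to check a single status byte against that table (a Python illustration, not part of the original answer):

conditions = ['lowPaper', 'noPaper', 'lowToner', 'noToner',
              'doorOpen', 'jammed', 'offline', 'serviceRequested']

byte = 0x08
bits = format(byte, '08b')  # '00001000'; bit 0 is the most significant bit
print([name for name, bit in zip(conditions, bits) if bit == '1'])
# ['doorOpen']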
EDIT:
This is very dependent on the device and implementation. Knowing how to parse it right involves a lot of trial and error (at least in my experience). As an example, if you get back the message 9104, it could be that 91 and 04 are separate, so you first translate 91 from hex to binary and then do the same with 04:
91: 10010001
04: 00000100
Which would mean lowPaper, noToner, serviceRequested, and inputTrayEmpty. This looks like the most likely way this works.
If you only get one byte back, that means you should only look for alarms in the first 8 bits. As an example of the things you need to look out for: if the only alarm present is doorOpen, you could be getting back only 08, when it is actually 0008, where the first two hex chars are the second group of alarms but aren't shown because none of them are present. So in that case you first have to switch the byte pairs (if there are four hex chars), parse the first two and the second two on their own, and then you get the actual result.
As I said, there is no real standard here from what I have seen, and you just have to work with it until you are sure you know how the data is sent and how to parse it.
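Putting those rules into code, here is a hedged sketch of a decoder (Python, illustrative only; it assumes the MIB's numbering, i.e. bit 0 is the most significant bit of the first byte, and it simply walks whatever bytes it is given):

CONDITIONS = [
    'lowPaper', 'noPaper', 'lowToner', 'noToner',
    'doorOpen', 'jammed', 'offline', 'serviceRequested',
    'inputTrayMissing', 'outputTrayMissing', 'markerSupplyMissing',
    'outputNearFull', 'outputFull', 'inputTrayEmpty', 'overduePreventMaint',
]

def decode_error_state(octets):
    # octets: the raw bytes of hrPrinterDetectedErrorState, e.g. b'\x91\x04'
    flags = []
    for byte_index, byte in enumerate(octets):
        for bit_in_byte in range(8):
            mib_bit = byte_index * 8 + bit_in_byte  # bit 0 = MSB of the first byte
            if byte & (0x80 >> bit_in_byte) and mib_bit < len(CONDITIONS):
                flags.append(CONDITIONS[mib_bit])
    return flags

print(decode_error_state(b'\x08'))      # ['doorOpen']
print(decode_error_state(b'\x91\x04'))  # ['lowPaper', 'noToner', 'serviceRequested', 'inputTrayEmpty']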
A PowerShell function to interpret the hrPrinterDetectedErrorState octet string. (I feel like there's some more often-used way to do this?) I've been using this module, but it doesn't even resolve hostnames to IP addresses.
The data property returned is just a string, not an octet string type. None of the printers are SNMP v3.
https://www.powershellgallery.com/packages/SNMP/1.0.0.1
A single byte has to be padded with a zero; most of the time only one byte is sent. The bytes are in the (I think confusing) order in which they are documented, so converting two bytes to an integer gives a different result depending on whether big-endian or little-endian order is used.
function snmpmessage {
    param($data)

    # Flag values reflect the little-endian uint16 view of the two octets:
    # bit 0 of the MIB (lowPaper) is the MSB of the first byte, i.e. 0x0080 here.
    [Flags()] enum hrPrinterDetectedErrorState {
        # more common group
        LowPaper            = 0x0080
        NoPaper             = 0x0040
        LowToner            = 0x0020
        NoToner             = 0x0010
        DoorOpen            = 0x0008
        Jammed              = 0x0004
        Offline             = 0x0002
        ServiceRequested    = 0x0001
        InputTrayMissing    = 0x8000
        OutputTrayMissing   = 0x4000
        MarkerSupplyMissing = 0x2000
        OutputNearFull      = 0x1000
        OutputFull          = 0x0800
        InputTrayEmpty      = 0x0400
        OverduePreventMaint = 0x0200
    }

    $bytes = [byte[]][char[]]$data
    if ($bytes.count -eq 1) { $bytes = $bytes[0], 0 }  # pad a single byte with a zero
    $code = [bitconverter]::ToUInt16($bytes, ($startIndex = 0))
    [hrPrinterDetectedErrorState]$code
}
snmpmessage (-join [char[]](1,4))
ServiceRequested, InputTrayEmpty
snmpmessage '#' # 0x23
ServiceRequested, Offline, LowToner
Or:
$hrPrinterDetectedErrorState = '1.3.6.1.2.1.25.3.5.1.2.1'
$hostname = 'myprinter01'
$ip = (Resolve-DnsName $hostname).ipaddress
$result = Get-SnmpData -ip $ip -oid $hrPrinterDetectedErrorState -v v1
snmpmessage $result.data
LowToner
# ?
# $octetstring = [Lextm.SharpSnmpLib.OctetString]::new($result.data)

How can I convert the tiger hash values from the official implementations into the form used by Direct Connect?

I am trying to implement a Direct Connect client, and I am currently stuck at a point where I need to hash the files in order to be able to upload them to other clients.
As all the other clients require TTHL (Tiger Tree Hashing Leaves) support for verification of the downloaded data, I have searched for implementations of the algorithm and found tiger-hash-python.
I have implemented a routine that uses the hash function from before, and is able to hash large files, according to the logic specified in Tree Hash EXchange format (THEX) (basically, the tree diagram is the important part on that page).
However, the value produced by it is similar to those shown on Wikipedia, a hex digest, but is different from those shown in the DC clients I'm using for reference.
I have been unable to find out how the hex digest form is converted to this other one (39 characters, A-Z, 0-9). Could someone please explain how that is done?
Well ... I tried what Paulo Ebermann said, using the following functions:
import base64
import math

import tiger  # the aforementioned tiger-hash-python script

def strdivide(lst, length):
    result = []
    # Calculate how many blocks there are, using the condition i*length = len(lst).
    # The additional maths operations deal with the last block, which might be smaller.
    for i in range(0, int(math.ceil(float(len(lst)) / length))):
        result.append(lst[i*length:(i+1)*length])
    return result

def dchash(data):
    result = tiger.hash(data)  # 48-char hex digest
    result = "".join(["".join(strdivide(result[i:i+16], 2)[::-1]) for i in range(0, 48, 16)])  # representation transform
    bits = "".join([chr(int(c, 16)) for c in strdivide(result, 2)])  # every 2 hex chars -> 1 raw byte (Python 2 str)
    result = base64.b32encode(bits)  # result will be 40 characters
    return result[:-1]  # leave behind the trailing '='
The TTH for an empty file was found to be 8B630E030AD09E5D0E90FB246A3A75DBB6256C3EE7B8635A, which, after the transformation specified here, becomes 5D9ED00A030E638BDB753A6A24FB900E5A63B8E73E6C25B6. Base32-encoding this result yielded LWPNACQDBZRYXW3VHJVCJ64QBZNGHOHHHZWCLNQ, which was found to be what DC++ generates.
The only mention of the format of the hash in the Direct Connect protocol I found is on the $SR page on the NMDC Protocol wiki:
For files containing TTH, the <hub_name> parameter is replaced with TTH:<base32_encoded_tth_hash> (ref: TTH_Hash).
So, it is Base32-encoding. This is defined in RFC 4648 (and some earlier ones), section 6.
Basically, you are using the capital letters A-Z and the decimal digits 2 to 7, and one base32 digit represents 5 bits, while one base16 (hexadecimal) digit represents only 4.
This means each 5 hex digits map to 4 base32 digits, and for a Tiger hash (192 bits) you will need 40 base32 characters (in the official encoding, the last one would be an '=' padding character, which seems to be omitted if you say that there are always 39 characters).
I'm not sure of an implementation of a conversion from hex (or bytes) to base32, but it shouldn't be too complicated with a lookup table and some bit-shifting.
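In fact, the standard library already covers the bytes-to-base32 step; a minimal sketch (reusing the values from the answer above; hex_to_base32 is just an illustrative name):

import base64, binascii

def hex_to_base32(hex_digest):
    # decode the hex digest to raw bytes, then base32-encode them (RFC 4648)
    raw = binascii.unhexlify(hex_digest)
    return base64.b32encode(raw).decode('ascii').rstrip('=')  # DC clients drop the '=' padding

print(hex_to_base32('5D9ED00A030E638BDB753A6A24FB900E5A63B8E73E6C25B6'))
# LWPNACQDBZRYXW3VHJVCJ64QBZNGHOHHHZWCLNQ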