Computer architecture: how to encode an instruction?

We are asked to give a possible encoding of the following instruction, in hexadecimal and in little-endian:
r1 <- Memory[r2+r3]
where the initial values of r1 (ECX), r2 (EDX), and r3 (EBX) are 0x137dd, 0xb, and 0x1f respectively.
I'm confused. I know instructions are encoded in 32 bits, where the first few bits are the opcode and the others the addresses of the operands, but what are the addresses of the operands in this case?
Thanks!

The operands here are r2 and r3. The instruction loads the value at memory address (value in r2 + value in r3) into register r1.
So the instruction would work like this:
opcode + destination register + source operand 1 + source operand 2
The question seems incomplete in that it doesn't give you the opcode for this operation, but the addresses of the operands should be the register numbers of r2 and r3 (not the values stored in r2 and r3!).
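For illustration only (the question mixes x86 register names with a fixed 32-bit format, so no real ISA is pinned down), here is a hypothetical encoding in C with an 8-bit opcode and three 8-bit register fields; the opcode value 0x01 is invented:
#include <stdint.h>
#include <stdio.h>

/* Hypothetical 32-bit format: 8-bit opcode, then 8-bit fields for the
   destination register and the two source registers. */
static uint32_t encode_load_indexed(uint8_t opcode, uint8_t rd,
                                    uint8_t rs1, uint8_t rs2)
{
    return ((uint32_t)opcode << 24) | ((uint32_t)rd << 16) |
           ((uint32_t)rs1 << 8) | (uint32_t)rs2;
}

int main(void)
{
    uint32_t insn = encode_load_indexed(0x01, 1, 2, 3); /* r1 <- Memory[r2+r3] */
    printf("0x%08x\n", insn);  /* 0x01010203; little-endian bytes: 03 02 01 01 */
    return 0;
}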

Related

Block Encoding format in PKCS

In the PKCS block encoding format for RSA, what is the difference between block type 01 and block type 02? And when is each used?
From https://www.rfc-editor.org/rfc/rfc2313#section-8.1 (Thanks, James):
8.1 Encryption-block formatting
A block type BT, a padding string PS, and the data D shall be
formatted into an octet string EB, the encryption block.
EB = 00 || BT || PS || 00 || D . (1)
The block type BT shall be a single octet indicating the structure of
the encryption block. For this version of the document it shall have
value 00, 01, or 02. For a private-key operation, the block type
shall be 00 or 01. For a public-key operation, it shall be 02.
It's worth noting that the phrase "block type" does not appear in the PKCS#1 v2.0 RFC (https://www.rfc-editor.org/rfc/rfc2437).
Later we see the verification half:
9.4 Encryption-block parsing
The encryption block EB shall be parsed into a block type BT, a
padding string PS, and the data D according to Equation (1).
It is an error if any of the following conditions occurs:
The encryption block EB cannot be parsed
unambiguously (see notes to Section 8.1).
The padding string PS consists of fewer than eight
octets, or is inconsistent with the block type BT.
The decryption process is a public-key operation
and the block type BT is not 00 or 01, or the decryption
process is a private-key operation and the block type is
not 02.
(emphasis mine)
Using the terminology from RFC2313:
Signature Generation is an encryption process which is a private-key operation (BT=01)
Signature Verification is a decryption process which is a public-key operation (BT=01)
Initiate Key Transfer / Enveloping is an encryption process which is a public-key operation (BT=02)
Receive Key Transfer is a decryption process which is a private-key operation (BT=02).
RFC 2437 (PKCS#1 v2.0) replaced many of these terms and got rid of the block type notion altogether. That byte is simply dictated to be 01 for PKCS#1 v1.5 signature formatting and 02 for PKCS#1 v1.5 encryption. The byte isn't fixed for either the OAEP (encryption) or PSS (signing) padding schemes.
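To make Equation (1) concrete, here is a minimal C sketch of the v1.5 encryption-block formatting for both block types. The function name is mine, and rand() merely stands in for a proper random source:
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of RFC 2313 Equation (1): EB = 00 || BT || PS || 00 || D.
   k = modulus length in octets. Returns 0 on success, -1 if D is too long.
   rand() is NOT a cryptographic RNG; it stands in for one here. */
static int format_encryption_block(uint8_t *eb, size_t k, uint8_t bt,
                                   const uint8_t *d, size_t dlen)
{
    if (k < 11 || dlen > k - 11)      /* PS must be at least 8 octets */
        return -1;
    size_t pslen = k - 3 - dlen;
    eb[0] = 0x00;
    eb[1] = bt;
    if (bt == 0x01) {                 /* private-key op (signing): PS = FF..FF */
        memset(eb + 2, 0xFF, pslen);
    } else {                          /* bt == 02, public-key op: nonzero random PS */
        for (size_t i = 0; i < pslen; i++)
            eb[2 + i] = (uint8_t)(rand() % 255 + 1);
    }
    eb[2 + pslen] = 0x00;
    memcpy(eb + 3 + pslen, d, dlen);
    return 0;
}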

How to truncate a 2's complement output

I have data written into a short data type. The data written is in 2's complement form.
Now when I try to print the data using %04x, data with MSB=0 prints fine: e.g. if data=740, the print I get is 0740.
But when the MSB=1, I am unable to get a proper print: e.g. if data=842, the print I get is fffff842.
I want the data truncated to 4 bytes, so the expected output is f842.
Either declare your data as a type which is 16 bits long, or make sure the printing function uses the right format for a 16-bit value. Or keep your current type, but do a bitwise AND with 0xffff. What you can do depends on the language you're using.
But whichever way you go, check your assumptions again. There seem to be a few issues in your question:
2's complement applies to signed numbers only. There are no negative numbers in your question.
Assuming you mean C's short: it doesn't have to be 16 bits long.
"I get is fffff842. I want the data truncated to 4 bytes" - fffff842 is 4 bytes long; f842 is 2 bytes long.
The 2-byte value 842 does not have the MSB set.
I'm assuming C (or possibly C++) as the language here.
Because of the default argument promotions involved when calling a variable-argument function (such as printf), your use of a short will result in an integer promotion: "If an int can represent all values of the original type (as restricted by the width, for a bit-field), the value is converted to an int".
A short is converted to an int by means of sign-extension, and 0xf842 sign-extended to 32 bits is 0xfffff842.
You can use a bitwise AND to mask off the most significant word:
printf("%04x", data & 0xffff);
You could also add the h length specifier to state that you only want to print an (unsigned) short worth of bits from an int:
printf("%04hx", data);

How to read SNMP OID Output (bits) (hrPrinterDetectedErrorState)

I have a quick question. It's most likely user error, so I apologize before I begin.
I'm trying to set up a threshold for a device so that it will alert us when one of our printers is in a certain state (jammed, out of toner, no paper, etc). I've found the specific OID that handles this (1.3.6.1.2.1.25.3.5.1.2.1). The OID is called hrPrinterDetectedErrorState, under the HOST-RESOURCES-MIB. I've verified that I can see the OID via SNMPWALK. My problem is interpreting the data it's spitting out. What I'm reading in the MIB and what I'm seeing via SNMPWALK are different.
Here is the description of the OID from the MIB:
"This object represents any error conditions detected
by the printer. The error conditions are encoded as
bits in an octet string, with the following
definitions:
Condition Bit #
lowPaper 0
noPaper 1
lowToner 2
noToner 3
doorOpen 4
jammed 5
offline 6
serviceRequested 7
inputTrayMissing 8
outputTrayMissing 9
markerSupplyMissing 10
outputNearFull 11
outputFull 12
inputTrayEmpty 13
overduePreventMaint 14
Bits are numbered starting with the most significant
bit of the first byte being bit 0, the least
significant bit of the first byte being bit 7, the
most significant bit of the second byte being bit 8,
and so on. A one bit encodes that the condition was
detected, while a zero bit encodes that the condition
was not detected.
This object is useful for alerting an operator to
specific warning or error conditions that may occur,
especially those requiring human intervention."
The odd part is that SNMPWALK says the OID is a Hex-String, while the MIB specifies that it should be an Octet-String. Are the two different? Do I need to convert the data output by SNMPWALK somehow to get it to match up with what the MIB is saying?
To test everything, I put the printer into several different states, then ran an SNMPWALK on the device to see what the OID output. Here are the results. As you will see, these results don't match up with what the MIB specifies.
Case 1: Opened the toner door
Expected Output based on MIB: 4
SNMP Output: 08
Case 2: Removed Toner & Closed the door
Expected Output based on MIB: 1
SNMP Output: 10
Case 3: Placed 1 sheet of paper and printed a 2 page document. The printer ran out of paper.
Expected Output based on MIB: 0 or 1
SNMP Output: @
I'm confused by the output. I just need to know how to read the OID so I can set up thresholds so that when it sees, for example, 08, it performs a certain action.
Thanks for your help!
You are reading this wrong. The data you receive back should be interpreted as a bit array, where every bit is its own value for a specific alarm. In your case:
Expected Output based on MIB: 4
SNMP Output: 08
You actually get back the output:
00001000 00000000
The first byte here covers these values:
lowPaper 0
noPaper 1
lowToner 2
noToner 3
doorOpen 4
jammed 5
offline 6
serviceRequested 7
So lowPaper is bit 0, noPaper is bit 1, lowToner is bit 2, and so on. doorOpen is bit 4, and as you can see that bit is set, indicating that the door is open.
EDIT:
This is very dependent on the device and implementation. Working out how to parse it right involves a lot of trial and error (at least in my experience). As an example, if you get back the message 9104, it could be that 91 and 04 are separate, so you first translate 91 from hex to binary and then do the same with 04:
91: 10010001
04: 00000100
This would mean lowPaper, noToner, serviceRequested and inputTrayEmpty, and it looks like the most likely way this works.
If you only get one byte back, it means you should only look for alarms in the first 8 bits. As an example of things to look out for: if the only alarm present is doorOpen, you could be getting back just 08, but it would actually be 0008, where the first two hex chars are the second part of the alarms and simply aren't shown because none of them are present. So in your case you may first have to switch the bytes (if there are 4), parse the first two and the last two on their own, and then you get the actual result.
As I said, there is no real standard here from what I have seen, and you just have to work with it until you are sure you know how the data is sent and how to parse it.
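To make the bit numbering concrete, here is a small C sketch (my own, not from any SNMP toolkit) that decodes up to two octets of hrPrinterDetectedErrorState using the MSB-first rule quoted from the MIB:
#include <stdio.h>
#include <stddef.h>

static const char *conditions[] = {
    "lowPaper", "noPaper", "lowToner", "noToner",
    "doorOpen", "jammed", "offline", "serviceRequested",
    "inputTrayMissing", "outputTrayMissing", "markerSupplyMissing",
    "outputNearFull", "outputFull", "inputTrayEmpty", "overduePreventMaint"
};

/* Bit 0 is the MSB of the first octet, bit 8 the MSB of the second. */
static void print_printer_state(const unsigned char *octets, size_t len)
{
    for (size_t bit = 0; bit < 15 && bit / 8 < len; bit++)
        if (octets[bit / 8] & (0x80u >> (bit % 8)))
            printf("%s\n", conditions[bit]);
}

int main(void)
{
    unsigned char ex1[] = { 0x08 };       /* 00001000 -> bit 4 -> doorOpen */
    unsigned char ex2[] = { 0x91, 0x04 }; /* the 9104 example above */
    print_printer_state(ex1, sizeof ex1); /* doorOpen */
    print_printer_state(ex2, sizeof ex2); /* lowPaper noToner serviceRequested inputTrayEmpty */
    return 0;
}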
A PowerShell function to interpret the hrPrinterDetectedErrorState octet string. (I feel like there's some more often-used way to do this?) I've been using this module, but it doesn't even resolve hostnames to IP addresses:
https://www.powershellgallery.com/packages/SNMP/1.0.0.1
The data property returned is just a string, not an octetstring type. None of the printers are SNMP v3.
A single byte has to be padded with a zero; most of the time only one byte is sent. The bytes are in the (I think confusing) order in which they're documented: converting two bytes to an integer results in a different order (depending on big-endian or little-endian interpretation).
function snmpmessage {
    param($data)

    # Bit 0 is the MSB of the first byte, so the first byte maps to the
    # low-order flag values and the second byte to the high-order ones.
    [Flags()] enum hrPrinterDetectedErrorState
    {
        # more common group (first byte)
        LowPaper            = 0x0080
        NoPaper             = 0x0040
        LowToner            = 0x0020
        NoToner             = 0x0010
        DoorOpen            = 0x0008
        Jammed              = 0x0004
        Offline             = 0x0002
        ServiceRequested    = 0x0001
        # second byte
        InputTrayMissing    = 0x8000
        OutputTrayMissing   = 0x4000
        MarkerSupplyMissing = 0x2000
        OutputNearFull      = 0x1000
        OutputFull          = 0x0800
        InputTrayEmpty      = 0x0400
        OverduePreventMaint = 0x0200
    }

    $bytes = [byte[]][char[]]$data                      # string -> raw bytes
    if ($bytes.count -eq 1) { $bytes = $bytes[0], 0 }   # pad a lone byte with 0
    $code = [bitconverter]::ToUInt16($bytes, ($startIndex = 0))
    [hrPrinterDetectedErrorState]$code
}
snmpmessage (-join [char[]](1,4))   # parens keep -join from being parsed as a parameter
ServiceRequested, InputTrayEmpty
snmpmessage '@' # 0x40
NoPaper
Or:
$hrPrinterDetectedErrorState = '1.3.6.1.2.1.25.3.5.1.2.1'
$hostname = 'myprinter01'
$ip = (Resolve-DnsName $hostname).ipaddress
$result = Get-SnmpData -ip $ip -oid $hrPrinterDetectedErrorState -v v1
snmpmessage $result.data
LowToner
# ?
# $octetstring = [Lextm.SharpSnmpLib.OctetString]::new($result.data)

How to implement 16 bit->32 bit lookup table in ARM assembly using NEON?

In an iOS 6 project, I have a buffer containing two-byte words (16 bits) that need to be translated to four-byte words (32 bits) via a lookup table. I hard-code the values into the table, and then use the value of the two-byte word to determine which 32-bit table value to retrieve. Here's an example:
void map_values(uint32_t *dst, uint16_t *src, uint32_t *lut, int buf_length) {
    int i = 0;
    for (i = 0; i < buf_length; i++) {
        *dst = *(lut + (*src));
        dst++;
        src++;
    }
}
The problem is, it's too slow. Could this be sped up by processing four outputs at a time using NEON? The thing is, I'm iffy on how to take the value from the src buffer and use it as an index into the lookup table. Also, the word lengths are the same in the table and the output buffer, but not for the source, so I can only read two 16-bit words as input versus the four 32-bit words of output I need. Any ideas? Is there a better way to approach this problem, perhaps?
Current asm output from clang (clang -O3 -arch armv7 lut.c -S):
.section __TEXT,__text,regular,pure_instructions
.section __TEXT,__textcoal_nt,coalesced,pure_instructions
.section __TEXT,__const_coal,coalesced
.section __TEXT,__picsymbolstub4,symbol_stubs,none,16
.section __TEXT,__StaticInit,regular,pure_instructions
.syntax unified
.section __TEXT,__text,regular,pure_instructions
.globl _map_values
.align 2
.code 16 # #map_values
.thumb_func _map_values
_map_values:
# BB#0:
cmp r3, #0
it eq
bxeq lr
LBB0_1: # %.lr.ph
# =>This Inner Loop Header: Depth=1
ldrh r9, [r1], #2
subs r3, #1
ldr.w r9, [r2, r9, lsl #2]
str r9, [r0], #4
bne LBB0_1
# BB#2: # %._crit_edge
bx lr
.subsections_via_symbols
Lookup tables are (nearly) unvectorizable. Very small lookup tables can be handled with the vtbl instruction, but your lookup table is far too big for that.
What are you using the lookup table for? If the values can be computed on the fly without too much work instead of looking them up, that may actually be a significant win for you.
My first thought is that you might get some luck out of vtablelookup in the vecLib portion of the Accelerate framework. The signature is:
vUInt32 vtablelookup(
    vSInt32 Index_Vect,
    uint32_t *Table
);
where vSInt32 and vUInt32 are 128-bit packed 32-bit signed/unsigned integers respectively. I believe the function is backed by NEON on ARM. The big problem will be converting your src array into 32-bit indices, which could well slow things down so much as to render the speed gains from vectorising the lookup pointless.
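Failing that, since NEON has no gather load that can index a 65536-entry table, it may be worth helping the compiler along in plain C first. Here is a sketch (mine, untested on an actual device) using restrict-qualified pointers and 4x unrolling so the independent loads can be scheduled back to back; it assumes buf_length is a multiple of 4:
#include <stdint.h>

void map_values4(uint32_t *restrict dst, const uint16_t *restrict src,
                 const uint32_t *restrict lut, int buf_length)
{
    for (int i = 0; i < buf_length; i += 4) {
        /* Four independent loads per iteration hide some load latency. */
        dst[i]     = lut[src[i]];
        dst[i + 1] = lut[src[i + 1]];
        dst[i + 2] = lut[src[i + 2]];
        dst[i + 3] = lut[src[i + 3]];
    }
}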

Encoding of ... some sort?

Forgive me if this has been asked before, but I assure you I've scoured the internet and have turned up nothing, probably because I don't have the right terminology.
I would like to take an integer and convert it to a little-endian(?) hex representation like this:
303 -> 0x2f010000
I can see that the bytes are packed such that the 16's and 1's places are both in the same byte, and that the 4096's place and 256's place share a byte. If someone could just point me to the right terminology for such an encoding, I'm sure I could find my answer on how to do it. Thanks!
Use bit-shift operators combined with bitwise AND and OR operators...
assuming 32-bit unsigned:
unsigned int value = 303;
unsigned int result = 0;
for (int i = 0; i < 4; i++)
{
    /* extract byte i and move it to the mirrored position */
    result |= ((value >> (i * 8)) & 0xFF) << (24 - (i * 8));
}
Big-endian and little-endian refer to the order of the bytes in memory. A value like 0x2f010000 has no intrinsic endianness; endianness depends on the CPU architecture.
If you always want to reverse the order of the bytes in a 32-bit value, use the code that Demi posted.
If you always want to get the specific byte order (because you're preparing to transfer those bytes over the network, or store them to a disk file), use something else. E.g. the BSD sockets library has a function htonl() that takes your CPU's native 32-bit value and puts it into big-endian order.
If you're running on a little-endian machine, htonl(303) == 0x2f010000. If you're running on a big-endian machine, htonl(303) == 303. In both cases the result will be represented by the bytes [0x00, 0x00, 0x01, 0x2f] in memory.
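A quick way to check that claim, assuming a POSIX system for arpa/inet.h:
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>   /* htonl() */

int main(void)
{
    uint32_t be = htonl(303);
    unsigned char bytes[4];
    memcpy(bytes, &be, sizeof bytes);
    /* Prints 00 00 01 2f regardless of the host's byte order. */
    printf("%02x %02x %02x %02x\n", bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}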
If anyone can put a specific term to what I was trying to do, I'd still love to hear it. I did, however, find a way to do what I needed, and I'll post it here so that anyone who comes looking after me can find it. There may be (probably is) an easier, more direct way, but here's what I ended up doing in VB.Net to get back the bytecode I wanted:
Private Function Encode(ByVal original As Integer) As Byte()
    Dim twofiftysixes As Integer = CInt(Math.Floor(original / 256))
    Dim sixteens As Integer = CInt(Math.Floor((original - (256 * twofiftysixes)) / 16))
    Dim ones As Integer = original Mod 16
    Dim bytecode As Byte() = {CByte((16 * sixteens) + ones), CByte(twofiftysixes), 0, 0}
    Return bytecode
End Function
Effectively, this breaks the integer up into its hex components, then converts the appropriate pairs to Bytes with CByte.