How to read SNMP OID Output (bits) (hrPrinterDetectedErrorState) - powershell

I have a quick question. It's most likely user error, so I apologize before I begin.
I'm trying to set up a threshold for a device so that it will alert us when one of our printers is in a certain state (jammed, out of toner, no paper, etc.). I've found the specific OID that handles this (1.3.6.1.2.1.25.3.5.1.2.1). It is called hrPrinterDetectedErrorState, under the HOST-RESOURCE-MIB. I've verified that I can see the OID via SNMPWALK. My problem is interpreting the data it's spitting out. What I'm reading in the MIB and what I'm seeing via SNMPWALK are different.
Here is the description of the OID from the MIB:
"This object represents any error conditions detected
by the printer. The error conditions are encoded as
bits in an octet string, with the following
definitions:
Condition Bit #
lowPaper 0
noPaper 1
lowToner 2
noToner 3
doorOpen 4
jammed 5
offline 6
serviceRequested 7
inputTrayMissing 8
outputTrayMissing 9
markerSupplyMissing 10
outputNearFull 11
outputFull 12
inputTrayEmpty 13
overduePreventMaint 14
Bits are numbered starting with the most significant
bit of the first byte being bit 0, the least
significant bit of the first byte being bit 7, the
most significant bit of the second byte being bit 8,
and so on. A one bit encodes that the condition was
detected, while a zero bit encodes that the condition
was not detected.
This object is useful for alerting an operator to
specific warning or error conditions that may occur,
especially those requiring human intervention."
The odd part is that SNMPWALK says the OID is a Hex-String, while the MIB specifies that it should be an Octet-String. Are the two different? Do I need to convert the data that SNMPWALK outputs somehow to get it to match up with what the MIB is saying?
To test everything, I put the printer into several different "states." I then ran an SNMPWALK on the device to see what the OID output. Here are the results. As you will see, these results don't match up with what the MIB specifies.
Case 1: Opened the toner door
Expected Output based on MIB: 4
SNMP Output: 08
Case 2: Removed Toner & Closed the door
Expected Output based on MIB: 1
SNMP Output: 10
Case 3: Placed 1 sheet of paper and printed a 2 page document. The printer ran out of paper.
Expected Output based on MIB: 0 or 1
SNMP Output: #
I'm confused by the output. I just need to know how to read the OID so I can set up thresholds so that when it sees, for example, a 08 it performs a certain action.
Thanks for your help!

You are reading this wrong. The data you receive back should be interpreted as a bit array, where every bit is a flag for one specific alarm. In your case:
Expected Output based on MIB: 4
SNMP Output: 08
You actually get back the output:
00001000 00000000
The first byte covers these conditions:
lowPaper 0
noPaper 1
lowToner 2
noToner 3
doorOpen 4
jammed 5
offline 6
serviceRequested 7
So lowPaper is bit 0, noPaper is bit 1, lowToner is bit 2, etc. doorOpen is bit 4, and as you can see that bit is set, indicating that the door is open.
EDIT:
This is very dependent on the device and implementation. Knowing how to parse it right involves a lot of trial and error (at least in my experience). As an example, if you get back the message 9104: 91 and 04 are separate bytes, so you first translate 91 from hex to binary and then do the same with 04:
91: 10010001
04: 00000100
Which would mean lowPaper, noToner, serviceRequested, and inputTrayEmpty. This looks like the most likely way this works.
If you only get one byte back, then you should only look for alarms in the first 8 bits. As an example of things you need to look out for: if the only alarm present is doorOpen, you could be getting back only 08, but it would actually be 0008, where the first two hex chars are the second part of the alarms and aren't shown because none of those are present. So in your case you may first have to switch the bytes (if there are four hex chars), parse the first pair and the second pair on their own, and then you get the actual result.
As I said, there is no real standard here from what I have seen, and you just have to work with it until you are sure you know how the data is sent and how to parse it.
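For example, here is a minimal PowerShell sketch of that decoding (an illustration only, assuming the bytes arrive in the documented order, i.e. bit 0 is the most significant bit of the first byte):
$conditions = 'lowPaper','noPaper','lowToner','noToner','doorOpen','jammed',
              'offline','serviceRequested','inputTrayMissing','outputTrayMissing',
              'markerSupplyMissing','outputNearFull','outputFull','inputTrayEmpty',
              'overduePreventMaint'
$hex = '9104'
# Split the hex string into byte pairs and turn each into an 8-character bit string
$bits = ($hex -split '(..)' -ne '' | ForEach-Object {
    [Convert]::ToString([Convert]::ToByte($_, 16), 2).PadLeft(8, '0')
}) -join ''
# Report every condition whose bit is set
0..([math]::Min($bits.Length, $conditions.Count) - 1) |
    Where-Object { $bits[$_] -eq '1' } |
    ForEach-Object { $conditions[$_] }
# 9104 -> lowPaper, noToner, serviceRequested, inputTrayEmpty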

PowerShell function to interpret the hrPrinterDetectedErrorState octet string. (I feel like there's some more commonly used way to do this?) I've been using this module, but it doesn't even resolve hostnames to IP addresses.
The data property returned is just a string, not an octet string type. None of the printers are SNMP v3.
https://www.powershellgallery.com/packages/SNMP/1.0.0.1
A single byte has to be padded with a zero; most of the time only one byte is sent. The bytes are in the (I think confusing) order in which they're documented. Converting two bytes to an integer results in a different order (depending on big-endian or little-endian byte order).
function snmpmessage {
    param($data)

    # Bit 0 of the octet string is the most significant bit of the first byte
    [flags()] enum hrPrinterDetectedErrorState {
        # more common group
        LowPaper            = 0x0080
        NoPaper             = 0x0040
        LowToner            = 0x0020
        NoToner             = 0x0010
        DoorOpen            = 0x0008
        Jammed              = 0x0004
        Offline             = 0x0002
        ServiceRequested    = 0x0001
        InputTrayMissing    = 0x8000
        OutputTrayMissing   = 0x4000
        MarkerSupplyMissing = 0x2000
        OutputNearFull      = 0x1000
        OutputFull          = 0x0800
        InputTrayEmpty      = 0x0400
        OverduePreventMaint = 0x0200
    }

    $bytes = [byte[]][char[]]$data
    if ($bytes.Count -eq 1) { $bytes = $bytes[0], 0 } # pad a lone byte with a zero
    $code = [BitConverter]::ToUInt16($bytes, 0)       # little-endian: first byte is low-order
    [hrPrinterDetectedErrorState]$code
}
snmpmessage (-join [char[]](1,4))
ServiceRequested, InputTrayEmpty
snmpmessage '@' # '@' = 0x40
NoPaper
Or:
$hrPrinterDetectedErrorState = '1.3.6.1.2.1.25.3.5.1.2.1'
$hostname = 'myprinter01'
$ip = (Resolve-DnsName $hostname).ipaddress
$result = Get-SnmpData -ip $ip -oid $hrPrinterDetectedErrorState -v v1
snmpmessage $result.data
LowToner
# ?
# $octetstring = [Lextm.SharpSnmpLib.OctetString]::new($result.data)

Related

Pymodbus reading holding and input registers : IllegalAddress

I am trying, with the interactive version of pymodbus (pip install pymodbus[repl]), to access a Modbus device (a UPS).
I had already accessed it using the mbpoll tool, so the communication and device are both working.
I guess I am having problems figuring out the "right" parameters that match the equivalent mbpoll call.
From the documentation, I am trying to access the values starting at 30405 (page 864 of liebert-intellislot-modbus-rtu-reference-guide-SL-28170_0.pdf), which I understand to be holding registers since the numbering is 30xxx:
pg 864 - Vertiv™ | Liebert® IntelliSlot Modbus/BACnet Reference Guide
Table 3.118 Liebert® GXT5—Input and Holding (continued)
Data Label Input Holding # of Reg Scale Notes/Units
System Input Frequency 30405 — 1 10 Uint16
System Input Power Factor L1 30406 — 1 100 Uint16
System Input Power Factor L2 30407 — 1 100 Uint16
System Input Power Factor L3 30408 — 1 100 Uint16
System Input Max Voltage L1-N 30409 — 1 10 Uint16
System Input Min Voltage L1-N 30410 — 1 10 Uint16
System Input Max Voltage L2-N 30411 — 1 10 Uint16
System Input Min Voltage L2-N 30412 — 1 10 Uint16
System Input Max Voltage L3-N 30413 — 1 10 Uint16
Using mbpoll, it returns values as expected:
mbpoll -r 396 -t 3 -c 125 -1 -q 192.168.160.1 | \grep -v "65535 (-1)"
-- Polling slave 1...
[396]: 2162
[402]: 7
[405]: 599
[406]: 53
[409]: 2279
[410]: 2048
[415]: 230
[416]: 27
[417]: 60
[418]: 1
[419]: 0
[420]: 190
[431]: 2163
[434]: 599
[435]: 230
[446]: 2
[447]: 0
[448]: 1
[449]: 0
[450]: 1
[451]: 0
[452]: 14446
...
Then I fired up /usr/local/bin/pymodbus.console tcp --host 192.168.160.1 --port 502, and at the prompt I tried several combinations of client.read_holding_registers, only to get "message": "IllegalAddress" for all the trials below (and others, varying count, unit, addresses, etc.):
client.read_holding_registers address=30396 count=10 unit=1
client.read_holding_registers address=396 count=10 unit=1
client.read_holding_registers address=30396 count=10 unit=0
client.read_holding_registers address=396 count=10 unit=0
The complete return of one of these client.read calls is:
> client.read_holding_registers address=396 count=10 unit=0
{
    "original_function_code": "3 (0x3)",
    "error_function_code": "131 (0x83)",
    "exception code": 2,
    "message": "IllegalAddress"
}
Since mbpoll returns valid values, I suspect I am doing something wrong with the parameters of client.read_holding_registers, but I can't figure out where the problem is.
Both accesses are from the same machine, at the same time, running openSUSE Tumbleweed.
Accessing as root or as a normal user does not make any difference.
I'd appreciate any hints on dealing with register addresses in the pymodbus module.
As per the comments, the mbpoll command-line options can cause some confusion:
-t 3:int16 16-bit input register data type with signed int display
-t 4 16-bit output (holding) register data type (default)
This is confusing because the Modbus function code to read an input register is 4, and for holding registers it's 3 (so the reverse of the mbpoll arguments). This is mentioned in an issue.
So the command you need will be client.read_input_registers address=395 count=10 (as per the comments, there is also an off-by-one issue with register numbering - the "Modbus: When 40001 Really Means 1, or 0 Really Means 1" section in this article explains this well). A sketch of that numbering arithmetic follows below.
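To make the numbering concrete, here is a hypothetical helper (written in PowerShell purely for illustration; the function name is made up) that maps a documented register number such as 30405 to the zero-based protocol address, following the 30001/40001 convention from the linked article:
# Hypothetical helper: documented Modbus register number -> pymodbus call + zero-based address
function Get-ModbusAddress([int]$Register) {
    switch ([math]::Floor($Register / 10000)) {
        3 { [pscustomobject]@{ Call = 'read_input_registers';   Address = $Register - 30001 } }
        4 { [pscustomobject]@{ Call = 'read_holding_registers'; Address = $Register - 40001 } }
        default { throw "Unhandled register range: $Register" }
    }
}
Get-ModbusAddress 30405   # read_input_registers, address 404
Get-ModbusAddress 30396   # read_input_registers, address 395 (what mbpoll -r 396 reads)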

Why doesn't this bor and bnot expression give the expected result in Powershell?

To find the last address in an IPv6 subnet, one needs to do a "binary or" and a "binary not" operation.
The article I'm reading (https://www.codeproject.com/Articles/660429/Subnetting-with-IPv6-Part-1-2) describes it like this:
(2001:db8:1234::) | ~(ffff:ffff:ffff::) = 2001:db8:1234:ffff:ffff:ffff:ffff:ffff
Where | is a "binary or" and
~ is a "binary not"
In PowerShell, however, I try it like this:
$mask = 0xffffffff
$someOctet = 0x0000
"{0:x4}" -f ($someOctet -bor -bnot ($mask) )
and I get 0000 instead of ffff
Why is this?
The tutorial is doing a bitwise not of the entire subnet mask, so ff00 inverts to 00ff (and similarly for longer runs of Fs and 0s); you aren't doing that, so you don't get the same results.
The fully expanded calculation that you show is doing this:
1. (2001:0db8:1234:0000:0000:0000:0000:0000) | ~(ffff:ffff:ffff:0000:0000:0000:0000:0000)
2. (2001:0db8:1234:0000:0000:0000:0000:0000) | (0000:0000:0000:ffff:ffff:ffff:ffff:ffff)
3. = 2001:db8:1234:ffff:ffff:ffff:ffff:ffff
Note how from step 1 to step 2, the not inverts the pattern of Fs and 0s, flipping the subnet mask around the point where the network prefix ends and the host part begins.
Then in step 3, the or leaves the bits on the left unchanged (the inverted mask contributes only zeros there, so those numbers are neither zero'd nor ffff'd) and sets all the bits on the right to 1 (maxing them out to the highest IP address within that prefix).
In other words, it makes no sense to do this "an octet at a time". This is a whole IP address (or whole prefix) + whole subnet mask operation.
Where the tutorial says:
& (AND), | (OR), ~ (NOT or bit INVERTER): We will use these three bitwise operators in our calculations. I think everybody is familiar -at least from university digital logic courses- and knows how they operate. I will not explain the details here again. You can search for 'bitwise operators' for further information.
If you aren't very familiar with what they do, it would be worth studying them more before trying to apply them to IP subnetting, because bit by bit you are basically asking why 0 or (not 1) is 0, and the answer is: because that's how Boolean logic "or" and "not" work.
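A minimal demonstration of that per-bit behaviour (note: the -band 0xffff at the end is only needed because PowerShell's -bnot inverts the full width of the underlying integer, not just 16 bits):
$someOctet = 0x0000
$mask = 0xffff                                            # every mask bit is 1
"{0:x4}" -f (($someOctet -bor -bnot $mask) -band 0xffff)  # 0000 - nothing gets set
$mask = 0xff00                                            # now the low 8 mask bits are 0
"{0:x4}" -f (($someOctet -bor -bnot $mask) -band 0xffff)  # 00ff - ones appear only where the mask had 0s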
Edit for your comment
[math]::pow(2,128) is a lot bigger than [decimal]::maxvalue, so I don't think Decimal will do.
I don't know what a recommended way to do it is, but I imagine if you really wanted to do it all within PowerShell with -bnot you'd have to process it with [bigint] (e.g. [bigint]::Parse('20010db8123400000000000000000000', [System.Globalization.NumberStyles]::HexNumber)).
But more likely, you'd do something more long-winded like:
# parse the address and mask into IP address objects,
# which saves you having to expand the short version to the full form yourself
$ip = [ipaddress]::Parse('fe80::1')
$mask = [ipaddress]::Parse('ffff::')
# Convert them into byte arrays, then convert those into BitArrays
$ipBits = [System.Collections.BitArray]::new($ip.GetAddressBytes())
$maskBits = [System.Collections.BitArray]::new($mask.GetAddressBytes())
# ip OR (NOT mask) calculation using BitArray's own methods
$result = $ipBits.Or($maskBits.Not())
# long-winded way to get the resulting BitArray back to an IP
# via a byte array
$byteTemp = [byte[]]::new(16)
$result.CopyTo($byteTemp, 0)
$maxIP = [ipaddress]::new($byteTemp)
$maxIP.IPAddressToString
# fe80:ffff:ffff:ffff:ffff:ffff:ffff:ffff

Powershell: Translate Octet String (SNMP) Output to Hex (Mac address)

So I will briefly explain the environment:
I need to work on a Win2k8 server with PowerShell 4.0.
I want to get some information using SNMP (printer type and printer MAC address):
$SNMP = new-object -ComObject olePrn.OleSNMP
$SNMP.open($P_IP,"public",2,3000)
$PType = $SNMP.get(".1.3.6.1.2.1.25.3.2.1.3.1")
$PMac = $SNMP.get(".1.3.6.1.2.1.2.2.1.6.2")
echo $PType
echo $PMac
So, the output looks like this (as an example):
$PType = HP Officejet Pro 251dw Printer
$PMac =  ÓÁÔ*
So, first of all I started to check whether I had used the right OID, using the SnmpSoft Company command-line tool. There, the output looked fine:
OID=.1.3.6.1.2.1.2.2.1.6.2
Type=OctetString
Value= A0 D3 C1 D4 2A 95 ....*.
Alright, so I started to check what kind of datatype this OID value has: it's an octet string. In the next steps, I searched for ways to transform this octet string value into some readable hex - until now without any progress. I tried to transform it into bytes this way:
$bytes = [System.Text.Encoding]::Unicode.GetBytes($PMac)
[System.Text.Encoding]::ASCII.GetString($bytes)
echo $bytes
But the output is just confusing me:
160
0
211
0
193
0
212
0
42
0
34
32
I tried to interpret this output without any success. Google can't help me anymore, because by now I don't even understand how or what to search for.
So here I am, hoping to get some help or advice on how I can change the output of this query into something readable.
It's an encoding problem.
1.3.6.1.2.1.2.2.1.6 is the interface physical address, so I would expect the value to be the MAC address of the interface. Your command-line result begins with A0-D3-C1, which is an HP MAC address range, so it's consistent. Your printer's MAC address must be A0 D3 C1 D4 2A 95? You didn't state that, so you're leaving me to guess.
I suspect that $PMac is supposed to be a [byte[]] (byte array), but the output is converting it to a string and PowerShell's output system is interpreting it as characters.
Example:
PS C:\> [byte[]]$bytes = 0xa0, 0xd3, 0xc1, 0xd4, 0x2a, 0x95
PS C:\> [System.Text.Encoding]::Default.GetString($bytes)
 ÓÁÔ*•
You probably need to do something like this:
$MAC = [System.Text.Encoding]::Default.GetBytes($PMac) | ForEach-Object {
    $_.ToString('X2')
}
$MAC = $MAC -join '-'
You may want to use [System.Text.Encoding]::ASCII.GetBytes($PMac) instead, since raw SNMP is supposed to use ASCII encoding. I've no idea what olePrn.OleSNMP uses.
You might also look at one of the SNMP PowerShell modules on the PowerShell Gallery. That will be much easier than dealing with COM objects in PowerShell.
I also came across this page on #SNMP's handling of OCTET STRING. #SNMP is a .Net SNMP library, and OCTET STRING appears to be what the underlying type is for this OID. The page describes some of the difficulties of working with this particular object type with .Net. You could also use this library for developing your own Cmdlets in PowerShell; it's available through NuGet.
The output you got is very nearly your expected MAC address:
160 0 211 0 193 0 212 0 42 0 34 32
160 is decimal for hexadecimal 0xA0
211 is 0xD3
193 is 0xC1
The additional zeros between the bytes were added by the Unicode.GetBytes call (UTF-16 encodes each character as two bytes), which I don't think you'll need to use.
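A quick demo of that interleaving: [System.Text.Encoding]::Unicode is UTF-16, which stores each character in two bytes, so every ASCII-range character is followed by a zero byte:
[System.Text.Encoding]::Unicode.GetBytes('ab')   # 97 0 98 0 - two bytes per character
[System.Text.Encoding]::Default.GetBytes('ab')   # 97 98     - one byte per character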
I suspect you'll need to read $PMac as an array of bytes, then do hexadecimal string conversion for each byte. This is probably not the most elegant, but may get the job done:
[byte[]] $arrayOfBytes = @(160, 211, 193)
[string] $hexString = ''
foreach ($b in $arrayOfBytes) {
    $hexString += [convert]::ToString($b, 16)  # e.g. 160 -> 'a0'
    $hexString += ' '
}
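A slightly tighter variant of the same loop, using the -f format operator to get zero-padded, uppercase pairs (byte values taken from the output above):
([byte[]](160, 211, 193, 212, 42, 149) | ForEach-Object { '{0:X2}' -f $_ }) -join '-'
# A0-D3-C1-D4-2A-95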

How to truncate a 2's complement output

I have data written into a short data type. The data written is in 2's complement form.
Now when I try to print the data using %04x, data with MSB=0 is printed fine; e.g. if data=740, the print I get is 0740.
But when MSB=1, I am unable to get a proper print. E.g. if data=842, the print I get is fffff842.
I want the data truncated to 4 bytes, so the expected output is f842.
Either declare your data as a type which is 16 bits long, or make sure the printing function uses the right format for a 16-bit value. Or use your current type, but do a bitwise AND with 0xffff. What you can do depends on the language you're doing it in, really.
But whichever way you go, check your assumptions again. There seem to be a few issues in your question:
2s-complement applies to signed numbers only. There are no negative numbers in your question.
Assuming you mean C's short - it doesn't have to be 16 bits long.
"I get is fffff842 I want the data truncated to 4 bytes" - fffff842 is 4 bytes long. f842 is 2 bytes long.
2-bytes long value 842 does not have the MSB set.
I'm assuming C (or possibly C++) as the language here.
Because of the default argument promotions involved when calling a variable-argument function (such as printf), your use of a short will result in an integer promotion, which states that "If an int can represent all values of the original type (as restricted by the width, for a bit-field), the value is converted to an int".
A short is converted to an int by means of sign-extension, and 0xf842 sign-extended to 32 bits is 0xfffff842.
You can use a bitwise AND to mask off the most significant word:
printf("%04x", data & 0xffff);
You could also add the h length specifier to state that you only want to print an (unsigned) short worth of bits from an int:
printf("%04hx", data);

How can I convert the tiger hash values from the official implementations into the form used by Direct Connect?

I am trying to implement a Direct Connect Client, and I am currently stuck at a point where I need to hash the files in order to be able to upload them to other clients.
All the other clients require TTHL (Tiger Tree Hashing Leaves) support for verification of the downloaded data, so I searched for implementations of the algorithm and found tiger-hash-python.
I have implemented a routine that uses the hash function from before, and is able to hash large files, according to the logic specified in Tree Hash EXchange format (THEX) (basically, the tree diagram is the important part on that page).
However, the value produced by it is similar to those shown on Wikipedia, a hex digest, but is different from those shown in the DC clients I'm using for reference.
I have been unable to find out how the hex digest form is converted to this other one (39 characters, A-Z, 0-9). Could someone please explain how that is done?
Well ... I tried what Paulo Ebermann said, using the following functions:
def strdivide(list, length):
    result = []
    # Calculate how many blocks there are, using the condition: i*length = len(list).
    # The additional maths operations are to deal with the last block, which might have a smaller size
    for i in range(0, int(math.ceil(float(len(list))/length))):
        result.append(list[i*length:(i+1)*length])
    return result

def dchash(data):
    result = tiger.hash(data) # From the aforementioned tiger-hash-python script, 48-char hex digest
    result = "".join(["".join(strdivide(result[i:i+16], 2)[::-1]) for i in range(0, 48, 16)]) # Representation transform
    bits = "".join([chr(int(c, 16)) for c in strdivide(result, 2)]) # Converting every 2 hex characters into 1 byte
    result = base64.b32encode(bits) # Result will be 40 characters
    return result[:-1] # Leaving behind the trailing '='
The TTH for an empty file was found to be 8B630E030AD09E5D0E90FB246A3A75DBB6256C3EE7B8635A, which after the transformation specified here, becomes 5D9ED00A030E638BDB753A6A24FB900E5A63B8E73E6C25B6. Base-32 encoding this result yielded LWPNACQDBZRYXW3VHJVCJ64QBZNGHOHHHZWCLNQ, which was found to be what DC++ generates.
The only mention of the format of the hash in the Direct Connect protocol I found is on the $SR page on the NMDC Protocol wiki:
For files containing TTH, the <hub_name> parameter is replaced with TTH:<base32_encoded_tth_hash> (ref: TTH_Hash).
So, it is Base32-encoding. This is defined in RFC 4648 (and some earlier ones), section 6.
Basically, you are using the capital letters A-Z and the decimal digits 2 to 7, and one base32 digit represents 5 bits, while one base16 (hexadecimal) digit represents only 4.
This means each 5 hex digits map to 4 base32 digits, and for a Tiger hash (192 bits) you will need 40 base32 digits (in the official encoding, the last one would be = padding, which seems to be omitted if you say that there are always 39 characters).
I'm not sure of an implementation of a conversion from hex (or bytes) to base32, but it shouldn't be too complicated with a lookup table and some bit-shifting.
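For example, here is a minimal PowerShell sketch of such a conversion (assuming the standard RFC 4648 alphabet, and zero-padding the final 5-bit group instead of emitting the = padding):
function ConvertTo-Base32([byte[]]$Bytes) {
    $alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567'
    # Concatenate all bytes into one long bit string
    $bits = ($Bytes | ForEach-Object { [Convert]::ToString($_, 2).PadLeft(8, '0') }) -join ''
    # Zero-pad to a multiple of 5 bits (192 -> 195 for a Tiger hash, giving 39 characters)
    if ($bits.Length % 5) { $bits = $bits.PadRight($bits.Length + 5 - $bits.Length % 5, '0') }
    # Each 5-bit group indexes one character of the alphabet
    -join (0..($bits.Length / 5 - 1) | ForEach-Object {
        $alphabet[[Convert]::ToInt32($bits.Substring($_ * 5, 5), 2)]
    })
}
$hex = '5D9ED00A030E638BDB753A6A24FB900E5A63B8E73E6C25B6'   # the byte-swapped TTH from above
$bytes = [byte[]]($hex -split '(..)' -ne '' | ForEach-Object { [Convert]::ToByte($_, 16) })
ConvertTo-Base32 $bytes   # LWPNACQDBZRYXW3VHJVCJ64QBZNGHOHHHZWCLNQ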