Could anyone explain what "errorStatus.prettyPrint()" in pysnmp means?

I'm not able to understand what "errorStatus.prettyPrint()" in pysnmp means. Could anyone explain it in simple language? I'm currently working with pysnmp, so I need to understand what it does.

The SNMP packet contains the error-status integer field, which the SNMP agent uses to communicate a certain class of errors, encountered while processing a request, back to the SNMP manager. The errors are enumerated, so each integer value has definite semantics. That is:
error-status            -- sometimes ignored
    INTEGER {
        noError(0),
        tooBig(1),
        noSuchName(2),  -- for proxy compatibility
        badValue(3),    -- for proxy compatibility
        readOnly(4),    -- for proxy compatibility
        genErr(5),
        ...
Note that "no error" is manifested by the value 0, which evaluates to False in Python.
So pysnmp's errorStatus is just an integer; when you call .prettyPrint() on it, it returns a human-friendly name for the enumerated error (for example, 'noSuchName' rather than the bare 2).
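For instance, here is a minimal GET sketch using pysnmp's high-level API (the demo.snmplabs.com host and the 'public' community string are placeholder values; substitute your own agent):

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

errorIndication, errorStatus, errorIndex, varBinds = next(
    getCmd(SnmpEngine(),
           CommunityData('public'),
           UdpTransportTarget(('demo.snmplabs.com', 161)),
           ContextData(),
           ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0))))

if errorStatus:
    # errorStatus is a non-zero integer here; prettyPrint() renders its
    # enumerated label, e.g. 'noSuchName' instead of 2
    print('%s at %s' % (errorStatus.prettyPrint(),
                        errorIndex and varBinds[int(errorIndex) - 1][0] or '?'))
else:
    for varBind in varBinds:
        print(' = '.join([x.prettyPrint() for x in varBind]))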

Related

How to manage the error queue in openssl (SSL_get_error and ERR_get_error)

In OpenSSL, the man pages for the majority of SSL_* calls indicate an error by returning a value <= 0 and suggest calling SSL_get_error() to get the extended error.
But within the man pages for these calls, as well as for other OpenSSL library calls, there are vague references to using the "error queue" in OpenSSL. Such is the case in the man page for SSL_get_error:
The current thread's error queue must be empty before the TLS/SSL I/O
operation is attempted, or SSL_get_error() will not work reliably.
And in that very same man page, the description for SSL_ERROR_SSL says this:
SSL_ERROR_SSL
A failure in the SSL library occurred, usually a protocol error.
The OpenSSL error queue contains more information on the error.
This kind of implies that there is something in the error queue worth reading, and that failure to read it makes a subsequent call to SSL_get_error unreliable. Presumably, the call to make is ERR_get_error.
I plan to use non-blocking sockets in my code. As such, it's important that I reliably discover when the error condition is SSL_ERROR_WANT_READ or SSL_ERROR_WANT_WRITE so I can put the socket in the correct polling mode.
So my questions are this:
Does SSL_get_error() call ERR_get_error() implicitly for me? Or do I need to use both?
Should I be calling ERR_clear_error prior to every OpenSSL library call?
Is it possible that more than one error could be in the queue after an OpenSSL library call completes? Hence, are there circumstances where the first error in the queue is more relevant than the last error?
SSL_get_error does not call ERR_get_error. So if you just call SSL_get_error, the error stays in the queue.
You should call ERR_clear_error prior to any SSL call (SSL_read, SSL_write, etc.) that is followed by SSL_get_error; otherwise you may be reading an old error that occurred previously in the current thread.
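For comparison, here is how that discipline looks from Python's ssl module, which does the error-queue bookkeeping internally and surfaces SSL_ERROR_WANT_READ/SSL_ERROR_WANT_WRITE as the exceptions ssl.SSLWantReadError/ssl.SSLWantWriteError. This is only a sketch, with example.org standing in for your peer:

import select
import socket
import ssl

ctx = ssl.create_default_context()
raw = socket.create_connection(("example.org", 443))
conn = ctx.wrap_socket(raw, server_hostname="example.org")  # blocking handshake
conn.sendall(b"HEAD / HTTP/1.0\r\nHost: example.org\r\n\r\n")
conn.setblocking(False)  # from here on, I/O may raise WANT_READ/WANT_WRITE

while True:
    try:
        data = conn.recv(4096)  # wraps SSL_read()
        break
    except ssl.SSLWantReadError:
        select.select([conn], [], [])   # poll for readability, then retry
    except ssl.SSLWantWriteError:
        select.select([], [conn], [])   # e.g. a renegotiation needs a write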

How to correlate WSAGetLastError with Socket error code

I see a list of Winsock error codes here
http://msdn.microsoft.com/en-us/library/windows/desktop/ms740668(v=vs.85).aspx
But when I call WSAGetLastError() the result is -2147014848 (or 0x80072740)
How do you correlate the two?
Thanks
This is an HRESULT-style Microsoft error code. The low 16 bits are the error code. The high bit, which is the severity bit, is set; that indicates a failure, and of course it makes the value negative if interpreted as a signed 32-bit integer.
The upper 16 bits (minus the upper 5 bits, which are flags) are a facility code.
See here: http://en.wikipedia.org/wiki/HRESULT
So this is an error, in facility 7, whose number is 0x2740, or 10048.
And that would be (thanks to http://msdn.microsoft.com/en-us/library/windows/desktop/ms740668%28v=vs.85%29.aspx)
ta daa: WSAEADDRINUSE
There you go.
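The bit arithmetic is easy to verify; here is the decoding described above as a quick Python sketch:

hr = 0x80072740                  # the value from the question

severity = (hr >> 31) & 0x1      # 1 -> failure
facility = (hr >> 16) & 0x7FF    # 7 -> FACILITY_WIN32
code = hr & 0xFFFF               # 0x2740 -> 10048 -> WSAEADDRINUSE

print(severity, facility, code)  # prints: 1 7 10048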
According to Microsoft's documentation for Windows Sockets 2,
"[...] a specific error number can be retrieved by calling the WSAGetLastError function [and] a Winsock error code can be converted to an HRESULT for use in a remote procedure call (RPC) using HRESULT_FROM_WIN32."
I agree with @Kaz's answer that the error code you received, 0x80072740, appears to be an HRESULT. However, something feels off in the sole fact that you are even getting an HRESULT: when calling WSAGetLastError(), you should essentially get back a Win32 status code in all cases, from my understanding. I don't see any posted code, so I can't be entirely certain you didn't convert the code to an HRESULT first.
However, you are safest using the following statement when retrieving a Windows Sockets API (WSA) error code:
/* A WSA function indicated an error above. */
Result = HRESULT_FROM_WIN32 (WSAGetLastError ());
This is analogous to using the normal function GetLastError(), which returns a definite Win32 status code.
By using this statement, you can guarantee that you are always dealing with an HRESULT. Also, even if WSAGetLastError() sometimes returns an HRESULT, calling the macro function HRESULT_FROM_WIN32 will just return the same HRESULT unmodified (see the actual HRESULT_FROM_WIN32 definition here).
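If it helps, the macro's logic is simple enough to sketch in Python (hresult_from_win32 is a hypothetical name for illustration; the real macro lives in winerror.h):

FACILITY_WIN32 = 7

def hresult_from_win32(code):
    # A value whose severity bit is already set (negative as a signed
    # 32-bit int) is assumed to be an HRESULT and passes through
    # unchanged; a plain Win32/Winsock code gets the failure bit and
    # FACILITY_WIN32 stamped onto it.
    if code & 0x80000000:
        return code
    return 0x80000000 | (FACILITY_WIN32 << 16) | (code & 0xFFFF)

print(hex(hresult_from_win32(10048)))  # 0x80072740, i.e. WSAEADDRINUSE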
Lastly, when trying to figure out a specific Microsoft Windows error code, I recommend using the following error code lookup site: https://errorcodelookup.com/. The error code you provided refers to WSAEADDRINUSE (0x80072740):
"Only one usage of each socket address (protocol/network address/port) is normally permitted."

Inconsistent behavior between local actor and remote actor

This is sort of a follow up to an earlier question at Scala variable binding when used with Actors
Against others' advice, I decided to make a message containing a closure and to mutate the variable that the closure is closed over between messages, explicitly waiting for each reply.
The environment is Akka 1.2 on Scala 2.9.
Consider the following
var minAge = 18
val isAdult = (age: Int) => age >= minAge
println((actor ? answer(19, isAdult)).get)
minAge = 20
println((actor ? answer(19, isAdult)).get)
The message handler for answer essentially applies isAdult to the first parameter (19).
When actor is local, I get the answers I expect.
true
false
But when it is remote, I get
false
false
I am simply curious why this would be the behavior; I would have expected consistent behavior between the two.
Thanks in advance!
Well, you have come across what may (or may not) be considered a problem for a system where the behaviour is specified by rules which are not enforced by the language. The same kind of thing happens in Java. Here:
Client: Data d = rmiServer.getSomeData();
Client: d.mutate()
Do you expect the mutation to happen on the server as well? A fundamental issue with any system which involves remote communication, especially when that communication is transparent to a client, is understanding where that communication is occurring and what, exactly, is going on.
The communication with the actor takes the form of message-passing
An effect can pass a boundary only by the mechanism of message-passing (that is, the effect must reside within the returned value)
The actor library may transparently handle the transmission of a message remotely
If your effect is not a message, it is not happening!
What you encounter here is what I would call “greediness” of Scala closures: they never close “by-value”, presumably because of the uniform access principle. This means that the closure contains an $outer reference which it uses to obtain the value of minAge. You did not give enough context to show what the $outer looks like in your test, hence I cannot be more precise in how it is serialized, from which would follow why it prints what you show.
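Scala is not unusual here; Python closures are just as "greedy", which is exactly the local behaviour the asker observed. The same experiment, sketched in Python:

min_age = 18
is_adult = lambda age: age >= min_age  # captures the variable, not the value 18

print(is_adult(19))  # True
min_age = 20
print(is_adult(19))  # False: the closure re-reads min_age on every call

Once such a closure is serialized and shipped to a remote process, what gets captured, and when, depends on how that outer reference serializes, which is why the remote behaviour can diverge from the local one.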
One word, though: don’t send closures around like that, please. It is not a recipe for happiness, as you acknowledge yourself.

How to set a timeout in connect/send? (AS400 iSeries V5R4, RPG)

From this RPG socket tutorial we created a socket client in RPG that calls a Java server socket.
The problem is that the connect()/send() operations block, and we have a requirement that if the connect/send can't be done within, say, a second, we have to just log it and finish.
If I set the socket to non-blocking mode (I think with fcntl()), we don't fully understand how to proceed, and we can't find any useful documentation with examples for it.
I think that if I connect on a non-blocking socket, I have to call select(..., timeout), which tells us whether the connect succeeded and whether we are able to send(bytes). But if we then call send(bytes) on what is now a non-blocking socket (which will return immediately after the call), how do I know that send() actually delivered the bytes to the server before closing the socket?
I can fall back to have the client socket in AS400 as a Java or C procedure, but I really want to just keep it in a simple RPG program.
Would somebody help me understand how to do that, please?
Thanks!
In my opinion, that RPG tutorial you mention has a slight defect. What I believe is causing your confusion is the following section's code:
...
Consequently, we typically call the send() API like this:
D miscdata S 25A
D rc S 10I 0
C eval miscdata = 'The data to send goes here'
C eval rc = send(s: %addr(miscdata): 25: 0)
c if rc < 25
C* for some reason we weren't able to send all 25 bytes!
C endif
...
If you read the documentation of send(), you will see that a return value greater than -1 does not indicate an error, yet the code above treats a short send as one. In fact, send() may legitimately accept fewer bytes than you asked for; you must call it in a loop, advancing the pointer into the buffer to reflect what has been sent, until the sum of the return values equals the size of the buffer. Look here in Beej's Guide to Network Programming. You might also like to look at Richard Stevens' book UNIX Network Programming, Volume 1 for really detailed explanations.
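The looping discipline itself is language-independent; here is the idea as a short Python sketch (in RPG or C you would advance a pointer through the buffer the same way):

def send_all(sock, data):
    # send() may legitimately accept fewer bytes than requested; that is
    # not an error. Keep sending the remainder until the whole buffer
    # has been handed to the stack.
    total = 0
    while total < len(data):
        sent = sock.send(data[total:])  # returns the number of bytes taken
        if sent == 0:
            raise ConnectionError("socket connection broken")
        total += sent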
As to the problem of determining whether the last send() before close() did the actual sending: the paragraph above explains how to determine what portion of the data was sent. However, calling close() will attempt to send all unsent data unless SO_LINGER is set.
fcntl() is used to control blocking, while setsockopt() is used to set SO_LINGER.
The abstraction of network communications being used is BSD sockets. There are some slight differences in implementations across OS's but it is generally quite homogeneous. This means that one can generally use documentation written for other OS's for the broad overview. Most of the time.
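To make the timeout part concrete: since this is plain BSD-sockets behaviour, here is the non-blocking connect-with-timeout sequence sketched in Python. The setblocking/connect/select/getsockopt steps map one-to-one onto the fcntl()/connect()/select()/getsockopt() calls available to an RPG program:

import select
import socket

def connect_with_timeout(host, port, timeout=1.0):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setblocking(False)            # the fcntl(O_NONBLOCK) step
    try:
        s.connect((host, port))     # returns immediately...
    except BlockingIOError:
        pass                        # ...with EINPROGRESS, which is expected
    _, writable, _ = select.select([], [s], [], timeout)
    if not writable:                # select() timed out
        s.close()
        raise TimeoutError("connect() did not complete in time")
    err = s.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
    if err != 0:                    # connect() completed, but failed
        s.close()
        raise OSError(err, "connect() failed")
    return s

The same select() call, with the socket in the write set, also tells you when a later send() can proceed.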

How can I get Perl's Jabber::SimpleSend to work with Gmail chat?

I'm trying to write a simple Perl script to send an Instant Message. Jabber seemed like it might be the most conducive protocol. But the following script fails:
#!/usr/bin/env perl
use Jabber::SimpleSend qw(send_jabber_message);
send_jabber_message('me@gmail.com',
                    'CENSORED',
                    'you@gmail.com',
                    'subject test',
                    "body test");
It says:
Can't call method "can_read" on an undefined value at
/opt/local/lib/perl5/site_perl/5.8.9/XML/Stream.pm line 1421.
As cartman's answer points out, the code should actually be
#!/usr/bin/env perl
use Jabber::SimpleSend qw(send_jabber_message);
send_jabber_message('me%40gmail.com@talk.google.com',
                    'CENSORED',
                    'you%40gmail.com@talk.google.com',
                    'subject test',
                    "body test");
But that fails with the following error:
No SASL mechanism found
at /usr/local/lib/perl5/site_perl/5.10.0/Authen/SASL.pm line 74
I do have the Authen::SASL CPAN module installed.
Jabber::SimpleSend is the easier way to interact with a standard Jabber server, but don't let the module name mislead you: Google Talk is indeed a bit different, requiring TLS encryption (which Jabber::SimpleSend won't do) and a hostname change. You will get better results using Net::XMPP and dealing directly with its API.
See http://www.gridpp.ac.uk/wiki/Nagios_jabber_notification for a well-commented, fully working implementation in 75 lines of Perl using Net::XMPP. It's intended to send Nagios notifications, but it does exactly what you need.
I'm not familiar with the code, but that line in XML::Stream is where the module begins a select() loop. Lines 523-524 are where it passes IO::Select a socket to the destination server, and IO::Select itself returns a blessed reference, which should never be undef the way XML::Stream uses it.
Something is probably modifying the "SELECT" element of the XML::Stream object in the Jabber modules, possibly in a misguided attempt to correct a server connection error. I'm sorry I couldn't be more specific.
In response to the update:
These are odd errors, and I've been meaning to look inside the Jabber modules anyway, so I took a look at the source. The following is based on the latest versions of the modules involved, as available from CPAN. This is probably not very useful unless you want to start subclassing these modules and adding code to see where something unexpected happens. (You can skip the next paragraph if you're not interested in the Jabber modules' internals.)
From the updated information, I've traced it to the point where Authen::SASL::Perl croaks on line 41. It needs a result from $parent->mechanism, and there are two possible causes, assuming Authen::SASL isn't broken. Either it's being called incorrectly with no arguments from Net::XMPP::Protocol (line 2968), which seems unlikely, or the "mechanisms" it set in the constructor for Authen::SASL don't exist. Net::XMPP::Protocol defines the "mechanisms" (GetStreamFeature called, line 2958; that method defined around line 3340) with return $self->{STREAM}->GetStreamFeature($self->GetStreamID(),$feature);, where $feature is just a string passed from the callee and the id part of the XML::Stream object's session.
Based on the original XML error and the possibility of the session id going bad, it appears that the server sends bad data at some point, unexpected by XML::Stream and unaccounted for by the modules using it. I'm not convinced that foo%40gmail.com@talk.google.com is the right user name format, but I don't know how that could be causing these errors without the Jabber server doing something wrong.
I would start fresh with different user names on a different server and see if Jabber::SimpleSend works at all, then try to capture the server's output somehow to see what XML::Stream is choking on.
Update: For what it's worth, I installed the module and I'm getting the exact same errors. Authen::SASL::Perl::PLAIN and all other prerequisites do exist. And when I set the user name to gmailaccountname@talk.google.com and enabled global warnings (e.g., #!/usr/bin/perl -w or perl -w filename.pl), XML::Stream reveals a bunch of undefined-value problems, and SimpleSend actually spits out the warning "Could not connect to Jabber server"! (No, I don't know what that really means :().
Update: I was trying to install Net::Jabber::Bot (I gave up after some SSL module errors) to see if it would solve anything, and I noticed its constructor has this option and note:
gtalk => 0 # Default to off, 1 for on. needed now due to gtalk differences from std jabber server.
which reinforces the idea that the server is doing something unusual, for which XML::Stream doesn't bother to throw an exception.
Your username should be me@gmail.com, but the server name is talk.google.com. So the first parameter should be me@gmail.com@talk.google.com, but I am not sure if Perl can grok those double @ signs. You may try escaping the first @ with %40, so that the first parameter is me%40gmail.com@talk.google.com.
Update I:
About the second error: it looks like you are missing SASL authentication modules. GMail uses SASL PLAIN authentication. So, do you have the /usr/local/lib/perl5/site_perl/5.10.0/Authen/SASL/Perl/PLAIN.pm file?
It looks like you require Authen::SASL::Cyrus (the C implementation) or Authen::SASL::Perl (the Perl implementation) to be installed, as well as Authen::SASL (which simply tries to find the best option installed on your machine and, in your case, finds neither).
Check to see if you have one of them installed.
That's my reading of the source and the manual - I've not tested this, ymmv.