MQSeries queue manager name error (Reason code 2058) - Perl

I am trying to connect to my local queue manager using the CPAN MQSeries library from a Perl script, in a Solaris environment. When I execute my script, it gives reason code 2058, which means a queue manager name error.
I have done the following to analyze this issue, but I am still getting reason code 2058:
1) Stopped and restarted the queue manager.
2) Checked the queue manager name in my code.
3) Successfully put and got a message on my queue using the amqsput and amqsget commands, but it does not work with my script.
Could anybody please help me with this? What kind of environment do I have to set up, or what configuration setting am I missing?
my $qm_name = "MQTEST";
my $compCode = MQCC_WARNING;
my $Reason = MQRC_UNEXPECTED_ERROR;
my $Hconn = MQCONN($qm_name,
$compCode,
$Reason,
) || die "Unable to Connect to Queuemanager\n";

Maybe you are running into this issue?
"By default, the MQSeries module will try to dynamically determine
whether or not the localhost has any queue managers installed, and if
so, use the "server" API, otherwise, it will use the "client" API.
This will Do The Right Thing (tm) for most applications, unless you want to connect directly to a remote queue manager from a host
which is running other queue managers locally. Since the existence of
locally installed queue managers will result in the use of the
"server" API, attempts to connect to the remote queue managers will
fail with a Reason Code of 2058."
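Separately, it can help to check the completion and reason codes that MQCONN hands back through its output parameters, rather than relying on its return value. Here is a minimal sketch, assuming the MQSeries module's default exports:

use MQSeries;   # exports MQCONN and the MQ constants by default

my $qm_name  = "MQTEST";
my $compCode = MQCC_WARNING;
my $Reason   = MQRC_UNEXPECTED_ERROR;

my $Hconn = MQCONN($qm_name, $compCode, $Reason);

# MQCONN reports failure through $compCode/$Reason, so test those
# explicitly instead of relying on the returned handle being false.
if ($compCode != MQCC_OK) {
    if ($Reason == 2058) {    # MQRC_Q_MGR_NAME_ERROR
        die "Queue manager name error: wrong name, or wrong (client/server) API in use\n";
    }
    die "MQCONN failed: CompCode=$compCode, Reason=$Reason\n";
}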


Failure/timeout invoking Lambda locally with SAM

I'm trying to get a local environment to run/debug Python Lambdas with VSCode (Windows). I'm using a provided HelloWorld example to get the hang of this, but I'm not able to invoke it.
Steps used to set up SAM and invoke the Lambda:
I have Docker installed and running
I have installed the SAM CLI
My AWS credentials are in place and working
I have no connectivity issues and I'm able to connect to AWS normally
I created the SAM application (HelloWorld) with all the files and resources; I didn't change anything.
I ran "sam build" and it finished successfully.
I ran "sam local invoke" and it failed with a timeout. I increased the timeout to 10 s; it still timed out. The HelloWorld Lambda code only prints and does nothing else, so I'm guessing the code isn't the problem, but something else relating to the container or the SAM environment itself.
C:\xxxxxxx\lambda-python3.8>sam build
Your template contains a resource with logical ID "ServerlessRestApi", which is a reserved logical ID in AWS SAM. It could result in unexpected behaviors and is not recommended.
Building codeuri: C:\xxxxxxx\lambda-python3.8\hello_world runtime: python3.8 metadata: {} architecture: x86_64 functions: ['HelloWorldFunction']
Running PythonPipBuilder:ResolveDependencies
Running PythonPipBuilder:CopySource

Build Succeeded

Built Artifacts : .aws-sam\build
Built Template : .aws-sam\build\template.yaml

C:\xxxxxxx\lambda-python3.8>sam local invoke
Invoking app.lambda_handler (python3.8)
Skip pulling image and use local one: public.ecr.aws/sam/emulation-python3.8:rapid-1.51.0-x86_64.
Mounting C:\xxxxxxx\lambda-python3.8\.aws-sam\build\HelloWorldFunction as /var/task:ro,delegated inside runtime container
Function 'HelloWorldFunction' timed out after 10 seconds
No response from invoke container for HelloWorldFunction
Any hints on what's missing here?
Thanks.
Usually, a Lambda function times out because of some resource dependency. Are you using any external resource, maybe a DB connection or some REST API call?
Please put more prints in lambda_handler (your function handler) before calling any resource; then you might know where exactly it is waiting. Also increase the timeout to 1 minute or more, because most external resource calls over HTTPS have 30-second timeouts.
The log suggests that either the container wasn't started, or SAM couldn't connect to it.
Sometimes the hostname resolution on Windows can be affected by hosts file or system settings.
Try running the invoke command as follows (this will make the container ports bind to all interfaces):
sam local invoke --container-host-interface 0.0.0.0
...additionally try setting the container-host parameter (set to localhost by default):
sam local invoke --container-host-interface 0.0.0.0 --container-host host.docker.internal
The next piece of the puzzle is incorporating these settings into VSCode. This can be done in two places:
Create samconfig.toml in the root directory of the project with the following contents. This will allow running sam local invoke from the terminal without having to add the command line argument:
version=0.1
[default.local_invoke.parameters]
container_host_interface = "0.0.0.0"
Update the launch configuration as follows to enable VSCode debugging:
...
"sam": {
"localArguments": ["--container-host-interface","0.0.0.0"]
}
...

Log4cplus: SocketAppender logging server

I wish to better understand the way the Log4cplus SocketAppender works with regard to the logging server that receives this appender's events.
I have read the Log4cplus source code for the logging server and SocketAppender, and I would be glad to have the following clarified:
Can the SocketAppender only send events to the Log4cplus logging server, and not to any other server?
And if this is the case: does it mean that if I want to send log messages to a remote machine, that machine must have the Log4cplus library installed?
I would also like to know: does this Log4cplus logging server run as a service? And does it require special configuration and pre-setup in order to use it?
Can the SocketAppender only send events to the Log4cplus logging server, and not to any other server?
Yes and yes.
does it mean that if I want to send log messages to remote machine, that machine must be installed with the Log4cplus lib?
Well, sort of. If you want to use only SocketAppender, you will have to use the logging server. You could also use SysLogAppender and send to a remote server using that; obviously, you then have to have a syslog service running there and allow it to receive from the network. You could also write your own custom appender that sends the events to whatever server you desire.
I would also like to know- does this Log4cplus logging-server run as a service?
No, it is a simple executable that listens on a socket.
and does it require special configuration and pre-setup in order to use it?
It requires a configuration file so that it knows where to log the events.
I just wanted to share how I used SocketAppender (this setup also works for Docker containers on the same network).
/usr/share/elasticsearch/config/log4j2.properties
status = error
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
appender.socket.type=Socket
appender.socket.name=socket
appender.socket.port=601
appender.socket.host=api
appender.socket.reconnectDelayMillis=10000
appender.socket.layout.type = PatternLayout
appender.socket.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
rootLogger.level = info
rootLogger.appenderRef.console.ref = console
rootLogger.appenderRef.socket.ref = socket
In the second container I used syslog-ng:
apk add syslog-ng
vi /etc/syslog-ng/syslog-ng.conf
syslog-ng -f /etc/syslog-ng/syslog-ng.conf
/etc/syslog-ng/syslog-ng.conf
#version: 3.13
source s_network {
    network(
        transport(tcp)
        port(601)
    );
};

destination d_network {
    file("/var/log/es_slowlog.log", template("${MSGHDR}${MESSAGE}\n"));
};

log {
    source(s_network);
    destination(d_network);
};
Notice that the #version: has to correspond to your version of syslog-ng. You can check it by invoking syslog-ng -V.

Issues in converting the mod_confirm_delivery module for newer binarized ejabberd versions

I have tried building the module hosted at:
https://github.com/johanvorster/ejabberd_confirm_delivery
I am using ejabberd version 14.07.
The changes I made:
1. Removed all the ?INFO_MSG statements.
2. Binarized all the strings: every occurrence of "abc" has been replaced by <<"abc">>, and so on.
What else is required?
I have been able to compile the module just fine; however, it doesn't work.
Any inputs?
It would be great if anybody on the project branch could update the Git project for the newer versions of ejabberd.
I intend to receive XMPP stanzas from every client connected to a group whenever they receive a message sent by the server.
Thanks
I think this module will generate an undef error for the send_packet function inside mod_confirm_delivery.erl. Check your error log in:
/var/log/ejabberd/ejabberd.log
In this module:
ejabberd_hooks:add(user_send_packet, _Host, ?MODULE, send_packet, 50),
This hook calls the mod_confirm_delivery:send_packet/4 function, but send_packet/4 is not defined in your module. Hence you have to update the code to match the new signature for the user_send_packet hook, which is:
user_send_packet(Packet, C2SState, From, To) -> Packet
Follow the link: https://docs.ejabberd.im/developer/hooks/

How can I detect, with Perl and Net::OpenSSH, if the remote side only handles protocol 1?

tl;dr: How do I capture stderr from within a script to get a more specific error, rather than just relying on the generic error from Net::OpenSSH?
I have a tricky problem I'm trying to resolve. Net::OpenSSH only works with protocol version 2, but we have a number of devices on the network that only support version 1. I'm trying to find an elegant way of detecting whether the remote end is the wrong version.
When connecting to a version 1 device, the following message shows up on stderr:
Protocol major versions differ: 2 vs. 1
However, the error that is returned by Net::OpenSSH is as follows:
unable to establish master SSH connection: bad password or master process exited unexpectedly
This particular error is too general and doesn't specifically indicate a protocol version difference. I need to handle protocol differences by switching over to another library, but I don't want to do that for every connection error.
We use a fairly complicated process that was originally wired for telnet-only access. We load up a "comm" object that then determines things like the type of router, etc. That comm object invokes Net::OpenSSH to pass in the commands.
Example:
my $sshHandle = eval { $commsObject->go($router) };

my $sshError = $sshHandle->{ssh}->error;
if ($sshError) {
    $sshHandle->{connect_error} = $sshError;
    return $sshHandle;
}
Here is where the protocol error shows up on stderr:
$args->{ssh} = eval {
    Net::OpenSSH->new(
        $args->{node_name},
        user        => $args->{user},
        password    => $args->{tacacs},
        timeout     => $timeout,
        master_opts => [ -o => "StrictHostKeyChecking=no" ],
    );
};
What I would like to do is pass in the stderr protocol error instead of the generic error passed back by Net::OpenSSH. I would like to do this within the script, but I'm not sure how to capture stderr from within a script.
Any ideas would be appreciated.
Capture the master stderr stream and check it afterwards.
See here how to do it.
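For example, here is a minimal sketch using the master_stderr_fh constructor option to divert the master process stderr to a temporary file and scan it afterwards ($args and $timeout are the same variables as in the question):

use File::Temp qw(tempfile);
use Net::OpenSSH;

# Send the master ssh process stderr to a temporary file.
my ($stderr_fh, $stderr_name) = tempfile();

$args->{ssh} = Net::OpenSSH->new(
    $args->{node_name},
    user             => $args->{user},
    password         => $args->{tacacs},
    timeout          => $timeout,
    master_opts      => [ -o => "StrictHostKeyChecking=no" ],
    master_stderr_fh => $stderr_fh,
);

if ($args->{ssh}->error) {
    # Read back what the master process wrote and look for the
    # protocol mismatch message before treating it as a generic error.
    open my $read_fh, '<', $stderr_name or die "open: $!";
    my $stderr_text = do { local $/; <$read_fh> };
    close $read_fh;

    if ($stderr_text =~ /Protocol major versions differ/) {
        # SSH v1-only device: switch to the fallback library here.
    }
}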
Another approach you can use is just to open a socket to the remote SSH server. The first thing it sends back is its version string. For instance:
$ nc localhost 22
SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-8
^C
From that information you should be able to infer if the server supports SSH v2 or not.
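A rough sketch of that check with a plain socket, assuming the device listens on the standard port 22 (a server announcing "SSH-1.99-..." supports both protocol versions):

use IO::Socket::INET;

# Read the SSH identification string the server sends first,
# e.g. "SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-8" or "SSH-1.5-...".
my $sock = IO::Socket::INET->new(
    PeerAddr => $args->{node_name},
    PeerPort => 22,
    Timeout  => 5,
) or die "cannot connect: $!";

my $banner = <$sock>;
close $sock;

if ($banner =~ /^SSH-(\d+)\.(\d+)/) {
    # Protocol version "1.99" means the server accepts v2 clients too.
    my $v2_ok = ($1 == 2 or ($1 == 1 and $2 == 99));
    print $v2_ok ? "SSH v2 supported\n" : "SSH v1 only\n";
}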
Finally, if you also need to talk to SSH v1 servers, the development version of my other module Net::SSH::Any is able to do it using the OS native SSH client, though it establishes a new SSH connection for every command.
use Net::SSH::Any;

my $ssh = Net::SSH::Any->new(
    $args->{node_name},
    user                     => $args->{user},
    password                 => $args->{tacacs},
    timeout                  => $timeout,
    backends                 => 'SSH_Cmd',
    strict_host_key_checking => 0,
);
Update: In response to Bill's comment below on the issue of sending multiple commands over the same session:
The problem with sending commands over the same session is that you have to talk to the remote shell, and there isn't a way to do that reliably in a generic fashion, as every shell does things differently; this is especially true of network equipment shells, which are quite automation-unfriendly.
Anyway, there are several modules on CPAN trying to do that, implementing a handler for every kind of shell (or OS). For instance, check Oliver Gorwits's modules Net::CLI::Interact, Net::Appliance::Session and Net::Appliance::Phrasebook. The phrasebook approach seems quite suitable; see the sketch below.
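For illustration, a sketch along the lines of the Net::Appliance::Session synopsis; the personality, host and credentials are hypothetical placeholders, so check the module documentation for the exact options:

use Net::Appliance::Session;

# Hypothetical device and credentials, for illustration only.
my $s = Net::Appliance::Session->new({
    personality => 'ios',     # phrasebook for the device's shell dialect
    transport   => 'SSH',
    host        => 'router.example.com',
});

$s->connect({ username => 'user', password => 'secret' });
print $s->cmd('show ip int brief');   # commands reuse the same session
$s->close;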

Torque pbs_python submit job error (15025 queue already exists)

I am trying to execute this example script (https://oss.trac.surfsara.nl/pbs_python/wiki/TorqueUsage/Scripts/Submit):
#!/usr/bin/env python
import sys
sys.path.append('/usr/local/build_pbs/lib/python2.7/site-packages/pbs/')
import pbs
server_name = pbs.pbs_default()
c = pbs.pbs_connect(server_name)
attropl = pbs.new_attropl(4)
# Set the name of the job
#
attropl[0].name = pbs.ATTR_N
attropl[0].value = "test"
# Job is Rerunable
#
attropl[1].name = pbs.ATTR_r
attropl[1].value = 'y'
# Walltime
#
attropl[2].name = pbs.ATTR_l
attropl[2].resource = 'walltime'
attropl[2].value = '400'
# Nodes
#
attropl[3].name = pbs.ATTR_l
attropl[3].resource = 'nodes'
attropl[3].value = '1:ppn=4'
# A1.tsk is the job script filename
#
job_id = pbs.pbs_submit(c, attropl, "A1.tsk", 'batch', 'NULL')
e, e_txt = pbs.error()
if e:
    print e, e_txt
print job_id
But the shell shows the error "15025 Queue already exists". With qsub, the job submits normally. I have one queue, 'batch', on my server. Torque version: 4.2.7; pbs_python version: 4.4.0.
What should I do to start a new job?
There are two things going on here. First, there is an error in pbs_python that maps the 15025 error code to "Queue already exists". Looking at the source of Torque, we see that 15025 actually maps to the error "Bad UID for job execution". This means that the daemon on the Torque server cannot determine whether the user you are submitting as is allowed to run jobs. This could be because of several things:
1. The user you are submitting as doesn't exist on the machine running pbs_server.
2. The host you are submitting from is not in the "submit_hosts" parameter of the pbs_server.
Solution For 1
The remedy for this depends on how you authenticate users across systems. You could use /etc/hosts.equiv to specify users/hosts allowed to submit; this file would need to be distributed to all the Torque nodes as well as the Torque server machine. Using hosts.equiv is pretty insecure, though, and I haven't actually used it for this. We use a central LDAP server to authenticate all users on the network and do not have this problem. You could also manually add the user to all the Torque nodes and the Torque server, taking care to make sure the UID is the same on all systems.
Solution For 2
If #1 is not your problem (which I doubt it is), you probably need to add the hostname of the machine you're submitting from to the "submit_hosts" parameter on the Torque server. This can be accomplished with qmgr:
[root@torque_server ]# qmgr -c "set server submit_hosts += hostname.example.com"
The pbs_python library that you are using was written for Torque 2.4.x.
The internal APIs of Torque were largely rewritten in Torque 4.0.x, so the library will most likely need to be rewritten for the new API.
Currently the developers of Torque do not test any external libraries, so it is possible that they could break at any time.