Perl: Net::Appliance::Session SSH connection gives weird characters in the output - perl

I'm using perl 5.8.8 with Net::Appliance::Session 1.36 and vim as editor.
In the dev environment, when I connect to a Linux-based network box and send commands while capturing the output, I get weird characters here and there, plus a doubled prompt:
Cisco Mobility Service Engine
root@xxx.xxx.xxx.xxx's password:
Last login: Thu May 17 14:19:01 from xxx.xxx.xxx.xxx^M
^[]0;root@HOSTNAME:~^G^[[?1034h[root@HOSTNAME ~]# #no-paging-command^M
^[]0;root@HOSTNAME:~^G[root@HOSTNAME ~]# $TIMESTAMP$=1526592421 ^Gecho no-paging-command^M
no-paging-command^M
^[]0;root@HOSTNAME:~^G[root@HOSTNAME ~]# ^G^M
^[]0;root@HOSTNAME:~^G[root@HOSTNAME ~]# $TIMESTAMP$=1526592421 ^Ggetserverinfo^M
The strange characters I wrote about are:
^[]0;
The output looks cleaner in the test and production environments:
Cisco Mobility Service Engine
root@xxx.xxx.xxx.xxx's password:
Last login: Tue May 15 10:59:10 from xxx.xxx.xxx.xxx^M
[root@HOSTNAME ~]# #no-paging-command^M
[root@HOSTNAME ~]# $TIMESTAMP$=1526517216 ^Gecho no-paging-command^M
no-paging-command^M
[root@HOSTNAME ~]# ^G^M
[root@HOSTNAME ~]# $TIMESTAMP$=1526517216 ^Ggetserverinfo^M
What could be the root cause of the different output in this case?
Thank you in advance!

You were right about those strange characters: they are terminal escape sequences (ANSI/xterm control codes) that set additional terminal features, and they are very annoying for a Perl script to parse.
The way I got rid of them was to run the script from crontab, the same way the scripts are run in production.
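If running from cron is ever not an option, another approach (a minimal sketch of mine, not part of the original fix; the file names are placeholders) is to strip the terminal escape sequences from the captured output before parsing, either with a filter like the one below or by applying the same substitutions to the captured string inside the Perl script:
# Hypothetical post-processing filter: removes xterm title sequences (ESC ] ... BEL),
# CSI sequences such as ESC [ ? 1034 h, and stray BEL/CR characters.
perl -pe 's/\e\][^\a]*\a//g; s/\e\[[0-9;?]*[a-zA-Z]//g; tr/\a\r//d' raw_output.txt > clean_output.txt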

Related

Perl modules not installing. sh: 1: gzip: Exec format error

I am running into issues when trying to install Perl modules on Ubuntu running on Windows 10.
I am not sure of the best way to post the output; I am guessing maybe as "code":
$ cpan HTTP::Tiny
Loading internal logger. Log::Log4perl recommended for better logging
Reading '/home/user/.cpan/Metadata'
Database was generated on Sun, 06 Nov 2022 23:29:01 GMT
Running install for module 'HTTP::Tiny'
Fetching with HTTP::Tiny:
http://www.cpan.org/authors/id/D/DA/DAGOLDEN/HTTP-Tiny-0.082.tar.gz
Fetching with HTTP::Tiny:
http://www.cpan.org/authors/id/D/DA/DAGOLDEN/CHECKSUMS
Checksum for /home/user/.cpan/sources/authors/id/D/DA/DAGOLDEN/HTTP-Tiny-0.082.tar.gz ok
sh: 1: /usr/bin/gzip: Exec format error
/usr/bin/tar: This does not look like a tar archive
/usr/bin/tar: Exiting with failure status due to previous errors
Uncompressed /home/user/.cpan/sources/authors/id/D/DA/DAGOLDEN/HTTP-Tiny-0.082.tar.gz successfully
Using Tar:/usr/bin/tar xf "HTTP-Tiny-0.082.tar":
Untarred HTTP-Tiny-0.082.tar successfully
'YAML' not installed, will not store persistent state
Package contains both files[HTTP-Tiny-0.082.tar] and directories[HTTP-Tiny-0.082]; not recognized as a perl package, giving up
Configuring D/DA/DAGOLDEN/HTTP-Tiny-0.082.tar.gz with Makefile.PL
Running make for D/DA/DAGOLDEN/HTTP-Tiny-0.082.tar.gz
make: *** No targets specified and no makefile found. Stop.
DAGOLDEN/HTTP-Tiny-0.082.tar.gz
/usr/bin/make -- NOT OK
I am really stuck on how to fix this; I have been scratching my head for a few days now, and any assistance would be really appreciated.
Based on the log it seems to be an issue specifically with Ubuntu 22.04 on WSL1:
sh: 1: gzip: Exec format error
see: gzip from Ubuntu Jammy doesn't execute #8219
You could install an earlier version of gzip via apt that works, but the best route would be to upgrade to WSL 2, as it has many benefits and does not have this issue.
See the comparison page, which also covers the rare cases where you might want to stay on WSL 1.
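If you do go the WSL 2 route, the conversion is done from the Windows side rather than inside Ubuntu; a rough sketch (the distro name below is an assumption, use whatever wsl -l -v reports for your installation):
# Run from PowerShell or cmd on the Windows host:
wsl -l -v                          # list installed distros and their WSL versions
wsl --set-version Ubuntu-22.04 2   # convert the named distro to WSL 2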

How to debug an OID that cannot be translated with a new MIB file (UPS-MIB)?

On CentOS, I ran into the following error:
sudo snmptrap -v 2c -c read localhost '' UPS-MIB::upsTraps
MIB search path: /root/.snmp/mibs:/usr/share/snmp/mibs
Cannot find module (UPS-MIB): At line 0 in (none)
UPS-MIB::upsTraps: Unknown Object Identifier
The above error happened after I copied UPS-MIB.txt to /usr/share/snmp/mibs and started snmptrapd:
snmptrapd -f -Lo -Dread-config -m ALL
The version of the Net-SNMP is 5.2.x.
The same procedures work fine with Ubuntu 18.04/Net-SNMP 5.3.7.
How can I debug and fix the problem?
Besides the Net-SNMP version difference: on Ubuntu, I found instructions to install mib-download-tool, run it after installing Net-SNMP, and comment out the lines beginning with min: in snmp.conf, in order to fix errors about missing MIBs.
However, for CentOS I found no such instructions and had no need for them, so I have not done this; there is no error message about missing MIBs.
The MIB file was downloaded from https://tools.ietf.org/rfc/rfc1628.txt and renamed to UPS-MIB.txt. (It seems to me that the name of the MIB file does not matter, as long as it is unique? I tried different names, upsMIB.txt and rfc1628.txt, but that did not help.)
I solved the problem as follows:
I manually copied /usr/share/snmp/mibs/ietf/UPS-MIB from an Ubuntu machine with Net-SNMP 5.7.3 installed to /usr/share/snmp/mibs/UPS-MIB on the CentOS machine,
then restarted snmpd with the command:
service snmpd restart
After that, the OIDs from UPS-MIB became visible and accessible.
Maybe the version that I downloaded from https://tools.ietf.org/rfc/rfc1628.txt is not suitable? (A likely explanation: the raw RFC text wraps the MIB module in page headers, footers and surrounding prose, which the MIB parser cannot handle, whereas the Ubuntu file is the already-extracted module.)
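One way to debug this kind of problem (a suggestion of mine, not part of the original answer) is to take snmptrapd out of the picture and ask snmptranslate to load the module directly; its parser debug output usually shows where a MIB file is being rejected:
# Verify the module can be found and the OID translated:
snmptranslate -M +/usr/share/snmp/mibs -m UPS-MIB -IR upsTraps
# If that fails, turn on the MIB parser's debug output to see the parse errors:
snmptranslate -Dparse-mibs -m UPS-MIB -IR upsTraps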

Custom Munin plugin won't report

I've built my first Munin plugin to give us the size of our Redis queue, but it won't report for some reason. Every other plugin on the node, including the other Redis-centric plugins, works fine.
Here's the plugin code:
#!/bin/sh
case $1 in
config)
cat <<'EOM'
multigraph redis_queue_size
graph_title Redis Queue Size
graph_info The size of Redis queue
graph_category redis
graph_vlabel Messages
redisqueue.label redisqueue
redisqueue.type GAUGE
redisqueue.min 0
EOM
exit 0;;
esac
queuelength=`redis-cli llen mykeyname`
printf "redisqueue.value "
echo $queuelength
The plugin is in /usr/share/munin/plugins/redis_queue_
The plugin is symlinked to /etc/munin/plugins/redis_queue_
I made sure to restart the service
$ sudo service munin-node force-reload
If I run sudo munin-run redis_queue_ I get the correct output:
redisqueue.value 1567595
If I run munin-node-config I get the following:
redis_queue_ | yes |
If I connect to the instance from the master using telnet to fetch the plugin, I get:
$ telnet 10.101.21.56 4949
Trying 10.101.21.56...
Connected to 10.101.21.56.
Escape character is '^]'.
# munin node at redis01.example.com
fetch redis_queue_
redisqueue.value 1035336
The master shows an empty graph for it, but the "last updated" time isn't increasing. I initially had the plugin configured a little differently (it wasn't producing good output) so all the values are -nan. Once I fixed the output, I expected the plugin to start working, but all efforts have failed.
Everything looks right, but there are still no values in the graph.
Edit: Munin v1.4.6
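One possible cause, offered purely as an assumption based on how the multigraph protocol works rather than anything stated above: the plugin prints the multigraph redis_queue_size line only in its config output, but a multigraph plugin's fetch output also has to begin with the same multigraph line, otherwise the master cannot attribute the values to a graph. A sketch of the data section with that line added:
# Data section of the plugin, with the multigraph header repeated before the value:
queuelength=`redis-cli llen mykeyname`
echo "multigraph redis_queue_size"
echo "redisqueue.value $queuelength"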

python-memcache / memcached -- installed on a CentOS VirtualBox VM, but get/set never seems to work

I'm using Python. I did a yum install memcached followed by an easy_install python-memcached.
I used the simple test program from help(memcache). When I wasn't getting the proper answers, I threw in some print statements:
[~/test]$ cat m2.py
import memcache
mc = memcache.Client(['127.0.0.1:11211'], debug=0)
x = mc.set("some_key", "Some value")
print 'Just set a key and value into the cache (suposedly)'
value = mc.get("some_key")
print 'Just retrieved that value from the cache using the key'
print 'X %s' % x
print 'Value %s' % value
[~/test]$ python m2.py
Just set a key and value into the cache (suposedly)
Just retrieved that value from the cache using the key
X 0
Value None
[~/test]$
The question now is, what have I failed to do in my installation? It appears to be working from an API perspective, but it fails to put anything into the shared memcache area.
I'm using a VirtualBox VM running CentOS:
[~]# cat /proc/version
Linux version 2.6.32-358.6.2.el6.i686 (mockbuild@c6b8.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Thu May 16 18:12:13 UTC 2013
Is there a daemon that is supposed to be running? I don't see an obviously named one when I do a ps.
I tried to get pylibmc installed on my VM but was unable to find a working installation, so for now I will see if I can get the above stuff working first.
I discovered that if I run it straight from the Python console I get a bit more output when I set debug=1:
>>> mc = memcache.Client(['127.0.0.1:11211'], debug=1)
>>> mc.stats
{}
>>> mc.set('test','value')
MemCached: MemCache: inet:127.0.0.1:11211: connect: Connection refused. Marking dead.
0
>>> mc.get('test')
MemCached: MemCache: inet:127.0.0.1:11211: connect: Connection refused. Marking dead.
When I try to use telnet to connect to the port, as in the example, I get a connection refused:
[root@~]# telnet 127.0.0.1 11211
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
[root@~]#
I tried the instructions I found on the net for configuring telnet so localhost wouldn't be disabled:
vi /etc/xinetd.d/telnet
service telnet
{
flags = REUSE
socket_type = stream
wait = no
user = root
server = /usr/sbin/in.telnetd
log_on_failure += USERID
disable = no
}
And then ran the commands to restart the service(s):
service iptables stop
service xinetd stop
service iptables start
service xinetd start
service iptables stop
I ran it in both cases (iptables started and stopped), but it had no effect, so I am out of ideas. What do I need to do so that the port is allowed, if that is even the problem?
Or is there a memcached service that needs to be running in order to open up the port?
Well, this is what it took to get it working (a series of manual steps):
1) su -
cd /var/run
mkdir memcached # this was missing
In the memcached file I added "-l 127.0.0.1" to the OPTIONS statement; it's apparently a listen option. Do this for both steps 2 and 3, since I'm not certain which file is actually used at runtime (see the sketch after this list for what the resulting file might look like).
2) cd /etc/sysconfig
cp memcached memcached.old
vi memcached
3) cd /etc/init.d
cp memcached memcached.old
vi memcached
4) Try some commands to see if the server starts now
/etc/init.d/memcached start
/etc/init.d/memcached status
/etc/init.d/memcached stop
/etc/init.d/memcached restart
5) I tried opening http://127.0.0.1:11211 in a browser, but it never seemed to actually display anything, so I don't really know how valid this approach is. I'm not running Apache or anything like that, so perhaps it's not relevant in my case. Perhaps I would have to supply a ?key=blah or something.
6) Now it should be ready to go. If you run the test shown below, it should work; at least it did for me. Running help(memcache) will display a simple example program. Just paste that in and it should work fine.
[~]$ python
>>> import memcache
>>> help(memcache)
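For reference, a sketch of what the /etc/sysconfig/memcached file from step 2 might look like after the edit; only the "-l 127.0.0.1" option comes from the steps above, the other values are the stock CentOS defaults as far as I recall:
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1"    # listen on localhost; this is the added option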

Fabric takes a long time with ssh

I am running fabric to automate deployment. It is painfully slow.
My local environment:
(somenv)bob@sh ~/code/somenv/somenv/fabfile $ > uname -a
Darwin sh.local 12.4.0 Darwin Kernel Version 12.4.0: Wed May 1 17:57:12 PDT 2013; root:xnu-2050.24.15~1/RELEASE_X86_64 x86_64
My fab file:
#!/usr/bin/env python
import logging
import paramiko as ssh
from fabric.api import env, run
env.hosts = [ 'examplesite']
env.use_ssh_config = True
#env.forward_agent = True
logging.basicConfig(level=logging.INFO)
ssh.util.log_to_file('/tmp/paramiko.log')
def uptime():
    run('uptime')
Here is the portion of the debug logs:
(somenv)bob@sh ~/code/somenv/somenv/fabfile $ > date;fab -f /Users/bob/code/somenv/somenv/fabfile/pefabfile.py uptime
Sun Aug 11 22:25:03 EDT 2013
[examplesite] Executing task 'uptime'
[examplesite] run: uptime
DEB [20130811-22:25:23.610] thr=1 paramiko.transport: starting thread (client mode): 0x13e4650L
INF [20130811-22:25:23.630] thr=1 paramiko.transport: Connected (version 2.0, client OpenSSH_5.9p1)
DEB [20130811-22:25:23.641] thr=1 paramiko.transport: kex algos:['ecdh-sha2-nistp256', 'ecdh-sha2-nistp384', 'ecdh-sha2-nistp521', 'diffie-hellman-grou
It takes 20 seconds before paramiko even starts the thread. Surely executing the 'uptime' task does not take that long; I can manually log in through ssh, type uptime, and exit in 5-6 seconds. I'd appreciate any help on how to extract more debug information. I made the changes mentioned here, but it made no difference.
Try:
env.disable_known_hosts = True
See: https://github.com/paramiko/paramiko/pull/192 and "Slow public key authentication with paramiko".
Maybe it is a problem with DNS resolution and/or IPv6.
A few things you can try:
replace the server name with its IP address in env.hosts
disable IPv6
use another DNS server (e.g. OpenDNS)
For anyone looking at this post-2014: paramiko, which was the slow component when checking known hosts, introduced a fix in March 2014 (v1.13). Fabric allowed that version as a requirement in v1.9.0, and the fix was also picked up by the backported releases v1.8.4 and v1.7.4.
So, upgrade!
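A rough sketch of the upgrade, assuming a pip-managed Fabric 1.x environment (the exact pins are up to you, as long as they meet the versions above):
# Stay on the Fabric 1.x series (the fabfile above uses the 1.x API) and pull in the paramiko fix.
pip install --upgrade 'fabric>=1.9,<2' 'paramiko>=1.13'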