I have an expect script that is logging into a pfSense/BSD box over SSH, it's called by a Perl script and passes the output back to the Perl script (a RANCID plugin).
Currently I am getting the following output:
+ spawn ssh -2 -x -l rancid my-pfsense-device.fqdn.com
+ Password:
+ Last login: Wed Dec 19 10:28:47 2012 from 89.21.224.35
+ Copyright (c) 1980, 1983, 1986, 1988, 1990, 1991, 1993, 1994
+ The Regents of the University of California. All rights reserved.
+
+
+ [0;1;33m[ [0;1;37m2.0.1-RELEASE [0;1;33m] [0;1;33m[ [0;1;37mrancid [0;1;31m# [0;1;37mmy-pfsense-device.fqdn.com [0;1;33m] [0;1;32m/home/rancid [0;1;33m( [0;1;37m1 [0;1;33m) [0;1;36m [0;1;31m: [0;40;37m
uname -a
+ FreeBSD my-pfsense-device.fqdn.com 8.1-RELEASE-p6 FreeBSD
8.1-RELEASE-p6 #0: Mon Dec 12 18:59:41 EST 2011
root#FreeBSD_8.0_pfSense_2.0-snaps.pfsense.org:/usr/obj./usr/pfSensesrc/src/sys/pfSense_wrap.8.i386
i386
+ [0;1;33m[ [0;1;37m2.0.1-RELEASE [0;1;33m] [0;1;33m[ [0;1;37mrancid [0;1;31m# [0;1;37mmy-pfsense-device.fqdn.com [0;1;33m] [0;1;32m/home/rancid [0;1;33m( [0;1;37m2 [0;1;33m) [0;1;36m [0;1;31m: [0;40;37m
cat /cf/conf/config.xml
+ <?xml version="1.0"?>
+ <pfsense>
The problem here is that the login prompt looks like this when logged in via SSH from my desktop:
[2.0.1-RELEASE][username#my-pfsense-device.fqdn.com]/home/username(1):
But this is in a variety of colours. As you can see in the output above, SSH is being passed all the colouring information, which then gets output to my expect script. The same line looks like this in the above output:
[0;1;33m[ [0;1;37m2.0.1-RELEASE [0;1;33m] [0;1;33m[ [0;1;37musername [0;1;31m# [0;1;37mmy-pfsense-device.fqdn.com [0;1;33m] [0;1;32m/home/username [0;1;33m( [0;1;37m1 [0;1;33m) [0;1;36m [0;1;31m: [0;40;37m
Is there a way I can strip this out in my script? Is it a standard colour format that can be regex'ed out, or perhaps do I need to change an option on my SSH client to ignore the colouring info?
Whether or not a terminal can display colors is determined by the TERM environment variable. It's usually set to something like xterm, linux or screen (maybe with a -256color postfix for even more goodness). Hopefully the shell, and more importantly the shell initialization scripts, on the other side pay attention to TERM and only use color if the terminal on the caller's side actually supports it.
You can try setting that variable right before your call to ssh. The usual value to set it to for "terminal with no interactive and no color capabilities" is dumb.
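If the remote side still hard-codes colours in its prompt, the sequences you are seeing are standard ANSI SGR escapes (ESC [ ... m), so they can also be stripped with a regex. Here is a minimal sketch of both approaches inside the expect script; the spawn line is copied from your output, the rest is an assumption about how your script is laid out:
#!/usr/bin/expect -f
# Advertise a terminal with no colour support before spawning ssh,
# so well-behaved shells on the pfSense box won't emit colour at all.
set env(TERM) dumb

spawn ssh -2 -x -l rancid my-pfsense-device.fqdn.com
# ... existing password/prompt handling ...

# Fallback: strip ANSI colour (SGR) sequences, ESC [ ... m, from whatever
# was captured before handing it back to the Perl RANCID plugin.
set clean [regsub -all {\x1b\[[0-9;]*m} $expect_out(buffer) ""]
The same pattern works on the Perl side as s/\e\[[0-9;]*m//g if it is easier to clean the output in the plugin itself.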
I "borrowed" the LPINFOX REXX program from this url: [http://www.longpelaexpertise.com/toolsLPinfoX.php]
When I run it "directly" (EX 'hlq.EXEC(LPINFOX)') it runs fine:
------------------------------------------------------
LPInfo: Information for z/OS ssssssss as of 18 Mar 2021
------------------------------------------------------
z/OS version: 02.04
Sysplex name: LOCAL
JES: JES2 z/OS 2.4 (Node nnnn)
Security Software: RACF
CEC: 3907-Z02 (IBM Z z14 ZR1)
CEC Serial: ssssss
CEC Capacity mmmm MSU
LPAR name: llll
LPAR Capacity mmm MSU
Not running under a z/VM image
But, if I insert the call into another exec, I get a RC -2 from the address LINKPGM call:
------------------------------------------------------
LPInfo: Information for z/OS ssssssss as of 18 Mar 2021
------------------------------------------------------
z/OS version: 02.04
Sysplex name: LOCAL
JES: JES2 z/OS 2.4 (Node N1)
Security Software: RACF
79 - Address Linkpgm 'IWMQVS QVS_Out'
+++ RC(-2) +++
CEC: -
CEC Serial:
LPAR name:
Not running under a z/VM image
I'm sure this has to do with the second level of REXX program running, but what can I do about the error (besides queueing up the EXecution of the second REXX)? I'm also stumped on where this RC is documented...my Google search for "REXX ADDRESS RC -2" comes up short.
Thanks,
Scott
PS(1), per answer from @phunsoft:
Interesting. I didn't copy the code to my other REXX. I invoked LPINFOX from within another REXX: I have a hlq.LOGIN.EXEC that has an "EX 'hlq.LPINFOX.EXEC'" statement within it. When I reduce the first exec to "TEST1" (follows), it fails the same way:
/* REXX */
"EXECUTIL TS"
"EX 'FAGEN.LPINFOX.EXEC'"
exit 0
When I run TEST1, this is the output from the EXECUTIL from around the IWMQVS call:
When I run LPINFOX.EXEC directly from the command line, the output is the same, except the address LINKPGM IWMQVS works fine:
I can only surmise that there is some environmental difference when I run the exec "standalone" vs. when I run the exec from another exec.
PS(2), per question about replacing IWMQVS with IEFBR14 from phunsoft:
Changing the program to IEFBR14 doesn't change the result, RC=-2.
LINKPGM is a TSO/E REXX host command environment, so you need to search in the TSO/E REXX Reference. From that book:
Additionally, for the LINKMVS, ATTCHMVS, LINKPGM, and ATTCHPGM
environments, the return code set in RC may be -2, which indicates that processing
of the variables was not successful. Variable processing may have been
unsuccessful because the host command environment could not:
o Perform variable substitution before linking to or attaching the program
o Update the variables after the program completed
Difficult to say what the problem is without seeing the code.
You may want to use REXX's trace feature to debug. Do you run this REXX from TSO/E foreground? If so, you might run TSO EXECUTIL TS just before you start that REXX. It will then run as if trace ?i was specified as the first line of the code.
I've had a look at the LPINFOX EXEC and saw that the variable QVS_Out is set as follows just before linking to IWMQVS:
QVS_Outlen = 500 /* Output area length */
QVS_Outlenx = Right(x2c(d2x(QVS_Outlen)),4,d2c(0))
/* Get length as fullword */
QVS_Out = QVS_Outlenx || Copies('00'X,QVS_Outlen-4)
Did you do this also when you copied the call to your other REXX?
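For reference, the whole pattern looks roughly like this; it is only a sketch, with the variable setup copied from the LPINFOX code quoted above and the call taken from your trace output:
/* REXX - minimal LINKPGM sketch for IWMQVS                        */
QVS_Outlen  = 500                                  /* output area length  */
QVS_Outlenx = Right(x2c(d2x(QVS_Outlen)),4,d2c(0)) /* length as fullword  */
QVS_Out     = QVS_Outlenx || Copies('00'X,QVS_Outlen-4)

Address Linkpgm "IWMQVS QVS_Out"                   /* pass variable by name */
If rc = -2 Then
  Say "LINKPGM could not process variable QVS_Out, RC =" rc
Else
  Say "IWMQVS returned RC =" rc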
I've been testing a UMDF IddCx video driver, and this message just started appearing (after devcon.exe install ...) along with a breakpoint in WinDbg:
(DriverEntry and EVT_WDF_DRIVER_DEVICE_ADD handlers succeed as they did prior to this error message)
.
.
.
<==CDriver::OnWdfDriverDeviceAdd [status: STATUS_SUCCESS]
A mismatch between the PNP/INF version and the KMD file version on the graphics adapter has been detected. The adapter will fail to start.
(WinDbg breaks here -- see stack below)
==>CAdapter::OnWdfDeviceD0Entry(hWdfDevice: <hWdfAdapterDevice>, previousState: 5)
.
.
.
Stack info (Windows 10 Pro | Test Mode | Build 19041.vb_release.191206-1406):
[0x0] dxgkrnl!DpiFdoValidateKmdAndPnpVersionMatch + 0x88e5c
[0x1] dxgkrnl!DpiFdoInitializeFdo + 0x313
[0x2] dxgkrnl!DpiAddDevice + 0x1942
[0x3] nt!PpvUtilCallAddDevice + 0x3b
[0x4] nt!PnpCallAddDevice + 0x94
[0x5] nt!PipCallDriverAddDevice + 0x827
[0x6] nt!PipProcessDevNodeTree + 0x333
[0x7] nt!PiRestartDevice + 0xba
[0x8] nt!PnpDeviceActionWorker + 0x46a
[0x9] nt!ExpWorkerThread + 0x105
[0xa] nt!PspSystemThreadStartup + 0x55
[0xb] nt!KiStartSystemThread + 0x28
I don't understand what this means; I haven't changed anything in the INF, and this is a UMDF driver, so what "KMD file version" is it referring to? I searched for the message itself and also DpiFdoValidateKmdAndPnpVersionMatch, but came up empty.
EDIT: (adding version info)
Windows Version Info:
---------------------
Edition ....... Windows 10 Pro
Version ....... 20H2
Installed on .. 1/5/2021
OS build ...... 19042.685
Experience .... Windows Feature Experience Pack 120.2212.551.0
Can anyone shed light on this?
The symbol DpiFdoValidateKmdAndPnpVersionMatch doesn't exist in 1909, so it must be a new addition to 20H2.
Anyway, the string in question does exist in 1909.
The failure is supposedly propagated after IoQueryFullDriverPath() and GetFileVersion().
The int3 is hardcoded right after the DebugPrintEx().
The function in question, ADAPTER_RENDER::Initialize(), does a lot of comparisons with hardcoded DWORDs like 'QCOM' etc.
C:\> radare2 -Q -qq -c "fs strings;f~mismatch" c:\Windows\System32\drivers\dxgkrnl.sys
0x1c0076940 139 str.A_mismatch_between_the_PNP_INF_version_and_the_KMD_file_version_on_the_graphics_adapter_has_been_detected._The_adapter_will_fail_to_start.
C:\> radare2 -A -Q -qq -c "axt 0x1c0076940" c:\Windows\System32\drivers\dxgkrnl.sys
fcn.1c015be84 0x1c0181f01 [DATA] lea r8, str.A_mismatch_between_the_PNP_INF_version_and_the_KMD_file_version_on_the_graphics_adapter_has_been_detected._The_adapter_will_fail_to_start.
I was just googling around looking for something related to the INF and GetKmdFileVersion, and it seems you need to provide a specific version string.
See if you comply with this. Specifically, quoting from the doc:
Drivers will report WDDM 2.1 support through DXGK_DRIVERCAPS::WDDMVersion with a new version constant: DXGK_WDDMVERSION::DXGKDDI_WDDMv2_1 = 0x2100.
Dxgkrnl will not use the WDDMVersion cap as a way to determine which new features are supported; that task will be left to other caps or DDI presence. However, if the driver reports WDDM 2.1 support through the WDDMVersion cap, dxgkrnl will validate that caps or DDIs required by WDDM 2.1 are present and fail to create the adapter if they are not. Inconsistent caps will result in failure to create adapter or segment.
Please try adding the following registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers\DisableVersionMismatchCheck = 1 [DWORD Type]
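For instance, from an elevated command prompt (the value name is the one above; it appears to be a test-only workaround, so consider removing it once the driver installs cleanly):
reg add "HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers" ^
    /v DisableVersionMismatchCheck /t REG_DWORD /d 1 /f
A reboot is probably needed before dxgkrnl picks the value up.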
I am using PowerShell to run sqlplus, and I would like PowerShell to detect whether there were errors after the script has run and perform some action, instead of me having to look at the result file.
& 'sqlplus' 'system/myOraclePassword' '#Test' | out-file 'result.txt';
Normally in DOS there is %errorlevel% when a command encounters an error, and I wonder if there is similar stuff in PowerShell?
Of course, I can read the log file myself, but sometimes things get too routine and I may forget.
My Test.sql:
select level from dual
connect by level<5;
select 10/0 from dual;
quit;
There is clearly a division-by-zero error. The result.txt file captures it, but I would like PowerShell to detect it as well:
SQL*Plus: Release 12.1.0.2.0 Production on Thu Apr 27 16:24:30 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Last Successful login time: Thu Apr 27 2017 16:17:34 -04:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
LEVEL
----------
1
2
3
4
select 10/0 from dual
*
ERROR at line 1:
ORA-01476: divisor is equal to zero
Does PowerShell return an errorlevel after the statement is executed, like DOS does?
I have tried:
& 'sqlplus' 'system/myOraclePassword' '#Test' | out-file 'result.txt';
if (errorlevel 1)
{ write-host error;
}
else
{ write-host ok;
}
But that caused a syntax error:
errorlevel : The term 'errorlevel' is not recognized as the name of a cmdlet,
function, script file, or operable program. Check the spelling of the
name, or if a path was included, verify that the path is correct and
try again.
What is the proper way to check for errors in PowerShell?
UPDATE
I used this:
if ($LASTEXITCODE -ne 0 )
{
write-host error;
}
else
{
write-host ok;
}
Since you are invoking an executable, you probably want to check for the $LASTEXITCODE variable or the return value of sqlplus. In PowerShell each variable has a $ prefix.
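One caveat worth checking: SQL*Plus normally exits with code 0 even when a statement inside the script fails, so $LASTEXITCODE only reflects SQL errors if the script asks for that behaviour (WHENEVER SQLERROR EXIT). Here is a sketch combining both checks, using the file names from the question:
# Assumes Test.sql begins with:  WHENEVER SQLERROR EXIT SQL.SQLCODE
& 'sqlplus' 'system/myOraclePassword' '@Test' | Out-File 'result.txt'

if ($LASTEXITCODE -ne 0) {
    Write-Host "sqlplus exited with code $LASTEXITCODE"
}
elseif (Select-String -Path 'result.txt' -Pattern 'ORA-\d+|SP2-\d+' -Quiet) {
    # Belt and braces: catch errors that did not change the exit code.
    Write-Host "error found in result.txt"
}
else {
    Write-Host "ok"
}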
I'm currently facing a problem with this team of four (OpenCOBOL, PostgreSQL, Visual Studio and Windows).
Using the binaries I downloaded from kiska's site, I'm able to compile COBOL to C and run it with cobcrun, or compile it to an executable. However, I can't get OpenCOBOL to find the PostgreSQL functions.
Here is the start of my COBOL program:
identification division.
program-id. pgcob.
data division.
working-storage section.
01 pgconn usage pointer.
01 pgres usage pointer.
01 resptr usage pointer.
01 resstr pic x(80) based.
01 result usage binary-long.
01 answer pic x(80).
procedure division.
display "Before connect:" pgconn end-display
call "PQconnectdb" using
by reference "dbname = postgres" & x"00"
by reference "host = 10.37.180.146" & "00"
returning pgconn
end-call
...
The call to PQconnectdb fails with "module not found: PQconnectdb".
I noticed that if I rename libpq.dll, the error message changes to "can't find entry point", so at least I'm sure it can find my DLL.
After digging into the code of the call method of the libcob library, I found it was possible to pre-load some DLLs using the environment variable COB_PRE_LOAD, but still no results.
Here is what the script to compile the COBOL looks like:
call "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\amd64\vcvarsamd64.bat"
set COB_CONFIG_DIR=C:\OpenCobol\config
set COB_COPY_DIR=C:\OpenCobol\Copy
set COB_LIBS=%COB_LIBS% c:\OpenCobol\libpq.lib
set COB_LIBRARY_PATH=C:\OpenCobol\bin
set COB_PRE_LOAD=C:\OpenCobol\libpq.dll
@echo on
cobc -info
cobc -free -o pgcob -L:"C:\OpenCobol" -llibpq.lib test_cobol\postgres.cob
call cobcrun pgcob
I don't see anything missing: I'm using the 64-bit binaries from kiska's site and the 64-bit cl.exe from Visual Studio, and PostgreSQL is a 64-bit version too (checked with dependencyChecker).
I even tried to compile the generated C from Visual Studio, with the same result, but I may be missing something; I'm pretty rotten in C and have never really had to manage DLLs or use Visual Studio.
What am I missing?
COB_PRE_LOAD doesn't take any path or extension; see the short documentation for the available runtime configuration variables. I guess
set COB_LIBRARY_PATH=C:\OpenCobol\bin;C:\OpenCobol
set COB_PRE_LOAD=libpq
will work. You can omit C:\OpenCobol\bin if you did not place any additional executables there.
If it doesn't work (even if it does) I'd try to get the C functions resolved at compile time. Either use
CALL STATIC "PQconnectdb" using ...
or an appropriate CALL-CONVENTION or leave the program as-is and use
cobc -free -o pgcob -L"C:\OpenCobol" -llibpq -K PQconnectdb test_cobol\postgres.cob
From cobc --help:
-K generate CALL to <entry> as static
In general: the binaries from kiska.net are quite outdated. I highly suggest getting newer ones from the official download site or, ideally, building them on your own from source; see the documentation for building GnuCOBOL with Visual Studio.
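Putting those suggestions together, the build script from the question might end up looking roughly like this (paths are the ones from the question; this is a sketch, not a verified build):
call "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\amd64\vcvarsamd64.bat"
set COB_CONFIG_DIR=C:\OpenCobol\config
set COB_COPY_DIR=C:\OpenCobol\Copy
rem Module search path only; COB_PRE_LOAD takes no path and no extension.
set COB_LIBRARY_PATH=C:\OpenCobol\bin;C:\OpenCobol
set COB_PRE_LOAD=libpq
rem -K resolves PQconnectdb statically at compile time as an alternative.
cobc -free -o pgcob -L"C:\OpenCobol" -llibpq -K PQconnectdb test_cobol\postgres.cob
cobcrun pgcob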
I'm initializing spot instances running a derivative of the standard Ubuntu 13.04 AMI by pasting a shell script into the user-data field.
This works. The script runs. But it's difficult to debug because I can't figure out where the output of the script is being logged, if anywhere.
I've looked in /var/log/cloud-init.log, which seems to contain a bunch of stuff that would be relevant to debugging cloud-init, itself, but nothing about my script. I grepped in /var/log and found nothing.
Is there something special I have to do to turn logging on?
The output of cloud-init user data already goes to /var/log/cloud-init-output.log by default, in AWS, DigitalOcean and most other cloud providers. You don't need to set up any additional logging to see it.
You could create a cloud-config file (with "#cloud-config" at the top) for your userdata, use runcmd to call the script, and then enable output logging like this:
output: {all: '| tee -a /var/log/cloud-init-output.log'}
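A minimal sketch of that cloud-config variant (the script path and contents here are made up for illustration):
#cloud-config
# Write the bootstrap script to disk, run it via runcmd, and log everything.
write_files:
  - path: /opt/bootstrap.sh          # hypothetical script name
    permissions: '0755'
    content: |
      #!/bin/sh
      echo "bootstrap starting at $(date -R)"

runcmd:
  - [ sh, /opt/bootstrap.sh ]

output: {all: '| tee -a /var/log/cloud-init-output.log'}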
So I tried to replicate your problem. Usually I work with cloud-config, so I just created a simple test user-data script like this:
#!/bin/sh
echo "Hello World. The time is now $(date -R)!" | tee /root/output.txt
echo "I am out of the output file...somewhere?"
yum search git # just for fun
ls
exit 0
Notice that, with CloudInit shell scripts, the user-data "will be executed at rc.local-like level during first boot. rc.local-like means 'very late in the boot sequence'"
After logging into my instance (a Scientific Linux machine), I first went to /var/log/boot.log and there I found:
Hello World. The time is now Wed, 11 Sep 2013 10:21:37 +0200! I am
out of the file. Log file somewhere? Loaded plugins: changelog,
kernel-module, priorities, protectbase, security,
: tsflags, versionlock 126 packages excluded due to repository priority protections 9 packages excluded due to repository
protections ^Mepel/pkgtags
| 581 kB 00:00
=============================== N/S Matched: git =============================== ^[[1mGit^[[0;10mPython.noarch : Python ^[[1mGit^[[0;10m Library c^[[1mgit^[[0;10m.x86_64 : A fast web
interface for ^[[1mgit^[[0;10m
...
... (more yum search output)
...
bin etc lib lost+found mnt proc sbin srv tmp var
boot dev home lib64 media opt root selinux sys usr
(other unrelated stuff)
So, as you can see, my script ran and was rightly logged.
Also, as expected, I had my forced log 'output.txt' in /root/output.txt with the content:
Hello World. The time is now Wed, 11 Sep 2013 10:21:37 +0200!
So... I am not really sure what is happening in your script.
Make sure you're exiting the script with
exit 0 #or some other code
If it still doesn't work, you should provide more info, like your script, your boot.log, your /etc/rc.local, and your cloudinit.log.
btw: what is your cloudinit version?