What does it mean when I get an RC (-2) from LINKPGM in a REXX exec? - zos

I "borrowed" the LPINFOX REXX program from this url: [http://www.longpelaexpertise.com/toolsLPinfoX.php]
When I run it "directly" (EX 'hlq.EXEC(LPINFOX)') it runs fine:
------------------------------------------------------
LPInfo: Information for z/OS ssssssss as of 18 Mar 2021
------------------------------------------------------
z/OS version: 02.04
Sysplex name: LOCAL
JES: JES2 z/OS 2.4 (Node nnnn)
Security Software: RACF
CEC: 3907-Z02 (IBM Z z14 ZR1)
CEC Serial: ssssss
CEC Capacity mmmm MSU
LPAR name: llll
LPAR Capacity mmm MSU
Not running under a z/VM image
But, if I insert the call into another exec, I get an RC of -2 from the address LINKPGM call:
------------------------------------------------------
LPInfo: Information for z/OS ssssssss as of 18 Mar 2021
------------------------------------------------------
z/OS version: 02.04
Sysplex name: LOCAL
JES: JES2 z/OS 2.4 (Node N1)
Security Software: RACF
79 - Address Linkpgm 'IWMQVS QVS_Out'
+++ RC(-2) +++
CEC: -
CEC Serial:
LPAR name:
Not running under a z/VM image
I'm sure this has to do with the second level of REXX program running, but what can I do about the error (besides queueing up the EXecution of the second REXX)? I'm also stumped on where this RC is documented...my Google search for "REXX ADDRESS RC -2" comes up short.
Thanks,
Scott
PS(1), per answer from @phunsoft:
Interesting. I didn't copy the code to my other REXX. I invoked LPINFOX from within another REXX: I have a hlq.LOGIN.EXEC that has an "EX 'hlq.LPINFOX.EXEC'" statement within it. When I reduce the first exec to "TEST1" (follows), it fails the same way:
/* REXX */
"EXECUTIL TS"
"EX 'FAGEN.LPINFOX.EXEC'"
exit 0
When I run TEST1, this is the output from the EXECUTIL from around the IWMQVS call:
When I run LPINFOX.EXEC directly from the command line, the output is the same, except the address LINKPGM IWMQVS works fine:
I can only surmise that there is some environmental difference when I run the exec "standalone" vs. when I run the exec from another exec.
PS(2), per question about replacing IWMQVS with IEFBR14 from phunsoft:
Changing the program to IEFBR14 doesn't change the result, RC=-2.

LINKPGM is a TSO/E REXX host command environment, so you need to search in the TSO/E REXX Reference. From that book:
Additionally, for the LINKMVS, ATTCHMVS, LINKPGM, and ATTCHPGM
environments, the return code set in RC may be -2, which indicates that processing
of the variables was not successful. Variable processing may have been
unsuccessful because the host command environment could not:
o Perform variable substitution before linking to or attaching the program
o Update the variables after the program completed
Difficult to say what the problem is without seeing the code.
You may want to use REXX's trace feature to debug. Do you run this REXX from TSO/E foreground? If so, you might run TSO EXECUTIL TS just before you start that REXX. It will then run as if trace ?i was specified as the first line of the code.
I've had a look at the LPINFOX EXEC and saw that variable QVS_Out is set as follows just before linking to IWMQVS:
QVS_Outlen = 500 /* Output area length */
QVS_Outlenx = Right(x2c(d2x(QVS_Outlen)),4,d2c(0))
/* Get length as fullword */
QVS_Out = QVS_Outlenx || Copies('00'X,QVS_Outlen-4)
Did you do this also when you copied the call to your other REXX?
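Putting the pieces together, the sequence in LPINFOX looks roughly like this (a sketch only, combining the setup lines above with the call from your trace; the RC check at the end is generic REXX, not taken from LPINFOX):
/* Sketch: build the IWMQVS output area, then link to IWMQVS      */
QVS_Outlen  = 500                                  /* Output area length */
QVS_Outlenx = Right(x2c(d2x(QVS_Outlen)),4,d2c(0)) /* Length as fullword */
QVS_Out     = QVS_Outlenx || Copies('00'X,QVS_Outlen-4)
Address Linkpgm 'IWMQVS QVS_Out'                   /* Variable passed by name */
If RC <> 0 Then
  Say 'IWMQVS call failed, RC='RC   /* -2 means LINKPGM could not process the variable */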

Related

Program and Run PIC18 with pickit4 on linux

I am on linux ubuntu and target is a PIC18F47J53.
I basically want to program the chip and then let it run, using command lines and using pickit4.
using ipecmd (from mplab x ide v5.45), this is my command:
/opt/microchip/mplabx/v5.45/sys/java/zulu8.40.0.25-ca-fx-jre8.0.222-linux_x64/bin/java -jar /opt/microchip/mplabx/v5.45/mplab_platform/mplab_ipe/ipecmd.jar -TPPK4 /P18F47J53 -M -F"/path_to_myfile.hex" -W
This is my output
DFP Version Used : PIC18F-J_DFP,1.4.41,Microchip
*****************************************************
Connecting to MPLAB PICkit 4...
Currently loaded versions:
Application version............00.06.66
Boot version...................01.00.00
Script version.................00.04.17
Script build number............db473af2f4
Tool pack version .............1.6.961
PICkit 4 is supplying power to the target (3.25 volts).
Target device PIC18F47J53 found.
Device Revision Id = 0x1
*****************************************************
Calculating memory ranges for operation...
Erasing...
The following memory area(s) will be programmed:
program memory: start address = 0x0, end address = 0x3ff
program memory: start address = 0x1fc00, end address = 0x1fff7
configuration memory
Programming/Verify complete
Program Report
30-Jan-2021, 12:54:41
Device Type:PIC18F47J53
Program Succeeded.
Operation Succeeded
All good, and it takes about 12 seconds. However, after that the PICkit 4 turns off the target power, and the PICkit LED is BLUE (I guess the "ready" state).
The main question is: how can I have the PICkit 4 keep powering the board? Is there a specific parameter? (I cannot find one in the readme.html.)
If I use the MPLAB X IPE GUI to program, programming is much quicker (3 or 4 seconds), the PICkit LED is YELLOW, and the target is left powered on. (I selected "release from reset".)
I have tried to get the log out with as many details as possible, but I cannot see the commands sent to the pickit4.
Any idea? thanks
I realize that it's been a while since you asked, but I'll put the answer here for anyone who needs it: add -OL to your command line options.
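That is, the same invocation as in the question, with -OL appended at the end:
/opt/microchip/mplabx/v5.45/sys/java/zulu8.40.0.25-ca-fx-jre8.0.222-linux_x64/bin/java -jar /opt/microchip/mplabx/v5.45/mplab_platform/mplab_ipe/ipecmd.jar -TPPK4 /P18F47J53 -M -F"/path_to_myfile.hex" -W -OL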

Where does dev_dbg write logs to?

In a device driver source in the Linux tree, I saw dev_dbg(...) and dev_err(...). Where do I find the logged messages?
One reference suggests adding #define DEBUG. The other reference involves dynamic debug and debugfs, and I got lost.
dev_dbg() expands to dynamic_dev_dbg(), dev_printk(), or a no-op, depending on the compilation flags.
#if defined(CONFIG_DYNAMIC_DEBUG)
#define dev_dbg(dev, format, ...)                       \
    do {                                                \
        dynamic_dev_dbg(dev, format, ##__VA_ARGS__);    \
    } while (0)
#elif defined(DEBUG)
#define dev_dbg(dev, format, arg...) \
    dev_printk(KERN_DEBUG, dev, format, ##arg)
#else
#define dev_dbg(dev, format, arg...)                    \
    ({                                                  \
        if (0)                                          \
            dev_printk(KERN_DEBUG, dev, format, ##arg); \
    })
#endif
dynamic_dev_dbg() and dev_printk() call dev_printk_emit() which calls vprintk_emit().
This very same function is called when you just do a plain printk(). Note that the rest of these functions, like dev_err(), end up in the same place.
Thus, obviously, the buffer is the same in all cases, i.e. the kernel's internal log buffer.
The logged message is, in the end, printed to:
1. The current console, if the kernel loglevel value (which can be changed via the kernel command line or via procfs) is high enough for the message in question, here KERN_DEBUG.
2. The internal buffer, which can be read by running the dmesg command.
Note that the data in 2. is kept only as long as there is still room in the buffer. Since the buffer is limited and circular, newer data overwrites the oldest.
Additional information on how to enable Dynamic Debug.
First of all, be sure you have CONFIG_DYNAMIC_DEBUG=y in the kernel configuration.
Assume we would like to enable all debug prints in the built-in module with the name 8250. To achieve that we simply add the following to the kernel command line: 8250.dyndbg=+p.
If the same driver is compiled as a loadable module, we may either add options 8250 dyndbg to the modprobe configuration or pass it on the shell command line when loading the module manually, like modprobe 8250 dyndbg (see the sketch below).
More details are described in the Dynamic Debug documentation.
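For example (a sketch; the modprobe.d file name is arbitrary):
# built-in driver: append this to the kernel command line
#     8250.dyndbg=+p
# loadable module: pass the option when loading it manually
sudo modprobe 8250 dyndbg
# or make it persistent via a modprobe configuration file
echo 'options 8250 dyndbg' | sudo tee /etc/modprobe.d/8250-debug.conf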
The "How certain debug prints are automatically enabled in linux kernel?" raises the question why some debug prints are automatically enabled and how DEBUG affects that when CONFIG_DYNAMIC_DEBUG=y. The answer is lying in the dynamic_debug.h and since it's used during compilation the _DPRINTK_FLAGS_DEFAULT defines the certain message appearence.
#if defined DEBUG
#define _DPRINTK_FLAGS_DEFAULT _DPRINTK_FLAGS_PRINT
#else
#define _DPRINTK_FLAGS_DEFAULT 0
#endif
You can find dev_err(...) in the kernel messages. As the name implies, dev_err(...) messages are error messages, so they will definitely be printed if execution reaches that point. dev_dbg(...) messages are debug messages, which are used much more generously in kernel driver code and are not printed by default. So everything you have read about dynamic debugging comes into play with dev_dbg(...).
There are several preconditions for dynamic debugging to work; below, 1. and 2. are general preconditions, while 3. and later are specific to the particular driver/module/subsystem you want to trace.
1. Dynamic debugging support has to be in your kernel config: CONFIG_DYNAMIC_DEBUG=y. You can check whether that is the case with zgrep DYNAMIC_DEBUG /proc/config.gz.
2. debugfs has to be mounted. You can check with sudo mount | grep debugfs and, if it is not mounted, mount it with sudo mount -t debugfs none /sys/kernel/debug.
3. Refer to the dynamic debugging documentation and enable the particular file/function/line you are interested in; a concrete example follows below.
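As an illustration of step 3, debug prints can be switched on at run time through the dynamic debug control file (the module and file names below are only examples):
# enable all dev_dbg()/pr_debug() calls in the 8250 module
echo 'module 8250 +p' | sudo tee /sys/kernel/debug/dynamic_debug/control
# or only those in a particular source file
echo 'file 8250_port.c +p' | sudo tee /sys/kernel/debug/dynamic_debug/control
# then read the messages from the kernel log
dmesg | tail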

Using __END__ and DATA in Chef recipes (to run legacy shell scripts)

I'm migrating some shell scripts to Chef recipes. Some of these scripts are fairly involved, so just to make life easier in the short term and to avoid introducing bugs in rewriting everything in Chef/Ruby, I'd like to just run some of them as-is. They're all well-written and idempotent, so honestly there's no rush, but of course, the eventual goal is to rewrite them.
One cool feature of Ruby is its __END__ keyword/method: Lines below __END__ will not be executed. Those lines will be available via the special filehandle DATA.
It would be cool to ship the shell scripts as-is inside the recipe after __END__, maybe something like the following, which I placed in chef-repo/cookbooks/ruby-data-test/recipes/default.rb:
file = Tempfile.new(File.basename(__FILE__))
file << DATA.read
bash file.path
file.unlink
__END__
echo "Hello, world"
However when I run this (with chef-solo -c solo.rb --override-runlist 'recipe[ruby-data-test]'), I get the following error:
[2014-10-03T17:14:56+00:00] ERROR: uninitialized constant Chef::Recipe::DATA
I'm pretty new to Chef, but I'm guessing the above is something about Chef wrapping my recipe in a class, and there's something simple preventing me from accessing DATA. Since it's "global" (?) I tried putting a dollar sign ($DATA) in front of it but that failed with:
NoMethodError
-------------
undefined method `read' for nil:NilClass
So the question is: How do I access DATA in my Chef recipe? Thanks!
It appears you don't have access to DATA, but you can fake it by reading in the current file yourself and splitting on __END__, like Sinatra does.
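A minimal sketch of that trick in plain Ruby (illustration only, not Chef-specific): read the current file's own source and split it on the __END__ marker yourself.
# sketch: emulate DATA by splitting this file's source on __END__
src = File.binread(__FILE__)
code, data = src.split(/^__END__$/, 2)
puts data.lstrip   # prints the script text below __END__

__END__
echo "Hello, world"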
I ended up making a Chef LWRP for reuse. I don't know if I'll actually end up using this, but I wanted to figure it out. Like I said, I'm a Chef/Ruby noob, so any better ideas or suggestions welcome!
ruby_data_test/recipes/default.rb:
ruby_data_test_execute_ruby_data __FILE__
__END__
#!/bin/bash
set -o errexit
date
echo "Hello, world"
ruby_data_test/resources/execute_ruby_data.rb:
actions :execute_ruby_data
default_action :execute_ruby_data
attribute :source, :name_attribute => true, :required => true
attribute :args, :kind_of => Array
attribute :ignore_errors, :kind_of => [TrueClass, FalseClass], :default => false
ruby_data_test/providers/execute_ruby_data.rb:
require 'open3'

def whyrun_supported?
  true
end

use_inline_resources

action :execute_ruby_data do
  converge_by("Executing #{@new_resource}") do
    Chef::Log.info("Executing #{@new_resource}")
    file_who_called_me = @new_resource.source
    io = ::IO.respond_to?(:binread) ? ::IO.binread(file_who_called_me) : ::IO.read(file_who_called_me)
    app, data = io.gsub("\r\n", "\n").split(/^__END__$/, 2)
    data.lstrip!
    file = Tempfile.new('execute_ruby_data')
    file << data
    file.chmod(0755)
    file.close
    exit_status = ::Open3.popen2e(file.path, *@new_resource.args) do |stdin, stdout_and_stderr, wait_thr|
      stdout_and_stderr.each { |line| puts line }
      wait_thr.value # exit status
    end
    if exit_status != 0 && !@new_resource.ignore_errors
      raise RuntimeError
    end
  end
end
Here's the output:
$ chef-solo -c solo.rb --override-runlist 'recipe[ruby_data_test]'
Starting Chef Client, version 11.12.4
[2014-10-03T21:50:29+00:00] WARN: Run List override has been provided.
[2014-10-03T21:50:29+00:00] WARN: Original Run List: []
[2014-10-03T21:50:29+00:00] WARN: Overridden Run List: [recipe[ruby_data_test]]
Compiling Cookbooks...
Converging 1 resources
Recipe: ruby_data_test::default
* ruby_data_test_execute_ruby_data[/root/chef/chef-repo/cookbooks/ruby_data_test/recipes/default.rb] action execute_ruby_dataFri Oct 3 21:50:29 UTC 2014
Hello, world
- Executing ruby_data_test_execute_ruby_data[/root/chef/chef-repo/cookbooks/ruby_data_test/recipes/default.rb]
Running handlers:
Running handlers complete
Chef Client finished, 1/1 resources updated in 1.387608 seconds

Where to find logs for a cloud-init user-data script?

I'm initializing spot instances running a derivative of the standard Ubuntu 13.04 AMI by pasting a shell script into the user-data field.
This works. The script runs. But it's difficult to debug because I can't figure out where the output of the script is being logged, if anywhere.
I've looked in /var/log/cloud-init.log, which seems to contain a bunch of stuff that would be relevant to debugging cloud-init, itself, but nothing about my script. I grepped in /var/log and found nothing.
Is there something special I have to do to turn logging on?
The output of a cloud-init user-data script already goes to /var/log/cloud-init-output.log on AWS, DigitalOcean, and most other cloud providers. You don't need to set up any additional logging to see it.
You could create a cloud-config file (with "#cloud-config" at the top) for your userdata, use runcmd to call the script, and then enable output logging like this:
output: {all: '| tee -a /var/log/cloud-init-output.log'}
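A minimal sketch of such a user-data file (the script path is only a placeholder):
#cloud-config
output: {all: '| tee -a /var/log/cloud-init-output.log'}
runcmd:
  - /usr/local/bin/bootstrap.sh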
So I tried to replicate your problem. Usually I work in Cloud Config, and therefore I just created a simple test user-data script like this:
#!/bin/sh
echo "Hello World. The time is now $(date -R)!" | tee /root/output.txt
echo "I am out of the output file...somewhere?"
yum search git # just for fun
ls
exit 0
Notice that, with CloudInit shell scripts, the user-data "will be executed at rc.local-like level during first boot. rc.local-like means 'very late in the boot sequence'"
After logging in to my instance (a Scientific Linux machine), I first went to /var/log/boot.log and there I found:
Hello World. The time is now Wed, 11 Sep 2013 10:21:37 +0200!
I am out of the file. Log file somewhere?
Loaded plugins: changelog, kernel-module, priorities, protectbase, security, tsflags, versionlock
126 packages excluded due to repository priority protections
9 packages excluded due to repository protections
epel/pkgtags | 581 kB 00:00
=============================== N/S Matched: git ===============================
GitPython.noarch : Python Git Library
cgit.x86_64 : A fast web interface for git
...
... (more yum search output)
...
bin etc lib lost+found mnt proc sbin srv tmp var
boot dev home lib64 media opt root selinux sys usr
(other unrelated stuff)
So, as you can see, my script ran and was rightly logged.
Also, as expected, I had my forced log 'output.txt' in /root/output.txt with the content:
Hello World. The time is now Wed, 11 Sep 2013 10:21:37 +0200!
So...I am not really sure what is happening in your script.
Make sure you're exiting the script with
exit 0 #or some other code
If it still doesn't work, you should provide more info, like your script, your boot.log, your /etc/rc.local, and your cloudinit.log.
btw: what is your cloudinit version?

DBD::Informix connection issues

I'm having a somewhat weird problem with DBD::Informix. If I run a simple script like this:
use DBI;
my $dbh = DBI->connect_cached('dbi:Informix:database', '', '');
my $sth = $dbh->prepare('select foo from bar');
...
It works all right. But if I try to do exactly the same from a test script it fails with the following message:
SQL: -25588: The appl process cannot connect to the database server cms_ol.
ISAM: 22: Invalid argument
The only difference I see is that the test script is quite heavy on module usage; it is based on Test::More and loads a lot of submodules that are to be tested.
Turning on DBI trace does not provide anything useful (for me, at least). The simple script runs along just fine:
DBI 1.616-nothread default trace level set to 0x0/1 (pid 9685 pi 0) at test_ifx.pl line 6
Note: perl is running without the recommended perl -w option
-> DBI->connect(dbi:Informix:cms@cms_ol, , ****, HASH(0x13fad0))
-> DBI->install_driver(Informix) for solaris perl=5.008009 pid=9685 ruid=106 euid=106
install_driver: DBD::Informix version 2011.0612 loaded from /cms/webdash/lib/arch/DBD/Informix.pm
<- install_driver= DBI::dr=HASH(0x1c8070)
!! warn: 0 CLEARED by call to connect method
-->> DBD::Informix::dbd_ix_db_connect()
CONNECT TO 'cms@cms_ol' - no user info
-->> DBD::Informix::dbd_ix_db_check_for_autocommit()
... and the only difference I see in the trace of the problematic script is that it just fails:
DBI 1.616-nothread default trace level set to 0x0/1 (pid 9687 pi 0) at 22_report.t line 5 via 22_report.t line 6
Note: perl is running without the recommended perl -w option
-> DBI->connect_cached(dbi:Informix:cms, , ****)
-> DBI->install_driver(Informix) for solaris perl=5.008009 pid=9687 ruid=106 euid=106
install_driver: DBD::Informix version 2011.0612 loaded from /cms/webdash/lib/arch/DBD/Informix.pm
<- install_driver= DBI::dr=HASH(0xb619bc)
!! warn: 0 CLEARED by call to connect_cached method
-->> DBD::Informix::dbd_ix_db_connect()
CONNECT TO 'cms' - no user info
***ERROR***
SQL: -25588: The appl process cannot connect to the database server cms_ol.
ISAM: 22: Invalid argument
<<-- dbd_ix_db_connect (**ERROR-1**)
<<-- DBD::Informix::dbd_ix_db_connect()
I'm running a custom Perl 5.8.9 build on Solaris 9, with the latest DBI and DBD::Informix versions, against Informix IDS 9.40UC.
Update: If I try to be a smartass and put a block like that at the top of the heavy test script:
use DBI;
BEGIN { my $dbh = DBI->connect_cached( ... ); print "Connected!\n" if $dbh; }
... it prints like this:
Connected!
Out of memory!
Callback called exit.
END failed--call queue aborted at t/22_report.t line 20.
Callback called exit at t/22_report.t line 20.
BEGIN failed--compilation aborted at t/22_report.t line 24.
My guess is that DBD::Informix conflicts with some of the modules loaded after the connection is made. But which one? That's the question...
Another update: It appears that the above trick does something unwieldy. I tried to load all the modules explicitly by replacing 'use Module' with 'require Module; Module->import'. Pure Perl modules are OK, but whenever an XS module using XSLoader appears, Perl goes boom with a friendly 'Out of memory' message. And if I move the Informix connection below the module initialization, it works all right - except DBD::Informix fails with the same -25588 error. Bummer. I'm at a loss. :(
Another another update: I tried to run the same script with standard Perl 5.6.1 shipped with Solaris 9, using DBI 1.601 (the latest that would compile with Perl 5.6) and DBD::Informix 2011.0612. Same thing, so it's not custom Perl giving me trouble.
I can also add that the test module in question was prototyped using DBD::SQLite and fully works. It is the final test with DBD::Informix that is failing... As usual. :/
Workaround: following an e-mail discussion with Jonathan, a workaround was found: adding a streams-based 'onipcstr' connection to the Informix server allowed DBD::Informix to connect. Apparently, some XS modules interfere with the default shared-memory based connection method, although the culprit is unknown at the moment.
Further discussion
Custom-built Perl is, in my experience, easier than the system Perl. I never modify the system's Perl installation (I don't want to break it) so I always build my own.
You appear to have:
Solaris 9 (SPARC?)
Perl 5.8.9
DBI 1.616
DBD::Informix 2011.0612
ESQL/C (CSDK) 2.81
Informix Dynamic Server 9.40
We don't have the detailed sub-version of ESQL/C and IDS (2.81.UC2, 9.40.UC5, or whatever). There's a hint that you are using a 32-bit version of IDS, so probably everything is 32-bit. You are probably aware that 9.40 is no longer supported by IBM (and, indeed, its successor version 10.00 is also out of support). However, superficially, none of that should matter very much. The failing t91lvarchar.t is not a big issue.
Can you run the connect in working and non-working modes with DBI_TRACE=9 set in the environment?
If the trace for the connect operation is too voluminous to go into an update to the question, we'd better take this off-line to the DBD::Informix support channels (that's me, but by email).
The 'ISAM' error of 22 (Invalid argument) is puzzling. I'm curious about what is in your sqlhosts file for this server - the entry for cms_ol specifically. I'm not sure it will reveal anything, not least because you say the sample ESQL/C below (in the 'First hypothesis' section) works OK, and sometimes the Perl connects and sometimes it does not.
I wonder if there is a name conflict somewhere between functions in the shared libraries? That sort of thing will be hell to track.
First hypothesis
Further information received shows that this was not the crucial distinction.
The difference appears to be:
Works: CONNECT TO 'cms@cms_ol' - no user info
Fails: CONNECT TO 'cms' - no user info
The tricky part to explain is why the second fails, especially as the error goes on to mention cms_ol.
The workaround is to specify the server name in the connect string:
Works: DBI->connect(dbi:Informix:cms@cms_ol, , ****, HASH(0x13fad0))
Fails: DBI->connect_cached(dbi:Informix:cms, , ****)
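In DBD::Informix terms, that means putting database@server in the DSN. A sketch (the database name 'cms' and server name 'cms_ol' are taken from the traces above; adjust to your environment):
use DBI;
# Include the server name in the connect string so the sqlhosts
# entry for cms_ol is used explicitly.
my $dbh = DBI->connect_cached('dbi:Informix:cms@cms_ol', '', '',
                              { RaiseError => 1 })
    or die $DBI::errstr;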
The underlying problem is more likely at the ESQL/C level than anything to do with other Perl modules. That is, if you compiled and executed this ESQL/C program, it would fail on cms and work on cms@cms_ol:
int main(int argc, char **argv)
{
    $ char *dbs = "cms";
    if (argc > 1)
        dbs = argv[1];
    $ whenever error stop;
    $ connect to :dbs;
    return 0;
}
You could run it without an explicit database name (or with an explicit 'cms'), and I would expect it to fail. You could run it with 'cms@cms_ol' and I would expect it to pass. The program will say nothing if it passes; it will be obvious when it fails (though the messages will not be beautiful).
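If you want to try that, the usual way to build such a program is with the ESQL/C preprocessor driver; the file name here is just a placeholder:
esql -o conntest conntest.ec    # preprocess, compile and link
./conntest                      # expected to fail (implicit 'cms')
./conntest cms@cms_ol           # expected to succeed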
There is an outside chance it is something to do with connect_cached; that is a service provided by the DBI module and not by the DBD::Informix module. On the whole though, it is more likely something happening at the ESQL/C level.