How can I copy a load module using REXX?

I want to copy a load module from one PDS to another using REXX.

You could invoke IEBCOPY from within a REXX exec, allocating the appropriate datasets to the appropriate DD names before invoking it.
I'm unable to provide an example as I don't have the facilities/access.
Note that doing so may tie up your terminal/session.
You could also go for a more elaborate solution that builds and submits a batch job, perhaps even with a panel front end driving file tailoring/skeletons.

As cshneid said, you can use IEBCOPY. Using IEBCOPY from REXX is basically the same as in JCL, but:
use TSO ALLOC to allocate the files
call/invoke the program
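A rough sketch of that approach (the dataset, DD, and member names are placeholders, and whether IEBCOPY may be invoked this way in the foreground can depend on your system level and site setup):
/* Sketch: copy one load module between libraries via IEBCOPY */
"ALLOC FI(LOADIN)   DA('MY.FROM.LOADLIB') SHR REUSE"
"ALLOC FI(LOADOUT)  DA('MY.TO.LOADLIB') SHR REUSE"
"ALLOC FI(SYSPRINT) DA(*) REUSE"
"ALLOC FI(SYSIN)    NEW DELETE REUSE UNIT(SYSDA) SPACE(1,1) TRACKS LRECL(80) RECFM(F B)"
queue " COPY OUTDD=LOADOUT,INDD=LOADIN"
queue " SELECT MEMBER=MYMOD"
"EXECIO" queued() "DISKW SYSIN (FINIS"       /* write the control cards   */
Address LINKMVS "IEBCOPY"                    /* invoke the utility        */
say "IEBCOPY ended with RC =" RC
"FREE FI(LOADIN LOADOUT SYSPRINT SYSIN)"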
If running under ISPF you can use LMCOPY. Roughly the following should work; you may need to issue an LMOPEN / LMCLOSE on the data IDs as well:
Address ISPEXEC
"LMINIT DATAID(didfrom) DATASET('in.data.set')"   /* placeholder dataset names */
"LMINIT DATAID(didto)   DATASET('to.data.set')"
"LMCOPY FROMID("didfrom") FROMMEM(mymem) TODATAID("didto") TOMEM(newMemberName)"
"LMFREE DATAID("didfrom")"
"LMFREE DATAID("didto")"
If running in the foreground, the ISPF services used to have the advantage that they coordinated their actions with all the other ISPF users, so they were less likely to corrupt the PDS directory. Not sure whether this is still an advantage.

Using just native REXX, what you want to do is not possible; however, you can invoke IEBCOPY (or your site's equivalent) to perform the task for you.
You may want to investigate calling programs like IEBCOPY and passing them the appropriate control cards to perform your task.
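For reference, the IEBCOPY control statements for copying a single member look roughly like this (DD and member names are placeholders; the DD names must match the ddnames you allocate):
 COPY OUTDD=LOADOUT,INDD=LOADIN
 SELECT MEMBER=MYMOD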

Related

Recursive Workflow in Powershell

I'm trying to automate a lengthy process that can be broken down into several steps. (say Steps 1-5)
I have written a script that separates these into functions and calls them sequentially.
However, we now have the additional requirement of making the script restartable. That is, if it fails in any one of the steps, rerunning the script would cause it to skip all completed steps and retry from the failed one.
Is this at all possible without referencing an external log file?
I've tried using workflows but it seems like recursion isn't supported.
Any ideas?
Some options, aside from using a log file:
Use the registry
You can set a registry value to a number depending on what step you stopped on; this removes the need for a log file, but it is somewhat similar in terms of 'external' storage (a sketch of this approach follows after these options).
Check the task status on each run
Depending on the tasks, you could have the script 'test' step 3, for example, to see if it has already been completed, then check steps 4, 5, etc. until it encounters one it needs to run, and continue from there. This may be impossible, or require a lot of overhead code for not much payoff.
Allow the user to continue from within the script.
This is probably the best way of doing it (aside from just using a log file): run the script in blocks, and when an error is encountered, prompt the user to fix the issue before pressing Enter to re-run the previous script block. This also makes it easy to provide information about what failed.
The main thing here is that once a script quits, in order to know what happened in its last run it needs an external source of information, or it needs to handle the problem in another way.
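A minimal sketch of the registry option (the key path, value name, and step function names are all hypothetical):
# Persist the index of the last completed step in the registry
$key = 'HKCU:\Software\MyRestartableScript'
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
$lastDone = (Get-ItemProperty -Path $key -Name LastStep -ErrorAction SilentlyContinue).LastStep
if (-not $lastDone) { $lastDone = 0 }
$steps = 'Step1','Step2','Step3','Step4','Step5'              # your step functions
for ($i = $lastDone; $i -lt $steps.Count; $i++) {
    & $steps[$i]                                              # run the step; let failures throw
    Set-ItemProperty -Path $key -Name LastStep -Value ($i + 1)   # checkpoint progress
}
Remove-ItemProperty -Path $key -Name LastStep -ErrorAction SilentlyContinue   # all done; reset
Re-running the script after a failure skips straight to the step recorded in the checkpoint.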

AFL fuzzing without root - avoid modifying /proc/sys/kernel/core_pattern

I want to run the American Fuzzy Lop (AFL) fuzzer on a Linux system where I don't have root access. When I do so, the first thing that happens is that it gives me an error message asking me to modify /proc/sys/kernel/core_pattern:
[-] Hmm, your system is configured to send core dump notifications to an
external utility. This will cause issues due to an extended delay
between the fuzzed binary malfunctioning and this information being
eventually relayed to the fuzzer via the standard waitpid() API.
To avoid having crashes misinterpreted as hangs, please log in as root
and temporarily modify /proc/sys/kernel/core_pattern, like so:
echo core >/proc/sys/kernel/core_pattern
[-] PROGRAM ABORT : Pipe at the beginning of 'core_pattern'
Location : check_crash_handling(), afl-fuzz.c:6959
I do understand this error message and why the explanation makes sense.
Unfortunately, modifying /proc/sys/kernel/core_pattern requires root access on the system. I know from experience that the rest of AFL doesn't need root access to work.
Is there a workaround to use AFL without root? (Maybe some alternative user-level way to disable the automatic core-dump handler so it doesn't mess up AFL?) I've read a bunch of questions here about core dumps on Linux, and none of them identified any way to configure the core-dump handler at a user-level, per-process granularity.
Actually, someone requested that feature here already:
Source: https://groups.google.com/forum/m/#!msg/afl-users/7arn66RyNfg/BsnOPViuCAAJ
So you just need to set the environment variable AFL_I_DONT_CARE_ABOUT_MISSING_CRASHES; as the name suggests, you may miss something. : )
Also see 3) in docs/env_variables.txt for reference:
https://github.com/mirrorer/afl/blob/master/docs/env_variables.txt
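A typical invocation would then look something like this (input/output directories and the target command are placeholders):
AFL_I_DONT_CARE_ABOUT_MISSING_CRASHES=1 afl-fuzz -i testcases -o findings -- ./target_binary @@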

Sharing variables/data between PowerShell processes

I would like to come up with a mechanism by which I can share 'data' between different PowerShell processes. This would be in order to implement a kind of job system, whereby a function can be run in one PowerShell process, complete, and then somehow communicate its status to a function run from another (distinct) PowerShell process...
I guess what I'd ideally like is for PSJob results to be shareable between sessions, but this does not seem to be possible.
I can think of a few dirty ways of achieving this (like O/S environment variables), but am I missing a semi-elegant way?
For example:
Function giveMeNumber
{
    $return_value = Get-Random -Minimum -100 -Maximum 100
    Return $return_value
}
What are some ways I could get this function to store its return value somewhere and then grab it from another PowerShell session (without using a database)?
Cheers.
The Q&A mentioned by Keith refers to using MSMQ, a message-queueing feature optionally available on desktop, mobile, and server OSes from Microsoft.
It doesn't run by default on desktop OSes, so you would have to ensure that the appropriate service is started. Seems like serious overkill to me unless you want something pretty beefy.
Of course, the most common choice for this type of task would be a simple shared file.
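For example, a minimal shared-file sketch using the giveMeNumber function above (the file path is just a placeholder):
# In the producing PowerShell process: serialize the result to a shared file
giveMeNumber | Export-Clixml -Path 'C:\temp\giveMeNumber.xml'
# In a separate PowerShell process: read it back
$value = Import-Clixml -Path 'C:\temp\giveMeNumber.xml'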
Alternatively, you could create a TCP listener in each of the jobs that you want to have accept external info. I haven't done this myself in PowerShell, though I know it is possible; Node.js or Python would be a more familiar environment for it. Seems like overkill if a shared file would do the job!
Another way would be to use the registry, though you might consider that cheating since it is actually a database (of a very broken and simplistic sort).
I'm actually not sure that environment variables would work, since they can be picky about the parent environment scope (for example, setting an env variable in a cmd session doesn't make it available outside that session by default).
UPDATE: Doh, missed a few! Some of them very obvious. Microsoft have a list:
Clipboard
COM
Data Copy
DDE
File Mapping
Mailslots
Pipes
RPC
Windows Sockets
Pipes was the one I was trying to remember. Windows sockets would be similar to a TCP listener.
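If you do want to try pipes, a rough named-pipe sketch in PowerShell might look like this (the pipe name and payload are arbitrary):
# Server side - run in the first PowerShell process
$server = New-Object -TypeName System.IO.Pipes.NamedPipeServerStream -ArgumentList 'MyStatusPipe'
$server.WaitForConnection()                     # blocks until a client connects
$writer = New-Object -TypeName System.IO.StreamWriter -ArgumentList $server
$writer.AutoFlush = $true
$writer.WriteLine('42')                         # send the result/status
$writer.Dispose()

# Client side - run in a second PowerShell process
$client = New-Object -TypeName System.IO.Pipes.NamedPipeClientStream -ArgumentList 'MyStatusPipe'
$client.Connect()
$reader = New-Object -TypeName System.IO.StreamReader -ArgumentList $client
$value = $reader.ReadLine()
$reader.Dispose()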

System call or function call - performance-wise

In Linux, when you can choose between a system call or a library function call to do a task, which option is the better one performance-wise?
We should note that in most cases we do not use system calls directly; we use the interface provided by glibc.
http://www.kernel.org/doc/man-pages/online/pages/man2/syscalls.2.html
http://www.gnu.org/software/libc/manual/html_node/System-Calls.html
Now, in cases like file management, IPC, and process management, which are the core resource-management activities of the operating system, the work is ultimately done by a system call, not by ordinary library code.
In these cases we typically use a library function that acts as a wrapper over the system call. For example, for reading a file we have many library functions,
fgetc/fgets/fscanf/fread - all of which ultimately invoke the read system call.
So should we use the read system call directly, or the library functions?
This depends on the particular application. If we use read directly, the code will have to change to run on some other operating system where read is not available,
so we lose some flexibility. Using read directly may be useful when we are sure of the platform and can do some optimisations, or when the application must work with file descriptors rather than file pointers, etc.
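A rough illustration of the two routes (a sketch; "data.txt" is a placeholder and error handling is kept minimal):
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];

    /* Buffered library route: fread() is a wrapper that eventually issues
       read() syscalls, but batches data through a user-space buffer, so many
       small fread()/fgetc() calls can translate into few actual syscalls.   */
    FILE *fp = fopen("data.txt", "r");
    if (fp != NULL) {
        size_t n = fread(buf, 1, sizeof buf, fp);
        printf("fread returned %zu bytes\n", n);
        fclose(fp);
    }

    /* Direct system-call route: every read() crosses into the kernel, and
       the code is now tied to POSIX file descriptors.                       */
    int fd = open("data.txt", O_RDONLY);
    if (fd != -1) {
        ssize_t n = read(fd, buf, sizeof buf);
        printf("read returned %zd bytes\n", n);
        close(fd);
    }
    return 0;
}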
Now consider cases that involve only user-level operations and no service from the operating system, such as copying a string (strcpy).
Here we should definitely not use a system call unnecessarily, if one even exists, since it would add the extra overhead of operating-system intervention, which is not needed in this case.
So I feel the choice between a system call and a library function only arises where we have a library function built on top of a system call
(adding to the examples above, malloc, which obtains its memory via the brk system call).
Here the choice will depend on the particular type of software, the platform on which it should run, and the precise non-functional requirements such as speed (though you cannot say with certainty that your code will run faster just because you use brk instead of malloc), portability, and so on.

How to preserve data between executions of program

I am running a Perl script on an HP-UX box. The script will execute every 15 minutes and will need to compare its results with the results of the last time it executed.
I will need to store two variables (IsOccuring and ErrorCount) between the executions. What is the best way to do this?
Edit clarification:
It only compares the most recent execution to the current execution.
It doesn't matter if the value is lost between reboots.
And touching the filesystem is pretty much off limits.
If you can't touch the file system, try using a shared memory segment. There are helper modules for that like IPC::ShareLite, or you can use the shmget and related functions directly.
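A sketch of the IPC::ShareLite route, combined with Storable to serialize the two values from the question (the key number is arbitrary but must be the same on every run):
use IPC::ShareLite;
use Storable qw(freeze thaw);

# Attach to (or create) a shared memory segment; -destroy => 'no' keeps it
# alive after the process exits, so the next run can read it.
my $share = IPC::ShareLite->new(
    -key     => 4587,
    -create  => 'yes',
    -destroy => 'no',
) or die "Cannot attach to shared memory: $!";

# End of this run: stash the two values.
$share->store( freeze( { IsOccuring => $IsOccuring, ErrorCount => $ErrorCount } ) );

# Start of the next run: pull back whatever the previous run left (if anything).
my $frozen   = $share->fetch;
my $previous = $frozen ? thaw($frozen) : {};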
You'll have to store them in a file. This sort of file is often kept in /tmp, but any place where the user running the cron job has access would do. Make sure your script can handle the case where the file is missing.
You could create a separate process running a "remember stuff" service over your choice of IPC mechanism. This sounds like a rather tortured solution to "I don't want to touch the disk" but if it's important enough to offset a couple of days of development work (realistically, if you are new to IPC, and HP-SUX continues to live up to its name) then by all means read man perlipc for a start.
Does it have to be completely re-executed? Can you just have it running in a loop and sleeping for 15 minutes between iterations? Then you don't have to worry about saving the values externally; the program never stops.
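A rough outline of that approach, using the two variables from the question:
my ($IsOccuring, $ErrorCount);      # persist naturally between iterations
while (1) {
    # ... perform the check and compare against the previous values ...
    sleep 900;                      # wait 15 minutes before the next pass
}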
I definitely think IPC is the way to go here.
I'd save off the data in a file. Then, inside the script I'd load the last results if the file exists.
Use the Storable module to serialize Perl data structures, save them anywhere you want, and deserialize them during the next script execution.
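For example (the file path is just a placeholder; Storable's store/retrieve handle the serialization):
use Storable qw(store retrieve);

# End of a run: save the two values.
store( { IsOccuring => $IsOccuring, ErrorCount => $ErrorCount }, '/tmp/monitor.state' );

# Start of the next run: load them back if a previous run left them.
my $previous = -e '/tmp/monitor.state' ? retrieve('/tmp/monitor.state') : {};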