I don't clearly understand where to use the skip command and where to use the ignore command. I checked the Jaseci Bible, but I couldn't find any details about the ignore command. I understand that the skip command tells a walker to halt, abandon any unfinished work on the current node, and move on to the next node (or complete the computation if no nodes are queued up). As I understand it, the ignore command completely ignores the execution of the node. Is that right?
I'm confused already.
Could someone please explain this in more detail?
You are 100% correct about how skip works. ignore is a little more nuanced. In a nutshell, once a node/edge or a set of nodes/edges is "tagged" with ignore, all future take commands will no longer see that node/edge (i.e., they will not walk paths through it). You can think of ignore as pruning paths out of the graph.
Steve Ives provided ALLMEM code to run an edit macro against all members of a PDS; see here: How can I run ISPF Edit Macros in Batch
Some members in my PDS are too large (with default settings) for edit/view and suffer "Browse substituted" on the line:
Address 'ISPEXEC' 'EDIT DATAID('data1')',
'MEMBER('member1') MACRO('workmac')'
Since Browse cannot run edit macros, the MACRO('workmac') part does not come into play, no END command is issued to return execution to the loop in ALLMEM, and the overall batch execution stops until I manually hit PF3.
Is there any way I can force TSO to keep in EDIT mode for these large members?
Is there any way I can force TSO to keep in EDIT mode for these large members?
Maybe.
ISPF Edit has an LRECL limit. If the members that are too large exceed this, there isn't much you can do about that. If you want to engage in radical notions like splitting each record in two so they become editable, editing them, and then reassembling each record pair back into a single record, that's a separate issue.
But maybe the problem isn't your LRECL, but the number of records. You might be able to do something about that.
You could try increasing the REGION parameter for your batch job in which you are running your ISPF Edit macro. I don't know if your personal ISPF settings matter in an ISPF batch job, but you could type EDITSET in an ISPF Edit session and ensure that the value for "Maximum initial storage allowed for Edit and View" is 0, just in case it matters.
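For example, on the JOB statement of your batch job (the job name and accounting information here are placeholders); REGION=0M asks for the largest region available:

//ALLMEMJ  JOB (ACCT),'EDIT MACRO',REGION=0M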
Be advised that this may fix your problem, but it's possible your members are simply too big for ISPF Edit. In that case, you must find an alternate mechanism. Since you already have an edit macro, perhaps you could alter it, substituting your own code for ISPF Edit services, and run that code against your data. Perhaps this is an opportunity to learn the marvelous features of your SORT utility. Or awk. Lots of options.
If it is only certain members, then it is not an LRECL issue, but strictly size. As cschneid mentioned, you can try to maximize the storage available to Edit. However, if the member is really large then you would eventually hit a storage limit, and currently Edit or View will switch to Browse in that case. If you are running batch then this presents a problem, as you describe: there is nothing that will keep it in Edit. RC=4 is already a documented return code for Browse being substituted, but if you are in batch then you probably end up in a display loop.

One possible solution would be to have your own copy of ISRBROBA in ISPPLIB and have it set .RESP = END in the )INIT or )PROC section, so as to force an END if BROWSE gets used. Since it is a batch job, it is unlikely you would need the normal version of ISRBROBA. You would just make sure that your panel library is concatenated first.
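A sketch of the idea; treat it as illustrative, with the rest of the copied panel left exactly as shipped:

)PROC
  .RESP = END    /* force an immediate END whenever Browse is substituted */

With the modified copy of ISRBROBA concatenated ahead of the standard panel library, the substituted Browse session ends itself at once and the ALLMEM loop moves on to the next member.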
I have a script in which this code fails with an exit code of -2145124322:
$new.ExitCode > $null
$filePath = "wusa.exe"
$argumentList = "`"\\PX_SERVER\Rollouts\Microsoft\VirtualPC\Windows6.1-KB958559-x64-RefreshPkg.msu`" /quiet /norestart"
$exitCode = (Start-Process -FilePath:$filePath -argumentList:$argumentList -wait -errorAction:Stop -PassThru).ExitCode
Write-Host $exitCode
Now, the main script has about 15,000 lines of "other stuff going on", and these lines were not originally exactly like this. The variables are pulled from XML, there is data validation and try/catch blocks, all sorts of stuff. So, I started pulling the pertinent lines out and put them in a tiny separate script with hard coded variables. And there it works: I get a nice 3010 exit code and off to the races. So, I took my working code, hard coded variables and all, and pasted it back into the original script, and it breaks again.
So, I moved the code out of the function where it belongs, and just put it after I initialize everything and before I start working through the main loop. And there it works! Now, I gotta believe it's the usual "polluted pipeline", but dang if I can figure out what could cause this. My next step I guess is to just start stepping through the code, dropping this nugget in somewhere, run the test, if it works move it farther down, try again. Gack!
So, I'm hoping someone has some insights: either what it might be, or perhaps an improved test protocol. Or some trick to actually see the whole pipeline and somehow recognize the pollution.
FWIW, I normally work with PoSH v2, but I have tried this with v4 with the exact same results. But perhaps there is some pipeline monitoring feature in a later version that could help with the troubleshooting?
Also, my understanding is that PoSH v2 has issues with negative return codes, so they can't be trusted. But I think newer versions fixed this, correct? So the fact that I get the same code in v4 means it is meaningful to Google? Not that I have found any hint of that exit code anywhere thus far.
Crossed fingers.
EDIT: OK, a little more data. I searched on the exit code without the -, and with DuckDuckGo instead of Google, and found this.
0x8024001E (-2145124322) WU_E_SERVICE_STOP: Operation did not complete because the service or system was being shut down.
OK, that's some direction. And I have some code that would allow me to kill a service temporarily. But that seems a little draconian. Isn't the whole point of this, like 10th way to install updates from Microsoft, supposed to be to make automation easier? In any case, I can't find any indication there are command line flags for WUSA that would avoid the problem, but I have to believe I am doing something wrong.
Solved! After tracking a number of different errors and trying different things, including turning off the firewall and such, it turns out the error isn't that a service won't stop, but that a service won't start. See, some of those 15K lines of code suppress Windows Update for the duration of my script, because Windows Update causes lots of Autodesk deployments to fail, which is the whole point of my code. Well, of course WUSA needs that service. So it looks like, rather than suppressing Windows Update for the duration of script execution, I need to be less heavy handed and only suppress it for the duration of a deployment task. That will take a few hours to implement and test, but is totally doable. And probably more elegant anyway. Woot!
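A minimal sketch of that approach, assuming the suppression works by stopping/disabling the Windows Update service (wuauserv); the .msu path is a placeholder:

# Re-enable Windows Update just around the update task
Set-Service -Name 'wuauserv' -StartupType Manual
Start-Service -Name 'wuauserv'

$argumentList = '"\\SERVER\Rollouts\update.msu" /quiet /norestart'
$exitCode = (Start-Process -FilePath 'wusa.exe' -ArgumentList $argumentList -Wait -PassThru).ExitCode
Write-Host $exitCode

# Put the suppression back for the rest of the script
Stop-Service -Name 'wuauserv'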
And yeah, for once it wasn't me pooping in my pipeline unintentionally. ;)
I'm using a few system() commands in my Perl script, which runs on Linux.
The commands I run with the system() function output their data to a log which I then parse to decide what to do next.
I noticed that sometimes it looks like the code that parses the log file (which comes after the system() call) doesn't see the final version of the log.
For example, I search for a "test pass" phrase in the log file - and the script doesn't find it even though when I open the log it is there.
Another example: I try to delete the folder where the log was placed, but it fails because the folder is "Not empty". When I try to delete it manually, it is deleted without errors.
(These examples happen every now and then, but most of the time they don't)
It looks like some kind of "timing" problem to me. How can I solve it?
If you want to be safe and are on Linux, call system("sync"); after every command that writes to the disk and before reading from the disk. That will force the OS to write everything still buffered to the filesystem and only return afterwards. Thus you can be sure that when it is finished, everything you wrote to files has actually arrived there.
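For example, a minimal sketch (the external command and the log path are placeholders):

use strict;
use warnings;

# Run the external tool; it writes its results to a log file.
system('run_tests > /tmp/test.log 2>&1') == 0
    or warn "command exited non-zero: $?";

# Flush OS write buffers before reading the log back.
system('sync') == 0
    or warn "sync failed: $?";

open my $fh, '<', '/tmp/test.log' or die "cannot open log: $!";
my $passed = grep { /test pass/ } <$fh>;
close $fh;
print $passed ? "pass found\n" : "no pass found\n";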
But be aware that this may be overkill in many situations. There are reasons for those buffers, and manually calling sync constantly is most likely not the fastest way of achieving things. See also http://linux.die.net/man/8/sync
If, for example, you have something else that you could do between the writing and the reading, like some calculations or whatever, that would likely be enough, and you would not waste time by telling the OS that you know better how and when it has to do its jobs. ^^
But if perfect efficiency is not your main concern (and you should not be using Perl if it were), a system("sync"); after everything that modifies files and before accessing those files is probably okay and safe.
I have this Perl script for monitoring a folder in Linux.
To continuously check for any updates to the directory, I have a while loop that sleeps for 5 minutes between successive iterations:
while (1) {
    ...
    sleep 300;
}
Somebody on my other question suggested using cron for scheduling instead of a while loop.
This while construct, without any break, looks ugly to me compared to submitting a cron job using crontab:
*/5 * * * * ./myscript > /dev/null 2>&1
Is cron the right choice? Are there any advantages of using the while loop construct?
Are there any better ways of doing this except the loop and cron?
Also, I'm using a 2.6.9 kernel build.
The only reasons I have ever used the while solution are if I needed my code to run more often than once a minute, or if it needed to respond immediately to an external event; neither appears to be the case here.
My thinking is usually along the lines of: cron has been tested by millions and millions of people over decades so it's at least as reliable as the code I've just strung together.
Even in situations where I've used while, I've still had a cron job to restart my script in case of failure.
My advice would be to simply use cron. That's what it's designed for. And, as an aside, I rarely redirect the output to /dev/null; that makes it too hard to debug. Usually I simply redirect to a file in the /tmp file system so that I can see what's going on.
You can append as long as you have an automated clean-up procedure and you can even write to a more private location if you're worried about anyone seeing stuff in the output.
The bottom line, though, is that a rare failure can't be analysed if you're throwing away the output. If you consider your job to be bug-free then, by all means, throw the output away; but I rarely consider my scripts bug-free, just in case.
Why don't you make the build process that puts the build into the directory do the notification? (See SO 3691739 for where that comes from!)
Having cron run the program is perfectly acceptable - and simpler than a permanent loop with a sleep, though not by much.
Against a cron solution, since the process is a simple one-shot, you can't tell what has changed since the last time it was run - there is no state. (Or, more accurately, if you provide state - via a file, probably - you are making life much more complex than running a single script that keeps its state internally.)
Also, stopping the notification service is less obvious. If there's a single process hanging around, you kill it and the notifications stop. If the notifications are run by cron, then you have to know that they're run out of a crontab, know whose crontab it is, and edit that entry in order to stop it.
You should also consider persuading your company to upgrade to a version of Linux where the inotify mechanism is available.
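(inotify appeared in kernel 2.6.13, so your 2.6.9 build predates it.) Once it is available, a minimal sketch using the CPAN module Linux::Inotify2 could look like this; the directory path is a placeholder:

use strict;
use warnings;
use Linux::Inotify2;

my $inotify = Linux::Inotify2->new
    or die "unable to create inotify object: $!";

# Fire the callback for files created in, or moved into, the directory.
$inotify->watch('/path/to/watched/dir', IN_CREATE | IN_MOVED_TO, sub {
    my $event = shift;
    print "new file: ", $event->fullname, "\n";
}) or die "watch creation failed: $!";

# poll() blocks until events arrive - no sleeping, no drift.
1 while $inotify->poll;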
If you go for the loop instead of cron and want your job run at regular intervals, note that sleep(300) tends to drift, because it ignores the execution time of the rest of your loop body.
I suggest using a construct like this:
use constant DELAY => 300;

my $next = time();
while (1) {
    $next += DELAY;
    ...;    # do the actual work here
    my $remaining = $next - time();
    sleep $remaining if $remaining > 0;    # guard against iterations that overrun DELAY
}
Yet another alternative is the 'anacron' utility.
If you don't want to use cron, http://upstart.ubuntu.com/ can be used to babysit processes. Or you can use watch, whichever is easier.
One tool I am aware of is Perl::Critic, and my googling has turned up nothing else after multiple attempts so far. :-(
Does anyone have any recommendations here?
Any resources for configuring Perl::Critic to match our coding standards and running it against our code base would be appreciated.
In terms of setting up a profile, have you tried perlcritic --profile-proto? This will emit to stdout all of your installed policies with all their options with descriptions of both, including their default values, in perlcriticrc format. Save and edit to match what you want. Whenever you upgrade Perl::Critic, you may want to run this command again and do a diff with your current perlcriticrc so you can see any changes to existing policies and pick up any new ones.
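For example, a hypothetical trimmed-down perlcriticrc, just to show the shape of the file (the policy picks are illustrative, not recommendations):

# global defaults
severity = 3
verbose  = 8

# disable a policy outright by prefixing its name with '-'
[-ValuesAndExpressions::ProhibitMagicNumbers]

# override an individual policy's settings
[RegularExpressions::RequireExtendedFormatting]
severity = 2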
In terms of running perlcritic regularly, set up a Test::Perl::Critic test along with the rest of your tests. This is good for new code.
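Such a test is tiny. A sketch, assuming your profile is saved as perlcriticrc in the project root:

# t/perlcritic.t
use strict;
use warnings;
use Test::Perl::Critic (-profile => 'perlcriticrc');

# Criticizes every Perl file under blib/ (or lib/) and emits normal TAP output.
all_critic_ok();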
For your existing code, use Test::Perl::Critic::Progressive instead. T::P::C::Progressive will succeed the first time you run it, but will save counts on the number of violations; thereafter, T::P::C::Progressive will complain if any of the counts go up. One thing to look out for is when you revert changes in your source control system. (You are using one, aren't you?) Say I check in a change and run tests and my changes reduce the number of P::C violations. Later, it turns out my change was bad, so I revert to the old code. The T::P::C::Progressive test will fail due to the reduced counts. The easiest thing to do at this point is to just delete the history file (default location t/.perlcritic-history) and run again. It should reproduce your old counts and you can write new stuff to bring them down again.
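The progressive variant is just as small. A sketch:

# xt/author/critic-progressive.t
use strict;
use warnings;
use Test::Perl::Critic::Progressive qw( progressive_critic_ok );

# Passes on the first run and records violation counts; afterwards it
# fails only if any count goes up. The history file lives at
# t/.perlcritic-history by default.
progressive_critic_ok();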
Perl::Critic has a lot of policies that ship with it, but there are a bunch of add-on distributions of policies. Have a look at Task::Perl::Critic and Task::Perl::Critic::IncludingOptionalDependencies.
You don't need to have a single perlcriticrc handle all your code. Create separate perlcriticrc files for each set of files you want to test and then a separate test that points to each one. For an example, have a look at the author tests for P::C itself at http://perlcritic.tigris.org/source/browse/perlcritic/trunk/Perl-Critic/xt/author/. When author tests are run, there's a test that runs over all the code of P::C, a second test that applies additional rules just on the policies, and a third one that criticizes P::C's tests.
I personally think that everyone should run at the "brutal" severity level, but knock out the policies that they don't agree with. Perl::Critic isn't entirely self-compliant; even the P::C developers don't agree with everything Conway says. Look at the perlcriticrc files used on Perl::Critic itself, and search the Perl::Critic code for instances of "## no critic"; I count 143 at present.
(Yes, I'm one of the Perl::Critic developers.)
There is perltidy for most stylistic standards. perlcritic can be easily configured using a .perlcriticrc file. I personally use it at level one, but I've disabled a few policies.
In addition to 'automated frameworks', I highly recommend Damian Conway's Perl Best Practices. I don't agree with 100% of what he suggests, but most of the time he's bang on.
The post above mentioning Devel::Prof probably really means Devel::Cover (to get the code coverage of a test suite).
Like:
http://metacpan.org/pod/Perl::Critic
http://www.slideshare.net/joshua.mcadams/an-introduction-to-perl-critic/
Looks like a nice tool!
A nice combination is perlcritic with EPIC for Eclipse: hit CTRL-SHIFT-C (or your preferred configured shortcut) and your code is marked up with warning indicators wherever perlcritic has found something to complain about. Much nicer than remembering to run it before check-in. And as usual with perlcritic, it will pick up your .perlcriticrc, so you can customise the rules. We keep our .perlcriticrc in version control so everyone gets the same standards.
In addition to the cosmetic best practices, I always find it useful to run Devel::Prof on my unit test suite to check test coverage.