Odd failures with PS v2 remoting

I have a moderately complex script made up of a PS1 file that does Import-Module on a number of PSM1 files, plus a small number of global variables that define state.
I have it working as a regular script, and I am now trying to implement it for remoting, where I am running into some very odd issues. I am seeing a ton of .NET runtime errors with an event ID of 0, and the script only works intermittently, with the time between attempts seeming to affect the results. I know that isn't a very good description of the problem, but I haven't had a chance to test more deeply, and I am just wondering whether I am pushing PowerShell v2 further than it can really handle by trying to do remoting with a script this large and complex. Or does this look more like something I have wrong in code, and once I get that sorted I will get consistent script processing? I am relatively new to PowerShell and completely new to remoting.
The event data is
.NET Runtime version 2.0.50727.5485 - Application Error
Application has generated an exception that could not be handled.
Process ID=0x52c (1324), Thread ID=0x760 (1888).
Click OK to terminate the application. Click CANCEL to debug the application.
That doesn't exactly provide anything meaningful. Also, rather oddly, if I clear the event log, I seem to have a better chance of not having an issue; once there are errors in the event log, the chance of the script failing is much higher.
Any thoughts before I dig into troubleshooting would be much appreciated, as would suggestions on best practices for troubleshooting remote scripts.

One thing about v2 remoting is that the default per-shell memory limit is set pretty small: 150 MB. You might try bumping that up to, say, 1 GB like so:
Set-Item WSMan:\localhost\shell\MaxMemoryPerShellMB 1024 -force
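The setting lives on the machine that hosts the remote sessions, so (as far as I know) it needs to be changed on the target computer from an elevated prompt. Checking the current value first and restarting WinRM afterwards is a reasonable, if cautious, routine:
Get-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB     # check the current limit (150 by default on v2)
Set-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB 1024 -Force
Restart-Service WinRM                                   # new sessions pick up the raised limit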


Recursive Workflow in Powershell

I'm trying to automate a lengthy process that can be broken down into several steps (say, Steps 1-5).
I have written a script that separates these into functions and calls them sequentially.
However, we now have the additional requirement of making the script restartable. That is, if it fails in any one of the steps, rerunning the script would cause it to skip all completed steps and retry from the failed one.
Is this at all possible without referencing an external log file?
I've tried using workflows but it seems like recursion isn't supported.
Any ideas?
Some options aside from using a log file.
Use the registry
You can set a registry value to a number recording the step you stopped on. This removes the need for a log file, though it is still 'external' storage of a similar kind (see the sketch after this list).
Check the task status on each run
Depending on the tasks, the script could 'test' whether, for example, step 3 has already completed, then check steps 4, 5, and so on until it encounters one that still needs to run, and continue from there. This may be impossible for some tasks, or require a lot of overhead code for not much payoff.
Allow the user to continue from within the script.
This is probably the best way of doing it (aside from just using a log file): run the script in blocks, and when an error is encountered, prompt the user to fix the issue before pressing Enter to re-run the previous script block. This also makes it easy to provide information about what failed.
The main thing here is that once a script quits, it needs an external source of information (or some other way of handling it) in order to know what happened on its last run.
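A minimal sketch of the registry option, assuming your five steps are plain functions; the key path and the function names Step1 through Step5 are placeholders for your own:
$key = 'HKCU:\Software\Example\LongProcess'
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }

# Read the checkpoint; 0 means nothing has completed yet
$lastStep = (Get-ItemProperty -Path $key -Name LastCompletedStep -ErrorAction SilentlyContinue).LastCompletedStep
if (-not $lastStep) { $lastStep = 0 }

$steps = 'Step1', 'Step2', 'Step3', 'Step4', 'Step5'
for ($i = $lastStep; $i -lt $steps.Count; $i++) {
    & $steps[$i]                                               # run the next incomplete step; let it throw on failure
    Set-ItemProperty -Path $key -Name LastCompletedStep -Value ($i + 1)
}
Remove-Item $key -Recurse                                      # all steps done; clear the checkpoint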

AFL fuzzing without root - avoid modifying /proc/sys/kernel/core_pattern

I want to run the American Fuzzy Lop (AFL) fuzzer on a Linux system where I don't have root access. When I do so, the first thing that happens is that it gives me an error message asking me to modify /proc/sys/kernel/core_pattern:
[-] Hmm, your system is configured to send core dump notifications to an
external utility. This will cause issues due to an extended delay
between the fuzzed binary malfunctioning and this information being
eventually relayed to the fuzzer via the standard waitpid() API.
To avoid having crashes misinterpreted as hangs, please log in as root
and temporarily modify /proc/sys/kernel/core_pattern, like so:
echo core >/proc/sys/kernel/core_pattern
[-] PROGRAM ABORT : Pipe at the beginning of 'core_pattern'
Location : check_crash_handling(), afl-fuzz.c:6959
I do understand this error message and why the explanation makes sense.
Unfortunately, modifying /proc/sys/kernel/core_pattern requires root access on the system. I know from experience that the rest of AFL doesn't need root access to work.
Is there a workaround to use AFL without root? (Maybe some alternative user-level way to disable the automatic core-dump handler so it doesn't mess up AFL?) I've read a bunch of questions here about core dumps on Linux, and none of them identified any way to configure the core-dump handler at a user-level, per-process granularity.
Actually, someone already requested that feature here:
Source: https://groups.google.com/forum/m/#!msg/afl-users/7arn66RyNfg/BsnOPViuCAAJ
So you just need to set the environment variable AFL_I_DONT_CARE_ABOUT_MISSING_CRASHES; as the name suggests, you may miss something. : )
Also see item 3) in docs/env_variables.txt for reference:
https://github.com/mirrorer/afl/blob/master/docs/env_variables.txt
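For example (the input/output directories and the target binary name are placeholders):
export AFL_I_DONT_CARE_ABOUT_MISSING_CRASHES=1
afl-fuzz -i testcases -o findings ./target_binary @@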

Does New-Object have a memory leak?

Using PowerShell v3, I'm using the .NET classes System.DirectoryServices.DirectoryEntry and System.DirectoryServices.DirectorySearcher to query a list of properties from users in a domain. The code for this is basically found here.
The only thing you need to add is the line $ds.PageSize = 1000 between $ds.Filter = '(&(objectCategory=person)(objectClass=user))' and $ds.PropertiesToLoad.AddRange($properties). This removes the limit of only grabbing 1000 users.
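In context, the addition looks like this (the surrounding lines are taken from the linked example):
$ds.Filter = '(&(objectCategory=person)(objectClass=user))'
$ds.PageSize = 1000                           # added line: pages the results so more than 1000 users come back
$ds.PropertiesToLoad.AddRange($properties)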
One domain (we'll call it domain1) has over 80,000 users. Another domain (we'll call this domain2) has over 200,000 users.
If I run the code against domain1, it takes roughly 12 minutes (which is fantastic compared to the 24 hours Get-QADUser was taking). However, after the script finishes, the PowerShell window is left holding about 500 MB of memory. Domain2 leaves about 1.5 GB behind.
Note: the memory leak with Get-QADUser is much, much worse. With it, domain2 leaves about 6 GB behind and takes roughly 72 hours to complete (versus less than an hour with the .NET classes).
The only way to free the memory is to close the PowerShell window. But what if I want to write a script that invokes all these domain scripts one after the other? I would run out of memory by the time I got to the 6th script.
The only thing I can think of is that New-Object invokes a constructor but there is no destructor (unlike Java). I've tried calling [System.GC]::Collect() during the loop iterations, but this has had no effect.
Is there a reason? Is it solvable or am I stuck with this?
One thing to note: If there actually is a memory leak, you can just run the script in a new shell:
powershell { <# your code here #> }
As for the leak: as long as any variable still references an object that holds on to large data, that object cannot be collected. You may have luck using a memory profiler to look at what is still in memory and why. As far as I can see, though, if you put that code in a script file and execute the script (with &, not by dot-sourcing it with .), this shouldn't really happen.
I guess it's $table that is using all the memory. Why don't you write the collected data directly to a file, instead of adding it to the array?
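If it is indeed $table holding everything, here is a hedged sketch of streaming the results straight to CSV and disposing the search objects when done (variable names follow the linked example; the property list is illustrative):
$results = $ds.FindAll()
try {
    $results | ForEach-Object {
        # project only the properties you need and let the pipeline stream them to disk
        New-Object PSObject -Property @{
            SamAccountName = $_.Properties['samaccountname'][0]
            DisplayName    = $_.Properties['displayname'][0]
        }
    } | Export-Csv -Path .\users.csv -NoTypeInformation
}
finally {
    $results.Dispose()          # SearchResultCollection holds unmanaged resources until disposed
    $ds.Dispose()
}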

Sharing variables/data between PowerShell processes

I would like to come up with a mechanism by which I can share 'data' between different PowerShell processes. This would be in order to implement a kind of job system, whereby a function can be run in one PowerShell process, complete, and then somehow communicate its status to a function run from another (distinct) PowerShell process...
I guess what I'd ideally like is for PSJob results to be shareable between sessions, but this does not seem to be possible.
I can think of a few dirty ways of achieving this (like O/S environment variables), but am I missing a semi-elegant way?
For example:
Function giveMeNumber
{
    $return_value = Get-Random -Minimum -100 -Maximum 100
    Return $return_value
}
What are some ways I could get this function to store its return value somewhere and then grab it from another PowerShell session (without using a database)?
Cheers.
The Q&A mentioned by Keith refers to using MSMQ, a message queuing feature optionally available on Microsoft desktop, mobile, and server OSes.
It doesn't run by default on desktop OSes, so you would have to ensure the appropriate service is started. It seems like serious overkill to me unless you want something pretty beefy.
Of course, the most common choice for this type of task would be a simple shared file.
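For the shared-file route, a minimal sketch using CLIXML serialization (the temp-file path is just an example):
# Process A defines the function and writes its result:
Function giveMeNumber
{
    Get-Random -Minimum -100 -Maximum 100
}
giveMeNumber | Export-Clixml -Path "$env:TEMP\giveMeNumber.xml"

# Process B (a completely separate powershell.exe) reads it later:
$value = Import-Clixml -Path "$env:TEMP\giveMeNumber.xml"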
Alternatively, you could create a TCP listener in each of the jobs that you want to accept external info. I haven't done this myself in PowerShell, though I know it is possible; Node.js or Python would be a more familiar environment for it. It seems like overkill if a shared file would do the job!
Another way would be to use the registry. Though you might consider that cheating since it is actually a database (of a very broken and simplistic sort).
I'm actually not sure that environment variables would work, since I know they can be picky about the parent environment scope (for example, setting an env variable in a cmd session doesn't make it available outside that session by default).
UPDATE: Doh, missed a few, some of them very obvious. Microsoft has a list:
Clipboard
COM
Data Copy
DDE
File Mapping
Mailslots
Pipes
RPC
Windows Sockets
Pipes was the one I was trying to remember. Windows sockets would be similar to a TCP listener.
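Since pipes came up, here is a rough named-pipe sketch using System.IO.Pipes (.NET 3.5+); the pipe name 'PsJobStatus' is made up, and giveMeNumber is assumed to be defined in the writer process:
# Writer process:
$server = New-Object System.IO.Pipes.NamedPipeServerStream -ArgumentList 'PsJobStatus', ([System.IO.Pipes.PipeDirection]::Out)
$server.WaitForConnection()                 # blocks until a reader connects
$writer = New-Object System.IO.StreamWriter -ArgumentList $server
$writer.WriteLine((giveMeNumber))
$writer.Flush()
$writer.Dispose()                           # also closes the pipe

# Reader process:
$client = New-Object System.IO.Pipes.NamedPipeClientStream -ArgumentList '.', 'PsJobStatus', ([System.IO.Pipes.PipeDirection]::In)
$client.Connect()                           # blocks until the writer is listening
$reader = New-Object System.IO.StreamReader -ArgumentList $client
$value = $reader.ReadLine()
$reader.Dispose()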

How can I avoid zombies in Perl CGI scripts run under Apache 1.3?

Various Perl scripts (Server Side Includes) are calling a Perl module with many functions on a website.
EDIT:
The scripts are using use lib to reference the libraries from a folder.
During busy periods the scripts (not the libraries) become zombies and overload the server.
The server lists:
319 ? Z 0:00 [scriptname1.pl] <defunct>
320 ? Z 0:00 [scriptname2.pl] <defunct>
321 ? Z 0:00 [scriptname3.pl] <defunct>
I have hundreds of instances of each.
EDIT:
We are not using fork, system or exec, apart from the SSI directive
<!--#exec cgi="/cgi-bin/scriptname.pl"-->
As far as I know, in this case httpd itself will be the owner of the process.
MaxRequestsPerChild is set to 0, which should not let the parent die before the child process is finished.
So far we have found that temporarily suspending some of the scripts helps the server cope with the defunct processes and keeps it from falling over; however, zombie processes are still forming without a doubt.
gbacon seems to be closest to the truth with his theory that the server is simply not able to cope with the load.
What could lead to httpd abandoning these processes?
Is there any best practice to prevent these from happening?
Thanks
Answer:
The point goes to Rob.
As he says, CGI scripts that generate SSI's will not have those SSI's handled. The evaluation of SSI's happens before the running of CGI's in the Apache 1.3 request cycle. This was fixed with Apache 2.0 and later so that CGI's can generate SSI commands.
Since we were running on Apache 1.3, every page view turned the SSI's into defunct processes. Although the server was trying to clear them, it was far too busy with the running tasks to succeed. As a result, the server fell over and became unresponsive.
As a short term solution we reviewed all SSI's and moved some of the processes to client side to free up server resources and give it time to clean up.
Later we upgraded to Apache 2.2.
More Band-Aid than best practice, but sometimes you can get away with a simple
$SIG{CHLD} = "IGNORE";
According to the perlipc documentation:
On most Unix platforms, the CHLD (sometimes also known as CLD) signal has special behavior with respect to a value of 'IGNORE'. Setting $SIG{CHLD} to 'IGNORE' on such a platform has the effect of not creating zombie processes when the parent process fails to wait() on its child processes (i.e., child processes are automatically reaped). Calling wait() with $SIG{CHLD} set to 'IGNORE' usually returns -1 on such platforms.
If you care about the exit statuses of child processes, you need to collect them (commonly referred to as "reaping") by calling wait or waitpid. Despite the creepy name, a zombie is merely a child process that has exited but whose status has not yet been reaped.
If your Perl programs themselves are the child processes becoming zombies, that means their parents (the ones that are forking-and-forgetting your code) need to clean up after themselves. A process cannot stop itself from becoming a zombie.
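For completeness, if your own code does fork child processes, the usual reaping handler from perlipc looks roughly like this (only relevant when the parent is your script, not httpd):
use POSIX ":sys_wait_h";
$SIG{CHLD} = sub {
    # reap every child that has exited, without blocking
    while ((my $child = waitpid(-1, WNOHANG)) > 0) {
        # $? holds the exit status of $child, if you need it
    }
};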
I just saw your comment that you are running Apache 1.3 and that may be associated with your problem.
SSI's can run CGI's. But CGI scripts that generate SSI's will not have those SSI's handled. The evaluation of SSI's happens before the running of CGI's in the Apache 1.3 request cycle. This was fixed with Apache 2.0 and later so that CGI's can generate SSI commands.
As I'd suggested above, try running your scripts on their own and have a look at the output. Are they generating SSI's?
Edit: Have you tried launching a trivial Perl CGI script to simply printout a Hello World type HTTP response?
Then if this works, add a trivial SSI directive such as
<!--#printenv -->
and see what happens.
Edit 2: Just realised what is probably happening. Zombies occur when a child process exits and isn't reaped. These processes are hanging around and slowly using up resources within the process table. A process without a parent is an orphaned process.
Are you forking off processes within your Perl script? If so, have you added a waitpid() call to the parent?
Have you also got the correct exit within the script?
CORE::exit(0);
As you have all the bits yourself, I'd suggest running the individual scripts one at a time from the command line to see if you can spot the ones that are hanging.
Does a ps listing show an inordinate number of instances of one particular script running?
Are you running the CGI's using mod_perl?
Edit: Just saw your comments regarding SSI's. Don't forget that SSI directives can run Perl scripts themselves. Have a look to see what the CGI's are trying to run?
Are they dependent on yet another server or service?