PowerShell script: -ErrorAction 'SilentlyContinue' and try/catch blocks slow script execution

(Resolved -- see the answer below: the slowdown was caused by testing with dummy usernames instead of production accounts, not by the try/catch or the error redirection.)
I've been editing https://github.com/dafthack/DomainPasswordSpray for a work project, with the intent of adding in logging. The script writes to a logfile about 6 times a second, which throws IOException errors because the file is still open ("in use by another process"). It's a non-terminating error.
I'm OK with the error and the potentially non-logged info. However, it clutters the console.
Since it's a non-terminating error, I tried Set-Content -ErrorAction 'SilentlyContinue', but the error is still displayed. There is no measurable performance impact -- just a cluttered console.
So, researching further, I tried a try/catch block, which does eliminate the error from the console, BUT the script now takes a tad over 2x as long to run.
Is there a 3rd alternative?
I can mitigate it by splitting the data list into two and running the script twice, concurrently. But it's less than ideal, as I'm already splitting the data list up to speed up the process. (A single list takes 2 hours, and 30m is the goal, so I'd need 8 windows to maintain current timing...)
Anyway, hope that makes sense. Any thoughts/input appreciated. (Attempting to copy the code to a machine where I can upload a portion here for those who want to review, but Gmail blocks it. Working on it.)
The code causing issues:
$FileLocation2 = "LOG-LastTested_$($Userlist)"
# Write out the last user tried
$tm = Get-Date
# This file is written to so rapidly that errors can occur because it's still open
# ("in use by another process"). SilentlyContinue is not working, so using try/catch,
# which suppresses the error (but is 2x slower).
Try {
    Write-Output "Num: $cu, Name: $name, Pass: $Password, Time: $tm" | Set-Content -ErrorAction 'Stop' $FileLocation2
}
Catch [System.IO.IOException] { continue }
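For reference, the other suppression route mentioned in the resolution below -- redirecting errors to null -- would look something like this (a minimal sketch: redirecting the error stream to $null discards the non-terminating error output without needing a try/catch):
# Redirect the error stream (stream 2) to $null so the IOException noise
# never reaches the console; the write itself is still attempted.
Write-Output "Num: $cu, Name: $name, Pass: $Password, Time: $tm" | Set-Content $FileLocation2 2>$null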

Resolved: The delay/issues I was having were not due to adding the try/catch or redirecting errors to null. They were a result of using usernames that were not real for testing the script, as opposed to actual/production names. The AD lookup times are obviously different -- I didn't think of that before.
I was testing the error suppression (try/catch / redirect to null) with a list of dummy users. When I checked the rate of testing, it had dropped from 10 users/second with production accounts to 3/s with dummy accounts. After reverting all the error suppression, it was still at the slower 3/s rate. So I put my try/catch lines back in and tried with production accounts, and it worked great -- 8-10 users/sec.
So the issue was due to the only change: using dummy accounts instead of production ones. Once I went back to all production accounts, WITH my error suppression, it worked like a champ. Hope that helps someone else.
Thanks to #RetiredGeek and #Mr.Sven who kept me testing and looking for a solution.

Related

How to debug further this dropped record in Apache Beam?

I am seeing intermittently dropped records (only for error messages, not for successes). We have a test case that intermittently fails/passes because of a lost record. We are using "org.apache.beam.sdk.testing.TestPipeline.java" in the test case. This is the relevant setup code, to which I have tracked the dropped record ...
PCollectionTuple processed = records
    .apply("Process RosterRecord", ParDo.of(new ProcessRosterRecordFn(factory))
        .withOutputTags(TupleTags.OUTPUT_INTEGER, TupleTagList.of(TupleTags.FAILURE))
    );
errors = errors.and(processed.get(TupleTags.FAILURE));
PCollection<OrderlyBeamDto<Integer>> validCounts = processed.get(TupleTags.OUTPUT_INTEGER);
PCollection<OrderlyBeamDto<Integer>> errorCounts = errors
    .apply("Flatten Roster File Error Count", Flatten.pCollections())
    .apply("Publish Errors", ParDo.of(new ErrorPublisherFn(factory)));
The relevant code in ProcessRosterRecordFn.java is this
if (dto.hasValidationErrors()) {
    RosterIngestError error = new RosterIngestError(record.getRowNumber(), record.toTitleValue());
    error.getValidationErrors().addAll(dto.getValidationErrors());
    error.getOldValidationErrors().addAll(dto.getOldValidationErrors());
    log.info("Tagging record row number=" + record.getRowNumber());
    c.output(TupleTags.FAILURE, new OrderlyBeamDto<>(error));
    return;
}
I see this "Tagging record row number" log for both of the 2 rows that fail. After that, however, the first line of ErrorPublisherFn.java logs immediately after receiving each message, and we only receive 1 of the 2 rows SOMETIMES. When we receive both, the test passes. The test is very flaky in this regard.
Apache Beam is really annoying in its naming of threads (they all have the same name), so I added a logback thread hashcode to get more insight, but I don't see anything useful, and the ErrorPublisherFn could publish #4 on any thread anyway.
Ok, so now the big question: How to insert more things to figure out why this is being dropped INTERMITTENTLY?
Do I have to debug Apache Beam itself? Can I insert other functions or make changes to figure out why this error is 'sometimes' lost on some test runs and not others?
EDIT: Thankfully, this set of tests does not exercise errors from upstream, so the line "errors = errors.and(processed.get(TupleTags.FAILURE));" can be removed, which forces me to also remove ".apply("Flatten Roster File Error Count", Flatten.pCollections())". With those 2 lines removed, the issue goes away for 10 test runs in a row (i.e., I can't completely say it is gone, with this flaky stuff going on). Are we doing something wrong in the join and flattening? I checked the Error structure: rowNumber is part of equals and hashCode, so there should be no duplicates, and I am not sure why it would fail intermittently if there were duplicate objects either.
What more can be done to debug here and figure out why this join is not working in the TestPipeline?
How to get insight into the flatten and join so I can debug why we are losing an event and why it is only 'sometimes' we lose the event?
Is this a windowing issue, even though our job started with a file to read in and we want to process that file? We wanted a constant Dataflow stream available, as Google kept running into limits, but perhaps this was the wrong decision?

How can I detach process from Elixir System.cmd/2 before command ends?

I'm trying to run this (script.exs):
System.cmd("zsh", ["-c" "com.spotify.Client"])
IO.puts("done.")
Spotify opens, but "done." never shows up on the screen. I also tried:
System.cmd("zsh", ["-c" "nohup com.spotify.Client &"])
IO.puts("done.")
My script only halts when I close the Spotify window. Is it possible to run commands without waiting for them to end?
One should not spawn system tasks in the blind hope that they will work properly. If the system task crashes, some action should be taken in the calling OTP process; otherwise, sooner or later it'll crash in production and nobody will know what happened and why.
There are many possible scenarios; I would go with Task.start_link/1 (assuming the calling process traps exits) or with Task.async/1 accompanied by an explicit Task.await/1 somewhere in the supervision tree.
If, despite everything explained above, you don't care about robustness, use Kernel.spawn/3 like below (note that System.cmd/2 takes the argument list as a single list, so the MFA arguments are the command and a list of its arguments):
pid = spawn(System, :cmd, ["zsh", ["-c", "com.spotify.Client"]])
# Process.monitor(pid) # yet give it a chance to handle errors
IO.puts("done.")

How do I display specific error messages

Full Code - Pastebin.com
I am attempting to refine my PowerShell script by adding in error messages if they appear.
I am planning for
If a Username already exists in our AD environment
If the "NewUsername" text box is left blank
These are just the first 2 examples I can see causing a problem going forward.
I have been researching around, and I believe I should be using try/catch blocks. However, I am unsure of how to incorporate them and how to pull specific error messages.
Do I start by adding the try/catch blocks at this location in the code?
$CreateAccount = New-Object System.Windows.Forms.Button
$CreateAccount.Text = "Create Account"
$CreateAccount.Width = 127
$CreateAccount.Height = 27
$CreateAccount.Add_MouseClick({
    # add here code triggered by the event
    try
    {
        # code to actually run here
    }
    catch
    {
        # code to display error messages if any come up
    }
})
If so, can someone point me in the right direction to grab the 2 error types so I can display them to the user running the script?
I feel I must also point out that I convert the .ps1 script to an .exe file with no console showing when it is run. Will this cause issues for that?
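A sketch of how those two cases might be handled, with the caveat that everything here is an assumption about the surrounding script: the textbox is $NewUsername, accounts are created with New-ADUser from the ActiveDirectory module, and messages are shown via MessageBox (since the compiled .exe has no console to write to):
$CreateAccount.Add_MouseClick({
    # A blank textbox is ordinary validation, not an exception -- check it up front.
    if ([string]::IsNullOrWhiteSpace($NewUsername.Text)) {
        [System.Windows.Forms.MessageBox]::Show("Please enter a username.")
        return
    }
    try {
        # -ErrorAction Stop turns non-terminating errors into catchable exceptions.
        New-ADUser -Name $NewUsername.Text -SamAccountName $NewUsername.Text -ErrorAction Stop
        [System.Windows.Forms.MessageBox]::Show("Account created.")
    }
    catch [Microsoft.ActiveDirectory.Management.ADIdentityAlreadyExistsException] {
        # Thrown when the account already exists in AD.
        [System.Windows.Forms.MessageBox]::Show("That username already exists in AD.")
    }
    catch {
        # Anything unexpected: fall back to the raw exception message.
        [System.Windows.Forms.MessageBox]::Show("Error: $($_.Exception.Message)")
    }
})
To find the exact exception type to catch for any other case, trigger the error once and inspect $Error[0].Exception.GetType().FullName.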

PowerShell download page in 0 milliseconds

I have a strange problem with my script in PowerShell: I want to measure the average time to download a page. I wrote a script which fires frequently, but sometimes it returns 0, which would mean it downloaded the site in 0 ms. If I modify my script to save the whole site to a file when the download time is about 0 ms, it doesn't save anything. I'm wondering if I'm doing something wrong, or if the PowerShell function isn't accurate enough to measure such "small" times.
P.S. The other "good" results are about 4-9 ms.
Here is the part of my script responsible for measuring the download time:
$StartTime = Get-Date
$PageDownload = $Request.DownloadString("mypage.com")
$TimeTaken = ((Get-Date) - $StartTime).TotalMilliseconds
Get-Date should be as precise as the system clock is.
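As an aside (not from the original answer): DateTime-based timing is typically limited by the system timer's resolution (often around 10-15 ms on Windows), so fast downloads can round to 0. A Stopwatch-based sketch, assuming $Request is a System.Net.WebClient as the DownloadString call suggests:
# Stopwatch uses the high-resolution performance counter rather than the system clock.
$Request = New-Object System.Net.WebClient
$sw = [System.Diagnostics.Stopwatch]::StartNew()
$PageDownload = $Request.DownloadString("http://mypage.com")
$sw.Stop()
$TimeTaken = $sw.Elapsed.TotalMilliseconds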
There could be web caching going on. Unfortunately, disabling caching for WebClient is not possible, from what I see elsewhere. The "do it right" method is to construct your own HTTP request with the TcpClient class, but that's also pretty complex.
One easy way to make sure you're not being cached is to put an arbitrary, unused value in the query string. It's a hack, but it is often enough to fool a cache. So, instead of:
"http://mypage.com"
You use:
"http://mypage.com?someUnusedValueName=$([System.Environment]::TickCount)"

Perl CGI gets parameters from a different request to the current URL

This is a weird one. :)
I have a script running under Apache 1.3, with the Apache::PerlRun option of mod_perl. It uses the standard CGI.pm module. It's a regularly accessed script on a busy server, accessed over https.
The URL is typically something like...
/script.pl?action=edit&id=47049
Which is then brought into Perl the usual way...
my $action = $cgi->param("action");
my $id = $cgi->param("id");
This has been working successfully for a couple of years. However we started getting support requests this week from our customers who were accessing this script and getting blank pages. We already had a line like the following that put the current URL into a form we use for customers to report an issue about a page...
$cgi->url(-query => 1);
And when we view source of the page, the result of that command is the same URL, but with an entirely different query string.
/script.pl?action=login&user=foo&password=bar
A query string that we recognise as being from a totally different script elsewhere on our system.
However crazy it sounds, it seems that when users are accessing a URL with a query string, the query string that the script is seeing is one from a previous request on another script. Of course the script can't handle that action and outputs nothing.
We have some automated test scripts running to see how often this happens, and it's not every time. To throw some extra confusion into the mix, after an Apache restart, the problem seems to initially disappear completely only to come back later. So whatever is causing it is somehow relieved by a restart, but we can't see how Apache can possibly take the request from one user and mix it up with another.
This, it appears, is an interesting combination of Apache 1.3, mod_perl 1.31, CGI.pm and Apache::GTopLimit.
A bug was logged against CGI.pm in May last year: RT #57184
which also references "CGI.pm params not being cleared?".
CGI.pm registers a cleanup handler in order to clean up all of its cache ... (line 360):
$r->register_cleanup(\&CGI::_reset_globals);
Apache::GTopLimit (like Apache::SizeLimit mentioned in the bug report) also has a handler like this:
$r->post_connection(\&exit_if_too_big) if $r->is_main;
In pre-1.31 mod_perl, post_connection and register_cleanup appear to push onto a stack, while in 1.31 it appears that the GTopLimit handler clobbers the CGI.pm entry. So if your GTopLimit function fires because the Apache process has grown too large, then CGI.pm won't be cleaned up, leaving it open to returning the same parameters the next time you use it.
The solution seems to be to change line 360 of CGI.pm to:
$r->push_handlers( 'PerlCleanupHandler', \&CGI::_reset_globals);
Which explicitly pushes the handler onto the list.
Our restart of Apache temporarily resolved the problem because it reduced the size of all the processes and gave GTopLimit no reason to fire.
And we assume it has appeared over the past few weeks because we have increased the size of the Apache processes through new developments that included something that wasn't there before.
All tests so far point to this being the issue, so fingers crossed it is!