I need to create some kind of script/runnable file for monitoring my Freeswitch PBX on Windows Server 2012 that:
checks the number of calls every ~5 s and writes it into a file,
checks the % of CPU usage at that point and writes it too (into a second column).
For the first part, I figured out how to check the actual number of calls flowing through:
fs_cli.exe -x "show calls count" > testlog.txt
but I have to do this manually, and it always overwrites the previous result. I need the script to do this automatically every 5 s until I stop it.
fs_cli.exe -x "show calls count" >> testlog.txt
(note the additional >) will append the text to the file instead of overwriting it.
You can write a loop like this in PowerShell:
# never-ending loop; the condition is always true
while ($true) {
    # log the current call count
    fs_cli.exe -x "show calls count" >> testlog.txt
    # log a timestamp and the current CPU load
    Get-Date >> c:\cpu_usage.txt
    (Get-WmiObject Win32_Processor).LoadPercentage >> c:\cpu_usage.txt
    # sleep for 5 seconds
    Start-Sleep -Seconds 5
}
I observe a strange behaviour when calling an external script using & cmd /c myscript.cmd.
When quite a lot of errors (say 200) are raised before the external script is called, the external call is not handled properly: the batch file is not executed at all.
When the number of errors is low (say 10), it works normally (the external batch is called).
Is this a bug? Here is the PoC: change the value of $numberOfError to different values (10, then 200) and watch the output change.
PoC
# Create a temporary batch file
Set-Content foo.cmd '@echo THE BATCH SCRIPT HAS BEEN CALLED !' -Encoding ASCII
$numberOfError = 10
# $numberOfError = 200 # if high value (sometimes only 40 is enough,
# other times > 140 do the trick)
for ($i = 0; $i -lt $numberOfError; $i++) {
    Rename-Item c:\not_existing.txt c:\blah.txt # just used to raise an error
}
Write-Host "Before call"
# the below line won't be called if $numberOfError is >= 87
# (might be a higher value on some other machines maybe)
& cmd /c foo.cmd
Write-Host "Still there"
Here is the kind of output I am getting:
When $numberOfError = 10
[... Bunch of Error Messages ...]
Before call
THE BATCH SCRIPT HAS BEEN CALLED ! <------ THIS LINE CHANGES
Still there
When $numberOfError = 200
[... Bunch of Error Messages ...]
Before call
Still there
Please let me know if this is a known PowerShell bug; I know I could handle the errors, but this behaviour doesn't seem normal to me.
P.S.: Tested only with PowerShell v2.0 binaries.
For instance, I am creating reboot scripts for about 400 servers that reboot every night. I already have the task-scheduler portion done with a script.
What I need is how to insert "shutdown /r /m \\servername-001 /f" into a file called "servername-001_Reboot.bat",
and, in the same script, change the 001 to 002 for the next batch file, and so on and so forth.
Unless someone has a more efficient way of building an automated reboot schedule.
Have a variable $ServerNumber = 1 and inside your for-loop just keep incrementing it.
The filename for the batchfile can be made like so:
$filename = "servername-$( "{0:D3}" -f $ServerNumber )_Reboot.bat"
(Read here to learn more about formatting numbers in PowerShell.)
Any variable holding a string, let's call it $str, can be piped to a file:
$str | Out-File "path/to/file/$filename"
Use the -Append flag if you want to append to the file if it already exists (as opposed to overwriting it).
I’m trying to use AgeStore to remove some expired symbol files. I’ve written a Powershell script in which the AgeStore command works sometimes, but, not always.
For example, my symbol store contains symbol files dating back to 2010. I’d like to clean out the “expired” symbols because they are no longer needed. To that end, I use the -date command line argument to specify “-date=10-01-2010”. Additionally, I use the “-l” switch to force AgeStore to
Causes AgeStore not to delete any files, but merely to list all the
files that would be deleted if this same command were run without the
-l option.
Here’s a snippet of the script code that runs…
$AgeStore = "$DebuggingToolsPath\AgeStore"
$asArgs = "`"$SymbolStorePath`" -date=$CutoffDate -s -y "
if ($WhatIf.IsPresent) { $asArgs += "-l" }
# determine size of the symbol store before delete operation.
Write-Verbose ">> Calculating current size of $SymbolStorePath before deletion.`n" -Verbose
">> $SymbolStorePath currently uses {0:0,0.00} GB`n" -f (((Get-ChildItem -R $SymbolStorePath | measure-object length -Sum ).Sum / 1GB))
Write-Verbose ">> Please wait...processing`n`n" -Verbose
& $AgeStore $asArgs
When the above code runs, it returns the following output…
processing all files last accessed before 10-01-2010 12:00 AM
0 bytes would be deleted
The program 'RemoveOldDebugSymbols.ps1: PowerShell Script' has exited
with code 0 (0x0).
I have verified that there are symbol files with dates earlier than “10-01-2010” in the symbol store. I’ve subsequently tried the same experiment with a different cutoff date, “11-01-2015” and the output indicates that there are several files it would have deleted, but, not those that are from 2010. I’m at a loss as to what may cause the discrepancy.
Has anyone tried to delete symbol files from a symbol store using AgeStore? If so, have you run into this problem? How did you resolve it?
I’ve tried to resolve this many different ways using AgeStore. For the sake of moving forward with a project, I’ve decided to rewrite the script to use the SymStore command with a delete transaction. Basically, I created a list of the debug symbol transactions that should be removed and wrote a loop that iterates over the list and deletes each entry one at a time.
Hope this is helpful for anyone who runs into the same problems.
EDIT: Per request... I cannot post the entire script, but I used the following code in a loop as a replacement for the AgeStore command.
$ssArgs = ".\symstore.exe del /i $SymbolEntryTransactionID /s `"$SymbolStorePath`""
Invoke-Expression $ssArgs
I have a Perl script which calls 'gsutil cp' to copy a selected file from GCS to a local folder:
$cmd = "[bin-path]/gsutil cp -n gs://[gcs-file-path] [local-folder]";
$output = `$cmd 2>&1`;
The script is called via HTTP and hence can be initiated multiple times (e.g. by double-clicking on a link). When this happens, the local file can end up being exactly double the correct size, and hence obviously corrupt. Three things appear odd:
1. gsutil seems not to be locking the local file while it is writing to it, allowing another thread (in this case another instance of gsutil) to write to the same file.
2. The '-n' option seems to have no effect. I would have expected it to prevent the second instance of gsutil from attempting the copy action.
3. The MD5 signature check is failing: normally gsutil deletes the target file if there is a signature mismatch, but this is clearly not always happening.
The files in question are larger than 2MB (typically around 5MB) so there may be some interaction with the automated resume feature. The Perl script only calls gsutil if the local file does not already exist, but this doesn't catch a double-click (because of the time lag for the GCS transfer authentication).
gsutil version: 3.42 on FreeBSD 8.2
Anyone experiencing a similar problem? Anyone with any insights?
Edward Leigh
1) You're right, I don't see a lock in the source.
2) This can be caused by a race condition - Process 1 checks, sees the file is not there. Process 2 checks, sees the file is not there. Process 1 begins upload. Process 2 begins upload. The docs say this is a HEAD operation before the actual upload process -- that's not atomic with the actual upload.
3) No input on this.
You can fix the issue by having your script maintain an atomic lock of some sort on the file prior to initiating the transfer - i.e. your check would be something along the lines of:
use Lock::File qw(lockfile);

if (my $lock = lockfile("$localfile.lock", { blocking => 0 })) {
    ... perform transfer ...
    undef $lock;
}
else {
    die "Unable to retrieve $localfile, file is locked";
}
1) gsutil doesn't currently do file locking.
2) -n does not protect against other instances of gsutil run concurrently with an overlapping destination.
3) Hash digests are calculated on the bytes as they are being downloaded as a performance optimization. This avoids a long-running computation once the download completes. If the hash validation succeeds, you're guaranteed that the bytes were written successfully at one point. But if something (even another instance of gsutil) modifies the contents in-place while the process is running, the digesters will not detect this.
Thanks to Oesor and Travis for answering all points between them. As an addendum to Oesor's suggested solution, I offer this alternative for systems lacking Lock::File:
use Fcntl ':flock';    # import LOCK_* constants

# if the lock file exists ...
if (-e $lockFile)
{
    # abort if the lock file is still locked (or sleep and re-check)
    abort() if !unlink($lockFile);
    # otherwise delete the local file and download again
    unlink($filePath);
}

# if the file has not been downloaded already ...
if (!-e $filePath)
{
    $cmd = "[bin-path]/gsutil cp -n gs://[gcs-file-path] [local-dir]";
    abort() if !open(LOCKFILE, ">$lockFile");
    flock(LOCKFILE, LOCK_EX);
    my $output = `$cmd 2>&1`;
    flock(LOCKFILE, LOCK_UN);
    close(LOCKFILE);
    unlink($lockFile);
}
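As a further shell-side alternative, the same mutual exclusion can be had without any Perl module by wrapping the copy in the flock(1) utility (util-linux; FreeBSD, the asker's platform, ships a similar lockf(1)). A minimal sketch, keeping the question's placeholder paths:

```shell
#!/bin/sh
# Sketch: flock -n takes an exclusive, non-blocking lock on the lock file
# for the duration of the wrapped command; a second concurrent invocation
# fails immediately instead of writing to the same local file.
if flock -n /tmp/gcs-download.lock -c \
    "[bin-path]/gsutil cp -n gs://[gcs-file-path] [local-folder]"
then
    echo "transfer finished"
else
    echo "another instance holds the lock; skipping" >&2
fi
```

The lock is released automatically when the wrapped command exits, so there is no stale-lock cleanup to do as in the unlink-based Perl version above.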
I have a program that performs some operations on a specified log file, flushing to the disk multiple times for each execution. I'm calling this program from a perl script, which lets you specify a directory, and I run this program on all files within the directory. This can take a long time because of all the flushes.
I'd like to execute the program and run it in the background, but I don't want the pipeline to be thousands of executions long. This is a snippet:
my $command = "program $data >> $log";
myExecute("$command");
myExecute basically runs the command using system(), along with some other logging/printing functions. What I want to do is:
my $command = "program $data & >> $log";
This will obviously create a large pipeline. Is there any way to limit how many background executions are present at a time (preferably using &)? (I'd like to try 2-4).
#!/bin/bash
#
# lets call this script "multi_script.sh"
#
# wait until there are fewer than 4 instances running,
# polling at 5-second intervals
while [ "$(pgrep -c program)" -ge 4 ]; do sleep 5; done
/path/to/program "$1" &
Now call it like this:
my $command = "multi_script.sh $data >> $log";
Your perl script will wait if the bash script waits.
Positives:
If a process crashes, it will be replaced (its data goes, of course, unprocessed).
Drawbacks:
Your perl script needs to wait a moment between starting instances (maybe a sleep of a second), because of the latency between invoking the script and passing the while-loop test. If you spawn them too quickly (system spamming), you will end up with many more processes than you bargained for.
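One way to close that gap between invoking the script and passing the pgrep test, assuming the flock(1) utility from util-linux is available, is to serialise the check-and-spawn step so rapid invocations queue on a lock instead of all passing the test at once:

```shell
#!/bin/bash
# Hypothetical variant of multi_script.sh: an exclusive flock around the
# check-and-spawn step means concurrent invocations wait on the lock and
# cannot all see "fewer than 4 running" at the same moment.
(
    flock 9
    while [ "$(pgrep -c program)" -ge 4 ]; do sleep 5; done
    # close fd 9 for the child so the lock is released when this
    # subshell exits, not when the spawned program itself does
    /path/to/program "$1" 9>&- &
) 9> /tmp/multi_script.lock
```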
If you are able to change
my $command = "program $data & >> $log";
into
my $command = "cat $data >> /path/to/datafile";
(or even better: append $data to /path/to/datafile directly from perl),
and make the last line of your script, once all data has been queued:
system("/path/to/quadslotscript.sh");
then the script quadslotscript.sh below does the rest:
4 execution slots are started and stay alive until the end
all slots get their input from the same datafile
when a slot is done processing, it reads the next entry, until the datafile/queue is empty
no process-table lookup during execution, only when all work is done.
the code:
#!/bin/bash
# use the datafile as a queue from which all processes read their input
exec 3< "/path/to/datafile"
LOG=/path/to/logfile
# 4 separate worker processes sharing input descriptor 3
while read -u 3 DATA; do /path/to/program "$DATA" >> "$LOG"; done &
while read -u 3 DATA; do /path/to/program "$DATA" >> "$LOG"; done &
while read -u 3 DATA; do /path/to/program "$DATA" >> "$LOG"; done &
while read -u 3 DATA; do /path/to/program "$DATA" >> "$LOG"; done &
# only exit when 100% sure that all processes have ended
while pgrep program &> /dev/null; do wait; done
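As an alternative sketch, assuming one $data entry per line of the datafile, xargs can provide the same four bounded slots without hand-rolled workers or the final pgrep loop:

```shell
#!/bin/bash
# Sketch: -P 4 caps concurrency at four processes, -n 1 hands each line
# of the datafile to one invocation, and xargs itself only returns once
# every child has exited, so no polling is needed at the end.
xargs -P 4 -n 1 /path/to/program < /path/to/datafile >> /path/to/logfile
```

The trade-off is that xargs performs its own word splitting, so entries containing spaces need -d '\n' (GNU xargs) or pre-quoting.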