AgeStore Fails to Remove Expired Debug Symbol Files - PowerShell

I'm trying to use AgeStore to remove some expired symbol files. I've written a PowerShell script in which the AgeStore command works sometimes, but not always.
For example, my symbol store contains symbol files dating back to 2010. I'd like to clean out the "expired" symbols because they are no longer needed. To that end, I use the -date command-line argument to specify "-date=10-01-2010". Additionally, I use the "-l" switch, which per the documentation:
Causes AgeStore not to delete any files, but merely to list all the files that would be deleted if this same command were run without the -l option.
Here’s a snippet of the script code that runs…
$AgeStore = "$DebuggingToolsPath\AgeStore"
$asArgs = "`"$SymbolStorePath`" -date=$CutoffDate -s -y "
if ($WhatIf.IsPresent) { $asArgs += "-l" }
# determine size of the symbol store before delete operation.
Write-Verbose ">> Calculating current size of $SymbolStorePath before deletion.`n" -Verbose
">> $SymbolStorePath currently uses {0:0,0.00} GB`n" -f (((Get-ChildItem -R $SymbolStorePath | measure-object length -Sum ).Sum / 1GB))
Write-Verbose ">> Please wait...processing`n`n" -Verbose
& $AgeStore $asArgs
When the above code runs, it returns the following output…
processing all files last accessed before 10-01-2010 12:00 AM
0 bytes would be deleted
The program 'RemoveOldDebugSymbols.ps1: PowerShell Script' has exited
with code 0 (0x0).
I have verified that there are symbol files with dates earlier than "10-01-2010" in the symbol store. I subsequently tried the same experiment with a different cutoff date, "11-01-2015", and the output indicates that there are several files it would have deleted, but not the ones from 2010. I'm at a loss as to what may cause the discrepancy.
Has anyone tried to delete symbol files from a symbol store using AgeStore? If so, have you run into this problem? How did you resolve it?

I've tried to resolve this many different ways using AgeStore. For the sake of moving forward with the project, I decided to rewrite the script to use the SymStore command with a delete transaction. Basically, I created a list of the debug symbol transactions that should be removed and wrote a loop that iterates over the list, deleting each entry one at a time.
Hope this is helpful for anyone who runs into the same problems.
EDIT: Per request, I cannot post the entire script, but I used the following code in a loop as a replacement for the AgeStore command.
$ssArgs = ".\symstore.exe del /i $SymbolEntryTransactionID /s `"$SymbolStorePath`""
Invoke-Expression $ssArgs
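For context, here is a minimal sketch of how that loop might fit together. The 000Admin\server.txt transaction log and its field layout are assumptions based on a typical symbol store (and the parse assumes no field contains an embedded comma), so verify them against your store before deleting anything:
# Minimal sketch -- NOT the full script. Assumes the conventional
# 000Admin\server.txt log with fields: ID, add/del, file/ptr, MM/DD/YYYY date, ...
$cutoff    = Get-Date "10/01/2010"
$serverLog = Join-Path $SymbolStorePath "000Admin\server.txt"
foreach ($line in Get-Content $serverLog) {
    $fields = $line -split ","
    $SymbolEntryTransactionID = $fields[0]
    $txDate = [datetime]::ParseExact($fields[3], "MM/dd/yyyy", $null)
    if ($txDate -lt $cutoff) {
        # Delete this transaction from the store, one entry at a time.
        $ssArgs = ".\symstore.exe del /i $SymbolEntryTransactionID /s `"$SymbolStorePath`""
        Invoke-Expression $ssArgs
    }
}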

Related

PowerShell compare command for 2 folders containing corrupt folders

I am using the command below to compare two paths, and I get an error message when it gets to a folder whose name ends with a period, e.g. "Folder123."
When I manually try to open those folders I get an error, so I think they are corrupt. How can I skip all folders that end with a period, or at least ignore the errors so that my processing can finish?
Compare (Get-ChildItem -r Y:\Ftp\BFold\Final) (Get-ChildItem -r Y:\Dest\TFold\Temp)
You're getting that error because of the Naming Files, Paths, and Namespaces limitations in Windows. One or several of the tools you're using are not able to handle this special case:
Do not end a file or directory name with a space or a period. Although the underlying file system may support such names, the Windows shell and user interface does not. However, it is acceptable to specify a period as the first character of a name. For example, ".temp".
You could either filter the list of folders or use -ErrorAction to change what happens on an error. Depending on what you're seeing, the error might already be purely cosmetic.
For filtering, you could use Where-Object, for example with -NotMatch ".*\.$".
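Putting both suggestions together, a minimal sketch (paths taken from the question; untested against folders you cannot open) might look like:
# Suppress errors from unreadable entries and skip folders whose names
# end with a period, then compare what's left.
$left  = Get-ChildItem -Recurse Y:\Ftp\BFold\Final -ErrorAction SilentlyContinue |
         Where-Object { $_.Name -notmatch '\.$' }
$right = Get-ChildItem -Recurse Y:\Dest\TFold\Temp -ErrorAction SilentlyContinue |
         Where-Object { $_.Name -notmatch '\.$' }
Compare-Object -ReferenceObject $left -DifferenceObject $right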

How should I write a Powershell script to execute a single program on multiple files?

I'm using Kalles Fraktaler on Windows 10 to render images of the Mandelbrot set. Bundled with KF is a program that takes a single parameter file and breaks it into multiple tiles for easier rendering.
The output for the tiling program is multiple files with the following naming scheme: name-0000-0000.kfr, name-0000-0000.kfs, where the name can be anything and the numbers increment as needed.
The .kfr files are the parameter files.
The .kfs files are the settings files.
After I have these generated parameter and setting files, I can execute KF on the command line with the following arguments:
kf.exe -s name-0000-0000.kfs -l name-0000-0000.kfr -p name-0000-0000.png
Doing this for every pair of parameter and settings files works perfectly fine, taking the input files and saving the render to name-0000-0000.png.
I asked the developer for an example PowerShell script to automate the process for when there are dozens or more of the files that need to be rendered, and this is what he gave me. The script needs to be run from the same directory as the files are stored.
Get-ChildItem "." -Filter *.kfr |
Foreach-Object {
    $kfr = $_.FullName
    $kfs = $kfr.replace("kfr", "kfs")
    $png = $kfr.replace("kfr", "png")
    C:/path/to/kf.exe -s $kfs -l $kfr -p $png
}
Unfortunately, I've tried every variation of this script that I could think of, and nothing gives me any results. I have already allowed unsigned scripts to be run on my computer. I would greatly appreciate some help on this.
(PowerShell is nice and flexible, but only when you use it to invoke PowerShell commands rather than native executables. For example, to run a program in the current directory you need to prefix the program's name with ./ - ostensibly this is done for safety, and I assume for similarity to Unix shells, but it's the first in a long list of gotchas for anyone wanting to use PowerShell for tasks that would be trivial in old-school batch files.)
Anyway, you need to use Invoke-Command or Start-Process.
I've changed your script from using a piped expression into an easier-to-digest loop (and invoking .NET's Path.ChangeExtension directly because PowerShell's built-in string match-and-replace syntax is too arcane for me):
$kfrFiles = Get-ChildItem "." -Filter "*.kfr"
foreach ( $kfrFile in $kfrFiles ) {
    $kfr = $kfrFile.Name
    # Derive the matching settings and output file names from the .kfr name.
    $kfs = [System.IO.Path]::ChangeExtension( $kfrFile.Name, "kfs" )
    $png = [System.IO.Path]::ChangeExtension( $kfrFile.Name, "png" )
    Start-Process -FilePath "C:\path\to\kf.exe" -ArgumentList "-s $kfs", "-l $kfr", "-p $png" -Wait
}
The -Wait option will wait for the kf.exe program to finish before starting the next instance - otherwise, if you have hundreds of .kfr files, you'll end up with hundreds of kf.exe processes running concurrently.
I don't know how to allow concurrent processes but impose a limit on the maximum-number of concurrent processes in PowerShell. It is possible, just complicated.
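One crude approach, sketched below, is to poll the process list before each launch; the process name "kf" and the cap of 4 are assumptions for illustration:
# Launch renders without -Wait, but hold off whenever $maxConcurrent
# copies of kf.exe are already running.
$maxConcurrent = 4
foreach ( $kfrFile in Get-ChildItem "." -Filter "*.kfr" ) {
    while ( @(Get-Process -Name "kf" -ErrorAction SilentlyContinue).Count -ge $maxConcurrent ) {
        Start-Sleep -Seconds 1   # wait for a render slot to free up
    }
    $kfs = [System.IO.Path]::ChangeExtension( $kfrFile.Name, "kfs" )
    $png = [System.IO.Path]::ChangeExtension( $kfrFile.Name, "png" )
    Start-Process -FilePath "C:\path\to\kf.exe" -ArgumentList "-s $kfs", "-l $($kfrFile.Name)", "-p $png"
}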

colorgcc perl script with output to non-tty enabled writing to C dependency files

Ok, so here's my issue. I have written a build script in bash that pipes output to tee and sorts different output to different log files (so I can summarize errors/warnings at the end and get some statistics on files built). I wanted to use the colorgcc perl script (colorgcc.1.3.2) to colorize the output from gcc, but found elsewhere that this won't work when piping to tee, since the script checks whether it is writing to something that is not a tty. Having disabled this check, everything was working until I did a full build and discovered that some of the code we receive from another group builds C dependency files (we don't control this code; changing it or its build process isn't really an option).
The problem is that these .d files have the following form:
filename.o filename.d : filename.c \
dependant_file1.h \
dependant_file2.h (and so on for however many dependencies there are)
This output from GCC gets written into the .d file, but since it is close enough to a warning/error message, colorgcc outputs color codes (I believe it's the check for filename:lineno:message, but I'm not 100% sure; it could be the filename:message check in the GCCOUT while loop). I've tried editing the regex to not match this, but my perl-fu is admittedly pretty weak. So what I end up with is a color code on each line of these dependency files, which obviously causes the build to fail.
I ended up just replacing the check for ! -t STDOUT with a check for a NO_COLOR environment variable that I set and unset in the build script for these directories (this emulates the previous behavior of no color for non-tty output). This works great if I run the full script, but not if I cd into the directory and just run make (obviously setting and unsetting the variable manually would work, but that is a pain to do every time). Does anyone have any ideas how to prevent this script from writing color codes into dependency files?
Here's how I worked around this. I added the following to colorgcc to search the gcc arguments for the flag that generates the .d files, and in that case call the compiler directly. This was inserted in place of the original TTY check.
# If any argument asks gcc to generate dependency files (-M or -MM),
# skip colorization and hand off to the real compiler directly.
foreach $argnum (0 .. $#ARGV)
{
    if ($ARGV[$argnum] =~ m/-M{1,2}/)
    {
        exec $compiler, @ARGV
            or die("Couldn't exec");
    }
}
I don't know if this is the proper 'perl' way of doing this sort of operation, but it seems to work. Compiling inside directories that build .d files no longer inserts color codes, and the source file builds still get color (both to the terminal and to my log files, like I wanted). I guess sometimes the answer is more hacks instead of "hey, did you try giving up?".

Execute robocopy from PowerShell continuously between two established times

I have a program that creates temporary files in a specific folder. Then, automatically, after a few seconds, these files are deleted.
I want to copy those temporary files to a specific folder, and I would like to use a PowerShell script to do this:
robocopy startFolder destinationFolder *.TIFF *.JPEG *.jpg *.PNG *.GIF *.BMP *.ICO *.PBM *.PGM *.PPM /s /XO
My problem is that I can't use a scheduled task (because of its limitation on intervals in seconds), and installing the script as a Windows service is, as far as I know, bad practice. I need this script running all the time, grabbing the files the moment they are created, before they are deleted.
Could you give me a hand please? Thanks!
Not sure it's quite what you want, but robocopy does have directory-monitoring functionality built in. You could add /mon:1, which should monitor the source directory and re-run the copy when it detects one change (a new or changed file, for example).
However, a downside of this method is that robocopy won't exit - it will run until you kill it.
Edit: I've just noticed you specify in your question title that this should run between two established times, in which case you could add the /rh:hhmm-hhmm option to specify times between which new copies can be started. For example, /rh:1000-1200 should only perform the copies (and hence monitoring) between 10am and midday.
Caveat: I've not tried using the "monitor" option of robocopy, so I'm not sure what sort of delay there would be between a change taking place, and the copy being re-run, but it's worth a shot.
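For reference, combining both options with the command from the question would look something like this (untested, per the caveat above):
# Monitor the source folder for changes, but only start copy passes
# between 10:00 and 12:00.
robocopy startFolder destinationFolder *.TIFF *.JPEG *.jpg *.PNG *.GIF *.BMP *.ICO *.PBM *.PGM *.PPM /s /XO /mon:1 /rh:1000-1200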

Call a program from PowerShell with a very long, variable argument list?

I am currently trying to convert a series of batch files to PowerShell scripts. I would like to run a compiler on the source files that exist in a directory, recursively. The compiler requires a long list of arguments. The catch is, I want the arguments to be variable so I can change them as needed. This is a typical call from the batch file (simplified for readability and length):
"C:\PICC Compilers\picc18.exe" --pass1 "C:\Src Files\somefile.c" "-IC:\Include Files" "-IC:\Header Files" -P --runtime=default,+clear,+init,-keep,+download,+stackwarn,-config,+clib,-plib --opt=default,+asm,-speed,+space,9 --warn=0 --debugger=realice -Blarge --double=24 --cp=16 -g --asmlist "--errformat=Error [%n] %f; %l.%c %s" "--msgformat=Advisory[%n] %s" --OBJDIR="C:\Built Files" "--warnformat=Warning [%n] %f; %l.%c %s"
This command executes fine when included in a batch file, but I start getting errors when I copy and paste the command into PowerShell. This is only my second day working with PowerShell, but I have developed with .NET in the past. I have managed to scrape together the following attempt:
$srcFiles = Get-ChildItem . -Recurse -Include "*.c"
$srcFiles | % {
$argList = "--pass1 " + $_.FullName;
$argList += "-IC:\Include Files -IC:\Header Files -P --runtime=default,+clear,+init,-keep,+download,+stackwarn,-config,+clib,-plib --opt=default,+asm,-speed,+space,9 --warn=0 --debugger=realice -Blarge --double=24 --cp=16 -g --asmlist '--errformat=Error [%n] %f; %l.%c %s' '--msgformat=Advisory[%n] %s' '--warnformat=Warning [%n] %f; %l.%c %s"
$argList += "--OBJDIR=" + $_.DirectoryName;
&"C:\PICC Compilers\picc18.exe" $argList }
I know that I probably have multiple issues with the above code, namely how to pass arguments and how I am dealing with the quotes in the argument list. Incorrect as it is, it should illustrate what I am trying to achieve. Any suggestions on where to start?
Calling command-line applications from PowerShell can be really tricky. Several weeks ago @Jaykul wrote a great blog post, The problem with calling legacy/native apps from PowerShell, where he describes the gotchas people run into in these situations. And of course there is a solution too ;)
edit - correct url
The article is no longer available, so it can only be viewed through web.archive.org - see the cached article.
Make $argList an array instead of a string. A single string will always be passed as a single argument, which is not what you want here.
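For example, here is a sketch of the loop from the question with the arguments rebuilt as an array (argument values copied from the question; untested against the actual compiler):
$srcFiles = Get-ChildItem . -Recurse -Include "*.c"
$srcFiles | ForEach-Object {
    # Each array element becomes its own argument; PowerShell quotes
    # elements containing spaces when invoking the native executable.
    $argList = @(
        "--pass1", $_.FullName,
        "-IC:\Include Files", "-IC:\Header Files", "-P",
        "--runtime=default,+clear,+init,-keep,+download,+stackwarn,-config,+clib,-plib",
        "--opt=default,+asm,-speed,+space,9",
        "--warn=0", "--debugger=realice", "-Blarge",
        "--double=24", "--cp=16", "-g", "--asmlist",
        "--errformat=Error [%n] %f; %l.%c %s",
        "--msgformat=Advisory[%n] %s",
        "--warnformat=Warning [%n] %f; %l.%c %s",
        "--OBJDIR=$($_.DirectoryName)"
    )
    & "C:\PICC Compilers\picc18.exe" $argList
}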