How PowerShell handles returns from & calls

We have field devices that we manage with a PowerShell script to handle updates. It runs every 5 minutes and executes rsync to check whether it should download any new files. If the script sees any files of type .ps1, .exe, .bat, etc., it attempts to execute them using the & operator. After execution, the script writes the name of the executed file to an excludes file (so that rsync will not download it again) and removes the file. My problem is that the return from the executed code (called by &) behaves differently depending on how the main script is called.
This is the main 'guts' of the script:
Get-ChildItem -Path $ScriptDir\Installs\* -Include @("*.ps1","*.exe","*.cmd","*.VBS","*.MSI") | ForEach-Object {
    Write-Verbose "Executing: $_"
    & $_
    $CommandName = Split-Path -Leaf $_
    Write-Verbose "Adding $CommandName to rsync excludes"
    Write-Output "$CommandName" | Out-File -FilePath $ScriptDir\excludes -Append -Encoding ASCII
    Write-Verbose "Deleting '$_'"
    Remove-Item $_
}
When invoking PowerShell (powershell.exe -ExecutionPolicy bypass) and then executing the script (.\Update.ps1 -Verbose), the script runs perfectly (i.e. the file is written to excludes and deleted) and you can see the verbose output (writing and deleting).
If you run the following (similar to Task Scheduler): powershell.exe -ExecutionPolicy bypass -NoProfile -File "C:\Update.ps1" -Verbose, you can see the new script get executed, but none of the steps afterwards run (i.e. no adding to excludes, no removing the file, and no verbose output).
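One workaround I'm considering (a sketch only; the parameter list is my guess, not part of the original script): run each downloaded .ps1 in its own child PowerShell process, so an exit or terminating error inside the payload script cannot stop the update loop in the parent.
Get-ChildItem -Path "$ScriptDir\Installs\*" -Include @("*.ps1") | ForEach-Object {
    Write-Verbose "Executing: $_"
    # Run the payload in a separate process; an 'exit' inside it only
    # ends that process, not this loop.
    Start-Process -FilePath 'powershell.exe' `
        -ArgumentList '-NoProfile', '-ExecutionPolicy', 'Bypass', '-File', "`"$($_.FullName)`"" `
        -Wait -NoNewWindow
    # ...excludes bookkeeping and Remove-Item continue here as before
}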

Related

Workflow not working when called by another script

I have a script, let's call it A.ps1, that uses a workflow called Copy-Parallel (the workflow is defined at the beginning of the script). I created a GUI.ps1 script that I use to easily select between multiple scripts; it calls the desired script with Start-Process. Finally, I created a shortcut so I could run the GUI by double-click.
All resources are located on a server, and the two scripts are in the same folder called Res. The shortcut is located in the same folder as Res. When we use the shortcut, the script works fine for me, but other users get the "term 'Copy-Parallel' is not recognized as the name of a cmdlet, function..." error.
We tried to run the A.ps1 script directly and it works fine for me and the other users. We tried running GUI.ps1 via a .bat file; the results were similar to those with the shortcut.
A.ps1:
workflow Copy-Parallel {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory)]
        [array]
        $Files,

        [Parameter(Mandatory)]
        [string]
        $Destination
    )
    foreach -parallel -throttlelimit 20 ($File in $Files) {
        Copy-Item -LiteralPath $File -Destination $Destination -Force
    }
}
Copy-Parallel -Files $SourceFiles.FullName -Destination $DestinationPath
GUI.ps1 calls the script as follows:
$ScriptToRun = "$PSScriptRoot\A.ps1"
Start-Process PowerShell.exe -ArgumentList $ScriptToRun -NoNewWindow
The target for the shortcut that starts the GUI is: %windir%\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -WindowStyle Hidden -NonInteractive -ExecutionPolicy bypass -File "Res\GUI.ps1"
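A sketch of a more defensive invocation I could try (my guess is that the cause is the caller's working directory and the unquoted path, which is not confirmed): pass an explicit -File argument with a quoted absolute path, so the call does not depend on where PowerShell was started from and does not break on spaces in the server path.
$ScriptToRun = "$PSScriptRoot\A.ps1"
# Quote the path and use -File explicitly so spaces in the path
# cannot split the argument for other users.
Start-Process -FilePath 'powershell.exe' `
    -ArgumentList '-NoProfile', '-ExecutionPolicy', 'Bypass', '-File', "`"$ScriptToRun`"" `
    -NoNewWindow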

PowerShell script on a VM as a scheduled task

I wrote a simple script to move files that contain a specific substring in the file name from one folder to another.
$Source = Get-ChildItem -Path "C:\source" -Filter *.xlsm
$Target = "C:\target"
$substring = 'copy'

foreach ($file in $Source) {
    if ($file.Name -match $substring) {
        Move-Item -Path $file.FullName -Destination $Target -Force
    }
}
I want to automate it on a VM. It works fine when I run it manually, and via Task Scheduler while I'm logged in to the VM; however, when I switch to 'Run whether user is logged on or not' in Task Scheduler (task properties), it won't work. I run it with the following parameters:
-noprofile -executionpolicy unrestricted -noninteractive -file "path to my script"
Any ideas?
When the option "Run whether user is logged on or not" is selected, the scheduled task runs in a different session.
This means that it does not have access to mapped network drives!
So, you need to either map the drives in your script or use the fully qualified UNC name (i.e., \\ServerName\ShareName).
More details here as well.
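A minimal sketch of both options, with placeholder server and share names (\\server01\reports is made up for illustration):
# Option 1: map the share for the duration of the script.
New-PSDrive -Name 'S' -PSProvider FileSystem -Root '\\server01\reports' | Out-Null
$Source = Get-ChildItem -Path 'S:\source' -Filter *.xlsm

# Option 2: use the UNC path directly; no drive letter needed.
$Source = Get-ChildItem -Path '\\server01\reports\source' -Filter *.xlsm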

PowerShell command for ReadinessReportCreator.exe

I've got 4,000 shares to go through with the ReadinessReportCreator, which works when I run it locally. I put together a command with a foreach loop to read a CSV of all the DFS shares; however, I need to go deeper into the folder structure. Normally I would use the -Recurse switch, but that doesn't seem to be working.
The site I got the command from
https://learn.microsoft.com/en-us/deployoffice/use-the-readiness-toolkit-to-assess-application-compatibility-for-office-365-pro
The PowerShell I have put together is:
$shares = Import-Csv -Path 'C:\temp\dfs.csv'
$shs = ($shares).share
foreach ($sh in $shs)
{
    Start-Process -NoNewWindow -FilePath "C:\Program Files (x86)\Microsoft Readiness Toolkit for Office\ReadinessReportCreator.exe" "$sh" Out-File "C:\temp"
    Write-Host "share=" $sh
}
The command the site suggests from a command line is:
ReadinessReportCreator.exe -p c:\officefiles\ -r -output \\server01\finance
I was thinking that if I could just use the foreach loop and replace c:\officefiles\ with the variable for the share, it would run through each folder and subfolder, but it doesn't like -p, -r, or -output. Probably because they're not PowerShell cmdlet parameters, so -p should be -Path, -r should be -Recurse, and -output should be Out-File, but only Out-File is recognised.
Excel file is:
share
\\vmshare\share
\\vm3share\share1
\\vm2share\share2
\\vm2share\share3
Hope this makes sense
Thanks in advance
1) Info from the site above:
ReadinessReportCreator.exe -p c:\officefiles\ -r -output \\server01\finance -silent
The following is an example of a command line that you can run to scan a folder, and all its subfolders, and save the results to a network share for the Finance department. This only scans for VBA macros.
2) I can't see the structure of your CSV file, but I have doubts about these rows of code (I'm not sure what ends up in $shs):
$shares = Import-Csv -Path 'C:\temp\dfs.csv'
$shs = ($shares).share
3) When you run Start-Process, you must pass the program's parameters via the -ArgumentList parameter. Use -Wait to wait for the process to finish before accepting any more input:
Start-Process .\DeploymentServer.UI.CommandLine.exe -ArgumentList "register --estNumber $Number --postcode `"$PostCode`" --password $Password" -Wait -NoNewWindow
4) For more about Start-Process, run Get-Help Start-Process -Online from the CLI.
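Putting 1) and 3) together, a sketch of the corrected loop (assuming ReadinessReportCreator.exe accepts the -p/-r/-output/-silent switches exactly as documented on the Microsoft page above):
$shares = Import-Csv -Path 'C:\temp\dfs.csv'
foreach ($sh in $shares.share) {
    # -p/-r/-output belong to ReadinessReportCreator.exe, so they go
    # through -ArgumentList rather than being parsed by PowerShell.
    Start-Process -FilePath "C:\Program Files (x86)\Microsoft Readiness Toolkit for Office\ReadinessReportCreator.exe" `
        -ArgumentList "-p `"$sh`" -r -output C:\temp -silent" `
        -Wait -NoNewWindow
    Write-Host "share=$sh"
}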

Start-Transcript and Logging Batch File Output

I have a function in a PowerShell module which creates a log file and starts a transcript using that file (see below). When running a PowerShell script, this works great and captures all output.
When running a PowerShell script which calls a batch file (which we do quite often while migrating from CMD to PowerShell), the batch file output shows on the console in the same window as the PowerShell script, but the transcript log file shows only one blank line where the call to the batch file is.
09:53:25 AM [Success] Zip file already up to date, no need to download!
09:53:25 AM [Note ] Calling 1.bat
10:07:55 AM [Note ] Calling 2.bat
I'm calling the batch files from .ps1 scripts with only the ampersand '&'.
What's strange is that sometimes the batch file output is captured in the log (usually for the first batch file called); however, I can't find anything special about these files.
What's also strange is that we sometimes call external programs (WinSCP), and the output from those commands only sometimes shows in the transcript. Possibly relevant.
For reference, here is the function I use to create a transcript of our processes.
Function Log_Begin()
{
    <#
    .SYNOPSIS
    Starts the process for logging a PowerShell script.

    .DESCRIPTION
    Starts the process for logging a PowerShell script. This means that whenever
    this function is called from a PowerShell script, a folder called 'Logs' will
    be created in the same folder, containing a full transcript of the script's output.

    .EXAMPLE
    C:\PS> Log_Begin
    #>
    Process
    {
        $ScriptLoc  = $MyInvocation.PSCommandPath
        $WorkDir    = Split-Path $ScriptLoc
        If (!(Test-Path "$WorkDir\Logs")) { mkdir "$WorkDir\Logs" | Out-Null }
        $LogPath    = "$WorkDir\Logs"
        $ScriptName = [io.path]::GetFileNameWithoutExtension($ScriptLoc)
        $LogDate    = Get-Date -Format "yyyy-MM-dd"
        $LogName    = "$ScriptName $LogDate.log"
        $global:Log = $LogPath + "\" + $LogName

        # Stop any transcript that is already running, ignoring the error
        # raised when there is none.
        $ErrorActionPreference = "SilentlyContinue"
        Stop-Transcript | Out-Null
        $ErrorActionPreference = "Continue"

        # Create file and start logging
        If (!(Test-Path $Log)) {
            New-Item -Path $Log -ItemType File | Out-Null
        }
        Start-Transcript -Path $Log -Append
    }
}
Does anyone have any ideas on how I can capture the batch file output? Preferably I wouldn't have to change every call to a batch file in the scripts, but could instead put something in the module.
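One idea I'm testing (a sketch only, based on the assumption that transcripts capture output that flows through PowerShell's pipeline, while native programs often write straight to the console): force the batch file's output, including its error stream, through the pipeline. Whether this also covers WinSCP would need testing.
# Instead of:  & "$PSScriptRoot\1.bat"
# route stdout and stderr through the pipeline so the transcript sees it:
& "$PSScriptRoot\1.bat" 2>&1 | Out-Host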

Start-Process does not parse ArgumentList

I have created a PowerShell script, but for some reason the Start-Process cmdlet does not seem to be behaving correctly. Here is my code:
[string]$ListOfProjectFiles = "project_file*."
[string]$arg = "project_file"
[string]$log = "C:\Work\output.log"
[string]$error = "C:\Work\error.log"

Get-ChildItem $PSScriptRoot -Filter $ListOfProjectFiles | `
    ForEach-Object {
        [string]$OldFileName = $_.Name
        [string]$Identifier = ($_.Name).Substring(($_.Name).LastIndexOf("_") + 1)
        Rename-Item $PSScriptRoot\$_ -NewName "project_file"
        # This line causes my headaches:
        Start-Process "$PSScriptRoot\MyExecutable.exe" `
            -ArgumentList $arg `
            -RedirectStandardError $error `
            -RedirectStandardOutput $log `
            -Wait
        Remove-Item "C:\Work\output.log", "C:\Work\error.log"
        Rename-Item "$PSScriptRoot\project_file" -NewName $OldFileName
    }
The main issue is that on my machine the program runs, but only after I added the -Wait switch. I found that if I stepped through my code in the PowerShell ISE, MyExecutable.exe recognised the argument and ran properly, while if I just ran the script without breakpoints, it would error as if it could not parse the $arg value. Adding the -Wait switch seemed to solve the problem on my machine.
On my colleague's machine, MyExecutable.exe does not recognise the output of the -ArgumentList $arg part: it just quits with an error stating that the required argument (which should be "project_file") could not be found.
I have tried to hard-code the "project_file" part, but with no success. I have also been playing around with the other switches of the Start-Process cmdlet, but nothing works. I am a bit at a loss; I'm quite new to PowerShell and totally confused as to why it behaves differently on different computers.
What am I doing wrong?
If you do not use the -Wait switch, your script continues to run while MyExecutable.exe is still executing. In particular, you can rename the file back (Rename-Item "$PSScriptRoot\project_file" -NewName $OldFileName) before your program opens it.
You pass a plain project_file as the argument to your program. What if the current working directory is not $PSScriptRoot? Is MyExecutable.exe designed to look for files in the exe's own directory in addition to, or instead of, the current working directory? I recommend supplying the full path instead:
[string]$arg = "`"$PSScriptRoot\project_file`""
Do not just convert FileInfo or DirectoryInfo objects to strings. That is not guaranteed to return the full path rather than just the file name. Explicitly ask for the Name or FullName property value, depending on what you want:
Rename-Item $_.FullName -NewName "project_file"
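Putting all of that together, a minimal sketch of the corrected loop might look like this (my own assembly of the advice above, not tested against MyExecutable.exe; note that $error is an automatic variable in PowerShell, so I've renamed it as well):
[string]$log = 'C:\Work\output.log'
# $error is an automatic variable, so use a different name for the error log.
[string]$errLog = 'C:\Work\error.log'

Get-ChildItem $PSScriptRoot -Filter 'project_file*.' | ForEach-Object {
    $OldFileName = $_.Name
    Rename-Item $_.FullName -NewName 'project_file'
    # Full, quoted path as the argument; -Wait keeps the rename below
    # from racing MyExecutable.exe.
    Start-Process "$PSScriptRoot\MyExecutable.exe" `
        -ArgumentList "`"$PSScriptRoot\project_file`"" `
        -RedirectStandardError $errLog `
        -RedirectStandardOutput $log `
        -Wait
    Remove-Item $log, $errLog
    Rename-Item "$PSScriptRoot\project_file" -NewName $OldFileName
}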