How to generate a comprehensive log file from a PowerShell script - powershell

I am trying to write a migration/deployment script to deploy an application to an environment (DEV, QA, PROD) and I need to be able to fully log all output. This would include any status message I specifically put in the output stream (not a problem) as well as verbose output from all commands. For instance, if I'm calling Copy-Item, I want the full listing of each item copied.
I'm trying to figure out a way to do this reliably throughout the entire script. In other words, I don't want to rely on including -Verbose on every command (as it could be missed when someone else maintains the script in the future). I've been looking at $VerbosePreference as well as the possibility of calling my main cmdlet/function with -Verbose, hoping that either would apply to the entire script. But that doesn't appear to be the case. While any Write-Verbose commands I use respect either approach, calls to Copy-Item only show the verbose listing if I specifically pass -Verbose to them. I'm really hoping I'm just missing something! Surely what I'm trying to do is possible!
Sample code:
function Main () {
    [CmdletBinding()]
    Param()
    Begin {
        Copy-Item C:\Temp\src\* -Destination C:\Temp\dest -Recurse -Force
        Write-Output 'Main output'
        Write-Verbose 'Main verbose'
        Child
    }
}

function Child () {
    [CmdletBinding()]
    Param()
    Begin {
        Copy-Item C:\Temp\src\* -Destination C:\Temp\dest -Recurse -Force
        Write-Output 'Child output'
        Write-Verbose 'Child verbose'
    }
}
$VerbosePreference = 'SilentlyContinue'
Write-Output $VerbosePreference
Main
''
Main -Verbose
''
''
$VerbosePreference = 'Continue'
Write-Output $VerbosePreference
Main
''
Main -Verbose
Produces output:
SilentlyContinue
Main output
Child output
Main output
VERBOSE: Main verbose
Child output
VERBOSE: Child verbose
Continue
Main output
VERBOSE: Main verbose
Child output
VERBOSE: Child verbose
Main output
VERBOSE: Main verbose
Child output
VERBOSE: Child verbose
So, clearly $VerbosePreference and -Verbose are affecting the Write-Verbose, but that's about it. The Copy-Item is not displaying ANY output whatsoever (though it will if I specifically use -Verbose directly on that command).
Any thoughts? Am I going about this all wrong? Please help!

How about leveraging...
Tip: Create a Transcript of What You Do in Windows PowerShell
The PowerShell console includes a transcript feature to help you
record all your activities at the prompt. As of this writing, you
cannot use this feature in the Windows PowerShell ISE. Commands you
use with transcripts include the following:
https://technet.microsoft.com/en-us/library/ff687007.aspx
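For instance, a minimal sketch of wrapping the deployment run in a transcript (the log path is just an example, and Main is the function from the question):
# Capture everything the session writes to the console into a log file.
Start-Transcript -Path "C:\Logs\deploy_$(Get-Date -Format yyyyMMdd_HHmmss).log"

$VerbosePreference = 'Continue'   # so Write-Verbose output is produced and captured
Main -Verbose

Stop-Transcript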
... or the approaches provided / detailed here:
Enhanced Script Logging module (automatic console output captured to
file)
Automatically copy PowerShell console output to a log file (from
Output, Error, Warning, Verbose and Debug streams), while still
displaying the output at the console. Log file output is prepended
with date/time and an indicator of which stream originated the line
https://gallery.technet.microsoft.com/scriptcenter/Enhanced-Script-Logging-27615f85
Write-Log PowerShell Logging Function
The Write-Log PowerShell advanced function is designed to be a simple
logger function for other cmdlets, advanced functions, and scripts.
Often when running scripts one needs to keep a log of what happened
and when. The Write-Log accepts a string and a path to a log file and
appends the message to it.
https://gallery.technet.microsoft.com/scriptcenter/Write-Log-PowerShell-999c32d0
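A minimal sketch of that style of logger, for illustration only (the parameter names are mine, not the gallery function's exact signature):
function Write-Log {
    param(
        [Parameter(Mandatory)]
        [string]$Message,

        [string]$Path = 'C:\Logs\deploy.log',

        [ValidateSet('INFO', 'WARN', 'ERROR', 'VERBOSE')]
        [string]$Level = 'INFO'
    )
    # Prefix each entry with a timestamp and severity, then append it to the file.
    $line = '{0} [{1}] {2}' -f (Get-Date -Format 'yyyy-MM-dd HH:mm:ss'), $Level, $Message
    Add-Content -Path $Path -Value $line
}

# Example usage:
Write-Log -Message 'Copy to C:\Temp\dest complete' -Level INFO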
*Update as per the OP comment*
See this discussion...
Powershell apply verbosity at a global level
where the -verbose flag is not supplied to the ni command. Is there a
way to set the Verbosity at a global PSSession level if I were to run
this script to force verbosity? The reason I ask is that I have a
group of about 60 scripts which are interdependent and none of these
supply -verbose to any commands they issue and I'd like to see the
entire output when I call the main entry point powershell script.
Use PowerShell Default Parameter Values to Simplify Scripts
Changing default parameter values
When I was asked to write about my favorite Windows PowerShell 3.0
feature, my #1 $PSDefaultParameterValues came to mind immediately.
From my point of view, this was something I was looking for, for a
long time.
How does it work? With $PSDefaultParameterValues, you can define
(overwrite) default values of parameters for Windows PowerShell
cmdlets.
https://blogs.technet.microsoft.com/heyscriptingguy/2012/12/03/use-powershell-default-parameter-values-to-simplify-scripts/
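Applied to the question, a minimal sketch (the wildcard scope and the Copy-Item entry are just examples):
# Turn on -Verbose for every cmdlet that supports the common parameters...
$PSDefaultParameterValues['*:Verbose'] = $true

# ...or limit it to specific cmdlets if the wildcard is too noisy.
$PSDefaultParameterValues['Copy-Item:Verbose'] = $true

Main    # Copy-Item inside Main/Child should now emit its verbose file listing

# Remove the defaults when finished.
$PSDefaultParameterValues.Remove('*:Verbose')
$PSDefaultParameterValues.Remove('Copy-Item:Verbose')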
See also:
Script Tracing and Logging
While Windows PowerShell already has the LogPipelineExecutionDetails
Group Policy setting to log the invocation of cmdlets, PowerShell’s
scripting language has plenty of features that you might want to log
and/or audit. The new Detailed Script Tracing feature lets you enable
detailed tracking and analysis of Windows PowerShell scripting use on
a system. After you enable detailed script tracing, Windows PowerShell
logs all script blocks to the ETW event log,
Microsoft-Windows-PowerShell/Operational. If a script block creates
another script block (for example, a script that calls the
Invoke-Expression cmdlet on a string), that resulting script block is
logged as well.
Logging of these events can be enabled through the Turn on PowerShell
Script Block Logging Group Policy setting (in Administrative Templates
-> Windows Components -> Windows PowerShell).
https://learn.microsoft.com/en-us/powershell/wmf/5.0/audit_script
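If Group Policy is not available, the same policy can be set through its registry key; a hedged sketch (documented policy path, requires an elevated session):
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging'
if (-not (Test-Path $key)) {
    New-Item -Path $key -Force | Out-Null
}
# 1 enables script block logging; events land in Microsoft-Windows-PowerShell/Operational.
Set-ItemProperty -Path $key -Name EnableScriptBlockLogging -Value 1 -Type DWord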

Related

Powershell function call causes missing function error using powershell v7 on windows 10

I wrote a script to build all .net projects in a folder.
Issue
The issue is I am getting a missing function error when I call Build-Sollution.
What I tried
I made sure the function was declared before I used it, so I am not really sure why it says that it is not defined.
I am new to PowerShell, but I would think a function calling another function should work like this?
Thanks in advance!
Please see below for the error message and code.
Error Message
Line |
3 | Build-Sollution $_
| ~~~~~~~~~~~~~~~
The term 'Build-Sollution' is not recognized as the name of a cmdlet, function, script file, or operable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
Build-Sollution:
Code
param (
    #[Parameter(Mandatory=$true)][string]$plugin_path,
    [string]$depth = 5
)

$plugin_path = 'path/to/sollutions/'

function Get-Sollutions {
    Get-ChildItem -File -Path $plugin_path -Include *.sln -Recurse
}

function Build-Sollution($solution) {
    dotnet build $solution.fullname
}

function Build-Sollutions($solutions) {
    $solutions | ForEach-Object -Parallel {
        Build-Sollution $_
    }
}

$solutions_temp = Get-Sollutions
Build-Sollutions $solutions_temp
From PowerShell ForEach-Object Parallel Feature | PowerShell
Script blocks run in a context called a PowerShell runspace. The runspace context contains all of the defined variables, functions and loaded modules.
...
And each runspace must load whatever module is needed and have any variable be explicitly passed in from the calling script.
So in this case, the easiest solution is to define Build-Sollution inside Build-Sollutions
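A minimal sketch of that change, keeping the original (misspelled) function names:
function Build-Sollutions($solutions) {
    $solutions | ForEach-Object -Parallel {
        # Each parallel runspace starts without the caller's functions,
        # so the helper has to be (re)defined inside the script block.
        function Build-Sollution($solution) {
            dotnet build $solution.fullname
        }
        Build-Sollution $_
    }
}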
As for this...
I am new to powershell but I would think a function calling another
functions should work like this?
... you cannot use the functions until you load your code into memory. You need to run the code before the functions are available.
If you are in the ISE or VSCode and the script is not saved, select all and use the run key. In the ISE, F8 runs the selection and F5 runs everything; in VSCode, F8 runs the selection and Ctrl+F5 runs everything. You can just click the menu options as well.
If you are doing this from the consolehost, then run the script using dot sourcing.
. .\UncToYourScript.ps1
It's OK to be new, we all started somewhere, but it's vital that you get ramped up first. So, beyond what I address here, be sure to spend time on YouTube and search for beginning, intermediate, and advanced PowerShell videos to consume. There are tons of free training resources all over the web, and using the built-in help files would have given you the answer as well.
about_Scripts
SCRIPT SCOPE AND DOT SOURCING
Each script runs in its own scope. The functions, variables, aliases,
and drives that are created in the script exist only in the script
scope. You cannot access these items or their values in the scope in
which the script runs.
To run a script in a different scope, you can specify a scope, such as
Global or Local, or you can dot source the script.
The dot sourcing feature lets you run a script in the current scope
instead of in the script scope. When you run a script that is dot
sourced, the commands in the script run as though you had typed them
at the command prompt. The functions, variables, aliases, and drives
that the script creates are created in the scope in which you are
working. After the script runs, you can use the created items and
access their values in your session.
To dot source a script, type a dot (.) and a space before the script
path.
See also:
'powershell .net projects build run scripts'
'powershell build all .net projects in a folder'
Simple build script using Power Shell
Update
As per your comments below:
Sure the script should be saved, using whatever editor you choose.
The ISE does not use PSv7 by design, it uses WPSv5x and earlier.
The editor for PSv7 is VSCode. If you run a function that contains another function, you have explicitly loaded everything in that call, and as such it's available.
However, you say you are using PSv7, so you need to run your code in the PSv7 consolehost or VSCode, not the ISE.
Windows PowerShell (powershell.exe and powershell_ise.exe) and PowerShell Core (pwsh.exe) are two different environments with two different executables, designed to run side by side on Windows, but you do have to explicitly choose which to use, or write your code to branch to the code segment appropriate to the host you started in.
For example, let's say I wanted to run a console command and I am in the ISE, but I need to run that in Pwsh. I use a function like this that I have in a custom module autoloaded via my PowerShell profiles:
# Call code by console executable
Function Start-ConsoleCommand
{
    [CmdletBinding(SupportsShouldProcess)]
    [Alias('scc')]
    Param
    (
        [string]$ConsoleCommand,
        [switch]$PoSHCore
    )

    If ($PoSHCore)
    { Start-Process pwsh -ArgumentList "-NoExit","-Command &{ $ConsoleCommand }" -PassThru -Wait }
    Else
    { Start-Process powershell -ArgumentList "-NoExit","-Command &{ $ConsoleCommand }" -PassThru -Wait }
}
All this code is doing is taking whatever command I send it and if I use the PoSHCore switch...
scc -ConsoleCommand 'SomeCommand' -PoSHCore
... it will shell out to PSCore and run the code; otherwise, it just runs from the ISE.
If you want to use the ISE with PSv7 and not do the shell-out thing, you need to force the ISE to use PSv7 to run code. See:
Using PowerShell Core 6 and 7 in the Windows PowerShell ISE

Invoking long NAnt process from PowerShell form Jenkins (using Pipelines)

I've been working on wrapping up the usage of some old NAnt scripts behind a Jenkins job. The Jenkins job itself uses the Pipelines feature (a Groovy DSL script); one of the steps is a PowerShell block, and it calls a function that invokes NAnt after working out lots of parameters to be passed in.
I did have this working just fine at some point, but something has broken at some stage. The PowerShell function is called, it triggers NAnt, and for the nearly an hour that it takes to complete, the output shows up in Jenkins as it happens.
This was done using something like Invoke-Expression "& $NAntExe $NAntFile $Target $ParameterString" | Write-Host, where $ParameterString is all the -D:Key=Value parameters.
I believe I had added the | Write-Host as without it, you only get the output at the very end, but we wanted to be able to see the progress as it's happening.
As I said, something has changed somewhere, and we were no longer getting any output from NAnt. I eventually found that removing the | Write-Host would restore the logs, but as I expected, we now have to wait for NAnt to finish before we see any logs.
What is the 'correct' way to invoke NAnt here to get the output as I desire? I want to see the output as it happens.
I've tried various ways of invoking NAnt, with no luck. Seems I'm having to settle for either "I get all the output in one go at the end" or "no output". I suspect this is not a PowerShell issue as such, but that's based on nothing but gut feeling.
It seems I can mostly recreate the symptoms I see in Jenkins. If I invoke NAnt through a fresh PowerShell session I get the same problem. I'm running something akin to the following, which as far as I can tell is the same as how the Jenkins plugin invokes PowerShell:
powershell.exe -NoProfile -NonInteractive -ExecutionPolicy ByPass -Command 'Invoke-FunctionThatCallsNAnt'
Within my Invoke-FunctionThatCallsNAnt, I had initially, as I said above, just directly called NAnt and got no logging. If I then update my function to pipe the output to Write-Host, or remove the -NonInteractive flag, I get the output from NAnt in real time. However, when I go to Jenkins, this does not resolve the problem; I end up getting no output at all.
I'm not sure why it wouldn't stream. You should be able to write the command these ways:
& $NAntExe $NAntFile $Target $ParameterString
Or with whatever the nant command is.
$env:path += ';c:\program files\nant' # add to path if needed
nant.exe $NAntFile $Target $ParameterString
If it's not in the path, and the folder doesn't have spaces, you can put the whole path to it as well.
c:\nant\nant.exe $NAntFile $Target $ParameterString
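If you do want per-line handling (for example to timestamp each line as NAnt emits it), a hedged sketch of the first form, assuming $NAntExe, $NAntFile, $Target and $ParameterString are already defined as in your question:
# The call operator streams stdout as NAnt produces it; merging the error
# stream and piping to ForEach-Object lets you decorate each line without
# buffering the whole build log until the end.
& $NAntExe $NAntFile $Target $ParameterString 2>&1 |
    ForEach-Object { Write-Host "$(Get-Date -Format HH:mm:ss) $_" }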
EDIT:
Here's a way to run something in a path with spaces:
C:\Program` Files\Internet` Explorer\iexplore.exe
EDIT2:
It looks like you have to unblock the nant zip after downloading it: How do I resolve configuration errors with Nant 0.91?
Or unblock all the files after the fact:
Get-ChildItem -Recurse C:\nant-92 |
    Get-Item -Stream Zone.Identifier -ErrorAction SilentlyContinue |
    Select-Object -ExpandProperty FileName |
    Get-Item |
    Unblock-File

Get the last command executed on a PowerShell script

I'm updating the logging function I use for all my scripts and making it compatible with the SCCM logging viewer.
Among the fields that the SCCM viewer reads is the component field, and I'd like to fill it with the main command that was run and that I'm logging. For example, if I'm logging "Data copied" and I did that with Copy-Item, I'd like the log to say Copy-Item for that line.
So far I've tested the following methods, using $$ and the history cmdlets (Get-History, Add-, etc.), but they only work with the PowerShell console, not with script execution.
The final result I'd like to get is:
I think this is what you're talking about?
$Folder = "some path"
$Cmdline = 'Remove-Item $Folder -Force -Recurse'
Invoke-Expression $Cmdline
Write-Host $Cmdline.replace("`$Folder",$Folder)

See PowerShell Verbose output in Visual Studio Code

I am using Visual Studio Code 1.17.2 with the PowerShell extension (1.4.3). I have Write-Verbose statements in my code. When I run the PowerShell script, the Verbose output doesn't seem to go anywhere. How do I request that output be shown?
The simplest approach is to execute:
$VerbosePreference = 'Continue'
in the integrated PowerShell console before invoking your script (and executing $VerbosePreference = 'SilentlyContinue' later to turn verbose output back off, if needed).
From that point on:
running a script (starting it with (F5) or without (Ctrl+F5) the debugger)
highlighting a section of code and running the selection (F8)
will make Write-Verbose calls produce output.
If you want to preset $VerbosePreference = 'Continue' every time the integrated console starts, put that statement in the $PROFILE file:
If your VS Code-specific $PROFILE file already exists, simply run psedit $PROFILE from the integrated console.
If not, create the file first from the integrated console: New-Item -Type File $PROFILE
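For example, from the integrated console (a minimal sketch; it simply appends the line to your VS Code-specific profile):
# Create the profile file if it doesn't exist yet, then add the preference.
if (-not (Test-Path $PROFILE)) { New-Item -Type File -Path $PROFILE -Force | Out-Null }
Add-Content -Path $PROFILE -Value "`$VerbosePreference = 'Continue'"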
Verbose output is not shown by default; you need to declare [CmdletBinding()] in your function or script to enable the -Verbose parameter to be passed through, in order to have the option of displaying verbose output.
You can 'cheat', though, and pass -Verbose to Write-Verbose itself (Write-Verbose "Hello Verbose" -Verbose), and that stream will appear in the console.
(Tested with your two matching versions for VSCode and Extension on Mac (PS6 Beta 8) and can see the verbose output).
function Test-Verbose {
    # This enables the function to have '-Verbose'.
    [CmdletBinding()]
    param()

    Write-Output "Hello output!"

    # Will only be displayed if 'Test-Verbose' is passed '-Verbose'.
    Write-Verbose "Hello verbose"
}
Test-Verbose -Verbose

Powershell to EXE tool Advice

So here's the deal. Because of a number of... let's just say not PowerShell smart people who will be using an incredibly complex application that I just finished, I need the ability to package it in an exe wrapper.
This shouldn't be that hard
I was able to successfully use PS2EXE, except for some reason with AD, it throws out a whooooole bunch of AD text that I can't get rid of. Tried to fix that for a few days before getting frustrated and moving on.
Then, I discovered PowerGUI. I can't say that I like it, at all. However, its compiler was exactly what I was looking for! Except for the fact that Exchange 2010 snap-ins are not compatible with .NET 4.5 through this application.
I want to make it very clear that my script works perfectly on multiple different computers, but as soon as I use any of these tools, everything breaks.
An exe is the best thing that I can think of to simplify the interface, and keep the Technically Intellectually Stunted from breaking everything, or running to me with every little error because they somehow got into the code and typed something and saved it, and now nothing works and it's the end of the world and they have no idea what happened.
If you guys know of any tools to wrap this up into an exe, or have any other ideas on how to help, I would really appreciate anything you guys can give me.
You have never failed me in the past!
From my point of view, if you really want an EXE file you should write a .NET application; it's not so hard to embed PowerShell cmdlets in one.
To prevent end users from modifying your code, I know of two solutions:
First: set the execution policy to AllSigned on the users' computers and sign the scripts you deploy. You can use your own certificates (not expensive at all) or public certificates (more expensive). One drawback of this solution is that it does not prevent users from seeing the code. Another big drawback is that a PKI and code-signing infrastructure take a lot of time to set up.
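A hedged sketch of the signing approach, assuming a code-signing certificate is already installed in the current user's store and that Deploy-App.ps1 is a hypothetical script being deployed:
# Require signed scripts on the target machine (run elevated).
Set-ExecutionPolicy AllSigned -Scope LocalMachine

# Sign the deployed script with the first available code-signing certificate.
$cert = Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert | Select-Object -First 1
Set-AuthenticodeSignature -FilePath .\Deploy-App.ps1 -Certificate $cert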
Second: for non-interactive scripts (be careful, it's a kind of makeshift job):
Create a new user account
Only allow access to the script file for the new account.
Set up a task in the Windows scheduler to run that script file with PowerShell under that specific account. The permissions for the scheduled tasks allow read and execute access to the user(s). Then set the task to "disabled".
Whenever the script file needs to be run, the corresponding task is manually started by the user.
Using this solution will also allow you to remote execute your script.
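A hedged sketch of that setup using the ScheduledTasks module (Windows 8 / Server 2012 and later); the account name and script path are placeholders:
# Register a task that runs the script under the dedicated account, then disable it
# so it only runs when a user manually starts it from Task Scheduler.
$action = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\Deploy-App.ps1"'
Register-ScheduledTask -TaskName 'Run Deploy-App' -Action $action `
    -User 'DOMAIN\svc-deploy' -Password (Read-Host 'Account password') -RunLevel Limited
Disable-ScheduledTask -TaskName 'Run Deploy-App'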
When I had a similar deployment problem - 1) users didn't know PowerShell, 2) I didn't want them to have to understand things like execution policy, 3) or how to start PS, 4) etc. - I wrapped it in a batch file. I also wanted to make sure that experienced PS users still had the capabilities of PS, so the batch file determined whether it was running under PS and, if applicable, ran in the current PS session. I was never too worried that users would mess with the script - they were happy if it "just worked". So whether users liked Explorer, CMD.EXE, or PS, they were all accommodated.
The batch file I wrote first runs a bit of PowerShell code to determine whether the batch file's process is the grandchild of a PowerShell process. If it is, then the batch file is being invoked from PS. The execution policy is also checked, and if it is lenient enough, Wscript.SendKeys is used to send keystrokes to PS to get the script running in the current PS session. If it isn't, then it starts a new PS session using the -ExecutionPolicy parameter and passes the script as a command-line argument (-Command).
This bit of PowerShell code communicates back to the .CMD file using a return code. Sorry it's cryptic, but the length of command-line parameters is limited. Here's the code:
set scr= $mp=[diagnostics.process]::getcurrentprocess().id
set scr=%scr%; $pp=([wmi]\"win32_process.handle='$mp'\").parentprocessid
set scr=%scr%; $gp=([wmi]\"win32_process.handle='$pp'\").parentprocessid
set scr=%scr%; $ep=[int][microsoft.powershell.executionpolicy](get-executionpolicy)
set scr=%scr%; try {$pnp=1-[int](([wmi]\"win32_process.handle='$gp'\").Name -eq \"powershell.exe\")
set scr=%scr%; } catch {$pnp=1}
set scr=%scr%; $ev = (8 * $pnp + $ep) -band 0xB; %wo% pp: $pp gp: $gp ev: $ev; if ($ev -le 1) {
set scr=%scr% %wo% Launching within existing powershell session...`n;
set scr=%scr% $w=new-object -com wscript.shell;$null=$w.appactivate($gp);
set scr=%scr%; $w.sendkeys(\"^&{{}`$st =cat "%me%";`$sc=`$st -join [char]10 -split 'rem PS script';
set scr=%scr% `$script:myArgs = `\" %*`\";`$sb=[scriptblock]::create{(} `$sc[3]{)};. `$sb{}}~\")
set scr=%scr%; }
set scr=%scr%; exit $ev
powershell -noprofile -Command %scr%
%wo% is there to allow debugging this "checker script". If debugging is on, %wo% is set to write-host. Otherwise it is set to define a "null" function and then invoke that function. The null function doesn't do anything, so the message that is the argument to the function is not output.
Note the escaping when invoking SendKeys. ^ is the CMD.EXE escape character, and SendKeys has its own escape mechanism, as does PS.
If run from PS you end up in a PS session thanks to SendKeys. Otherwise the batch file does this:
set scr= ren function:prompt prompto
set scr=%scr%; function prompt{ 'myApp: '+(prompto)}
set scr=%scr%; $st= (cat %me%) -join \"`n\";
set scr=%scr%; $sx=($st -split 'rem PS script')
set scr=%scr%; $sc=$sx[3]
set scr=%scr%; %wo% myArgs: $myArgs script length: $sc.length
set scr=%scr%; ^&{$script:myArgs=\"%*\"; iex $sc}
title MyApp
rem Change the number of lines on the console if currently set to 25
for /f "tokens=2" %%i in ('mode con^|findstr Lines:') do if %%i LEQ 25 (mode con lines=50&color 5F)
powershell -noexit -noprofile -command "%scr%"
This "helper script" also can't be too long. So the helper script reads the original .CMD file and then splits it by using the string 'rem PS script'. That string will be in both this helper script as well as in the batch file (separating the batch file statements from PS statements). In my case the string is also in the batch file comments, so that is why the index of 3 is used.
Your PS script can define functions or a module. Your PS script can also output some introductory info to explain to users how to get started, how to get help, or whatever you want.
Rather than just using the PS command line, your PS script could create its own interactive environment (using Read-Host, for example). However, I didn't want to do that because it would have prevented experienced PS users from using their knowledge of PS. For example, if your script requires a username/password, an experienced PS user could use Get-Credential to create a credential to send to your script.