Proper use of Invoke-Expression? - powershell

I've just recently completed my first nightly build script (first significant anything script, really) in powershell. I seem to have things working well, if not yet robustly (I haven't handled significant error-checking yet), but I found myself falling into an idiom around the Invoke-Expression cmdlet, and I'm wondering if I'm using it properly.
Specifically, I use a series of variables to build up command-lines that I will use to build the solution, then run the solution's unit tests. e.g., something like:
$tmpDir = "C:\Users\<myuser>\Development\Autobuild"
$solutionPath=$tmpDir+"\MyProj\MyProj.sln"
$devenv="C:\Program Files (x86)\Microsoft Visual Studio 10.0\common7\ide\devenv"
$releaseProfile="Release"
$releaseCommandLine="`"$devenv`" `"$solutionPath`" /build `"$releaseProfile`""
This works well enough, $releaseCommandLine contains the command line that I want to execute when I'm done. I then execute it via this line:
$output = Invoke-Expression "& $releaseCommandLine"
Is this the proper way to execute a manually-built command line from a powershell script? I thought initially that Invoke-Command would do it, but I must have been doing something wrong because I couldn't get that working at all for half an hour, and I got this working almost immediately.
I've followed this same pattern a few other times in this same script. Is this a best-practice?

Looks fine to me. The only thing I'd change is to use more PowerShell features in place of fragile assumptions. E.g.:
use Join-Path instead of string concatenation
use the Env: provider to look up the %ProgramFiles(x86)% dir (or better yet, use the HKLM: provider to find the path - it's under SOFTWARE\Microsoft\VisualStudio\<version> in the InstallDir value)
when I have to write a string that contains literal doublequotes and variable expansion, I usually fall back to the syntax below. Personal preference, obviously.
'"{0}" "{1}" /build "{2}"' -f $devenv, $solutionPath, $releaseProfile
In some cases I'd be inclined to use Process.Start() so that I could capture the stdout & stderr streams independently (and maybe even control stdin interactively, depending on the application).
PS - the '&' is not strictly necessary.
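For reference, here is a minimal sketch of the Process.Start() approach mentioned above, reusing the $devenv, $solutionPath and $releaseProfile variables from the question. Treat it as an illustration only: reading both streams synchronously like this can deadlock if a very chatty build fills one of the buffers, in which case you'd read one stream asynchronously instead.
$psi = New-Object System.Diagnostics.ProcessStartInfo
$psi.FileName               = $devenv
$psi.Arguments              = "`"$solutionPath`" /build `"$releaseProfile`""
$psi.UseShellExecute        = $false   # required for stream redirection
$psi.RedirectStandardOutput = $true
$psi.RedirectStandardError  = $true

$proc   = [System.Diagnostics.Process]::Start($psi)
$stdout = $proc.StandardOutput.ReadToEnd()   # build output, captured separately from...
$stderr = $proc.StandardError.ReadToEnd()    # ...the error stream
$proc.WaitForExit()
Write-Host "devenv exited with code $($proc.ExitCode)"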

I think it is unnecessary to use Invoke-Expression here. I've done this with a lot of build scripts and it usually looks like this:
$vsroot = "${env:ProgramFiles(x86)}\Microsoft Visual Studio 9.0"
$devenv = "$vsroot\Common7\IDE\devenv.exe"
$sln = Join-Path <source_root> Source\MyProj\MyProj.sln
& $devenv $sln /build Release
or
& $devenv $sln /build "Release|Any CPU"
Lately, though, I have had some troubles with devenv.exe (misbehaving add-ins, etc.), so now I use msbuild.exe:
$msbuild = 'C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe'
& $msbuild $sln /p:Configuration=Release
Currently MSBuild can handle C#, VB and C++ (invokes vcbuild) but it can't handle solutions with setup & deployment projects in them. However, I have found it to be more reliable than using devenv.exe.
BTW you typically need to invoke other tools (sn.exe, signtool.exe, mt.exe, etc.) in a build script that are specific to the version of Visual Studio/.NET you want to build against, so it is usually best to configure your environment variables in the same way the VS 2008 command prompt does. With the PowerShell Community Extensions installed, a single line in the PSCX profile header enables this for .NET 3.5/VS 2008 settings:
$Pscx:Preferences["ImportVisualStudioVars"] = $true
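If you'd rather not depend on PSCX, a common alternative is to run the VS 2008 vsvars32.bat in a cmd.exe child process, dump its environment, and copy it into the current PowerShell session. This is only a sketch; the path below assumes a default VS 2008 install on 64-bit Windows.
$vsvars = "${env:ProgramFiles(x86)}\Microsoft Visual Studio 9.0\Common7\Tools\vsvars32.bat"   # adjust for your install; use $env:ProgramFiles on 32-bit Windows
cmd /c " `"$vsvars`" && set " | ForEach-Object {
    if ($_ -match '^([^=]+)=(.*)$') {
        # mirror each VAR=VALUE pair from the batch environment into this session
        Set-Item -Path "Env:$($Matches[1])" -Value $Matches[2]
    }
}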

Related

How to make parameter name suggestions work?

I'm trying to create a PowerShell module to store some reusable utility functions. I created the script module PsUtils.psm1 and the script module manifest PsUtils.psd1 (using these docs). My problem is that when I import this module in another script, VS Code does not suggest parameter names. When I hover the cursor over the function, I don't get any parameter information either.
PsUtils.psm1
function Get-Filelist {
    Param(
        [Parameter(Mandatory=$true)]
        [string[]]
        $DirectoryPath
    )
    Write-Host "DIR PATH: $DirectoryPath"
}
PsUtils.psd1 (excerpt)
...
FunctionsToExport = '*'
I have the PowerShell extension installed. Do I need to install anything else to make the suggestions work? What am I missing?
Generally speaking, only auto-loading modules - i.e., those in one of the directories listed in environment variable $env:PSModulePath - are automatically discovered.
As of version v2022.7.2 of the PowerShell extension, the underlying PowerShell editor services make no attempt to infer from the current source-code file which modules in nonstandard directories it imports, whether via Import-Module or a using module statement.
Doing so would be the prerequisite for discovering the commands exported by the modules being imported.
Doing so robustly sounds virtually impossible to do with the static analysis that the editor services are limited to performing, although it could work in simple cases; if I were to guess, such a feature request wouldn't be entertained, but you can always ask.
Workarounds:
Once you have imported a given module from a nonstandard location into the current session - either manually via the PIC (PowerShell Integrated Console) or by running your script (assuming the Import-Module call succeeds), the editor will provide IntelliSense for its exported commands from that point on, so your options are (use one of them):
Run your script in the debugger at least once before you start editing. You can place a breakpoint right after the Import-Module call and abort the run afterwards - the only prerequisite is that the file must be syntactically valid.
Run your Import-Module command manually in the PIC, replacing $PSScriptRoot with your script file's directory path (an example follows the note below).
Note: It is tempting to place the cursor on the Import-Module line in the script in order to use F8 to run just this statement, but, as of v2022.7.2, this won't work in your case, because $PSScriptRoot is only valid in the context of running an entire script.
GitHub issue #633 suggests adding special support for $PSScriptRoot; while the proposal has been green-lighted, no one has stepped up to implement it since.
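For example, assuming the module lives under /path/to/my/module (the placeholder path used in the next workaround), you could run this once in the PIC:
# -Force re-imports the module if an older copy is already loaded in the session.
Import-Module /path/to/my/module/PsUtils.psd1 -Force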
(Temporarily) modify the $env:PSModulePath variable to include the path of your script file's directory.
The most convenient way to do that is via the $PROFILE file that is specific to the PowerShell extension, which you can open for editing with psedit $PROFILE from the PIC.
Note: Make sure that profile loading is enabled in the PowerShell extension's settings.
E.g., if your directory path is /path/to/my/module, add the following:
$env:PSModulePath+="$([IO.Path]::PathSeparator)/path/to/my/module"
The caveat is that all scripts / code that is run in the PIC will see this updated $env:PSModulePath value, so at least hypothetically other code could end up mistakenly importing your module instead of one expected to be in the standard locations.
Note that GitHub issue #880 is an (old) proposal to allow specifying $env:PSModulePath entries as part of the PowerShell extension settings instead.
On a somewhat related note:
Even when a module is auto-discovered / has been imported, IntelliSense only covers its exported commands, whereas while you're developing that module you'd also like to see its private commands. Overcoming this limitation is the subject of GitHub issue #104.

How to Automate scripts with options in Powershell?

I'm not a native English speaker, so please pardon any discrepancies in my question. I'm looking for a way to automate option selection in programs/scripts run via PowerShell.
For example:
Start-Process -FilePath "velociraptor.exe" -ArgumentList "config generate -i"
In the above snippet, PowerShell runs Velociraptor from a .ps1 file and initiates its configuration wizard. The wizard has a few options; after running, it generates some YAML files.
As such, what would be the way to have the PowerShell script automate the option-selection process? I know what the options should be. I looked around, but I don't know the proper terms to find what I need, nor am I sure this can be done with PowerShell.
The end goal is to have the .ps1 download the exe, run the config command, and continue by choosing the selections based on predefined choices. So far I've gotten the download and launching of velociraptor.exe working, but I'm not sure how to skip the window in the screenshot and have the PowerShell script make the choices instead.
I couldn't find a CLI reference for velociraptor at https://www.velocidex.com/, but, generally speaking, your best bet is to find a non-interactive way to provide the information of interest, via dedicated parameters (possibly pointing to an input file).
Absent that, you can use the following technique to provide successive responses to an external program's interactive prompts, assuming that the program reads the responses from stdin (the standard input stream):
$responses = 'windows', 'foo', 'bar' # List all responses here
$responses | velociraptor.exe config generate -i
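Note that this only works if the program actually reads its answers from stdin; some interactive tools read directly from the console device, in which case redirected input is ignored and a non-interactive switch or an answer/config file is the only reliable option.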

Install arguments breaking an automated PowerShell install

I'm installing SQL Data Tools via a PowerShell script. I run my script, but the final part where the Data Tools are installed fails (inside of the SQL installer window). If I run the script without that part, and install Data Tools manually it works.
The error is:
VS Shell installation has failed with exit code -2147205120.
The parts before this install .NET and SQL Server Management Studio. I don't think they're relevant to my issue, but I will post that part if requested. Here are the relevant parts. The first try block installs SQL SP1 (removed now for readability), the second installs Data Tools and SNAC_SDK.
try
{
    Write-Host "Launching SQL Server Data Tools install ..."
    & "\\mynetworkpath\SSDTBI_x86_ENU.exe" "/ACTION=INSTALL" "/FEATURES=SSDTBI,SNAC_SDK" "/Q" "/IACCEPTSQLSERVERLICENSETERMS"
    Write-Host "Installer launched ..."
}
catch
{
    Write-Host "SQL Server Data Tools installation failed"
    exit
}
I have tried juggling the arguments for the Data Tools install part, and playing with the -Wait switch to make sure SP1 is definitely done, but no luck.
EDIT: Per Matt's suggestion I added /NORESTART to my argument list, but now it doesn't install anything, and doesn't error either...
EDIT: Added updated code with quoted arguments. Still doesn't work, but I think it's closer than it was originally.
I think the comma in the arguments is the culprit here, because PowerShell interprets comma-separated items as an array.
You can see how parameters get passed with this little hack
& { $args } /ACTION=INSTALL /FEATURES=SSDTBI,SNAC_SDK /Q /IACCEPTSQLSERVERLICENSETERMS
which gives
/ACTION=INSTALL
/FEATURES=SSDTBI
SNAC_SDK
/Q
/IACCEPTSQLSERVERLICENSETERMS
To get rid of this problem you need to quote at least the FEATURES argument. I usually quote everything in those cases, just to be consistent, so
& { $args } "/ACTION=INSTALL" "/FEATURES=SSDTBI,SNAC_SDK" "/Q" "/IACCEPTSQLSERVERLICENSETERMS"
gives you the wanted parameters:
/ACTION=INSTALL
/FEATURES=SSDTBI,SNAC_SDK
/Q
/IACCEPTSQLSERVERLICENSETERMS
Update: Many installers return immediately after they have been called while the install process still runs in the background, which can be a bugger when the rest of the script depends on the install.
There are several methods to make PowerShell wait for a process to exit. One of the shortest is to pipe to Out-Null, like this:
& "\\mynetworkpath\SSDTBI_x86_ENU.exe" "/ACTION=INSTALL" "/FEATURES=SSDTBI,SNAC_SDK" "/Q" "/IACCEPTSQLSERVERLICENSETERMS" | Out-Null
You may also want to look at $? or $LASTEXITCODE afterwards to check for errors.
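As an alternative sketch (not part of the original answer), Start-Process with -Wait and -PassThru both blocks until the installer finishes and returns a Process object whose ExitCode you can test:
# -Wait blocks until the installer exits; -PassThru returns the Process object.
$p = Start-Process -FilePath '\\mynetworkpath\SSDTBI_x86_ENU.exe' `
                   -ArgumentList '/ACTION=INSTALL', '/FEATURES=SSDTBI,SNAC_SDK', '/Q', '/IACCEPTSQLSERVERLICENSETERMS' `
                   -Wait -PassThru
if ($p.ExitCode -ne 0) { throw "SSDTBI install failed with exit code $($p.ExitCode)" }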

Azure Cloud Service Startup task that needs to run a PowerShell script

All,
Note: I have updated the question after some feedback.
Thanks to #jisaak for his help so far.
I have the need to run a PowerShell script that adds TCP bindings and some other stuff when I deploy my Cloud Service.
Here is my Cloud Service Project and WebRole project:
Here is my task in ServiceDefinition.csdef:
And here is the PowerShell script I want to run:
here is my attempt at the Startup.cmd:
When I deploy I get this in the Azure log:
And this in the powershell log:
Any help would be very much appreciated.
I think I am nearly there, but following other people's syntax on the web doesn't seem to get me there.
thanks
Russ
I think the issue is that the working directory of the batch command interpreter, when it runs Startup.cmd, is not what you expect.
The Startup.cmd is located in the \approot\bin\Startup directory but the working directory is \approot\bin.
Therefore the command .\RoleStartup.ps1 is not able to find the RoleStartup.ps1 as it is looking in the bin directory not in the bin\Startup directory.
Solutions I know to this are:
Solution 1:
Use ..\Startup\RoleStartup.ps1 to call the RoleStartup.ps1 from Startup.cmd.
Solution 2:
Change the current working directory in Startup.cmd so that the relative path .\RoleStartup.ps1 resolves. I do this with CHDIR %~dp0, which changes into the directory that contains Startup.cmd.
Solution 3:
As Don Lockhart's answer suggested, do not copy the Startup directory to the output; instead leave it set as "Content" in the Visual Studio project. This means the files within it will exist in the \approot\Startup directory on the Azure instance. (You would then want to make sure that the Startup folder is not publicly accessible via IIS!) Then update the reference to Startup.cmd in ServiceDefinition.csdef to ..\Startup\Startup.cmd, and update the reference to RoleStartup.ps1 in Startup.cmd to ..\Startup\RoleStartup.ps1. This relies on the fact that the working directory is bin and uses ..\Startup to always locate the Startup directory relative to it.
You don't need to set the execution policy within your cmd - just call the script. Also, you should use a relative path, because you can't rely on there being a C: drive.
Change your batch to:
powershell -executionpolicy unrestricted -file .\RoleStartup.ps1
Right-click RoleStartup.ps1 and Startup.cmd in Visual Studio and ensure that Copy to Output Directory is set to Copy always.
If this still doesn't work, remove the startup call in your csdef, deploy the service, rdp into it and try to invoke the script by yourself to retrieve any errors.
Edit:
Try to adopt your script as below:
Import-Module WebAdministration

$site = $null
do # gets the first website until the result is not $null
{
    $site = Get-WebSite | select -first 1
    Sleep 1
}
until ($site)

# get the appcmd path
$appcmd = Join-Path ([System.Environment]::GetFolderPath('System')) 'inetsrv\appcmd.exe'

# ensure the appcmd.exe is present
if (-not (Test-Path $appcmd))
{
    throw "appcmd.exe not found in '$appcmd'"
}

# The rest of your script ....
I've found it easier in the past to not copy the content to the output directory. I have approot\bin as the working directory. My startUp task element's commandLine attribute uses a relative reference to the .cmd file like so:
The .cmd file references the PowerShell script relatively from the working directory as well:
PowerShell -ExecutionPolicy Unrestricted -f ..\StartUp\RoleStartup.ps1
Ok,
So I am coming back to this after many different attempts to make it work.
I have tried using:
Startup config in the ServiceDefinition.csdef
I have tried registering a scheduled task on the server that scans the Windows Azure log looking for [System[Provider[#Name='Windows Azure Runtime 2.6.0.0'] and EventID=10004]]
Nothing worked, either due to security or due to the timing of events and IIS not being fully set up yet.
So I finally bit the bullet and used my Webrole.cs => public override bool OnStart() method:
Combined with this in the ServiceDefinition.csdef:
Now it all works. This was not the most satisfying result as some of the other ways to do it felt more elegant. Also, many others posted that they got the other ways of doing it to work. Maybe I would have got there eventually but my time was restricted.
thanks
Russ

MSBuild fails when run via Powershell

I have an MSBuild script that I want to call from a PowerShell script as part of a deployment process. If I call the build script via a bat file, all works well. If I do the exact same thing in PowerShell, I get CS1668 errors looking for weird and wonderful paths that don't exist on my machine.
I know I am getting into the MSBuild script, as these errors occur from within the MSBuild script's targets (the logging output shows this).
The bat file and the PowerShell script reside in the same place, right next to the build script.
Errors:
error CS1668 : Warning as error : Invalid search path 'C:\Program Files\Microsoft Visual Studio 8\VC\AtlMfc\Lib' specified in 'LIB environment variable' -- 'The system cannot find the path specified.'
error CS1668 : Warning as error : Invalid search path 'C:\Program Files\Microsoft Visual Studio 8\VC\PlatformSDK\Lib' specified in 'LIB environment variable' -- 'The system cannot find the path specified.'
I have checked, and neither of these paths exists on my machine.
Why would running from PowerShell change which paths MSBuild looks for?
Thanks in advance
RhysC
EDIT- adding in code: NB the name of the MSBuild script is AutomatedDebug.build
POWERSHELL SCRIPT:
#Begining of script
$CurrentPath = Split-Path (Split-Path $myinvocation.mycommand.path )
#Assign the Build script paths. 1 is for building and testsing, 1 is for deployment (to keep things clean)
$MSBuildScriptBuildAndTestPath = $CurrentPath + "\Tools\AutomatedDebug.build"
$MSBuildScriptDeployPath = $CurrentPath + "\Tools\Deploy.build"
#Run the automated build with tests
Write-Host "*** Run the automated build with tests ***"
C:\Windows\Microsoft.NET\Framework\v3.5\MSbuild.exe $MSBuildScriptBuildAndTestPath "/t:AllTests" "/l:FileLogger,Microsoft.Build.Engine;logfile=AllTests.log"
if ($LastExitCode -ne 0)
{
    throw "AllTests failed"
}
Write-Host "*** FINISHED: Run the automated build with tests ***"
Bat File
#C:\Windows\Microsoft.NET\Framework\v3.5\MSbuild.exe AutomatedDebug.build /t:AllTests /l:FileLogger,Microsoft.Build.Engine;logfile="AllTests.log"
#pause
Ok, what is actually happening is that I downloaded the PowerShell Community Extensions (http://pscx.codeplex.com) many moons ago, and there is an Environment.VisualStudio2005.ps1 file in there that adds environment variables that don't exist. This is (I assume) because I don't use VS 2005. I have commented this out and all is well. I tried to look for the 2008 equivalent but didn't have much luck. To be honest, I don't use PowerShell to its full potential, so I doubt I even need PSCX on my machine anyway.
Thanks guys for your help.
It is most likely due to how you are passing parameters into the MSBuild script. There are some key parameter parsing differences between CMD and PowerShell and by using a batch file, you are implicitly using CMD's parameter parsing engine to hand off the parameters to MSBuild. If you post the command line you're using, one of the PowerShell gurus here can probably help debug where the parameters are getting confused.
I am wondering - basically you get a "Warning as error" here. You can either try turning off treating warnings as errors (I mean, having a lib path which doesn't exist doesn't sound that bad to me), or have your scripts print out the respective environment variables (using echo %LIB% and $Env:LIB respectively) and look for differences.
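A quick diagnostic sketch along those lines, assuming LIB is set in the PowerShell session: list each LIB entry the session sees and flag the ones that don't exist on disk.
($env:LIB -split ';') | Where-Object { $_ } | ForEach-Object {
    '{0}  exists: {1}' -f $_, (Test-Path $_)
}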
However, since the envvars for VS aren't set globally by default (that's why there are the three Visual Studio command prompts) you have to set all variables somewhere. If you're doing this within the scripts you may have a typo somewhere?