Using Windows Workflow in TFS 2010, I set up a PowerShell script to run at the end of the build process. I followed the example in http://www.ewaldhofman.nl/post/2010/11/09/Part-14-Execute-a-PowerShell-script.aspx to a T, and it appears correct in the Process section of the build definition. However, no matter what I set the script-path argument to, the result is always:
The term '.\DataServiceCpy.ps1' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
I've tried ten variations of the path. I have enabled PowerShell scripts to run on the build server, and I can run the script successfully from a command prompt.
Anything obvious that I am overlooking?
many thanks...
In TFS 2013 (not sure about other versions) there is a RunScript activity, located under "Team Foundation Build Activities" in the Toolbox. I prefer that activity when I want to execute a custom script.
To use it, you first need to create arguments (at least one for the script path). To create arguments, follow this post.
After you have created the argument for the script path, navigate to the "Metadata" argument on the Arguments tab. Add your argument there and enter the following in the "Editor" field: Microsoft.TeamFoundation.Build.Controls.ServerFileBrowserEditor, Microsoft.TeamFoundation.Build.Controls
After that, go to "Properties" on the RunScript activity and insert the following into FilePath: AdvancedBuildSettings.GetValue(Of String)("PUT_HERE_YOUR_ARGUMENT_NAME", String.Empty)
Note that in the last statement you must use exactly the name of the argument you created for the script path.
After you are done:
check in your changes;
go to the build definition's "Process" tab and press the "Refresh" button in the "Build process template" section;
These steps will allow you to browse source control and choose the script you want without typing a path.
If you don't need to insert a new activity, you can just modify the existing arguments.
I did it the same way, using the same blog post, and it worked for me. The only difference I can see is that my PowerShell script lives in the solution/project folder, and in the build definition I specify the script using a relative path. A relative path works well with the ConvertWorkspaceItem activity. You may want to output the file path you get after your ConvertWorkspaceItem activity to verify that it is the right path.
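For reference, the blog's approach boils down to handing the local path produced by ConvertWorkspaceItem to PowerShell with an explicit call operator; conceptually, the build workflow ends up invoking something like this command line (the local agent path is illustrative, not from the question):
# Illustrative only - the converted workspace path will differ on your build agent:
powershell.exe -ExecutionPolicy Bypass -Command "& 'C:\Builds\1\MyProject\Sources\Scripts\DataServiceCpy.ps1'"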
I hope that helps.
I'm trying to create a PowerShell module to store some reusable utility functions. I created the script module PsUtils.psm1 and the script module manifest PsUtils.psd1 (I used these docs). My problem is that when I import this module in another script, Visual Studio Code does not suggest parameter names. Here's a screenshot:
When I hover the cursor over the function, I only get this:
PsUtils.psm1
function Get-Filelist {
    Param(
        [Parameter(Mandatory=$true)]
        [string[]]
        $DirectoryPath
    )
    Write-Host "DIR PATH: $DirectoryPath"
}
PsUtils.psd1 (excerpt)
...
FunctionsToExport = '*'
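For context, the importing script looks roughly like this (a sketch; the module's location relative to the consuming script and the argument are my assumptions):
# Consumer script (illustrative path and argument):
Import-Module "$PSScriptRoot\PsUtils\PsUtils.psd1"
Get-Filelist -DirectoryPath 'C:\temp'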
I have the PowerShell extension installed. Do I need to install anything else to make the suggestions work? What am I missing?
Generally speaking, only auto-loading modules - i.e., those in one of the directories listed in environment variable $env:PSModulePath - are automatically discovered.
As of version v2022.7.2 of the PowerShell extension, the underlying PowerShell editor services make no attempt to infer from the current source-code file which modules in nonstandard directories are being imported via source code in that file, whether via Import-Module or using module.
Doing so would be the prerequisite for discovering the commands exported by the modules being imported.
Doing so robustly sounds virtually impossible to do with the static analysis that the editor services are limited to performing, although it could work in simple cases; if I were to guess, such a feature request wouldn't be entertained, but you can always ask.
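As a quick check, you can list the directories that are searched for auto-loading modules from the integrated console (a minimal sketch):
# Directories searched for auto-loading modules, one per line:
$env:PSModulePath -split [IO.Path]::PathSeparator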
Workarounds:
Once you have imported a given module from a nonstandard location into the current session - either manually via the PIC (PowerShell Integrated Console) or by running your script (assuming the Import-Module call succeeds) - the editor will provide IntelliSense for its exported commands from that point on. So your options are (use one of them):
Run your script in the debugger at least once before you start editing. You can place a breakpoint right after the Import-Module call and abort the run afterwards - the only prerequisite is that the file must be syntactically valid.
Run your Import-Module command manually in the PIC, replacing $PSScriptRoot with your script file's directory path.
Note: It is tempting to place the cursor on the Import-Module line in the script in order to use F8 to run just this statement, but, as of v2022.7.2, this won't work in your case, because $PSScriptRoot is only valid in the context of running an entire script.
GitHub issue #633 suggests adding special support for $PSScriptRoot; while the proposal has been green-lighted, no one has stepped up to implement it since.
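For the PsUtils module from the question, that manual import might look like this (the path is illustrative):
# Run once in the PowerShell Integrated Console; adjust the path to your module's actual location:
Import-Module 'C:\path\to\my\module\PsUtils.psd1' -Force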
(Temporarily) modify the $env:PSModulePath variable to include the path of your script file's directory.
The most convenient way to do that is via the $PROFILE file that is specific to the PowerShell extension, which you can open for editing with psedit $PROFILE from the PIC.
Note: Make sure that profile loading is enabled in the PowerShell extension's settings.
E.g., if your directory path is /path/to/my/module, add the following:
$env:PSModulePath+="$([IO.Path]::PathSeparator)/path/to/my/module"
The caveat is that all scripts / code that is run in the PIC will see this updated $env:PSModulePath value, so at least hypothetically other code could end up mistakenly importing your module instead of one expected to be in the standard locations.
Note that GitHub issue #880 is an (old) proposal to allow specifying $env:PSModulePath entries as part of the PowerShell extension settings instead.
On a somewhat related note:
Even when a module is auto-discovered / has been imported, IntelliSense only covers its exported commands, whereas while you're developing that module you'd also like to see its private commands. Overcoming this limitation is the subject of GitHub issue #104.
I'm working on a script that automatically installs software. One of the programs being installed adds its own command-line commands and sub-commands once installed.
The goal is to use the program's provided commands to perform an action after its installation.
But when I run the command right after the program's installation, I'm greeted by:
" is not recognized as an internal or external command, operable program or batch file"
If I open a new PowerShell or cmd window, the command is available in that instance.
What is the easiest way to grant the script access to the commands?
Bender the Greatest's helpful answer explains the problem and shows you how to modify the $env:PATH variable in-session by manually appending a new directory path.
While that is a pragmatic solution, it requires that you know the specific directory path of the recently installed program.
If you don't - or you just want a generic solution that doesn't require you to hard-code paths - you can refresh the value of $env:PATH (the PATH environment variable) from the registry, via the [Environment]::GetEnvironmentVariable() .NET API method:
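# Rebuild the in-session PATH from the persisted machine- and user-level values: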
$env:PATH = [Environment]::GetEnvironmentVariable('Path', 'Machine'),
            [Environment]::GetEnvironmentVariable('Path', 'User') -join ';'
This updates $env:PATH in-session to the same value that future sessions will see.
Note how the machine-level value (list of directories) takes precedence over the user-level one, due to coming first in the composite value.
Note:
If you happen to have made in-session-only $env:PATH modifications before calling the above, these modifications are lost.
If applicable, this includes modifications made by your $PROFILE file.
Hypothetically, other processes could have made additional modifications to the persistent Path variable definitions as well since your session started, which the call above will pick up too (as will future sessions).
This is because the persistent PATH environment variable gets updated in the registry, but existing processes don't see that update unless they explicitly query the registry for the live value and then update PATH within their own process. If you need to continue in the same process, the workaround is to add the installation location to the PATH variable yourself after the program has been installed:
Note: In most cases I don't recommend refreshing the live value from the registry instead of using the approach below. Other processes can modify that value, not just your own, which can introduce unnecessary risk, whereas appending only what you know should have changed is a more pragmatic approach. It also adds code complexity for a case that often doesn't need to be generalized to that point.
# This will update the PATH variable for the current process
$env:PATH += ";C:\Path\To\New\Program\Folder;"
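Either way, you can then verify that the newly installed command resolves in the current session (the command name below is a placeholder for whatever the installer added):
# Returns the command's info if it is now on the PATH, or nothing if it still isn't:
Get-Command mynewtool -ErrorAction SilentlyContinue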
I am trying to configure the dymola.mos file; here is an example of changing the directory. But when I start Dymola, the working directory doesn't change at all, even though the log shows that Dymola ran the script.
My question is:
How could I make the cd command work in the dymola.mos file?
I assume you have activated the option Save startup directory. You can check this with the flag Advanced.StartupDirectory, which will be either 1 or 2. You can simply turn that off or follow the steps below.
From your command log we see that:
Dymola first executes the script <install-path>/insert/dymola.mos
Then it restores the settings stored in setup.dymx
Hence the settings in setup.dymx override your working directory.
Instead of using <install-path>/insert/dymola.mos, you should use a custom .mos script, which is passed as the first argument to dymola.exe on startup. This will always be executed last.
Example for Windows
Create the file startup.mos somewhere, e.g. in C:\dymola\startup.mos
Create a shortcut to Dymola.exe (for Dymola 2021x: C:\Program Files\Dymola 2021x\bin64\Dymola.exe)
Add the .mos script as an argument in the Target field of the shortcut's properties. The result will be:
"C:\Program Files\Dymola 2021x\bin64\Dymola.exe" "C:\dymola\startup.mos"
I'm getting the following error:
'mongod' is not recognized as an internal or external command,
operable program or batch file.
I've googled and read various threads, and an incorrect path seems to be the common root cause.
However, I have specified what appears to be the correct path (I tried both with and without quotation marks, as you can see):
You need to add the directories to the PATH variable. Click on the Path variable, then "Edit", and add the two entries listed below. You then need to click "OK" twice for the changes to be fully saved.
%Mongo%
%Mongo2%
These reference the two environment variables you created.
Note: you may need to restart cmd before the changes take effect.
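If you prefer to script the change rather than use the dialog, a hedged PowerShell alternative is to append the MongoDB bin directory to the user-level Path directly (the directory below is illustrative; substitute your actual install path):
# Persistently append the MongoDB bin directory to the user-level Path;
# new cmd/PowerShell windows will pick up the change:
$mongoBin = 'C:\Program Files\MongoDB\Server\6.0\bin'
[Environment]::SetEnvironmentVariable(
    'Path',
    ([Environment]::GetEnvironmentVariable('Path', 'User') + ';' + $mongoBin),
    'User')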
In Azure DevOps I am trying to figure out how to build a release pipeline that releases a static website to Firebase. I found this guide to help me with that.
I added two variables, 'firebase_token' and 'projectId', to a variable group in the Library. I try to use these variables in a release pipeline with a single task that executes a PowerShell script from my repository. I pass them via the following argument:
-fireBaseToken $(firebase_token) -fireBaseProject $(projectId) -releaseMessage $(Release.ReleaseName)
When I execute the release pipeline, I get an error when the PowerShell script is called. This is the error I get:
firebase_token : The term 'firebase_token' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
When I look at the command that was actually executed, I see this:
Formatted command: . 'D:\a\r1\a\test-project\drop\deploy.ps1' -fireBaseToken $(firebase_token) -fireBaseProject $(projectId) -releaseMessage Release-3
As far as I can tell, $(firebase_token) and $(projectId) are for some reason not being replaced by their values.
Assuming these variables should indeed be replaced by their values, what am I doing wrong? What is causing them not to be replaced?
You need to link the variable group into your release definition. Simply creating a variable group isn't enough.
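Once the group is linked, $(firebase_token) and $(projectId) are substituted in the task's arguments before the script is called. For completeness, a minimal param block that deploy.ps1 would need in order to accept those arguments looks like this (the body is illustrative):
# deploy.ps1 - parameter names match the arguments passed by the release task:
param(
    [string]$fireBaseToken,
    [string]$fireBaseProject,
    [string]$releaseMessage
)
Write-Host "Deploying '$fireBaseProject' with release message '$releaseMessage'"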