Is there a way to suppress PSScriptAnalyzer from highlighting alias warnings? e.g.
'rm' is an alias of 'Remove-Item'. Aliases can introduce possible problems and make scripts hard to maintain. Please consider changing alias to its full content.
Aliases in PowerShell are extremely useful. I have a simple rule: I only ever use the rational built-in aliases in scripts (I ignore the strange ones). Why? Well, most of these particular aliases are now 13 years old and have never changed (PowerShell 1.0 was released on November 14, 2006). So, for example, % or ls or cd are reliable in 99.99% of cases. I consider 99.99% reliability to be "good enough". Possibly the single most over-repeated comment on all PowerShell StackOverflow questions is "Note: it is not recommended to use aliases in PowerShell scripts as they can change!" (not recommended by whom, I often wonder? God? ;-) )
However, PSScriptAnalyzer in VSCode highlights all aliases as problems so that my current 7,000 line script has 488 such "problems". Is there a way to tell PSScriptAnalyzer that I like aliases, I intend to use aliases for the vastly more concise code, clarity, and greatly improved readability that they give me, and so I do not consider them to be problems?
Mathias' comment suggests searching for "Select PSScriptAnalyzer Rules", but I was not able to find that setting (VS Code 1.58.2, ms-vscode.powershell 2021.6.2).
The solution I found was to change the "Script Analysis: Settings Path" setting to point to a created file that contains the following code[1] to whitelist certain aliases. Below I've un-commented the relevant section.
@{
    # Only diagnostic records of the specified severity will be generated.
    # Uncomment the following line if you only want Errors and Warnings but
    # not Information diagnostic records.
    #Severity = @('Error','Warning')

    # Analyze **only** the following rules. Use IncludeRules when you want
    # to invoke only a small subset of the default rules.
    IncludeRules = @('PSAvoidDefaultValueSwitchParameter',
                     'PSMisleadingBacktick',
                     'PSMissingModuleManifestField',
                     'PSReservedCmdletChar',
                     'PSReservedParams',
                     'PSShouldProcess',
                     'PSUseApprovedVerbs',
                     'PSAvoidUsingCmdletAliases',
                     'PSUseDeclaredVarsMoreThanAssignments')

    # Do not analyze the following rules. Use ExcludeRules when you have
    # commented out the IncludeRules settings above and want to include all
    # the default rules except for those you exclude below.
    # Note: if a rule is in both IncludeRules and ExcludeRules, the rule
    # will be excluded.
    #ExcludeRules = @('PSAvoidUsingWriteHost')

    # You can use rule configuration to configure rules that support it:
    Rules = @{
        PSAvoidUsingCmdletAliases = @{
            Whitelist = @("cd")
        }
    }
}
[1] https://github.com/PowerShell/vscode-powershell/blob/master/examples/PSScriptAnalyzerSettings.psd1
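To sanity-check such a settings file outside VS Code, you can point Invoke-ScriptAnalyzer at it directly; a quick sketch (the script and settings-file paths are placeholders):
# Run the analyzer with the custom settings file; whitelisted aliases
# (here: cd) should no longer be reported by PSAvoidUsingCmdletAliases.
Invoke-ScriptAnalyzer -Path .\MyScript.ps1 -Settings .\PSScriptAnalyzerSettings.psd1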
Related
Trying to get param(...) to have some basic error checking... one thing that puzzles me is how to detect invalid switches and flags that are not in the param list?
function abc {
param(
[switch]$one,
[switch]$two
)
}
When I use it:
PS> abc -One -Two
# ok... i like this
PS> abc -One -Two -NotAValidSwitch
# No Error here for -NotAValidSwitch? How to make it have an error for invalid switches?
As Santiago Squarzon, Abraham Zinala, and zett42 point out in comments, all you need to do is make your function (or script) an advanced (cmdlet-like) one:
explicitly, by decorating the param(...) block with a [CmdletBinding()] attribute.
and/or implicitly, by decorating at least one parameter variable with a [Parameter()] attribute.
function abc {
[CmdletBinding()] # Make function an advanced one.
param(
[switch]$one,
[switch]$two
)
}
An advanced function automatically ensures that only arguments that bind to explicitly declared parameters may be passed.
If unexpected arguments are passed, the invocation fails with a statement-terminating error.
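For example (output simplified; the exact formatting varies between PowerShell editions):
PS> abc -One -Two -NotAValidSwitch
abc: A parameter cannot be found that matches parameter name 'NotAValidSwitch'.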
Switching to an advanced script / function has side effects, but mostly beneficial ones:
You gain automatic support for common parameters, such as -OutVariable or -Verbose.
You lose the ability to receive unbound arguments via the automatic $args variable (which is desired here); however, you can declare a catch-all parameter for any remaining positional arguments via [Parameter(ValueFromRemainingArguments)].
To accept pipeline input in an advanced function or script, a parameter must explicitly be declared as pipeline-binding, via [Parameter(ValueFromPipeline)] (objects as a whole) or [Parameter(ValueFromPipelineByPropertyName)] (value of the property of input objects that matches the parameter name) attributes.
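A minimal sketch of the last point (the function name is illustrative):
function Measure-TextLength {
    [CmdletBinding()]
    param(
        # Bind each pipeline input object as a whole to -Text.
        [Parameter(ValueFromPipeline)] [string] $Text
    )
    process { $Text.Length }
}
'foo', 'quux' | Measure-TextLength   # -> 3, 4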
For a juxtaposition of simple (non-advanced) and advanced functions, as well as binary cmdlets, see this answer.
If you do not want to make your function an advanced one:
Check if the automatic $args variable - reflecting any unbound arguments (a simpler alternative to $MyInvocation.UnboundArguments) - is empty (an empty array) and, if not, throw an error:
function abc {
param(
[switch]$one,
[switch]$two
)
if ($args.Count) { throw "Unexpected arguments passed: $args" }
}
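With this check in place, the problematic call from the question now fails (output simplified):
PS> abc -One -Two -NotAValidSwitch
Exception: Unexpected arguments passed: -NotAValidSwitch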
Potential reasons for keeping a function a simple (non-advanced) one:
To "cut down on ceremony" in the parameter declarations, e.g. for pipeline-input processing via the automatic $input variable alone.
Generally, for simple helper functions (such as those for module- or script-internal use) that don't need support for common parameters.
When a function acts as a wrapper for an external program to which arguments are to be passed through, and whose parameters (options) conflict with the names and aliases of PowerShell's common parameters, such as -verbose or -ov (-OutVariable).
What isn't a good reason:
When your function is exported from a module and has an irregular name (not adhering to PowerShell's <Verb>-<Noun> naming convention based on approved verbs) and you want to avoid the warning that is emitted when you import that module.
First and foremost, this isn't an issue of simple vs. advanced functions, but relates solely to exporting a function from a module; that is, even an irregularly named simple function will trigger the warning. And the warning exists for a good reason: functions exported from modules are typically "public", i.e. (also) for use by other users, who justifiably expect command names to follow PowerShell's naming conventions, which greatly facilitate command discovery. Similarly, users will expect cmdlet-like behavior from functions exported by a module, so it's best to only export advanced functions.
If you still want to use an irregular name while avoiding a warning, you have two options:
Disregard the naming conventions altogether (not advisable) and choose a name that contains no - character, e.g. doStuff - PowerShell will then not warn. A better option is to choose a regular name and define the irregular name as an alias for it (see below), but note that even aliases have a (less strictly adhered-to) naming convention, based on an official one- or two-letter prefix defined for each approved verb, such as g for Get- and sa for Start- (see the approved-verbs doc link above).
If you do want to use the <Verb>-<Noun> convention but use an unapproved verb (the token before -), define the function with a regular name (using an approved verb) and also define and export an alias for it that uses the irregular name (aliases aren't subject to the warning). For example, if you want a command named Ensure-Foo, name the function Set-Foo and define Set-Alias Ensure-Foo Set-Foo. Do note that both commands need to be exported and are therefore visible to the importer.
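In module (.psm1) terms, this could look like the following sketch, using the names from the example above:
function Set-Foo { <# ... #> }
Set-Alias Ensure-Foo Set-Foo
# Both the function and its alias must be exported:
Export-ModuleMember -Function Set-Foo -Alias Ensure-Foo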
Finally, note that the warning can also be suppressed on import, namely via Import-Module -DisableNameChecking. The downside of this approach - aside from placing the burden of silencing the warning on the importer - is that custom classes exported by a module can't be imported this way, because importing such classes requires a using module statement, which has no silencing option (as of PowerShell 7.2.1; see GitHub issue #2449 for background information).
I have been asked to review and edit, if needed, some scripts. I have entered the relevant part of this one below. I believe it's correct, but I cannot figure out whether the period at the beginning of the second-to-last line is needed or just a typo. It appears to be a sourcing operator, but I don't see why it'd be needed there.
As always, you folks here are the salt of the earth and deserve many more plaudits than you get and than I can give. Thank you so much for continuing to make me look better at this than I am.
$Assembly = 'D:\MgaLin2.dll'
."C:\Windows\Microsoft.NET\Framework\v4.0.30319\RegAsm.exe" -codebase -tlb $Assembly
Copy-Item -Path D:\Mga -Destination "C:\Program Files (x86)\Common Files\COMPANY_NAME\COMPANY_SUBFOLDER\" -Include *.*
The use of a period (.) (the dot-sourcing operator) is unusual in this case, but it works.
More typically, you'll see use of &, the call operator, in this situation.
Only for PowerShell scripts (and script blocks, { ... }) does the distinction between . and & matter (. then dot-sources the code, i.e. runs it directly in the caller's scope rather than in a child scope, as & does).
For external programs, such as RegAsm.exe, you can technically use . and & interchangeably, but to avoid conceptual confusion, it is best to stick with &.
Note that the reason that & (or .) is required in this case is that the executable path is quoted; the same would apply if the path contained a variable reference (e.g., $env:ProgramFiles) or a subexpression (e.g., $($name + $ext)).
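That is, all of the following invocation forms require & (or .); the second path is purely illustrative:
& "C:\Windows\Microsoft.NET\Framework\v4.0.30319\RegAsm.exe" -codebase -tlb $Assembly  # quoted path
& "$env:ProgramFiles\SomeTool\tool.exe"  # path containing a variable reference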
For more information about this requirement, which is a purely syntactic one, see this answer.
I'm trying to make use of module-qualified names[1] and a DefaultCommandPrefix and not have it break if the module is imported with Import-Module -Prefix SomethingElse. Maybe I'm just doing something really wrong here, or those two features aren't meant to be used together.
Inside the main module file, using "ModuleName\Verb-PrefixNoun args..." works as long as "Prefix" matches the DefaultCommandPrefix in the manifest (the module-qualified syntax seems to require the prefix used for the import[2]). But if the module is imported with a different prefix, all module-qualified references inside the module break.
After a bit of searching and trial and error, the least horrible solution I've managed to get working is the following hack. But I can't help wondering if there isn't some better way that automatically handles the prefix (just as Import-Module obviously manages to add the prefix; my first naive thought was that using just ModuleName\Verb-Noun would automatically append any prefix to the noun, but evidently not[2]).
So this is the hack I came up with: it looks up the module's prefix and appends it, then uses "." or "&" to expand/invoke the command:
# (imagine this code in the `ModuleName.psm1`, and a manifest with some `DefaultCommandPrefix`)
Function MQ {
param (
[Parameter()][string]
$Verb,
[Parameter()][string]
$Noun,
[string]
$Module='ModuleName'
)
"$Module\$Verb-$((Get-Module $Module).Prefix)$Noun"
}
Function Verb-Noun {
# This works even when importing with a prefix,
# but can I be guaranteed that it's not some
# other module's cmdlet?
Verb-OtherNoun 1 2 3 '...'
#ModuleName\Verb-OtherNoun 1 2 3 '...'
. (MQ 'Verb' 'OtherNoun') 1 2 3 '...'
# or:
& (MQ 'Verb' 'OtherNoun') 1 2 3 '...'
}
(MQ could be made more user-friendly by also accepting a single string, MQ "Verb-Noun", and splitting/recombining automatically, and so on; all the usual disclaimers apply.)
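For instance, a hypothetical single-string variant might look like this:
Function MQ2 {
    param(
        [Parameter(Mandatory)][string] $Name,   # e.g. 'Verb-Noun'
        [string] $Module = 'ModuleName'
    )
    # Split 'Verb-Noun' into its two parts, then rebuild with the module's prefix.
    $verb, $noun = $Name -split '-', 2
    "$Module\$verb-$((Get-Module $Module).Prefix)$noun"
}
& (MQ2 'Verb-OtherNoun') 1 2 3 '...'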
Note: I know it would be possible to hard-code the name instead of using DefaultCommandPrefix, e.g. as PSReadLine does (and a bunch of other modules). But, to be honest that feels like a workaround.
Just calling Verb-OtherNoun seems fragile to me, as the most recently defined one is used[3]. So I would imagine that, for example, adding an Import-Module statement just before the call, with a module that exports a Verb-OtherNoun, would cause the wrong (not this module's) cmdlet to be called. (Perhaps a more real-world scenario is a module being loaded after this module has been loaded, but before calling Verb-Noun.)
Is there perhaps some module-qualification syntax I'm not aware of that would do something akin to Import-Module (e.g. Module\\Verb-Noun, or Module\Verb+Noun, that would resolve and inject Module's prefix)? And now that I think of it, is there some reason why Module\Verb-Noun doesn't handle prefixes, or is it just that no one wrote the code for it? (I can't see how it would break things more than using DefaultCommandPrefix already breaks v2/v3[2].)
[1] https://www.sapien.com/blog/2015/10/23/using-module-qualified-cmdlet-names/
[2] https://github.com/PoshCode/PowerShellPracticeAndStyle/issues/23#issuecomment-106843619
[3] https://stackoverflow.com/a/22259706/13648152
You can avoid the problem by using neither a module qualifier nor a noun prefix when you call your module's own functions.
That is, call them as Verb-Noun, exactly as named in the target function's implementation.
This is generally safe, because your own module's functions take precedence over any commands of the same name defined outside your module.
The sole exception is if an alias defined in the global scope happens to have the same name as one of your functions - but that shouldn't normally be a concern, because aliases are used for short names that do not follow the verb-noun naming convention.
It also makes sense in that it allows you to call your functions module-internally with an invariable name - a name that doesn't situationally change, depending on what the caller decided to choose as a noun prefix via Import-Module -Prefix ....
Think of the prefix feature as being a caller-only feature that is unrelated to your module's implementation.
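For instance (module and command names are hypothetical):
Import-Module MyModule -Prefix Alt
Get-AltFoo   # the module's Get-Foo, surfaced to the caller as Get-AltFoo
# Inside the module, the function is still known (and called) as Get-Foo.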
As an aside: As of PowerShell 7.0, declaring a default noun prefix via the DefaultCommandPrefix module-manifest property doesn't properly integrate with the module auto-loading and command-discovery features - see this GitHub issue.
I am attempting to exclude certain files from my doxygen generated documentation. I am using version 1.8.14.
My files come in this naming convention:
/Path2/OtherFile.cs
/Path/DAL.Entity/Source.cs
/Path/DAL.Entity/SourceBase.generated.cs
I want to exclude all files that do NOT end in Base.generated.cs, and are located inside of /Path/.
Since it appears doxygen claims to use regex for the exclude_patterns variable, I eventually came up with this:
.*\\Path\\DAL\..{4,15}\\((?<!Base\.generated).)*
Needless to say, it did not work. Nor did multiple other variations. So far a simple wildcard * is the only regex character I have gotten to actually work.
doxygen uses QRegExp for a lot of things, so I assumed that was the library used for this variable as well; but even several variations of a pattern that that library claims to support did not work. Granted, that library is apparently full of bugs, but I would expect some things to work.
Does doxygen actually use a regex library for this variable?
If so, which library is it?
In either case, is there a method of achieving my goal?
My conclusion is: no, the Doxygen Doxyfile does not support real regex, even though the documentation claims it does. It's just standard wildcards that work.
We ended up with a really awkward solution to work around this.
What we did was add a macro to our CMakeLists.txt that builds a string with everything we want to include in INPUT instead, manually excluding the parts we don't want.
The sad part is that CMake's regex is also crippled, so we couldn't use advanced regex such as negative lookahead in LIST(FILTER EXCLUDE), similar to LIST(FILTER children EXCLUDE REGEX "^((?!autogen/public).)*$")... So even this solution is not really what we wanted.
Our CMakeLists.txt ended up looking something like this
cmake_minimum_required(VERSION 3.9)
project(documentation_html LANGUAGES CXX)
find_package(Doxygen REQUIRED dot)
# Custom macros
## Macro for getting all relevant directories when creating HTML documentation.
## This was created because the regex matching in Doxygen and CMake lacks support
## for more advanced syntax.
MACRO(SUBDIRS result current_dir include_regex)
FILE(GLOB_RECURSE children ${current_dir} ${current_dir}/*)
LIST(FILTER children INCLUDE REGEX "${include_regex}")
SET(dir_list "")
FOREACH(child ${children})
get_filename_component(path ${child} DIRECTORY)
IF(${path} MATCHES ".*autogen/public.*$" OR NOT ${path} MATCHES ".*build.*$") # If we have the /source/build/autogen/public folder available we create the doxygen for those interfaces also.
LIST(APPEND dir_list ${path})
ENDIF()
ENDFOREACH()
LIST(REMOVE_DUPLICATES dir_list)
string(REPLACE ";" " " dirs "${dir_list}")
SET(${result} ${dirs})
ENDMACRO()
SUBDIRS(DOCSDIRS "${CMAKE_SOURCE_DIR}/docs" ".*.plantuml$|.*.puml$|.*.md$|.*.txt$|.*.sty$|.*.tex$")
SUBDIRS(SOURCEDIRS "${CMAKE_SOURCE_DIR}/source" ".*.cpp$|.*.hpp$|.*.h$|.*.md$")
# Common config
set(DOXYGEN_CONFIG_PATH ${CMAKE_SOURCE_DIR}/docs/doxy_config)
set(DOXYGEN_IN ${DOXYGEN_CONFIG_PATH}/Doxyfile.in)
set(DOXYGEN_IMAGE_PATH ${CMAKE_SOURCE_DIR}/docs)
set(DOXYGEN_PLANTUML_INCLUDE_PATH ${CMAKE_SOURCE_DIR}/docs)
set(DOXYGEN_OUTPUT_DIRECTORY docs)
# HTML config
set(DOXYGEN_INPUT "${DOCSDIRS} ${SOURCEDIRS}")
set(DOXYGEN_EXCLUDE_PATTERNS "*/tests/* */.*/*")
set(DOXYGEN_FILE_PATTERNS "*.cpp *.hpp *.h *.md")
set(DOXYGEN_RECURSIVE NO)
set(DOXYGEN_GENERATE_LATEX NO)
set(DOXYGEN_GENERATE_HTML YES)
set(DOXYGEN_HTML_DYNAMIC_MENUS NO)
configure_file(${DOXYGEN_IN} ${CMAKE_BINARY_DIR}/DoxyHTML @ONLY)
add_custom_target(docs
COMMAND ${DOXYGEN_EXECUTABLE} ${CMAKE_BINARY_DIR}/DoxyHTML -d Markdown
WORKING_DIRECTORY ${CMAKE_BINARY_DIR}
COMMENT "Generating documentation"
VERBATIM)
and in the Doxyfile.in template we added the corresponding placeholders for those fields
OUTPUT_DIRECTORY = @DOXYGEN_OUTPUT_DIRECTORY@
INPUT = @DOXYGEN_INPUT@
FILE_PATTERNS = @DOXYGEN_FILE_PATTERNS@
RECURSIVE = @DOXYGEN_RECURSIVE@
EXCLUDE_PATTERNS = @DOXYGEN_EXCLUDE_PATTERNS@
IMAGE_PATH = @DOXYGEN_IMAGE_PATH@
GENERATE_HTML = @DOXYGEN_GENERATE_HTML@
HTML_DYNAMIC_MENUS = @DOXYGEN_HTML_DYNAMIC_MENUS@
GENERATE_LATEX = @DOXYGEN_GENERATE_LATEX@
PLANTUML_INCLUDE_PATH = @DOXYGEN_PLANTUML_INCLUDE_PATH@
After this we can run cd ./build && cmake ../ && make docs to create our html documentation and have it include the autogenerated interfaces in our source folder without including all the other directories in the build folder.
Quick description of what actually happens in the CMakeLists.txt
# Macro that gets all directories from current_dir recursively and returns the result to result as a space separated string
MACRO(SUBDIRS result current_dir include_regex)
# Gets all files recursively from current_dir
FILE(GLOB_RECURSE children ${current_dir} ${current_dir}/*)
# Filter files so we only keep the files that match the include_regex (the regex can't be too advanced)
LIST(FILTER children INCLUDE REGEX "${include_regex}")
SET(dir_list "")
# Let us act on all files... :)
FOREACH(child ${children})
# We're only interested in the path. So we get the path part from the file
get_filename_component(path ${child} DIRECTORY)
# Since CMake's regex is also crippled we can't do nice things such as LIST(FILTER children EXCLUDE REGEX "^((?!autogen/public).)*$"), which would have been preferred (CMake regex does not understand negative lookahead/lookbehind)... So we ended up with this ugly thing instead: adding all build/autogen/public paths and not adding any other paths inside build. I guess it would be possible to write this expression in regex without negative lookahead. But I'm both not really fluent in regex (who is... right?) and a bit lazy in this case. We just needed to get this one task done... :P
IF(${path} MATCHES ".*autogen/public.*$" OR NOT ${path} MATCHES ".*build.*$")
LIST(APPEND dir_list ${path})
ENDIF()
ENDFOREACH()
# Remove all duplicates... Since we GLOBed all files there are a lot of them. So this is important or Doxygen INPUT will overflow... I know... I tested...
LIST(REMOVE_DUPLICATES dir_list)
# Convert the dir_list to a space separated string
string(REPLACE ";" " " dirs "${dir_list}")
# Return the result! Coffee and cinnamon buns for everyone!
SET(${result} ${dirs})
ENDMACRO()
# Get all the paths that we want to include in our documentation... this is also where the build folders for the different applications are going to be... with our autogenerated interfaces which we want to keep.
SUBDIRS(SOURCEDIRS "${CMAKE_SOURCE_DIR}/source" ".*.cpp$|.*.hpp$|.*.h$|.*.md$")
# Add the dirs we want to the Doxygen INPUT
set(DOXYGEN_INPUT "${SOURCEDIRS}")
# Normal exclude patterns for stuff we don't want to add. This thing does not support regex... even though it should.
set(DOXYGEN_EXCLUDE_PATTERNS "*/tests/* */.*/*")
# Normal use of the file patterns that we want to keep in the documentation
set(DOXYGEN_FILE_PATTERNS "*.cpp *.hpp *.h *.md")
# IMPORTANT! Since we are creating all the INPUT paths ourselves we don't want Doxygen to do any recursion for us
set(DOXYGEN_RECURSIVE NO)
# Write the config
configure_file(${DOXYGEN_IN} ${CMAKE_BINARY_DIR}/DoxyHTML @ONLY)
# Create the target that will use that config to create the html documentation
add_custom_target(docs
COMMAND ${DOXYGEN_EXECUTABLE} ${CMAKE_BINARY_DIR}/DoxyHTML -d Markdown
WORKING_DIRECTORY ${CMAKE_BINARY_DIR}
COMMENT "Generating documentation"
VERBATIM)
I know this isn't the answer anyone who stumbles in on this question wants... unfortunately it seems to be the only reasonable solution...
... you all have my deepest condolences...
First, I would like to apologize in case the title is not descriptive enough; I'm having a hard time dealing with this problem. I'm trying to build an automation for an svn merge using a PowerShell script that will be executed by another process. The function that I'm using looks like this:
function Invoke-SvnMerge($target) {
    svn merge $target
}
Now, my problem occurs when there are conflicts in the merge. The default behavior of the command is to request input from the user and proceed accordingly. I would like to automate this process using predefined values (show the differences and then postpone the merge), but I haven't found a way to do it. In summary, the workflow that I am looking to accomplish is the following:
Detect whether the command execution requires any input to proceed
Provide default inputs (in my particular case "df" and then "p")
Is there any way to do this in powershell? Thank you so much in advance for any help/clue that you can provide me.
Edit:
To clarify my question: I would like to automatically provide a value when a command executed within a PowerShell script requires it, like in the following example:
[Screenshot: Requesting user input]
Edit 2:
Here is a test using the snippet provided by @mklement0. Unfortunately, it didn't work as expected, but I thought it was worth adding this edit to clarify the question completely.
Expected behavior: [screenshot]
Actual result: [screenshot]
Note:
This answer does not solve the OP's problem, because the specific target utility, svn, apparently suppresses prompts when the process' stdin input isn't coming from a terminal (console).
For utilities that do still prompt, however, the solution below should work, within the constraints stated.
Generally, before attempting to simulate user input, it's worth investigating whether the target utility offers programmatic control over the behavior, via its command-line options, which is both simpler and more robust.
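In the specific case of svn, for instance, the conflict behavior can be preset via command-line options so that no prompting occurs at all; a sketch (verify the options against your svn version):
function Invoke-SvnMerge($target) {
    # --non-interactive suppresses all prompting; --accept postpone leaves
    # conflicts in the working copy for later resolution.
    svn merge --non-interactive --accept postpone $target
    svn diff   # then show the differences, mirroring the manual 'df' step
}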
While it would be far from trivial to detect whether a given external command is prompting for user input:
you can blindly send the presumptive responses,
which assumes that no situational variations are needed (except if a particular call happens not to prompt at all, in which case the input is ignored).
Let's assume the following batch file, foo.cmd, which puts up 2 prompts and echoes the input:
#echo off
echo begin
set /p "input1=prompt 1: "
echo [%input1%]
set /p "input2=prompt 2: "
echo [%input2%]
echo end
Now let's send responses one and two to that batch file:
PS C:\> Set-Content tmp.txt -Value 'one', 'two'; ./foo.cmd '<' tmp.txt; Remove-Item tmp.txt
begin
prompt 1: one
[one]
prompt 2: two
[two]
end
Note:
For reasons unknown to me, the use of an intermediate file is necessary for this approach to work on Windows - 'one', 'two' | ./foo.cmd does not work.
Note how the < must be represented as '<' to ensure that it is passed through to cmd.exe and not interpreted by PowerShell up front (where < isn't supported).
By contrast, 'one', 'two' | ./foo does work on Unix platforms (PowerShell Core).
You can store the svn command-line output in a variable and parse through it, branching as you desire. Each line of output becomes a separate array element (external-program output captured in a PowerShell variable is an array of lines).
$var = & svn merge $target
$var
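From there you can branch on the captured lines; for example (the conflict-marker pattern is illustrative, not authoritative):
foreach ($line in $var) {
    # svn merge prefixes affected paths with an action code; 'C' indicates a conflict.
    if ($line -match '^\s*C\s') { Write-Warning "Conflict: $line" }
}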