Appcmd Syntax for Arrays in PowerShell

I'm trying to run the following to set the handler mappings for each of our websites using appcmd in powershell:
$websites = Get-ChildItem IIS:\Sites
foreach ($Site in $WebSites) {
C:\Windows\system32\inetsrv\appcmd set config "$site" /section:handlers -accessPolicy:"Read,Script,Execute"}
However I'm getting this error message for each site as it iterates through them:
ERROR ( message:Cannot find SITE object with identifier "Microsoft.IIs.PowerShell.Framework.ConfigurationElement". )
What am I doing wrong? I tried using a property of the $sites variable:
$site.name
but even that doesn't work. I'm at a loss. Thanks!

As you've encountered, "$site" doesn't expand to the name of the website, but to the type name of the object that $site refers to. "$site.name" is close, but in fact:
"$site" -eq "Microsoft.IIs.PowerShell.Framework.ConfigurationElement"
"$site.name" -eq "Microsoft.IIs.PowerShell.Framework.ConfigurationElement.name"
The PowerShell parser stops recognizing the variable name at ., and treats the rest (".name") as a string.
You can use the $() subexpression operator to embed the result of an entire statement in a string:
"Name: $($site.name)"
You can do anything you like inside $() and nest them all you like:
"Random Site Name: $("$(Get-Random -Maximum ([int32]::MaxValue)){0}" -f $site.name)"

Related

The term '>>' is not recognized as the name of a cmdlet, function, script file, or operable program

I want to run a powershell script and save/redirect the result to another file.
My script is:
# Define time for report (default is 10 day)
$startDate = (get-date).AddDays(-10)
# Store successful logon events from security logs with the specified dates and workstation/IP in an array
$slogonevents = Get-Eventlog -LogName Security -after $startDate | where {$_.eventID -eq 4624 }
# Crawl through events; print all logon history with type, date/time, status, account name, computer and IP address if user logged on remotely
foreach ($e in $slogonevents){
# Logon Successful Events
# Local (Logon Type 2)
if (($e.EventID -eq 4624 ) -and ($e.ReplacementStrings[8] -eq 2)){
write-host "Type: Local Logon`tDate: "$e.TimeGenerated "`tStatus: Success`tUser: "$e.ReplacementStrings[5] "`tWorkstation: "$e.ReplacementStrings[11]
}
# Remote (Logon Type 10)
if (($e.EventID -eq 4624 ) -and ($e.ReplacementStrings[8] -eq 10)){
write-host "Type: Remote Logon`tDate: "$e.TimeGenerated "`tStatus: Success`tUser: "$e.ReplacementStrings[5] "`tWorkstation: "$e.ReplacementStrings[11] "`tIP Address: "$e.ReplacementStrings[18]
}
} >> D:\test.txt
but I get errors like this:
>> : The term '>>' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and
try again.
At D:\Cyber_security\Python\Untitled1.ps1:26 char:3
+ } >> D:\test.txt
+ ~~
+ CategoryInfo : ObjectNotFound: (>>:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
Why is this happening?
To address an incidental problem up front: even if you fix the redirection problem (see below), your foreach loop won't produce success-stream output, resulting in an empty file. You're using Write-Host, which is typically the wrong tool unless the intent is to write to the display only (though in PowerShell 5 and above you can capture Write-Host output if you redirect it to the success output stream, e.g. with *>&1). Instead, use Write-Output (e.g. Write-Output "foo") or, preferably, implicit output (just "foo"). See also: the bottom section of this answer.
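A quick illustration of the difference (a sketch assuming PowerShell 5+; the file names are arbitrary):
Write-Host 'foo' > host.txt    # host.txt ends up empty: Write-Host bypasses the success stream
Write-Host 'foo' 6> host.txt   # PS 5+: Write-Host writes to the information stream, which 6> captures
'foo' > out.txt                # implicit output is captured; out.txt contains "foo"
Write-Output 'foo' > out2.txt  # same result via Write-Output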
foreach is a language statement, and as such you cannot directly apply a redirection (> or >>) to it - see the bottom section for an explanation.
You need to wrap it in a (by definition pipeline-based) command or expression first, for which there are two options:
Streaming option (preferred): Wrap the statement in a script block ({ ... }) and call it via &, the call operator (or, if you want the statement to run directly in the caller's scope as opposed to a child scope, as created by &, use ., the dot-sourcing operator)
& { foreach ($i in 1..2) { $i } } > test.txt
Collect-all-output-first option: Use $(...), the subexpression operator:
$(foreach ($i in 1..2) { $i }) > test.txt
Alternatively, use the ForEach-Object cmdlet, which is a command (as all named units of execution are in PowerShell), which also results in streaming processing (perhaps confusingly, a built-in alias for ForEach-Object is also named foreach, with the syntactical context deciding whether the cmdlet or the language statement is being referenced):
1..2 | ForEach-Object { $_ } > test.txt
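Putting the two points together - implicit output instead of Write-Host, and wrapping the loop so it can be redirected - a minimal sketch of the original loop might look like this (reusing the $slogonevents query from the question; only the local-logon branch is shown):
& {
  foreach ($e in $slogonevents) {
    # Local (Logon Type 2)
    if (($e.EventID -eq 4624) -and ($e.ReplacementStrings[8] -eq 2)) {
      # implicit output instead of Write-Host, so the string reaches the success stream
      "Type: Local Logon`tDate: $($e.TimeGenerated)`tUser: $($e.ReplacementStrings[5])`tWorkstation: $($e.ReplacementStrings[11])"
    }
  }
} >> D:\test.txt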
As for what you tried:
The > (>>) operator is, in effect, an alias of the Out-File cmdlet (Out-File -Append), and therefore requires a pipeline to function.
However, language statements cannot directly be used in a pipeline, and by themselves are always self-contained statements, meaning that whatever comes after isn't considered part of the same statement.
This becomes more obvious when you replace >> with Out-File -Append:
# !! FAILS, because `| Out-File -Append test.txt` is considered
# !! a separate statement, resulting in the following error:
# !! "An empty pipe element is not allowed."
foreach ($i in 1..2) { $i } | Out-File -Append test.txt
The error message An empty pipe element is not allowed. implies that | was considered the start of a new statement.
The same happened with >>, albeit with the more obscure error message shown in your question, but you can easily reproduce it by executing >> test.txt in isolation.
Note: Unlike POSIX-compatible shells such as Bash, PowerShell does not allow a redirection to start a statement, although it may appear anywhere else within a command; e.g., Get-Date >> test.txt works fine, and so does Get-Date >>test.txt -Format g, but >> test.txt 'hi' does not.
Design musings:
Given that an expression can serve as the first (and only the first) segment of a pipeline (e.g., 1..2 | Out-File -Append test.txt), it isn't obvious why a language statement cannot be used that way too.
The reason is a fundamental limitation in PowerShell's grammar:
A pipeline by itself is a statement,
but it cannot (directly) contain statements.
Hence the need to nest statements inside pipelines using the techniques shown above (& { ... } / $(...)).
Another unfortunate manifestation of this design is when you attempt to use language statements with && and ||, the PowerShell (Core) 7+ pipeline-chain operators:
Since exit and throw are language statements too, the following idiom - which would work in POSIX-compatible shells - does not work:
# Exit the script with an exit code of 1 if the Get-ChildItem call
# reports an error.
# !! FAILS, because `exit`, as a language statement, cannot be
# !! used directly in a pipeline.
Get-ChildItem NoSuchDir -ErrorAction SilentlyContinue || exit 1
Again, nesting of the statement is required, such as $(...):
# OK, due to $(...)
Get-ChildItem NoSuchDir -ErrorAction SilentlyContinue || $(exit 1)
Perhaps needless to say:
This requirement is obscure and easy to forget...
... and it is exacerbated by the fact that placing e.g. exit 1 after && or || does not cause a syntax (parse) error and only fails at runtime, and only when the condition is met.
That is, you may not notice the problem until the LHS command actually reports an error.
Additionally, the error message you get when it does fail can be confusing: The term 'exit' is not recognized as a name of a cmdlet, function, script file, or executable program. This is because exit in this context is then interpreted as the name of a command (such as a function or external program) rather than as a language statement.

How to pass output from a PowerShell cmdlet to a script?

I'm attempting to run a PowerShell script with the input being the results of another PowerShell cmdlet. Here's the cross-forest Exchange 2013 PowerShell command I can run successfully for one user by specifying the -Identity parameter:
.\Prepare-MoveRequest.ps1 -Identity "user@domain.com" -RemoteForestDomainController "dc.remotedomain.com" $Remote -UseLocalObject -OverwriteLocalObject -Verbose
I want to run this command for all MailUsers. Therefore, what I want to run is:
Get-MailUser | select windowsemailaddress | .\Prepare-MoveRequest.ps1 -RemoteForestDomainController "dc.remotedomain.com" $Remote -LocalForestDomainController "dc.localdomain.com" -UseLocalObject -OverwriteLocalObject -Verbose
Note that I removed the -Identity parameter because I was feeding it from each Get-MailUser's WindowsEmailAddress property value. However, this returns with a pipeline input error.
I also tried exporting the WindowsEmailAddress property values to a CSV, and then reading it as per the following site, but I also got a pipeline problem: http://technet.microsoft.com/en-us/library/ee861103(v=exchg.150).aspx
Import-Csv mailusers.csv | Prepare-MoveRequest.ps1 -RemoteForestDomainController DC.remotedomain.com -RemoteForestCredential $Remote
What is the best way to feed the windowsemailaddress field from each MailUser to my Prepare-MoveRequest.ps1 script?
EDIT: I may have just figured it out with the following foreach addition to my Import-Csv option above. I'm testing it now:
Import-Csv mailusers.csv | foreach { Prepare-MoveRequest.ps1 -Identity $_.windowsemailaddress -RemoteForestDomainController DC.remotedomain.com -RemoteForestCredential $Remote }
You should declare your custom function called Prepare-MoveRequest instead of simply making it a script. Then, dot-source the script that declares the function, and then call the function. To accept pipeline input into your function, you need to declare one or more parameters that use the appropriate parameter attributes, such as ValueFromPipeline or ValueFromPipelineByPropertyName. Here is the official MSDN documentation for parameter attributes.
For example, let's say I was developing a custom Stop-Process cmdlet. I want to stop a process based on the ProcessID (or PID) of a Windows process. Here is what the command would look like:
function Stop-CustomProcess {
# Specify the CmdletBinding() attribute for our
# custom advanced function.
[CmdletBinding()]
# Specify the PARAM block, and declare the parameter
# that accepts pipeline input
param (
[Parameter(ValueFromPipelineByPropertyName = $true)]
[int] $Id
)
# You must specify the PROCESS block, because we want this
# code to execute FOR EACH process that is piped into the
# cmdlet. If we do not specify the PROCESS block, then the
# END block is used by default, which only would run once.
process {
Write-Verbose -Message ('Stopping process with PID: {0}' -f $ID);
# Stop the process here
}
}
# 1. Launch three (3) instances of notepad
1..3 | % { notepad; };
# 2. Call the Stop-CustomProcess cmdlet, using pipeline input
Get-Process notepad | Stop-CustomProcess -Verbose;
# 3. Do an actual clean-up
Get-Process notepad | Stop-Process;
Now that we've taken a look at an example of building the custom function ... once you've defined your custom function in your script file, dot-source it in your "main" script.
# Import the custom function into the current session
. $PSScriptRoot\Prepare-MoveRequest.ps1
# Call the function
Get-MailUser | Prepare-MoveRequest -RemoteForestDomainController dc.remotedomain.com $Remote -LocalForestDomainController dc.localdomain.com -UseLocalObject -OverwriteLocalObject -Verbose;
# Note: Since you've defined a parameter named `-WindowsEmailAddress` that uses the `ValueFromPipelineByPropertyName` attribute, the value of each object will be bound to the parameter, as it passes through the `PROCESS` block.
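For illustration only, here is a hedged sketch of what such a parameter declaration might look like inside a Prepare-MoveRequest function (the parameters other than -WindowsEmailAddress are assumptions based on how the script is invoked above, not the script's actual signature):
function Prepare-MoveRequest {
    [CmdletBinding()]
    param (
        # Binds from the WindowsEmailAddress property of each piped-in MailUser object
        [Parameter(ValueFromPipelineByPropertyName = $true)]
        [string] $WindowsEmailAddress,
        [string] $RemoteForestDomainController,
        [string] $LocalForestDomainController,
        [switch] $UseLocalObject,
        [switch] $OverwriteLocalObject
    )
    process {
        Write-Verbose -Message ('Preparing move request for: {0}' -f $WindowsEmailAddress)
        # ... the actual move-preparation logic would go here ...
    }
}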
EDIT: I would like to point out that your edit to your post does not properly handle parameter binding in PowerShell. It may achieve the desired results, but it does not teach the correct method of binding parameters in PowerShell. You don't have to use the ForEach-Object to achieve your desired results. Read through my post, and I believe you will increase your understanding of parameter binding.
My foreach loop did the trick.
Import-Csv mailusers.csv | foreach { Prepare-MoveRequest.ps1 -Identity $_.windowsemailaddress -RemoteForestDomainController DC.remotedomain.com -RemoteForestCredential $Remote }

Commands executed in PowerShell with variables surrounded in quotes fail. Why?

I'm having a surprisingly difficult time embedding variables with quotes to an external command with PoSH. For example, this command
dfsradmin membership list /rgname:`"stuff I want`"
gives me the following expected result:
Failed:
Replication group with name stuff I want cannot be found.
This command, however
$group = "stuff I want"
dfsradmin membership list /rgname:`"$group`"
fails with this error:
Failed:
The subobject "/rgname:"stuff is not a valid subobject.
Is this a bug with Powershell or am I missing/misunderstanding something?
Yeah, there are known issues in PowerShell (including v2.0) around this: http://connect.microsoft.com/PowerShell/feedback/details/376207/executing-commands-which-require-quotes-and-variables-is-practically-impossible
See if the alternatives discussed in the link above work for you. I cannot try it out as I don't have that executable.
Also, EchoArgs.exe is a useful tool that you can use to see what arguments have been received from PowerShell.
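For example, a quick check might look like this (assuming EchoArgs.exe, which ships with the PowerShell Community Extensions, is on your PATH; it simply prints back each argument as the external program actually receives it):
$group = "stuff I want"
echoargs /rgname:`"$group`"     # shows how the quotes and spaces were split into arguments
echoargs /rgname:"$group"       # compare with the un-escaped form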
I found that defining
$quote = '"'
and then using /command$quote"test"$quote works as well
There's no need to add back ticks in front of quotes. Does this work for you?
$group = "stuff I want"
dfsradmin membership list /rgname:"$group"
So I was able to get around this by executing it in CMD.exe and doing string manipulations to get what I need.
$str = &cmd /c 'dfsradmin membership list /rgname:"blah blah"'
$str = &cmd /c "dfsradmin membership list /rgname:$blah" # with vars
Thanks for the help! I hope this has been resolved in Powershell 3.0.
I found a workaround which doesn't call cmd but uses Invoke-Expression instead. The command has to be put in a variable first:
$var = "string with spaces"
$command = "first part " + [char]96 + [char]34 + $var + [char]96 + [char]34 + " second part"
Invoke-Expression $command
Not that pretty but it works. You can replace [char]96 with '`' and [char]34 with '"' if you prefer. Easy to create a function which does it if you use it a lot.
None of the above worked for me, but based on Carlos's idea, this is the solution that worked for me:
# get msdeploy exe
$MSDeploy = ${env:ProgramFiles}, ${env:ProgramFiles(x86)} |
ForEach-Object {Get-ChildItem -Path $_ -Filter 'MSDeploy.exe' -Recurse} |
Sort-Object -Property @{Expression={[version]$_.VersionInfo.FileVersion}} -Descending |
Select-Object -First 1 -ExpandProperty FullName
#build deploy command
$deplyCmd = """""$MSDeploy"" -verb:sync -dest:iisApp=""Default Web Site"" -enableRule:DoNotDeleteRule -source:iisApp=""$ExtraWebFilesFolder"""
#execute
&cmd /c $deplyCmd
I know this is an old thread, but I'm posting here in case my solution works for somebody else, as it worked for me.
This particular command (dfsradmin) expects to see the quotes natively, so I enclosed the double-quoted value in single quotes, thereby passing the quotes through as well:
dfsradmin membership list /rgname:'"stuff I want"'
or if using through variable:
$group = '"stuff I want"'
dfsradmin membership list /rgname:$group

Getting the arguments of the last invoked command in powershell?

I want to be able to get the argument portion of the previous command. $^ seems to return just the command and not the args. Get-History -count 1 returns the last full command including the command and the args. I could just .Replace the first instance, but I am not sure if it is correct.
Scenario is that sometimes I want to do something like this. Let's assume that $* are the args to the last command:
dir \\share\files\myfile.exe
copy $* c:\windows\system32
Any ideas how to get the last args correctly?
UPDATE: finished my method for doing this.
function Get-LastArgs
{
$lastHistory = (Get-History -count 1)
$lastCommand = $lastHistory.CommandLine
$errors = [System.Management.Automation.PSParseError[]] @()
[System.Management.Automation.PsParser]::Tokenize($lastCommand, [ref] $errors) | ? {$_.type -eq "commandargument"} | select -last 1 -expand content
}
Now I can just do:
dir \\share\files\myfile.exe
copy (Get-LastArgs) c:\windows\system32
To reduce typing, I did
set-alias $* Get-LastArgs
so now I still have to do
copy ($*) c:\windows\system32
if anybody has any ideas for making this better please let me know.
For the last argument (not all of them!), interactive hosts like the console and the ISE provide the automatic variable $$.
Help
man about_Automatic_Variables
gets
$$
Contains the last token in the last line received by the session.
Other hosts may or may not implement this feature (as well as the $^ variable).
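For example, typed interactively at the console (matching the scenario above, where the path happens to be the last token of the previous line):
dir \\share\files\myfile.exe
copy $$ c:\windows\system32    # $$ holds the last token of the previous line: the path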
There is no easy way to get the last args in this fashion without parsing the history item itself, and this is no trivial matter. The reason is that the "last arguments" may not be what you think they are after you take splatting, pipelines, nested subexpressions, and named and unnamed arguments/parameters into the equation. In PowerShell v2 there is a parser available for tokenizing commands and expressions, but I'm not sure you want to go that route.
ps> $psparser = [System.Management.Automation.PSParser]
ps> $psparser::Tokenize("dir foo", [ref]$null) | ? {
    $_.type -eq "commandargument" } | select -last 1 -expand content
foo

How to test for existence of a script-scoped variable in PowerShell?

Is it possible to test for the existence of a script-scoped variable in PowerShell?
I've been using the PowerShell Community Extensions (PSCX) but I've noticed that if you import the module while Set-PSDebug -Strict is set, an error is produced:
The variable '$SCRIPT:helpCache' cannot be retrieved because it has not been set.
At C:\Users\...\Modules\Pscx\Modules\GetHelp\Pscx.GetHelp.psm1:5 char:24
While investigating how I might fix this, I found this piece of code in Pscx.GetHelp.psm1:
#requires -version 2.0
param([string[]]$PreCacheList)
if ((!$SCRIPT:helpCache) -or $RefreshCache) {
$SCRIPT:helpCache = @{}
}
This is pretty straightforward code: if the cache doesn't exist or needs to be refreshed, create a new, empty cache. The problem is that referencing $SCRIPT:helpCache while Set-PSDebug -Strict is in force causes the error, because the variable hasn't been defined yet.
Ideally, we could use a Test-Variable cmdlet but such a thing doesn't exist! I thought about looking in the variable: provider but I don't know how to determine the scope of a variable.
So my question is: how can I test for the existence of a variable while Set-PSDebug -Strict is in force, without causing an error?
Use test-path variable:SCRIPT:helpCache
if (!(test-path variable:script:helpCache)) {
$script:helpCache = @{}
}
This works for me without problems.
Checked using this code:
@'
Set-PsDebug -strict
write-host (test-path variable:script:helpCache)
$script:helpCache = "this is test"
write-host (test-path variable:script:helpCache) and value is $script:helpCache
'@ | set-content stricttest.ps1
.\stricttest.ps1
Try this trick:
Get-Variable [h]elpCache -Scope Script
It should not throw or emit any errors because we use the wildcard pattern [h]elpCache. On the other hand, this kind of wildcard is effectively a literal name.
You can use Get-Variable with the -Scope parameter. By default, this cmdlet returns not just the variable's value but a PSVariable object, and it will report an error if the variable isn't found:
Get-Variable foo -Scope script
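A small sketch combining both approaches (helpCache is just the variable name from the question; adjust it as needed):
# Option 1: Test-Path against the variable: drive, scoped to the script
if (-not (Test-Path variable:script:helpCache)) {
    $script:helpCache = @{}
}
# Option 2: Get-Variable reports an error when the variable is missing, so suppress it
$var = Get-Variable helpCache -Scope Script -ErrorAction SilentlyContinue
if ($null -eq $var) {
    $script:helpCache = @{}
}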