How to remove multiple IIS bindings in PowerShell fast?

As part of a version upgrade, our current solution takes all bindings (except for two dummy URLs) from one site and sets them on another site.
I'm currently removing the bindings through PowerShell, but it is super slow. I've looked at just about every thread on SO, and almost every solution uses "Remove-WebBinding".
This is my current code:
Get-Website -Name $siteName | Get-WebBinding | Where-Object { $_.HostHeader -notlike '*dummy*' } | Remove-WebBinding;
I have 272 (minus 2 dummy) bindings to remove and it takes about 3 minutes.
Any ideas how to do it faster?
BTW: Adding all of those bindings one by one is super slow too, but I guess if I find an answer here, a similar solution will work for adding as well.

Copied from my comment and expanded a little.
Cause of Slowness
The WebAdministration cmdlets were designed a long time ago and have many disadvantages.
The slowness you observed is effectively by design. Though the module is not open source, we can guess that each call to Remove-WebBinding creates the relevant underlying objects (like ServerManager) and then commits the change to the IIS configuration file. Thus, the more bindings to remove, the longer it takes (and the more resources are consumed).
Solution
For all in-support IIS releases today (8+), you should use the IISAdministration cmdlets instead. They were developed more recently with both flexibility and performance in mind.
By using a single ServerManager object and committing changes only once, removing bindings can be a lot faster.
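Here is a minimal sketch of that approach. It assumes $siteName holds the site name and that the dummy bindings can be recognized by a host header containing 'dummy', as in the question:
Import-Module IISAdministration

# One ServerManager for the whole operation, instead of one per binding.
$manager = Get-IISServerManager
$site    = $manager.Sites[$siteName]

# Snapshot the matching bindings first so we don't mutate the collection
# while enumerating it.
$toRemove = @($site.Bindings | Where-Object { $_.Host -notlike '*dummy*' })
foreach ($binding in $toRemove) {
    $site.Bindings.Remove($binding)
}

# A single commit writes all removals to applicationHost.config at once.
$manager.CommitChanges()
The same pattern should help with adding bindings: add them all through one ServerManager (e.g. via $site.Bindings.Add(...)) and commit once at the end.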

Try running the PowerShell script below (note the filter keeps the dummy bindings and removes everything else, as the question asks; -Confirm:$false avoids a prompt per binding):
Import-Module WebAdministration
Get-Website -Name $siteName | Get-WebBinding | Where-Object { $_.HostHeader -notlike '*dummy*' } | Remove-WebBinding -Confirm:$false

Related

Call Write-Host on the results of a call to Get-Content [duplicate]

I've been trying to work with an API that only accepts raw text or base64-encoded values in a JSON object. The content I'm POSTing is data from an XML file, so I used PowerShell's Get-Content cmdlet (without -Raw) to retrieve the data from the .xml, base64-encoded it, and sent it to the API. The API then decodes it, but the XML formatting was lost.
I found a SO post about using the -Raw switch on Get-Content, but it seems like the documentation for this switch is vague. When I used the -Raw switch, encoded it and sent it back to the API, the formatting was good.
briantist's helpful comment on the question sums up the answer succinctly (in his words; lightly edited, emphasis added):
Get-Content [by default] reads a file line by line and returns an array of the lines. Using -Raw reads the entire contents of the file as a single string.
The name -Raw is a tad unfortunate, because it mistakenly suggests reading raw bytes, whereas -Raw still detects encodings and ultimately reads everything into a .NET [string] type.
(By contrast, you need either -Encoding Byte (Windows PowerShell) or -AsByteStream (PowerShell Core) to read a file as a byte array.)
Given -Raw's actual purpose, perhaps something like -Whole would have been a better name, but that ship has sailed (though adding an alias name for a parameter is still an option).
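To make the distinction concrete, here is a small sketch (the file name is illustrative, and the UTF-8 assumption in the encode step may need adjusting for your data):
# Default: an array of lines, one [string] per line; newlines are stripped.
$lines = Get-Content .\payload.xml

# -Raw: the whole file as a single [string] with newlines preserved -
# which is what keeps the XML formatting intact when you base64-encode it.
$text = Get-Content .\payload.xml -Raw
$b64  = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($text))

# Actual raw bytes require a different parameter per edition:
$bytes = Get-Content .\payload.xml -Encoding Byte   # Windows PowerShell
$bytes = Get-Content .\payload.xml -AsByteStream    # PowerShell Core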
Let's take a look at why this information may currently be difficult to discover [Update: It no longer is]:
[Update: This section is now OBSOLETE, except the link to the PowerShell documentation GitHub repository, which welcomes contributions, bug reports, suggestions]
A Tale of PowerShell Documentation Woes
The central conflict of this tale is the tension between the solid foundation of PowerShell's potentially great help system and its shoddy current content.
As is often the case, third parties come to the rescue, as shown in gms0ulman's helpful answer.
As briantist also points out, however, PowerShell's documentation is now open-source and welcomes contributions; he states:
"I will direct your attention to the Edit link
[for the Get-Content help topic on GitHub] [...] so you can actually fix it up and submit something better
(including examples). I have done it before; they do accept pull
requests for it."
The caveat is that while future PowerShell Core versions will benefit from improvements, it's not clear whether improvements will make their way back into Windows PowerShell.
Let's ask PowerShell's built-in help system, accessible via the standard Get-Help cmdlet (the content for which may not be preinstalled; install when prompted, or run Update-Help from an elevated session):
Get-Help Get-Content -Parameter Raw
Note how you can conveniently ask for help on a specific parameter (-Parameter Raw).
On Windows PowerShell v5.1, this yields:
-Raw
Ignores newline characters and returns the entire contents of a file in one string.
By default, the contents of a file is returned as an array of strings that is delimited
by the newline character.
Raw is a dynamic parameter that the FileSystem provider adds to the Get-Content cmdlet.
This parameter works only in file system drives.
This parameter is introduced in Windows PowerShell 3.0.
Required? false
Position? named
Default value
Accept pipeline input? false
Accept wildcard characters? false
That is indeed what we were looking for and quite helpful (leaving aside the awkward phrasing "delimited by the newline character", and the fact that on Windows a newline is actually a character sequence).
On PowerShell Core v6.0.2, this yields:
-Raw
Required? false
Position? Named
Accept pipeline input? false
Parameter set name (All)
Aliases None
Dynamic? true
While the meta-data is more detailed - including a hint that the parameter is dynamic (see below) - it is crucially missing a description of the parameter.
Some provider-cmdlet parameters are dynamic, in that they are specific to a given provider, so there is a mechanism to specify the target provider when asking for help, by passing a provider-specific example path to the -Path parameter.
In the case at hand, let's therefore try (PowerShell Core on Windows):
Get-Help Get-Content -Parameter Raw -Path C:\
Sadly, the result is the same unhelpful response as before.
Note that, as long as you're invoking the command from a filesystem location, explicit use of -Path should not be necessary, because the provider underlying the current location is implicitly targeted.
Now let's take a look at the online versions of PowerShell's help topics:
As it turns out, a given provider cmdlet can have multiple documentation pages:
A generic one that applies to all providers.
Provider-specific pages that document provider-exclusive behavior and parameters, such as -Raw for the filesystem provider.
Sadly, the generic topics make no mention of the existence of the provider-specific ones, making them hard to discover.
Googling Get-Content takes you to https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.management/get-content, the generic topic, which contains the following misleading statement: "This parameter is not supported by any providers that are installed with Windows PowerShell."
This is not only unhelpful, but actively misleading, because the PowerShell file-system provider clearly is installed with PowerShell and it does support -Raw.
[Drive] providers are PowerShell's generalization of the filesystem drive metaphor to support targeting other [typically hierarchical] storage systems with a unified set of cmdlets. For instance, Windows PowerShell also ships with the registry drive provider, which allows managing the registry as if it were a drive.
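For instance, with the registry provider the same item cmdlets work against registry keys (the paths below are illustrative):
# Browse a registry key as if it were a directory.
Get-ChildItem HKLM:\SOFTWARE\Microsoft | Select-Object -First 5

# Read values the same way you would read item properties on a file.
Get-ItemProperty -Path HKCU:\Environment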
The -Online switch for Get-Help conveniently allows opening the online version of the requested topic in the browser; so let's try that (Get-Help Get-Content -Online):
Windows PowerShell v5.1: Takes you to a 404 page(!) related to v4.
PowerShell Core v6.0.1: Takes you to the same generic topic that googling does.
There's a sliver of hope, however: The aforementioned 404 page offers a link to the filesystem-provider-specific topic:
Get-Content for FileSystem
It is there that we finally discover the online version of the truly relevant, provider-specific information, which is the same that Get-Help Get-Content -Parameter Raw provides locally, but - as stated - only in Windows PowerShell.
As per Kory Gill's comment and your own, the built-in Get-Help and MSDN documentation should be your first port of call. But you've already RTFM!
When that fails, ss64 is a great reference for PowerShell documentation and additional examples.
Its Get-Content page is here; it has this to say about -Raw:
Return multiple lines as a single string (PowerShell 3.0)
In PowerShell 2.0 use the static method: [System.IO.File]::ReadAllText(string path)
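For completeness, that PowerShell 2.0 fallback looks like this (the path is illustrative):
# Reads the whole file into one string, similar to Get-Content -Raw.
$text = [System.IO.File]::ReadAllText('C:\data\file.xml')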

Is Exchange Online cmdlet Update-DistributionGroupMember transactional/atomic?

There is an Exchange Online/Exchange 2016 PowerShell cmdlet, Update-DistributionGroupMember, which is supposed to "...replace all members of distribution groups..." (https://technet.microsoft.com/en-us/library/dd335049(v=exchg.160).aspx). Does anyone know if I can trust it to be atomic/transactional, meaning that it will either complete its task or give me an error and leave the distribution group membership as it was? Or do I have to be prepared for the scenario in which it only does part of its task and leaves the DG in a halfway state?
This isn't specific to this cmdlet; the same can be said for any code you run, out of the box or self-created.
You must always be prepared for failure. Just like the old adage, the only good backup is a tested and validated one. I've stopped counting the number of times I've seen an org do backups, never test the restore, and then discover the backup was no good.
Trust belongs in the process, the validations in that process, and intimate knowledge of the actions about to be taken, not in the code alone.
Bulk updates like this are usually a one-and-done thing: they either work or they don't, which is why they should always be approached with caution, to avoid corruption. Otherwise, you need to chunk the update and validate the success of each chunk before you attempt to process the next.
There is no out-of-the-box concept of applying a change and validating it before moving to the next without writing that logic into your code.
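As a hypothetical sketch of that chunk-and-validate pattern, using the incremental Add-DistributionGroupMember cmdlet rather than the all-at-once Update-DistributionGroupMember ($group and $newMembers are assumed to exist, and members are assumed to be identified by SMTP address):
$chunkSize = 50
for ($i = 0; $i -lt $newMembers.Count; $i += $chunkSize) {
    $chunk = $newMembers[$i..([Math]::Min($i + $chunkSize, $newMembers.Count) - 1)]

    foreach ($member in $chunk) {
        Add-DistributionGroupMember -Identity $group -Member $member -ErrorAction Stop
    }

    # Validate the chunk landed before moving on.
    $current = Get-DistributionGroupMember -Identity $group |
        Select-Object -ExpandProperty PrimarySmtpAddress
    $missing = $chunk | Where-Object { $_ -notin $current }
    if ($missing) {
        throw "Chunk starting at index $i did not fully apply: $($missing -join ', ')"
    }
}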
PowerShell does have the concept of transactions, but not all things support it.
# Get parameters, examples, full and online help for a cmdlet or function
(Get-Command -Name Start-Transaction).Parameters
Get-Help -Name Start-Transaction -Examples
Get-Help -Name Start-Transaction -Full
Get-Help -Name Start-Transaction -Online
So, you have to manually look at each of the planned cmdlets and see if it supports the -UseTransaction option and if that meets your needs. You can do that this way...
Get-Help * -Parameter UseTransaction

Suppress content listing from invoke-command with filepath

After writing a Powershell script to build a list of servers and then perform some maintenance activity on each, I decided to split it into two parts. The inner script, call it scriptB, does its thing on the server whose name is passed in; the outer script, scriptA, builds the list of servers and runs scriptB on each. The idea is that scriptA will someday be able to run a choice of scripts -- scriptB or scriptC, for instance -- against each server, depending on a control parm. And the maintenance script (B or C) can also be run by itself, i.e. by passing it the name of the target server.
I call scriptB from scriptA using invoke-command with the -filepath option, and this appears to work just fine. Except that, for each iteration, the content of scriptB appears in the output. If I call it three times then I have three copies of scriptB in the output. I already have write-output statements in scriptB that explain what's going on, but those messages are hard to spot amid all the noise.
I have tried assigning the output to a variable, for instance:
$eatthis = invoke-command -computername sqlbox -filepath c:\dba\scriptB.ps1
and then it was quiet, but the variable ate the good output along with the unwanted listings ... and it is large enough that I would prefer not to parse it. I tried reading up on streams, but that didn't look like a promising direction either. At this point I'm ready to convert scriptB to a function and give up the benefits of having it be a standalone script, but if anyone knows an easy way to suppress the listing of an invoke-command scriptblock specified via -filepath then that would be helpful.
Alternatively, a good way to phrase this for Google would be welcome. Search terms like "listing," "echo," and "suppress" aren't getting me what I need.
Convert your scripts into advanced functions. They can be stored in separate files and dot-sourced in the master script; this loads the functions and makes them available.
e.g.
c:\scripts\ComplicatedProcessfunctions.ps1
(which contains function Run-FunctionB { ... } and, later, function Run-FunctionC { ... })
Then call the function:
$dataResults = Run-FunctionB
or even
$dataResults += Run-FunctionB
if you're running within a loop and building a collection of results, which it sounds like you might be.
Make sure each function returns its data as an object or collection of objects, probably a custom PowerShell object of your own creation.
The master script then processes the results.
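A minimal skeleton of such an advanced function might look like this (the name, parameter, and output shape are illustrative):
function Run-FunctionB {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        [string]$ComputerName
    )

    # ... do the maintenance work against $ComputerName here ...

    # Emit a custom object instead of writing text to the host, so the
    # master script receives data it can filter, sort, and report on.
    [pscustomobject]@{
        ComputerName = $ComputerName
        Status       = 'OK'
        Timestamp    = Get-Date
    }
}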
I would recommend Get-Help about_Functions_Advanced, the Scripting Guys blog, and the PowerShell Scripting Games website for information on how to build advanced functions and how to do them right.

Returning and saving all Unix Attributes for users in a specific OU

As part of our company policy, all employees who have left the company keep their Active Directory accounts, which are disabled and moved to a specific OU. There are several parts to this process which need to be automated, but a significant part is unchecking the "Unix Enabled" property from the ADUC MMC and clearing all Unix attributes. These actions are not always performed, so I am tasked with cleaning it up. I am fairly new to PowerShell, but have a reasonable enough understanding of it to work out a solution. I believe the script below should do it (formatted for better visibility):
Get-ADUser -SearchBase "OU=Disabled Accounts,OU=AnotherOU,DC=mycompany,DC=com" `
    -Filter { Enabled -eq $false } -Properties SamAccountName | ForEach-Object {
        Clear-QasUnixUser $_.SamAccountName
        Disable-QasUnixUser $_.SamAccountName
    }
It may not be the most elegantly written script, but it seems to work as intended. Of course, it will be run in a test environment prior to production.
My dilemma:
I need to return all of the attributes that will be cleared by these commands before I run them (for the purposes of backing out) and I don't believe Get-QasUnixUser alone does this. Can anyone give me an idea of how to approach returning all of this information, and perhaps some professional insight as to how to sort it based on user? I know that links are not considered appropriate answers, but I also understand the scope of the question I am asking, so any assistance would be greatly appreciated.
Looking at the docs for QAS, it looks like they use the out-of-the-box schema for their purposes: newer versions appear to use the altSecurityIdentities attribute, while older versions appear to consume the various SFU attributes that ship with Windows. You might try using ldifde to take a snapshot of a user, enable them for QAS, take another LDIF snapshot, and diff the files, as one approach to seeing everything QAS changes.
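A sketch of that snapshot-and-diff approach (the DN and file names are illustrative):
# Export the user's attributes before and after the QAS change;
# -d scopes the export to one DN, -f names the output file.
ldifde -d "CN=jdoe,OU=Disabled Accounts,OU=AnotherOU,DC=mycompany,DC=com" -f before.ldf
# ... enable the user for QAS (or clear the attributes) ...
ldifde -d "CN=jdoe,OU=Disabled Accounts,OU=AnotherOU,DC=mycompany,DC=com" -f after.ldf

# Diff the two snapshots to see exactly which attributes changed.
Compare-Object (Get-Content before.ldf) (Get-Content after.ldf)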
You can use the -Properties parameter of Get-ADUser to provide a list of attributes you want back. It will be natively sorted by user, but the Sort-Object cmdlet gives you the ability to tweak that order.
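For example, to capture the RFC 2307 Unix attributes per user before clearing them (the attribute list is illustrative; substitute whichever attributes your QAS version actually uses):
Get-ADUser -SearchBase "OU=Disabled Accounts,OU=AnotherOU,DC=mycompany,DC=com" `
    -Filter { Enabled -eq $false } `
    -Properties uid, uidNumber, gidNumber, unixHomeDirectory, loginShell |
    Sort-Object SamAccountName |
    Export-Csv unix-attributes-backup.csv -NoTypeInformation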

PowerShell: best way to ensure function name uniqueness?

What's the best way to ensure your PowerShell function name is unique? The standard since version 1 has been to put a short unique ID after the verb's dash and before the noun. For example, with my initials I could create the function Get-DWServer; this is fine until someone creates a function in a different module for getting an object reference to a data warehouse and uses the same function name. Two or three letters just isn't sufficient, but more than that gets ugly to read.
I'd prefer to have a unique prefix* similar to .NET namespaces. It's better for organization, easier on the eye and works with tab completion. And it expands gracefully so you could name it DW.Get-Server or DW.Network.Get-Server.
The downside of doing this is it runs afoul of PowerShell's proper verb check during module import/Export-ModuleMember. You can get around this by specifying DisableNameChecking during import but I'm wondering if doing this is sloppy and might be bad if PowerShell 3 comes out with a better solution. I know PS verb purists (are there any?) will complain that this probably 'hinders discovery' but I can't think of a better option.
What do you do?
(*You can refer to an exported function using module_name\function_name notation, but this won't work with tab completion and still doesn't get around the requirement that the function name be unique.)
I have heard Jeffrey Snover (the inventor of PowerShell) talk about this a few times, and he described it as a dilemma, not a problem: a dilemma has to be managed but can't be solved completely, whereas a problem can be solved. As a PS verb "purist", I would say the best way to manage this is to have a 2- or 3-letter prefix on your nouns. This has been sufficient so far for many widely distributed sets of cmdlets, e.g. the Quest AD cmdlets vs. Microsoft's AD cmdlets: Get-ADUser vs. Get-QADUser.
If you are consuming a module and want to use your own prefix, you can specify one with
Import-Module MyModule -Prefix MyPrefix
I know this isn't the one single silver bullet answer, but I would say it works for 95% of the situations.
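For instance (the module and prefix here are illustrative), the prefix is inserted before the noun of every exported command:
Import-Module ActiveDirectory -Prefix Corp

# The module's Get-ADUser is now exposed as:
Get-CorpADUser -Identity jdoe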