There is an Exchange Online/Exchange 2016 PowerShell cmdlet, Update-DistributionGroupMember, which is supposed to "...replace all members of distribution groups..." (https://technet.microsoft.com/en-us/library/dd335049(v=exchg.160).aspx). Does anyone know whether I can trust it to be atomic/transactional, meaning that it will either complete its task or give me an error and leave the distribution group membership as it was? Or do I have to be prepared for the scenario in which it does only part of its task and leaves the DG in a halfway state?
thanks!
Martin
This cmdlet notwithstanding, the same can be said for any code you run, out-of-box or self-created.
You must always be prepared for failure. As the old adage goes, the only good backup is a tested and validated one. I've stopped counting the number of times I've seen an org take backups, never test the restore, and then realize the backup was worthless.
Trust belongs in the process, the validations within that process, and intimate knowledge of the actions about to be taken, not in the code alone.
Bulk updates like this are always a one-and-done thing: either the whole batch works or it doesn't, which is why they should always be approached with caution, to avoid corruption. Otherwise, you need to chunk the update and validate the success of each chunk before you attempt to process the next one.
There is no out-of-box concept of applying a change and validating it before moving to the next; you have to write that logic into your code yourself, as sketched below.
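As a rough illustration of that chunk-and-validate approach (the group name, member list, and chunk size are placeholder assumptions; this sketch only adds members, and removals would be a similar chunked pass with Remove-DistributionGroupMember):
# Placeholder inputs: adjust to your environment
$group      = 'DG-Example'
$newMembers = Get-Content .\members.txt          # desired members, one address per line
$chunkSize  = 50
for ($i = 0; $i -lt $newMembers.Count; $i += $chunkSize) {
    $chunk = $newMembers[$i..([Math]::Min($i + $chunkSize, $newMembers.Count) - 1)]
    foreach ($m in $chunk) {
        Add-DistributionGroupMember -Identity $group -Member $m -ErrorAction Stop
    }
    # Validate this chunk actually landed before touching the next one
    $current = (Get-DistributionGroupMember -Identity $group -ResultSize Unlimited).PrimarySmtpAddress
    $missing = $chunk | Where-Object { $_ -notin $current }
    if ($missing) { throw "Chunk starting at index $i did not fully apply: $($missing -join ', ')" }
}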
PowerShell does have the concept of transactions, but not all things support it.
# Get parameters, examples, full and Online help for a cmdlet or function
(Get-Command -Name Start-Transaction).Parameters
Get-help -Name Start-Transaction -Examples
Get-help -Name Start-Transaction -Full
Get-help -Name Start-Transaction -Online
So, you have to manually look at each of the planned cmdlets and see whether it supports the -UseTransaction parameter, and whether that meets your needs. You can do that this way...
Get-Help * -parameter UseTransaction
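For example, the registry provider supports transactions in Windows PowerShell (note that transactions were removed in PowerShell 6+, and to my knowledge the Exchange cmdlets are not among the supporters, so this is only an illustration of the mechanism):
Start-Transaction
New-Item -Path HKCU:\Software\DemoKey -UseTransaction            # staged; not visible outside the transaction yet
New-ItemProperty -Path HKCU:\Software\DemoKey -Name Setting -Value 1 -UseTransaction
Undo-Transaction                                                 # rolls everything back; Complete-Transaction would commit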
As part of a version upgrade, our current solution takes all bindings (except for two dummy URLs) from one site and sets them on another site.
I'm currently removing the bindings through PowerShell, but it is super slow. I've looked at just about every thread on SO, and almost every solution uses "Remove-WebBinding".
This is my current code:
Get-Website -Name $siteName | Get-WebBinding | Where-Object { $_.HostHeader -notlike '*dummy*' } | Remove-WebBinding;
I have 272 (minus the 2 dummy) bindings to remove, and it takes about 3 minutes.
Any ideas how to do it faster?
BTW: Adding all of those bindings one by one is super slow too, but I guess if I find an answer here, a similar solution will work for adding as well.
Copied from my comment and expanded a little bit.
Cause of Slowness
The WebAdministration cmdlets were designed a long while ago and have many disadvantages.
The slowness you observed is essentially by design. Though the module is not open source, we can infer that each call to Remove-WebBinding creates the relevant underlying objects (like ServerManager) and then commits the change to the IIS configuration file. Thus, the more bindings to remove, the longer it takes (and the more resources are consumed).
Solution
For all IIS releases still in support today (8.0+), you should use the IISAdministration cmdlets instead. They are newly developed with both flexibility and performance in mind.
By using a single ServerManager object and committing changes only once, removing bindings can be a lot faster.
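A sketch of that approach (the site name and the *dummy* filter are assumptions taken from the question; Start-IISCommitDelay holds the changes so everything is written in a single commit):
Import-Module IISAdministration
$manager = Get-IISServerManager
Start-IISCommitDelay                                     # hold all changes until we commit explicitly
try {
    $site  = $manager.Sites['MySite']                    # 'MySite' is a placeholder site name
    $stale = @($site.Bindings | Where-Object { $_.Host -notlike '*dummy*' })
    foreach ($binding in $stale) { $site.Bindings.Remove($binding) }
    Stop-IISCommitDelay -Commit $true                    # one write to applicationHost.config
} catch {
    Stop-IISCommitDelay -Commit $false                   # abandon the staged changes
    throw
}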
Try running the PowerShell script below; it removes everything except the dummy bindings, without prompting per binding:
Import-Module WebAdministration
Get-WebBinding | Where-Object { $_.HostHeader -notlike '*dummy*' } | Remove-WebBinding -Confirm:$false
According to the MSDN Strongly Encouraged Development Guidelines:
Cmdlets should not use the Console API.
Why is this?
If I write [Console]::Write("test"), it works just as well as
Write-Host "test"
EDIT:
It's well known that Write-Host should be avoided. When MSDN says not to use the Console API, is it safe to assume they are implying that we should not use Write-Host either, since it uses the Console API behind the scenes?
The main reason you shouldn't use console-related functionality is that not all PowerShell host environments are consoles.
While the typical use case is to run PowerShell in a console, PowerShell does not need a console and can cooperate with different kinds of host environments.
Thus, for your code to remain portable, it shouldn't assume the existence of a console.
It is safe, however, to assume the existence of (the abstraction called) host, which PowerShell exposes via the automatic $HOST variable.
The capabilities of hosts vary, however, which has historically created problems even when not using the console API directly, but its PowerShell abstraction, Write-Host - see below.
PowerShell provides a hosting API,
with which the PowerShell runtime can be embedded inside other applications. These applications can then use PowerShell functionality to implement certain operations, including those exposed via the graphical interface.
https://en.wikipedia.org/wiki/PowerShell
The regular PowerShell console using the Console Window Host (conhost.exe) on Windows is therefore just one implementation of a PowerShell host - the PowerShell ISE is another example, as is the Microsoft Exchange Server management GUI (2007+).
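You can see this from within a session by inspecting the automatic $Host variable (the names below are simply what those hosts happen to report):
$Host.Name                 # e.g. 'ConsoleHost' in a console window, 'Windows PowerShell ISE Host' in the ISE
$Host.UI.WriteLine('written via the host abstraction, not the console API')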
As for Write-Host:
Up to PSv4, as the name suggests, it used to write to the host - which may or may not be a console - so Write-Host could actually fail on hosts that don't support user interaction; see this question.
Starting with PSv5, Write-Host is safe to use, because it now writes to the newly introduced, host-independent information stream (number 6) - see Get-Help about_Redirection and the next section.
Note that Write-Host still does and always has generated output outside of the normal PowerShell output stream - its output is meant to be "commentary" (feedback to the user) rather than data.
While Write-Host is safe to use in PSv5+, it exists for backward compatibility, so instead consider using
Write-Information -InformationAction Continue, or using Write-Information with preference variable $InformationPreference set to Continue, because:
"Write-Host" is now a bit of a misnomer, given that it doesn't actually directly write to the host anymore.
Write-Host, in the interest of backward compatibility, doesn't integrate with the $InformationPreference preference variable - see below.
Write-Host still offers console-inspired formatting parameters (-ForegroundColor, -BackgroundColor), which not all hosts (ultimately) support.
Write-Host vs. Write-Information:
Tip of the hat to PetSerAl for his help with the following.
Write-Information, introduced in PSv5, is the cmdlet that fully integrates with the new, host-independent information stream (number 6).
Notably, you can now redirect and thus capture Write-Information / Write-Host output by using 6>, something that wasn't possible with Write-Host in PSv4-.
Also note that this redirection works even with $InformationPreference's default value, SilentlyContinue, which only governs the display, not the output aspect (only using common parameter -InformationAction Ignore truly suppresses writing to the stream).
In line with how PowerShell handles errors and warnings, the display behavior of Write-Information is controllable via the new $InformationPreference preference variable / the new common -InformationAction cmdlet parameter.
Write-Information's default behavior is to be silent - $InformationPreference defaults to SilentlyContinue.
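A minimal interactive sketch of these behaviors (the stream-6 redirection syntax is covered in Get-Help about_Redirection):
Write-Information 'hidden'                                # silent by default
Write-Information 'visible' -InformationAction Continue   # displayed, preference overridden for this call
Write-Information 'captured' 6> .\info.txt                # capturable via stream 6 even while silent on screen
Get-Content .\info.txt                                    # -> captured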
Note that Write-Information has no direct formatting parameters[1] and instead offers keyword tagging with the -Tags parameter[2].
By contrast, for backward compatibility, Write-Host effectively behaves like Write-Information -InformationAction Continue, i.e., it outputs by default, and the only way to silence it is to use Write-Host -InformationAction Ignore[3]; it does not respect an $InformationPreference value of SilentlyContinue (it does, however, respect the other values, such as Inquire).
[1] PetSerAl points out that you can pass formatting information to Write-Information, but only in an obscure fashion that isn't even documented as of PSv5.1; e.g.:
Write-Information -MessageData ([System.Management.Automation.HostInformationMessage] @{Message='Message'; ForegroundColor='Red'}) -InformationAction Continue
[2] Note how parameter name "Tags" actually violates one of the strongly encouraged cmdlet development guidelines: it should be "Tag" (singular).
[3] PetSerAl explains that this behavior stems from Write-Host passing the PSHOST tag to Cmdlet.WriteInformation behind the scenes.
[Console]::Write and Write-Host are basically the same: both write a message to the console, where it can be seen on the screen.
The basic reason this is discouraged is that it breaks the workflow: the output of Write-Host can't be piped or used further. And if the script runs in an environment without a console (or under similar constraints), the output is simply lost.
According to this and this thread, you should therefore prefer Write-Output, which sends the message to the pipeline, where it can be used further. If your message is meant to signal an error, use exceptions instead.
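A small sketch of the difference (the function and values are purely illustrative):
function Get-Answer {
    Write-Host   'diagnostic message'   # host/information stream; not part of the data output
    Write-Output 42                     # success stream; this is what the pipeline sees
}
$result = Get-Answer                    # 'diagnostic message' still prints; $result contains only 42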
After writing a PowerShell script to build a list of servers and then perform some maintenance activity on each, I decided to split it into two parts. The inner script, call it scriptB, does its thing on the server whose name is passed in; the outer script, scriptA, builds the list of servers and runs scriptB on each. The idea is that scriptA will someday be able to run a choice of scripts -- scriptB or scriptC, for instance -- against each server, depending on a control parm. And the maintenance script (B or C) can also be run by itself, i.e. by passing it the name of the target server.
I call scriptB from scriptA using invoke-command with the -filepath option, and this appears to work just fine. Except that, for each iteration, the content of scriptB appears in the output. If I call it three times then I have three copies of scriptB in the output. I already have write-output statements in scriptB that explain what's going on, but those messages are hard to spot amid all the noise.
I have tried assigning the output to a variable, for instance:
$eatthis = invoke-command -computername sqlbox -filepath c:\dba\scriptB.ps1
and then it was quiet, but the variable ate the good output along with the unwanted listings ... and it is large enough that I would prefer not to parse it. I tried reading up on streams, but that didn't look like a promising direction either. At this point I'm ready to convert scriptB to a function and give up the benefits of having it be a standalone script, but if anyone knows an easy way to suppress the listing of an invoke-command scriptblock specified via -filepath then that would be helpful.
Alternatively, a good way to phrase this for Google would be welcome. Search terms like "listing," "echo," and "suppress" aren't getting me what I need.
Convert your scripts into advanced functions. They can be stored in separate files and dot-sourced in the master script. This loads each function and makes it available.
e.g.
c:\scripts\ComplicatedProcessfunctions.ps1
(which contains function Run-FunctionB { ... } and, later, function Run-FunctionC { ... })
Then call the function:
$dataResults = Run-FunctionB
or even
$dataResults += Run-FunctionB
if you're running within a loop and building a collection of results, which it sounds like you might be.
Make sure each function returns its data as an object or a collection of objects, probably a custom PowerShell object of your own creation.
The master script then processes the results.
I would recommend Get-Help about_Functions_Advanced, the Scripting Guys blog, and the PowerShell Scripting Games website for information on how to build advanced functions and how to do it right.
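A minimal sketch of the layout (the file and function names follow the example above and are hypothetical):
# master script: dot-source the function file, then call a function per server
. C:\scripts\ComplicatedProcessFunctions.ps1      # loads Run-FunctionB (and Run-FunctionC)
$dataResults = foreach ($server in $serverList) {
    Run-FunctionB -ComputerName $server           # emits objects only; no script listing in the output
}
$dataResults | Export-Csv .\MaintenanceReport.csv -NoTypeInformation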
In PowerShell 3, if you are searching for a command, you can use both: Get-Help Get-* and Get-Command Get-* both work.
So what's the major difference?
Both commands share a lot of information in common, but the main difference is that Get-Help outputs MAML objects (which are "text-based", error-prone, and can even be outdated), while Get-Command gets you real objects (metadata) that you can investigate further.
For most help parts, Get-Help displays pre-made help contained in XML files.
For other parts, Get-Help uses Get-Command to generate the information, such as the SYNTAX section.
Get-Command also gets you information that Get-Help doesn't, like the command's module, its DLL path (in the case of a compiled cmdlet), parameter sets, and so on.
One is not a replacement for the other, you use both under different circumstances.
The way I think of it is: Get-Command returns the technical information about commands (DLL, implementing type, function body for functions, etc.), while Get-Help returns the user-friendly information about commands (detailed syntax, examples, explanation of parameters, etc.).
And Get-Command returns a normal object, which behaves perfectly normally and predictably, whereas Get-Help returns an oddly formatted help object that is really only intended for viewing in the console, not for processing in code.
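A quick way to see the difference in a session:
(Get-Command Get-ChildItem).ModuleName                          # owning module: real, inspectable metadata
(Get-Command Get-ChildItem).Parameters['Path'].ParameterType    # -> System.String[]
(Get-Help Get-ChildItem).Synopsis                               # pre-made, human-oriented help text
Get-Help Get-ChildItem -Examples                                # formatted for reading, not for code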
Is there any built-in functionality in PowerShell that allows you to examine system processes in great detail and view/manipulate their I/O streams? Are there any community modules? Has anybody worked with process streams and knows of any good references for such work?
The standard cmdlets provided by PowerShell allow basic operations on processes. The Get-Process cmdlet returns objects for all running processes, with detailed information about each. You can also get the modules that a process has loaded by using the -Module parameter, and you can use the Start-Process/Stop-Process cmdlets to manage the list of running processes.
Moreover, the returned objects give you all the information you may be searching for: Get-Process returns System.Diagnostics.Process objects, while Get-Process -Module returns System.Diagnostics.ProcessModule objects.
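For example (notepad is an illustrative target, and the stdout capture at the end drops down to the underlying .NET API, since no built-in cmdlet exposes a child process's streams):
$p = Get-Process -Name notepad
$p | Select-Object Id, CPU, WorkingSet64, StartTime      # System.Diagnostics.Process properties
Get-Process -Name notepad -Module |
    Select-Object ModuleName, FileName -First 5          # System.Diagnostics.ProcessModule objects
# Reading a child process's stdout via System.Diagnostics.Process:
$psi = New-Object System.Diagnostics.ProcessStartInfo('cmd.exe', '/c echo hello')
$psi.RedirectStandardOutput = $true
$psi.UseShellExecute        = $false
$proc = [System.Diagnostics.Process]::Start($psi)
$proc.StandardOutput.ReadToEnd()                         # -> hello
$proc.WaitForExit()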