I've recently started using Pester to write tests in PowerShell, and I have no problem running basic tests. However, I'm now looking to build some more complex tests, and I'm struggling with what to do about the variables I need for the tests.
I'm writing tests to validate some cloud infrastructure, so after we've run a deployment the tests go through and validate that it has deployed correctly and everything is where it should be. Because of this there is a large number of variables we want to validate: VM names, network names, subnet configurations, and so on.
In normal PowerShell scripts these would be stored outside the script and fed in as parameters, but this doesn't seem to fit with the design of Pester or BDD. Should I be hard-coding these variables inside my tests? That doesn't seem very intuitive, especially if I might want to re-use these tests for other environments. I did experiment with storing them in an external JSON file and reading that into my test, but even then I need to hard-code the path to the JSON file in my script. Or am I doing it all wrong and there is a better approach?
I don't know if I can speak to best practice for this sort of thing, but at the end of the day a Pester script is just a PowerShell script, so there's no harm in using regular PowerShell anywhere in and around your tests (although beware that some constructs have their own scopes).
I would probably use a param block at the top of the script and pass the variables in via the -Script parameter of Invoke-Pester, per this suggestion: http://wahlnetwork.com/2016/07/28/using-the-script-param-to-pass-parameters-into-pester-tests/
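A minimal sketch of that approach (assuming Pester 3/4's Invoke-Pester -Script hashtable syntax; the file name Deployment.Tests.ps1 and the parameter names are invented for illustration):

# Deployment.Tests.ps1
param
(
    [string] $VMName,
    [string] $SubnetName
)

Describe "Deployment validation" {
    It "received the VM name to validate" {
        $VMName | Should -Not -BeNullOrEmpty
    }
}

# Caller: pass the values in without hard-coding them in the test file
Invoke-Pester -Script @{
    Path       = '.\Deployment.Tests.ps1'
    Parameters = @{ VMName = 'vm01'; SubnetName = 'frontend' }
}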
At the end of the day, "best practice" for Pester testing (particularly for infrastructure validation) is very loosely defined, if not nonexistent.
As an example, I used a param block in my Active Directory test script that (in part) tests against a stored configuration file much as you described:
https://github.com/markwragg/Test-ActiveDirectory/blob/master/ActiveDirectory.tests.ps1
As described here, you can add a BeforeAll block to define the variables:
Describe "Check name" {
BeforeAll {
$name = "foo"
}
It "should have the correct value" {
$name | Should -Be "foo"
}
}
We use NUnit 3 to run deployment smoke tests with parameters read from VCS. It works like this:
PowerShell reads the parameters from VCS and composes the NUnit 3 console command line.
The NUnit 3 console runs the tests.
Some parameters are passwords.
The problem is that the end test result XML lists all the test parameters, including the passwords.
Is it possible to instruct NUnit 3 somehow to avoid including the test parameters in the test result XML?
There is no feature like that because the assumption is you would not pass in anything that needs to be secure.
Why is that? Imagine there were an argument like --secret:password=XXX which worked like a parameter but was not displayed in the results. In that case, your password would still be in the clear in your script, for anyone to read. It would also be available to any test, which could do whatever it wanted with it, like write it somewhere.
A better approach is to use some sort of encryption, so that you are only passing in a key, which is not usable except by an account or program that knows how to decrypt it. There are various approaches to doing this, depending on how you are running the tests. I believe you will find that your VCS has a way of encrypting passwords that you may be able to use.
In any case, without such a "secret" option, the only way you could avoid publishing the password would be to create your own output format by writing an engine result writer extension. Your extension code would receive the entire nunit 3 output document and you could modify it to remove the passwords before saving the file.
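Purely to illustrate the encryption idea on the PowerShell side that composes the command line (the file name deploy-password.txt, the parameter name DbPasswordCipher, and the test assembly name are invented; the DPAPI-based cmdlets used here only decrypt for the same user on the same machine, and your test code would need a matching decryption step):

# One-time: store the ciphertext, not the plaintext, in VCS or a config file
$cipherText = Read-Host 'Password' -AsSecureString | ConvertFrom-SecureString
Set-Content -Path .\deploy-password.txt -Value $cipherText

# At deployment time: only the ciphertext appears on the NUnit command line,
# so the result XML never contains the clear-text password
$cipherText = Get-Content -Path .\deploy-password.txt
& nunit3-console.exe MyTests.dll "--params:DbPasswordCipher=$cipherText"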
I have used PesterHelpers to construct a suite of tests for my module and have begun adding Functional tests. I run the min, norm, and full test scripts as needed to test my work. I find that I am using the same mocks over and over, copying them from script to script. Is it possible to create one global mock that all test scripts can use in both Public and Private directories?
You could probably put your mocks into another file and dot-source it into your script:
. .\mocks.ps1
This would save some duplication in your scripts, but would also make them a little more obscure.
I don't think there's any concept in Pester of declaring mocks in a more global way, as I believe they are scoped to the Describe or Context block they are declared in.
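One hedged workaround within that constraint (the file and test names are invented): keep the Mock calls in a shared file and dot-source it inside each Describe, so the mocks still register in that block's scope:

# Shared.Mocks.ps1 -- contains only the Mock declarations you keep copying around
Mock Get-Date { [datetime]'2017-01-01' }
Mock Test-Path { $true }

# MyFunction.Tests.ps1
Describe 'MyFunction' {
    # Dot-sourcing runs the Mock calls in this Describe's scope
    . "$PSScriptRoot\Shared.Mocks.ps1"

    It 'uses the mocked date' {
        Get-Date | Should -Be ([datetime]'2017-01-01')
    }
}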
(Resolve-Path ($PSScriptRoot + "\..\..\..\AOI\UDT\testfile.text"))
Use this; because the path is built from $PSScriptRoot, it resolves relative to the script's location, so you will be able to reference your file no matter where the tests are run from.
How would one go about returning an object from one PowerShell script into another PowerShell script? I am looking to automate some of my deployments using PowerShell so that we can have easier-to-repeat deployments with a minimum amount of human intervention.
The idea would be to have a "library" of scripts for the various processes that occur during a deployment, each taking a series of arguments, and then have a main deployment script that just calls each of those subscripts with arguments for the files being used. For example, for one deployment I might have to create a login on a SQL Server, add some functions or stored procedures to a database, deploy SSRS reports, update the shared data sources for SSRS to use an AD service account, etc.
I am able to cram everything into a single script with a bunch of functions, but for easier re-usability I would like to take each basic task (run SQL scripts, get a credential from Secret Server, run a folder of SQL scripts, deploy SSRS reports, etc.) and place it in its own script with parameters that can be called from my main script. This would allow me to have a main script that just calls each task script with parameters. In order to do this, though, for things like updating the AD credentials, I would need a way to return the PSCredential object that the function currently returns from a separate script instead.
You can explicitly return an object by using the return keyword:
return $myObject
Or you can implicitly return the object, either by explicitly using Write-Output or by having it bare on a line:
Write-Output $myObject
Or
$myObject
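Applied to your deployment scenario, a rough sketch (the script and parameter names are invented): the sub script outputs the credential object, and the main script captures it by assigning the result of the call:

# Get-DeploymentCredential.ps1 (sub script)
param
(
    [string] $AccountName
)

$password = Read-Host "Password for $AccountName" -AsSecureString
return New-Object System.Management.Automation.PSCredential ($AccountName, $password)

# Deploy.ps1 (main script)
$cred = & "$PSScriptRoot\Get-DeploymentCredential.ps1" -AccountName 'DOMAIN\SsrsService'
Write-Host "Got credential for $($cred.UserName)"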
In my new project team, they have written a proxy function for each PowerShell cmdlet. When I asked the reason for this practice, they said it is the normal way an automation framework would be written. They also said that if a PowerShell cmdlet changes, we do not need to worry; we can just change the one function.
I have never seen PowerShell cmdlet functionality or names change.
For example, the SQL PowerShell module was previously a snap-in and was later changed to a module, but the cmdlets are still the same. There was no change to any cmdlet signature; at most, extra arguments may have been added.
Because of these proxy functions, even small tasks take a long time. Is their fear baseless or correct? Is there any incident where a PowerShell cmdlet name or parameter has changed?
I guess they want to be extra safe. PowerShell does have breaking changes here and there sometimes, but I doubt that what your team is doing would be impacted by those (given the rare nature of these events). For instance, my several-years-old scripts continue to function properly up to the present day (and they were mostly developed against PS 2-3).
I would say that this is overengineering, but I can't really blame them for it.
4c74356b41 makes some good points, but I wonder if there's a simpler approach.
Bear with me while I restate the situation, just to ensure I understand it.
My understanding of the issue is that usage of a certain cmdlet may be strewn about the code base of your automation framework.
One day, in a new release of PowerShell or of that module, the implementation changes; it could be internal only, or it could be the parameters (signature) or even the cmdlet name that changes.
The problem then is that you would have to change the implementation all throughout your code.
So with proxy functions you don't prevent this issue; a breaking change will still break your framework, but the idea is that fixing it will be simpler, because you can fix your own proxy function implementation in one place and then all of the code that calls it is fixed.
Other Options
Because of the way command discovery works in PowerShell, you can override existing commands by defining functions or aliases with the same name.
So for example let's say that Get-Service had a breaking change and you used it all over (no proxy functions).
Instead of changing all your code, you can define your own Get-Service function, and the code will use that instead. It's basically the same thing you're doing now, except you don't have to implement hundreds of "empty" proxy functions.
For better naming, you can name your function Get-FrameworkService (or something) and then just define an alias for Get-Service to Get-FrameworkService. It's a bit easier to test that way.
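A rough sketch of that idea (Get-FrameworkService is an invented name; the module-qualified call is one way to still reach the original cmdlet from inside the override):

function Get-FrameworkService {
    [CmdletBinding()]
    param([string[]] $Name)

    # Adapt to whatever changed here, then forward to the real cmdlet
    Microsoft.PowerShell.Management\Get-Service @PSBoundParameters
}

# Aliases win command resolution, so existing calls to Get-Service hit the override
Set-Alias -Name Get-Service -Value Get-FrameworkService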
One disadvantage of this is that reading the code could be unclear, because when you see Get-Service somewhere it's not immediately obvious that it could have been overridden, which makes it a bit less straightforward if you really want to call the original version.
For that, I recommend importing all of the modules you'll be using with -Prefix and then making all (potentially) overridable calls use the prefix, so there's a clear demarcation.
This even works with a lot of the "built-in" commands, so you could re-import the module with a prefix:
Import-Module Microsoft.PowerShell.Utility -Prefix Overridable -Force
TL;DR
So the short answer:
- avoid making lots and lots of pass-thru proxy functions
- import all modules with a prefix
- when needed, create a new function to override the functionality of another
- then add an alias from the prefixed name to the override function
Import-Module Microsoft.PowerShell.Utility -Prefix Overridable -Force
Compare-OverridableObject $a $b
No need for a proxy here; later when you want to override it:
function Compare-CanonicalObject { <# Stuff #> }
New-Alias Compare-OverridableObject Compare-CanonicalObject
Anywhere in the code that you see a direct call like:
Compare-Object $c $d
Then you know: either this intentionally calls the current implementation of that command (which in other places could be overridden), or this command should never be overridden.
Advantages:
- Clarity: looking at the code tells you whether an override could exist.
- Testability: writing tests is clearer and easier for overridden commands because they have their own unique name.
- Discoverability: all overridden commands can be discovered by searching for aliases with the right name pattern, i.e. Get-Alias *-Overridable*.
- Much less code.
- All overrides and their aliases can be packaged into modules.
So I am working on some IIS management scripts for a specific IIS Site setup exclusive to a product to complete tasks such as:
- Create Site
- Create App Pool
- Create Virtual directories
The problem is that I would like to keep separate scripts for each concern and reference them in a parent script. The parent script could be run to do a full deployment/setup, or you could run the individual scripts for a specific task. The catch is that they are interactive, so they will request from the user the information relevant to completing the task.
I am not sure how to approach the problem where each script has a script body that acquires information from the user, yet when it is loaded into the parent script that specific script's body should not prompt the user.
NOTE: I know I could put them into modules and fire off the individual exported functions, but this script is going to be moved around to whichever environment needs setting up, and having to manually put module (psm1) files into the proper PowerShell module folders just to run the scripts is a route I am not particularly fond of.
I am new to scripting with PowerShell; any thoughts or recommendations?
Possible answer
This might be a solution: I found I could Import-Module from the working directory and from there have access to those exported functions.
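For example (the module name here is invented), importing by path keeps everything relative to wherever the scripts get copied:

# Import by path instead of installing into $env:PSModulePath
Import-Module "$PSScriptRoot\IisSetup.psm1" -Force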
I am interested in any other suggestions as well.
The way I would address it is to implement a param block at the top of each sub script that collects the information it needs to run. If a sub script is run individually, the param block prompts the user for the data needed to run that individual script. This also allows the parent script to pass the data needed to run the sub scripts as it calls each one. The data needed can be hard-coded in the parent script, prompted for, or some mixture thereof. That way you can make the sub scripts run either silently or with user interaction, and you get the user interaction for free from PowerShell's parameter-handling mechanism. In the sub scripts, add a parameter attribute to indicate that PowerShell should request those particular parameter values from the user if they are not already provided by the calling script.
At the top of your sub scripts, use a parameter block to collect the needed data.
param
(
    [parameter(Mandatory=$true, HelpMessage="This is required, please enter a value.")]
    [string] $SomeParameter
)
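The parent script can then supply that value when it calls the sub script, so no prompt appears; running the sub script on its own still prompts for anything mandatory (the script name and value here are invented):

# From the parent script: runs silently because the mandatory parameter is supplied
& "$PSScriptRoot\create-site.ps1" -SomeParameter 'MySite'

# Run directly with nothing supplied: PowerShell prompts for SomeParameter
.\create-site.ps1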
You can have a deploy.ps1 script which dot-sources the individual scripts and then calls the necessary functions within them:
. $scriptDir\create-site.ps1
. $scriptDir\create-apppool.ps1
. $scriptDir\create-virtualdirectories.ps1
Prompt-Values
Create-Site -site test
Create-AppPool -pool test
Create-VirtualDirectories -vd test
In the individual functions, you can check whether the values needed were passed in by the caller (deploy.ps1 or the command line).
For example, create-site.ps1 would look like this:
function Create-Site($site){
    if(-not $site){
        Prompt-Values
    }
}
The ideal is to make the module take care of storing its own settings, depending on the distribution concerns of that module, and to provide commands to help work with those settings.
For example:
Write a Set-SiteInfo -Name -Pool -VirtualDirectory command and have it store the values in the registry or in the local directory of the module ($psScriptRoot); then have other commands in the module use this.
If the module is being put in a location where there's no file write access for low-rights users (i.e. a web site directory, or $psHome), then it's a notch better to store the values in the registry.
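A hedged sketch of the file-based variant (the function names follow the suggestion above; the settings file name and Export-Clixml are just one storage option):

$script:SettingsPath = Join-Path $PSScriptRoot 'SiteInfo.settings.clixml'

function Set-SiteInfo {
    param
    (
        [string] $Name,
        [string] $Pool,
        [string] $VirtualDirectory
    )

    # Persist the settings next to the module so other commands can pick them up later
    [pscustomobject]@{
        Name             = $Name
        Pool             = $Pool
        VirtualDirectory = $VirtualDirectory
    } | Export-Clixml -Path $script:SettingsPath
}

function Get-SiteInfo {
    Import-Clixml -Path $script:SettingsPath
}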
Hope this helps.