I have used PesterHelpers to construct a suite of tests for my module and have begun adding Functional tests. I run the min, norm, and full test scripts as needed to test my work. I find that I am using the same mocks over and over, copying them from script to script. Is it possible to create one global mock that all test scripts can use in both Public and Private directories?
You could probably put your mocks into another file and dot-source it into each test script:
. .\mocks.ps1
This would save some duplication across your scripts, though it also makes them a little more opaque.
I don't think Pester has any concept of declaring mocks more globally; as far as I know they are scoped to the Describe or Context block they are declared in.
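That said, here is a rough sketch of the dot-sourcing approach (names are hypothetical, assuming Pester v4 semantics): mocks.ps1 wraps the shared mocks in a helper function, and each test script dot-sources the file and calls the helper inside its Describe block, so the mocks are still registered in a scope Pester recognizes:

# mocks.ps1 -- shared mock definitions
function Set-CommonMocks {
    # Each Mock call registers in whichever Pester scope is active
    # when Set-CommonMocks is invoked
    Mock Get-Service { [pscustomobject]@{ Name = 'Spooler'; Status = 'Running' } }
    Mock Test-Path { $true }
}

# MyFunction.Tests.ps1
. "$PSScriptRoot\mocks.ps1"

Describe 'MyFunction' {
    Set-CommonMocks   # mocks now apply within this Describe
    It 'sees the mocked service' {
        (Get-Service -Name Spooler).Status | Should -Be 'Running'
    }
}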
(Resolve-Path ($PSScriptRoot + "\..\..\..\AOI\UDT\testfile.text"))
Use this; because the path is resolved relative to $PSScriptRoot (the directory containing the running script), you will be able to reference the file from any test script.
I have some PowerShell modules and scripts that look something like this:
MyClassModule.psm1
class MyClass {
    # ...
}
MyUtilityModule.psm1
using module MyClassModule
# Some code here
MyScript.ps1
using module MyUtilityModule
[MyClass]::new()
My questions are:
Why does this script fail if the class is imported as a dependency in MyUtilityModule?
Are imports always only local to the direct script they are imported into?
Is there any way to make them global for a using statement?
If it makes any difference, I need to maintain backwards compatibility with at least PowerShell 5.1.
This is all about scope. The user experience is directly influenced by how one loads a class. We have two ways to do so:
Import-Module
Is the command that allows loading the contents of a module into the session.
...
It must be called before the function you want to call that is located in that specific module, but not necessarily at the beginning/top of your script.
Using Module
...
The using statement must be located at the very top of your script; it must be the very first statement of the script (except for comments). This makes loading a module conditionally impossible.
Command type  | Callable anywhere in script | Internal functions | Public functions | Enums | Classes
Import-Module | Yes                         | No                 | Yes              | No    | No
using module  | No                          | No                 | Yes              | Yes   | Yes
See these references for further details:
How to write Powershell modules with classes
https://stephanevg.github.io/powershell/class/module/DATA-How-To-Write-powershell-Modules-with-classes
What is this Module Scope in PowerShell that you Speak of?
https://mikefrobbins.com/2017/06/08/what-is-this-module-scope-in-powershell-that-you-speak-of
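To make the scope difference concrete, here is a minimal sketch (the file layout is hypothetical); only the using module statement makes the class itself visible to the calling script:

# MyClassModule.psm1
class MyClass {
    [string] Hello() { return 'hello' }
}

# ConsumerA.ps1 -- Import-Module loads exported functions, but NOT the class
Import-Module .\MyClassModule.psm1
# [MyClass]::new()   # error: unable to find type [MyClass]

# ConsumerB.ps1 -- using module must be the first statement, and it loads the class
using module .\MyClassModule.psm1
[MyClass]::new().Hello()   # returns 'hello'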
In my new project team, they have written a proxy function for each PowerShell cmdlet. When I asked the reason for this practice, they said it is the normal way an automation framework is written. They also said that if a PowerShell cmdlet changes, we do not need to worry; we can just change the one function.
I have never seen PowerShell cmdlets' functionality or names change.
For example, the SQL PowerShell module was previously a snap-in and later became a module, but the cmdlets stayed the same: no cmdlet signatures changed, though extra arguments may have been added.
Because of these proxy functions, even small tasks take a long time. Is their fear baseless or justified? Has there ever been an incident where a PowerShell cmdlet's name or parameters changed?
I guess they want to be extra safe. PowerShell does have breaking changes here and there, but I doubt that what your team is doing would be impacted by those, given how rare such events are. For instance, my several-years-old scripts continue to function properly to the present day (and they were mostly developed against PS 2-3).
I would say that this is overengineering, but I can't really blame them for it.
4c74356b41 makes some good points, but I wonder if there's a simpler approach.
Bear with me while I restate the situation, just to ensure I understand it.
My understanding of the issue is that usage of a certain cmdlet may be strewn about the code base of your automation framework.
One day, in a new release of PowerShell or that module, the implementation changes; it could be internal only, or the parameters (signature), or even the cmdlet name that changes.
The problem then, is you would have to change the implementation all throughout your code.
So with proxy functions, you don't prevent this issue; a breaking change will break your framework, but the idea is that fixing it would be simpler because you can fix up your own proxy function implementation, in one place, and then all of the code will be fixed.
Other Options
Because of the way command discovery works in PowerShell, you can override existing commands by defining functions or aliases with the same name.
So for example let's say that Get-Service had a breaking change and you used it all over (no proxy functions).
Instead of changing all your code, you can define your own Get-Service function, and the code will use that instead. It's basically the same thing you're doing now, except you don't have to implement hundreds of "empty" proxy functions.
For better naming, you can name your function Get-FrameworkService (or something) and then just define an alias for Get-Service to Get-FrameworkService. It's a bit easier to test that way.
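A rough sketch of that combination (the parameter handling here is just an assumption):

function Get-FrameworkService {
    param([string]$Name)
    # adapt to whatever the breaking change was, then delegate;
    # the module-qualified name bypasses the Get-Service alias defined below
    Microsoft.PowerShell.Management\Get-Service -Name $Name
}
Set-Alias -Name Get-Service -Value Get-FrameworkService

Since aliases take precedence in command discovery, existing calls to Get-Service now resolve to the override without any code changes.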
One disadvantage with this is that reading the code could be unclear, because when you see Get-Service somewhere it's not immediately obvious that it could have been overwritten, which makes it a bit less straightforward if you really wanted to call the current original version.
For that, I recommend importing all of the modules you'll be using with -Prefix and then making all (potentially) overridable calls use the prefix, so there's a clear demarcation.
This even works with a lot of the "built-in" commands, so you could re-import the module with a prefix:
Import-Module Microsoft.PowerShell.Utility -Prefix Overridable -Force
TL;DR
So the short answer:
avoid making lots and lots of pass-thru proxy functions
import all modules with prefix
when needed create a new function to override functionality of another
then add an alias for prefixed_name -> override_function
Import-Module Microsoft.PowerShell.Utility -Prefix Overridable -Force
Compare-OverridableObject $a $b
No need for a proxy here; later when you want to override it:
function Compare-CanonicalObject { <# Stuff #> }
New-Alias Compare-OverridableObject Compare-CanonicalObject
Anywhere in the code that you see a direct call like:
Compare-Object $c $d
Then you know: either this intentionally calls the current implementation of that command (which in other places could be overridden), or this command should never be overridden.
Advantages:
Clarity: looking at the code tells you whether an override could exist.
Testability: writing tests is clearer and easier for overridden commands because they have their own unique name
Discoverability: all overridden commands can be discovered by searching for aliases with the right name pattern i.e. Get-Alias *-Overridable*
Much less code
All overrides and their aliases can be packaged into modules
I've recently started using Pester to write tests in PowerShell, and I've got no problem running basic tests. However, I'm looking to build some more complex tests, and I'm struggling with what to do about the variables the tests need.
I'm writing tests to validate some cloud infrastructure, so after we've run a deployment it goes through and validates that everything deployed correctly and is where it should be. Because of this there are a large number of variables needed (VM names, network names, subnet configurations, etc.) that we want to validate.
In normal PowerShell scripts these would be stored outside the script and fed in as parameters, but that doesn't seem to fit with the design of Pester or BDD. Should I be hard-coding these variables inside my tests? That doesn't seem very intuitive, especially if I might want to reuse these tests for other environments. I did experiment with storing them in an external JSON file and reading that into my test, but even then I need to hardcode the path to the JSON file in my script. Or am I doing it all wrong and there is a better approach?
I don't know if I can speak to best practice for this sort of thing, but at the end of the day a Pester script is just a PowerShell script, so there's no harm in using regular PowerShell anywhere in and around your tests (although beware that some constructs have their own scopes).
I would probably use a param block at the top of the script and pass in variables via the -Script parameter of Invoke-Pester, per this suggestion: http://wahlnetwork.com/2016/07/28/using-the-script-param-to-pass-parameters-into-pester-tests/
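A minimal sketch of that approach (file and parameter names are hypothetical), using the hashtable form of -Script from Pester v4:

# Infrastructure.Tests.ps1
param (
    [string]$VMName,
    [string]$NetworkName
)

Describe 'Deployment validation' {
    It "deployed VM $VMName" {
        # ... validation logic using $VMName, $NetworkName ...
    }
}

# Caller: pass the environment-specific values in at run time
Invoke-Pester -Script @{
    Path       = '.\Infrastructure.Tests.ps1'
    Parameters = @{ VMName = 'vm01'; NetworkName = 'prod-net' }
}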
At the end of the day, "best practice" for Pester testing (particularly for infrastructure validation) is very loosely defined, if not nonexistent.
As an example, I used a param block in my Active Directory test script that (in part) tests against a stored configuration file much as you described:
https://github.com/markwragg/Test-ActiveDirectory/blob/master/ActiveDirectory.tests.ps1
As described here, you can add a BeforeAll block to define the variables:
Describe "Check name" {
BeforeAll {
$name = "foo"
}
It "should have the correct value" {
$name | Should -Be "foo"
}
}
Background: I am looking for best practices for building a PowerShell framework of my own. As a former .NET programmer I like to keep source files small and organize code into classes and libraries.
Question: I am totally confused by using module, Import-Module, dot-sourcing, and Add-PSSnapin. Sometimes it works; sometimes it does not. Some includes work when running from ISE/VS2015 but fail when running via cmd powershell -command "& './myscript.ps1'". I want to include/import classes and functions. I would also like to use type and namespace aliases. Using them with includes produces even weirder results, but sometimes somehow they work.
Edit: let me be more specific:
Local project case (all files in one dir): main.ps1, common_autorun.ps1, _common.psm1, _specific.psm1.
How do I include these modules in the main script using relative paths?
_specific.psm1 also relies on _common.psm1.
There are ScriptBlocks passed between modules that may contain calls to classes defined in the parent context.
common_autorun.ps1 contains solely type accelerators and namespace imports as described here.
Modules contain mainly classes with static methods, as I am not yet used to the PowerShell style of programming where functions do not have predictable return values.
As I understand it, my problems are related to context and scope. Unfortunately these PowerShell concepts are not well documented for v5 classes.
Edit2: Simplified sample:
_common.psm1 contains watch
_specific.psm1 contains getDiskSpaceInfoUNC
main.ps1 contains just:
watch 'getDiskSpaceInfoUNC "\\localhost\d$"'
What includes/imports should I put into these files for this code to work both in ISE and via powershell.exe -command "& './main.ps1'"?
Of course, this works perfectly when both functions are defined in main.ps1.
I've been having this problem for a while now, and Google has its limits.
I'm writing a PowerShell file that contains several generic functions.
I use the functions in various scripts, and now I want to let other personnel at my work use them as well.
The problem is that, due to sensitive operations, I want to lock and protect the script (compile it to a DLL, EXE, etc.).
How do I create a PowerShell library like a C# DLL?
One option I tried, but could not figure out how to take further, was compiling the script to an executable (.exe) using PowerGUI, but then I cannot access the functions inside it, let alone pass parameters to them.
Hope you understood me :)
Thank you.
You don't. Rather than trying to obscure this information (if you compile the scripts, they can be decompiled, and your "protected" resources will no longer be protected), remove the sensitive values entirely and make them parameters of your functions. This both protects your "sensitive" data and makes the code much more reusable.
You can then package your functions into a module.
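A minimal sketch of both points together (all names are hypothetical): the sensitive values become parameters, and the functions live in a .psm1 file that colleagues can import:

# MyTools.psm1
function Invoke-SensitiveOperation {
    param (
        [Parameter(Mandatory)]
        [string]$ServerName,
        [Parameter(Mandatory)]
        [pscredential]$Credential   # secrets are supplied by the caller, never stored in the script
    )
    # ... implementation ...
}

Export-ModuleMember -Function Invoke-SensitiveOperation

# Consumers:
Import-Module .\MyTools.psm1
Invoke-SensitiveOperation -ServerName 'app01' -Credential (Get-Credential)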