How would one go about returning an object from one PowerShell script to another? I am looking to automate some of my deployments using PowerShell so that we can have easier-to-repeat deployments with a minimum amount of human intervention.
The idea would be to have a "library" of scripts for the various processes that occur during a deployment, each taking a series of arguments, and then have a main deployment script that just calls each of those subscripts with arguments for the files being used. For example, for one deployment I might have to create a login on a SQL Server, add some functions or stored procedures to a database, deploy SSRS reports, update the shared data sources for SSRS to use an AD service account, etc.
I am able to cram everything into a single script with a bunch of functions, but for easier reusability I would like to take each basic task (run a SQL script, get a credential from Secret Server, run a folder of SQL scripts, deploy SSRS reports, etc.) and place it in its own script with parameters that can be called from my main script. This would allow me to have a main script that just calls each task script with parameters. In order to do this, though, for things like updating the AD credentials, I would need a way to return the PSCredential object that the function currently returns from a separate script instead.
You can explicitly return an object by using the return keyword:
return $myObject
Or you can return the object by writing it to the output stream, either explicitly with Write-Output or implicitly by having it bare on a line:
Write-Output $myObject
Or
$myObject
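For the deployment scenario in the question, this means a task script can simply emit the PSCredential and the main script captures it by assignment. Below is a minimal sketch; the script name, parameter, and account details are made up for illustration:

# Get-DeployCredential.ps1 (hypothetical task script)
param
(
    [Parameter(Mandatory=$true)]
    [string] $SecretName
)

# Build the credential however you normally would (Secret Server lookup, etc.)
$password   = ConvertTo-SecureString 'PlaceholderPassword' -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ('DOMAIN\svc-account', $password)

# Emitting (or returning) the object sends it back to the caller
return $credential

In the main deployment script the object comes back from the call itself, e.g. $cred = .\Get-DeployCredential.ps1 -SecretName 'SsrsServiceAccount'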
I've recently started using Pester to write tests in PowerShell and I've got no problem running basic tests, however I'm looking to build some more complex tests and I'm struggling on what to do with variables that I need for the tests.
I'm writing tests to validate some cloud infrastructure, so after we've run a deployment it goes through and validates that it has deployed correctly and everything is where it should be. Because of this there are a large number of variables needed, VM Names, network names, subnet configurations etc. that we want to validate.
In normal PowerShell scripts these would be stored outside the script and fed in as parameters, but this doesn't seem to fit with the design of Pester or BDD, should I be hard coding these variables inside my tests? This doesn't seem very intuitive, especially if I might want to re-use these tests for other environments. I did experiment with storing them in an external JSON file and reading that into my test, but even then I need to hardcode the path to the JSON file in my script. Or am I doing it all wrong and there is a better approach?
I don't know if I can speak to best practice for this sort of thing, but at the end of the day a Pester script is just a PowerShell script, so there's no harm in using ordinary PowerShell anywhere in and around your tests (although beware that some constructs have their own scopes).
I would probably use a param block at the top of the script and pass in variables via the -Script parameter of Invoke-Pester, per this suggestion: http://wahlnetwork.com/2016/07/28/using-the-script-param-to-pass-parameters-into-pester-tests/
At the end of the day, "best practice" for Pester testing (particularly for infrastructure validation) is very loosely defined, if not nonexistent.
As an example, I used a param block in my Active Directory test script that (in part) tests against a stored configuration file much as you described:
https://github.com/markwragg/Test-ActiveDirectory/blob/master/ActiveDirectory.tests.ps1
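A minimal sketch of that approach (the -Script hashtable form is Pester v3/v4 syntax; the file and parameter names here are just placeholders):

# infrastructure.tests.ps1
param
(
    [Parameter(Mandatory=$true)]
    [string] $VmName,

    [Parameter(Mandatory=$true)]
    [string] $SubnetName
)

Describe 'Deployment validation' {
    It "knows which VM to validate ($VmName)" {
        # Replace with a real check against your environment
        $VmName | Should -Not -BeNullOrEmpty
    }
}

The values then come from whatever calls Invoke-Pester, for example:

Invoke-Pester -Script @{
    Path       = '.\infrastructure.tests.ps1'
    Parameters = @{ VmName = 'vm01'; SubnetName = 'frontend' }
}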
You can also add a BeforeAll block to define the variables:
Describe "Check name" {
BeforeAll {
$name = "foo"
}
It "should have the correct value" {
$name | Should -Be "foo"
}
}
I have a PowerShell module that contains a number of common management and deployment functions. This is installed on all our client workstations. This module is called from a large number of scripts that get executed at login, via scheduled tasks or during deployments.
From within the module, it is possible to get the name of the calling script:
function Get-CallingScript {
    return ($script:MyInvocation.ScriptName)
}
However, from within the module, I have not found any way of accessing the parameters originally passed to the calling script. For my purposes, I'd prefer to access them in the form of a dictionary object, but even the original command line would do. Unfortunately, given my use case, accessing the parameters from within the script and passing them to the module is not an option.
Any ideas? Thank you.
From about_Scopes:
Sessions, modules, and nested prompts are self-contained environments, but they are not child scopes of the global scope in the session.
That being said, this worked for me from within a module:
$Global:MyInvocation.UnboundArguments
Note that I was calling my script with an unnamed parameter when the script was defined without parameters, so UnboundArguments makes sense. You might need this instead if you have defined parameters:
$Global:MyInvocation.BoundParameters
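As a rough sketch, a module function (the name below is made up) could surface both, alongside the calling script's name that Get-CallingScript already returns:

function Get-CallingScriptInvocation {
    # $Global:MyInvocation describes how the top-level script was invoked
    [pscustomobject]@{
        ScriptName       = $Global:MyInvocation.ScriptName
        BoundParameters  = $Global:MyInvocation.BoundParameters
        UnboundArguments = $Global:MyInvocation.UnboundArguments
    }
}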
I can see how this could be a security concern in general. For instance, if a credential was passed to a function further up the call stack, you would be able to access that credential.
The arguments passed to the current function can be accessed via $PSBoundParameters, but there isn't a mechanism to look at the parameters of functions further up the call stack.
We have a need to periodically run a build configuration that, among other things, recreates tokens/logins etc. We want to save these back to TeamCity as environment variables. Builds that we subsequently run will look at this environment variable store and do a string replace within our configurations as required.
I've taken a look at:
##teamcity[setParameter name='env.TEST' value='test']
But from reading the documentation, this is only used to pass variables between build steps within the same build. It doesn't actually save the variable back to TeamCity.
Is there any way (preferably from a PowerShell script) to call TeamCity and tell it to add an environment variable (or any other variable)?
In order to persist a value back to a parameter, you have to call the TeamCity REST API.
I use a PowerShell script that acts as a wrapper around the Invoke-RestMethod cmdlet in PowerShell 3+ and that can be reused in a build step to achieve what you want.
Step 1.
Save the script to a PowerShell file, e.g. rest-api-wrapper.ps1, and add it to your source control.
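The wrapper itself isn't shown here; the sketch below is one guess at what it might contain, with the parameter order chosen to match the arguments listed in Step 2 (URL, username, password, HTTP method, value):

# rest-api-wrapper.ps1 (assumed layout)
param
(
    [Parameter(Mandatory=$true)] [string] $Url,
    [Parameter(Mandatory=$true)] [string] $Username,
    [Parameter(Mandatory=$true)] [string] $Password,
    [Parameter(Mandatory=$true)] [string] $Method,
    [string] $Body
)

# Basic authentication header for the /httpAuth/ endpoints
$pair    = '{0}:{1}' -f $Username, $Password
$token   = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))
$headers = @{ Authorization = "Basic $token" }

# For a parameter update this is a PUT with the new value as the plain-text body
Invoke-RestMethod -Uri $Url -Method $Method -Headers $headers -ContentType 'text/plain' -Body $Body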
Step 2.
Create a PowerShell build step referencing the script and pass in the following arguments, tailored for your situation:
%teamcity.serverUrl%/httpAuth/app/rest/projects/project_id/parameters/parameter_name
"Username"
"Password"
"PUT"
"TheValueToSave"
More details can be found here - TeamCity Documentation
Hope this helps
I'm trying to introduce PowerShell workflow into some existing scripts to take advantage of the parallel running capability.
Currently in the workflow I'm having to use:
InlineScript {
    Import-Module My.Modules
    Execute-MyModulesCustomFunctionFromImportedModules -SomeVariable $Using:SomeVariableValue
}
Otherwise I get an error stating it can't find the custom function. Is there a better way to do this?
The article at http://www.powershellmagazine.com/2012/11/14/powershell-workflows/ confirms that having to import modules and then use them is just how it works - MS gets around this by creating WF activities for all its common PowerShell commands:
General workflow design strategy
It’s important to understand that the entire contents of the workflow get translated into WF’s own language, which only understands activities. With the exception of a few commands, Microsoft has provided WF activities that correspond to most of the core PowerShell cmdlets. That means most of PowerShell’s built-in commands—the ones available before any modules have been imported—work fine.
That isn’t the case with add-in modules, though. Further, because each workflow activity executes in a self-contained space, you can’t even use Import-Module by itself in a workflow. You’d basically import a module, but it would then go away by the time you tried to run any of the module’s commands.
The solution is to think of a workflow as a high-level task coordination mechanism. You’re likely to have a number of InlineScript{} blocks within a workflow because the contents of those blocks execute as a single unit, in a single PowerShell session. Within an InlineScript{}, you can import a module and then run its commands. Each InlineScript{} block that you include runs independently, so think of each one as a standalone script file of sorts: each should perform whatever setup tasks are necessary for it to run successfully.
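Putting that together with the question's example, one way to sketch it (the module and function names are taken from the question; the workflow name, parameter, and parallel loop are assumptions) is to do the import inside each InlineScript and let the workflow handle the parallelism:

workflow Invoke-MyDeployment {
    param([string[]] $SomeVariableValues)

    foreach -parallel ($value in $SomeVariableValues) {
        InlineScript {
            # Each InlineScript is its own PowerShell session, so import the module here
            Import-Module My.Modules
            Execute-MyModulesCustomFunctionFromImportedModules -SomeVariable $Using:value
        }
    }
}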
So I am working on some IIS management scripts for a specific IIS site setup, exclusive to one product, to complete tasks such as:
- Create Site
- Create App Pool
- Create Virtual directories
The problem is, I would like to keep separate scripts for each concern and reference them in a parent script. The parent script could be run to do a full deployment/setup, or you could run the individual scripts for a specific task. The problem is that they are interactive, so they will prompt the user for the information relevant to completing the task.
I am not sure how to approach the problem where each script has a script body that acquires information from the user, yet, if it is loaded into the parent script, that specific script's body should be prevented from prompting the user.
NOTE: I know I could put them into modules and fire off the individual "exported to the environment" functions, but this script is going to be moved around to the environment that needs setup, and having to manually put module (.psm1) files into the proper PowerShell module folders just to run the scripts is a route I am not particularly fond of.
I am new to scripting with PowerShell; any thoughts or recommendations?
Possible answer
This might be a solution: I found I could Import-Module from the working directory and from there have access to those exported functions.
I am interested in any other suggestions as well.
The way I would address it would be to implement a param block at the top of each sub script to collect the information it needs to run. If a sub script is run individually, the param block would prompt the user for the data needed to run that individual script. This also allows the parent script to pass the data needed to run the subscripts as it calls each one; the data can be hard coded in the parent script, prompted for, or some mixture thereof. That way the sub scripts can run either silently or with user interaction. You get the user interaction for free from PowerShell's parameter handling mechanism: in the subscripts, add a parameter attribute so that PowerShell will request those particular parameter values from the user if they are not already provided by the calling script.
At the top of your sub scripts, use a parameter block to collect the needed data.
param
(
    [Parameter(Mandatory=$true, HelpMessage="This is required, please enter a value.")]
    [string] $SomeParameter
)
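The parent script then supplies those values when it calls each sub script, which suppresses the prompts; run a sub script on its own and PowerShell asks for anything mandatory that's missing. A small sketch (the file names and single parameter are illustrative):

# Parent deployment script
param
(
    [string] $SiteName = 'MyProductSite'
)

# Passing the values here means the sub scripts run without prompting
& "$PSScriptRoot\Create-Site.ps1"               -SomeParameter $SiteName
& "$PSScriptRoot\Create-AppPool.ps1"            -SomeParameter $SiteName
& "$PSScriptRoot\Create-VirtualDirectories.ps1" -SomeParameter $SiteName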
You can have a deploy.ps1 script which dot sources the individual scripts and then calls the necessary functions within them:
. $scriptDir\create-site.ps1
. $scriptDir\create-apppool.ps1
. $scriptDir\create-virtualdirectories.ps1
Prompt-Values
Create-Site -site test
Create-AppPool -pool test
Create-VirtualDirectories -vd test
In the individual functions, you can check whether the needed values were passed in from the caller (deploy.ps1 or the command line).
For example, create-site.ps1 would look like:
function Create-Site($site) {
    if (-not $site) {
        Prompt-Values
    }
}
The ideal is to make the module take care of storing its own settings, depending on the distribution concerns of that module, and to provide commands to help work with those settings.
I.e.
Write a Set-SiteInfo -Name -Pool -VirtualDirectory and have that store the values in the registry or in the local directory of the module ($PSScriptRoot), then have other commands in the module use this.
If the module is being put in a location where there's no file write access for low-rights users (e.g. a web site directory, or $PSHOME), then it's a notch better to store the values in the registry.
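A rough file-based sketch of that idea (the function and property names follow the suggestion above; the JSON file name is an assumption):

$script:SettingsPath = Join-Path $PSScriptRoot 'SiteInfo.json'

function Set-SiteInfo {
    param
    (
        [Parameter(Mandatory=$true)] [string] $Name,
        [Parameter(Mandatory=$true)] [string] $Pool,
        [Parameter(Mandatory=$true)] [string] $VirtualDirectory
    )

    # Persist the settings next to the module so other commands can reuse them
    [pscustomobject]@{
        Name             = $Name
        Pool             = $Pool
        VirtualDirectory = $VirtualDirectory
    } | ConvertTo-Json | Set-Content -Path $script:SettingsPath
}

function Get-SiteInfo {
    # Other module commands call this to pick up the stored values
    if (Test-Path $script:SettingsPath) {
        Get-Content -Path $script:SettingsPath -Raw | ConvertFrom-Json
    }
}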
Hope this helps.