How to give PowerShell WorkFlow access to previously imported modules

I'm trying to introduce PowerShell workflow into some existing scripts to take advantage of the parallel running capability.
Currently in the workflow I'm having to use:
InlineScript
{
    Import-Module My.Modules
    Execute-MyModulesCustomFunctionFromImportedModules -SomeVariable $Using:SomeVariableValue
}
Otherwise I get an error stating it can't find the custom function. Surely there must be a better way to do this?

The article at http://www.powershellmagazine.com/2012/11/14/powershell-workflows/ confirms that having to import modules inside the workflow before using them is just how it works; Microsoft gets around this by providing WF activities for most of the core PowerShell cmdlets:
General workflow design strategy
It’s important to understand that the entire contents of the workflow
get translated into WF’s own language, which only understands
activities. With the exception of a few commands, Microsoft has
provided WF activities that correspond to most of the core PowerShell
cmdlets. That means most of PowerShell’s built-in commands—the ones
available before any modules have been imported—work fine.
That isn’t the case with add-in modules, though. Further, because each
workflow activity executes in a self-contained space, you can’t even
use Import-Module by itself in a workflow. You’d basically import a
module, but it would then go away by the time you tried to run any of
the module’s commands.
The solution is to think of a workflow as a high-level task
coordination mechanism. You’re likely to have a number of
InlineScript{} blocks within a workflow because the contents of those
blocks execute as a single unit, in a single PowerShell session.
Within an InlineScript{}, you can import a module and then run its
commands. Each InlineScript{} block that you include runs
independently, so think of each one as a standalone script file of
sorts: Each should perform whatever setup tasks are necessary for it
to run successfully.
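In practice, then, the pattern from the question is the intended one. A minimal sketch of what it might look like with parallel execution (the workflow name and $Servers parameter are invented for illustration; My.Modules and the custom function are the ones from the question):

workflow Invoke-MyParallelWork
{
    param([string[]]$Servers)

    foreach -parallel ($server in $Servers)
    {
        # Each InlineScript runs in its own PowerShell session, so the module
        # has to be imported inside the block before its commands are used.
        InlineScript
        {
            Import-Module My.Modules
            Execute-MyModulesCustomFunctionFromImportedModules -SomeVariable $Using:server
        }
    }
}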

Related

Can file time stamps be used to define dependencies in Psake PowerShell makefiles?

From what I have seen, Psake's domain-specific PowerShell scripts do not evaluate whether dependent objects really need to be built; instead, the dependencies are always evaluated in order.
Is there a way to implement dependencies so that the script to build a make target, such as a file, is only executed if any of the dependent files are newer than the target file?
I experimented with precondition and postcondition with limited success, but this seems like a standard requirement and is in every UNIX-style "make" I've used in the past. It feels like I am missing something obvious. Help!
As far as I know, Psake does not have such tools. The similar PowerShell build tool Invoke-Build does. You may try it if "incremental" tasks are important for your build scripts. See its wiki pages
Incremental Tasks
Partial Incremental Tasks
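For reference, a rough sketch of what an incremental task looks like in Invoke-Build (the task, folder, and file names below are invented; see the wiki pages above for the exact semantics). The task body only runs when any input file is newer than the output file:

# Invoke-Build incremental task: rebuild combined.txt only when a part changes.
task Combine -Inputs { Get-ChildItem .\parts\*.txt } -Outputs .\out\combined.txt {
    New-Item -ItemType Directory -Force .\out | Out-Null
    Get-Content .\parts\*.txt | Set-Content .\out\combined.txt
}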

CruiseControl.Net: Run NUnit task with parameters

My NUnit tests fail unless the NUnit runner is launched with the /noshadow parameter.
But in CC.net, it seems to be impossible to supply this parameter in the <nunit> block.
I know I always can fall back to generic <exec> block, but is there really no way to configure the <nunit> block?
I would surmise that if this switch/flag isn't documented, then it isn't available in the <nunit> block that you mention.
The thing to keep in mind with these custom tasks is that they are usually just friendly wrappers for what eventually becomes a command-line call.
The task author is just making things simpler for you. They take on the onus of creating the correct command line and pass that to the original .exe.
Now, it looks like somebody did address the command line of your interest here:
https://github.com/loresoft/msbuildtasks/blob/master/Source/MSBuild.Community.Tasks/NUnit.cs
Note the code:
if (DisableShadowCopy)
{
    builder.AppendSwitch(c+"noshadow");
}
So I would see if you can get this task working.
In fact, I barely use any of the built-in CC.NET tasks except for source-code download, starting up msbuild.exe, and then the publishing. I leave the hard stuff to msbuild.
That is, I pull source code, which includes a MyBuild.proj file.
Then I have cc.net execute "msbuild.exe MyBuild.proj".
Then I have cc.net do some of the publishing.
Why?
If most of my logic is in an msbuild .proj file, then if I ever switch to another CI tool, the transition is much less traumatic. In fact, I recently learned that an old job of mine went to TFS, and because I wrote most of the build logic in msbuild (and not a lot of cc.net tasks), the transition to TFS was fairly painless. If I had used cc.net tasks instead, every single one of them would have had to be translated to a corresponding TFS task... :<
Anyways, back to your question. Keep in mind that the task author is usually just writing up a nice way to wire things up and handling the command-line arguments/syntax sugar for you. So they sometimes miss a flag, or a flag gets added later and the original task is not updated.
So you'll either need to modify the source code yourself :< or pick a library that keeps more up to date.
Good luck.

How to Edit/Update Nant script

I need to update a NAnt script automatically by fetching some data from a database. The solution I can think of is a service which fetches the data from the DB and updates the NAnt script.
Can this be done? If yes, how?
In theory, if you need to change how the script works then you could create a program to generate the NAnt build file, run it with the exec task, include that file and then call a target.
That seems a bit over-complicated though. I suppose it depends on how much the script will change based on the data.
If the data is simply configuration, then you can use the data to set properties in your build script (either by the same mechanism above, or by creating a custom task to create a property value based on the result of a SQL statement). Then use those properties to determine control flow in the build script using standard things like if statements and foreach loops.
I don't think that there's anything built-in that will do this for you, but custom tasks are very easy to create if you can program.
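For instance, the "generate part of the build file from the database" approach could look roughly like this PowerShell sketch (the connection string, table, and file names are all invented for illustration):

# Rough sketch: read name/value pairs from a (made-up) table and emit them as
# NAnt properties in an include file that the main build script can <include>.
$connection = New-Object System.Data.SqlClient.SqlConnection 'Server=.;Database=BuildConfig;Integrated Security=True'
$connection.Open()
$properties = @()
try {
    $command = $connection.CreateCommand()
    $command.CommandText = 'SELECT Name, Value FROM BuildProperties'
    $reader = $command.ExecuteReader()
    while ($reader.Read()) {
        $properties += '  <property name="{0}" value="{1}" />' -f $reader['Name'], $reader['Value']
    }
}
finally {
    $connection.Close()
}
@"
<project name="generated-properties">
$($properties -join "`r`n")
</project>
"@ | Set-Content .\generated.properties.build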
If you update/edit a NAnt script, it does not change the current execution. Instead, you can generate .build files and execute them via the <nant> task, for example using a <foreach> loop or a <style> XSL transformation. An alternative would be to write a small <script>, in particular if you can program it comfortably in C#. If you want more specific answers, more information would be helpful (which database is used, and what tools you can use to extract the data).

How can I have a parent script but maintain separation of concerns with Powershell Scripts?

So I am working on some IIS management scripts for a specific IIS Site setup exclusive to a product to complete tasks such as:
- Create Site
- Create App Pool
- Create Virtual directories
The problem is that I would like to keep separate scripts for each concern and reference them in a parent script. The parent script could be run to do a full deployment/setup, or you could run the individual scripts for a specific task. The problem is that they are interactive, so they will prompt the user for the information relevant to completing the task.
I am not sure how to approach the problem where each script has a script body that acquires information from the user, yet, when it is loaded into the parent script, avoids having that specific script's body prompt the user.
NOTE: I know I could put them into modules and fire off the individual exported functions, but this script is going to be moved around to the environment that needs setup, and having to manually put module (.psm1) files into the proper PowerShell module folders just to run the scripts is a route I am not particularly fond of.
I am new to scripting with PowerShell; any thoughts or recommendations?
Possible answer:
This might be a solution: I found I could Import-Module from the working directory and from there have access to those exported functions.
I am interested in any other suggestions as well.
The way I would address it is to implement a param block at the top of each sub script that collects the information it needs to run. If a sub script is run individually, the param block prompts the user for the data needed to run that individual script. It also allows the parent script to pass the needed data to the sub scripts as it calls them; that data can be hard-coded in the parent script, prompted for, or some mixture thereof. That way the sub scripts can run either silently or with user interaction, and you get the user interaction for free from PowerShell's parameter-handling mechanism: in the sub scripts, add a parameter attribute to indicate that PowerShell should request those particular parameter values from the user if they are not already provided by the calling script.
At the top of your sub scripts, use a parameter block to collect the needed data:
param
(
    [parameter(Mandatory=$true, HelpMessage="This is required, please enter a value.")]
    [string] $SomeParameter
)
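A parent script can then pass the value so the sub script never prompts, while running the sub script on its own still prompts (the script name below is just an example):

# Called from the parent script: the value is supplied, so no prompt appears.
& "$PSScriptRoot\Create-Site.ps1" -SomeParameter 'MyProductSite'

# Run on its own without the value: PowerShell prompts for $SomeParameter.
& "$PSScriptRoot\Create-Site.ps1"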
You can have a deploy.ps1 script which dot sources the individual scripts and then calls the necessary functions within them:
. $scriptDir\create-site.ps1
. $scriptDir\create-apppool.ps1
. $scriptDir\create-virtualdirectories.ps1
Prompt-Values
Create-Site -site test
Create-AppPool -pool test
Create-VirtualDirectories -vd test
In the individual functions, you can check whether the needed values were passed in from the caller (deploy.ps1 or the command line).
For example, create-site.ps1 would look like:
function Create-Site($site){
    if(-not $site){
        Prompt-Values
    }
}
The ideal is to make the module take care of storing its own settings, depending on the distribution concerns of that module, and to provide commands to help work with those settings.
I.e.
Write a Set-SiteInfo -Name -Pool -VirtualDirectory command and have that store the values in the registry or in the local directory of the module ($PSScriptRoot), then have the other commands in the module use this.
If the module is being put in a location where there's no file write access for low-rights users (e.g. a web site directory, or $PSHome), then it's a notch better to store the values in the registry.
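A rough sketch of that idea (the settings file name, property names, and the Get-SiteInfo helper are invented; adjust to your module):

function Set-SiteInfo
{
    param([string]$Name, [string]$Pool, [string]$VirtualDirectory)

    # Store the settings next to the module; switch to the registry if this
    # folder is not writable for low-rights users.
    [pscustomobject]@{
        Name             = $Name
        Pool             = $Pool
        VirtualDirectory = $VirtualDirectory
    } | Export-Clixml -Path (Join-Path $PSScriptRoot 'SiteInfo.settings.xml')
}

function Get-SiteInfo
{
    Import-Clixml -Path (Join-Path $PSScriptRoot 'SiteInfo.settings.xml')
}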
Hope this helps.

Powershell in SQLCLR?

In the past I've been able to embed a scripting language (like JScript) inside the SQLCLR, so scripts can be passed as parameters of functions to perform certain calculations. Here is a simplistic example (the function ssScriptExecute returns a concatenation of all the print calls in the script):
select dbo.ssScriptExecute( 'print("Calculation: "+(1+2/3) );' )
-- Calculation: 1.6666666666666665
I'd love to be able to embed a PowerShell runtime in the same way. But I've had all sorts of problems because the runtime tries to find assemblies by path, and there are no paths inside the SQLCLR. I'm happy to provide more information on the errors I get, but I was wondering if anybody has tried this!
Thanks!
I used IL code injection to modify System.Management.Automation so that the version reported by GetPSVersionTable() is "2.0"; then I can run PowerShell code in SQL Server.
Be sure to reference this modified DLL in your Visual Studio project.
http://www.box.net/shared/57122v6erv9ss3aopq7p
By the way, to automate registering all the DLLs you need for running PowerShell in SQL Server, you can use this ps1 code:
http://www.box.net/shared/tdlpu1875clsu8azxq4b
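Such a registration script would look roughly like the sketch below (this is not the script from the link; the database name and DLL path are placeholders):

# Register an assembly in SQL Server so SQLCLR code can reference it.
# For UNSAFE assemblies the database also needs TRUSTWORTHY ON (or a signed assembly).
$createAssembly = @"
CREATE ASSEMBLY [System.Management.Automation]
FROM 'C:\Path\To\System.Management.Automation.dll'
WITH PERMISSION_SET = UNSAFE;
"@
Invoke-Sqlcmd -ServerInstance '.' -Database 'MyClrDatabase' -Query $createAssembly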
I think the only way to do this is to create a WCF service hosting powershell, and let SQLCLR send the request dbo.ssScriptExecute(...) to that service for execution.
Besides from that, I've also successfully embedded paxScript.net in the SQLCLR (an interpreter that does not have the memory leak problems of the DLR languages).
I thought SQLCLR was restricted to just a certain set of assemblies and PS Automation is not one of them.