I just finished the initial testing phase of automating our product release to Azure Virtual Machines using DSC, particularly with the commands described in this article, which are part of the Azure PowerShell SDK.
I can push a DSC configuration fine using PowerShell, but since this process is automated, I wanted feedback on how the configuration progressed. When I call Update-AzureVM, I get an OK, but the DSC configuration runs after that, asynchronously, and I have no idea how it is going unless I log into the machine (or look at the updated Azure Portal, which now shows this).
I'd like to fail my automated process if the configuration fails. How can I check the status of the configuration from my script and gracefully detect success or failure?
We have added a new cmdlet, Get-AzureVMDscExtensionStatus, to obtain the status of the running DSC configuration.
This article explains it: http://blogs.msdn.com/b/powershell/archive/2015/02/27/introducing-get-azurevmdscextensionstatus-cmdlet-for-azure-powershell-dsc-extension.aspx
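A call looks something like this (the service and VM names are placeholders; parameters are as described in the linked post):
# Placeholder names; returns the DSC extension's current status object for the VM
Get-AzureVMDscExtensionStatus -ServiceName 'MyCloudService' -Name 'MyVM'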
There are a few ways to do this. You can call the REST-based API as I described in a recent post here.
You could also use Get-AzureVM to drill into the value (just like parsing the REST response) like so:
((Get-AzureVM -ServiceName "" -Name "").ResourceExtensionStatusList | Where-Object { $_.HandlerName -eq 'Microsoft.PowerShell.DSC' }).ExtensionSettingStatus.Status
Based on @David's suggestion, I ended up creating a polling function to detect the status changes and report back to my main script.
First, I needed to find out what the terminating status codes were (I need to end my loop as soon as I detect a successful DSC operation or any error happens).
I drilled down in the files used by the DSC Extension in the VM to find the possible status codes and based my condition on that. The status codes can be found at C:\Packages\Plugins\Microsoft.Powershell.DSC\1.4.0.0\bin\DscExtensionStatus.psm1 in any virtual machine with the DSC Extension installed. Here are the status codes as of version 1.4.0.0 of the DSC Extension:
$DSC_Status = @{
Initializing = @{
Code = 1
Message = "Initializing DSC extension."
}
Completed = @{
Code = 2
Message = "DSC configuration was applied successfully."
}
Enabled = @{
Code = 3
Message = "PowerShell DSC has been enabled."
}
RebootingInstall = @{
Code = 4
Message = "Rebooting VM to complete installation."
}
RebootingDsc = @{
Code = 5
Message = "Rebooting VM to apply DSC configuration."
}
Applying = @{
Code = 6
Message = "Applying DSC configuration to VM."
}
#
# Errors
#
GenericError = 100; # The message for this error is provided by the specific exception
InstallError = @{
Code = 101
Message = "The DSC Extension was not installed correctly, please check the logs on the VM."
}
WtrInstallError = @{
Code = 102
Message = "WTR was not installed correctly, please check the logs on the VM."
}
}
The logic in the function is somewhat convoluted because the status is persistent, i.e. it reflects the whole extension rather than a single DSC operation. Because of that, I capture the current status first and then watch for changes, using the timestamp field to detect a new status. Here is the code:
function Wait-AzureDSCExtensionJob
{
[CmdletBinding()]
Param(
[Parameter(Mandatory)]
[string] $ServiceName,
[int] $RefreshIntervalSeconds = 15
)
Begin
{
$statusFormat = `
@{Label = 'Timestamp'; Expression = {$_.TimestampUtc}},
@{Label = 'Status'; Expression = {"$($_.Code) - $($_.Status)"}}, `
@{Label = 'Message'; Expression = {$_.FormattedMessage.Message}}
Write-Verbose 'Getting starting point status...'
$previousStatus = Get-AzureDscStatus -ServiceName:$ServiceName
Write-Verbose "Status obtained: $($previousStatus | Format-List $statusFormat | Out-String)"
Write-Verbose 'This status will be used as the starting point for discovering new updates.'
}
Process
{
do
{
Write-Verbose "Waiting for the next check cycle. $RefreshIntervalSeconds seconds left..."
Start-Sleep -Seconds:$RefreshIntervalSeconds
$currentStatus = Get-AzureDscStatus -ServiceName:$ServiceName
if ($previousStatus.TimestampUtc -eq $currentStatus.TimestampUtc)
{
Write-Verbose 'Status has not changed since the last check.'
$statusUpdated = $false
}
else
{
Write-Verbose "Status updated: $($currentStatus | Format-List $statusFormat | Out-String)"
$previousStatus = $currentStatus
$statusUpdated = $true
}
# Script with default message codes for the DSC Extension:
# On Target VM: "C:\Packages\Plugins\Microsoft.Powershell.DSC\1.4.0.0\bin\DscExtensionStatus.psm1"
} until ($statusUpdated -and (($currentStatus.Code -eq 2) -or ($currentStatus.Code -ge 100)))
}
End
{
switch ($currentStatus.Code)
{
2 {Write-Verbose 'Configuration finished successfully.'; break}
default {throw "Configuration failed: $($currentStatus.Status)"}
}
}
}
function Get-AzureDscStatus
{
[CmdletBinding()]
Param(
[Parameter(Mandatory)]
[string] $ServiceName
)
Begin
{
$vm = Get-AzureVM -ServiceName:$ServiceName
$dscExtensionStatus = $vm.ResourceExtensionStatusList | where { $_.HandlerName -eq 'Microsoft.PowerShell.DSC' }
if (-not $dscExtensionStatus)
{
throw 'Could not find the PowerShell DSC Extension on the VM'
}
$dscExtensionStatus.ExtensionSettingStatus
}
}
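With both functions loaded, usage from a deployment script would look something like this (the service name is a placeholder):
# Placeholder service name; throws if the extension ends in an error state
Wait-AzureDSCExtensionJob -ServiceName 'MyCloudService' -Verbose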
I'm not very proficient in PowerShell yet, so this could probably look a bit better and be easier to read. Still, I hope it can be of use to someone in the same situation as me.
UPDATE 28/11/2014:
Microsoft has updated the DSC Extension to version 1.5.0.0 and my function broke, how nice of them. I mean... it's not as if changing the response codes is a breaking change or anything like that ;)
Here are the new status codes:
$DSC_Status = @{
Success = @{
Code = 1
Message = 'DSC configuration was applied successfully.'
}
Initializing = @{
Code = 2
Message = 'Initializing DSC extension.'
}
Enabled = @{
Code = 3
Message = 'PowerShell DSC has been enabled.'
}
RebootingInstall = @{
Code = 4
Message = 'Rebooting VM to complete installation.'
}
RebootingDsc = @{
Code = 5
Message = 'Rebooting VM to apply DSC configuration.'
}
Applying = @{
Code = 6
Message = 'Applying DSC configuration to VM.'
}
#
# Errors
#
GenericError = 1000 # The message for this error is provided by the specific exception
InstallError = @{
Code = 1001
Message = 'The DSC Extension was not installed correctly, please check the logs on the VM.'
}
WtrInstallError = @{
Code = 1002
Message = 'WTR was not installed correctly, please check the logs on the VM.'
}
OsVersionNotSupported = @{
Code = 1003
Message = 'The current OS version is not supported. The DSC Extension requires Windows Server 2012 or 2012 R2, or Windows 8.1.'
}
}
For some reason they swapped the codes and now 1 is success, while errors went up from 100 to 1000 (they surely are expecting a lot of errors on this one).
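If you are on 1.5.0.0, the loop's exit test and the final switch need to change accordingly; roughly like this (an untested sketch based on the new codes):
} until ($statusUpdated -and (($currentStatus.Code -eq 1) -or ($currentStatus.Code -ge 1000)))
switch ($currentStatus.Code)
{
1 {Write-Verbose 'Configuration finished successfully.'; break}
default {throw "Configuration failed: $($currentStatus.Status)"}
}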
I'm working on a script that will run from an AzDo pipeline to disable F5 web servers. The script below works fine in VS Code and disables the server as expected, but when run from a terminal or PowerShell window it fails with the error below. Can someone please help?
$ServerInput = 'server1.abc.com'
$BIGIPBaseURL = "https://ser-f5-1.prod.abc.com"
$usr = "nilesh"
$SecurePassword='P#assword'
Write-Host "Starting the Script..."
# Initialize variables
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$BIGIPToken = $null
Write-Host -ForegroundColor Green " done!"
$DisableWebServers = $true
# Initialize functions
Write-Host "Initializing functions..." -NoNewline
$PSVersionTable
function Disable-BIGIPNode([string]$NodeName) {
# Telco servers should use the Disable-BIGIPTelcoNode() function
Write-Host "In the Disable function"
if ($NodeName -match "(?i).*telco.*") {
Write-Host -ForegroundColor Yellow "WARNING: `"$($NodeName.ToUpper().Split('.')[0])`" is in the wrong list. telcoo hosts should be added to the TelcoServers list in your input file."
BREAK
}
else {
if ($BIGIPToken -eq $null) {
Write-Host "Now will enter the Open-Session"
Open-BIGIPSession
}
Write-Host "Disabling node `"$($NodeName.ToUpper().Split('.')[0])`" in BIG-IP..." -NoNewline
$WebRequestInput = @{
body = @{
"session" = "user-disabled"
} | ConvertTo-Json
uri = $($BIGIPBaseURL) + "/mgmt/tm/ltm/node/~Common~" + $NodeName.ToLower()
headers = @{
"Content-Type" = "application/json"
"X-F5-Auth-Token" = "$BIGIPToken"
}
method = "PATCH"
}
Write-Host $WebRequestInput
Write-Host $WebRequestInput.body
try {
Write-Host "In the final try block"
$Request = Invoke-WebRequest @WebRequestInput -UseBasicParsing -SkipCertificateCheck
}
catch {
Write-Host -ForegroundColor Red " failed!"
Write-Host -ForegroundColor Red ($_.ErrorDetails | ConvertFrom-Json).Message
}
Write-Host -ForegroundColor Green " done!"
$global:ZabbixRequestID++
}
}
function Open-BIGIPSession() {
Write-Host "Authenticating with BIG-IP API..." -NoNewline
$WebRequestInput = @{
body = @{
username = "$usr"
password = "$SecurePassword"
loginProviderName = "tmos"
} | ConvertTo-Json
uri = $ScriptInput.BIGIPBaseURL + "/mgmt/shared/authn/login"
headers = @{
"Content-Type" = "application/json"
}
method = "POST"
}
try {
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$Request = Invoke-WebRequest @WebRequestInput -UseBasicParsing -SkipCertificateCheck
}
catch {
Write-Host -ForegroundColor Red " failed!"
Write-Host -ForegroundColor Red ($_.ErrorDetails | ConvertFrom-Json).Message
EXIT 1
}
Write-Host -ForegroundColor Green " done!"
$global:BIGIPToken = ($Request.Content | ConvertFrom-Json).token.token
}
if ($DisableWebServers) {
Write-Host "Starting the main Methord "
foreach ($Server in $($ServerInput)) {
Disable-BIGIPNode -NodeName $Server
}
}
The PowerShell version is PSVersion 7.2.2
Disabling node "SAC-DEV-WEB2" in BIG-IP...System.Collections.DictionaryEntry System.Collections.DictionaryEntry System.Collections.DictionaryEntry System.Collections.DictionaryEntry
{
"session": "user-disabled"
}
In the final try block
failed!
ConvertFrom-Json: C:\Temp\Testing.ps1:49:64
Line |
49 | … Host -ForegroundColor Red ($_.ErrorDetails | ConvertFrom-Json).Messag …
| ~~~~~~~~~~~~~~~~
| Conversion from JSON failed with error: Additional text encountered after finished reading JSON content: U. Path '', line
| 3, position 4.
It works fine when run from VS Code but fails when invoked by file name from the same terminal, like .\Testing.ps1
Please help.
Your incidental problem is that the true error message is being obscured by a follow-up error that results from attempting to parse the error record's .ErrorDetails property as JSON, which it isn't. (You report that examining the true error reveals a 401 authentication error).
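As an aside, a more defensive catch block avoids masking the real error with a JSON parsing failure; for example (a sketch only, not specific to your F5 environment; the ?? operator requires PowerShell 7):
catch {
    Write-Host -ForegroundColor Red " failed!"
    # Show the raw error details; only attempt JSON parsing if the payload looks like JSON
    $details = $_.ErrorDetails.Message
    if ($details -and $details.TrimStart().StartsWith('{')) {
        Write-Host -ForegroundColor Red (ConvertFrom-Json $details).Message
    }
    else {
        Write-Host -ForegroundColor Red ($details ?? $_.Exception.Message)
    }
}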
I have no specific explanation for the difference in behavior you're seeing between running in Visual Studio Code vs. in a regular PowerShell console, but I have a guess:
Your Visual Studio Code session in the so-called PowerShell Integrated Console may have lingering state from earlier debugging runs, which may mask a bug in your script.
Restarting Visual Studio Code should clarify whether that is the case, but there's also a way to configure the PowerShell extension so that the problem doesn't arise to begin with - see below.
By default, code you run (debug) via the Visual Studio Code PowerShell extension executes in the same PowerShell session, directly in the global scope.
That is, running a script being edited, say, foo.ps1, in the debugger is effectively the same as invoking it with . .\foo.ps1, i.e. it is in effect dot-sourced.
Therefore, a given debugging run can be affected by earlier runs, because the state of earlier runs lingers.
This can result in bugs going undetected, such as in the following example:
Say your script defines variable $foo and uses it throughout the script. If you debug your script at least once, $foo is now defined in the PowerShell session in the PowerShell Integrated Console.
Say you then change the name to $bar, but you forget to also replace (all) references to $foo with $bar.
Your script is now effectively broken, but you won't notice in the same session, because $foo is still around from earlier debugging runs.
However, running the script from a regular PowerShell console would surface the problem.
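A minimal sketch of that failure mode (Invoke-Something and the URL are hypothetical stand-ins):
# Debug run 1 (dot-sourced into the session's global scope)
$foo = 'https://example.local'      # hypothetical value
Invoke-Something -Uri $foo

# You rename the variable but miss one reference:
$bar = 'https://example.local'
Invoke-Something -Uri $foo          # still "works" in the same session, because $foo lingers from run 1;
                                    # in a fresh console $foo is $null and the call fails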
The obsolescent Windows PowerShell ISE exhibits the same unfortunate behavior, invariably so, but fortunately there is a solution for the PowerShell extension - see next point.
You can avoid this problem by activating the Create Temporary Integrated Console setting (via File > Preferences > Settings or Ctrl+,), which ensures that every debugging run creates a new, temporary session that starts with a clean slate:
Whenever a new temporary session is started, any previous one is automatically discarded.
A temporary session has prefix [TEMP] in the list of running shells in the integrated terminal.
You pay a performance penalty, because a new PowerShell session must be created for every run, and you lose the previous session's display output - but I suspect avoiding the pitfalls of lingering state is worth the price.
Note that, in a given temporary session, the dot-sourced invocation described above still applies, but with the lingering-state problem out of the picture, it can now be considered an advantage: After the script finishes, and before the temporary session is replaced with a new one, the variables and functions defined in the script's top-level scope are then available for inspection.
I'm trying to convert a .ps1 file to run as a Windows service. This needs to run as a service because of Business Continuity requirements (a scheduled task is not an option). I've always used NSSM to wrap the .ps1, which then runs via NSSM as an exe.
This works for different scripts on Windows Server 2012, but this script is slightly different and I'm required to get this service working on Windows Server 2016. The script itself connects to a large number of other servers (in total I'll have 3 services - Windows Service / Windows Process / Linux Process), all of which work when just run within PowerShell.
Below is an example of the start of the script so you get an idea of how it works (it may not be relevant):
while ($test = 1)
{
[string]$query
[string]$dbServer = "DBSERVER" # DB Server (either IP or hostname)
[string]$dbName = "DBNAME" # Name of the database
[string]$dbUser = "CONNECTIONUSER" # User we'll use to connect to the database/server
[string]$dbPass = "CONNECTIONPASSWORD" # Password for the $dbUser
$conn = New-Object System.Data.Odbc.OdbcConnection
$conn.ConnectionString = "Driver={PostgreSQL Unicode(x64)};Server=$dbServer;Port=5432;Database=$dbName;Uid=$dbUser;Pwd=$dbPass;"
$conn.open()
$cmd = New-object System.Data.Odbc.OdbcCommand("select * from DBNAME.TABLENAME where typeofcheck = 'Windows Service' and active = 'Yes'",$conn)
$ds = New-Object system.Data.DataSet
(New-Object system.Data.odbc.odbcDataAdapter($cmd)).fill($ds) | out-null
$conn.close()
$results = $ds.Tables[0]
$Output = @()
foreach ($result in $results)
{
$Hostone = $Result.hostone
$Hosttwo = $Result.hosttwo
$Hostthree = $Result.hostthree
$Hostfour = $Result.hostfour
Write-Output "checking DB ID $($result.id)"
#Host One Check
if (!$result.hostone)
{
$hostonestatus = 17
$result.hostone = ""
}
else
{
try
{
if(Test-Connection -ComputerName $result.hostone -quiet -count 1)
{
$hostoneres = GET-SERVICE -COMPUTERNAME $result.hostone -NAME $result.ServiceName -ErrorAction Stop
$hostonestatus = $hostoneres.status.value__
$Result.HostOneCheckTime = "Last checked from $env:COMPUTERNAME at $(Get-date)"
}
else
{
$hostonestatus = 0
$result.hostonestatus = "Failed"
$Result.HostOneCheckTime = "Last checked from $env:COMPUTERNAME at $(Get-date)"
}
}
catch
{
$hostonestatus = 0
$result.hostonestatus = "Failed"
$Result.HostOneCheckTime = "Last checked from $env:COMPUTERNAME at $(Get-date) Errors Found"
}
if ($hostonestatus -eq 4)
{
$result.hostonestatus = "Running"
}
if ($hostonestatus -eq 1)
{
$result.hostonestatus = "Stopped"
}
elseif ($hostonestatus -eq 0)
{
$result.hostonestatus = "Failed"
}
}
As mentioned, the exact script running standalone works seamlessly.
What's the best way to run this as a service, and are there any known issues with NSSM when using it with Windows Server 2016?
I've also found the below, which may be pointing in the right direction, as I've intermittently got these in the logs:
DCOM event ID 10016 is logged in Windows
Windows sysadmin here.
Quite a few different ways to accomplish this from a purely service-oriented perspective.
--- 1 ---
If you are using Server 2016, I believe the PowerShell cmdlet 'New-Service' may be one of the cleanest ways. Have a look at the following for syntax and to see if it suits your use case --
https://support.microsoft.com/en-au/help/137890/how-to-create-a-user-defined-service
This cmdlet takes a credential parameter, so depending on your use case it may be a good fit for accessing resources on other machines.
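Its basic shape is something like this (names and paths are placeholders; note the executable itself must respond to the Service Control Manager, which is why wrappers such as NSSM or srvany are normally used to host a plain .ps1):
# Placeholder names and paths; -Credential is optional
New-Service -Name 'MyCheckSvc' `
            -DisplayName 'My Check Service' `
            -BinaryPathName '"C:\Tools\service-wrapper.exe" C:\Scripts\MyCheck.ps1' `
            -StartupType Automatic `
            -Credential (Get-Credential)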
--- 2 ---
Another way is to use the trusty old built-in SC.exe utility in Windows.
SC CREATE <servicename> Displayname= "<servicename>" binpath= "srvstart.exe <servicename> -c <path to srvstart config file>" start= <starttype>
An example --
SC CREATE Plex Displayname= "Plex" binpath= "srvstart.exe Plex -c C:\PlexService.ini" start= auto
As far as I can tell, this will create a service that will execute under the Local System context. For more information, have a look at the following:
https://support.microsoft.com/en-au/help/251192/how-to-create-a-windows-service-by-using-sc-exe
https://www.howtogeek.com/50786/using-srvstart-to-run-any-application-as-a-windows-service/
--- 3 ---
You may want to consider manually injecting some registry keys to create your own service (which is essentially what SC.exe does under the hood).
Although I'm unfortunately in no position at the moment to provide boiler-plate code, I'd encourage that you have a look at the following resource:
https://www.osronline.com/article.cfm%5Eid=170.htm
NOTE - you will need to provide all required sub-keys for it to work.
As with any registry changes, please make a backup of your registry and perform edits at your own risk prior to making any changes. I can only recommend to try this on a spare/test VM if possible prior to implementing to prod.
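As a rough sketch of what that involves (placeholder names and paths; a usable service entry needs at least ImagePath, Type, Start, ErrorControl and ObjectName):
# Sketch only - placeholder values; back up the registry before trying this
$svcKey = 'HKLM:\SYSTEM\CurrentControlSet\Services\MyCheckSvc'
New-Item -Path $svcKey -Force | Out-Null
New-ItemProperty -Path $svcKey -Name ImagePath    -PropertyType ExpandString -Value '"C:\Tools\service-wrapper.exe" C:\Scripts\MyCheck.ps1'
New-ItemProperty -Path $svcKey -Name DisplayName  -PropertyType String       -Value 'My Check Service'
New-ItemProperty -Path $svcKey -Name ObjectName   -PropertyType String       -Value 'LocalSystem'
New-ItemProperty -Path $svcKey -Name Start        -PropertyType DWord        -Value 2    # 2 = Automatic
New-ItemProperty -Path $svcKey -Name Type         -PropertyType DWord        -Value 16   # 0x10 = own process
New-ItemProperty -Path $svcKey -Name ErrorControl -PropertyType DWord        -Value 1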
I installed the xWebAdministration module. For some reason I am still getting this error message:
The term 'xWebsite' is not recognized as the name of a cmdlet
image url: http://i.stack.imgur.com/tTwUe.jpg
Here's my code:
Configuration MvcWebTest {
Param(
[String[]]$ComputerName = "tvw-irwebsvc",
$AppName = "MvcWebTest",
$User = "PAOMSvc",
$Password = "Welcome1",
$CodePath = "C:\websites\MvcWebTest"
)
Import-DscResource -Module xWebAdministration
Node $ComputerName {
#Install ASP.NET 4.5
WindowsFeature ASP {
Ensure = "Present"
Name = "Web-Asp-Net45"
}
File WebContent {
Ensure ="Present";
SourcePath ="\\DVW-MORBAM01\Build\Publish\MvcWebTest\Dev";
DestinationPath=$CodePath;
Type = "Directory";
Recurse = $True
}
# Create a new website
xWebsite Website {
Ensure = "Present";
Name = $AppName;
State = "Started";
PhysicalPath = $CodePath;
DependsOn = "[File]WebContent"
}
}
}
The screenshot is showing you the problem: The xWebsite resource isn't installed. Only the xwebApplication and xWebVirtualDirectory resources are installed.
I just downloaded the xWebAdministration 1.3.2.3 zip file from Technet, and it looks like someone made a boo-boo -- it's missing xWebSite! The Q&A section is full of people upset about it, so you're not alone. :)
Oddly enough, the Wave 9 resource kit that supposedly includes all the modules has the same problem!
The easiest way to get past this is to just grab version 1.3.2, which looks like it has everything.
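Whichever version you install, you can confirm what the module actually exposes before compiling, with something like:
# Lists the DSC resources found on the machine; xWebsite should show up under xWebAdministration
Get-DscResource | Where-Object { $_.Module -like 'xWebAdministration*' } | Select-Object Name, Module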
To enact the configuration, run the following command:
Start-DscConfiguration -Wait -Verbose -Path .\MvcWebTest
This cmdlet is part of the DSC system. The Wait parameter is optional and makes the cmdlet run interactively. Without this parameter, the cmdlet will create and return a job.
I'm having a lot of trouble trying to get a PowerShell Desired State Configuration script working to configure an in-house application. The root of the problem is that I can't seem to pass my configuration data down into a ScriptResource (at least not with the way I'm trying to do it).
My script is supposed to create a config folder for our in-house application, and then write some settings into a file:
configuration MyApp {
param (
[string[]] $ComputerName = $env:ComputerName
)
node $ComputerName {
File ConfigurationFolder {
Type = "Directory"
DestinationPath = $Node.ConfigFolder
Ensure = "Present"
}
Script ConfigurationFile {
SetScript = {
write-verbose "running ConfigurationFile.SetScript";
write-verbose "folder = $($Node.ConfigFolder)";
write-verbose "filename = $($Node.ConfigFile)";
[System.IO.File]::WriteAllText($Node.ConfigFile, "enabled=" + $Node.Enabled);
}
TestScript = {
write-verbose "running ConfigurationFile.TestScript";
write-verbose "folder = $($Node.ConfigFolder)";
write-verbose "filename = $($Node.ConfigFile)";
return (Test-Path $Node.ConfigFile);
}
GetScript = { @{Configured = (Test-Path $Node.ConfigFile)} }
DependsOn = "[File]ConfigurationFolder"
}
}
}
For reference, my configuration data looks like this:
$config = @{
AllNodes = @(
@{
NodeName = "*"
ConfigFolder = "C:\myapp\config"
ConfigFile = "C:\myapp\config\config.txt"
}
@{
NodeName = "ServerA"
Enabled = "true"
}
@{
NodeName = "ServerB"
Enabled = "false"
}
)
}
And I'm applying DSC with the following:
$mof = MyApp -ConfigurationData $config;
Start-DscConfiguration MyApp -Wait -Verbose;
When I apply this configuration it happily creates the folder, but fails to do anything with the config file. Looking at the output below, it's obvious that it's because the $Node variable is null inside the scope of ConfigurationFile / TestScript, but I've got no idea how to reference it from within that block.
LCM: [ Start Resource ] [[Script]ConfigurationFile]
LCM: [ Start Test ] [[Script]ConfigurationFile]
[[Script]ConfigurationFile] running ConfigurationFile.TestScript
[[Script]ConfigurationFile] node is null = True
[[Script]ConfigurationFile] folder =
[[Script]ConfigurationFile] filename =
LCM: [ End Test ] [[Script]ConfigurationFile] in 0.4850 seconds.
I've burnt off an entire day searching online for this specific problem, but all the examples of variables, parameters and configuration data all use File and Registry resources or other non-script resources, which I've already got working in the "ConfigurationFolder" block in my script. The thing I'm stuck on is how to reference the configuration data from within a Script resource like my "ConfigurationFile".
I've drawn a complete blank so any help would be greatly appreciated. If all else fails I may have to create a separate "configuration" script per server and hard-code the values, which I really don't want to do if at all possible.
Cheers,
Mike
Change this: $Node.ConfigFolder to $using:Node.ConfigFolder.
If you have a variable called $Foo and you want it to be passed to a script DSC resource, then use $using:Foo
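Applied to the script above, a minimal sketch (assuming an LCM recent enough to support $using: in Script resources; the node values are captured into simple variables first):
$configFile = $Node.ConfigFile
$enabled = $Node.Enabled

Script ConfigurationFile {
    SetScript  = { [System.IO.File]::WriteAllText($using:configFile, "enabled=" + $using:enabled) }
    TestScript = { Test-Path $using:configFile }
    GetScript  = { @{ Configured = (Test-Path $using:configFile) } }
    DependsOn  = "[File]ConfigurationFolder"
}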
Based on David's answer, I've written a utility function which converts my script block to a string and then performs a very naive search-and-replace to expand out references to the configuration data, as follows:
function Format-DscScriptBlock()
{
param(
[parameter(Mandatory=$true)]
[System.Collections.Hashtable] $node,
[parameter(Mandatory=$true)]
[System.Management.Automation.ScriptBlock] $scriptBlock
)
$result = $scriptBlock.ToString();
foreach( $key in $node.Keys )
{
$result = $result.Replace("`$Node.$key", $node[$key]);
}
return $result;
}
My SetScript then becomes:
SetScript = Format-DscScriptBlock -Node $Node -ScriptBlock {
write-verbose "running ConfigurationFile.SetScript";
write-verbose "folder = $Node.ConfigFolder";
write-verbose "filename = $Node.ConfigFile)";
[System.IO.File]::WriteAllText("$Node.ConfigFile", "enabled=" + $Node.Enabled);
}
You have to be mindful of quotes and escapes in your configuration data because Format-DscScriptBlock only performs literal substitution, but this was good enough for my purposes.
A quite elegant way to solve this problem is to work with the regular {0} placeholders. By applying the -f operator the placeholders can be replaced with their actual values.
The only downside with this method is that you cannot use the curly braces { } for anything other than placeholders (i.e. say a hashtable or a for-loop), because the -f operator requires the braces to contain an integer.
Your code then looks like this:
SetScript = ({
Set-ItemProperty "IIS:\AppPools\{0}" "managedRuntimeVersion" "v4.0"
Set-ItemProperty "IIS:\AppPools\{0}" "managedPipelineMode" 1 # 0 = Integrated, 1 = Classic
} -f @($ApplicationPoolName))
Also, a good way to find out if you're doing it right is by simply viewing the generated .mof file with a text editor; if you look at the generated TestScript / GetScript / SetScript members, you'll see that the code fragment really is a string. The $placeholder values should already have been replaced there.
ConfigurationData only exists at the time the MOF files are compiled, not at runtime when the DSC engine applies your scripts. The SetScript, GetScript, and TestScript attributes of the Script resource are actually strings, not script blocks.
It's possible to generate those script strings (with all of the required data from your ConfigurationData already expanded), but you have to be careful to use escapes, subexpressions and quotation marks correctly.
I posted a brief example of this over on the original TechNet thread at http://social.technet.microsoft.com/Forums/en-US/2eb97d67-f1fb-4857-8840-de9c4cb9cae0/dsc-configuration-data-for-script-resources?forum=winserverpowershell
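For completeness, a minimal sketch of that approach, building the script properties as strings so the configuration data is expanded at compile time (anything that must remain a runtime variable would need backtick-escaping):
Script ConfigurationFile {
    # $Node.* is expanded when the MOF is compiled;
    # nothing from ConfigurationData is needed at apply time
    SetScript  = "[System.IO.File]::WriteAllText('$($Node.ConfigFile)', 'enabled=$($Node.Enabled)')"
    TestScript = "Test-Path '$($Node.ConfigFile)'"
    GetScript  = "@{ Configured = (Test-Path '$($Node.ConfigFile)') }"
    DependsOn  = "[File]ConfigurationFolder"
}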
To the DSC pros, this may seem like a very simple question but I couldn't find any resources on the web for this, or for any of the error messages I've seen. It seems very difficult to dig up any information on DSC so perhaps we can start here.
I am trying to build a Powershell DSC configuration for installing a scheduled task. I have found a sample resource on Steve Murawski's Github page for StackExchange resources, and I have copied the 'StackExchangeResources' tree to my DSC repository.
I imported the StackExchangeResources module and attempted to create a very simple configuration using the ScheduledTask resource:
Import-Module StackExchangeResources
Configuration TempCleaner
{
param($NodeName)
Node $NodeName
{
$filePath = "C:\Tasks\TempCleaner.ps1";
ScheduledTask
{
Name = "Clear Temporary Files"
FilePath = $filePath
Daily = $true
FilePath = ""
Hours = 4
Minutes = 0
}
}
}
However, when I execute TempCleaner -Node TestNode, it doesn't actually do anything; no MOF files are written and no errors are thrown.
Now, a lot of examples I've seen involve giving a name to the invocation of the resource, something like this:
File TempCleaner
{
DestinationPath = $filePath
Contents = $(cat $tempCleanerScript | out-string)
Checksum = "SHA-512"
}
But when I try to give it a name like so,
ScheduledTask CleanerTask
{
Name = "Clear Temporary Files"
FilePath = $filePath
Daily = $true
FilePath = ""
Hours = 4
Minutes = 0
}
it will throw an exception:
ScheduledTask : No MSFT_ScheduledTask objects found with property 'TaskName' equal to
'CleanerTask'. Verify the value of the property and retry.
At C:\Users\Steve\Documents\DevOps\DSC\TempCleaner.ps1:13 char:9
+ ScheduledTask CleanerTask
+ ~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (CleanerTask:String) [Get-ScheduledTask]
, CimJobException
+ FullyQualifiedErrorId : CmdletizationQuery_NotFound_TaskName,Get-ScheduledTask
When I use the scheduled task resource in conjunction with the file resource as shown above, the file resource is written into the resulting MOF file but no other directives can be seen within.
There must be something I'm missing here. Is there some sort of verbose mode I can enable perhaps? Other logging options that aren't documented? That would be very helpful.
1) To use a third-party resource, you need to import it using Import-DscResource, not Import-Module.
Import-DscResource -Name StackExchange_ScheduledTask -ModuleName StackExchangeResources
Also, note that it has to be in the Configuration scope
2) Make sure the resource module you are using is deployed to C:\Program Files\WindowsPowerShell\Modules\
Place the whole StackExchangeResources folder with its contents (DSCResources etc.) there.
3) Resource name is mandatory
ScheduledTask task
{
#...
}
Here's the configuration with fixes:
Configuration TempCleaner
{
param($NodeName)
Import-DscResource -Name StackExchange_ScheduledTask -ModuleName StackExchangeResources
Node $NodeName
{
$filePath = "C:\test\TempCleaner.ps1";
ScheduledTask task
{
Name = "Clear Temporary Files"
FilePath = $filePath
Daily = $true
Hours = 4
Minutes = 0
}
}
}
Hope it helps.
If you are looking for an introduction to DSC, then I would suggest starting at:
PowerShell MVP Aman Dhally's blog.
PowerShell MVP Ravikanth C's post on PowerShellMagazine
Can't add comments yet, so editing my response. I think you may have duplicate keys in your resource.
Import-Module StackExchangeResources
Configuration TempCleaner
{
param($NodeName)
Node $NodeName
{
$filePath = "C:\Tasks\TempCleaner.ps1";
ScheduledTask
{
Name = "Clear Temporary Files"
FilePath = $filePath
Daily = $true
#FilePath = "" - Need unique keys. Also, FilePath is only a string not string[]
Hours = 4
Minutes = 0
}
}
}