PowerShell: Change to module shows no effect - some kind of recompile needed?

I am currently in the process of setting up a DSC pull server using the DSC Resource Kit Wave 8.
Unfortunately, the module MSFT_xDSCWebService.psm1 has a bug and throws an exception whenever a locale other than 'en' is in use.
The exception message tells me it's looking for a file called resource.dll in the wrong place, so I have made changes to the module to make it look in the right place.
However, the changes have no effect; even a Write-Host "Test..." is not showing up in the output. Is there some kind of cache that needs a refresh?

Make sure that in your Local Configuration Manager (LCM) settings, you set AllowModuleOverwrite to $true.
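For reference, a minimal sketch of an LCM meta-configuration that sets this flag; the configuration name and output path below are placeholders, and the other pull-client settings are omitted:
# Minimal sketch: enable AllowModuleOverwrite on the local node.
Configuration LcmAllowOverwrite {
    Node 'localhost' {
        LocalConfigurationManager {
            AllowModuleOverwrite = $true
        }
    }
}
LcmAllowOverwrite -OutputPath 'C:\DSC\Meta'
Set-DscLocalConfigurationManager -Path 'C:\DSC\Meta' -Verbose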

This may be because DSC has already loaded the previous version of the module into memory. An easy way to refresh is to bring down the DSC host process and run the configuration again (do not try this on a production system, as other WMI providers are co-hosted with DSC):
# Kill the WMI provider host process that has the DSC engine (dsccore.dll) loaded.
Get-Process wmiprvse | Where-Object { $_.Modules.ModuleName -ieq 'dsccore.dll' } | Stop-Process -Force

Related

Trying to use PowerShell DSC inside of MDT Task Sequence - how to handle reboots?

I am trying to use DSC push config inside of MDT, and currently running into a few issues. Here is kind of my thought process around this:
Because I am doing a "whole server configuration" for a pretty complex config, the LCM is going to require multiple reboots.
I want to make sure that I depend upon the LCM to communicate to MDT that a reboot is required, after which MDT will reboot the server and re-run the configuration.
I'd like for the configuration to continue in spite of any warnings or failures. The BDD.log will show what these are, so that the person provisioning the server can see what failed and either fix it in the code, or just fix it after the fact.
I think that there is some method of using $TSenv:SMSTSRetryRequested and $TSenv:SMSTSRebootRequested.
What I've tried so far is something along these lines; however, it didn't work:
if ((Get-DscConfigurationStatus).RebootRequested -eq $true) {
    $TSenv:SMSTSRebootRequested = $true
    $TSenv:SMSTSRetryRequested  = $true
}
else {
    $TSenv:SMSTSRebootRequested = $false
    $TSenv:SMSTSRetryRequested  = $false
}
Now what ends up happening here is that the configuration goes as expected. On every reboot, it will check through existing resources and then start on new ones further down in the configuration, even if there are failures.
But for some reason, it seems that MDT thinks that a reboot is always requested, even if it gets all the way to the end of the configuration. Mind you, there always seem to be failures here and there during the DSC apply phase, so perhaps that's the problem? Otherwise, maybe the $TSenv variables are persisting through the reboots?
I also do know that there is a way inside of the DSC configuration to tell the LCM how to process reboots, but I'm not quite sure how to use that resource or where to put it, or if it will solve my problem. It's my next step, but I wanted to post here to see if anybody has any thoughts or has done something like this before.
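For reference, the LCM setting being alluded to is a meta-configuration; a minimal sketch follows (WMF 5 syntax, applied with Set-DscLocalConfigurationManager, and whether it plays nicely with MDT's own reboot handling is an assumption):
# Sketch: tell the LCM it may flag reboots and resume the configuration afterwards.
[DSCLocalConfigurationManager()]
Configuration LcmRebootSettings {
    Node 'localhost' {
        Settings {
            RebootNodeIfNeeded = $true
            ActionAfterReboot  = 'ContinueConfiguration'
        }
    }
}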

Loading assembly DLL in PowerShell doesn't work anymore

I have an exchange script that contains this:
$dllpath = "C:\assembly\Microsoft.Exchange.WebServices.dll"
[void][Reflection.Assembly]::LoadFile($dllpath)
And this did work on my previous system, but on my current system when I try to run it, I get an error message saying that I am trying to load an assembly from a remote location, that sandboxed assemblies are disabled in this version of .NET, and that I need to add a 'loadFromRemoteSources' flag to the config. There is also this link: http://go.microsoft.com/fwlink/?LinkId=155569
What do I need to do?
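A hedged sketch of one common workaround, assuming the DLL was copied from another machine or downloaded and therefore carries the NTFS zone marker that .NET 4 rejects:
# Remove the "downloaded from another location" zone marker, then load as before.
$dllpath = "C:\assembly\Microsoft.Exchange.WebServices.dll"
Unblock-File -Path $dllpath          # requires PowerShell 3.0+
[void][Reflection.Assembly]::LoadFile($dllpath)
# Alternative if unblocking is not an option:
# [void][Reflection.Assembly]::UnsafeLoadFrom($dllpath)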

How to track down why an installer script in a pipeline is not executing?

I'm new to the whole Azure DevOps world and just got transferred to a new team that does just that.
One of my assignments is to fix an issue with a pipeline where one of the steps runs a shell script that installs an application. Currently, the step seems to run without any issue shown in the log, but when we connect to the container's pod, the app is not there.
If we run the script directly inside the pod, the application is installed correctly. I'm not sure how to track this down. One of the things I've tried was to check the event log for errors while the installation is executed: Get-EventLog -LogName "Windows PowerShell" -Newest 20, but so far no luck there. Again, I'm kind of new at this and not sure what other tools are out there to track down why the script is not installing during the pipeline execution.
To troubleshoot your pipeline run, you can configure your pipeline logs to be more verbose.
1. To configure verbose logs for a single run, you can start a new build by choosing Run pipeline and selecting Enable system diagnostics, then Run.
2. To configure verbose logs for all runs, you can add a variable named system.debug and set its value to true.
You can also try logging into your agent server and checking the event log. See this blog for how to view the event log on Windows.
The issue was related to how the task was awaited. Adding these piped parameters helped us solve the issue:
RUN powershell C:\dev\myprocess.ps1 -PassThru | Wait-Process;
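A hedged reading of why that helps: the script was returning before the installer process it launched had finished, so the pipeline step reported success while the install was still running. Inside the script itself, the same idea can be sketched by waiting on the installer and failing on a non-zero exit code (the installer path and arguments below are placeholders):
# Launch the installer, wait for it to finish, and surface a failure to the pipeline.
$installer = Start-Process -FilePath 'C:\dev\installer\setup.exe' -ArgumentList '/quiet' -Wait -PassThru
if ($installer.ExitCode -ne 0) {
    throw "Installer exited with code $($installer.ExitCode)"
}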

Running BitsTransfer from Local Service account

I am working on making some scripts to make my job a little bit easier.
One of the things I need is to download some files to use. I first used PowerShell with the command Invoke-WebRequest.
It works really well; however, it doesn't run on Windows 7 computers, as they have PowerShell 2. As I have about as many Windows 7 PCs as Windows 10 ones, I need to find another way.
I found that Start-BitsTransfer is a good way that should work on most computers. My problem now is that when I run the script via my remote support session, it runs under the Local Service account, and then BitsTransfer won't run and gives me an error (0x800704DD).
Is there a way to get around that problem, or any command that can be used on both Windows 7 and 10 and run from the Local Service account?
You should update PowerShell as gms0ulman states, but if you are not the person who is in charge of this decision, you have to take other steps.
This error code, 0x800704DD (ERROR_NOT_LOGGED_ON), occurs because the System Event Notification Service (SENS) is not receiving user logon notifications. BITS (version 2.0 and up) depends on logon notifications from the Service Control Manager, which in turn depends on the SENS service. Ensure that the SENS service is started and running correctly.
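A quick way to check that is sketched below (starting the service requires elevation):
# Check that SENS and BITS are present and running; start SENS if it is stopped.
Get-Service -Name SENS, BITS | Select-Object Name, Status
if ((Get-Service -Name SENS).Status -ne 'Running') {
    Start-Service -Name SENS
}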
By default, BITS runs under the LocalSystem account. To modify, stop, or restart BITS, you must be logged on as an administrator. In your situation, when you log on with a regular account and start PowerShell with elevated privileges, BITS doesn't run under the regular user account. To resolve this, you may need to configure the log-on account for BITS. Please visit the following link to see how to configure how a service is started.
Configure How a Service is Started
Services are often run with default settings — for example, a service may be disabled automatically at startup. However, you can use the Services snap-in to change the default settings for a service. This is useful if you are troubleshooting service failures or if you need to change the security account under which a service runs. Membership in Account Operators or Domain Admins, Enterprise Admins, or equivalent, is the minimum required to complete this procedure. Review the details in "Additional considerations" in this topic.
https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc755249(v=ws.10)
I also agree that you should not continue supporting PowerShell 2.0. Ideally, ditch Windows 7 (it's way too old now); if you can't do that, upgrade PowerShell; if you can't do that, find a new job; and if you can't do that, then I guess bring on the workarounds!
postanote's answer covers the BITS angle.
The other thing you can do is just use the .NET Framework's underlying classes, which is exactly what Invoke-RestMethod and Invoke-WebRequest do (those cmdlets were introduced in PowerShell 3.0, but the guts of them have been around much longer).
try {
    $wc = New-Object -TypeName System.Net.WebClient
    $wc.DownloadFile($url, $path)
}
finally {
    $wc.Dispose()
}
Most people don't bother disposing of IDisposable objects in PowerShell, so you'll see a lot of shorthand like this:
(New-Object Net.WebClient).DownloadFile($url, $path)
That's probably fine if your script's process isn't going to be around for long, but it's good to keep the disposal in mind in case you incorporate this into something of a larger scale.

Application Deployment with PowerShell

I've developed a Powershell script to deploy updates to a suite of applications; including SQL Server database updates.
Next I need a way to execute these scripts on 100+ servers without manually connecting to each server. "PowerShell v2 with remoting" is not an option, as it is still in CTP.
PowerShell v1 with WinRM looks the most promising, but I can't get feedback from my scripts. The scripts execute, but I need to know about exceptions. The scripts create a log file; is there a way to send the contents of the log file back to the "client" (the local computer making the remote calls)?
The quick answer is no. The long version: it is possible, but it will involve lots of hacks. I developed a very similar deployment script/system using PowerShell 2 last year. The remoting feature is the primary reason we put up with the CTP status. PowerShell 1 with WinRM is flaky at best and, as you said, gives no real feedback apart from OK or failed.
Alternatives that I considered included using PsExec, which is very much non-standard and may be blocked by firewalls. The other approach involves using systems management tools such as MS's System Center, but that's just a big hammer for a tiny nail. So you have to pick your poison...
Just a comment on this: the easiest way to capture PowerShell output is to use the Start-Transcript cmdlet to pipe console output to a file. We have a small snippet at the start of all our scripts that sends a log file with the console output from each script to a central file share, and names the log file with the script name and execution date so that we'll have an idea of what happened. It's not too hard to pipe all those log files into a database for further processing either. It probably won't solve all your problems, but it would definitely help with the "getting data back" part.
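A minimal sketch of that kind of snippet, assuming a central share at \\fileserver\DeployLogs (both the share path and the naming scheme are placeholders):
# Start a transcript named after the script and the execution time, on a central share.
$logName = '{0}_{1:yyyyMMdd_HHmmss}.log' -f $MyInvocation.MyCommand.Name, (Get-Date)
Start-Transcript -Path (Join-Path '\\fileserver\DeployLogs' $logName)
try {
    # ... the actual deployment work goes here ...
}
finally {
    Stop-Transcript
}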
best regards,
Trond