Getting CimException: Invalid property when using Get-Disk with no parameters - powershell

I have a script which makes use of the Get-Disk cmdlet in PowerShell. Intermittently, I get an error when using Get-Disk with no parameters:
$disk = Get-Disk | Where-Object { $_.Location -eq $Location }
Microsoft.Management.Infrastructure.CimException: Invalid property
at Microsoft.Management.Infrastructure.Internal.Operations.CimAsyncObserverProxyBase`1.ProcessNativeCallback(OperationCallbackProcessingContext callbackProcessingContext, T currentItem, Boolean moreResults, MiResult operationResult, String errorMessage, InstanceHandle errorDetailsHandle)
where $Location is the disk location (similar to PCIROOT(0)#PCI(1500)#PCI(0000)#SAS(P00T01L00)). The line is part of our VM provisioning script, which runs after the clone and the VMware customization script. The error does not always happen, and if I run the script manually later it succeeds every time, leading me to believe it is a race condition of some sort. Any ideas as to why Get-Disk isn't working reliably?
Ultimately, this script is being kicked off from vRealize Orchestrator (vRO, formerly vCenter Orchestrator or vCO) using the Guest Script Manager plugin. This detail may not be relevant, but this script has only failed running when kicked off by this plugin.
Additional details:
Powershell Version: 4.0
OS Version: Windows Server 2012 R2
Hypervisor: VMWare vCenter Version 6.0.0 Build 5112533
vRO Version: 7.2

I ended up provisioning the disks with diskpart instead of the storage cmdlets, which works without issue. I did find out, though, that our script runs while the Windows installation is still completing, which may account for the storage cmdlets not working properly.
Follow-up: I confirmed that the storage cmdlets were indeed failing because the Windows installation was still completing. Now that I have figured out how to wait for completion, the storage cmdlets work fine every time.
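For anyone curious what "waiting for completion" might look like: the answer does not include the actual wait logic, but one possible sketch (my own illustration, assuming the setup-state values under HKLM:\SYSTEM\Setup are a reliable indicator on Server 2012 R2) is to poll that key before touching the storage cmdlets:
# Hedged sketch, not the poster's actual code: wait until Windows setup/OOBE reports finished.
$setupKey = 'HKLM:\SYSTEM\Setup'
do {
    $state = Get-ItemProperty -Path $setupKey
    # Missing values cast to 0, i.e. "not in progress"
    $busy = ([int]$state.SystemSetupInProgress -ne 0) -or ([int]$state.OOBEInProgress -ne 0)
    if ($busy) { Start-Sleep -Seconds 10 }
} while ($busy)
# The storage cmdlets should now behave consistently
$disk = Get-Disk | Where-Object { $_.Location -eq $Location }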

Related

WOL works outside of Powershell

Regardless of what script I use, I cannot get PowerShell 5.1 to trigger a boot on my Hyper-V host.
I can use the SolarWinds WakeOnLan tool to boot the server, but I would like to find a solution that works natively.
I tried many scripts I found online and, as a last-ditch effort, installed the "WakeOnLAN 1.0" module, but while it says it executes successfully, the server does not boot:
PS C:\WINDOWS\system32> Invoke-WakeOnLan 52:a4:4c:52:d7:52 -Verbose
VERBOSE: Wake-on-Lan Packet sent to 52:a4:4c:52:d7:52
What could cause the server to boot only with the SolarWinds WakeOnLan.exe and not natively in PowerShell?
In case it is relevant, the computer I am sending the magic packet from is a multi-NIC machine, but only one NIC has an IP on the subnet of the Hyper-V server.
Other Scripts I attempted to use:
https://www.pdq.com/blog/wake-on-lan-wol-magic-packet-powershell/
https://powershell.one/code/11.html
Something like this works for me with remote PowerShell, going to the same subnet the down computers are on. Fast Startup also has to be disabled in the Windows 10 registry (HiberbootEnabled=0).
$mac = @{comp002 = '00:11:22:33:44:55'; comp003 = '00:11:22:33:44:56'}
$compsDown = 'comp002','comp003'
# (,) is silly workaround to pass array as invoke-command arguments
icm comp001 invoke-wakeonlan.ps1 -args (,$mac[$compsDown])
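The invoke-wakeonlan.ps1 script itself is not shown in the answer. As a rough idea of what such a script might contain (my own sketch, assuming a standard magic packet broadcast over UDP port 9, not the answerer's actual file):
# Hypothetical invoke-wakeonlan.ps1 -- builds a Wake-on-LAN magic packet
# (6 x 0xFF followed by the target MAC repeated 16 times) and broadcasts it.
param([string[]]$MacAddress)
foreach ($mac in $MacAddress) {
    # Accept colon- or dash-separated MAC strings
    $macBytes = $mac -split '[:-]' | ForEach-Object { [Convert]::ToByte($_, 16) }
    $packet = [byte[]]((@(0xFF) * 6) + ($macBytes * 16))
    $udp = New-Object System.Net.Sockets.UdpClient
    $udp.EnableBroadcast = $true
    $udp.Connect([System.Net.IPAddress]::Broadcast, 9)
    [void]$udp.Send($packet, $packet.Length)
    $udp.Close()
}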

How can I resize the Docker Desktop Virtual Machine on Windows 10 from a PowerShell script?

I am attempting to write a PowerShell script (using PS Core 7.0) to install and configure a Kubernetes cluster running on Kind on the Windows 10 machines used by my teams. I have a working script to start up and configure the cluster; the only issue is that I would like to (need to) ensure the Docker Desktop VM has enough memory available to run a few of our microservices inside the cluster at the same time.
I've got a bit of code cobbled together to perform the task, and it works up to the very last step, where I attempt to get the Docker daemon working again after the restart. As soon as I run the command to do that, the VM is reconfigured back to its previous memory size.
Here's what I have to perform the resizing:
Stop-Service *docker*
Get-VM DockerDesktopVM | Stop-VM
Get-VM DockerDesktopVM | Set-VMMemory -StartupBytes 12888MB
Get-VM DockerDesktopVM | Start-VM
Start-Service *docker*
# https://stackoverflow.com/questions/51760214/how-to-restart-docker-for-windows-process-in-powershell
&$Env:ProgramFiles\Docker\Docker\DockerCli.exe -SwitchDaemon
&$Env:ProgramFiles\Docker\Docker\DockerCli.exe -SwitchDaemon
Note: I found the post How to restart docker for windows process in powershell?, which is where I got the last two lines.
In researching the issue further, I found that I can use the following single line instead, but I still have the same issue: the memory size is reverted once the command is run.
&$Env:ProgramFiles\Docker\Docker\DockerCli.exe -SwitchLinuxEngine
If I do not run either DockerCli.exe -SwitchDaemon twice or DockerCli.exe -SwitchLinuxEngine once, then I get the error:
error during connect: Get http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.40/containers/json: open //./pipe/docker_engine: The system cannot find the file specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.
Is there a better way to go about resizing the VM memory or to shutdown and restart docker without causing the change to be reverted?
For anyone else who is attempting the same thing, or something similar: I got a hint from the Docker Desktop for Windows Community on GitHub that helped me find a solution. In a nutshell, the recommendation was to simply change the settings file directly. What I found worked was to:
1. Stop the Docker services (there are two of them).
2. Update the settings file (~\AppData\Roaming\Docker\settings.json).
3. Start the Docker services.
4. Switch the daemon context to Linux (the same as it was before, but it appears to need a nudge to pick things up after restarting the services).
Here's the PowerShell:
Stop-Service *docker*
$settingsFile = "$env:APPDATA\Docker\settings.json"
$settings = Get-Content $settingsFile | ConvertFrom-Json
$settings.memoryMiB = 8192
$settings | ConvertTo-Json | Set-Content $settingsFile
Start-Service *docker*
&$Env:ProgramFiles\Docker\Docker\DockerCli.exe -SwitchLinuxEngine
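As a quick sanity check (my own addition, not part of the original answer), the Linux VM's new memory limit should be visible once the daemon is back up:
# Reports the total memory available to the Docker engine, in bytes
docker info --format '{{.MemTotal}}'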

Task sequence variable for OS, I want to install

I'm a complete beginner in PowerShell. I'm working on a project whose goal is to set BIOS settings, such as disabling or enabling Secure Boot and UEFI mode, while installing Windows 7 or 10 via MDT.
I'm working with Dell and HP computers, and I have the scripts for setting the BIOS on HP or Dell:
Hp:
$bios=Get-WmiObject -Namespace root/hp/instrumentedBIOS -Class HP_BIOSSettingInterface
$bios.SetBIOSSetting("UEFI Boot Options", "Enable","")
Dell:
(Get-WmiObject DCIM_BIOSService -namespace root\dcim\sysman -ComputerName .).SetBIOSAttributes($null,$null,"Secure Boot","1")
My first problem: these commands do not work on every computer; I need to install some modules or cmdlets from the HP or Dell websites. I want to know whether, if I turn my script into an ".exe", it will work on every computer.
I ask because I need to run my script as part of the Windows deployment.
My second, more difficult task: I want to know which task sequence variable to use in my script to detect the OS of the task sequence. After a lot of searching I found this code on the internet:
$TaskPath = "$($MdtDrive):\Task Sequences"
$ControlPath = "$MDtroot\Control"
$OSPath = "$($MdtDrive):\Operating Systems"
$OS = (Get-ChildItem -Path $OSPath | Out-GridView -PassThru -Title "Select required OperatingSystem").Name
Does this code detect whether the OS of the task sequence I want to install on my computer is Windows 7 or Windows 10?
Thanks!
If I recall correctly from my days of systems deployment, Dell and HP both make dedicated tools for setting BIOS configuration. Just make sure you run them in WinPE. Depending on which BIOS settings you change, you may even have to boot WinPE twice to make sure the OS installs the way you want.
Dell: http://en.community.dell.com/techcenter/enterprise-client/w/wiki/7532.dell-command-configure
HP: https://deploymentbunny.com/2010/10/18/enable-tpm-via-task-sequence-on-hp-boxes/
Although it is definitely possible to make these settings via WMI, I would only look to it as a last resort. Windows has to be compatible with every piece of hardware, whereas the Dell/HP tools are targeted at their own systems. It's like using a scalpel vs a Swiss army knife.
I have some difficulties. I'm working on a script that sets the BIOS configuration while installing Windows 7 or 10 via MDT, so my first question is:
Which variable can I use to identify the OS of the new task sequence, i.e. the OS that MDT is preparing to install on the computer after the user selects it during installation?
I'm wondering whether this code does the job:
$OSPath = "$($MdtDrive):\Operating Systems"
$OS = Get-ChildItem -Path $OSPath | Out-GridView -PassThru -Title "Select required OperatingSystem"
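For what it's worth, Get-ChildItem against the deployment share only lists the operating systems that are available; to see which OS the running task sequence is actually installing, the task sequence variables have to be read from inside the running sequence. A rough sketch (my own illustration; the exact variable name that identifies the selected OS depends on your deployment share, so dump them all first):
# Hedged sketch: read variables from inside a running MDT/ConfigMgr task sequence
$tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment
$tsenv.GetVariables() | ForEach-Object { "{0} = {1}" -f $_, $tsenv.Value($_) }
# Example: the ID of the task sequence the user selected
$tsId = $tsenv.Value('TaskSequenceID')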

Get Azure VM Detail by PowerShell

I am trying to run the Get-AzureVM PowerShell command. It runs fine but does not return any output.
I also tried the following variant, but the result is still blank. Any ideas?
Get-AzureVM -Name "vmname" |Select-Object name,instancesize,location
You should call Select-AzureSubscription "subscription name" first.
It likely is defaulting to a subscription that doesn't have any virtual machines in it.
To view your current subscription names call:
Get-AzureSubscription | select SubscriptionName
Actually the answer above is only semi-correct.
This had me pulling my virtual hair out trying to do automation (which took 7 hours of manual fudging to get working!).
Simply put, you have two types of virtual machine in Azure: Classic and Resource Manager.
If you run Switch-AzureMode -Name AzureServiceManagement and then Get-AzureVM, you will list all of the classic VMs you have created.
If you run Switch-AzureMode -Name AzureResourceManager and then Get-AzureVM, you will list all of the Resource Manager (new) VMs you have created.
And remember, if you are trying to do automation, you need the VMs in the new mode available through the portal; your old (classic) VMs created through the management portal are not visible in this mode, and you will have to recreate them.
Azure has two types of management system: AzureServiceManagement (ASM) and AzureResourceManager (ARM).
In order to control these two different types of management system, you should switch between them as described on the main page of the Azure PowerShell GitHub project, but this is only true for Azure PowerShell versions lower than 1.0.0; you can find more explanation here.
For those interested in controlling ARM (AzureResourceManager) with PowerShell versions greater than 1.0.0: all cmdlets follow the format [Verb]-AzureRm[Noun]. For example, New-AzureVM becomes New-AzureRmVM; in our case, Get-AzureVM becomes Get-AzureRmVM.
In summary:
For PowerShell versions lower than 1.0.0, you have to switch between modes and use Get-AzureVM, which is very confusing in my opinion and in many others'.
For PowerShell versions equal to or greater than 1.0.0, use Get-AzureVM for ASM and Get-AzureRmVM for ARM, as illustrated below.
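A short illustration of the two naming patterns (the resource group name here is just a placeholder):
# Azure PowerShell < 1.0.0: one cmdlet name, behaviour depends on the current mode
Switch-AzureMode -Name AzureResourceManager   # removed in 1.0.0 and later
Get-AzureVM
# Azure PowerShell >= 1.0.0: ASM and ARM cmdlets coexist with different nouns
Get-AzureVM                                         # classic (ASM) VMs
Get-AzureRmVM -ResourceGroupName 'MyResourceGroup'  # ARM VMs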
I know this question has been answered, but I tried the given answer and it did not work for me. I found I needed to switch my Azure mode.
To resolve it, I ran the following PowerShell:
Switch-AzureMode -Name AzureResourceManager
Switching the Azure PowerShell mode between AzureServiceManagement and AzureResourceManager is a possible solution if your script uses older features as well as the new Azure Resource Manager cmdlets. The switch is needed only for Microsoft Azure PowerShell version 0.9.8 or older.

PowerShell Stop-Service/Start-Service not working on a specific server

I have three servers, let's call them Deploy1, Deploy2, Target.
All servers are running Windows Server 2008R2, fully updated.
A domain user, admin1, is configured as administrator on all servers, and this is the user I'm running all the commands with.
The following command works on Deploy1:
Get-Service "MyService" -ComputerName Target | Stop-Service
When running the same command on Deploy2, the command fails with the following message:
Cannot find any service with service name 'MyService'.
On Deploy2, the following command works, and displays the service and its status.
Get-Service "MyService" -ComputerName Target
Now, I know there are other ways to stop/start services via PowerShell, but I like this one as it automatically waits for the service to actually stop/start.
So what could be wrong with Deploy2?
PowerShell v2.0 has a bug (feature?) in how the object returned by Get-Service is implemented: it does not actually set the ComputerName property correctly, so it can only affect local services. If you upgrade to Windows Management Framework 3.0 (and consequently PowerShell v3), the bug is fixed and this will work correctly.
Does this work? If not, is there an error produced?
(Get-Service "MyService" -ComputerName Target).Stop()
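Another option that may be worth trying (my own suggestion, assuming PowerShell remoting is enabled on Target) is to run the stop on the target itself, which sidesteps any local-versus-remote quirks in the returned service object:
# Stop-Service runs locally on Target, so it still waits for the service to actually stop there
Invoke-Command -ComputerName Target -ScriptBlock { Stop-Service -Name 'MyService' }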