How to determine if EMC PowerPath is installed on an ESX host using PowerCLI - powershell

TL;DR How can I use PowerCLI to determine if EMC PowerPath is installed on an ESX host?
I am attempting to write a script that will perform a host-masking operation when moving a LUN from one storage group to another. This is to work around the All Paths Down error that can occur due to a race condition in ESX 4.1. The steps are described in VMware KB 1015084 and 1009449. Those steps are written for use from the service console; I want to avoid scripting SSH activity and instead do the entire thing in PowerShell/PowerCLI.
In our environment, we are using EMC PowerPath on most - but not all - of our hosts. This LUN masking only needs to be performed on hosts where PowerPath is installed, so I am attempting to test each host to determine this.
I have been pulling my hair out trying to determine how to do this with PowerCLI. If connected to the ESX service console, the command esxcfg-mpath --list-plugins will show if PowerPath is installed. In the vCenter GUI, it can be determined by:
Select Host -> Configuration -> Storage Adapters -> Select Adapter -> View Devices -> Examine "Owner" column
Using Get-ScsiLun in PowerCLI returns an object that contains all of this information except the Owner column.
I am stumped. I had hoped that a get-esxcli object would have some kind of equivalent methods, maybe in satp or nmp, but so far I can't find anything.

As suggested, I'll answer my own question:
The answer is: $esxcli.corestorage.plugin.list() will return a list of plugins installed on the host.

To get this information from PowerCLI 6.5 you can use the following:
(Get-ESXCLI -VMHost <host>).Storage.Core.Plugin.List()
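For example, here is a rough sketch that loops over every host and flags the ones whose plugin list mentions PowerPath. The vCenter name is a placeholder, and matching against the rendered output avoids guessing at exact property names, which vary between PowerCLI versions:

Connect-VIServer -Server vcenter.example.com   # placeholder vCenter

foreach ($vmhost in Get-VMHost) {
    # Plugin.List() returns the same data as 'esxcli storage core plugin list'
    $plugins = (Get-EsxCli -VMHost $vmhost).storage.core.plugin.list()

    if (($plugins | Out-String) -match 'PowerPath') {
        Write-Output "$($vmhost.Name): PowerPath detected - apply the LUN masking steps"
    }
    else {
        Write-Output "$($vmhost.Name): no PowerPath - skip"
    }
}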

Related

What is the proper way to check if HyperV is running?

I am trying to write a PowerShell script to install and set up Hyper-V machines. The install seems to be OK; however, I get contradictory responses from the system.
Basically, I use (gcim Win32_ComputerSystem).HypervisorPresent to determine if Hyper-V is running.
It returns False.
There is a similar class with the same member, (gcim CIM_ComputerSystem).HypervisorPresent, which also returns False.
I also found this question: How do you check to see if Hyper-V is enabled using PowerShell? There, the State property comes back as Enabled.
Am I missing something? Aren't these queries the same? Could you point out whether any of these are deprecated?
Or am I completely mistaken, and Enabled only means the system is capable of running Hyper-V, not that it is actually running?
The history of CIM and WMI is a long story, but the short summary is that WMI is Microsoft's implementation of the CIM standards defined by the DMTF, the Distributed Management Task Force, which set out to create an industry-wide standard. So, of course, creating one new standard resulted in a bunch of different implementations, each of which is basically its own standard.
But otherwise CIM and WMI can be thought of as different gateways to the same information for Windows computers. Different doors to the same house. More on that history and the distinctions here.
When I run the PowerShell commands you shared (either of them) on my machine with Hyper-V present, even when running as a standard, non-admin user, I get True back for both.
You can also check to see if the BIOS firmware has virtualization enabled by looking in the CIM_Processor class.
(Get-CimInstance win32_processor).VirtualizationFirmwareEnabled
True
You could also check to see if the Windows Feature is installed but that doesn't give you the full picture (what if the Windows feature is enabled in an image applied to a machine without virtualization components enabled in the BIOS, for instance.)
[ADMIN] C:\>(Get-WindowsOptionalFeature -FeatureName Microsoft-Hyper-V-All -Online).State
Enabled
Also, that technique 👆 requires admin permissions.
Another way, and maybe the easiest, is to check whether the Hyper-V Host Compute Service is running. It is needed for any VMs to launch, and it can only run if everything else on the machine has been set up correctly to enable Hyper-V.
Get-Service vmcompute
Status Name DisplayName
------ ---- -----------
Running vmcompute Hyper-V Host Compute Service
We used to deploy servers with an MDT Task Sequence and enable Hyper-V along the way. It required reboots and special commands to apply the right BIOS settings. Then we could enable the Windows features, but those required two reboots, so it was quite tricky to handle with most imaging systems. Our final 'sanity check' was whether the Hyper-V Host Compute Service was running.
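Putting those checks together, a rough sanity-check sketch using only the cmdlets shown above might look like this:

$hypervisorPresent = (Get-CimInstance Win32_ComputerSystem).HypervisorPresent
# Note: once the hypervisor is active it owns the VT extensions, so this firmware flag can read False
$firmwareEnabled   = (Get-CimInstance Win32_Processor).VirtualizationFirmwareEnabled
$computeService    = Get-Service vmcompute -ErrorAction SilentlyContinue

if ($hypervisorPresent -and $computeService -and $computeService.Status -eq 'Running') {
    'Hyper-V is running: hypervisor present and the Host Compute Service is up.'
}
elseif ($firmwareEnabled) {
    'Virtualization is enabled in firmware, but Hyper-V does not appear to be running.'
}
else {
    'No hypervisor or firmware virtualization detected on this machine.'
}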

Powershell Script to list all Domain connected hosts

I'm writing a script to audit Windows Servers for PCI compliance. One of the things my project lead has asked me to attempt is getting a list of all hosts that are joined to the domain; however, this script needs to run on any Windows server without importing any modules, so I'm stuck with whatever tools already exist on a bare machine.
I've already written parts of the script that can rely on the Active Directory module, but I also need a way to get this information without any DNS or domain roles installed.
The closest I can get is the 'netdom' command; however, it relies on usernames and passwords that I cannot prompt for in the auditing script.
I've tried tools like nslookup and a few other things I've come across while looking for answers online, but most of them seem to rely on modules that I cannot install on the machines the script will need to run on.
Does anyone know if this can actually be done, and if so, how?
Edit: for a bit more clarity, I need a way to get a list of all machines in the domain from machines that are NOT domain controllers, and I cannot alter these machines at all.
As per boxdog's comment, the ([adsisearcher]"objectcategory=computer").FindAll() command works just fine.
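For completeness, here is a small sketch built only on [adsisearcher], which is available in PowerShell on any domain-joined machine with no extra modules; the output file is just an example:

# Search the current domain for all computer accounts
$searcher = [adsisearcher]'(objectCategory=computer)'
$searcher.PageSize = 1000   # page results so large domains aren't cut off at the 1000-object limit

$searcher.FindAll() |
    ForEach-Object { $_.Properties['name'][0] } |
    Sort-Object |
    Set-Content .\DomainComputers.txt   # example output path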

Failed to load the provider SiloedPackageProvider.dll and metaDeployProvider.dll

I'm trying to simulate a Raspberry Pi on a Windows 10 laptop using Windows 10 IoT Core.
http://annabooks.com/Articles/Articles_IoT10Core/Windows-10-IoT-Core-VM-Version-1.2.pdf
I found this article very useful, but it uses a pre-built image "For MinnowBoard Turbot/MAX".
I get these errors, among others:
Failed to load the provider SiloedPackageProvider.dll and metaDeployProvider.dll
CFfuMiscHelpersT ValidateNotOnTheSameDisk#904 failed with 0x80070001.
while executing this command from WinPE:
Dism.exe /Apply-Image /ImageFile:"d:\Flash.ffu" /ApplyDrive:\\.\PhysicalDrive0 /SkipPlatformCheck
Also, please tell me a way to copy logs from the VM running under Hyper-V.
Thanks
This issue occurs when you try to back up a specific library or when you accept the default settings in Windows Backup and Restore. You may try to follow this document to fix the issue.
There are various ways to copy data between a Hyper-V host and its guest machines. You can search for them on the internet, or open a new question for help.
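As one example, assuming both the host and the guest are running Windows 10 / Server 2016 or later with PowerShell Direct available (it will not work while the guest is booted into WinPE), you can open a session into the VM by name and copy files out of it; the VM name and paths below are placeholders:

$cred    = Get-Credential                                          # guest administrator credentials
$session = New-PSSession -VMName 'MyIoTCoreVM' -Credential $cred   # placeholder VM name

# Copy a log folder from inside the guest to the Hyper-V host
Copy-Item -FromSession $session -Path 'C:\Windows\Logs\DISM' -Destination 'C:\HostLogs\' -Recurse

Remove-PSSession $session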

I used sysprep on a VM (new portal) and lost connectivity to the machine

In the new portal, there's an icon that says 'Capture'. I assume this was for capturing an image of a VM (snapshot), but it was greyed out. Doing a little reading, several posts suggested running sysprep to prepare the machine for a capture.
I ran it according to those instructions; the machine appears to reboot, but all connectivity is lost.
Anyone know what's going on or how to fix it? Also, are there any ways to capture a snapshot in the new portal or do we need to use PS scripts?
the machine appears to reboot, but all connectivity is lost.
This is by-design behavior. Before capturing a VM image, we should use sysprep to generalize the VM. Generalizing a VM removes all your personal account information, among other things, and prepares the machine to be used as an image.
After we run sysprep, we lose all connectivity to the VM. When running sysprep, we should select Shutdown.
For now, we can't capture a VM image via the new Azure portal. We can use PowerShell to capture a VM image; refer to this link.
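For reference, here is a rough sketch of that capture flow with the AzureRM cmdlets of that era (unmanaged disks). The resource group, VM name, container and output path are placeholders, and the parameter names should be double-checked against Get-Help for your installed module version:

$rg     = 'MyResourceGroup'   # placeholder
$vmName = 'MyVM'              # placeholder

# After running sysprep (with Shutdown) inside the guest, deallocate the VM
Stop-AzureRmVM -ResourceGroupName $rg -Name $vmName -Force

# Mark the VM as generalized
Set-AzureRmVM -ResourceGroupName $rg -Name $vmName -Generalized

# Capture the image; the saved JSON template can later be used to create new VMs
Save-AzureRmVMImage -ResourceGroupName $rg -Name $vmName -DestinationContainerName 'vmimages' -VHDNamePrefix 'captured' -Path 'C:\temp\captureTemplate.json'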
you could create a virtual machine from an image. I can't find the same function in the new portal.
We can't use the new Azure portal to create a VM from an image either, but we can do it with PowerShell; refer to the link.
Most important:
Before you capture a VM image, back up your VM's VHD first, because the process deletes the original virtual machine after it's captured.
The latest version of Azure PowerShell is 3.6.0; you can install it from this page.

Is it possible for DSC to deal with the creation and advanced configuration of Virtual Machines?

I'm trying to create a configuration, using PowerShell DSC, that would help me build a SharePoint farm out of virtual machines. Assuming that I have a Windows 10 machine with Hyper-V installed, I would like my configuration script to create the required VMs, for example DC, SPA1, SPW1, SPW2 and SPDB1, configure their network connections and connect them to a domain controller (DC1), then proceed to install the SharePoint/SQL Server prerequisites, perform the installation, and go on to configure the farm once it is available.
I've created configurations that complete the various stages, but I am unable to figure out how to connect them so that they work in an orchestrated manner. For example, I can create the VMs, or I can perform the install and configuration of SharePoint, but I can't get these configurations to work in tandem.
Having read the DSC documentation, I thought it might be possible using composite resources, but I am unable to get the configuration to continue onto the new virtual machine after creation.
From the composite resource documentation:
configuration RenameVM
{
    Import-DscResource -Module TestCompositeResource

    Node localhost
    {
        xVirtualMachine VM
        {
            VMName = "Test"
            SwitchName = "Internal"
            SwitchType = "Internal"
            VhdParentPath = "C:\Demo\VHD\RTM.vhd"
            VHDPath = "C:\Demo\VHD"
            VMStartupMemory = 1024MB
            VMState = "Running"
        }
    }

    Node "192.168.10.1"
    {
        xComputer Name
        {
            Name = "SQL01"
            DomainName = "fourthcoffee.com"
        }
    }
}
Ideally the node names would be declared dynamically in the configuration data rather than as explicitly defined IP addresses. I'm also having trouble with my Hyper-V configuration creating multiple switches, but that's a separate issue. So I guess my question is:
Is it possible to create a configuration that deals with the creation and advanced configuration of Virtual Machines?
The problem you are running up against is a conceptual one of what DSC does.
Reading the document that you linked, it says
Configurations are declarative PowerShell scripts which define and configure instances of resources. Upon running the configuration, DSC (and the resources being called by the configuration) will simply “make it so”, ensuring that the system exists in the state laid out by the configuration.
DSC is designed to configure an instance of a resource. At its basic level a DSC configuration is run on a single machine, configuring that machine into a specified state.
DSC scripts should be constrained to work within the boundaries of the machine that they are running on. It seems that this is part of the problem you are experiencing.
Say you have two sets of scripts: a deploy-VM script that runs against a Hyper-V server, and a SharePoint build that then configures the VM once it has launched. It seems that what you are trying to do is launch the SharePoint script from within the Hyper-V deploy script. At that stage, though, the SharePoint server is outside the boundary of control of the Hyper-V server (apart from its atomic VM capabilities: start, stop, delete, etc.).
Instead, what I would suggest is to treat them as two entirely separate entities. There is no need for a scripted connection between creating a VM and installing SharePoint.
At a high level, your pipeline would look something like this:
1. Run the deploy configuration to create a new VM. At the point where that VM is running, that configuration is complete; it has no other actions.
2. The VM builds and starts; part of its initial configuration is to run a bootstrap script that tells it its function.
3. The VM contacts the DSC server, tells it its function, and requests any configurations that are available for it (see the sketch after this list).
4. The VM downloads its configurations and configures itself as a SharePoint server (or SQL Server, etc.).
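As an illustration of step 3, the bootstrap could apply an LCM meta-configuration that points the new VM at a DSC pull server and names the configuration it should request; the server URL, registration key and configuration name below are all placeholders:

[DSCLocalConfigurationManager()]
configuration PullClientConfig
{
    Node localhost
    {
        Settings
        {
            RefreshMode        = 'Pull'
            ConfigurationMode  = 'ApplyAndAutoCorrect'
            RebootNodeIfNeeded = $true
        }

        ConfigurationRepositoryWeb PullServer
        {
            ServerURL          = 'https://dscpull.example.com:8080/PSDSCPullServer.svc'   # placeholder
            RegistrationKey    = '00000000-0000-0000-0000-000000000000'                   # placeholder
            ConfigurationNames = @('SharePointServer')                                    # role requested by the bootstrap
        }
    }
}

PullClientConfig
Set-DscLocalConfigurationManager -Path .\PullClientConfig -Verbose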
If there are external dependencies, e.g. you can't configure SharePoint before SQL has completed, then simply have a DependsOn keyed off a shared file, i.e. wait until \\server\share\sqlcompleted.txt exists, or whatever other mechanism fits your environment; a sketch of that approach follows.
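As a hypothetical sketch of that sentinel-file approach, a Script resource can test for the shared file while later resources declare DependsOn against it, so they only apply once the SQL build has signalled completion (the share path comes from the sentence above; the feature resource is just an example):

Configuration SharePointServer
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node localhost
    {
        # Fails the pass while the sentinel file is missing, so DSC retries on the next consistency run
        Script WaitForSql
        {
            TestScript = { Test-Path '\\server\share\sqlcompleted.txt' }
            SetScript  = { throw 'SQL build has not completed yet; retrying on the next consistency check.' }
            GetScript  = { @{ Result = (Test-Path '\\server\share\sqlcompleted.txt') } }
        }

        # Anything that must wait for SQL simply depends on the script resource above
        WindowsFeature NetFramework
        {
            Name      = 'NET-Framework-Features'
            Ensure    = 'Present'
            DependsOn = '[Script]WaitForSql'
        }
    }
}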
Building servers this way removes dependencies: if you decide you want to switch to ESX, all you need to change is your deploy script. The same is true if you move everything to a cloud deployment.