PowerCLI New-CDDrive - invalid datastore path

I'm trying to mount an ISO from a datastore using VMware PowerCLI. Code example:
$IsoPath = "vmstores:\1.2.2.1900443\datacenter1\datastore1\files.iso"
$cd = New-CDDrive -VM Vm001 -ISOPath $IsoPath
This fails with the error: New-CDDrive The operation for the entity "Vm001" failed with the following message: "Invalid datastore path 'vmstores:\1.2.2.1900443\datacenter1\datastore1\files.iso'"
The path is valid. I confirm with:
Get-ChildItem "vmstores:\1.2.2.1900443\datacenter1\datastore1\files.iso"
Output:
Name Type Id
---- ---- --
Files.iso DatastoreFile
What is wrong with the command?

I guess that the vmstores: drive is a PowerShell provider available on your local system, but the VM host has no idea about it, and the path needs to be in the form the host understands, e.g. '[DatastoreName] folder\folder2\file.iso'.
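For example, a minimal sketch (the datastore name and ISO location are read off the vmstores: path in the question; -StartConnected is optional):

# Datastore-path form the host understands: '[DatastoreName] folder/file.iso'
$IsoPath = "[datastore1] files.iso"
$cd = New-CDDrive -VM (Get-VM -Name "Vm001") -IsoPath $IsoPath -StartConnected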


PowerShell: validate URL path, incorrect path vs non-public path

I am trying to validate a URL that, if valid, I will use to download a file. But I want to validate the domain and the full path BEFORE doing the download, so I can provide a meaningful error log.
Given paths like the following
www.validDomain.com/downloads/validDownload.zip
www.validDomain.com/downloads/invalidDownload.zip
www.invalidDomain.com/downloads/validDownload.zip
I want to be able to report
invalid path: /downloads/invalidDownload.zip
invalid domain: www.invalidDomain.com
I can use
$uri = [System.Uri]$path
if (Resolve-DnsName -Name:$uri.host) {}
to test the domain. I can then use
if (Invoke-Webrequest $uri.OriginalString -DisableKeepAlive -UseBasicParsing -Method:head) {}
to test the full path without actually doing the download, but the only error I get is "The remote server returned an error: (403) Forbidden.", both when the path doesn't exist and when the path does exist but isn't publicly available.
Is there any way, on the PowerShell side, to differentiate the permissions issue vs the incorrect path issue?
Or do I have no choice but to provide a wishy-washy error in the log? I know that effectively the path does not exist for a user who doesn't have permissions; I am just hoping there is some client-side way to differentiate. But I may be running into the limitations of how the web has been implemented.
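For reference, the HTTP status code of the failure can at least be read client-side. A minimal sketch (Windows PowerShell 5.x, where a non-success response makes Invoke-WebRequest throw a WebException); note this only helps when the server returns different codes for missing vs. non-public paths, which the 403-for-both behaviour described above defeats:

try {
    Invoke-WebRequest $uri.OriginalString -DisableKeepAlive -UseBasicParsing -Method Head | Out-Null
    "valid: $($uri.OriginalString)"
} catch [System.Net.WebException] {
    # The failed response still carries its HTTP status code.
    $code = [int]$_.Exception.Response.StatusCode
    switch ($code) {
        404     { "invalid path: $($uri.AbsolutePath)" }
        403     { "forbidden (exists, but not public?): $($uri.AbsolutePath)" }
        default { "HTTP $code for: $($uri.OriginalString)" }
    }
}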

Azure Add-AzureDisk is not working

I am using PowerShell ISE. When I run Add-AzureDisk, I get a CLI wizard and fill in the DiskName; I have the VHD file URI in my clipboard (copied from the portal).
When I use the URI without quotes I get:
Add-AzureDisk : Invalid URI: cannot parse the hostname.
At line:1 char:1
Add-AzureDisk
~~~~~~~~~~~~~
CategoryInfo : NotSpecified: (:) [Add-AzureDisk], UriFormatException
FullyQualifiedErrorId : System.UriFormatException,Microsoft.WindowsAzure.Commands.ServiceManagement.IaaS.AddAzureDiskCommand
When I do quote it ("uri here") I get:
Add-AzureDisk : Invalid URI: URI-scheme is invalid. At line:1 char:1
I started to think that my PowerShell modules were out of date or something, so I ran Get-Module AzureRm.Profile -ListAvailable as suggested here:
Directory: C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ResourceManager\AzureResourceManager
ModuleType Version Name ExportedCommands
---------- ------- ---- ----------------
Manifest 4.0.0 AzureRM.Profile {Disable-AzureRmDataCollection, Disable-AzureRmContextAutosave,...
But I also have v5 (found this on the docs website):
Get-Module -ListAvailable -Name AzureRm.Resources | Select Version
Version
5.0.0
As you might have guessed, I am more used to the web portal. But I am trying to create a new VM with two unmanaged disk VHDs which are in my blob storage container.
Edit: I tried the Azure CLI:
az vm create -n "tmp-vm" -g "resource-tmp" --attach-os-disk "https://the-uri-copied-from-ui.blob.core.windows.net/vhd-container/vm-osdisk.vhd" --size Standard_DS1_v2 --use-unmanaged-disk --os-type windows
and got:
At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.
{
    "status": "Failed",
    "error": {
        "code": "ResourceDeploymentFailure",
        "message": "The resource operation completed with terminal provisioning state 'Failed'.",
        "details": [
            {
                "code": "DiskBlobPendingCopyOperation",
                "message": "Disk blob https://the-uri-copied-from-ui.blob.core.windows.net/vhd-container/vm-osdisk.vhd is not ready. Copy state: Failed. Please retry when the blob is ready."
            }
        ]
    }
}
Correlation ID: 0e1231a9-aa0e-4d79-8953-92ea43c658eb
I created the VHD with the PowerShell commands I found here:
https://stackoverflow.com/a/45569760/169714 - perhaps that failed? I did not get an error or anything. How can I check it?
Edit 2: I tried both templates and had a hard time getting the debug info, but I have found the error now, and it is the same as before:
The blob seems to have the expected size. The lease state says available. The last-modified date is a second ago, so does that mean the underlying copy is still in progress? I tried to run Get-AzureStorageBlobCopyState -Blob tmpvmosdisk.vhd -Container vhd-containers -WaitForComplete but that fails asking for an argument I had not supplied (and that was unknown to me):
Get-AzureStorageBlobCopyState : Could not get the storage context. Please pass in a storage context or set the current storage context.
Edit 3: the data disk seems flushed again. It was 512 GB and is now back to zero?
So I got the "not ready" message again when I wanted to add the VHD as a disk...
As Hannel said, Add-AzureDisk is a classic (ASM) command. You cannot use it to create an ARM-mode VM.
--attach-os-disk requires a managed disk; you are giving it an unmanaged disk (VHD), so you get that error. See this link.
For your scenario, the easy way is to create the VM from a template. You could use this template: Create a Virtual Machine from a User Image.
If you have an existing VNet, you could also use this template.
I can highlight multiple issues; I will try to answer them all as best I can.
First, Add-AzureDisk is an ASM (classic deployment) command, but you are mentioning the AzureRM module. Is yours an ARM or ASM deployment?
Second, your CLI command is an ARM deployment; it failed because you copied the VHD and the copy operation was not yet done, so you cannot use the VHD yet. You should be able to use the az storage blob show command to validate that the VHD copy/move has completed before using the VHD, as sketched below.
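A minimal sketch of that check (the storage account, container, and blob names are placeholders, not values from the question; the classic Azure.Storage cmdlets also need an explicit storage context, which is what the Get-AzureStorageBlobCopyState error above was complaining about):

# Azure CLI: inspect the blob's copy status before attaching it
az storage blob show --account-name "mystorageacct" --container-name "vhd-container" --name "vm-osdisk.vhd" --query "properties.copy"

# Azure PowerShell equivalent, passing an explicit storage context
$ctx = New-AzureStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey "<key>"
Get-AzureStorageBlobCopyState -Blob "vm-osdisk.vhd" -Container "vhd-container" -Context $ctx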
Hope this helps.

Running a vboxmanage clonemedium from a PowerShell script

This is probably nothing, but I'm not able to make a simple VirtualBox vboxmanage clonemedium command run correctly in a PS script.
Seems like there's a quoting issue. I've tried a lot, with no results.
My paths to the source and destination files are previously stored into 2 variables.
If I trace them, they are perfectly OK.
$sourceFile = "C:\VirtualBox VMs\OdooV8-Clone1\Odoo-imchem-64b-Clone1.vdi\"
$destinationFile = "E:\TestVMbackup\$thedate\OdooV8-imchem-clone_$theShortDate.vdi"
vboxmanage clonemedium disk '$sourceFile' '$DestinationFile' --variant Fixed
This last try returns:
VBoxManage.exe: error: Could not find file for the medium 'C:\Program Files\Oracle\VirtualBox\$sourceFile' (VERR_FILE_NOT_FOUND)
VBoxManage.exe: error: Details: code VBOX_E_FILE_ERROR (0x80bb0004), component MediumWrap, interface IMedium, callee IUnknown
Using:
vboxmanage clonemedium disk "$sourceFile" "$DestinationFile" --variant Fixed
was worse.
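The error message itself gives a hint: VBoxManage received the literal string '$sourceFile' (and resolved it relative to its own directory), which is what single quotes do in PowerShell; variables only expand inside double quotes. The trailing backslash in $sourceFile also looks suspect, since a backslash immediately before a closing quote can mangle the argument when PowerShell re-quotes it for a native executable. A minimal sketch of a fix under those assumptions:

# Double quotes expand the variables; no trailing backslash on the source path.
$sourceFile      = "C:\VirtualBox VMs\OdooV8-Clone1\Odoo-imchem-64b-Clone1.vdi"
$destinationFile = "E:\TestVMbackup\$thedate\OdooV8-imchem-clone_$theShortDate.vdi"
# Call VBoxManage explicitly via the call operator so the paths reach it intact.
& "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" clonemedium disk $sourceFile $destinationFile --variant Fixed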

rsync: #ERROR: auth failed on module tomcat_backup

I just can't figure out what's going on with my rsync setup. I'm running rsync on RHEL5, IP = xx.xx.xx.97. It's getting files from another RHEL5 box, IP = xx.xx.xx.96.
Here's what the log (which I specified on the rsync command line) shows on xx.97 (the one requesting the files):
(local time)
2015/08/30 13:40:01 [17353] #ERROR: auth failed on module tomcat_backup
2015/08/30 13:40:01 [17353] rsync error: error starting client-server protocol (code 5) at main.c(1530) [receiver=3.0.6]
Here's what the log (which is specified in the rsyncd.conf file) shows on xx.96 (the one supplying the files):
(UTC time)
2015/08/30 07:40:01 [8836] name lookup failed for xx.xx.xx.97: Name or service not known
2015/08/30 07:40:01 [8836] connect from UNKNOWN (xx.xx.xx.97)
2015/08/30 07:40:01 [8836] auth failed on module tomcat_backup from unknown (xx.xx.xx.97): password mismatch
Here's the actual rsync.sh command called from xx.xx.xx.97 (the requester):
export RSYNC_PASSWORD=rsyncclient
rsync -havz --log-file=/usr/local/bin/RSync/test.log rsync://rsyncclient@xx.xx.xx.96/tomcat_backup/ProcessSniffer/ /usr/local/bin/ProcessSniffer
Here's the rsyncd.conf on xx.xx.xx.97:
lock file = /var/run/rsync.lock
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
[files]
name = tomcat_backup
path = /usr/local/bin/
comment = The copy/backup of tomcat from .96
uid = tomcat
gid = tomcat
read only = no
list = yes
auth users = rsyncclient
secrets file = /etc/rsyncd.secrets
hosts allow = xx.xx.xx.96/255.255.255.0
Here's the rsyncd.secrets on xx.xx.xx.97:
files:files
Here's the rsyncd.conf on xx.xx.xx.96 (the supplier of files):
Note: there is also a cwrsync (the Windows version of rsync) successfully pulling files (xx.xx.xx.100).
Note: yes, there is the possibility of xx.96 requesting files from xx.97. However, this is NOT actually happening; it's commented out of the init.d mechanism.
lock file = /var/run/rsync.lock
log file = /var/log/rsync.log
pid file = /var/run/rsync.pid
strict modes = false
[files]
name = tomcat_backup
path = /usr/local/bin
comment = The copy/backup of tomcat from xx.97
uid = tomcat
gid = tomcat
read only = no
list = yes
auth users = rsyncclient
secrets file = /etc/rsyncd.secrets
hosts allow = xx.xx.xx.97/255.255.255.0, xx.xx.xx.100/255.255.255.0
Here's the rsyncd.secrets on xx.xx.xx.96:
files:files
It turned out to be something else: I had a script calling the rsync command, and that script was causing the problem. The actual rsync command line was OK.
Apologies.
This is what I went through when I got this error. My first thought was to check the rsync server log, but it was not in the place configured in rsyncd.conf. Then I checked the log printed by systemctl status rsyncd:
rsyncd[23391]: auth failed on module signaling from unknown (172.28.15.10): missing secret for user "rsync_backup"
rsyncd[23394]: Badly formed boolean in configuration file: "no # rsync daemon before transmission, change to the root directory and limited within.".
rsyncd[23394]: params.c:Parameter() - Ignoring badly formed line in configuration file: ignore errors # ignore some io error informations.
rsyncd[23394]: Badly formed boolean in configuration file: "false # if true, cannot upload file to this server.".
rsyncd[23394]: Badly formed boolean in configuration file: "false # if true, cannot download file from this server.".
rsyncd[23394]: Badly formed boolean in configuration file: "false # if true, can only list files here.".
Combined with the fact that the log configuration did not take effect, it seems that the inline comments after each configuration line in rsyncd.conf were making the settings invalid. So I deleted those # ... comments and restarted rsyncd.
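In other words, rsyncd.conf treats everything after the = as part of the value, so a trailing # comment corrupts booleans and paths alike. A minimal before/after sketch (the parameter name here is an assumption reconstructed from the logged values above):

# broken: the comment becomes part of the boolean value
read only = false # if true, cannot upload file to this server

# valid: comment moved to its own line
# if true, clients cannot upload files to this server
read only = false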

Cannot bind argument to parameter 'Path' because it is an empty string

I am trying to run a custom script on a Windows AWS AMI. The steps I am using are as given here:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/mon-scripts-powershell.html
My Instance is already associated with IAM role and credential file path is set.
I am trying to run the following command in PowerShell:
.\mon-put-metrics-mem.ps1 -mem_util -mem_used -mem_avail -page_avail -page_used -page_util -memory_units Megabytes
The error I am getting is:
Cannot bind argument to parameter 'Path' because it is an empty string.
Your script is not able to see the AWS_CREDENTIAL_FILE environment variable.
So try setting your credential file:
setx AWS_CREDENTIAL_FILE C:\aws\myCredentialFile.txt
Then open a new PowerShell window; if you attempt to run it in the same window, it will not see the AWS_CREDENTIAL_FILE env var (setx only affects sessions started after it runs). Now try running:
.\mon-put-metrics-mem.ps1 -mem_util -mem_used -mem_avail -page_avail -page_used -page_util -memory_units Megabytes -verbose
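Alternatively, a minimal sketch that avoids opening a new window by setting the variable for the current session only (same assumed file path as above):

# Affects only the current PowerShell session; setx, by contrast, only affects future sessions.
$env:AWS_CREDENTIAL_FILE = "C:\aws\myCredentialFile.txt"
.\mon-put-metrics-mem.ps1 -mem_util -mem_used -mem_avail -page_avail -page_used -page_util -memory_units Megabytes -verbose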