What is wrong with this EC2 UserData PowerShell script?

I'm using AWS EC2 to launch a new fleet of spot instances and trying to set up a PowerShell script to run at launch, as per the instructions here: http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-windows-user-data.html#user-data-powershell
Here's the powershell script which generates a name for the server with a random string and then restarts the computer:
<powershell>
# Build a random three-letter suffix from A-Z (ASCII 65-90)
$rnd = ([char[]](65..90) | Get-Random -Count 3) -join ''
$name = "spot-server-$rnd"
# Rename the machine, then force an immediate restart so the new name takes effect
Rename-Computer -NewName $name
shutdown /r /f
</powershell>
I've added the script to the UserData field when launching the instances; however, when the instance launches, the script doesn't seem to run.
NOTE: when I run the script from a PowerShell terminal it works (renames the computer and restarts the machine).
UPDATE/SOLUTION: I believe the issue was that the AMI I used did not have User Data enabled in the EC2 Service Properties. I solved the issue by following these steps:
Go to EC2 Service Properties, check the User Data checkbox, and click "Shutdown with Sysprep"
Create a new AMI from the instance that was just shut down
Launch a new instance from the new AMI and specify the User Data script
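If you'd rather script that change than click through the GUI, here is a minimal sketch, assuming the AMI runs the older EC2Config service (the config path and the Ec2HandleUserData plugin name are assumptions; EC2Launch-based AMIs are configured differently):
$configPath = "$env:ProgramFiles\Amazon\Ec2ConfigService\Settings\Config.xml"
[xml]$config = Get-Content -Path $configPath
# Flip the user-data plugin to Enabled so the next boot processes the UserData script
$plugin = $config.Ec2ConfigurationSettings.Plugins.Plugin | Where-Object { $_.Name -eq 'Ec2HandleUserData' }
$plugin.State = 'Enabled'
$config.Save($configPath)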

Related

DFS installation through GCE instance startup script not working

I'm trying to install DFS on a Windows 2012 R2 instance in GCP. The instance has a startup script, which does this:
$code = '
Write-Host "Setting up DFS Replication for Assets"
Start-Sleep 5
Add-DfsrMember -GroupName "CMS" -ComputerName $env:ComputerName
Start-Sleep 5
Set-DfsrMembership -GroupName "CMS" -FolderName "Assets" -ComputerName $env:ComputerName -ContentPath "C:\web\Proof_web\Website\Assets" -ReadOnly 1 -Force
Start-Sleep 5
Add-DfsrConnection -GroupName "CMS" -SourceComputerName gcp-staging-app-1 -DestinationComputerName $env:ComputerName
dfsrdiag StaticRPC /port:49200 /Member:$env:ComputerName
Start-Sleep 5
Restart-Service "DFSR"
Start-Sleep 5
Dfsrdiag PollAD /Member:gcp-staging\$env:computername
'
echo $code
Write-Host "Running powershell to install and configure DFS"
Start-Process -FilePath powershell.exe -ArgumentList $code -verb RunAs -WorkingDirectory C:\installers
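As an aside: if the quoting in that Start-Process call ever becomes a problem, a hedged alternative is to hand the same block over via -EncodedCommand (a sketch, not part of the original script):
$bytes = [System.Text.Encoding]::Unicode.GetBytes($code)
$encoded = [Convert]::ToBase64String($bytes)
Start-Process -FilePath powershell.exe -ArgumentList "-EncodedCommand $encoded" -Verb RunAs -WorkingDirectory C:\installers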
I can see in the serial output that all these things look to be happening. When I RDP onto the instance and run a "Get-DFSReplicationGroup", I see what I expect, BUT when I open DFS Management mmc, there's nothing there. The "Namespaces" and "Replication" headers are there, but there's nothing underneath them.
I can then take the same code, run it manually in PowerShell ISE, and it all works as expected, after a service restart on the member and the source instance.
Somebody, please tell me what sort of idiot I am. Be gentle.
Updates: Gave up on the startup script approach; I'm pretty sure it's permissions. I'm finding articles where MS advisors say the user has to be a domain admin, which seems pretty whack. I'm now trying to run the script from a scheduled task, and it's the same issue: permissions. If I add the service account to the delegated permissions in DFS, I now get this error:
"Could not add the computer to the replication group. Computer: WEB-QZL Replication group: "CMS" Retrieving the COM class factory for remote component with CLSID {CEFE3B33-B60F-44FC-BFE4-D354A1CE39EE} from machine WEB-QZL.domain.local failed due to the following error: 80070005 WEB-QZL.domain.local." Why is this process so overly complicated?
And just to clarify: if I add the svc account to Domain Admins in AD, it works. I don't want a svc account to be a domain admin. Just tell me the specific permission, MS! This is killing me.
Spent a bit of time messing about with this now and went with a run-once scheduled task in the end that calls the PS script, as I couldn't get it to work on startup without passing credentials in the script (which we didn't want to do), and I'm not aware of any way to change the account that startup scripts run under in GCP.
So, for a domain user / service account to be able to do this via the script called from a scheduled task, I had to give the service account permissions via GPO. The policy / right is called "Synchronize directory service data". Once the service account had that privilege, the scheduled task ran and the new member was added, directories targeted, etc.
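For reference, a minimal sketch of registering that kind of run-once task with the ScheduledTasks cmdlets; the task name, script path, and service account below are hypothetical placeholders:
# Hypothetical names: adjust the script path and service account to your environment
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-ExecutionPolicy Bypass -File C:\installers\Configure-Dfs.ps1'
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date).AddMinutes(2)
Register-ScheduledTask -TaskName 'Configure-DFS' -Action $action -Trigger $trigger -User 'DOMAIN\svc-dfs' -Password 'P@ssw0rd!' -RunLevel Highest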
Thanks all for your help. Hope this helps someone else in the future.
All the best.

How can I resize the Docker Desktop Virtual Machine on Windows 10 from a PowerShell script?

I am attempting to write a PowerShell script (using PS Core 7.0) to install and configure a Kubernetes cluster running on Kind on the Windows 10 machines used by my teams. I have a working script to start up and configure the cluster; the only issue is that I would like to (need to) ensure the Docker Desktop VM has enough memory available to run a few of our microservices inside the cluster at the same time.
I've got a bit of code cobbled together to perform the task, and it works up to the very last step, where I attempt to get the Docker daemon working again after the restart. As soon as I run the command to do that, the VM is reconfigured back to its previous memory size.
Here's what I have to perform the resizing:
Stop-Service *docker*
Get-VM DockerDesktopVM | Stop-VM
Get-VM DockerDesktopVM | Set-VMMemory -StartupBytes 12888MB
Get-VM DockerDesktopVM | Start-VM
Start-Service *docker*
# https://stackoverflow.com/questions/51760214/how-to-restart-docker-for-windows-process-in-powershell
& "$Env:ProgramFiles\Docker\Docker\DockerCli.exe" -SwitchDaemon
& "$Env:ProgramFiles\Docker\Docker\DockerCli.exe" -SwitchDaemon
Note: I found the post "How to restart docker for windows process in powershell?", which is where I got the last 2 lines.
In researching the issue further I have found that I can use the following single line instead, but I still have the same issue in that the memory size is reverted back once the command is run.
& "$Env:ProgramFiles\Docker\Docker\DockerCli.exe" -SwitchLinuxEngine
If I do not run either DockerCli.exe -SwitchDaemon twice or DockerCli.exe -SwitchLinuxEngine once then I get the error:
error during connect: Get http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.40/containers/json: open //./pipe/docker_engine: The system cannot find the file specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.
Is there a better way to go about resizing the VM memory or to shutdown and restart docker without causing the change to be reverted?
For anyone else who is attempting the same thing, or something similar: I got a hint from the Docker Desktop for Windows Community on GitHub that helped me find a solution. In a nutshell, the recommendation was to simply change the settings file directly. What I found worked was to:
Stop the Docker services (there are 2 of them)
Update the settings file (~\AppData\Roaming\Docker\settings.json)
Start the Docker services
Switch the daemon context to Linux (same as it was before, but it appears to need a nudge to pick things up after restarting the services)
Here's the PowerShell:
Stop-Service *docker*
$settingsFile = "$env:APPDATA\Docker\settings.json"
$settings = Get-Content $settingsFile | ConvertFrom-Json
$settings.memoryMiB = 8192
# -Depth keeps nested settings intact; ConvertTo-Json's default depth can flatten them
$settings | ConvertTo-Json -Depth 10 | Set-Content $settingsFile
Start-Service *docker*
& "$Env:ProgramFiles\Docker\Docker\DockerCli.exe" -SwitchLinuxEngine
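As a quick sanity check (a sketch; MemTotal is reported in bytes), you can confirm the engine came back with the new limit:
docker info --format '{{.MemTotal}}'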

Getting CimException: Invalid property when using Get-Disk with no parameters

I have a script which makes use of the Get-Disk command in Powershell. Intermittently, I get an error when using Get-Disk with no parameters:
$disk = Get-Disk | Where-Object { $_.Location -eq $Location }
Microsoft.Management.Infrastructure.CimException: Invalid property
at Microsoft.Management.Infrastructure.Internal.Operations.CimAsyncObserverProxyBase`1.ProcessNativeCallback(OperationCallbackProcessingContext callbackProcessingContext, T currentItem, Boolean moreResults, MiResult operationResult, String errorMessage, InstanceHandle errorDetailsHandle)
where $Location is the disk location (similar to PCIROOT(0)#PCI(1500)#PCI(0000)#SAS(P00T01L00)). The script this line is run from is part of our VM provisioning script, which gets run after the clone and VMWare customization script is run. This error does not always happen, and if I go and run the script manually later it succeeds every time leading me to believe it is a race condition of some sort. Any ideas as to why Get-Disk isn't working reliably?
Ultimately, this script is being kicked off from vRealize Orchestrator (vRO, formerly vCenter Orchestrator or vCO) using the Guest Script Manager plugin. This detail may not be relevant, but this script has only failed running when kicked off by this plugin.
Additional details:
Powershell Version: 4.0
OS Version: Windows Server 2012 R2
Hypervisor: VMWare vCenter Version 6.0.0 Build 5112533
vRO Version: 7.2
I ended up provisioning the disks with diskpart instead of the storage cmdlets, which works without issue. I did, however, find out that our script runs while the Windows installation is still completing, which may account for the storage cmdlets not working properly.
Follow Up: I confirmed that the storage cmdlets were indeed failing because the Windows installation was still completing. Now that I've figured out how to wait for it to finish, the storage cmdlets work fine every time.
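The wait itself isn't shown above; here is a minimal sketch of one way to do it, assuming the setup state can be read from the registry (the ImageState check is an assumption, not part of the original script):
# Poll the Windows setup state until the image reports setup/OOBE is complete
$setupKey = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup\State'
while ((Get-ItemProperty -Path $setupKey).ImageState -ne 'IMAGE_STATE_COMPLETE') {
    Start-Sleep -Seconds 10
}
$disk = Get-Disk | Where-Object { $_.Location -eq $Location }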

Exception calling AWS Send-SQSMessage PowerShell command

Environment
Windows 7
WMF / PowerShell 5.0 installed
AWS PowerShell version 3.3.36.0
Scenario
I'm trying to use the Amazon Web Services (AWS) PowerShell module to send a message to a queue. However, when I invoke the Send-SQSMessage command, I'm getting an exception thrown:
Send-SQSMessage : The specified queue does not exist for this wsdl version.
I've already set up my AWS credentials in the ~/.aws/credentials file, using the Set-AWSCredentials command. Here's the command I'm calling:
$text = (Get-ChildItem)[1] | ConvertTo-Json -Depth 1
Send-SQSMessage -QueueUrl https://sqs.us-east-1.amazonaws.com/redacted/myqueuename -MessageBody $text -ProfileName TrevorAWS
This error message can pop up when the configured region doesn't match up to the AWS resource that you're working with. To configure the region correctly, you have a couple of choices:
Use the Set-DefaultAWSRegion command to configure the default region. This prevents the need to specify the -Region parameter on every subsequent cmdlet call.
You can specify the -Region parameter on any AWS cmdlet, and force it to use that region for that single command invocation.
Use the Initialize-AWSDefaults command to set up your PowerShell environment.
After configuring the region that your SQS queue actually lives in, the Send-SQSMessage command should execute without throwing any exceptions.
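A minimal sketch of both options, assuming the queue lives in us-east-1 (adjust the region to wherever your queue actually is):
# Option 1: set a default region for the session
Set-DefaultAWSRegion -Region us-east-1
# Option 2: pass the region on the single call
Send-SQSMessage -QueueUrl 'https://sqs.us-east-1.amazonaws.com/redacted/myqueuename' -MessageBody $text -ProfileName TrevorAWS -Region us-east-1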

Drive Mapping with Azure Scale Sets using Desired State Configuration

I am running into an interesting issue. Maybe you fine folks can help me understand what's happening here. If there's a better method, I'm all ears.
I am running a DSC Configuration on Azure and would like to map a drive. I've read this really isn't what DSC is for, but I am not aware of any other way of doing this outside of DSC with Azure Scale Sets. Here's the portion of the script I am running into issues with:
Script MappedDrive
{
SetScript =
{
$pass = "passwordhere" | ConvertTo-SecureString -AsPlainText -force
$user = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "username",$pass
New-PSDrive -Name W -PSProvider FileSystem -root \\azurestorage.file.core.windows.net\storage -Credential $user -Persist
}
TestScript =
{
Test-Path -path "W:"
}
GetScript =
{
$hashresults = @{}
$hashresults['Exists'] = Test-Path W:
return $hashresults
}
}
I've also attempted this code in the SetScript section:
(New-Object -ComObject WScript.Network).MapNetworkDrive('W:','\\azurestorage.file.core.windows.net\storage',$true,'username','passwordhere')
I've also tried a simple net use command to map the drive instead of the fancier New-Object or New-PSDrive cmdlets. Same behavior.
If I run these commands (New-Object / net use / New-PSDrive) manually, the machine will map the drive, as long as I use a different drive letter. Somehow, the drive mapping is being attempted but never completes.
Troubleshooting I've done:
There is no domain in my environment. I am simply attempting to create a scale set and run DSC to configure the machine using the storage account credentials granted upon creation of the storage account.
I am using the username and password that is given to me by the Storage Account user id and access key (randomly generated key, with usually the name of the storage account as the user).
Azure throws no errors on running the DSC module (No errors in Event Log, Information Only - Resource execution sequence properly lists all of my sequences in the DSC file.)
When I log into the machine and check to see if the drive is mapped, I run into a disconnected network drive on the drive letter I want (W:).
If I open Powershell, I receive an error: "Attempting to perform the InitializeDefaultDrives operation on the 'FileSystem' provider failed."
If I run "Get-PSDrive" the W: drive does not appear.
If I run the SetScript code manually inside a Powershell Console, the mapped drive works fine under a different drive letter.
If I try to disconnect the W: drive, I receive "This network connection does not exist."
I thought maybe DSC needed some time before mapping and added a Sleep Timer, but that didn't work. Same behavior.
I had a similar problem before; while it didn't involve DSC, mounting an Azure File share would be fine until the server was restarted, then it would appear as a disconnected drive. This happened if I used New-Object / net use / New-PSDrive with the persist option.
I found the answer to that issue in the updated docs:
Persist your storage account credentials for the virtual machine
Before mounting to the file share, first persist your storage account
credentials on the virtual machine. This step allows Windows to
automatically reconnect to the file share when the virtual machine
reboots. To persist your account credentials, run the cmdkey command
from the PowerShell window on the virtual machine. Replace <storage-account-name>
with the name of your storage account, and <storage-account-key>
with your storage account key.
cmdkey /add:<storage-account-name>.file.core.windows.net /user:<storage-account-name> /pass:<storage-account-key>
Windows will now reconnect to your file share when the virtual machine
reboots. You can verify that the share has been reconnected by running
the net use command from a PowerShell window.
Note that credentials are persisted only in the context in which
cmdkey runs. If you are developing an application that runs as a
service, you will need to persist your credentials in that context as
well.
Mount the file share using the persisted credentials
Once you have a remote connection to the virtual machine, you can run
the net use command to mount the file share, using the following
syntax. Replace <storage-account-name> with the name of your storage
account, and <share-name> with the name of your File storage share.
net use <drive-letter>: \\<storage-account-name>.file.core.windows.net\<share-name>
Example:
net use z: \\samples.file.core.windows.net\logs
Since you persisted your storage account credentials in the previous
step, you do not need to provide them with the net use command. If you
have not already persisted your credentials, then include them as a
parameter passed to the net use command, as shown in the following
example.
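Based on the syntax above, such a call would look roughly like this (same sample names as the earlier example; the storage account key is a placeholder):
net use z: \\samples.file.core.windows.net\logs <storage-account-key> /user:samples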
Edit:
I don't have an Azure VM free to test it on, but this works fine on a Server 2016 Hyper-V VM:
Script MapAzureShare
{
GetScript =
{
}
TestScript =
{
Test-Path W:
}
SetScript =
{
Invoke-Expression -Command "cmdkey /add:somestorage.file.core.windows.net /user:somestorage /pass:somekey"
Invoke-Expression -Command "net use W: \\somestorage.file.core.windows.net\someshare"
}
PsDscRunAsCredential = $credential
}
In my brief testing the drive would only appear after the server was rebooted.
What I imagine is happening here:
DSC runs under the NT AUTHORITY\SYSTEM account, and unless the Credential attribute has been set, the computer account is used when pulling files from a network share. Looking at how Azure Files operates, permissions shouldn't be an issue, but running this whole process under NT AUTHORITY\SYSTEM could be. I suggest you try to run DSC as one of your VM's users and see if that works.
P.S. You could also try to perform the same operation against a VM with a network share where you are confident that the share/NTFS permissions are correct. You might need to enable anonymous access to your share for that to work.
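On the PsDscRunAsCredential line above: getting a credential into the compiled MOF requires either certificate encryption or an explicit opt-in to plain text. A minimal sketch of the plain-text variant, for lab use only (the configuration name MapAzureShareConfig is hypothetical, and supplying the credential works differently when the DSC extension on a scale set compiles the configuration for you):
$configData = @{
    AllNodes = @(
        @{
            NodeName                    = 'localhost'
            PSDscAllowPlainTextPassword = $true   # lab/testing only; prefer certificate encryption
        }
    )
}
$credential = Get-Credential   # e.g. the storage account name and key
MapAzureShareConfig -ConfigurationData $configData -OutputPath C:\Dsc
Start-DscConfiguration -Path C:\Dsc -Wait -Verbose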