Hello everyone, and thanks in advance for any answers.
Where I work we run several WS2016 virtual machines, and we have read that updates can be a pain because of the very long time they can take. We cannot afford to keep the services down for long, and we have several virtual machines to update soon.
In the same thread we read a piece of advice: cleaning the WinSxS folder can drastically reduce this time.
WS2016 already schedules this cleanup, but the task has a one-hour timeout, so if it takes longer than that the process gets killed.
The solution is to create the scheduled task manually, so we wrote a script that checks the current date against the date of the last update and, if the difference is more than 30 days, runs the command:
dism.exe /Online /Cleanup-Image /AnalyzeComponentStore
and then the command:
dism.exe /Online /Cleanup-Image /StartComponentCleanup
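In outline, the script does something like this (a sketch; the state-file path is a hypothetical placeholder for however you record the last run date):

# Sketch only: persist the last cleanup date in a text file (path is hypothetical)
$stateFile = 'C:\Scripts\LastComponentCleanup.txt'
$last = if (Test-Path $stateFile) { [datetime](Get-Content $stateFile) } else { [datetime]::MinValue }
if (((Get-Date) - $last).TotalDays -gt 30) {
    dism.exe /Online /Cleanup-Image /AnalyzeComponentStore
    dism.exe /Online /Cleanup-Image /StartComponentCleanup
    (Get-Date).ToString('yyyy-MM-dd') | Set-Content $stateFile
}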
Now the real question... One of the lines in the AnalyzeComponentStore output is:
Component Store Cleanup Recommended
and the value can be Yes or No.
Is there a way to check this value, so that the script launches StartComponentCleanup if it is "Yes" and exits if it is "No"?
Thanks again!
@Doug Maurer: this is the result of the AnalyzeComponentStore:
PS C:\> dism.exe /Online /Cleanup-Image /AnalyzeComponentStore
Deployment Image Servicing and Management tool
Version: 10.0.14393.3750
Image Version: 10.0.14393.3241
[===========================99.7%========================= ]
Component Store (WinSxS) information:
Windows Explorer Reported Size of Component Store : 8.08 GB
Actual Size of Component Store : 7.94 GB
Shared with Windows : 6.12 GB
Backups and Disabled Features : 1.49 GB
Cache and Temporary Data : 323.47 MB
Date of Last Cleanup : 2016-09-12 13:40:35
Number of Reclaimable Packages : 0
Component Store Cleanup Recommended : Yes
The operation completed successfully.
PS C:\>
There are several ways to achieve this; I will list two and you can choose the one you like better. Others may offer alternative approaches.
First, using Select-String: simply pipe the output into Select-String.
$output = @'
Deployment Image Servicing and Management tool Version: 10.0.14393.3750
Image Version: 10.0.14393.3241
[===========================99.7%========================= ]
Component Store (WinSxS) information:
Windows Explorer Reported Size of Component Store : 8.08 GB
Actual Size of Component Store : 7.94 GB
Shared with Windows : 6.12 GB
Backups and Disabled Features : 1.49 GB
Cache and Temporary Data : 323.47 MB
Date of Last Cleanup : 2016-09-12 13:40:35
Number of Reclaimable Packages : 0
Component Store Cleanup Recommended : Yes
The operation completed successfully.
'@
$output | Select-String "Component Store Cleanup Recommended : (\w*)" | foreach {$_.matches.groups[1].value}
To capture the result you can use the OutVariable common parameter of ForEach-Object, or just assign it normally:
$cleanup = $output | Select-String "Component Store Cleanup Recommended : (\w*)" | foreach {$_.matches.groups[1].value}
The second suggestion is to use the -match operator:
$cleanup = if($output -match "Component Store Cleanup Recommended : (\w*)"){$matches[1]}
Both will end up setting $cleanup to the yes/no value you're after.
Get-Variable cleanup
Name Value
---- -----
cleanup {Yes}
Now you can simply check if it's yes and run the cleanup if so.
if($cleanup -eq 'yes'){"run cleanup code"}
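Putting it together, a sketch of the whole gate; note that dism.exe returns an array of lines, so join them into one string before using -match:

$output = (dism.exe /Online /Cleanup-Image /AnalyzeComponentStore) -join "`n"
$cleanup = if ($output -match "Component Store Cleanup Recommended : (\w*)") { $matches[1] }
if ($cleanup -eq 'Yes') {
    dism.exe /Online /Cleanup-Image /StartComponentCleanup
}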
I'm using TigerVNC and trying to point to a specific xstartup, because I will need several unique startups for a given user. This is what I'm trying:
vncserver :5 -name "MyServer" -geometry 600x320 -depth 24 -AlwaysShared -fp /usr/share/X11/fonts/misc,/usr/share/X11/fonts/Type1,/usr/share/X11/fonts/100dpi -IdleTimeout 0 -SecurityTypes VncAuth -rfbauth /home/frogger123/.vnc/passwd -xstartup /home/frogger123/.vnc/mystartup
I am consistently getting
Unrecognized option: -xstartup
The docs on the TigerVNC page list this as a valid option. What am I doing wrong? Thanks
Edit: from the manual:
vncserver [:display#] [-name desktop-name] [-geometry widthxheight] [-depth depth] [-pixelformat format] [-fp font-path] [-fg] [-autokill] [-noxstartup] [-xstartup script] [Xvnc-options...]
-xstartup script
Run a custom startup script, instead of $HOME/.vnc/xstartup, after launching Xvnc. This is useful to run full-screen applications.
I was using the wrong version of TigerVNC
I am using PowerShell ISE. When I run Add-AzureDisk, I get a CLI wizard and fill in the DiskName; I have the VHD file URI in my clipboard (copied from the portal).
When I use the URI without quotes I get:
Add-AzureDisk : Invalid URI: cannot parse the hostname.
At line:1 char:1
Add-AzureDisk
~~~~~~~~~~~~~
CategoryInfo : NotSpecified: (:) [Add-AzureDisk], UriFormatException
FullyQualifiedErrorId : System.UriFormatException,Microsoft.WindowsAzure.Commands.ServiceManagement.IaaS.AddAzureDiskCommand
When I do use quotes ("uri here") I get:
Add-AzureDisk : Invalid URI: URI-scheme is invalid. At line:1 char:1
I started to think that my PowerShell modules were out of date or something, so I ran Get-Module AzureRm.Profile -ListAvailable as suggested here:
Directory: C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ResourceManager\AzureResourceManager
ModuleType Version Name ExportedCommands
---------- ------- ---- ----------------
Manifest 4.0.0 AzureRM.Profile {Disable-AzureRmDataCollection, Disable-AzureRmContextAutosave,...
But I also have v5 (found this on the docs website):
Get-Module -ListAvailable -Name AzureRm.Resources | Select Version
Version
5.0.0
As you might have guessed, I am more used to the web portal. But I am trying to create a new VM with two unmanaged disk VHDs which are in my blob storage container.
Edit: I tried the Azure CLI:
az vm create -n "tmp-vm" -g "resource-tmp" --attach-os-disk "https://the-uri-copied-from-ui.blob.core.windows.net/vhd-container/vm-osdisk.vhd" --size Standard_DS1_v2 --use-unmanaged-disk --os-type windows
and got:
At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.
{
"status": "Failed",
"error": {
"code": "ResourceDeploymentFailure",
"message": "The resource operation completed with terminal provisioning state 'Failed'.",
"details": [
{
"code": "DiskBlobPendingCopyOperation",
"message": "Disk blob https://the-uri-copied-from-ui.blob.core.windows.net/vhd-container/vm-osdisk.vhd is not ready. Copy state: Failed. Please retry when the blob is ready."
}
]
}
} Correlation ID: 0e1231a9-aa0e-4d79-8953-92ea43c658eb
I created the VHD with the PowerShell commands that I found here: https://stackoverflow.com/a/45569760/169714. Perhaps that failed? I did not get an error or anything. How can I consolidate it?
Edit 2: I tried both templates and had a hard time getting the debug info, but I have found the error now, and it is the same as before.
The blob seems to have the expected size. The lease state says available. The last-modified date is a second ago; does that mean the underlying storage operation is still in progress? I tried to run Get-AzureStorageBlobCopyState -Blob tmpvmosdisk.vhd -Container vhd-containers -WaitForComplete, but that gives an error about an argument that was unknown to me (and is not formally required):
Get-AzureStorageBlobCopyState : Could not get the storage context. Please pass in a storage context or set the current storage context.
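For reference, a sketch of supplying that context with the classic storage cmdlets; the account name and key here are placeholders:

$ctx = New-AzureStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<account-key>"
Get-AzureStorageBlobCopyState -Blob "tmpvmosdisk.vhd" -Container "vhd-containers" -Context $ctx -WaitForComplete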
Edit 3: the data disk seems to be flushed again. It was 512 GB and is now back to zero?
So I got the "not ready" message again when I wanted to add the VHD as a disk...
As Hannel said, Add-AzureDisk is a classic command; you cannot use it to create an ARM-mode VM.
--attach-os-disk requires a managed disk, but you are giving it an unmanaged disk (a VHD), so you get that error log. See this link.
For your scenario, the easy way is to create the VM from a template. You could use this template: Create a Virtual Machine from a User Image.
If you have an existing VNet, you could also use this template.
I can highlight multiple issues; I will try to answer them all as best as I can.
First, Add-AzureDisk is an ASM (classic deployment) command, but you are mentioning the AzureRM module. Is this an ARM or an ASM deployment?
Second, your CLI command is an ARM deployment. It failed because you copied the VHD and the copy operation was not yet done, so you cannot use the VHD yet. You should be able to use the az storage blob show command to validate that the VHD copy/move has completed before using the VHD.
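For example (a sketch; the container, blob, and account names are taken from the question's URI and may differ):

az storage blob show --container-name vhd-container --name vm-osdisk.vhd --account-name <account-name> --query "properties.copy.status"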
Hope this helps.
I am trying to script the elimination/backup of the OEM partition (which just restores the system to an outdated version of no practical use).
On many systems, DISKPART's list partition returns several recovery-type partitions: one is the official Microsoft Recovery Tools partition (WinRE) and the others come from the OEMs.
The first step is to safely identify the position of the WinRE partition. I did not find any more direct way in bcdedit or PowerShell than:
$renv=(bcdedit /enum "{default}" | Select-String "^recoverysequence" | Out-String | Select-String "{.+}").Matches.Value
(bcdedit /enum $renv | Select-String "^device" | Out-String | Select-String "\[.+\]").Matches.Value
This returns a string like:
[\Device\HarddiskVolume1]
where the volume number identifies the partition to use in DiskPart. (The remaining recovery partitions and the OEM-type partitions can then be backed up.)
Is this the correct procedure to identify the WinRE partition?
Is there a more direct and/or better approach?
There's a command line tool called ReagentC, and it's in the path, so you can call it from any administrative command prompt.
reagentc /info
...will produce some output like:
Windows RE status: Enabled
Windows RE location: \\?\GLOBALROOT\device\harddisk0\partition4\Recovery\WindowsRE
Boot Configuration Data (BCD) identifier: 496c58c4-71cb-11e9-af8f-001c42903d2e
Recovery image location:
Recovery image index: 0
Custom image location:
Custom image index: 0
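If you need the numbers for scripting, here is a sketch that scrapes the disk and partition out of that output (it assumes the English output format shown above):

$info = reagentc /info
$m = ($info | Select-String 'harddisk(\d+)\\partition(\d+)').Matches
if ($m) {
    $disk = [int]$m[0].Groups[1].Value
    $part = [int]$m[0].Groups[2].Value
    "WinRE is on disk $disk, partition $part"
}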
Also, if you're writing code to do the work, you can discover the recovery partition by calling a winapi function. It's an obnoxiously complicated API to call... but for what it's worth, it's DeviceIoControl with the control code IOCTL_DISK_GET_PARTITION_INFO_EX. If you're not using C or some other language that defines unions, this is a pain. The structure you get back varies with whether the disk is GPT or MBR format.
If the disk is MBR, the returned partition type will be 0x27; if it's a GPT drive, the partition type will be the GUID de94bba4-06d1-4d40-a16a-bfd50179d6ac.
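On recent PowerShell with the Storage module you can get at the same type information without P/Invoke; a sketch (note that OEM recovery partitions may carry the same GPT type GUID as WinRE, which is the asker's original problem):

Get-Partition | Where-Object {
    $_.GptType -eq '{de94bba4-06d1-4d40-a16a-bfd50179d6ac}' -or $_.MbrType -eq 0x27
} | Select-Object DiskNumber, PartitionNumber, Size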
Aside from streamlining the Select-String calls with a lookbehind regex, I don't see a better approach at the moment:
$renv=(bcdedit /enum "{default}" | Select-String "(?<=^recoverysequence\s+)({.+})").Matches.Value
(bcdedit /enum $renv | Select-String "(?<=^device.+)\[.+\]").Matches.Value
[\Device\HarddiskVolume5]
I'm running this PerfView command:
PerfView.exe /Merge:true /zip:true /NoNGenRundown /NoClrRundown /KeepAllEvents /ThreadTime /DumpHeap /NoView /NoGui /MaxCollectSec:30 collect
but it seems that even though I set /MaxCollectSec:30, the actual data collection process does not stop and keeps adding data to the PerfViewData.etl file.
This is the output from the console window that PerfView opens when running the command:
VERBOSE LOG IN: PerfViewData.log.txt
EXECUTING: PerfView /Merge:true /zip:true /NoNGenRundown /NoClrRundown /KeepAllEvents /ThreadTime /DumpHeap /NoView /NoGui /MaxCollectSec:30 collect
Pre V4.0 .NET Rundown disabled, Type 'E' to enable symbols for V3.5 processes.
Do NOT close this console window. It will leave collection on!
Type S to stop collection, 'A' will abort.
Kernel Log: C:\PerfView\PerfViewData.kernel.etl
User mode Log: C:\PerfView\PerfViewData.etl
Starting collection at 12/07/2017 14:26:32
Collecting 10 sec: Size= 10.5 MB.
Collecting 20 sec: Size= 16.4 MB.
Exceeded MaxCollectSec 30
So there it is: "Exceeded MaxCollectSec 30", yet it keeps writing to the .etl files.
I want to send the client a PerfView command that collects system-wide data, so they can send me back the ZIP with all the ETL files. Currently the command does not stop. Does anybody know why? What should I add to or remove from the command so that it stops automatically after 30 seconds?
I know it's been a while, but it looks like the /DumpHeap switch is the problem here: if you remove it, the trace will finish on time. I checked the PerfView source code, and when DumpHeap is selected there is some interaction with the GUI window:
if (parsedArgs.DumpHeap)
{
// Take a heap snapshot.
GuiHeapSnapshot(parsedArgs, true);
// Ensure that we clean up the heap snapshot state.
parsedArgs.DumpHeap = false;
}
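So, per the above, the same command with /DumpHeap removed should stop on its own after 30 seconds:

PerfView.exe /Merge:true /zip:true /NoNGenRundown /NoClrRundown /KeepAllEvents /ThreadTime /NoView /NoGui /MaxCollectSec:30 collect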
You may want to create an issue in the PerfView repository describing your problem.
I use the following code to swap my newly deployed application from the staging slot into the production slot (swap VIP):
Get-HostedService -serviceName $serviceName -subscriptionId $subcription -certificate $certificate | Get-Deployment -slot staging | Move-Deployment | Get-OperationStatus -WaitToComplete
I thought that the -WaitToComplete flag would make sure all VMs had fully initialized before doing the swap. However, it doesn't: it performs the swap while the newly deployed application in the production slot is still initializing, and the application stays unavailable for about 5-10 minutes while it finishes initializing.
What is the best way to make sure that the application is fully initialized before doing the Swap VIP operation?
This PowerShell snippet will wait until every instance is ready (building on the answer @astaykov gave).
It queries the state of the running instances in the staging slot, and only if all are showing as 'ready' will it leave the loop.
$hostedService = "YOUR_SERVICE_NAME"
do {
# query the status of the running instances
$list = (Get-AzureRole -ServiceName $hostedService `
-Slot Staging `
-InstanceDetails).InstanceStatus
# total number of instances
$total = $list.Length
# count the number of ready instances
$ready = ($list | Where-Object { $_ -eq "ReadyRole" }).Length
Write-Host "$ready out of $total are ready"
$notReady = ($ready -ne $total)
If ($notReady) {
Start-Sleep -s 10
}
}
while ($notReady)
I am guessing that what you might actually be seeing is the delay it takes for the DNS entries to be propagated and become available.
What you should find is that once the status is reported as Ready, you may not be able to access your site using the staging URL "http://.cloudapp.net"; it might not come up. But if you look on the Management Portal you will see, at the bottom of the Properties, a value for 'VIP'. If you use that IP address, "http://xxx.xxx.xxx.xxx", you should be able to get to your site.
When you do a swap you will see similar behavior. It will take some time for the DNS updates to propagate, but you will likely find that you can still access the site with either the IP address or the staging address (if it has become available).
Finally, one question... based on your question it sounds like you might be deploying to staging as part of your build and then immediately promoting to a production deployment. Is this correct? If so, why not just deploy to the production slot directly? (I'm not suggesting that deploying directly into production is a best practice, but if that is your workflow I see no benefit to the temporary deployment to staging.)
Hope this helps!
I am not very familiar with PowerShell, but from my experience with shells in general, you are pipelining commands. Each segment before a pipe character (|) is a single command which passes its result to the next command in the pipe (the command after the pipe character). And because you are executing these commands before the deployment is fully complete, the newly deployed app gets swapped into the production slot too early.
The first thing to note here is that the -WaitToComplete argument applies only to the last command, which is Get-OperationStatus.
The other thing I see is that these PowerShell commands just do the VIP swap. What about the deployment?
From what you described, it appears that your build server auto-deploys to staging and you have a post-build event that executes the swap script. What Mike Erickson suggests here would make sense if your flow is like that: swap immediately after deploying to staging. Why would you deploy to staging if you are going to swap without checking application health first? However, I would not recommend a direct deployment (delete + deploy), but rather a service upgrade. When we do a service upgrade, our deployment keeps its public IP address; with delete + deploy we get a new public IP address, and the public IP address of a hosted service is only guaranteed not to change until the deployment is deleted.
Finally, you should expand your PowerShell script a bit: first include a routine which checks (and waits until) the staging slot is "ready", and only then perform the swap. As I said, I'm not much into PowerShell, but I'm sure this is feasible.
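For the upgrade path mentioned above, a hedged sketch using the classic (ASM) Azure module; the service name, package, and configuration paths are placeholders:

Set-AzureDeployment -Upgrade -ServiceName "YOUR_SERVICE_NAME" `
    -Package ".\MyApp.cspkg" -Configuration ".\ServiceConfiguration.cscfg" `
    -Slot Staging -Mode Auto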
Just my 2 cents.
UPDATE
After revisiting this guide, I now understand something. You are waiting for an operation to complete, but it is the VIP-swap operation you are waiting on. If your staging deployment is not yet ready, you have to wait for it to become ready first. And also, as Mike mentioned, there might be a DNS delay, which is noted at the end of the guide:
Note:
If you visit the production site shortly after its promotion, the DNS
name might not be ready. If you encounter a DNS error (404), wait a
few minutes and try again. Keep in mind that Windows Azure creates DNS
name entries dynamically and that the changes might take few minutes
to propagate.
UPDATE 2
Well, you will have to query all the roles and all of their instances and wait for all of them to be ready. Technically you could conduct the VIP swap with at least one ready instance per role, but I think that would complicate the script even more.
Here's a minor tweak to Richard Astbury's example above that will retry a limited number of times. All credit to him for the original sample code, so I'd vote for his answer as the most to-the-point one. I'm simply posting this variation here as an alternative for people to copy/paste as needed:
$hostedService = "YOUR_SERVICE_NAME"
# Wait roughly 10 minutes, plus time required for Azure methods
$remainingTries = 6 * 10
do {
$ready=0
$total=0
$remainingTries--
# query the status of the running instances
$list = (Get-AzureRole -ServiceName $hostedService -Slot Staging -InstanceDetails).InstanceStatus
# count the number of ready instances
$list | foreach-object { IF ($_ -eq "ReadyRole") { $ready++ } }
# count the number in total
$list | foreach-object { $total++ }
"$ready out of $total are ready"
if (($ready -ne $total) -and ($remainingTries -gt 0)) {
# Not all ready, so sleep for 10 seconds before trying again
Start-Sleep -s 10
}
else {
if ($ready -ne $total) {
throw "Timed out while waiting for service to be ready: $hostedService"
}
break;
}
}
while ($true)