Retrieve Values from text file and use them as Variable in PowerShell - powershell

Please don't kill me, as I am a newbie in Windows system administration ;) I started a few months ago, so I would kindly ask for a bit of help.
Basically, I would like to use values read from a text file as the variables ($ClusterName and $NewFSWPath) in the piece of PS code below:
$ClusterName = "GLA-CLU"
$NewFSWPath = "\\DC01\SQL-CLU"
#As quorum file share witness is a cluster core resource, the only way to remove an existing FSW in PS is to switch the cluster to node majority. It will remove the existing Cluster File Share Witness from the chosen cluster
Set-ClusterQuorum -Cluster $ClusterName -NodeMajority
#Set New Quorum File Share Witness for the cluster
#Add-ADGroupMember $FileShareSecurityGroup -Members "$ClusterName"
$t = $host.ui.RawUI.ForegroundColor
$host.ui.RawUI.ForegroundColor = "Yellow"
Write-Output "Setting A New Location for File Share Witness on cluster: '$($ClusterName)':"
Write-Host "`r"
$host.ui.RawUI.ForegroundColor = $t
Set-ClusterQuorum -Cluster $ClusterName -NodeAndFileShareMajority $NewFSWPath
Write-Host "`r"
$host.ui.RawUI.ForegroundColor = $t
$host.ui.RawUI.ForegroundColor = "Yellow"
Write-Output "Checking New File Share Witness Availability:"
$host.ui.RawUI.ForegroundColor = $t
Get-clusterresource -cluster $ClusterName | where-object {$_.ResourceType -like "File Share Witness"} | get-clusterparameter
Much appreciated.
Thank you.

Assuming your TEXT_FILE.TXT contains only two lines:
Clustername = GLA-CLU
NewFSWPath = \\DC01\SQL-CLU
then
$Path = "C:\TEXT_FILE.TXT"
$Lines = [System.IO.File]::ReadLines($Path)
# Member enumeration (PS 3.0+) splits every line on '=' and trims each piece,
# flattening the results into one array: name, value, name, value, ...
$Token = $Lines.Split('=').Trim()
$ClusterName = $Token[1]
$NewFSWPath = $Token[3]
Write-Host $ClusterName
write-host $NewFSWPath
output:
GLA-CLU
\\DC01\SQL-CLU
now you can use your variables $ClusterName, $NewFSWPath in your script as required.

Thanks, but I managed to sort it out using:
Get-Content 'fswsiteconf.txt' | ForEach-Object {
    $var = $_.Split('=')
    # Trim so the spaces around '=' don't end up in the variable name or value
    New-Variable -Name $var[0].Trim() -Value $var[1].Trim()
}
It works for me now using the piece of code above.
What do you think of it as a solution?
Thanks for your help.
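A variant of the same idea that avoids creating loose variables (a sketch; the file name fswsiteconf.txt and the key names are carried over from the snippets above) is to collect the pairs into a hashtable, splitting on the first '=' only so values containing '=' survive:

```powershell
$conf = @{}
Get-Content 'fswsiteconf.txt' | ForEach-Object {
    # Split into at most 2 parts, then trim whitespace on both sides
    $name, $value = $_ -split '=', 2
    $conf[$name.Trim()] = $value.Trim()
}

$ClusterName = $conf['Clustername']
$NewFSWPath  = $conf['NewFSWPath']
```

A hashtable also makes it easy to validate that both expected keys were actually present before running the cluster commands.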

Related

export to csv powershell script using multiple foreach statements

I have the following PowerShell script that reads from one CSV and exports to another. It works in terms of basic functionality. The script below currently exports like this:
USERS
jdoe
mprice
tsmith
Add-PSSnapin microsoft.sharepoint.powershell -ErrorAction SilentlyContinue
# csv file name
[parameter(Mandatory=$false)][string]$CsvFilePath = ".\AllSiteCollectionsLocal.csv"
$csvItems = Import-Csv $CsvFilePath
$resultsarray = @()
$firstObject = New-Object PSObject
# iterate lines in csv
foreach($Item in $csvItems)
{
$site = new-object Microsoft.SharePoint.SPSite($Item.SiteCollection)
$web = $site.openweb()
$siteUsers = $web.SiteUsers
Write-Host $Item.SiteCollection -ForegroundColor Green
foreach($user in $siteUsers)
{
Write-Host $user.LoginName
$loginnames = @{
USERS = $user.LoginName
}
$resultsarray += New-Object PSObject -Property $loginnames
}
$web.Dispose()
$site.Dispose()
$resultsarray | export-csv -Path c:\temp\sitesandusers.csv -NoTypeInformation
}
I need to export as below. Note that I don't even need a header, but I do need the $Item.SiteCollection value printed between each group of users under each site, so the outer foreach needs to print $Item.SiteCollection and then the inner foreach prints $user.LoginName:
http://test1.com
jdoe
mprice
http://test2.com
tsmith
I'm guessing you wanted to define parameters so your script can be called from elsewhere? As it stands, the [parameter()] attribute on $CsvFilePath is redundant to what PowerShell already does for you.
As for your question, you would just have to emit $Item.SiteCollection alongside the users in your PSObject. But even that isn't needed, as PowerShell's streaming capabilities let you assign loop output directly to a variable; so there is no need for +=, which rebuilds the array on every iteration and can be computationally expensive on larger lists, slowing overall performance. Now we end up with:
Param (
[parameter(Mandatory=$false)]
[string]$CsvFilePath = ".\AllSiteCollectionsLocal.csv"
)
Add-PSSnapin microsoft.sharepoint.powershell -ErrorAction SilentlyContinue
$csvItems = Import-Csv $CsvFilePath
$variable = foreach($Item in $csvItems)
{
$site = new-object Microsoft.SharePoint.SPSite($Item.SiteCollection)
$web = $site.openweb()
$siteUsers = $web.SiteUsers
Write-Host -Object $Item.SiteCollection -ForegroundColor Green
Write-Output -InputObject $Item.SiteCollection
foreach($user in $siteUsers)
{
Write-Host -Object $user.LoginName
Write-Output -InputObject $user.LoginName
}
$null = $web.Dispose()
$null = $site.Dispose()
}
$variable | Out-File -FilePath 'c:\temp\sitesandusers.csv'
Bypassing $variable, you can assign the output directly to the file by placing the export outside the first foreach statement; this requires wrapping the loop in the sub-expression operator $().
Also added a Param ( ) statement for your parameter declaration. I didn't change the parameter attributes, since they can show the author's intent regardless of whether they're needed.
I should probably add that Write-Output explicitly writes to the success stream, allowing the values to be assigned to the variable, whereas Write-Host writes to the information stream, so no object pollution (duplicates) occurs.
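For completeness, here is a sketch of the variant just described: the sub-expression wraps the loop so its output streams straight to the file (file path and the SharePoint calls are carried over from the answer above):

```powershell
$(foreach ($Item in $csvItems)
{
    $site = New-Object Microsoft.SharePoint.SPSite($Item.SiteCollection)
    $web  = $site.OpenWeb()

    # Emit the site URL, then each login name, into the success stream
    Write-Output -InputObject $Item.SiteCollection
    foreach ($user in $web.SiteUsers)
    {
        Write-Output -InputObject $user.LoginName
    }

    $null = $web.Dispose()
    $null = $site.Dispose()
}) | Out-File -FilePath 'c:\temp\sitesandusers.csv'
```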

How to calculate size of all Azure Storage Tables from a Subscription using Powershell

How can I calculate the size of all Azure Storage Tables in a subscription using PowerShell?
I tried to search online for a direct way of querying table sizes, but it looks like there isn't one.
Could someone please give me a working example of calculating an Azure Storage Table's size?
FOREACH ($SubscriptionID in $Subscriptions) {
Write-Host -ForegroundColor Green "Working on $N. $SubscriptionID"
$StorageAccounts = Get-AzStorageAccount
FOREACH ($StorageAccount in $StorageAccounts) {
$StorageAccountName = $StorageAccount.StorageAccountName
Write-Host -ForegroundColor Yellow "Working on $StorageAccountName"
$AllTables = Get-AzStorageTable -Context $StorageAccount.Context
FOREACH ($TableName in $AllTables) {
$Name = $TableName.Name
Write-Host -ForegroundColor Green "Working on $StorageAccountName,$Name"
Get-AzStorageTable -Name $TableName.Name -Context $StorageAccount.Context
}
}
$N = $N+1
}
You can calculate the size of all Azure Storage Tables, but the minimum granularity is all the tables in a storage account, not a specific table.
Try the command below; it works fine on my side.
$StorageAccounts = Get-AzStorageAccount
foreach($item in $StorageAccounts){
$id = $item.Id+"/tableServices/default"
$name = $item.StorageAccountName
$metric = Get-AzMetric -ResourceId $id -MetricName "TableCapacity" -WarningAction Ignore
$data = $metric.Data.Average/1024/1024
Write-Output "Tables in $name : $data MB"
}
Besides, it looks like you want to use the command across several subscriptions. If so, I think you need to run Set-AzContext to select the subscription before running the command above.
Set-AzContext -SubscriptionId "xxxx-xxxx-xxxx-xxxx"
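Putting the two pieces together, a sketch of looping over subscriptions (assuming $Subscriptions holds their IDs, as in the question's script) might look like:

```powershell
foreach ($SubscriptionID in $Subscriptions) {
    # Switch context so the Get-Az* calls below target this subscription
    Set-AzContext -SubscriptionId $SubscriptionID | Out-Null

    foreach ($item in Get-AzStorageAccount) {
        $id   = $item.Id + "/tableServices/default"
        $name = $item.StorageAccountName

        # TableCapacity is reported in bytes; convert to MB
        $metric = Get-AzMetric -ResourceId $id -MetricName "TableCapacity" -WarningAction Ignore
        $data   = $metric.Data.Average / 1024 / 1024
        Write-Output "Tables in $name : $data MB"
    }
}
```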

How to Load Component Services/DCOM Config SnapIn

I have a PS script to do some DCOM configuration. It works fine as long as I have the Component Services/DCOM Config snapin loaded. I want to load that programmatically so I can do all of this as part of an install package. Does anyone know how to do it? I don't know the name of the snapin to add/import.
To load the snapin I run comexp.msc -32 and click Component Services, Computers, My Computer, DCOM Configuration.
Thanks
I faced a similar problem. I couldn't find a way of loading Component Services in the DCOM Config snap-in, but I found a workaround: grant the user the default DCOM launch and activation permissions using this PowerShell script:
https://www.peppercrew.nl/index.php/2012/03/set-dcom-remote-access-via-powershell/
That way, you don't need to assign the user to that particular DCOM app.
Hope this helps.
This is the powershell script:
PARAM(
[string]$Principal = $(throw "`nMissing -Principal DOMAIN\Group"),
$Computers = $(throw "`nMissing -Computers ('server01','server02')"))
# USAGE:
# .\Set-RemotePermission-DCOM.ps1 -Principal "DOMAIN\" -Computers ('', '',...)
#
# EXAMPLE:
# .\Set-RemotePermission-DCOM.ps1 -Principal "DOMAIN\LG-Citrix-Admins" -Computers ('CTX_DC001', 'CTX_DC002')
#
# Inspired by Karl Mitschke's post:
# http://unlockpowershell.wordpress.com/2009/11/20/script-remote-dcom-wmi-access-for-a-domain-user/
#
# And inspired Brad Turner's post:
# http://social.technet.microsoft.com/Forums/en-US/ilm2/thread/5db2707c-87c9-4bb2-a0eb-912363e2814a/
function get-sid
{
PARAM ($DSIdentity)
$ID = new-object System.Security.Principal.NTAccount($DSIdentity)
return $ID.Translate( [System.Security.Principal.SecurityIdentifier] ).toString()
}
$sid = get-sid $Principal
#DefaultLaunchPermission - Local Launch, Remote Launch, Local Activation, Remote Activation
$DCOMSDDLDefaultLaunchPermission = "A;;CCDCLCSWRP;;;$sid"
#DefaultAccessPermission - Local Access, Remote Access
$DCOMSDDLDefaultAccessPermision = "A;;CCDCLC;;;$sid"
#PartialMatch
$DCOMSDDLPartialMatch = "A;;\w+;;;$sid"
foreach ($strcomputer in $computers)
{
write-host "`nWorking on $strcomputer with principal $Principal ($sid):"
# Get the respective binary values of the DCOM registry entries
$Reg = [WMIClass]"\\$strcomputer\root\default:StdRegProv"
$DCOMDefaultLaunchPermission = $Reg.GetBinaryValue(2147483650,"software\microsoft\ole","DefaultLaunchPermission").uValue
$DCOMDefaultAccessPermission = $Reg.GetBinaryValue(2147483650,"software\microsoft\ole","DefaultAccessPermission").uValue
# Convert the current permissions to SDDL
write-host "`tConverting current permissions to SDDL format..."
$converter = new-object system.management.ManagementClass Win32_SecurityDescriptorHelper
$CurrentDCOMSDDLDefaultLaunchPermission = $converter.BinarySDToSDDL($DCOMDefaultLaunchPermission)
$CurrentDCOMSDDLDefaultAccessPermission = $converter.BinarySDToSDDL($DCOMDefaultAccessPermission)
# Build the new permissions
if (($CurrentDCOMSDDLDefaultLaunchPermission.SDDL -match $DCOMSDDLPartialMatch) -and ($CurrentDCOMSDDLDefaultLaunchPermission.SDDL -notmatch $DCOMSDDLDefaultLaunchPermission))
{
$NewDCOMSDDLDefaultLaunchPermission = $CurrentDCOMSDDLDefaultLaunchPermission.SDDL -replace $DCOMSDDLPartialMatch, $DCOMSDDLDefaultLaunchPermission
}
else
{
$NewDCOMSDDLDefaultLaunchPermission = $CurrentDCOMSDDLDefaultLaunchPermission.SDDL + "(" + $DCOMSDDLDefaultLaunchPermission + ")"
}
if (($CurrentDCOMSDDLDefaultAccessPermission.SDDL -match $DCOMSDDLPartialMatch) -and ($CurrentDCOMSDDLDefaultAccessPermission.SDDL -notmatch $DCOMSDDLDefaultAccessPermision))
{
$NewDCOMSDDLDefaultAccessPermission = $CurrentDCOMSDDLDefaultAccessPermission.SDDL -replace $DCOMSDDLPartialMatch, $DCOMSDDLDefaultAccessPermision
}
else
{
$NewDCOMSDDLDefaultAccessPermission = $CurrentDCOMSDDLDefaultAccessPermission.SDDL + "(" + $DCOMSDDLDefaultAccessPermision + ")"
}
# Convert SDDL back to Binary
write-host "`tConverting SDDL back into binary form..."
$DCOMbinarySDDefaultLaunchPermission = $converter.SDDLToBinarySD($NewDCOMSDDLDefaultLaunchPermission)
$DCOMconvertedPermissionDefaultLaunchPermission = ,$DCOMbinarySDDefaultLaunchPermission.BinarySD
$DCOMbinarySDDefaultAccessPermission = $converter.SDDLToBinarySD($NewDCOMSDDLDefaultAccessPermission)
$DCOMconvertedPermissionsDefaultAccessPermission = ,$DCOMbinarySDDefaultAccessPermission.BinarySD
# Apply the changes
write-host "`tApplying changes..."
if ($CurrentDCOMSDDLDefaultLaunchPermission.SDDL -match $DCOMSDDLDefaultLaunchPermission)
{
write-host "`t`tCurrent DefaultLaunchPermission matches desired value."
}
else
{
$result = $Reg.SetBinaryValue(2147483650,"software\microsoft\ole","DefaultLaunchPermission", $DCOMbinarySDDefaultLaunchPermission.binarySD)
if($result.ReturnValue -eq 0){write-host " Applied DefaultLaunchPermission complete."}
}
if ($CurrentDCOMSDDLDefaultAccessPermission.SDDL -match $DCOMSDDLDefaultAccessPermision)
{
write-host "`t`tCurrent DefaultAccessPermission matches desired value."
}
else
{
$result = $Reg.SetBinaryValue(2147483650,"software\microsoft\ole","DefaultAccessPermission", $DCOMbinarySDDefaultAccessPermission.binarySD)
if($result.ReturnValue -eq 0){write-host " Applied DefaultAccessPermission complete."}
}
}
#----------------------------------------------------------------------------------------------------------
trap
{
$exMessage = $_.Exception.Message
if($exMessage.StartsWith("L:"))
{write-host "`n" $exMessage.substring(2) "`n" -foregroundcolor white -backgroundcolor darkblue}
else {write-host "`nError: " $exMessage "`n" -foregroundcolor white -backgroundcolor darkred}
Exit
}
#----------------------------------------------------------------------------------------------------------
I faced the same issue, and I believe it's because there's no equivalent 64-bit registry entry, so PowerShell doesn't see it. Launching mmc comexp.msc /32 and expanding DCOM Config seems to create the entry in the background.
The workaround is to manually add the 64-bit AppID yourself, which is simply done with the following code:
$appGUID = 'YOUR_APPNAME_OR_GUID'
New-PSDrive -PSProvider Registry -Name HKCR -Root HKEY_CLASSES_ROOT
New-Item -Path HKCR:\AppID\$appGUID -Value $appGUID
#New-Item -Path HKCR:\Wow6432Node\AppID\$appGUID -Value $appGUID
Remove-PSDrive HKCR
I've left the 32-bit location in the above code too (commented out), although that should already exist. Once you run the above, PowerShell should be able to see the COM component:
Get-WMIObject -query ('SELECT * FROM Win32_DCOMApplicationSetting WHERE AppID = "' + $appGUID + '"') -EnableAllPrivileges
Hope this helps someone as it was driving me bananas for hours!
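As a small safety tweak (a sketch; $appGUID is carried over as the placeholder from the snippet above), you could skip creation when the 64-bit AppID key already exists, so re-running the installer doesn't error out:

```powershell
$appGUID = 'YOUR_APPNAME_OR_GUID'

New-PSDrive -PSProvider Registry -Name HKCR -Root HKEY_CLASSES_ROOT | Out-Null

# Only create the 64-bit AppID entry if it is not already present
if (-not (Test-Path "HKCR:\AppID\$appGUID")) {
    New-Item -Path "HKCR:\AppID\$appGUID" -Value $appGUID | Out-Null
}

Remove-PSDrive HKCR
```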

Set-S3Acl Powershell cmdlet not working post 20-30k objects

My client had an issue: they accidentally copied 13 million objects (files) to an S3 bucket with the wrong permissions. They have asked my team to fix it, i.e. update each of the 13 million files in the S3 bucket with the correct ACLs. We are using the PowerShell script below. However, when the script runs on a folder with more than 20-30k objects, it fails to set the ACLs. (It iterates through the loop, but it won't set the permissions past 20-30k objects, and there is no exception either.)
I suspect that the requests might be getting throttled. Has anyone come across such an issue? Please advise on how to proceed.
I am looking for answers to the questions below:
1. If the API calls are getting throttled at 20-30k objects, how can I modify my script to overcome it?
2. What is the best practice, in terms of scripting, for modifying AWS resources (like setting ACL permissions on S3 objects) across millions of objects?
(I am not looking for the "BucketPolicy" approach, as we have to do it with a script and apply the ACLs to every S3 object.)
Param (
[Parameter(Position=0,Mandatory=$true)]
[string]$profile,
[Parameter(Position=1,Mandatory=$true)]
[string]$switchToAccount,
[Parameter(Position=2,Mandatory=$true)]
[string]$roleName,
[Parameter(Position=3,Mandatory=$true)]
[string]$keyPrefix
)
#Set base AWS credentials
Set-AWSCredentials -ProfileName $profile
Set-DefaultAWSRegion -Region $region
#Get and set MFA device ARN
$userName = (Get-IAMUser).UserName
$mfaArn = "arn:aws:iam::xxxxxxxxx:mfa/" + "$userName"
#Configure CAA roles
$roleArn = "arn:aws:iam::" + "$switchToAccount" + ":role/" + "$roleName"
$roleSessionName = "xxxxxxxxxxxx"
#Prompt for MFA token and perform CAA request
$tokenCode = Read-Host -Prompt "Enter MFA token for $accountNumber"
$switchRole = Use-STSRole -RoleSessionName $roleSessionName -RoleArn $roleArn -TokenCode $tokenCode -SerialNumber $mfaArn
#Set new role for CAA
Set-AWSCredentials -Credential $switchRole.Credentials
#Declare access level for S3 Object ACL grantees
$FULL_CONTROL = [Amazon.S3.S3Permission]::FULL_CONTROL
$grants = @()
#Grant FULL_CONTROL access to xxxxxxxxxxxxxxxxxxxxx
$grantee1 = New-Object -TypeName Amazon.S3.Model.S3Grantee
$grantee1.EmailAddress = "xxxxxxxxxxxxxxxxxxx"
#Grant FULL_CONTROL access to xxxxxxxxxxxxxxxxx
$grantee2 = New-Object -TypeName Amazon.S3.Model.S3Grantee
$grantee2.EmailAddress = "xxxxxxxxxxxxxxxxxxx"
#Grant FULL_CONTROL access to xxxxxxxxxxxxxxxxxxxx
$grantee3 = New-Object -TypeName Amazon.S3.Model.S3Grantee
$grantee3.EmailAddress = "xxxxxxxxxxxxxxxxxxxxx"
#Create grant and add to grant list
$grant1 = New-Object -TypeName Amazon.S3.Model.S3Grant
$grant1.Grantee = $grantee1
$grant1.Permission = $FULL_CONTROL
$grants += $grant1
#Create grant and add to grant list
$grant2 = New-Object -TypeName Amazon.S3.Model.S3Grant
$grant2.Grantee = $grantee2
$grant2.Permission = $FULL_CONTROL
$grants += $grant2
#Create grant and add to grant list
$grant3 = New-Object -TypeName Amazon.S3.Model.S3Grant
$grant3.Grantee = $grantee3
$grant3.Permission = $FULL_CONTROL
$grants += $grant3
#Set bucket name for S3 objects
$bucketName = "xxxxxxxxxxxxxxxxxxxxxxxxx"
#Get all S3 Objects in specified bucket
$s3Objects = Get-S3Object -BucketName $bucketName -KeyPrefix $keyPrefix
#Count for progress bar
$totalObjects = $s3Objects.length
$i = 1
$fail_count = 0
$current_count = 0
$file_path = "C:\Users\Administrator\Desktop\Failed_Objects_new\" + $keyPrefix.Replace("/","_") + ".txt"
$file_path_retry = "C:\Users\Administrator\Desktop\Failed_Objects_new_retry\" + $keyPrefix.Replace("/","_") + ".txt"
new-item $file_path -ItemType file
new-item $file_path_retry -ItemType file
"Total Object Count:" + $totalObjects + "`n" | Out-File $file_path -Append
foreach($s3Object in $s3Objects){
$owner = $s3Object.owner.id
$s3Object.name | Write-Output
$current_count++
#Extracts Key for each S3 object in bucket
$key = $s3Object.Key
#Logging
Write-Host "Setting $bucketName | $key | $grants"
# Pick objects that were modified on or before July 15th
try {
if (($s3Object.LastModified.month -lt 7)) {
Set-S3ACL -BucketName $bucketName -Key $key -Grant $grants -OwnerId $owner
$owner | Write-Host
}
elseif(($s3Object.LastModified.month -eq 7) -and ($s3Object.LastModified.day -le 15)) {
Set-S3ACL -BucketName $bucketName -Key $key -Grant $grants -OwnerId $owner
$owner | Write-Host
}
}catch{
"Failed $bucketName | $key | $grants" | out-file $file_path -Append
$key | Out-File $file_path_retry -Append
$fail_count++
}
Write-Host "progress: " $current_count "/" $totalObjects
#Update progress bar
$percentComplete = $i/$totalObjects
Write-Progress -Activity "Setting S3 Object ACL's" -Status "$i% complete" -PercentComplete $percentComplete
$i++
}
"`n`n Total Fail Count:" + $fail_count | Out-File $file_path -Append
Steps to debug the problem:
Make sure it really is a throttling issue: in the foreach loop, break after 10k objects and see if everything works fine.
Also, put print statements inside the try block, in both the if and elseif branches, to make sure execution is reaching them and to see exactly when it starts failing.
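If throttling is confirmed, one common mitigation (a sketch, not tested at the 13-million-object scale; the Set-S3ACL parameters are carried over from the script above) is to retry each call with exponential backoff:

```powershell
function Set-S3AclWithRetry {
    param($BucketName, $Key, $Grants, $OwnerId, [int]$MaxRetries = 5)

    for ($attempt = 1; $attempt -le $MaxRetries; $attempt++) {
        try {
            Set-S3ACL -BucketName $BucketName -Key $Key -Grant $Grants -OwnerId $OwnerId
            return $true
        }
        catch {
            # Back off exponentially: 2, 4, 8, 16... seconds before retrying
            Start-Sleep -Seconds ([math]::Pow(2, $attempt))
        }
    }
    return $false   # caller can log the key to the retry file
}
```

The existing try/catch in the main loop would then call this helper instead of Set-S3ACL directly, writing the key to the retry file only when all attempts fail.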

How to delete ALL Azure resources with powershell

I need to empty my Azure account of all resources, and there's too much to remove individually in the portal. I'm looking for a PowerShell script to do this. Thanks.
As resources in Azure are grouped into resource groups(RG), that would probably be the easiest way to go about this. Use these cmdlets to do this.
Get-AzureRmResourceGroup
Remove-AzureRmResourceGroup
Once you have retrieved all the RGs, you can pipe the results with the | character into the Remove cmdlet, or iterate through them with a ForEach loop. Give it a go; it is the best way to learn, as opposed to simply asking for the solution here.
Alternatively, if you don't want to use PowerShell, just delete your RGs from the portal. I assume you think it would take too long because you are looking at the individual resources rather than their RGs, but if you really do have that many RGs, then scripting is best.
#It will delete all resources without asking any confirmation
Login-AzureRmAccount
$rgName = Get-AzureRmResourceGroup
Foreach($name in $rgName)
{
Write-Host $name.ResourceGroupName
Remove-AzureRmResourceGroup -Name $name.ResourceGroupName -Verbose -Force
}
A script like that can be really harmful... but also very useful.
I've created a little script and added a little safety to it to avoid nuking the wrong subscription.
The script asks you to log in, then lists all the subscriptions this account has access to. Once you specify which one, it lists all the resources grouped by resource group. Then, as a final warning, it requires one last validation before nuking everything.
# Login
Login-AzureRmAccount
# Get a list of all Azure subscript that the user can access
$allSubs = Get-AzureRmSubscription
$allSubs | Sort-Object SubscriptionName | Format-Table -Property SubscriptionName, SubscriptionId, State
$theSub = Read-Host "Enter the subscriptionId you want to clean"
Write-Host "You selected the following subscription. (It will be displayed for 15 sec.)" -ForegroundColor Cyan
Get-AzureRmSubscription -SubscriptionId $theSub | Select-AzureRmSubscription
#Get all the resources groups
$allRG = Get-AzureRmResourceGroup
foreach ( $g in $allRG){
Write-Host $g.ResourceGroupName -ForegroundColor Yellow
Write-Host "------------------------------------------------------`n" -ForegroundColor Yellow
$allResources = Find-AzureRmResource -ResourceGroupNameContains $g.ResourceGroupName
if($allResources){
$allResources | Format-Table -Property Name, ResourceName
}
else{
Write-Host "-- empty--`n"
}
Write-Host "`n`n------------------------------------------------------" -ForegroundColor Yellow
}
$lastValidation = Read-Host "Do you wish to delete ALL the resources previously listed? (YES/NO)"
if($lastValidation.ToLower().Equals("yes")){
foreach ( $g in $allRG){
Write-Host "Deleting " $g.ResourceGroupName
Remove-AzureRmResourceGroup -Name $g.ResourceGroupName -Force -WhatIf # remove -WhatIf to actually delete
}
}
else{
Write-Host "Aborted. Nothing was deleted." -ForegroundColor Cyan
}
The code is available on GitHub: AzurePowerTools
Switch to the PowerShell shell in Azure and run this command to wipe everything:
Get-AzureRmResourceGroup | Remove-AzureRmResourceGroup -verbose -Force
I know the ask was for PowerShell, but if anyone is interested, here it is for the Azure CLI:
#!/bin/bash
# NOTE: Be careful, as this code is intended to delete ALL resources in a subscription. Use at your own risk.
# Set The correct Subscription
az account set -s "<Subscription_name / Id>"
# Get All resource groups and loop to delete them
for rg_name in `az group list -o tsv --query [*].name`; do
echo Deleting ${rg_name}
az group delete -n ${rg_name} --yes --no-wait
done
Updated for new Azure PowerShell module Az
# Login
Connect-AzAccount
# Get a list of all Azure subscript that the user can access
$allSubs = Get-azSubscription
$allSubs | Sort-Object SubscriptionName | Format-Table -Property SubscriptionName, SubscriptionId, State
$theSub = Read-Host "Enter the subscriptionId you want to clean"
Write-Host "You selected the following subscription. (It will be displayed for 15 sec.)" -ForegroundColor Cyan
Get-azSubscription -SubscriptionId $theSub | Select-azSubscription
#Get all the resources groups
$allRG = Get-azResourceGroup
foreach ( $g in $allRG){
Write-Host $g.ResourceGroupName -ForegroundColor Yellow
Write-Host "------------------------------------------------------`n" -ForegroundColor Yellow
$allResources = Get-AzResource -ResourceGroupName $g.ResourceGroupName
if($allResources){
$allResources | Format-Table -Property Name, ResourceName
}
else{
Write-Host "-- empty--`n"
}
Write-Host "`n`n------------------------------------------------------" -ForegroundColor Yellow
}
$lastValidation = Read-Host "Do you wish to delete ALL the resources previously listed? (YES/NO)"
if($lastValidation.ToLower().Equals("yes")){
foreach ( $g in $allRG){
Write-Host "Deleting " $g.ResourceGroupName
Get-AzResourceGroup -Name $g.ResourceGroupName | Remove-AzResourceGroup -Verbose -Force
}
}
else{
Write-Host "Aborted. Nothing was deleted." -ForegroundColor Cyan
}
To remove all resources from Azure Resource Group but to keep the group with its settings:
Get-AzResource -ResourceGroupName $ResourceGroupName | Remove-AzResource -Force
Below command can be used to delete all resources
Get-AzResource | Remove-AzResource -force
Here is a one-liner (log in first using az login):
az group list | ConvertFrom-Json | % {az group delete --name $_.name -y}
A simple pipeline with the Az module:
Get-AzResourceGroup | Remove-AzResourceGroup -Force
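Since resource group deletions are slow and independent of each other, a sketch of fanning them out in parallel with -AsJob (which Remove-AzResourceGroup supports) could look like:

```powershell
# Kick off one background deletion job per resource group
$jobs = Get-AzResourceGroup | ForEach-Object {
    Remove-AzResourceGroup -Name $_.ResourceGroupName -Force -AsJob
}

# Wait for all deletions to finish and surface the results
$jobs | Wait-Job | Receive-Job
```

This trades the sequential wait for N concurrent ARM operations, which is usually much faster when emptying a whole subscription.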