Check whether a Cosmos DB exists using PowerShell

I am trying to check whether a Cosmos DB database exists or not using PowerShell in Octopus. If not, I need to create it. That's the requirement.
$ApplicationShortName = "stc"
$resourceGroup = $OctopusParameters["AzurePlatform.Application[$ApplicationShortName].ResourceGroup.Name"]
$CosmosAccount = $OctopusParameters["AzurePlatform.Application[$ApplicationShortName].CosmosDbAccount.Name"]
$databaseName='sdsd'
Write-Host "Resource Group : $resourceGroup"
Write-host "Cosmos Account : $CosmosAccount"
#Check whether database exists
Get-AzCosmosDBSqlDatabase -ResourceGroupName $resourceGroup -AccountName $CosmosAccount -Name $databaseName
But here is the problem: if the database actually exists, the above command works fine, but if it does not exist, it simply throws an error.
So how do I check whether the database exists or not? If it does not exist, I need to fire this command:
New-AzCosmosDBSqlDatabase -AccountName $CosmosAccount -Name $databaseName -ResourceGroupName $resourceGroup

You should be able to handle this process within your Octopus script - take a look at the Octopus documentation on error handling in scripts.
The best practice here is to always check the exit code when invoking programs:
& ping 255.255.255.0
if ($LastExitCode -ne 0) {
    throw "Couldn't find 255.255.255.0"
}
By checking $LastExitCode, you can determine whether the invocation succeeded and drive your database creation. Note that $LastExitCode is only set by native programs; Az cmdlets like Get-AzCosmosDBSqlDatabase report failure through PowerShell errors instead, which you can catch (see the sketch below).
Also worth noting that if you use the az cli instead of the PowerShell commands, there's a specific command for what you're doing here that returns a boolean! Check out az cosmosdb database exists if you want to try and get it that way without having to manually check exit codes.
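If you want to stay with the Az cmdlets, here is a minimal sketch of the try/catch route, using only the cmdlets already shown in the question (the exact "not found" error can vary by Az.CosmosDB module version):
try {
    # -ErrorAction Stop turns the "not found" error into a terminating error that catch can see
    Get-AzCosmosDBSqlDatabase -ResourceGroupName $resourceGroup -AccountName $CosmosAccount -Name $databaseName -ErrorAction Stop | Out-Null
    Write-Host "Database '$databaseName' already exists"
}
catch {
    Write-Host "Database '$databaseName' not found, creating it"
    New-AzCosmosDBSqlDatabase -AccountName $CosmosAccount -Name $databaseName -ResourceGroupName $resourceGroup
}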

Related

Best way to authenticate an Azure Automation Powershell script

I'm trying to implement a fairly simple PowerShell query, hosted in Azure Automation, to manage External Identities
I've set up a System Managed Identity and have successfully connected using Connect-AzAccount -Identity
But when I run it, it says "You must call the Connect-AzureAD cmdlet before calling any other cmdlets".
The next cmdlet is Get-AzureADPolicy, which I think triggered the above message
Following this blog, I tried this:
$AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile $AzureContext -ErrorAction Stop
Connect-AzureAD -TenantId $AzureContext.Tenant.TenantId -AccountId $AzureContext.Account.Id
and I get this: Unable to find an entry point named 'GetPerAdapterInfo' in DLL 'iphlpapi.dll'
I'm not at all sure now what to do; any help appreciated.
PS: I'm aware there are quite a few related questions, but I have not been able to find an answer to this particular query ...
I was having the same issue and resolved it by using the commands below. I have added comments to explain what each statement is for.
# Ensures you do not inherit an AzContext in your runbook. Out-Null is used to disable any output from this Cmdlet.
Disable-AzContextAutosave -Scope Process | Out-Null
# Connect to Azure with system-assigned managed identity.
$AzureContext = (Connect-AzAccount -Identity).context
# Set and store the context. Out-Null is used to disable any output from this Cmdlet.
Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile $AzureContext | Out-Null
With help from Microsoft support, I can now clarify the issue. The core point is that it is not possible to authenticate to Azure AD (with Connect-AzureAD) using a Managed Identity; a Run As account must be used, at least currently.
Further, for our use case, the Run As account had to have the "Global Admin" role; "Owner" was not sufficient.
It is of course possible to use Managed Identity for managing other Azure Resources (using Connect-AzAccount)
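For reference, the usual Run As pattern looks something like the sketch below (it assumes the Automation account still has its default "AzureRunAsConnection" asset):
# Fetch the Run As connection asset created alongside the Automation account
$conn = Get-AutomationConnection -Name "AzureRunAsConnection"
# Authenticate to Azure AD with the Run As service principal's certificate
Connect-AzureAD -TenantId $conn.TenantId `
    -ApplicationId $conn.ApplicationId `
    -CertificateThumbprint $conn.CertificateThumbprint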

Azure Powershell Runbook - Get-AzureRMWebAppSlot SiteConfig.ConnectionStrings[0] erroring cannot index into a null array

I'm trying to execute the command below in a PowerShell Workflow Runbook. I'm getting the error "cannot index into a null array", which can't be right, as the exact same script runs perfectly on my local machine but fails in the Azure portal as a PowerShell Workflow Runbook.
Can anyone please look into this?
$webApp = Get-AzureRMWebAppSlot -ResourceGroupName $ResourceGroupName -Name $WebSiteName -Slot $WebSiteSlot
$webApp
"Printing Website ConncectionString"
$webApp.SiteConfig.ConnectionStrings.ConnectionString[0]
Some types do not serialize/deserialize correctly, and in PowerShell Workflow that is a problem because PowerShell Workflow relies on object serialization/deserialization (that's how PSWF is able to checkpoint, suspend, and resume -- it converts all objects to a string form when checkpointing/suspending, and restores back to full objects from those strings when resuming).
It would appear Get-AzureRMWebAppSlot's output object is one of those types that does not serialize/deserialize correctly. From your screenshot I can see that the SiteConfig property of $webApp is a string containing Microsoft.Azure.Management.WebSites.Model.SiteConfig rather than an object as you're expecting. Clearly, the object is not deserializing correctly back to its original form, where SiteConfig is a complex object.
The way to work around this is to only interact with the object in PowerShell script context, rather than workflow context. For example:
workflow foo {
    $ResourceGroupName = "RG"
    $WebSiteName = "WS"
    $WebSiteSlot = "Slot"
    $ConnectionString = InlineScript {
        $webApp = Get-AzureRMWebAppSlot -ResourceGroupName $using:ResourceGroupName -Name $using:WebSiteName -Slot $using:WebSiteSlot
        $webApp.SiteConfig.ConnectionStrings.ConnectionString[0]
    }
    "Printing Website ConnectionString"
    $ConnectionString
}

Azure Powershell script fails when run through task scheduler

I have a PowerShell script that I wrote to back up a local SQL Server to an Azure blob. It's based on one I took from MSDN, but I added an extra feature to delete any old backups that are over 30 days old. When I run this as a user, it works fine. When I added it to Task Scheduler, set to run as me, and manually ask for it to run, it works fine. (All output is captured in a log file, so I can see that it's all working.) When run from Task Scheduler at night when I'm not logged in (the task is set to run the script as me), it fails. Specifically, it claims my Azure subscription name is not known when I call Set-AzureSubscription. Then it fails when trying to delete the blob with:
Get-AzureStorageBlob : Can not find your azure storage credential. Please set current storage account using "Set-AzureSubscription" or set the "AZURE_STORAGE_CONNECTION_STRING" environment variable.
The script in question:
import-module sqlps
import-module azure
$storageAccount = "storageaccount"
$subscriptionName = "SubName"
$blobContainer = "backup"
$backupUrlContainer = "https://$storageAccount.blob.core.windows.net/$blobContainer/"
$credentialName = "creds"
Set-AzureSubscription -CurrentStorageAccountName $storageAccount -SubscriptionName $subscriptionName
$path = "sqlserver:\sql\servername\SQLEXPRESS\databases"
$alldatabases = get-childitem -Force -path $path | Where-object {$_.name -eq "DB0" -or $_.name -eq "DB1"}
foreach ($db in $alldatabases)
{
    Backup-SqlDatabase -BackupContainer $backupUrlContainer -SqlCredential $credentialName $db
}
$oldblobs = Get-AzureStorageBlob -container backup | Where-object { $_.name.Contains("DB") -and (-((($_.LastModified) - $([DateTime]::Now)).TotalDays)) -gt $(New-TimeSpan -Days 30).TotalDays }
foreach ($blob in $oldblobs)
{
    Write-Output $blob.Name
    Remove-AzureStorageBlob -Container "backup" -Blob $blob.Name
}
The backup part of the script works, just not the blob-deletion part. It would appear that something is done to the environment when I log in that allows the Azure PowerShell scripts to work, but that isn't done when the command runs at night while I'm not logged in.
Does anyone have any idea what that might be?
Task Scheduler is set to run the command as:
Powershell -Command "C:\Scripts\BackupDatabases.ps1" 2>&1 >> "C:\Logs\backup.log"
The Azure PowerShell environment just needs to know which Azure subscription to work with by default. You probably set this up for your own environment, but the Task Scheduler task runs in a different environment.
You just need to add a command at the beginning of your script to set the Azure subscription, something like this (using the variable your script already defines):
Set-AzureSubscription -SubscriptionName $subscriptionName
The documentation for this command is here. You can also set it by SubscriptionId etc. instead of SubscriptionName.
In addition, this article walks through how to connect your Azure subscription to the PowerShell environment.
UPDATE: I messed around and got it working. Try adding a "Select-AzureSubscription" before your Set-AzureSubscription command.
Select-AzureSubscription $subscriptionName
Set-AzureSubscription -SubscriptionName $subscriptionName -CurrentStorageAccountName $storageAccount
The documentation for Select-AzureSubscription is here. If you aren't relying on that storage account being set, you may be able to remove the Set-AzureSubscription command.
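Putting the two commands together, the top of the scheduled script would look something like this sketch (reusing the variables the question's script already defines):
import-module azure
# Re-establish the subscription context in the scheduler's fresh, non-interactive session
Select-AzureSubscription $subscriptionName
Set-AzureSubscription -SubscriptionName $subscriptionName -CurrentStorageAccountName $storageAccount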
I was never able to make the PowerShell script work. I assume I could have made it work if I had set the credentials in the environment variable, as the error message said, but I instead wrote a little program to do the work for me.
Visit https://github.com/sillyotter/BackupDBToAzure if you need a tool to backup things to azure blobs and delete old leftover backups.
Thanks for the help!

Configure SharePoint 2010 UPS with PowerShell

SOLUTION FOUND: For anyone else that happens to come across this problem, have a look-see at this: http://www.harbar.net/archive/2010/10/30/avoiding-the-default-schema-issue-when-creating-the-user-profile.aspx
TL;DR When you create UPS through CA, it creates a dbo user and schema on the SQL server using the farm account; however, when doing it through PowerShell, it creates a schema and user named after the farm account but still tries to manage SQL using the dbo schema, which of course fails terribly.
NOTE: I've only included the parts of my script I believe to be relevant. I can provide other parts as needed.
I'm at my wit's end on this one. Everything seems to work fine, except the UPS Synchronization service is stuck on "Starting", and I've left it for over 12 hours.
It works fine when it's set up through the GUI, but I'm trying to automate every step possible. While automating I'm trying to include every option available from the GUI so that it's present if it ever needs to be changed.
Here's what I have so far:
$domain = "DOMAIN"
$fqdn = "fully.qualified.domain.name"
$admin_pass = "password"
New-SPManagedPath "personal" -WebApplication "http://portal.$($fqdn):9000/"
$upsPool = New-SPServiceApplicationPool -Name "SharePoint - UPS" -Account "$domain\spsvc"
$upsApp = New-SPProfileServiceApplication -Name "UPS" -ApplicationPool $upsPool -MySiteLocation "http://portal.$($fqdn):9000/" -MySiteManagedPath "personal" -ProfileDBName "UPS_ProfileDB" -ProfileSyncDBName "UPS_SyncDB" -SocialDBName "UPS_SocialDB" -SiteNamingConflictResolution "None"
New-SPProfileServiceApplicationProxy -ServiceApplication $upsApp -Name "UPS Proxy" -DefaultProxyGroup
$upsServ = Get-SPServiceInstance | Where-Object {$_.TypeName -eq "User Profile Service"}
Start-SPServiceInstance $upsServ.Id
$upsSync = Get-SPServiceInstance | Where-Object {$_.TypeName -eq "User Profile Synchronization Service"}
$upsApp.SetSynchronizationMachine("Portal", $upsSync.Id, "$domain\spfarm", $admin_pass)
$upsApp.Update()
Start-SPServiceInstance $upsSync.Id
I've tried running each line one at a time by just copying it directly into the shell window after defining the variables, and none of them give an error, but there has to be something the CA GUI does that I'm missing.
For anyone else that happens to come across this problem, have a look-see at this: http://www.harbar.net/archive/2010/10/30/avoiding-the-default-schema-issue-when-creating-the-user-profile.aspx
TL;DR When you create UPS through CA, it creates a dbo user and schema on the SQL server using the farm account; however, when doing it through PowerShell, it creates a schema and user named after the farm account but still tries to manage SQL using the dbo schema, which of course fails terribly.
The workaround is to put my code into its own script file, and then use Start-Process to run the script as the farm account (it's a lot cleaner than the Job method described in the linked article):
# Build the farm account credential non-interactively ($admin_pass is defined earlier in the question's script)
$SecureString = ConvertTo-SecureString $admin_pass -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ("$domain\spfarm", $SecureString)
Start-Process -FilePath powershell.exe -ArgumentList "-File C:\upsSync.ps1" -Credential $credential

Powershell - Copying File to Remote Host and Executing Install exe using WMI

EDITED: Here is my code now. The install file does copy to the remote host. However, the WMI portion does not install the .exe file, and no errors are returned. Perhaps this is a syntax error with WMI? Is there a way to just run the installer silently with PsExec? Thanks again for all the help, and sorry for the confusion:
#declare params
param (
    [string]$finalCountdownPath = "",
    [string]$slashes = "\\",
    [string]$pathOnRemoteHost = "c:\temp\",
    [string]$targetJavaComputer = "",
    [string]$compname = "",
    [string]$tempPathTarget = "\C$\temp\"
)
# user enters target host/computer
$targetJavaComputer = Read-Host "Enter the name of the computer on which you wish to install Java:"
[string]$compname = $slashes + $targetJavaComputer
[string]$finalCountdownPath = $compname + $tempPathTarget
#[string]$tempPathTarget2 =
#[string]$finalCountdownPath2 = $compname + $
# say copy install media to remote host
echo "Copying install file and running installer silently please wait..."
# create temp dir if does not exist, if exist copy install media
# if does not exist create dir, copy dummy file, copy install media
# either case will execute install of .exe via WMI
#[string]$finalCountdownPath = $compname + $tempPathTarget;
if ((Test-Path -Path $finalCountdownPath))
{
    copy c:\hdatools\java\jre-7u60-windows-i586.exe $finalCountdownPath
    ([WMICLASS]"\\$targetJavaComputer\ROOT\CIMV2:win32_process").Create("cmd.exe /c c:\temp\java\jre-7u60-windows-i586.exe /s /v`" /qn")
}
else {
    New-Item -Path $finalCountdownPath -type directory -Force
    copy c:\hdatools\dummy.txt $finalCountdownPath
    copy "c:\hdatools\java\jre-7u60-windows-i586.exe" $finalCountdownPath
    ([WMICLASS]"\\$targetJavaComputer\ROOT\CIMV2:win32_process").Create("cmd.exe /c c:\temp\java\jre-7u60-windows-i586.exe /s /v`" /qn")
}
I was trying to get $Job = Invoke-Command -Session $Session -Scriptblock $Script to let me copy files on a different server, because I needed to offload the work from the server the script was running on. I was using the PowerShell Copy-Item to do it, but the running PowerShell script waits until the file is done copying before returning.
I want it to take as few resources as possible on the server where the PowerShell is running, and to spawn off the process on another server to copy the file. I tried various other schemes out there, but they didn't work the way I needed them to. (They seemed kind of kludgey or too complex to me.) Maybe some of them could have worked? But I found a solution that I like best, which is pretty easy. (Except for some of the back-end configuration that may be needed if it is not already set up.)
Background:
I am running a SQL Server job which invokes PowerShell to run a script that backs up databases, copies backup files, and deletes older backup files, with parameters passed into it. Our server is configured to allow PowerShell to run under a pre-set-up Active Directory user account with SQL Server admin and dbo privileges, which also allows it to see various places on our network.
But we don't want it to take resources away from the main server. The PowerShell script backs up the database log file and then uses another server to asynchronously copy the file itself, so the SQL Server job/PowerShell doesn't have to wait for it. We wanted the copy to happen right after the backup.
Here is my new way, using WMI with Windows Integrated Security:
$ComputerName = "RemoteServerToRunOn"
# (Optional) inspect the parameters that Win32_Process.Create accepts
([Wmiclass]'Win32_Process').GetMethodParameters('Create')
Invoke-WmiMethod -ComputerName $ComputerName -Path win32_process -Name create -ArgumentList 'powershell.exe -Command "Copy-Item -Path \\YourShareSource\SQLBackup\YourDatabase_2018-08-07_11-45.log.bak -Destination \\YourShareDestination\YourDatabase_2018-08-07_11-45.log.bak"'
Here is my new way using passed-in credentials and building the argument-list variable:
$Username = "YouDomain\YourDomainUser"
$Password = "P#ssw0rd27"
$ComputerName = "RemoteServerToRunOn"
$FromFile = "\\YourShareSource\SQLBackup\YourDatabase_2018-08-07_11-45.log.bak"
$ToFile = "\\YourShareDestination\SQLBackup\YourDatabase_2018-08-07_11-45.log.bak"
$ArgumentList = 'powershell.exe -Command "Copy-Item -Path ' + $FromFile + ' -Destination ' + $ToFile + '"'
$SecurePassWord = ConvertTo-SecureString -AsPlainText $Password -Force
$Cred = New-Object -TypeName "System.Management.Automation.PSCredential" -ArgumentList $Username, $SecurePassWord
([Wmiclass]'Win32_Process').GetMethodParameters('Create')
Invoke-WmiMethod -ComputerName $ComputerName -Path win32_process -Name create -ArgumentList $ArgumentList -Credential $Cred
We think the example above is the preferred one to use.
You can also run a specific PowerShell script that will do what you want it to do (even passing parameters into it):
Invoke-WmiMethod -ComputerName RemoteServerToRunOn -Path win32_process -Name create -ArgumentList 'powershell.exe -file "C:\PS\Test1.ps1"'
This example could be changed to pass parameters into the Test1.ps1 PowerShell script to make it more flexible and reusable, as sketched below. And you may also want to pass in a Credential like we used in a previous example above.
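A sketch of what that might look like, assuming a hypothetical Test1.ps1 on the remote server that declares -Source and -Destination parameters (parameter names invented here for illustration):
# Hypothetical Test1.ps1 would start with:  param([string]$Source, [string]$Destination)
$ArgumentList = 'powershell.exe -File "C:\PS\Test1.ps1" -Source "' + $FromFile + '" -Destination "' + $ToFile + '"'
Invoke-WmiMethod -ComputerName $ComputerName -Path win32_process -Name create -ArgumentList $ArgumentList -Credential $Cred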
Help configuring WMI:
I got the main gist of this working from: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.management/invoke-wmimethod?view=powershell-5.1
But it may have also needed WMI configuration using:
https://helpcenter.gsx.com/hc/en-us/articles/202447926-How-to-Configure-Windows-Remote-PowerShell-Access-for-Non-Privileged-User-Accounts?flash_digest=bec1f6a29327161f08e1f2db77e64856b433cb5a
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/enable-psremoting?view=powershell-5.1
Powershell New-PSSession Access Denied - Administrator Account
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.management/invoke-wmimethod?view=powershell-5.1 (which I used to learn how to call Invoke-WmiMethod).
https://learn.microsoft.com/en-us/powershell/scripting/core-powershell/console/powershell.exe-command-line-help?view=powershell-6 (which I used for the powershell.exe command-line syntax)
I didn't use this one, but could have: How to execute a command in a remote computer?
I don't know for sure whether all of the steps in the web articles above are needed; I suspect not. I originally thought I was going to use the Invoke-Command PowerShell statement to copy the files on the remote server, and I believe I mostly left intact the configuration changes I made from the articles above.
You will need a dedicated user set up in Active Directory, and you'll need to configure the accounts that SQL Server and SQL Server Agent run under so that the calling PowerShell has the privileges it needs to access the network; that same account can be used to run the PowerShell on the remote server as well. You may also need to configure SQL Server to allow SQL Server jobs or stored procedures to call PowerShell scripts, like I did. But that is outside the scope of this post; you can Google how to do that elsewhere on the internet.