Seed Data to Cosmos DB in Azure DevOps

I have an ARM template that creates a Cosmos DB account, a database, and collections through a pipeline. Since multiple applications use the database, I want to seed initial data for testing. I looked for Cosmos DB import tasks in DevOps and found https://marketplace.visualstudio.com/items?itemName=winvision-bv.winvisionbv-cosmosdb-tasks, but the Mongo API is not supported right now, so it is not able to import the data from the JSON file I have in a storage account.
My question is, Is there any other way I can add data from json file to cosmos DB through devops like powershell/api?

My question is, Is there any other way I can add data from json file to cosmos DB through devops like powershell/api?
The answer is yes.
You can use the Azure PowerShell task to execute the following script:
param([string]$cosmosDbName,
    [string]$resourceGroup,
    [string]$databaseName,
    [string[]]$collectionNames,
    [string]$principalUser,
    [string]$principalPassword,
    [string]$principalTenant)

Write-Output "Logging in with Service Principal $principalUser"
az login --service-principal -u $principalUser -p $principalPassword -t $principalTenant

Write-Output "Check if database exists: $databaseName"
if ((az cosmosdb database exists -d $databaseName -n $cosmosDbName -g $resourceGroup) -ne "true")
{
    Write-Output "Creating database: $databaseName"
    az cosmosdb database create -d $databaseName -n $cosmosDbName -g $resourceGroup
}

foreach ($collectionName in $collectionNames)
{
    Write-Output "Check if collection exists: $collectionName"
    if ((az cosmosdb collection exists -c $collectionName -d $databaseName -n $cosmosDbName -g $resourceGroup) -ne "true")
    {
        Write-Output "Creating collection: $collectionName"
        az cosmosdb collection create -c $collectionName -d $databaseName -n $cosmosDbName -g $resourceGroup
    }
}

Write-Output "List Collections"
az cosmosdb collection list -d $databaseName -n $cosmosDbName -g $resourceGroup
Then press the three dots next to Script Arguments and add the parameters defined in the PowerShell script (store the values as pipeline variables).
You can check this document for some more details.

I ran into the same scenario, where I needed to maintain configuration documents in source control and update various Cosmos instances as they changed. I ended up writing a Python script that reads a directory structure, one folder for every collection I needed to update, then reads every JSON file in each folder and upserts it into Cosmos. From there I ran the Python script as part of a multi-stage pipeline in Azure DevOps.
This is a link to my proof of concept code https://github.com/tanzencoder/CosmosDBSeedDataExample.
And this is a link to the Python task for the pipelines https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/python-script?view=azure-devops
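A minimal sketch of that folder-per-collection approach, assuming the tree layout described above; the actual upsert (shown commented out) would use pymongo against the Cosmos DB Mongo API, with connection string and database name as placeholders:

```python
import json
from pathlib import Path

def load_seed_documents(root):
    """Yield (collection_name, document) pairs from a folder-per-collection tree."""
    for collection_dir in sorted(Path(root).iterdir()):
        if not collection_dir.is_dir():
            continue
        for json_file in sorted(collection_dir.glob("*.json")):
            with open(json_file) as f:
                yield collection_dir.name, json.load(f)

# Hypothetical upsert step (requires pymongo and a real connection string):
# client = pymongo.MongoClient(connection_string)
# for collection, doc in load_seed_documents("seed-data"):
#     client[db_name][collection].replace_one({"_id": doc["_id"]}, doc, upsert=True)
```

Running this as a pipeline step then only needs the Python script task linked above plus the connection string passed in as a secret variable.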

Related

STDIN is getting listed while listing Azure VMs, hence getting "invalid resource id" error

I used the command below to list and stop all the VMs in my account. The VMs are listed, but an additional STDIN entry is also listed, and this STDIN is causing the error "invalid resource id". What can I do to ignore the STDIN? Your help is greatly appreciated.
az vm stop --ids $(az vm list --query "[].id" -o tsv) | grep -v "ABDK"
Thanks
If by STDIN you mean the VM name, and you want to stop all the VMs whose name does not contain the string ABDK, then you can do it with the CLI command alone, like this:
az vm stop --ids $(az vm list --query "[?contains(@.name, 'ABDK')==\`false\`].id" -o tsv)
Update:
If you run the CLI command in the Windows PowerShell, then you need to change the command like this:
az vm stop --ids $(az vm list --query "[?!contains(@.name, 'ABDK')].id" -o tsv)
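The JMESPath filter in the commands above keeps only the ids of VMs whose names do not contain ABDK, so nothing has to be piped through grep afterwards. The same selection in plain Python, over made-up sample data shaped like `az vm list` output, looks like:

```python
# Made-up sample of the objects `az vm list` returns (only the fields we need)
vms = [
    {"name": "web-ABDK-01",
     "id": "/subscriptions/s1/resourceGroups/rg/providers/Microsoft.Compute/virtualMachines/web-ABDK-01"},
    {"name": "db-01",
     "id": "/subscriptions/s1/resourceGroups/rg/providers/Microsoft.Compute/virtualMachines/db-01"},
]

# Equivalent of the JMESPath filter [?contains(@.name, 'ABDK')==`false`].id
ids_to_stop = [vm["id"] for vm in vms if "ABDK" not in vm["name"]]
```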

How to perform substring like function in jmespath on a string

I would like to know if there is a substring function one can leverage in JMESPath (as supported by az cli).
I have the az cli request below, and I want to extract just the name of the subnet linked with a security group, but unlike other cloud providers, Azure doesn't store associated resource names the same way.
The name can be extracted from the subnet.id node, which looks like this:
$ az network nsg show -g my_group -n My_NSG --query "subnets[].id" -o json
[
"/subscriptions/xxxxxx2/resourceGroups/my_group/providers/Microsoft.Network/virtualNetworks/MY-VNET/subnets/My_SUBNET"
]
I want to extract only "My_SUBNET" from the result.
I know there is something called search that is supposed to mimic substring (explained here: https://github.com/jmespath/jmespath.jep/issues/5), but it didn't work for me.
$ az network nsg show -g my_group -n My_NSG --query "subnets[].search(id,'@[120:-1]')" -o json
InvalidArgumentValueError: argument --query: invalid jmespath_type value: "subnets[].search(id,'@[120:-1]')"
CLIInternalError: The command failed with an unexpected error. Here is the traceback:
Unknown function: search()
Thank you
Edit:
I actually run the request with other elements included, which is why extracting the substring with bash on a separate line is not what I want.
Here's an example of the full query:
az network nsg show -g "$rg_name" -n "$sg_name" --query "{Name:name,Combo_rule_Ports:to_string(securityRules[?direction==\`Inbound\`].destinationPortRanges[]),single_rule_Ports:to_string(securityRules[?direction==\`Inbound\`].destinationPortRange),sub:subnets[].id,resourceGroup:resourceGroup}" -o json
output
{
  "Combo_rule_Ports": "[]",
  "Name": "sg_Sub_demo_SSH",
  "resourceGroup": "brokedba",
  "single_rule_Ports": "[\"22\",\"80\",\"443\"]",
  "sub": [
    "/subscriptions/xxxxxxx/resourceGroups/brokedba/providers/Microsoft.Network/virtualNetworks/CLI-VNET/subnets/Sub_demo"
  ]
}
I had a similar problem with EventGrid subscriptions and used jq to transform the JSON returned by the az command. As a result, you get a JSON array.
az eventgrid event-subscription list -l $location -g $resourceGroup --query "[].{
Name:name,
Container:deadLetterDestination.blobContainerName,
Account:deadLetterDestination.resourceId
}" \
| jq '[.[] | { Name, Container, Account: (.Account | capture("storageAccounts/(?<name>.+)").name) }]'
The expression Account: (.Account | capture("storageAccounts/(?<name>.+)").name) transforms the original resourceId from the Azure CLI.
# From Azure resourceId...
"Account": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.Storage/storageAccounts/mystorageaccount"
# .. to Azure Storage Account name
"Account": "mystorageaccount"
I've adapted the approach from How to extract a json value substring with jq.
cut can be used to extract the desired value (tsv output avoids the surrounding quotes and brackets that -o json would add):
az network nsg show -g my_group -n My_NSG --query "subnets[].id|[0]" -o tsv | cut -d"/" -f11
If you run the Azure CLI in bash, you can use the shell's string manipulation operators. The following syntax deletes the longest match of $substring from the front of $string:
${string##substring}
In this case, you can retrieve the subnet name like this:
var=$(az network nsg show -g nsg-rg -n nsg-name --query "subnets[].id" -o tsv)
echo ${var##*/}
For more information, see https://www.thegeekstuff.com/2010/07/bash-string-manipulation/
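All three answers boil down to taking the last path segment of the resource id. For comparison, the same extraction in Python (mirroring jq's capture and bash's ${var##*/}, using the subnet id from the question):

```python
subnet_id = ("/subscriptions/xxxxxx2/resourceGroups/my_group/providers"
             "/Microsoft.Network/virtualNetworks/MY-VNET/subnets/My_SUBNET")

# Same effect as bash's ${var##*/}: drop everything up to and including the last '/'
subnet_name = subnet_id.rsplit("/", 1)[-1]
```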

rolling back database changes with the sqlcmd utility

I have a release pipeline which applies database changes with 'SqlCmd.exe'.
I am trying to execute a stored procedure using this command-line utility:
/opt/mssql-tools/bin/sqlcmd -S tcp:$(Server) -d $(Database) -U $(UserName) -P '$(Password)' -b -i "$(ScriptFile)"
If something goes wrong in the script file, I want sqlcmd to automatically roll back all the changes.
I should mention that there is no transaction management inside the script file.
Please help me figure out how to resolve this.
You probably have to add rollback handling in your script file; there is no configuration in the Azure release pipeline to control rollback behavior. See this example of adding transactions to scripts.
If you do not want to add transactions to the script file itself, you can add a PowerShell task to the release pipeline and run the script below, which prepends BEGIN TRANSACTION and appends COMMIT TRANSACTION to your query contents.
$fullbatch = @()
$fullbatch += "BEGIN TRANSACTION;"
$fullbatch += Get-Content $(ScriptFile)
$fullbatch += "COMMIT TRANSACTION;"
sqlcmd -S tcp:$(Server) -d $(Database) -U $(UserName) -P '$(Password)' -b -Q "$fullbatch"
See example in this thread.
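The wrapping step itself is just string concatenation, so it can be sketched in any language; here is a small Python version. The SET XACT_ABORT ON line is my addition (not in the answer above): it makes most runtime errors abort the batch and roll the transaction back rather than leaving it open, which is worth verifying against your own scripts.

```python
def wrap_in_transaction(sql_text):
    """Wrap a SQL batch so the whole batch commits or rolls back together."""
    return "\n".join([
        "SET XACT_ABORT ON;",   # assumption: make runtime errors abort and roll back
        "BEGIN TRANSACTION;",
        sql_text,
        "COMMIT TRANSACTION;",
    ])
```

The wrapped text can then be passed to sqlcmd with -Q (or written to a temp file and passed with -i), exactly as the PowerShell snippet above does.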

SumoLogic dashboards - how do I automate?

I am getting some experience with SumoLogic dashboards and alerting. I would like to have all possible configuration in code. Does anyone have experience with automation of SumoLogic configuration? At the moment I am using Ansible for general server and infra provisioning.
Thanks for all info!
Best Regards,
Rafal.
(The dashboards, alerts, etc. are referred to as Content in Sumo Logic parlance)
You can use the Content Management API, especially the content-import-job. I am not an expert in Ansible, but I am not aware of any way to plug that API into Ansible.
Also there's a community Terraform provider for Sumo Logic and it supports content:
resource "sumologic_content" "test" {
  parent_id = "%s"
  config = {
    "type": "SavedSearchWithScheduleSyncDefinition",
    "name": "test-333",
    "search": {
      "queryText": "\"warn\"",
      "defaultTimeRange": "-15m",
      [...]
Disclaimer: I am currently employed by Sumo Logic
Below is a shell script to import the dashboards. This example uses the Sumo Logic AU instance (e.g. https://api.au.sumologic.com/api); the endpoint varies by region.
Note: you can export each of your dashboards as a JSON file.
#!/usr/bin/env bash
set -e

# if you are using AWS Parameter Store
# accessKey=$(aws ssm get-parameter --name path_to_your_key --with-decryption --query 'Parameter.Value' --region=ap-southeast-2 | tr -d \")
# accessSecret=$(aws ssm get-parameter --name path_to_your_secret --with-decryption --query 'Parameter.Value' --region=ap-southeast-2 | tr -d \")
# yourDashboardFolderName="xxxxx" # the folder id in Sumo Logic where you want to create the dashboards

# if you are using just a key and secret
accessKey="your_sumologic_key"
accessSecret="your_sumologic_secret"
yourDashboardFolderName="xxxxx" # the folder id in Sumo Logic

# place all the dashboard json files in the ./Sumologic/Dashboards folder
for f in $(find ./Sumologic/Dashboards -name '*.json'); \
do \
  curl -X POST https://api.au.sumologic.com/api/v2/content/folders/$yourDashboardFolderName/import \
    -H "Content-Type: application/json" \
    -u "$accessKey:$accessSecret" \
    -d @$f \
;done

Exporting a collection from an Azure MongoDB

So I have a MongoDB in an Azure Cosmos DB service. It contains a collection of 1500 documents and I want to download this whole collection in a JSON format. I've tried several methods without success, namely
test_collection.find({})
Which gave me a cursor timeout. Using
{ timeout : false }
Did not help. Then I tried to use mongoexport:
mongoexport -h host_name --port 1234 -u user_name -p password
-d admin -c collection_name -o data.json --ssl
which gives me 0 exported records. The firewall IP access control is off and I can connect to the database through Mongo shell just fine. Trying to export other collections doesn't work either. Also, it has to be by ssl otherwise I get a "database not found" right away.
I've thought about using skip and limit, but that doesn't seem like a good idea with large (and expanding) collections. Could someone please give me some advice on how best to overcome these obstacles and download my collection? It doesn't matter how; I just need to download the collection. Thank you.
You possibly have a few incorrect parameters, and a missing parameter:
Are you sure your database name is admin?
You need to specify --sslAllowInvalidCertificates
For host/port: this should look something like:
/h yourcosmosaccount.documents.azure.com:10255
If you take a look at the "quick start" tab in your Cosmos DB settings, you'll see the command line string for mongo (well, mongo.exe in the example). Just grab those parameters and use them for mongoexport.
I just ran this against a sample Cosmos DB (MongoDB API) database of mine with no issue. Here's the generic command-line equivalent:
mongoexport /h <host:port> /u <username> /p <password> /ssl /sslAllowInvalidCertificates /d <database> /c <collection> /o <outputfile>.json
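The slash-style flags above are the Windows form; on Linux/macOS the same call uses dashed flags. Building the invocation as an argument list (every value below is a placeholder; take the real ones from the Cosmos DB "Quick start" blade) sidesteps shell-quoting issues with the password, e.g. from Python:

```python
# All values are placeholders; substitute your Cosmos DB account's Mongo settings
args = [
    "mongoexport",
    "--host", "yourcosmosaccount.documents.azure.com:10255",
    "-u", "yourcosmosaccount",
    "-p", "your_primary_key",
    "--ssl", "--sslAllowInvalidCertificates",
    "-d", "your_database",
    "-c", "your_collection",
    "-o", "data.json",
]
# import subprocess; subprocess.run(args, check=True)  # uncomment to actually run
```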