List of all Windows Server names from Parameter Store - aws-cloudformation

I have this parameter and it works as expected.
Parameters:
  LatestAmiId:
    Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
    Default: /aws/service/ami-windows-latest/Windows_Server-2016-English-Core-Containers
But how do I get the list of all Windows Server AMIs? The following command does not return anything.
aws ssm get-parameters-by-path --path "/aws/service/ami-windows-latest" --region us-east-1

This command works fine for me, returning 23 AMIs:
aws ssm get-parameters-by-path --path "/aws/service/ami-windows-latest" --region us-east-1
Sample output:
{
"Parameters": [
{
"Name": "/aws/service/ami-windows-latest/Windows_Server-2008-R2_SP1-English-64Bit-SQL_2012_SP4_Express",
"Type": "String",
"Value": "ami-0a2e90df8bb31df6f",
"Version": 29,
"LastModifiedDate": 1576555239.669,
"ARN": "arn:aws:ssm:us-east-1::parameter/aws/service/ami-windows-latest/Windows_Server-2008-R2_SP1-English-64Bit-SQL_2012_SP4_Express"
},
{
"Name": "/aws/service/ami-windows-latest/Windows_Server-2012-R2_RTM-Chinese_Simplified-64Bit-Base",
"Type": "String",
"Value": "ami-0bedade339716cc2b",
"Version": 48,
"LastModifiedDate": 1576555364.453,
"ARN": "arn:aws:ssm:us-east-1::parameter/aws/service/ami-windows-latest/Windows_Server-2012-R2_RTM-Chinese_Simplified-64Bit-Base"
},
...
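If you only want the parameter names (i.e. the full list of Windows Server variants) rather than the whole objects, a JMESPath query trims the output. A minimal sketch using the same path and region as above:
aws ssm get-parameters-by-path \
    --path "/aws/service/ami-windows-latest" \
    --region us-east-1 \
    --query "Parameters[].Name" \
    --output text | tr '\t' '\n' | sort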
I'm using a Mac. If you are using Windows, you might need to fiddle with the quotation marks (or even remove them).

Works fine on Windows as well.

Related

Execute SQL script with Azure ARM template

I'm deploying a PostgreSQL server with a database and trying to seed this database with a SQL script. I've learned that the best way to execute a SQL script from an ARM template is to use a deployment script resource. Here is part of the template:
{
"type": "Microsoft.DBforPostgreSQL/flexibleServers/databases",
"apiVersion": "2021-06-01",
"name": "[concat(parameters('psqlServerName'), '/', parameters('psqlDatabaseName'))]",
"dependsOn": [
"[resourceId('Microsoft.DBforPostgreSQL/flexibleServers', parameters('psqlServerName'))]"
],
"properties": {
"charset": "[parameters('psqlDatabaseCharset')]",
"collation": "[parameters('psqlDatabaseCollation')]"
},
"resources": [
{
"type": "Microsoft.Resources/deploymentScripts",
"apiVersion": "2020-10-01",
"name": "deploySQL",
"location": "[parameters('location')]",
"kind": "AzureCLI",
"dependsOn": [
"[resourceId('Microsoft.DBforPostgreSQL/flexibleServers/databases', parameters('psqlServerName'), parameters('psqlDatabaseName'))]"
],
"properties": {
"azCliVersion": "2.34.1",
"storageAccountSettings": {
"storageAccountKey": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2019-06-01').keys[0].value]",
"storageAccountName": "[parameters('storageAccountName')]"
},
"cleanupPreference": "Always",
"environmentVariables": [
{
"name": "psqlFqdn",
"value": "[reference(resourceId('Microsoft.DBforPostgreSQL/flexibleServers', parameters('psqlServerName')), '2021-06-01').fullyQualifiedDomainName]"
},
{
"name": "psqlDatabaseName",
"value": "[parameters('psqlDatabaseName')]"
},
{
"name": "psqlAdminLogin",
"value": "[parameters('psqlAdminLogin')]"
},
{
"name": "psqlServerName",
"value": "[parameters('psqlServerName')]"
},
{
"name": "psqlAdminPassword",
"secureValue": "[parameters('psqlAdminPassword')]"
}
],
"retentionInterval": "P1D",
"scriptContent": "az config set extension.use_dynamic_install=yes_without_prompt\r\naz postgres flexible-server execute --name $env:psqlServerName --admin-user $env:psqlAdminLogin --admin-password $env:psqlAdminPassword --database-name $env:psqlDatabaseName --file-path test.sql --debug"
}
}
]
}
Azure does not show any errors regarding the syntax and starts the deployment. However, the deploySQL deployment gets stuck and then fails after 1 hour due to the agent execution timeout. The PostgreSQL server itself, the database and the firewall rule (not shown in the code above) are deployed without any issues, but the SQL script is not executed. I've tried adding the --debug option to the Azure CLI commands, but got nothing new from the pipeline output. I've also tried executing these commands in an Azure CLI pipeline task and they worked perfectly. What am I missing here?
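One thing I have not ruled out: the script is declared with "kind": "AzureCLI", and deployment scripts of that kind run in a bash container, where the environment variables would be referenced as $psqlServerName rather than with PowerShell's $env: prefix. A rough bash-style sketch of the same scriptContent, leaving aside how test.sql gets into the container (e.g. via supportingScriptUris):
az config set extension.use_dynamic_install=yes_without_prompt
az postgres flexible-server execute \
    --name "$psqlServerName" \
    --admin-user "$psqlAdminLogin" \
    --admin-password "$psqlAdminPassword" \
    --database-name "$psqlDatabaseName" \
    --file-path test.sql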

Why is the password of type AzureKeyVaultSecret dropped when creating a LinkedService via PowerShell?

I'm attempting to create a LinkedService via the PowerShell command
New-AzureRmDataFactoryV2LinkedService -ResourceGroupName rg -DataFactoryName df -Name n -DefinitionFile n.json
The result is that the LinkedService is created; however, the reference to the password of type AzureKeyVaultSecret is removed, rendering it non-operational.
The config file n.json was extracted from the DataFactory code tab and has the syntax below...
{
"name": "<name>",
"type": "Microsoft.DataFactory/factories/linkedservices",
"properties": {
"type": "Oracle",
"typeProperties": {
"connectionString": "host=<host>;port=<port>;serviceName=<serviceName>;user id=<user_id>",
"password": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "Prod_KeyVault",
"type": "LinkedServiceReference"
},
"secretName": "<secretname>"
}
},
"connectVia": {
"referenceName": "<runtimename>",
"type": "IntegrationRuntimeReference"
}
}
}
When the new LinkedService is created, the code looks exactly the same except properties->typeProperties->password is removed and requires manual configuration - which I'm trying to avoid if possible.
Any thoughts?
If you have tried using "Update-Module -Name AzureRm.DataFactoryV2" to update your PowerShell module to the latest version and the behavior is still the same, then the likely root cause is that a password stored as an Azure Key Vault secret is not yet supported in PowerShell. As far as I know, it is a feature that was added recently, so it may take some time to roll it out to PowerShell.
In that case, the workaround is to use the UI to create the linked service for now.
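If it helps to confirm which module version you actually have before and after the update, a quick check (module name taken from the command above):
Get-Module -ListAvailable -Name AzureRm.DataFactoryV2 | Select-Object Name, Version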

How to run a PowerShell script during Azure VM deployment with ARM template?

I want to deploy a VM in Azure using Azure Resource Manager (ARM), and then run a PowerShell script inside the VM post-deployment to configure it.
I can do this fine with something like this: https://github.com/Azure/azure-quickstart-templates/tree/master/201-vm-vsts-agent
However, that template grabs the PowerShell script from GitHub. As part of my deployment I want to upload the script to Azure Storage, and then have the VM get the script from Azure Storage and run it. How can I handle the dependency on the PowerShell script, given that it has to exist in Azure Storage somewhere before it can be executed?
I currently have this to install a VSTS Agent as part of a deployment, but the script is downloaded from GitHub. I don't want to do that; I want the installation script for the VSTS Agent to be part of my ARM project.
{
"name": "vsts-build-agents",
"type": "extensions",
"location": "[parameters('location')]",
"apiVersion": "2017-12-01",
"dependsOn": [
"vsts-build-vm"
],
"tags": {
"displayName": "VstsInstallScript"
},
"properties": {
"publisher": "Microsoft.Compute",
"type": "CustomScriptExtension",
"typeHandlerVersion": "1.9",
"settings": {
"fileUris": [
"[concat(parameters('_artifactsLocation'), '/', variables('powerShell').folder, '/', variables('powerShell').script, parameters('_artifactsLocationSasToken'))]"
]
},
"protectedSettings": {
"commandToExecute": "[concat('powershell.exe -ExecutionPolicy Unrestricted -Command \"& {', './', variables('powerShell').script, ' ', variables('powerShell').buildParameters, '}\"')]"
}
}
}
I guess my question is really about how to set _azurestoragelocation to an Azure Storage location where the script has just been uploaded as part of the deployment.
Chicken-and-egg problem: you cannot upload to Azure Storage with an ARM template; you need a script to upload to Azure Storage, but if you already have that script on the VM to do the upload, you don't really need to upload it.
That being said, why don't you use the VSTS agent extension?
{
"name": "xxx",
"apiVersion": "2015-01-01",
"type": "Microsoft.Resources/deployments",
"properties": {
"mode": "Incremental",
"templateLink": {
"uri": "https://gallery.azure.com/artifact/20161101/microsoft.vsts-agent-windows-arm.1.0.0/Artifacts/MainTemplate.json"
},
"parameters": {
"vmName": {
"value": "xxx"
},
"location": {
"value": "xxx"
},
"VSTSAccountName": {
"value": "xxx"
},
"TeamProject": {
"value": "xxx"
},
"DeploymentGroup": {
"value": "Default"
},
"AgentName": {
"value": "xxx"
},
"PATToken": {
"value": "xxx"
}
}
}
},
Do you mean how to set _artifactsLocation as in the quickstart sample? If so, you have 2 options (or 3, depending):
1) use the script in the QS repo, the defaultValue for the _artifactsLocation param will set that for you...
2) if you want to customize, from your local copy of the sample, just use the Deploy-AzureResourceGroup.ps1 in the repo and it will stage and set the value for you accordingly (when you use the -UploadArtifacts switch)
3) stage the PS1 somewhere yourself and manually set the values of _artifactsLocation and _artifactsLocationSasToken (sketch below)
You can also deploy from gallery.azure.com, but that will force you to use the script that is stored in the gallery (same as using the defaults in GitHub)
That help?
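For option 3, a rough sketch of staging the PS1 yourself with the Azure CLI (the storage account, container and file names are placeholders, and the SAS expiry line assumes GNU date):
az storage container create --account-name mystorageacct --name scripts
az storage blob upload --account-name mystorageacct --container-name scripts \
    --name InstallVstsAgent.ps1 --file ./InstallVstsAgent.ps1
# short-lived read-only SAS for the container
end=$(date -u -d '+2 hours' '+%Y-%m-%dT%H:%MZ')
sas=$(az storage container generate-sas --account-name mystorageacct --name scripts \
    --permissions r --expiry "$end" --output tsv)
# _artifactsLocation         -> https://mystorageacct.blob.core.windows.net/scripts
# _artifactsLocationSasToken -> "?$sas"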

Azure REST API does not return encryption settings for Virtual Machine

I have a 16.04-LTS Ubuntu Virtual Machine in my Azure account and I am trying Azure Disk Encryption for this virtual machine, making use of this Azure CLI sample script. On running the encryption script, the Azure portal shows that its OS disk is encrypted: there is Enabled under the Encryption header.
However, the Azure REST API (api link) for getting information about the virtual machine does not return the encryptionSettings under properties.storageProfile.osDisk. I tried both Model View and Model View and Instance View for the api-version 2017-03-30 as well as 2017-12-01. Here is the partial response from the API:
{
"name": "ubuntu",
"properties": {
"osProfile": {},
"networkProfile": {},
"storageProfile": {
"imageReference": {
"sku": "16.04-LTS",
"publisher": "Canonical",
"version": "latest",
"offer": "UbuntuServer"
},
"osDisk": {
"name": "ubuntu-OsDisk",
"diskSizeGB": 30,
"managedDisk": {
"storageAccountType": "Premium_LRS",
"id": "..."
},
"caching": "ReadWrite",
"createOption": "FromImage",
"osType": "Linux"
},
"dataDisks": []
},
"diagnosticsProfile": {},
"vmId": "",
"hardwareProfile": {
"vmSize": "Standard_B1s"
},
"provisioningState": "Succeeded"
},
"location": "eastus",
"type": "Microsoft.Compute/virtualMachines",
"id": ""
}
But for my other encrypted windows virtual machine, I get the correct response which contains encryptionSettings in properties.storageProfile.osDisk:
{
"name": "win1",
"properties": {
"osProfile": {},
"networkProfile": {},
"storageProfile": {
"imageReference": {
"sku": "2016-Datacenter-smalldisk",
"publisher": "MicrosoftWindowsServer",
"version": "latest",
"offer": "WindowsServer"
},
"osDisk": {
"name": "win1_OsDisk_1",
"diskSizeGB": 31,
"managedDisk": {
"storageAccountType": "Premium_LRS",
"id": "..."
},
"encryptionSettings": {
"diskEncryptionKey": {
"secretUrl": "...",
"sourceVault": {
"id": "..."
}
},
"keyEncryptionKey": {
"keyUrl": "...",
"sourceVault": {
"id": "..."
}
},
"enabled": true
},
"caching": "ReadWrite",
"createOption": "FromImage",
"osType": "Windows"
},
"dataDisks": []
},
"diagnosticsProfile": {},
"vmId": "...",
"hardwareProfile": {
"vmSize": "Standard_B1s"
},
"provisioningState": "Succeeded"
},
"location": "eastus",
"type": "Microsoft.Compute/virtualMachines",
"id": "..."
}
Why is the Virtual Machine Get API not returning the encryptionSettings for some VMs? Any help would be greatly appreciated.
I created a VM using the following command.
az vm create \
--resource-group shuivm \
--name shuivm \
--image Canonical:UbuntuServer:16.04-LTS:latest \
--admin-username azureuser \
--generate-ssh-keys
When I use the following API, I can get the encryption settings.
https://management.azure.com/subscriptions/**********/resourceGroups/shuivm/providers/Microsoft.Compute/virtualMachines/shuivm?api-version=2017-03-30
Note: when the OS is encrypted successfully, I can use the API to get the encryption settings.
This is because there are two types of at-rest disk encryption for Azure VMs and they are not reported in the same part of the Azure Management API:
Server-side encryption: you can see this in the encryptionSettings section of the VM/compute API when you get a VM's details. It will show whether you are encrypting with a customer-managed key or a platform-managed key.
ADE: Azure Disk Encryption is actually a VM extension, so you can find it in the VM Extension API instead.
see: https://learn.microsoft.com/en-us/rest/api/compute/virtualmachineextensions/list
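If you just want to check the ADE status from the CLI instead of walking the extension API, something like this should show it (resource group and VM name are placeholders):
az vm encryption show --resource-group myrg --name ubuntu
az vm extension list --resource-group myrg --vm-name ubuntu --output table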

Exporting an AWS Postgres RDS Table to AWS S3

I wanted to use AWS Data Pipeline to pipe data from a Postgres RDS to AWS S3. Does anybody know how this is done?
More precisely, I wanted to export a Postgres table to AWS S3 using Data Pipeline. The reason I am using Data Pipeline is that I want to automate this process, and this export is going to run once every week.
Any other suggestions will also work.
There is a sample on github.
https://github.com/awslabs/data-pipeline-samples/tree/master/samples/RDStoS3
Here is the code:
https://github.com/awslabs/data-pipeline-samples/blob/master/samples/RDStoS3/RDStoS3Pipeline.json
You can define a copy-activity in the Data Pipeline interface to extract data from a Postgres RDS instance into S3.
Create a data node of the type SqlDataNode. Specify table name and select query.
Set up the database connection by specifying the RDS instance ID (the instance ID is in your URL, e.g. your-instance-id.xxxxx.eu-west-1.rds.amazonaws.com) along with the username, password and database name.
Create a data node of the type S3DataNode.
Create a Copy activity and set the SqlDataNode as input and the S3DataNode as output.
Another option is to use an external tool like Alooma. Alooma can replicate tables from a PostgreSQL database hosted on Amazon RDS to Amazon S3 (https://www.alooma.com/integrations/postgresql/s3). The process can be automated and you can run it once a week.
I built a pipeline from scratch using the MySQL sample and the documentation as a reference.
You need to have the roles in place: DataPipelineDefaultResourceRole and DataPipelineDefaultRole.
I haven't loaded the parameters, so you need to go into the Architect and put in your credentials and folders.
Hope it helps.
{
"objects": [
{
"failureAndRerunMode": "CASCADE",
"resourceRole": "DataPipelineDefaultResourceRole",
"role": "DataPipelineDefaultRole",
"pipelineLogUri": "#{myS3LogsPath}",
"scheduleType": "ONDEMAND",
"name": "Default",
"id": "Default"
},
{
"database": {
"ref": "DatabaseId_WC2j5"
},
"name": "DefaultSqlDataNode1",
"id": "SqlDataNodeId_VevnE",
"type": "SqlDataNode",
"selectQuery": "#{myRDSSelectQuery}",
"table": "#{myRDSTable}"
},
{
"*password": "#{*myRDSPassword}",
"name": "RDS_database",
"id": "DatabaseId_WC2j5",
"type": "RdsDatabase",
"rdsInstanceId": "#{myRDSId}",
"username": "#{myRDSUsername}"
},
{
"output": {
"ref": "S3DataNodeId_iYhHx"
},
"input": {
"ref": "SqlDataNodeId_VevnE"
},
"name": "DefaultCopyActivity1",
"runsOn": {
"ref": "ResourceId_G9GWz"
},
"id": "CopyActivityId_CapKO",
"type": "CopyActivity"
},
{
"dependsOn": {
"ref": "CopyActivityId_CapKO"
},
"filePath": "#{myS3Container}#{format(#scheduledStartTime, 'YYYY-MM-dd-HH-mm-ss')}",
"name": "DefaultS3DataNode1",
"id": "S3DataNodeId_iYhHx",
"type": "S3DataNode"
},
{
"resourceRole": "DataPipelineDefaultResourceRole",
"role": "DataPipelineDefaultRole",
"instanceType": "m1.medium",
"name": "DefaultResource1",
"id": "ResourceId_G9GWz",
"type": "Ec2Resource",
"terminateAfter": "30 Minutes"
}
],
"parameters": [
]
}
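If you go with this definition, registering and activating it through the AWS CLI looks roughly like this (the pipeline name and the file name pipeline.json are placeholders, the pipeline ID comes back from the first call, and the parameter values can still be filled in from the Architect, as mentioned above):
aws datapipeline create-pipeline --name rds-to-s3 --unique-id rds-to-s3
aws datapipeline put-pipeline-definition --pipeline-id df-XXXXXXXXXXXX --pipeline-definition file://pipeline.json
aws datapipeline activate-pipeline --pipeline-id df-XXXXXXXXXXXX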
You can now do this with the aws_s3.query_export_to_s3 function within Postgres itself: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/postgresql-s3-export.html
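A minimal sketch of that approach via psql (bucket, key, region and connection details are placeholders; the aws_s3 extension must be installed in the database and the RDS instance needs an IAM role that can write to the bucket):
psql "host=your-instance-id.xxxxx.us-east-1.rds.amazonaws.com dbname=mydb user=myuser" <<'SQL'
CREATE EXTENSION IF NOT EXISTS aws_s3 CASCADE;
SELECT * FROM aws_s3.query_export_to_s3(
    'SELECT * FROM my_table',
    aws_commons.create_s3_uri('my-bucket', 'exports/my_table.csv', 'us-east-1'),
    options := 'format csv'
);
SQL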