I have an HDInsightHive Azure Data Factory activity that uses HDIOnDemandLinkedService, but it is failing with an error. I have checked the blob storage account: it is a General Purpose V1 account on the Standard pricing tier, yet the issue persists. It was working until yesterday, but as of today I cannot figure out what changed.
Below is my HDIOnDemandLinkedService code:
{
  "name": "HDIOnDemandLinkedService",
  "properties": {
    "hubName": "ne1admapsimplymeasureddf01_hub",
    "type": "HDInsightOnDemand",
    "typeProperties": {
      "version": "3.6",
      "clusterSize": 2,
      "timeToLive": "01:00:00",
      "osType": "Linux",
      "coreConfiguration": {},
      "hBaseConfiguration": {},
      "hdfsConfiguration": {},
      "hiveConfiguration": {},
      "mapReduceConfiguration": {},
      "oozieConfiguration": {},
      "sparkConfiguration": {},
      "stormConfiguration": {},
      "yarnConfiguration": {},
      "additionalLinkedServiceNames": [],
      "linkedServiceName": "AzureStorageLinkedService"
    }
  }
}
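For context, the linked service above delegates storage to AzureStorageLinkedService. As a rough sketch (not the poster's actual definition), a Data Factory v1 Azure Storage linked service typically looks like this, with the connection string values as placeholders:

{
  "name": "AzureStorageLinkedService",
  "properties": {
    "type": "AzureStorage",
    "typeProperties": {
      "connectionString": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>"
    }
  }
}

The on-demand HDInsight cluster uses this account for its logs and job files, which is why the account kind and pricing tier are worth checking when cluster provisioning fails.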
Moving some TinyMCE plugins from 4 to 5. Been digging through the docs online, which have certainly helped, but I have found one thing I cannot resolve.
I have a plugin that chooses a file on the host server (not the client where the browser is running). The original source uses:
type: 'filepicker',
But that is giving me an error:
The chosen schema: "filepicker" did not exist in branches: {
  "alertbanner": {},
  "bar": {},
  "button": {},
  "checkbox": {},
  "colorinput": {},
  "colorpicker": {},
  "dropzone": {},
  "grid": {},
  "iframe": {},
  "input": {},
  "selectbox": {},
  "sizeinput": {},
  "textarea": {},
  "urlinput": {},
  "customeditor": {},
  "htmlpanel": {},
  "imagetools": {},
  "collection": {},
  "label": {},
  "table": {},
  "panel": {}
}
None of these seems like a replacement for filepicker. Am I missing something?
TIA
Andy
I dug through the image plugin. urlinput is the correct answer:
{
  type: 'urlinput',
  name: 'filePath',
  label: 'Document',
  filetype: 'file',
  value: filePath
}
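For anyone making the same migration, here is a minimal sketch of how that urlinput could sit inside a TinyMCE 5 dialog. insertLink and filePath are placeholders for your own plugin logic, not TinyMCE APIs:

editor.windowManager.open({
  title: 'Choose document',
  body: {
    type: 'panel',
    items: [
      { type: 'urlinput', name: 'filePath', label: 'Document', filetype: 'file' }
    ]
  },
  buttons: [
    { type: 'cancel', text: 'Cancel' },
    { type: 'submit', text: 'Save', primary: true }
  ],
  initialData: {
    // urlinput values are objects of the form { value, meta }, not plain strings
    filePath: { value: filePath, meta: {} }
  },
  onSubmit: function (api) {
    var data = api.getData();
    insertLink(data.filePath.value); // read .value, not the object itself
    api.close();
  }
});

Note that the browse button on a urlinput only appears when a file_picker_callback is configured for the matching filetype.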
I use an ARM template to deploy a storage account. However, I got an error saying: StorageAccountAlreadyExists: The storage account named xxx already exists.
My release pipeline is set to incremental mode, so it shouldn't really throw this error.
When I changed the storage account name to a new one, not only did it work the first time, but I could keep on deploying the same pipeline with no error ever thrown.
It looks like something specific to this account; however, I can't see anything special about it. The ARM template we use is also quite ordinary (something we took from official examples a while ago):
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "StorageDescriptor": {
      "type": "string",
      "defaultValue": "StorageAccount",
      "metadata": {}
    },
    "StorageAccountName": {
      "type": "string",
      "defaultValue": "[toLower(concat(parameters('StorageDescriptor'), resourceGroup().name))]",
      "metadata": { "description": "Override name for the storage account" }
    },
    "StorageType": {
      "type": "string",
      "defaultValue": "Standard_LRS",
      "allowedValues": [
        "Standard_LRS",
        "Standard_ZRS",
        "Standard_GRS",
        "Standard_RAGRS",
        "Premium_LRS"
      ]
    },
    "Environment": {
      "type": "string",
      "defaultValue": "PreProd",
      "metadata": { "description": "PreProd or Prod" }
    }
  },
  "variables": {},
  "resources": [
    {
      "name": "[parameters('StorageAccountName')]",
      "type": "Microsoft.Storage/storageAccounts",
      "location": "[resourceGroup().location]",
      "apiVersion": "2019-06-01",
      "dependsOn": [],
      "tags": {
        "displayName": "Web Job Storage Account"
      },
      "properties": {
        "accountType": "[parameters('StorageType')]"
      }
    }
  ],
  "outputs": {}
}
Even though your release pipeline is set to incremental mode, the storage account name must be globally unique across all of Azure, not just within your subscription.
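Since the name is global, a quick way to test whether it is taken at all is the standard Azure CLI name check ('xxx' stands for the account name from the error):

az storage account check-name --name xxx

If nameAvailable comes back false with reason AlreadyExists even though the account is not in your own subscription, another tenant owns the name and you have to pick a different one.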
ARM template deployment fails with a 409 error for one specific storage account
You need to check whether the storage account's attributes have been changed by somebody else through the Azure portal or PowerShell, so that they now differ from the ones specified in the ARM template.
To resolve this issue, try exporting the current template and updating it in the Azure DevOps repo. Then you can adjust this newly exported template file as needed and deploy with it.
As a test, I could keep on deploying the same pipeline with no error ever thrown.
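If you want to script the export step rather than use the portal, the Azure CLI can dump the current state of the resource group as a template (the resource group name is a placeholder):

az group export --name myResourceGroup > exported-template.json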
I am building an action with the new Actions Builder and everything is going pretty smoothly. I just set up account linking and can successfully link my account. However, once I do link my account, there is no token included in the subsequent requests for me to use, even though the account-linking status in the request is "LINKED". Can anyone shed any light on why I am not seeing a token?
For reference, here is a version of one of my requests.
{
  "requestJson": {
    "handler": {
      "name": "main"
    },
    "intent": {
      "name": "actions.intent.MAIN",
      "params": {},
      "query": "Talk to my new app"
    },
    "scene": {
      "name": "actions.scene.START_CONVERSATION",
      "slotFillingStatus": "UNSPECIFIED",
      "slots": {},
      "next": {
        "name": "ListPrompt"
      }
    },
    "session": {
      "id": "ABwppHE7M6NS8KdyjljEptrtZZ5GkE3qDdaiwjYbL9ehrA-t_c-ZsCrZ_WhN0ZTG5lXXXXXXhU6Im5vgeSwow",
      "params": {},
      "typeOverrides": [],
      "languageCode": ""
    },
    "user": {
      "locale": "en-US",
      "params": {},
      "accountLinkingStatus": "LINKED",
      "verificationStatus": "VERIFIED",
      "packageEntitlements": [],
      "lastSeenTime": "2020-07-13T12:02:42Z"
    },
    "home": {
      "params": {}
    },
    "device": {
      "capabilities": [
        "SPEECH",
        "RICH_RESPONSE",
        "LONG_FORM_AUDIO"
      ]
    }
  }
}
The Google docs for the Account Linking with the new Actions Builder have now been updated with additional information. The token is now provided within the headers of the incoming request. Details of how to find and decode this can be found at https://developers.google.com/assistant/identity/google-sign-in#handle_data_access_requests
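As a hedged illustration of what that looks like in a Node.js webhook (CLIENT_ID is your own OAuth client ID from the Actions console; google-auth-library does the verification):

const { OAuth2Client } = require('google-auth-library');

const CLIENT_ID = 'your-oauth-client-id';
const client = new OAuth2Client(CLIENT_ID);

// Express-style handler: with Google Sign-In linking, the ID token
// arrives in the 'authorization' header of the request from Google.
async function handleWebhook(req, res) {
  const idToken = req.headers.authorization;
  const ticket = await client.verifyIdToken({ idToken: idToken, audience: CLIENT_ID });
  const payload = ticket.getPayload(); // sub, email, name, etc.
  console.log('Linked user:', payload.email);
  // ... build and send the Actions response here
}

If you use the @assistant/conversation library instead, passing your client ID to conversation({ clientId }) should perform this verification for you and expose the decoded payload at conv.user.params.tokenPayload.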
Context:
I deploy a storage account as well as one or more containers with the following ARM template, via an Azure DevOps resource group deployment task:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": {
      "type": "string",
      "metadata": {
        "description": "The name of the Azure Storage account."
      }
    },
    "containerNames": {
      "type": "array",
      "metadata": {
        "description": "The names of the blob containers."
      }
    },
    "location": {
      "type": "string",
      "metadata": {
        "description": "The location in which the Azure Storage resources should be deployed."
      }
    }
  },
  "resources": [
    {
      "name": "[parameters('storageAccountName')]",
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2018-07-01",
      "location": "[parameters('location')]",
      "kind": "StorageV2",
      "sku": {
        "name": "Standard_LRS",
        "tier": "Standard"
      },
      "properties": {
        "accessTier": "Hot"
      }
    },
    {
      "name": "[concat(parameters('storageAccountName'), '/default/', parameters('containerNames')[copyIndex()])]",
      "type": "Microsoft.Storage/storageAccounts/blobServices/containers",
      "apiVersion": "2018-03-01-preview",
      "dependsOn": [
        "[parameters('storageAccountName')]"
      ],
      "copy": {
        "name": "containercopy",
        "count": "[length(parameters('containerNames'))]"
      }
    }
  ],
  "outputs": {
    "storageAccountName": {
      "type": "string",
      "value": "[parameters('storageAccountName')]"
    },
    "storageAccountKey": {
      "type": "string",
      "value": "[listKeys(parameters('storageAccountName'), '2018-02-01').keys[0].value]"
    },
    "storageContainerNames": {
      "type": "array",
      "value": "[parameters('containerNames')]"
    }
  }
}
The input can be, for example:
-storageAccountName 'stor1' -containerNames [ 'con1', 'con2' ] -location 'westeurope'
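For local testing outside the pipeline, a hedged equivalent invocation with the (pre-2020) Azure CLI deployment command, assuming the template is saved as template.json and the resource group rg1 already exists:

az group deployment create \
  --resource-group rg1 \
  --template-file template.json \
  --parameters storageAccountName=stor1 containerNames='["con1","con2"]' location=westeurope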
In a next step, I create Stored Access Policies for the deployed containers.
Problem:
The first time I do this, everything works fine. But if I execute the pipeline a second time, the Stored Access Policies get deleted by the deployment of the template. The storage account itself, with its containers and blobs, is not deleted (as it should be). This is unfortunate because I want to keep the Stored Access Policies, with their start time and expiry time, as deployed the first time; furthermore, I expect that the SAS tokens based on them also become invalid (not tested so far).
Questions:
Why is this happening?
How can I avoid this problem and keep the Stored Access Policies?
Thanks
After doing some investigation, this seems to be by design. When deploying ARM templates for storage accounts, the PUT operation is used, i.e. elements that are not specified within the template are removed. As it is not possible to specify Stored Access Policies for containers within an ARM template for storage accounts, existing ones get deleted when the template is redeployed.
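A workaround that follows from this: re-create the Stored Access Policies in a scripted step after every deployment instead of only once. A sketch with the Azure CLI (account, container, policy name, dates, and permissions are placeholders; authentication comes from your logged-in context or an --account-key argument):

# Re-create a stored access policy that the template redeployment removed
az storage container policy create \
  --account-name stor1 \
  --container-name con1 \
  --name readpolicy \
  --start 2020-01-01T00:00:00Z \
  --expiry 2021-01-01T00:00:00Z \
  --permissions rl

Since a SAS that references a stored access policy does so by the policy's name, re-creating the policy with the same name and parameters should keep previously issued tokens usable, but verify that against your own tokens.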
I have a 16.04-LTS Ubuntu virtual machine in my Azure account, and I am trying Azure Disk Encryption on it using this Azure CLI sample script. After running the encryption script, the Azure portal shows the OS disk as encrypted: there is "Enabled" under the Encryption header.
However, the Azure REST API (api link) for getting information about the virtual machine does not return encryptionSettings under properties.storageProfile.osDisk. I tried both "Model View" and "Model View and Instance View" with api-version 2017-03-30 as well as 2017-12-01. Here is the partial response from the API:
{
  "name": "ubuntu",
  "properties": {
    "osProfile": {},
    "networkProfile": {},
    "storageProfile": {
      "imageReference": {
        "sku": "16.04-LTS",
        "publisher": "Canonical",
        "version": "latest",
        "offer": "UbuntuServer"
      },
      "osDisk": {
        "name": "ubuntu-OsDisk",
        "diskSizeGB": 30,
        "managedDisk": {
          "storageAccountType": "Premium_LRS",
          "id": "..."
        },
        "caching": "ReadWrite",
        "createOption": "FromImage",
        "osType": "Linux"
      },
      "dataDisks": []
    },
    "diagnosticsProfile": {},
    "vmId": "",
    "hardwareProfile": {
      "vmSize": "Standard_B1s"
    },
    "provisioningState": "Succeeded"
  },
  "location": "eastus",
  "type": "Microsoft.Compute/virtualMachines",
  "id": ""
}
But for my other encrypted Windows virtual machine, I get the correct response, which contains encryptionSettings in properties.storageProfile.osDisk:
{
  "name": "win1",
  "properties": {
    "osProfile": {},
    "networkProfile": {},
    "storageProfile": {
      "imageReference": {
        "sku": "2016-Datacenter-smalldisk",
        "publisher": "MicrosoftWindowsServer",
        "version": "latest",
        "offer": "WindowsServer"
      },
      "osDisk": {
        "name": "win1_OsDisk_1",
        "diskSizeGB": 31,
        "managedDisk": {
          "storageAccountType": "Premium_LRS",
          "id": "..."
        },
        "encryptionSettings": {
          "diskEncryptionKey": {
            "secretUrl": "...",
            "sourceVault": {
              "id": "..."
            }
          },
          "keyEncryptionKey": {
            "keyUrl": "...",
            "sourceVault": {
              "id": "..."
            }
          },
          "enabled": true
        },
        "caching": "ReadWrite",
        "createOption": "FromImage",
        "osType": "Windows"
      },
      "dataDisks": []
    },
    "diagnosticsProfile": {},
    "vmId": "...",
    "hardwareProfile": {
      "vmSize": "Standard_B1s"
    },
    "provisioningState": "Succeeded"
  },
  "location": "eastus",
  "type": "Microsoft.Compute/virtualMachines",
  "id": "..."
}
Why is the Virtual Machine Get API not returning the encryptionSettings for some VMs? Any help would be greatly appreciated.
I created a VM using the following command:
az vm create \
  --resource-group shuivm \
  --name shuivm \
  --image Canonical:UbuntuServer:16.04-LTS:latest \
  --admin-username azureuser \
  --generate-ssh-keys
When I use the following API, I can get the encryption settings:
https://management.azure.com/subscriptions/**********/resourceGroups/shuivm/providers/Microsoft.Compute/virtualMachines/shuivm?api-version=2017-03-30
Note: only once the OS has been encrypted successfully can the API return the encryption settings.
This is because there are two types of at-rest disk encryption for Azure VMs, and they are not reported in the same part of the Azure management API:
Server-Side Encryption: this is what you see in the encryptionSettings section of the VM/compute API when you get the VM details. It shows whether you are encrypting with a customer-managed key or a platform-managed key.
ADE: Azure Disk Encryption is actually a VM extension, so you can find it in the VM Extensions API instead (see the CLI sketch below).
See: https://learn.microsoft.com/en-us/rest/api/compute/virtualmachineextensions/list
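A hedged way to check both from the Azure CLI (resource group and VM name taken from the example above):

# ADE status for the VM
az vm encryption show --resource-group shuivm --name shuivm

# ADE is installed as a VM extension, so it also shows up here
az vm extension list --resource-group shuivm --vm-name shuivm --output table

On a Linux VM with ADE enabled, you should see an extension named along the lines of AzureDiskEncryptionForLinux in the list.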