I was testing the sample project that demonstrates the Unreal Engine plug-in for Azure Digital Twins, but I have a problem. In step 1, when deploying the Azure resources, the script generates several errors and does not create the resource group. Could you help me with a solution?
I was told it may be an error in the command ./deployment/deploy/deploy.ps1:
ERROR: {"code": "InvalidDeploymentParameterValue", "message": "The value of the deployment parameter 'appRegPassword' is null. Please specify the value or use the parameter reference. See https://aka.ms/resource-manager-parameter-files for details."}
Exception: /home/framework/azure-digital-twins-unreal-integration/deployment/deploy/deploy.ps1:456
Line |
456 | throw "Something went wrong with the deployment of the resource group ...
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| something went wrong with the deployment of the resource pool. End of script.
Thank you very much.
There are multiple issues on that particular GitHub repository. It seems the deployment script is broken. You can try running the script with the changes in this pull request. If that doesn't work, creating an issue on the repository will hopefully get you the help you need.
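In the meantime, if you want to narrow it down yourself, the error suggests the script never obtained a client secret for the app registration it creates. As a rough diagnostic (the template path, resource group name, and location below are placeholders, not the repo's actual values), you can try running the resource group deployment manually from Azure PowerShell with the parameter supplied explicitly:
# Hypothetical sketch: pass the app registration secret explicitly so the template parameter is not null.
# Use a SecureString if the template declares appRegPassword as securestring; otherwise pass a plain string.
$appRegPassword = ConvertTo-SecureString "<client-secret-from-your-app-registration>" -AsPlainText -Force
New-AzResourceGroup -Name "adt-unreal-rg" -Location "westus2"
New-AzResourceGroupDeployment -ResourceGroupName "adt-unreal-rg" -TemplateFile "./deployment/azuredeploy.json" -appRegPassword $appRegPassword
If that succeeds, the problem is almost certainly in how deploy.ps1 generates or passes appRegPassword rather than in the template itself.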
All, when running a build pipeline using Azure DevOps with an ARM template, the process is consistently failing when trying to deploy a dataset or a reference to a dataset, with this error:
ARM Template deployment: Resource Group scope (AzureResourceManagerTemplateDeployment)
BadRequest: The document creation or update failed because of invalid reference 'dataset_1'.
I've tried renaming the dataset and also recreating it to see if that would help.
I then deleted the dataset_1.json file from the repo and still get the same message, so I think it's some reference to this dataset and not the dataset itself. I've looked through all the other files for references to it, but they all look fine.
Any ideas on how to troubleshoot this?
thanks
Try this:
It looks like you have created the 'myTestLinkedService' linked service and tested the connection, but you haven't published it yet, and you are trying to reference that linked service in the new dataset you are creating with PowerShell.
In order to reference any Data Factory entity from PowerShell, please make sure those entities are published first. Please try publishing the linked service from the portal first, and then run your PowerShell script to create the new dataset/activity.
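If you want to keep everything in PowerShell rather than publish from the portal, a minimal sketch of the order of operations (resource group, factory, and file names below are placeholders) would be to deploy the linked service definition before the dataset that references it:
# Assumes the Az.DataFactory module and JSON definition files exported from your factory.
Set-AzDataFactoryV2LinkedService -ResourceGroupName "my-rg" -DataFactoryName "my-adf" -Name "myTestLinkedService" -DefinitionFile ".\myTestLinkedService.json"
Set-AzDataFactoryV2Dataset -ResourceGroupName "my-rg" -DataFactoryName "my-adf" -Name "dataset_1" -DefinitionFile ".\dataset_1.json"
The key point is simply that the linked service must already exist in the service (published), not just in your local or Git copy, before anything else references it.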
I think I found the issue. When I went into the detailed logs, I found that in addition to this error there was an error message about an invalid SQL connection string, so I thought it may be related, since the dataset in question uses an Azure SQL Database linked service.
I adjusted the connection string and this seems to have solved the issue.
I deleted my Kubeflow (KF) cluster last night to create a new one (using a kubectl cluster command, not kfctl delete), and when I tried to create a new one, it fails; it does not work with either the CLI or the Console. I found that other people have run into this issue before, for example (here and here).
"However, as I said even with CLI my deployment fails, the error from console is:
ailed to apply: (kubeflow.error): Code 500 with message: coordinator Apply failed for gcp: (kubeflow.error): Code 500 with message: gcp apply could not update deployment manager Error could not update storage-kubeflow.yaml; Insert deployment error: googleapi: Error 403: Request had insufficient authentication scopes.
More details:
Reason: insufficientPermissions, Message: Insufficient Permission"
and the error I get from the Console is:
"Please enable APIs for your project and try again
Please enable cloud resource manager API: https://console.developers.google.com/apis/api/cloudresourcemanager.googleapis.com/ and iam API: https://console.developers.google.com/apis/api/iam.googleapis.com/"
Note that this error is wrong; all the APIs are already enabled. I'm quite sure this is a bug in KF, but I'm not sure how to find a workaround. Any thoughts?
With the CLI, I'm using my own account, which has "owner" privileges.
Thanks
It seems you have an issue with IAM and the installation of Kubeflow, a 3rd-party product that itself is not supported by us; nevertheless, I went ahead and dug up some information about this machine learning product.
The main issues (and it seems you have already covered permissions) are permissions, the number of projects, and some fine-grained configuration points.
I was checking and found the following resources that may help:
a) Troubleshooting Kubeflow [1]
b) Deploying Kubeflow on GKE [2]
c) Kubeflow auto-deployer for GKE [3]
There is also some discussion about a mismatched permissions setting in Kubeflow that may be worth reading [4].
Finally, there is a support group, "google-kubeflow-support#google.com", which also operates on a best-effort basis given the nature of Kubeflow, that may come in handy.
I trust this information will be useful for you in solving your issue.
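If it helps, two quick checks that are often suggested for the "insufficient authentication scopes" error, using standard gcloud commands (the project ID is a placeholder): confirm the required APIs really are enabled in the project you are deploying into, and refresh the credentials that kfctl/Deployment Manager pick up.
gcloud config set project <your-project-id>
gcloud services list --enabled
gcloud services enable cloudresourcemanager.googleapis.com iam.googleapis.com deploymentmanager.googleapis.com
gcloud auth login
gcloud auth application-default login
This is only a sketch of the usual first steps, not a confirmed fix for the Kubeflow bug you suspect.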
I have created a VMSS using a custom image, and I have hosted a web application built in .NET MVC on the VMSS. I have configured CI/CD from Azure DevOps by following https://learn.microsoft.com/en-us/azure/devops/pipelines/apps/cd/azure/deploy-azure-scaleset?view=azure-devops.
It shows the error "D:\a\_temp\1575277721063\packer\packer.exe failed with return code: 1". Any suggestion/recommendation is appreciated.
Below are some of the failed commands from the log:
1. azure-arm: resources.DeploymentsClient#CreateOrUpdate: Failure sending request: StatusCode=200 -- Original Error: Long running operation terminated with status 'Failed': Code="DeploymentFailed" Message="At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details."
2. Some builds didn't complete successfully and had errors:
2019-12-02T09:57:31.5222618Z --> azure-arm: resources.DeploymentsClient#CreateOrUpdate: Failure sending request: StatusCode=200 -- Original Error: Long running operation terminated with status 'Failed': Code="DeploymentFailed" Message="At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details."
3. 2019-12-02T09:57:31.5222868Z ==> Builds finished but no artifacts were created.
Since the error message (DeploymentFailed) from the pipeline is a generic one, it would be tough to investigate the issue without looking at the underlying logs or your pipeline details.
For troubleshooting it further, please try the following:
View the deployment history with Azure Resource Manager, as mentioned in the error message itself (see the sketch after this list).
Gather logs to diagnose problems, such as debug/verbose pipeline logs, worker/agent diagnostic logs, etc.
Look at some common issues and resolutions if it helps.
Send feedback and report problems through the Developer Community for Azure DevOps.
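For the first item, a quick way to find the specific resource operation that failed (the resource group and deployment names below are placeholders; use the group Packer deployed into) is with Azure PowerShell:
# List recent deployments in the resource group, then drill into the failed one.
Get-AzResourceGroupDeployment -ResourceGroupName "my-vmss-rg"
Get-AzResourceGroupDeploymentOperation -ResourceGroupName "my-vmss-rg" -DeploymentName "<failed-deployment-name>"
The status message on the failed operation is usually far more specific than the generic DeploymentFailed code.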
My ARM template resource group deployment fails in VSTS.
I get an error without any specific reference to parameter that has an issue: "One of the deployment parameters has an empty key. Please see https://aka.ms/arm-deploy/#parameter-file for details."
The referenced URL contains general information, with one comment asking the same question, but no answer to it.
The person asking it alluded that it may have something to do with the version of the deployment step (2.*) and it no longer using PowerShell. I went through the template back and forth, comparing parameters in Beyond Compare, and nothing sticks out...
Does anyone know what does this error mean?
I had the same issue and found out that some parameters had a space in their values.
So you should write -adminUsername "$(vmuser)".
This works for me:
Check that your parameter key or value does not have a space in it.
If your value requires a space, then use quotes ("").
Check this link.
Example:
direct value: -param1 "Value with Space"
value from pipeline variables: -param1 "$(valueFromVariables)".
It means you've got a parameter key in your deployment template without a name. For example "-" instead of "-parametername", or "- parametername" (notice the space).
It can also happen if you manage to paste an 'em-dash' (e.g. from a web browser) instead of a standard dash.
We had the same as matendie; a space between the dash and the parameter name:
- pricingTier "standard"
^ note the space
So, I'm not sure what the issue was, but I gave up on trying to identify the problem and deleted the release definition. Recreating it from scratch using the same template works fine now...
Maybe the definition somehow got corrupted.
Not sure, but the new one is not having this issue.
Thanks
In my case the problem was with the template parameters override. I needed to put the parameter value in quotes, e.g. "DEV".
Ran into this the other day. The release pipeline used to be working, and it suddenly started failing continuously with this error:
Error text:
##[error] One of the deployment parameters has an empty key. Please see https://aka.ms/resource-manager-parameter-files for details.
##[warning] Validation errors were found in the Azure Resource Manager template. This can potentially cause template deployment to fail. Task failed while creating or updating the template deployment.. Please follow https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/template-syntax
Starting Deployment.
Deployment name is TemplateDeployment-20220504-******-****
There were errors in your deployment. Error code: InvalidDeploymentParameterKey.
##[error] One of the deployment parameters has an empty key. Please see https://aka.ms/resource-manager-parameter-files for details.
##[error] Check out the troubleshooting guide to see if your issue is addressed: https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-resource-group-deployment?view=azure-devops#troubleshooting
##[error] Task failed while creating or updating the template deployment.
Change that caused the error
I had changed the build pipeline so that the build numbers would now have spaces in them: the format changed from my-build-number to my build number. I was still using template parameter overrides this way: -buildNumber $(Build.BuildNumber).
This would expand to -buildNumber my build number, which breaks the command-line processing of the ARM template deployment release task.
Solution
I used quotes for my build number variable: -buildNumber "$(Build.BuildNumber)".
Now this expands to -buildNumber "my build number", and the Azure Resource Manager (ARM) template deployment release task is happy.
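More generally, in the "Override template parameters" box, any value that can expand to something containing spaces needs quotes around it (the parameter names here are just the examples from this thread):
-buildNumber "$(Build.BuildNumber)" -adminUsername "$(vmuser)" -pricingTier "standard"
Single-token values work without quotes, but quoting everything is the safer habit.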
I am getting an error when deploying ADF pipelines. I don't understand how to resolve this error message:
Pipeline Populate SDW_dbo_UserProfiles from SDW_dbo_CTAS_ApptraxDevices is in Failed state. Cannot set active period Start=05/30/2017 00:00:00, End=05/29/2018 23:59:59 for pipeline 'Populate SDW_dbo_UserProfiles from SDW_dbo_CTAS_ApptraxDevices' due to conflicts on Output: SDW_dbo_UserProfiles with Pipeline: Populate SDW_dbo_UserProfiles from SDW_dbo_Manifest, Activity StoredProcedureActivityTemplate, Period: Start=05/30/2017 00:00:00, End=05/30/2018 00:00:00.
Try changing the active period or using autoResolve option when setting the active period.
I am authoring and deploying from within Visual Studio 2015. All of my pipelines have the same values for Start and End:
"start": "2017-05-30T00:00:00Z",
"end": "2018-05-29T23:59:59Z"
How do I resolve this issue?
Visual Studio can be fun sometimes when it comes to validating your JSON, because not only does it check everything in your solution, it also validates against what you already have deployed in Azure!
I suspect this error is because there is a pipeline you have already deployed that now differs from the one in Visual Studio. If you delete the affected pipeline from ADF in Azure manually and then redeploy, you should be fine.
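If you prefer scripting the cleanup over clicking through the portal, a rough sketch with the (v1) Data Factory PowerShell cmdlets might look like this (the resource group and factory names are placeholders; use Remove-AzureRmDataFactoryPipeline instead if you are on the older AzureRM module):
# Remove the already-deployed pipeline that conflicts on the SDW_dbo_UserProfiles output, then redeploy from Visual Studio.
Remove-AzDataFactoryPipeline -ResourceGroupName "my-rg" -DataFactoryName "my-adf" -Name "Populate SDW_dbo_UserProfiles from SDW_dbo_Manifest" -Force
Which pipeline to remove depends on which one in Azure now differs from your solution.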
Sadly, the tooling isn't yet clever enough to understand which values should take precedence and be overwritten at deployment time. So for now it simply errors because of a mismatch, any mismatch!
You will also encounter similar issues if you remove datasets from your solution. They will still be used for validation at deployment time because the wizard first deploys all new things before trying to delete the old. I've fed this back to Microsoft already as an issue that needs attention for complex solutions with changing schedules.
Hope this helps.