I'm new to Pulumi so I'm struggling at the moment trying to run it in my Azure release pipeline in order to create my infrastructure.
During development I used local storage for my Pulumi state (pulumi login --local), created my stacks (dev being one of them), and was able to easily test my deployment script against my Azure subscription.
Now I've pushed my code to source control, created my build pipeline (which works), and I'm trying to create my infrastructure from the release pipeline by using the Pulumi Azure Pipelines task.
I've managed to configure it to use blob storage for the state file, but when running pulumi up --yes --skip-preview for the dev stack I get an error that the dev stack does not exist.
Do I need to do a pulumi stack init dev on every "store" that I use? Aren't the Pulumi.stack_name.yaml files enough?
Any advice on how to proceed is welcome, as the documentation on this is non-existent or not clear.
Thank you!
The error is probably caused by the stack not existing in your blob storage.
If you use pulumi login --local, the stack is managed on your local machine and is not synced to Azure Blob Storage. Check here for more login options.
In my test pipeline, I got the error no stack named 'dev' found when dev did not exist on app.pulumi.com (I use pulumi.com for storage). Once I created the dev stack on app.pulumi.com, it worked as expected.
So please check in your Azure Blob Storage backend whether the dev stack exists. If it does not, you need to create it there for your account.
If you want to migrate your local state to Azure Blob Storage, please check the steps here.
Once the stack exists in your Azure Blob Storage backend, you can run pulumi up --yes --skip-preview directly in the Pulumi task of the Azure DevOps pipeline. There is no need to run pulumi stack init dev again.
Please make sure the login args input is empty so the remote backend is used. If you specify --local, you will also get the error, because the stack does not exist on the agent machine.
You can also enable the option Create the stack if it does not exist to let the Pulumi task create the stack when it is not found in your Azure Blob Storage backend.
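For reference, a minimal sketch of the equivalent steps as a plain script task (not the Pulumi task itself), assuming placeholder values for the blob container, storage account, and a storageKey secret variable:

```yaml
steps:
  # Log in to the Azure Blob backend instead of --local, then make sure the
  # dev stack exists there before deploying.
  - script: |
      pulumi login azblob://<container>
      pulumi stack select dev || pulumi stack init dev
      pulumi up --yes --skip-preview
    displayName: Pulumi up against the blob backend
    env:
      AZURE_STORAGE_ACCOUNT: <storage-account>   # placeholder
      AZURE_STORAGE_KEY: $(storageKey)           # secret pipeline variable (placeholder)
```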
Here is an example from the official Pulumi documentation on integrating with Azure DevOps. Hope it helps!
I'm trying to deploy an Azure Function from an Azure DevOps repo via a DevOps pipeline and release.
When it gets to the deploy stage, I get an error message stating that the credentials cannot be null, but nowhere in the canned release components is there a place for any credentials, and none of my other pipelines ever have this problem.
I see some old references here but no clear answers.
Anyone have suggestions or fixes?
Credentials cannot be null
I can reproduce this issue in my pipeline.
The cause of the issue is that you are using the Publish Profile type of Azure Resource Manager service connection, and the Azure Function App deploy task is not able to read the credentials of a Publish Profile type service connection.
Here are two methods to solve the issue:
1. You can use the Azure Web App task to deploy the Function App instead.
Note: the Azure Web App task can be used to deploy to both Web Apps and Function Apps.
For example:
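A minimal YAML sketch, assuming placeholder names for the service connection and the Function App (adjust these and the package path to your own setup):

```yaml
- task: AzureWebApp@1
  displayName: Deploy the Function App with the Azure Web App task
  inputs:
    azureSubscription: 'my-service-connection'   # placeholder service connection
    appName: 'my-function-app'                   # placeholder Function App name
    package: '$(System.DefaultWorkingDirectory)/**/*.zip'
```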
2. You can change the service connection type to Service Principal, so the deploy task can read the credentials.
For example:
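A minimal sketch of the Azure Function App deploy task pointed at a Service Principal type connection; the connection and app names are placeholders:

```yaml
- task: AzureFunctionApp@1
  displayName: Deploy with a Service Principal based connection
  inputs:
    azureSubscription: 'my-sp-service-connection'   # Azure RM connection of type Service Principal (placeholder)
    appType: functionApp
    appName: 'my-function-app'                      # placeholder Function App name
    package: '$(System.DefaultWorkingDirectory)/**/*.zip'
```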
I am pushing an Azure Data Factory (ADF) to another environment via a CI/CD pipeline and YAML config file in Azure DevOps. I can successfully deploy, but one of my linked services becomes a "bad resource", although it works in the master branch where I published it.
Furthermore, I can neither delete nor edit this linked service in the target data factory; I keep getting the bad resource error. I suspect I need to edit something in the ARM file, but I don't really understand this error, nor can I find much information on anything similar.
{"stack":"Error: Error: Unable to save [SERVICENAME]. Bad resource\n at Rl.<anonymous> (https://adf.azure.com/app.06b0e174dd8e6fa8.js:1:11274843)\n at Generator.next (<anonymous>)\n at https://adf.azure.com/main.d1fe4ec6f69aa72f.js:1:66326\n at new c
What I want is that when I deploy my ADF to a new environment, it succeeds with connections intact, or at least ones that I can fix/edit.
EDIT: Even when I recreate the Linked Service I get the same error.
The answer to this is to store all of your connection credentials as secrets in Azure Key Vault and then reference them from the linked service (a sketch follows below). I am unclear why parameters used in a linked service do not transfer into the ARM template, causing it to become a "bad resource", while the Key Vault method translates into ARM correctly and the problem does not persist.
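For illustration, a minimal sketch of a linked-service definition whose connection string is pulled from Key Vault; the names MySqlLinkedService, AzureKeyVaultLS, and SqlConnectionString are placeholders, and the overall shape is an assumption based on the standard ADF Key Vault secret reference:

```json
{
  "name": "MySqlLinkedService",
  "properties": {
    "type": "AzureSqlDatabase",
    "typeProperties": {
      "connectionString": {
        "type": "AzureKeyVaultSecret",
        "store": {
          "referenceName": "AzureKeyVaultLS",
          "type": "LinkedServiceReference"
        },
        "secretName": "SqlConnectionString"
      }
    }
  }
}
```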
We have automated scripts that we would like to build and test on Azure DevOps, but our pipeline cannot run our test scripts on Azure.
We have a database service account that we want to configure on Azure, but we don't know how to go about it. Please assist.
Here is a well-explained video (by Hassan Habib from Microsoft) on exactly how to run a console app (that you create) in an Azure pipeline that securely gets credentials to immediately do stuff in Azure (https://youtu.be/ht0xhQyF1x4?t=1688).
He basically, in a handful of minutes, shows exactly how to:
Link pipeline variables to Key Vault secrets, so that when accessed, the variables do a get from Key Vault and return the secret value.
Securely link pipeline variables to environment variables.
Have the console app, as a step in the release pipeline, read those environment variables to get credentials to do stuff in Azure.
In his case, he created an Azure resource group.
In your case, if I'm understanding correctly, you could make a simple console app that runs in the pipeline, gets the credentials/connection strings for your database, does whatever it needs to in the DB, and could possibly test your scripts. A minimal sketch of the pattern is below.
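This YAML sketch assumes a service connection named my-service-connection, a vault named my-keyvault, a secret named DbConnectionString, and a console project path, all placeholders; the secret becomes a pipeline variable and is mapped into the console app's environment:

```yaml
steps:
  # Pull secrets from Key Vault into pipeline variables at runtime.
  - task: AzureKeyVault@2
    inputs:
      azureSubscription: 'my-service-connection'   # placeholder
      KeyVaultName: 'my-keyvault'                  # placeholder
      SecretsFilter: 'DbConnectionString'          # placeholder secret name

  # Run the console app; the secret must be mapped into the environment
  # explicitly, because secret variables are not exposed to scripts by default.
  - script: dotnet run --project ./DbTestRunner    # placeholder project path
    env:
      DB_CONNECTION_STRING: $(DbConnectionString)
```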
I was working with the az ml CLI v2 to deploy a real-time endpoint with the az ml online-deployment command through an Azure pipeline. I had double-confirmed that the service connection used in this pipeline task had been granted the permissions below in the Azure portal, but it still showed the same error.
ERROR: Error with code: You don't have permission to alter this storage account. Ensure that you have been assigned both Storage Blob Data Reader and Storage Blob Data Contributor roles.
Using the same service connection, we are able to create an online endpoint with az ml online-endpoint create in the same and other workspaces.
The issue was resolved. I did not change anything in the service principal, and running it the next day with the same YAML got past the error. I guess there was some propagation delay, but longer than usual.
I would like to copy files with the Azure File Copy task in an Azure pipeline.
I'm following instruction of https://praveenkumarsreeram.com/2021/04/14/azure-devops-copy-files-from-git-repository-to-azure-storage-account/
I'm using the automatically created service connection named "My Sandbox (a1111e1-d30e-4e02-b047-ef6a5e901111)".
I'm getting this error with the Azure blob file copy:
INFO: Authentication failed, it is either not correct, or expired, or does not have the correct permission -> github.com/Azure/azure-storage-blob-go/azblob.newStorageError, /home/vsts/go/pkg/mod/github.com/!azure/azure-storage-blob-go@v0.10.1-0.20201022074806-8d8fc11be726/azblob/zc_storage_error.go:42
RESPONSE Status: 403 This request is not authorized to perform this operation using this permission.
I'm assuming that the Azure pipeline has no access to Azure Storage.
I wonder how to find the service principal which should get access to Azure Storage.
I can also reproduce your issue on my side. Different Azure File Copy task versions use different versions of AzCopy behind the scenes, so they use different authentication methods to call the API for the operations.
There are two ways to fix the issue.
If you use the automatically created service connection, it should have the Contributor role on your storage account; you could use Azure File Copy task version 3.* instead of 4.*, and it will work.
If you want to use Azure File Copy task version 4.*, navigate to your storage account -> Access Control (IAM) -> add the service principal used in the service connection to the Storage Blob Data Contributor role (see the detailed steps here). It will also work; a sketch of that route follows below.
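A minimal YAML sketch of the version 4.* route, reusing the connection name from the question; the subscription, resource group, storage account, container, and service principal object ID are placeholders:

```yaml
steps:
  # One-time setup: grant the connection's service principal data-plane access
  # to the storage account (the role assignment can take a while to propagate).
  - task: AzureCLI@2
    inputs:
      azureSubscription: 'My Sandbox (a1111e1-d30e-4e02-b047-ef6a5e901111)'
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        az role assignment create \
          --assignee <sp-object-id> \
          --role "Storage Blob Data Contributor" \
          --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<storage-account>"

  # Copy to blob storage with version 4.* once the role assignment is in place.
  - task: AzureFileCopy@4
    inputs:
      SourcePath: '$(Build.SourcesDirectory)/files'   # placeholder source path
      azureSubscription: 'My Sandbox (a1111e1-d30e-4e02-b047-ef6a5e901111)'
      Destination: AzureBlob
      storage: '<storage-account>'
      ContainerName: '<container>'
```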