We have a DevOps pipeline with an Azure CLI task that triggers an ADF pipeline. All was working until last Friday, but now the same task started giving the error: ModuleNotFound: azure.core.rest.
The Azure CLI task looks like this:
az login --service-principal --username $(test-service-principle-id) --password $(test-service-principle-password) --tenant $(TENANT_ID)
az config set extension.use_dynamic_install=yes_without_prompt
$runId=az datafactory pipeline create-run --factory-name test-adf --name "pl_load_training_data" --resource-group test-rg --query "runId" --output tsv
I have added azure-core to requirements.txt, and running pip list | grep azure-core also shows that it is installed.
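For what it's worth, the Azure CLI ships with its own bundled Python, so the azure-core that pip installs into the job's Python is not necessarily the one the CLI and its datafactory extension load. A minimal diagnostic sketch (reusing the same service connection variable as an assumption) to see which environments are in play:

- task: AzureCLI@2
  displayName: 'Inspect Azure CLI Python environment'
  inputs:
    azureSubscription: $(azureSubscription)   # assumed service connection variable
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # az --version prints the CLI version, installed extensions and the Python it runs on
      az --version
      # compare with the agent's own Python, which is what `pip list` inspects
      which python3
      python3 -m pip show azure-core || true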
Here is the trace:
I am trying to configure a DevOps pipeline so that a Databricks workspace automatically pulls the latest version of the repo when the master branch is updated.
I am able to generate the DB_PAT, but I cannot seem to get it into the DATABRICKS_TOKEN variable properly, so databricks repos update fails with the following error:
Error: JSONDecodeError: Expecting value: line 1 column 1 (char 0)
##[debug]Exit code 1 received from tool '/usr/bin/bash'
##[debug]STDIO streams have closed for tool '/usr/bin/bash'
##[error]Bash exited with code '1'.
This is the YAML file that I ran in Azure DevOps.
trigger:
- master
jobs:
- job: PullReposUpdate
  steps:
  - task: AzureCLI@2
    name: DbTokenGen
    inputs:
      azureSubscription: $(azureSubscription)
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        ADBWORKSPACENAME="adb-$(clientName)-$(stageName)"
        echo "$ADBWORKSPACENAME"
        TENANT_ID=$(az account show --query tenantId --output tsv)
        WORKSPACE_ID=$(az resource show --resource-type Microsoft.Databricks/workspaces --resource-group $RGNAME --name $ADBWORKSPACENAME --query id --output tsv)
        TOKEN=$(az account get-access-token --resource 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d | jq --raw-output '.accessToken')
        AZ_TOKEN=$(az account get-access-token --resource https://management.core.windows.net/ | jq --raw-output '.accessToken')
        DB_PAT=$(curl --silent https://$(region).azuredatabricks.net/api/2.0/token/create \
          --header "Authorization: Bearer $TOKEN" \
          --header "X-Databricks-Azure-SP-Management-Token:$AZ_TOKEN" \
          --header "X-Databricks-Azure-Workspace-Resource-Id:$WORKSPACE_ID" \
          --data '{ "lifetime_seconds": 1200, "comment": "Azure DevOps pipeline" }' \
          | jq --raw-output '.token_value')
        echo "##vso[task.setvariable variable=DB_PAT]$DB_PAT"
      failOnStandardError: true
    displayName: 'Generate Token for Databricks'
  - script: |
      pip install --upgrade databricks-cli
    displayName: 'Install dependencies'
  - script: |
      databricks repos update --path $(STAGING_DIRECTORY) --branch "$(branchName)"
    env:
      DATABRICKS_HOST: $(databricks-url)
      DATABRICKS_TOKEN: $(DbTokenGen.DB_PAT)
    displayName: 'Update Production Databricks Repo'
How can I pass the DB_PAT successfully to the Databricks CLI environment? Any suggestion is highly appreciated.
Thank you.
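For reference, the ##vso[task.setvariable] logging command only makes a variable addressable as $(StepName.VariableName) from a later step when it is marked as an output variable. A minimal sketch of that pattern, assuming the same DbTokenGen step name (not a verified fix for the setup above):

- task: AzureCLI@2
  name: DbTokenGen
  inputs:
    azureSubscription: $(azureSubscription)
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # ...token generation as above, then:
      # isOutput=true makes DB_PAT addressable from later steps in this job
      echo "##vso[task.setvariable variable=DB_PAT;isOutput=true]$DB_PAT"
- script: |
    databricks repos update --path $(STAGING_DIRECTORY) --branch "$(branchName)"
  env:
    DATABRICKS_HOST: $(databricks-url)
    # an output variable from a named step is referenced as StepName.VariableName
    DATABRICKS_TOKEN: $(DbTokenGen.DB_PAT)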
pipelines.yml
- script: |
    az login --allow-no-subscriptions -u xx -p xx
    az devops configure --defaults organization=https://dev.azure.com/xx project=xx
    az devops user list
  displayName: 'Login Azure DevOps Extension'
Result:
ERROR: TF400813: The user 'xxx' is not authorized to access this resource.
But when I run the same commands on my own computer (Ubuntu 20):
az login --allow-no-subscriptions -u xx -p xx
az devops configure --defaults organization=https://dev.azure.com/xx project=xx
az devops user list
the output is normal.
Can anyone help?
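For what it's worth, az devops commands inside a pipeline are commonly authenticated with a PAT through the AZURE_DEVOPS_EXT_PAT environment variable rather than az login with user credentials. A minimal sketch using $(System.AccessToken) as the token (the build service identity still needs access to the organization and project, so this may or may not clear the TF400813 above):

- script: |
    az devops configure --defaults organization=https://dev.azure.com/xx project=xx
    az devops user list
  displayName: 'az devops via PAT'
  env:
    # the azure-devops CLI extension picks this up for authentication
    AZURE_DEVOPS_EXT_PAT: $(System.AccessToken)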
I have created a pipeline to import an existing Azure resource into Terraform. Terraform import requires provider details or environment variables, and the details below have to be extracted from the service connection.
steps:
- task: AzureCLI@2
  displayName: Terraform Init
  inputs:
    azureSubscription: ${{ parameters.service_connection }}
    addSpnToEnvironment: true
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      export ARM_CLIENT_ID=$servicePrincipalId
      export ARM_CLIENT_SECRET=$servicePrincipalKey
      export ARM_SUBSCRIPTION_ID=$(az account show --query id | xargs)
      export ARM_TENANT_ID=$(az account show --query tenantId | xargs)
      ls
      terraform init -upgrade -input=false \
        -backend-config="subscription_id=${{ parameters.tf_state_subscription_id }}" \
        -backend-config="tenant_id=$tenantId" \
        -backend-config="client_id=$servicePrincipalId" \
        -backend-config="client_secret=$servicePrincipalKey" \
        -backend-config="resource_group_name=${{ parameters.resource_group_name }}" \
        -backend-config="storage_account_name=${{ parameters.storage_account_name }}" \
        -backend-config="container_name=${{ parameters.tf_state_key }}" \
        -backend-config="key=${{ parameters.tf_state_key }}.tfstate"
      if [ $(az resource list --name pytestkeyvault --query '[].id' -o tsv) != null ]
      then
        echo "using Keyvault $(az resource list --name pytestkeyvault --query '[].id' -o tsv)"
        terraform import azurerm_key_vault.this $(az resource list --name pytestkeyvault --query '[].id' -o tsv)
      else
        echo "Keyvault does not exist"
      fi
      echo $ARM_CLIENT_ID
The exported environment variable ARM_CLIENT_ID is empty. The variables below are not being exported as environment variables:
echo $ARM_CLIENT_ID
echo $ARM_CLIENT_SECRET
echo $ARM_SUBSCRIPTION_ID
echo $ARM_TENANT_ID
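Worth noting: variables exported with export only exist for the duration of that one script; if the echo commands above run in a separate task, they will always print nothing. A minimal sketch of carrying a value forward to later tasks with the task.setvariable logging command (variable names are illustrative, and secrets should be marked issecret=true):

- task: AzureCLI@2
  displayName: Export SPN details
  inputs:
    azureSubscription: ${{ parameters.service_connection }}
    addSpnToEnvironment: true
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # persist values for later tasks instead of relying on `export`
      echo "##vso[task.setvariable variable=ARM_CLIENT_ID]$servicePrincipalId"
      echo "##vso[task.setvariable variable=ARM_CLIENT_SECRET;issecret=true]$servicePrincipalKey"
- script: |
    # the value set above is substituted here as a pipeline variable
    echo $(ARM_CLIENT_ID)
  displayName: Check variable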
For my setup, I could not access the service principal from Azure PowerShell, but I could from the Azure CLI.
This post pointed me in the right direction, check it out:
https://www.integration-playbook.io/docs/combining-az-cli-and-azure-powershell-az-modules-in-a-pipeline
In my experience of trying every possible variation of setting environment variables, it seems ADO build agents don't allow persisting ARM_CLIENT_SECRET as an environment variable.
So the workaround I had to do was set the environment variables at the task level (instead of at the shell/machine level):
- script: |
    terraform init # ...rest of your CLI arguments/backend-config flags
  env:
    ARM_CLIENT_SECRET: $(client_secret)
  displayName: Terraform Init
Edit:
IMO, just running terraform init yourself via the CLI is better than using the AzureCLI@2 task, which is a confusing black box that honestly makes it harder/more verbose to do the same thing than the plain CLI command.
Try using the system variables $env:servicePrincipalId, $env:servicePrincipalKey, and $env:tenantId to get the SPN details.
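Those variables are only populated when addSpnToEnvironment: true is set on the Azure CLI task. A minimal sketch with a PowerShell Core inline script (the service connection name is a placeholder):

- task: AzureCLI@2
  displayName: Read SPN details
  inputs:
    azureSubscription: my-service-connection   # placeholder
    addSpnToEnvironment: true
    scriptType: pscore
    scriptLocation: inlineScript
    inlineScript: |
      # populated by the task only when addSpnToEnvironment is true
      Write-Host "Client ID: $env:servicePrincipalId"
      Write-Host "Tenant ID: $env:tenantId"
      # $env:servicePrincipalKey holds the secret; avoid printing it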
I created a Docker container using a Dockerfile.
&& apk add --virtual=build gcc libffi-dev musl-dev openssl-dev make python3-dev \
&& pip3 --no-cache-dir install azure-cli==${AZURE_CLI_VERSION} \
but the Azure DevOps container job still fails with the error
##[error]Azure CLI 2.x is not installed on this machine.
Can you please let me know if there is anything I can do with the PATH, or do I need to install the Azure CLI by some other means?
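One quick check is to run az from a plain script step inside the same container to see what the agent actually resolves; a minimal diagnostic sketch (assuming the container job is already configured):

- script: |
    # confirm which binary (if any) the agent resolves, and its version
    which az || echo "az not on PATH"
    az --version || true
    echo "PATH=$PATH"
  displayName: 'Check Azure CLI inside container'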
I tried a sample command from my pipeline and az seems to work fine. This is the YAML that I tested with Azure CLI task version 1:
steps:
- task: AzureCLI@1
  displayName: 'Azure CLI '
  inputs:
    azureSubscription: xxxxx
    scriptLocation: inlineScript
    inlineScript: 'az acr list'
As shown in the output of the task execution, the Azure CLI is installed at: C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\wbin\az.cmd
If you are still blocked, please post the details of the steps and tasks used in your build pipeline and we can troubleshoot further.
I've successfully run a pipeline with the step below to push a Docker image to the registry. When I try to re-run it, it gives an ERROR: Please run 'az login' to setup account.
- bash: az acr helm push -n $(registryName) -u $(registryLogin) -p $(registryPassword) $(build.artifactStagingDirectory)/$(projectName)-$(build.buildId).tgz
  displayName: 'az acr helm push'
  condition: and(succeeded(), eq(variables['build.sourceBranch'], 'refs/heads/master'))
The step is seen in the logs as below, so it shouldn't require any additional information (it works when run from the command line on my local machine):
az acr helm push -n acrname -u acruser -p password /home/vsts/work/1/a/chart.tgz
The Azure CLI genuinely does need you to be logged in, so az commands in a plain Bash task will fail with this error. That is expected behaviour.
Instead of using the Bash task, you should use the Azure CLI task; it handles authentication against an Azure subscription as part of its setup, so you will be able to run a bash script including az commands.
For example:
- task: AzureCLI@2
  displayName: az acr helm push
  inputs:
    azureSubscription: <Name of the Azure Resource Manager service connection>
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az acr helm push -n $(registryName) -u $(registryLogin) -p $(registryPassword) $(build.artifactStagingDirectory)/$(projectName)-$(build.buildId).tgz