I would like to iterate over all the secrets in a variable group and pass them to another script. I am able to get all the keys by running this az command:
for allkeys in $(az pipelines variable-group list --organization https://dev.azure.com/myorg --project myproj --group-name mygroup | jq -r '.[].variables | keys | .[]'); do
  echo "$allkeys"
done
OUTPUT
key1
key2
key3
Now I would like to iterate over this and print key=value pairs, for which a plain-bash solution would be:
for allkeys in $(az pipelines variable-group list --organization https://dev.azure.com/myorg --project myproj --group-name mygroup | jq -r '.[].variables | keys | .[]'); do
  echo "$allkeys=${!allkeys}"
done
The above loop will output:
key1=val1
key2=val2
key3=val3
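The indirect lookup in the loop above can be sanity-checked locally with plain bash, using the `${!name}` indirect expansion (hardcoded sample values standing in for the variable group, no pipeline involved):

```shell
# Hypothetical sample values standing in for the variable-group contents
key1=val1; key2=val2; key3=val3

for allkeys in key1 key2 key3; do
  # ${!allkeys} expands to the value of the variable whose name is in $allkeys
  echo "$allkeys=${!allkeys}"
done
# Prints:
# key1=val1
# key2=val2
# key3=val3
```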
But this won't work in an ADO pipeline, because there variables are referenced as $(key1), which expands to the secret value, and the keys are only known at runtime.
So I am looking for a solution that iterates over all the secrets instead of reading them one by one with hardcoded names.
I would really appreciate any help.
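For context: in an Azure DevOps pipeline, secret variables from a linked group are deliberately not exposed to scripts as environment variables unless each one is mapped explicitly in the task's `env:` block, which is why a generic loop is hard. A minimal sketch of the explicit mapping (assuming the keys `key1`..`key3` from the group above):

```yaml
- bash: |
    for k in KEY1 KEY2 KEY3; do
      echo "$k is set: $([ -n "${!k}" ] && echo yes || echo no)"
    done
  env:
    # every secret has to be mapped by name to become visible to the script
    KEY1: $(key1)
    KEY2: $(key2)
    KEY3: $(key3)
```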
Is there a way to use the github-cli or api to view the inputs of an action while it is running?
I want to allow GitHub Actions workflows to run concurrently. The resources they will manage are determined by the input stack_name. I want to make sure two pipelines cannot run at the same time with the same stack_name input. If this happens, then I want one of the pipeline actions to fail and stop immediately.
I am also taking the input and turning it into an environment variable for one of my jobs. After the job finishes, the values are available in the logs, and I can grep through the following output to get a pipeline's stack_name:
$ gh run view $running_pipeline_id --repo=$GITHUB_SERVER_URL/$GITHUB_REPOSITORY --log
....
env-check env-check 2022-03-22T17:06:30.2615395Z STACK_NAME: foo
However, this is not available while a job is running and I instead get this error:
run 1234567890 is still in progress; logs will be available when it is complete
Here is my current attempt at a code block that can achieve this. I could also use suggestions on how to make better gh run list and/or gh run view calls that avoid using grep and awk; clean JSON output that I can parse with jq would be preferable.
set +e
running_pipeline_ids=$(gh run list --workflow="$SLEEVE" --repo="$GITHUB_SERVER_URL/$GITHUB_REPOSITORY" \
  | grep 'in_progress' \
  | awk '{print $(NF-2)}' \
  | grep -v "$GITHUB_RUN_ID")
set -e

for running_pipeline_id in $running_pipeline_ids; do
  # get the stack name for all other running pipelines
  running_pipeline_stack_name=$(gh run view "$running_pipeline_id" --repo="$GITHUB_SERVER_URL/$GITHUB_REPOSITORY" --log \
    | grep 'STACK_NAME:' | head -n 1 \
    | awk -F "STACK_NAME:" '{print $2}' | awk '{print $1}')
  # fail if we detect another pipeline running against the same stack
  if [ "$running_pipeline_stack_name" == "$STACK_NAME" ]; then
    echo "ERROR: concurrent pipeline detected. $GITHUB_SERVER_URL/$GITHUB_REPOSITORY/actions/runs/$running_pipeline_id"
    echo "Please try again after the running pipeline has completed."
    exit 1
  fi
done
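On the `gh` side, recent versions of the CLI support `--json` and `--jq` flags on `gh run list`, which would avoid the `grep`/`awk` parsing, e.g. `gh run list --json databaseId,status --jq '...'` (the field names are an assumption; check `gh run list --json` on your version). The filter itself can be tried locally against a hypothetical sample of that JSON:

```shell
# Hypothetical sample of `gh run list --json databaseId,status` output
sample='[{"databaseId":111,"status":"in_progress"},
         {"databaseId":222,"status":"completed"},
         {"databaseId":333,"status":"in_progress"}]'
GITHUB_RUN_ID=333   # pretend this run is the current one

# Same filter you would hand to --jq: keep in-progress runs, drop our own ID
running_pipeline_ids=$(printf '%s' "$sample" \
  | jq -r '.[] | select(.status == "in_progress") | .databaseId' \
  | grep -v "^${GITHUB_RUN_ID}$")
echo "$running_pipeline_ids"   # -> 111
```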
Perhaps you could use the concurrency feature of GitHub Actions?
Now you cannot directly bake this into an action, but if it's possible for you to extract your action into a reusable workflow then you could make use of the concurrency feature.
It would look something like this:
# .github/workflows/partial.yaml
on:
  workflow_call:
    inputs:
      stack-name:
        description: "name of the stack"
        required: true
        type: string
jobs:
  greet:
    runs-on: ubuntu-latest
    concurrency:
      group: ${{ inputs.stack-name }}
      cancel-in-progress: true
    steps:
      - uses: my/other-action
        with:
          stack_name: ${{ inputs.stack-name }}
And then where you're using it:
jobs:
  test:
    uses: my/app-repo/.github/workflows/partial.yaml@main
    with:
      stack-name: 'my-stack'
I have the following YAML script. I am looking for how to grab the token that is created and store it in a variable:
- bash: |
    echo {} > ~/.databricks-connect
    source py37-venv/bin/activate
    pip3 install wheel
    pip3 install databricks-cli
  displayName: Install Databricks CLI
- bash: |
    source py37-venv/bin/activate
    databricks configure --token <<EOF
    ${DATABRICKS_HOST}
    ${DATABRICKS_AAD_TOKEN}
    EOF
    databricks tokens create --lifetime-seconds 129600 --comment "My comment."
The above command returns this JSON response:
{
"token_value": "dapi1a23b45678901cd2e3fa4bcde56f7890",
"token_info": {
"token_id": "1ab23cd45678e90123f4567abc8d9e012345fa67890123b45678cde90fa123b4",
"creation_time": 1621287738473,
"expiry_time": 1621417338473,
"comment": "My comment."
}
}
I want to store the value of token_value above so I can use it in another task below.
You can use jq to parse the response JSON and get the token value, for example:
token=$(databricks tokens create --lifetime-seconds 129600 --comment "My comment." | jq .token_value --raw-output)
Set $token as a variable with a logging command (you can mark it as a secret or not), then reference it in a later step as $(setvar.databrickstoken), assuming the step that sets it is named setvar.
echo "##vso[task.setvariable variable=databrickstoken;issecret=true;isoutput=true]$token"
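Putting the two together, a sketch of how the producing and consuming steps could look within one job (the step name `setvar` and the second step are assumptions, not from the original pipeline):

```yaml
- bash: |
    token=$(databricks tokens create --lifetime-seconds 129600 --comment "My comment." | jq .token_value --raw-output)
    echo "##vso[task.setvariable variable=databrickstoken;issecret=true;isoutput=true]$token"
  name: setvar
  displayName: Create Databricks token
- bash: |
    # because of isoutput=true, the reference is step-qualified
    echo "token length: ${#TOKEN}"
  env:
    TOKEN: $(setvar.databrickstoken)   # map the secret explicitly for the script
  displayName: Use the token
```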
I would like to know if there is a substring function one can leverage in JMESPath (as supported by the az cli).
I have the below az cli request, and I want to extract just the name of the subnet linked to a security group; unlike other cloud providers, Azure doesn't store the names of associated resources directly.
The name can be extracted from the subnets[].id node, which looks like this:
$ az network nsg show -g my_group -n My_NSG --query "subnets[].id" -o json
[
"/subscriptions/xxxxxx2/resourceGroups/my_group/providers/Microsoft.Network/virtualNetworks/MY-VNET/subnets/My_SUBNET"
]
I want to extract only "My_SUBNET" from the result.
I know there is something called search that is supposed to mimic substring (explained at https://github.com/jmespath/jmespath.jep/issues/5), but it didn't work for me:
$ az network nsg show -g my_group -n My_NSG --query "subnets[].search(id,'#[120:-1]')" -o json
InvalidArgumentValueError: argument --query: invalid jmespath_type value: "subnets[].search(id,'#[120:-1]')"
CLIInternalError: The command failed with an unexpected error. Here is the traceback:
Unknown function: search()
Thank you.
Edit:
I actually run the request with other elements included, which is why extracting the substring with bash on a separate line is not what I want.
Here's an example of the full query:
az network nsg show -g "$rg_name" -n "$sg_name" --query "{Name:name,Combo_rule_Ports:to_string(securityRules[?direction==\`Inbound\`].destinationPortRanges[]),single_rule_Ports:to_string(securityRules[?direction==\`Inbound\`].destinationPortRange),sub:subnets[].id,resourceGroup:resourceGroup}" -o json
output
{
"Combo_rule_Ports": "[]",
"Name": "sg_Sub_demo_SSH",
"resourceGroup": "brokedba",
"single_rule_Ports": "[\"22\",\"80\",\"443\"]",
"sub": [
"/subscriptions/xxxxxxx/resourceGroups/brokedba/providers/Microsoft.Network/virtualNetworks/CLI-VNET/subnets/Sub_demo"
]
}
I had a similar problem with EventGrid subscriptions and used jq to transform the JSON returned by the az command. As a result, you get a JSON array.
az eventgrid event-subscription list -l $location -g $resourceGroup --query "[].{
Name:name,
Container:deadLetterDestination.blobContainerName,
Account:deadLetterDestination.resourceId
}" \
| jq '[.[] | { Name, Container, Account: (.Account | capture("storageAccounts/(?<name>.+)").name) }]'
The expression Account: (.Account | capture("storageAccounts/(?<name>.+)").name) transforms the original resourceId from the Azure CLI.
# From Azure resourceId...
"Account": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.Storage/storageAccounts/mystorageaccount"
# .. to Azure Storage Account name
"Account": "mystorageaccount"
I've adapted the approach from How to extract a json value substring with jq.
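The same `capture` trick would apply to the subnet IDs from the question; a self-contained sketch on a sample ID (assuming jq 1.5+ for named capture groups):

```shell
# Sample of the array returned by `az network nsg show ... --query "subnets[].id"`
sample='["/subscriptions/xxxx/resourceGroups/my_group/providers/Microsoft.Network/virtualNetworks/MY-VNET/subnets/My_SUBNET"]'

# capture() pulls everything after "subnets/" into a named group
name=$(printf '%s' "$sample" | jq -r '.[] | capture("subnets/(?<name>.+)").name')
echo "$name"   # -> My_SUBNET
```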
cut can be used to extract the desired value:
az network nsg show -g my_group -n My_NSG --query "subnets[].id|[0]" -o json | cut -d"/" -f11
If you run the Azure CLI in bash, here are string manipulation operations you can use.
The following syntax deletes the longest match of $substring from the front of $string:
${string##substring}
In this case, you can retrieve the subnet like this.
var=$(az network nsg show -g nsg-rg -n nsg-name --query "subnets[].id" -o tsv)
echo ${var##*/}
For more information, you could refer to https://www.thegeekstuff.com/2010/07/bash-string-manipulation/
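A self-contained illustration of that expansion on an ID of the shape shown in the question:

```shell
id="/subscriptions/xxxx/resourceGroups/my_group/providers/Microsoft.Network/virtualNetworks/MY-VNET/subnets/My_SUBNET"
# ${id##*/} removes the longest prefix matching '*/', i.e. everything up to the last slash
subnet=${id##*/}
echo "$subnet"   # -> My_SUBNET
```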
I have created a pipeline to import an existing Azure resource into Terraform. Terraform import requires the provider details below, either directly or as environment variables, and they have to be extracted from the service connection.
steps:
  - task: AzureCLI@2
    displayName: Terraform Init
    inputs:
      azureSubscription: ${{ parameters.service_connection }}
      addSpnToEnvironment: true
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        export ARM_CLIENT_ID=$servicePrincipalId
        export ARM_CLIENT_SECRET=$servicePrincipalKey
        export ARM_SUBSCRIPTION_ID=$(az account show --query id | xargs)
        export ARM_TENANT_ID=$(az account show --query tenantId | xargs)
        ls
        terraform init -upgrade -input=false \
          -backend-config="subscription_id=${{ parameters.tf_state_subscription_id }}" \
          -backend-config="tenant_id=$tenantId" \
          -backend-config="client_id=$servicePrincipalId" \
          -backend-config="client_secret=$servicePrincipalKey" \
          -backend-config="resource_group_name=${{ parameters.resource_group_name }}" \
          -backend-config="storage_account_name=${{ parameters.storage_account_name }}" \
          -backend-config="container_name=${{ parameters.tf_state_key }}" \
          -backend-config="key=${{ parameters.tf_state_key }}.tfstate"
        if [ -n "$(az resource list --name pytestkeyvault --query '[].id' -o tsv)" ]
        then
          echo "using Keyvault $(az resource list --name pytestkeyvault --query '[].id' -o tsv)"
          terraform import azurerm_key_vault.this "$(az resource list --name pytestkeyvault --query '[].id' -o tsv)"
        else
          echo "Keyvault does not exist"
        fi
        echo $ARM_CLIENT_ID
The exported environment variable ARM_CLIENT_ID is empty. The below variables are not being exported as environment variables.
echo $ARM_CLIENT_ID
echo $ARM_CLIENT_SECRET
echo $ARM_SUBSCRIPTION_ID
echo $ARM_TENANT_ID
For my setup, I could not access the service principal from Azure PowerShell, but I could from the Azure CLI.
This post pointed me in the right direction, check it out:
https://www.integration-playbook.io/docs/combining-az-cli-and-azure-powershell-az-modules-in-a-pipeline
In my experience of trying every possible variation of setting environment variables, it seems that ADO build agents don't allow persisting ARM_CLIENT_SECRET as an environment variable.
So the workaround I had to do was set the environment variables at the task level (instead of at the shell/machine level):
- script: |
    terraform init # ...rest of your CLI arguments/backend-config flags
  env:
    ARM_CLIENT_SECRET: $(client_secret)
  displayName: Terraform Init
Edit:
IMO, just running terraform init yourself via the CLI is better than using the AzureCLI@2 task, which is a confusing black box that makes it harder and more verbose to do the same thing you could do with the plain CLI command.
Try using the system variables $env:servicePrincipalId, $env:servicePrincipalKey and $env:tenantId (from PowerShell) to get the SPN details.
Is there a way to transfer all GitHub repositories owned by one user to another user? Is this functionality accessible by an admin (e.g. on Enterprise, if the user can no longer access GitHub)?
GitHub has a convenient command-line tool, hub, found at https://hub.github.com.
I've written an example to move all repos from all your organisations to my_new_organisation_name:
#!/usr/bin/env bash
orgs="$(hub api '/user/orgs' | jq -r '.[] | .login')"
repos="$(for org in $orgs; do
  hub api '/orgs/'"$org"'/repos' | jq -r '.[] | .name'
done)"

for org in $orgs; do
  for repo in $repos; do
    ( hub api '/repos/'"$org"'/'"$repo"'/transfer' \
        -F 'new_owner'='my_new_organisation_name' | jq . ) &
  done
done
For users rather than organisations, set my_new_organisation_name to the replacement username, remove the outer loop, and replace the repos= line with:
repos="$(hub api /users/SamuelMarks/repos | jq -r '.[] | .name')"
EDIT: Found a GUI if you prefer https://stackoverflow.com/a/54549899