The output of aws cloudformation list-stack-instances --stack-set-name ${{ matrix.stack }} --query 'Summaries[*].Account' is like:
[
    "12345",
    "62135",
    "84328"
]
I'm interested in the account IDs, so actually the full command is aws cloudformation list-stack-instances --stack-set-name ${{ matrix.stack }} --query 'Summaries[*].Account' | grep -E "\d+" | tr -d '", '.
This just works locally. However, it was returning an empty response in GitHub Actions.
After further debugging, I found out that the grep is failing in GHA, and I have no clue why.
Basically, aws cloudformation list-stack-instances --stack-set-name ${{ matrix.stack }} --query 'Summaries[*].Account' returns the expected output as the example above. But then | grep -E "\d+" exits with an error.
Full log of the step:
Run $(aws cloudformation list-stack-instances --stack-set-name foo --query 'Summaries[*].Account' | grep -E "\d+")
  aws cloudformation list-stack-instances --stack-set-name foo --query 'Summaries[*].Account' | grep -E "\d+"
  shell: /usr/bin/bash -e {0}
  env:
    AWS_ACCESS_KEY_ID: ***
    AWS_SECRET_ACCESS_KEY: ***
    AWS_DEFAULT_REGION: eu-west-1
Error: Process completed with exit code 1.
Any ideas?
Found this similar question with no answer.
EDIT:
For now, I managed to work around this by also passing [] to the tr, but I'm leaving the question open because I'm quite curious about why the hell grep is not working.
I spent countless hours on this myself. You will have to use [0-9] (or [[:digit:]]) instead of \d. The \d class is a Perl/PCRE construct that GNU grep's -E (POSIX ERE) does not support, so on the Ubuntu runner the pattern matches nothing, grep exits with status 1, and because the step runs under bash -e the whole step fails. The grep on your local machine (for example BSD grep on macOS) happens to accept \d, which is why the same pipeline works there.
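For reference, a minimal sketch of two ways to get the account list on a GitHub-hosted runner ($STACK_SET is a placeholder standing in for ${{ matrix.stack }}; both assume the AWS CLI that ships with the ubuntu runner image):

# Option 1: keep the pipeline, but use a POSIX character class instead of \d
aws cloudformation list-stack-instances \
  --stack-set-name "$STACK_SET" \
  --query 'Summaries[*].Account' \
  | grep -E "[0-9]+" | tr -d '", '

# Option 2: skip grep/tr entirely and let the CLI emit plain text (tab-separated IDs)
aws cloudformation list-stack-instances \
  --stack-set-name "$STACK_SET" \
  --query 'Summaries[*].Account' \
  --output text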
I have this GitHub action for my build
...
- name: Building S3 Instance
  uses: charlie87041/s3-actions@main
  id: s3
  env:
    AWS_S3_BUCKET: 'xxx'
    AWS_ACCESS_KEY_ID: 'xxx'
    AWS_SECRET_ACCESS_KEY: 'xxxxx'
    AWS_REGION: 'xxx'
- name: Updating EC2 [Develop] instance
  uses: appleboy/ssh-action@master
  with:
    host: ${{secrets.EC2HOST}}
    key: ${{secrets.EC2KEY}}
    username: xxx
    envs: TESTING
    script: |
      cd ~/devdir
      export BUCKET_USER=${{steps.s3.outputs.user_id}}
      export BUCKET_USER_KEY=${{steps.s3.outputs.user_key}}
      docker login
      docker-compose down --remove-orphans
      docker system prune -a -f
      docker pull yyyy
      docker-compose up -d
And this is the important function in charlie87041/s3-actions@main:
generate_keys () {
  RSP=$(aws iam create-access-key --user-name $USER);
  BUCKET_ACCESS_ID=$(echo $RSP | jq -r '.AccessKey.AccessKeyId');
  BUCKET_ACCESS_KEY=$(echo $RSP | jq -r '.AccessKey.SecretAccessKey');
  echo "user_id=$BUCKET_ACCESS_ID" >> $GITHUB_OUTPUT
  echo "user_key=$BUCKET_ACCESS_KEY" >> $GITHUB_OUTPUT
  echo "::set-output name=BUCKET_ACCESS_KEY::$BUCKET_ACCESS_KEY"
  echo "::set-output name=BUCKET_ACCESS_ID::$BUCKET_ACCESS_ID"
}
I need to update environment variables in the container with BUCKET_USER and BUCKET_USER_KEY, but these always come back null when I echo them in the container. How do I do this?
Note that set-output was deprecated recently (Oct. 2022).
If you are using self-hosted runners make sure they are updated to version 2.297.0 or greater.
If you are using runners on github.com directly, you would need to replace
echo "::set-output name=BUCKET_ACCESS_KEY::$BUCKET_ACCESS_KEY"
with
echo "BUCKET_ACCESS_KEY=$BUCKET_ACCESS_KEY" >> $GITHUB_OUTPUT
I am not sure an export within the script would work.
Using with directives, as in issue 154, might be more effective:
with:
  BUCKET_USER: ${{steps.s3.outputs.user_id}}
  ...
  script: |
    ...
I'm using a GitHub workflow to automate some actions for AWS. I haven't changed anything for a while, as the script has been working nicely for me. Recently I've been getting this error: Unable to process file command 'env' successfully whenever the workflow runs. I've got no idea why this is happening. Any help or pointers would be greatly appreciated. Thanks. Here's the workflow step which is outputting the error:
- name: "Get AWS Resource values"
id: get_aws_resource_values
env:
SHARED_RESOURCES_ENV: ${{ github.event.inputs.shared_resources_workspace }}
run: |
BASTION_INSTANCE_ID=$(aws ec2 describe-instances \
--filters "Name=tag:env,Values=$SHARED_RESOURCES_ENV" \
--query "Reservations[*].Instances[*].InstanceId" \
--output text)
RDS_ENDPOINT=$(aws rds describe-db-instances \
--db-instance-identifier $SHARED_RESOURCES_ENV-rds \
--query "DBInstances[0].Endpoint.Address" \
--output text)
echo "rds_endpoint=$RDS_ENDPOINT" >> $GITHUB_ENV
echo "bastion_instance_id=$BASTION_INSTANCE_ID" >> $GITHUB_ENV
From the bastion instance query expression (Reservations[*].Instances[*].InstanceId) in your aws cli command, it seems you can get a multiline string: with --output text, every instance matching the tag filter becomes its own line. It could also be that before you started to receive this error the command was producing a single-line string, and that changed at some point.
In GitHub Actions, multiline strings for environment variables and outputs need to be created with a different, heredoc-style syntax.
For the bastion instance ID you should set the environment variable like this:
echo "bastion_instance_id<<EOF" >> $GITHUB_ENV
echo "$BASTION_INSTANCE_ID" >> $GITHUB_ENV
echo "EOF" >> $GITHUB_ENV
The RDS endpoint (DBInstances[0].Endpoint.Address) should not be a problem, since it's a single-line string.
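If only one bastion instance is expected per env tag, an alternative sketch is to pin the JMESPath query to the first match so the value always stays on a single line (this assumes the tag filter matches exactly one instance):

BASTION_INSTANCE_ID=$(aws ec2 describe-instances \
  --filters "Name=tag:env,Values=$SHARED_RESOURCES_ENV" \
  --query "Reservations[0].Instances[0].InstanceId" \
  --output text)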
Is there a way to use the github-cli or api to view the inputs of an action while it is running?
I want to allow GitHub Actions workflows to run concurrently. The resources they will manage are determined by the input stack_name. I want to make sure two pipelines cannot run at the same time with the same stack_name input. If this happens, then I want one of the pipeline actions to fail and stop immediately.
I am also taking the input and turning it into an environment variable for one of my jobs. After the job finishes, the values are available in the logs and I can grep through the following output to get a pipeline's stack_name:
$ gh run view $running_pipeline_id --repo=$GITHUB_SERVER_URL/$GITHUB_REPOSITORY --log
....
env-check env-check 2022-03-22T17:06:30.2615395Z STACK_NAME: foo
However, this is not available while a job is running and I instead get this error:
run 1234567890 is still in progress; logs will be available when it is complete
Here is my current attempt at a code block that can achieve this. I could also use suggestions on how to make better gh run list and/or gh run view calls that avoid using grep and awk. Clean JSON output I can parse with jq is preferable.
set +e
running_pipeline_ids=$(gh run list --workflow=$SLEEVE --repo=$GITHUB_SERVER_URL/$GITHUB_REPOSITORY \
  | grep 'in_progress' \
  | awk '{print $((NF-2))}' \
  | grep -v $GITHUB_RUN_ID)
set -e
for running_pipeline_id in $running_pipeline_ids; do
  # get the stack name for all other running pipelines
  running_pipeline_stack_name=$(gh run view $running_pipeline_id --repo=$GITHUB_SERVER_URL/$GITHUB_REPOSITORY --log \
    | grep 'STACK_NAME:' | head -n 1 \
    | awk -F "STACK_NAME:" '{print $2}' | awk '{print $1}')
  # fail if we detect another pipeline running against the same stack
  if [ "$running_pipeline_stack_name" == "$STACK_NAME" ]; then
    echo "ERROR: concurrent pipeline detected. $GITHUB_SERVER_URL/$GITHUB_REPOSITORY/actions/runs/$running_pipeline_id"
    echo "Please try again after the running pipeline has completed."
    exit 1
  fi
done
Perhaps you could use the concurrency feature of GitHub Actions?
Now you cannot directly bake this into an action, but if it's possible for you to extract your action into a reusable workflow then you could make use of the concurrency feature.
It would look something like this:
# .github/workflows/partial.yaml
on:
  workflow_call:
    inputs:
      stack-name:
        description: "name of the stack"
        required: true
        type: string

jobs:
  greet:
    runs-on: ubuntu-latest
    concurrency:
      group: ${{ inputs.stack-name }}
      cancel-in-progress: true
    steps:
      - uses: my/other-action
        with:
          stack_name: ${{ inputs.stack-name }}
And then where you're using it:
jobs:
  test:
    uses: my/app-repo/.github/workflows/partial.yml@main
    with:
      stack-name: 'my-stack'
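As for the side question about avoiding grep and awk: recent versions of the gh CLI support --json and --jq on gh run list (check gh run list --help for the available field names), so the in-progress run IDs could be collected with something like this sketch, where the trailing grep -v still filters out the current run:

running_pipeline_ids=$(gh run list --workflow="$SLEEVE" \
  --repo="$GITHUB_SERVER_URL/$GITHUB_REPOSITORY" \
  --json databaseId,status \
  --jq '.[] | select(.status == "in_progress") | .databaseId' \
  | grep -v "$GITHUB_RUN_ID")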
I am trying to deploy my application into an AWS cluster as follows:
Steps
1. Build the image and push it to Docker Hub (this is working)
2. Deploy the image into the AWS cluster (I couldn't make it work)
I searched on Google, but couldn't find any solution.
Here is my GitHub workflow file deploy.yml. Any help is appreciated to make it work.
# This is a basic workflow that is manually triggered
name: Deploy Manual

# Controls when the action will run. Workflow runs when manually triggered using the UI
# or API.
on:
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "deploy"
  deploy:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    env:
      IMAGE_TAG: ${{ github.sha }}
      KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA }}
      KUBE_NAMESPACE: production
      DOCKER_USER: ${{secrets.DOCKER_HUB_USERNAME}}
      DOCKER_PASSWORD: ${{secrets.DOCKER_HUB_ACCESS_TOKEN}}
      RELEASE_IMAGE: ucars/ucars-ui3:${{ github.sha }}
    steps:
      # This step instructs Github to cancel any current run for this job on this very repository.
      - name: Cancel Previous Runs
        uses: styfle/cancel-workflow-action@0.4.1
        with:
          access_token: ${{ github.token }}
      - uses: actions/checkout@v2
      - name: docker login
        run: |
          docker login -u $DOCKER_USER -p $DOCKER_PASSWORD
      - name: Build the Docker image
        run: docker build . --file Dockerfile --tag $RELEASE_IMAGE
      - name: Docker Push
        run: docker push $RELEASE_IMAGE
      - name: Deploy to Kubernetes cluster
        uses: kodermax/kubectl-aws-eks@master
        with:
          args: set image deployment/ucars-ui3-pod app=${{ env.RELEASE_IMAGE }} --record -n $KUBE_NAMESPACE
It is failing at the step Deploy to Kubernetes cluster
2022-01-14T18:22:14.4557590Z ##[group]Run kodermax/kubectl-aws-eks@master
2022-01-14T18:22:14.4558128Z with:
2022-01-14T18:22:14.4559002Z *** set image deployment/***-ui3-pod app=***/***-ui3:3d23d9fb07a2ce43b3a27502359c1a0685705200 --record -n $KUBE_NAMESPACE
2022-01-14T18:22:14.4559708Z ***
2022-01-14T18:22:14.4560253Z IMAGE_TAG: 3d23d9fb07a2ce43b3a27502359c1a0685705200
2022-01-14T18:22:14.4608584Z KUBE_CONFIG_DATA: ***
2022-01-14T18:22:14.4609135Z KUBE_NAMESPACE: production
2022-01-14T18:22:14.4609639Z DOCKER_USER: ***
2022-01-14T18:22:14.4610253Z DOCKER_PASSWORD: ***
2022-01-14T18:22:14.4610915Z RELEASE_IMAGE: ***/***-ui3:3d23d9fb07a2ce43b3a27502359c1a0685705200
2022-01-14T18:22:14.4611509Z ##[endgroup]
2022-01-14T18:22:14.4809817Z ##[command]/usr/bin/docker run --name a74655ce21da3d4675874b9544657797b0_b31db8 --label 9916a7 --workdir /github/workspace --rm -e IMAGE_TAG -e KUBE_CONFIG_DATA -e KUBE_NAMESPACE -e DOCKER_USER -e DOCKER_PASSWORD -e RELEASE_IMAGE -e INPUT_ARGS -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RETENTION_DAYS -e GITHUB_RUN_ATTEMPT -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_REF_NAME -e GITHUB_REF_PROTECTED -e GITHUB_REF_TYPE -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_OS -e RUNNER_ARCH -e RUNNER_NAME -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/***-ui3/***-ui3":"/github/workspace" 9916a7:4655ce21da3d4675874b9544657797b0 set image deployment/***-ui3-pod app=***/***-ui3:3d23d9fb07a2ce43b3a27502359c1a0685705200 --record -n $KUBE_NAMESPACE
2022-01-14T18:22:14.7791749Z base64: invalid input
I think I have found the issue: apparently, KUBE_CONFIG_DATA is not valid base64. The entrypoint.sh in the kodermax/kubectl-aws-eks@master image tries to decode it, can't, and throws the error.
#!/bin/sh
set -e
# Extract the base64 encoded config data and write this to the KUBECONFIG
echo "$KUBE_CONFIG_DATA" | base64 -d > /tmp/config
export KUBECONFIG=/tmp/config
sh -c "kubectl $*"
Please fix KUBE_CONFIG_DATA; it must be in valid base64 format. If you put the raw kubeconfig file there, you have to convert it to base64 first.
KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA }}
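A minimal sketch of producing the secret value from a local kubeconfig (assumes GNU coreutils base64, as on the Ubuntu runners; on macOS the flag to disable line wrapping differs):

# Encode the kubeconfig as a single line and paste the result into the secret
base64 -w 0 ~/.kube/config
# Sanity check that the value decodes back to the original file
base64 -w 0 ~/.kube/config | base64 -d | diff - ~/.kube/config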
I'm learning Ansible, and I was following the official docs:
https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html
https://docs.ansible.com/ansible/2.3/intro_inventory.html
But I have a little question: how do you use vars in the inventories?
I have tried to use some of the default parameters, like self_destruct_countdown.
[pruebascomandos]
MY-SERVER-IP self_destruct_countdown=60
OTHER-MY-SERVER-IP
And applying variables to the whole group, with a var of my own.
[pruebascomandos:vars]
example=true
But my problem is that when I try to check the var with:
$ ansible pruebascomandos -m shell -a "echo $self_destruct_countdown"
$ ansible pruebascomandos -m shell -a "echo $example"
And in both cases I get a blank response. I'm not sure why.
If someone can explain why, or tell me where to read about it, that would be great. Thanks to everyone!
Double curly braces {{ }} are needed to evaluate the variable; with $example inside double quotes, your local shell expands the (empty) variable before Ansible ever sees it, which is why you get a blank response. Try this
shell> ansible pruebascomandos -i hosts -m shell -a "echo {{ example }}"
test_01 | CHANGED | rc=0 >>
true
test_02 | CHANGED | rc=0 >>
true
shell> ansible pruebascomandos -i hosts -m shell -a "echo {{ self_destruct_countdown }}"
test_02 | FAILED | rc=-1 >>
The task includes an option with an undefined variable. The error was: self_destruct_countdown is undefined
test_01 | CHANGED | rc=0 >>
60
The host test_02 failed because the variable self_destruct_countdown had been defined for test_01 only.
shell> cat hosts
[pruebascomandos]
test_01 self_destruct_countdown=60
test_02
[pruebascomandos:vars]
example=true
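If the goal is just to inspect a variable, the debug module sidesteps the shell quoting entirely; for example, against the same inventory:

shell> ansible pruebascomandos -i hosts -m debug -a "var=example"
shell> ansible pruebascomandos -i hosts -m debug -a "var=self_destruct_countdown"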