Initialise and pull terraform public modules using GitHub SSH private key - github

Context:
I have GitLab runners executing `terraform init`, which pulls all the necessary Terraform modules. Recently I started hitting GitHub throttling limits (60 unauthenticated calls to the GitHub API per hour), so I am trying to reconfigure my pipeline to use a GitHub user's private key.
Currently, I have the following in my pipeline, but it still doesn't seem to work: the private key isn't used to pull the Terraform modules.
- GITHUB_SECRET=$(aws --region ${REGION} ssm get-parameters-by-path --path /github/umotifdev --with-decryption --query 'Parameters[*].{Name:Name,Value:Value}' --output json);
- PRIVATE_KEY=$(echo "${GITHUB_SECRET}" | jq -r '.[] | select(.Name == "/github/umotifdev/private_key").Value' | base64 -d);
- PUBLIC_KEY=$(echo "${GITHUB_SECRET}" | jq -r '.[] | select(.Name == "/github/umotifdev/public_key").Value' | base64 -d);
- mkdir -p ~/.ssh;
- echo "${PRIVATE_KEY}" | tr -d '\r' > ~/.ssh/id_rsa;
- chmod 700 ~/.ssh/id_rsa;
- eval $(ssh-agent -s);
- ssh-add ~/.ssh/id_rsa;
- ssh-keyscan -H 'github.com' >> ~/.ssh/known_hosts;
- ssh-keyscan github.com | sort -u - ~/.ssh/known_hosts -o ~/.ssh/known_hosts;
- echo -e "Host github.com\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config;
- echo ${PUBLIC_KEY} >> ~/.ssh/authorized_keys
The error I am seeing in my pipeline looks like this (which is basically throttling from GitHub):
Error: Failed to download module
Could not download module "vpc" (vpc.tf:17) source code from
"https://api.github.com/repos/terraform-aws-modules/terraform-aws-vpc/tarball/v2.21.0//*?archive=tar.gz":
bad response code: 403.
Can anyone advise how to resolve this issue, where the private key isn't being used to pull the Terraform modules?
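As a side note, one common way to make git (and therefore Terraform's `git::` module sources) use SSH instead of HTTPS is a git `insteadOf` URL rewrite. A minimal sketch follows; it uses a scratch config file where a runner would use `--global`, and it only applies to `git::` sources, not to registry tarball downloads like the `api.github.com` URL in the error above:

```shell
# Sketch: rewrite HTTPS GitHub URLs to SSH so git-based Terraform module
# sources (git::https://github.com/...) are cloned using the SSH key.
# A scratch config file is used here; in a runner you'd use --global.
CFG=$(mktemp)
git config --file "$CFG" url."ssh://git@github.com/".insteadOf "https://github.com/"
git config --file "$CFG" --get url."ssh://git@github.com/".insteadOf  # prints https://github.com/
```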

Related

Github Workflow: Unable to process file command 'env' successfully

I'm using a GitHub workflow to automate some actions for AWS. I haven't changed anything for a while, as the script had been working nicely for me. Recently I've been getting this error whenever the workflow runs: Unable to process file command 'env' successfully. I've got no idea why this is happening. Any help or pointers would be greatly appreciated. Thanks. Here's the workflow step which is outputting the error:
- name: "Get AWS Resource values"
  id: get_aws_resource_values
  env:
    SHARED_RESOURCES_ENV: ${{ github.event.inputs.shared_resources_workspace }}
  run: |
    BASTION_INSTANCE_ID=$(aws ec2 describe-instances \
      --filters "Name=tag:env,Values=$SHARED_RESOURCES_ENV" \
      --query "Reservations[*].Instances[*].InstanceId" \
      --output text)
    RDS_ENDPOINT=$(aws rds describe-db-instances \
      --db-instance-identifier $SHARED_RESOURCES_ENV-rds \
      --query "DBInstances[0].Endpoint.Address" \
      --output text)
    echo "rds_endpoint=$RDS_ENDPOINT" >> $GITHUB_ENV
    echo "bastion_instance_id=$BASTION_INSTANCE_ID" >> $GITHUB_ENV
From the bastion instance query expression (Reservations[*].Instances[*].InstanceId) in your aws cli command, it seems you can get a multiline string: one instance ID per matching instance. It could also be that before you started to receive this error the command was producing a single-line string, and that changed at some point.
In GitHub Actions, multiline strings for environment variables and outputs need to be created with a different, heredoc-style syntax.
For the bastion instance ID you should set the environment variable like this:
echo "bastion_instance_id<<EOF" >> $GITHUB_ENV
echo "$BASTION_INSTANCE_ID" >> $GITHUB_ENV
echo "EOF" >> $GITHUB_ENV
The RDS endpoint (DBInstances[0].Endpoint.Address) should not be a problem, since it's a single-line string.
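The heredoc form can be sanity-checked locally. A small sketch, using a temp file to stand in for the runner-provided $GITHUB_ENV and a hypothetical multiline value:

```shell
# Sketch: append a multiline value to the GITHUB_ENV file using the
# heredoc-style delimiter syntax (a temp file stands in for the real one).
GITHUB_ENV=$(mktemp)
BASTION_INSTANCE_ID=$'i-0aaa111\ni-0bbb222'   # hypothetical multiline value
{
  echo "bastion_instance_id<<EOF"
  echo "$BASTION_INSTANCE_ID"
  echo "EOF"
} >> "$GITHUB_ENV"
cat "$GITHUB_ENV"
```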

Bitbucket REST API to do search in remote private master repository

Is there any way I can search for a specific string in private master repositories in Bitbucket?
Below is the code I use to build the clone commands; currently I download the files and grep them. Is it possible to search directly? I need output like:
full file path, search result (the line containing the word I am searching for)
set -e
echo -n '' > clone-repos.sh
chmod +x clone-repos.sh
ONPREM_USER=user1
ONPREM_PASS=pass1
ONPREM_PROJECT=project1
curl -s -u "$ONPREM_USER:$ONPREM_PASS" https://bitbucket.bmogc.net/rest/api/1.0/projects/$ONPREM_PROJECT/repos/\?limit=1000 | ./jq-win64.exe -r '.values[] | {slug:.slug, links:.links.clone[] } | select(.links.name=="http") | "git clone \(.links.href) \(.slug)"' >> clone-repos.sh
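Short of a server-side search API, you can at least produce the "full file path, matching line" output locally after running the generated clone script, using git grep across the clones. A sketch (search_repos and the "TODO" term are illustrative, not part of the Bitbucket API):

```shell
# Sketch: after clone-repos.sh has run, search every clone for a term and
# print "<repo>/<file>:<line-no>:<matching line>".
search_repos() {
  term=$1
  for gitdir in */.git; do
    [ -d "$gitdir" ] || continue          # skip if nothing is cloned yet
    repo=${gitdir%/.git}
    # git grep -n prints <file>:<line-no>:<line>; prefix it with the repo name
    (cd "$repo" && git grep -n -- "$term" | sed "s|^|$repo/|") || true
  done
}
search_repos "TODO"
```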

How do I authenticate to a remote repo inside a GitHub Action using the GH CLI?

I am having an issue with the following segment of a Github Action/Workflow which is meant to pull the PR list (with some filtering) of a remote, private repo (e.g. not the repo that contains the Action itself).
- run: echo "PR2=$( gh pr list --head "${{ env.BRANCH_NAME }}" --repo github.com/[OWNER]/[REMOTE_REPO] | tr -s '[:space:]' ' ' | cut -d' ' -f1 )" >> $GITHUB_ENV
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
However, I am getting the following error: GraphQL: Could not resolve to a Repository with the name '[OWNER]/[REMOTE_REPO]'. (repository)
I gather there is some issue with authentication somewhere, since the commands runs perfectly in a terminal after authenticating with gh auth. I'm new to Github as a whole, Actions, and CLI, so any advice as to how to properly authenticate inside an action would be amazing.
Edit: Found a solution/workaround.
Use git ls-remote to get a list of PRs and branches, then link the two using the ID. For future reference:
id=$(git ls-remote git@github.com:[OWNER]/[REMOTE_REPO] | grep "${{ env.BRANCH_NAME }}" | head -c 40)
PR=$(git ls-remote git@github.com:[OWNER]/[REMOTE_REPO] | grep "${id}.*refs/pull" | cut -b 52- | rev | cut -b 6- | rev)
There is an open feature request for authenticating non-interactively: Add flags to gh auth login to replace the interactive prompts
You can use github-script though:
steps:
  - name: Find Pull Request
    uses: actions/github-script@v5
    with:
      github-token: ${{ secrets.TOKEN_FOR_PRIVATE_REPO }}
      script: |
        const pulls = await github.rest.pulls.list({
          owner: '[OWNER]',
          head: '${{ env.BRANCH_NAME }}',
          repo: '[REMOTE_REPO]',
        });
Note how it passes a separate github-token. The default token (secrets.GITHUB_TOKEN) cannot access your other private repository, so you'll have to issue another token and set that up as a secret.
If you don't want to use github script, you could also use plain curl with the newly issued token. Here's the doc on the REST API: https://docs.github.com/en/rest/reference/pulls#list-pull-requests and how to use the token: https://docs.github.com/en/rest/overview/other-authentication-methods#via-oauth-and-personal-access-tokens
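For the curl route, here is a minimal sketch of the request. OWNER, REMOTE_REPO, and the branch name are placeholders, and the token would come from your newly issued PAT secret:

```shell
# Sketch: build the REST URL for listing open PRs from a given head branch.
OWNER=OWNER
REMOTE_REPO=REMOTE_REPO
BRANCH=feature-branch
URL="https://api.github.com/repos/$OWNER/$REMOTE_REPO/pulls?head=$OWNER:$BRANCH&state=open"
echo "$URL"
# With a real token, the call itself would look like:
#   curl -s -H "Authorization: token $PRIVATE_REPO_PAT" \
#        -H "Accept: application/vnd.github.v3+json" "$URL"
```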
You don't need to specifically authenticate using gh auth, but you should be using a generated PAT which has access to the private repo in this case.
For example, generate a PAT which can access your private repo, steps: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token
Add the PAT as a secret to the repo where you have your workflow, say PRIVATE_REPO_PAT, steps: https://docs.github.com/en/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-a-repository
Then, you can use that in your workflow like:
- run: echo "PR2=$( gh pr list --head "${{ env.BRANCH_NAME }}" --repo github.com/[OWNER]/[REMOTE_REPO] | tr -s '[:space:]' ' ' | cut -d' ' -f1 )" >> $GITHUB_ENV
env:
GITHUB_TOKEN: ${{ secrets.PRIVATE_REPO_PAT }}
Note that, if you do want to use gh auth 'non-interactively', say in a shell script, you can always do it using:
echo "$GH_CONFIG_TOKEN" | gh auth login --with-token
where GH_CONFIG_TOKEN is either the default GITHUB_TOKEN or a generated PAT.
For use in Github Actions, this auth is implicit when you pass in the correct GITHUB_TOKEN in the env variables.

How to deploy with .gitlab-ci.yml in runner used by docker?

I installed Docker and GitLab plus a runner using this tutorial: https://frenchco.de/article/Add-un-Runner-Gitlab-CE-Docker
The problem is that when I try to modify the .gitlab-ci.yml to deploy to my host machine, it fails.
My .yml :
stages:
  - deploy

deploy_develop:
  stage: deploy
  before_script:
    - apk update && apk add bash && apk add openssh && apk add rsync
    - apk add --no-cache bash
  script:
    - mkdir -p ~/.ssh
    - ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    - cat ~/.ssh/id_rsa.pub
    - rsync -hrvz ~/ root@172.16.1.97:~/web_dev/www/test/
  environment:
    name: develop
And the problem is that in ssh or rsync I always have the same error message in my job:
$ rsync -hrvz ~/ root@172.16.1.97:~/web_dev/www/test/
Host key verification failed.
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(226) [sender=3.1.3]
I tried copying the SSH id_rsa and id_rsa.pub onto the host; the result is the same. Could the problem be that my runner runs inside Docker? It is strange, because I can ping my host (172.16.1.97) from within the job. Any ideas about my problem?
Looks like you did not add the public key into your authorized_keys on the host server for the deploy-user?
For example, I use gitlab-ci to deploy my webapp. I added the user gitlab on my host machine, added the public key to its authorized_keys, and then I can connect with ssh gitlab@IP -i PRIVATE_KEY to that server.
My gitlab-ci.yml looks like this:
deploy-app:
  stage: deploy
  image: ubuntu
  before_script:
    - apt-get update -qq
    - 'which ssh-agent || ( apt-get install -qq openssh-client )'
    - eval $(ssh-agent -s)
    - ssh-add <(cat "$DEPLOY_SERVER_PRIVATE_KEY")
    - mkdir -p ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    - chmod 755 ./deploy.sh
  script:
    - ./deploy.sh
where I added the private key's content as a variable to my gitlab-instance. (see https://docs.gitlab.com/ee/ci/variables/)
The deploy.sh looks like this:
#!/bin/bash
set -eo pipefail
scp app/docker-compose.yml gitlab@"${DEPLOY_SERVER_IP}":~/apps/${NGINX_SERVER_NAME}/
ssh gitlab@$DEPLOY_SERVER_IP "apps/${NGINX_SERVER_NAME}/app.sh update" # this is just doing docker-compose pull && docker-compose up in the app's directory.
Maybe this helps? It's working fine for me and scp/ssh are giving more intuitive error messages than what you got with rsync in this particular case.
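Rather than using StrictHostKeyChecking no (which makes the "Host key verification failed" error go away by disabling the check entirely), you can pin the server's host key in a known_hosts file. A sketch, where HOST is a placeholder for your deploy server and a scratch file stands in for ~/.ssh/known_hosts:

```shell
# Sketch: record the deploy server's host key instead of disabling checking.
# HOST is a placeholder; in the job you would append to ~/.ssh/known_hosts.
HOST=172.16.1.97
KNOWN_HOSTS=/tmp/demo-known-hosts
touch "$KNOWN_HOSTS"
# -H hashes the hostname; -T 3 gives up after 3 seconds if unreachable
ssh-keyscan -T 3 -H "$HOST" >> "$KNOWN_HOSTS" 2>/dev/null || true
wc -l < "$KNOWN_HOSTS"
```

With the host key recorded, ssh and rsync will verify the server rather than skipping the check.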

How to export log files from Travis CI to GitHub?

I am using travis-ci.org (public repo) to run my builds, and the log is shown on the Travis build page. I want to extract the log file and send it to GitHub or to any other external open-source tool where it can be accessed.
Could you please let me know how to achieve this?
We can deploy build artifacts to S3. Paste the code below into your .travis.yml file if you are using GitHub and S3:
addons:
  artifacts:
    paths:
      - $(git ls-files -o | tr "\n" ":")

deploy:
  - provider: s3
    access_key_id: $ARTIFACTS_KEY
    secret_access_key: $ARTIFACTS_SECRET
    bucket: $ARTIFACTS_BUCKET
    skip_cleanup: true
    acl: public_read
Also, if you want to send it to a free open-source tool, you can use chunk.io. Place the code below in a shell script and call it from the after_failure section of your .travis.yml file:
cd path/to/directory/where/untracked files store/
count=$(git ls-files -o | wc -l)
git ls-files -o
echo ">>>>>>>>> CONTAINERS LOG FILES <<<<<<<<<<<<"
for (( i=1; i<=count; i++ ))
do
  file=$(git ls-files -o | sed "${i}q;d")
  echo "$file"
  cat "$file" | curl -sT - chunk.io
done
echo " >>>>> testsummary log file <<<< "
cat testsummary.log | curl -sT - chunk.io