GitHub Workflow: Unable to process file command 'env' successfully

I'm using a GitHub workflow to automate some actions for AWS. I haven't changed anything for a while, as the script has been working nicely for me. Recently I've been getting this error whenever the workflow runs: Unable to process file command 'env' successfully. I've got no idea why this is happening. Any help or pointers would be greatly appreciated. Thanks. Here's the workflow step which outputs the error:
- name: "Get AWS Resource values"
id: get_aws_resource_values
env:
SHARED_RESOURCES_ENV: ${{ github.event.inputs.shared_resources_workspace }}
run: |
BASTION_INSTANCE_ID=$(aws ec2 describe-instances \
--filters "Name=tag:env,Values=$SHARED_RESOURCES_ENV" \
--query "Reservations[*].Instances[*].InstanceId" \
--output text)
RDS_ENDPOINT=$(aws rds describe-db-instances \
--db-instance-identifier $SHARED_RESOURCES_ENV-rds \
--query "DBInstances[0].Endpoint.Address" \
--output text)
echo "rds_endpoint=$RDS_ENDPOINT" >> $GITHUB_ENV
echo "bastion_instance_id=$BASTION_INSTANCE_ID" >> $GITHUB_ENV

From the bastion instance query expression (Reservations[*].Instances[*].InstanceId) in your aws cli command, it seems you expect a multiline string: with --output text, a [*] wildcard query emits one line per matching instance. It could also be that before you started to receive this error the command was producing a single-line string, and that changed at some point.
In GitHub Actions, multiline strings for environment variables and outputs need to be created with a different, heredoc-style syntax.
For the bastion instance ID you should set the environment variable like this:
echo "bastion_instance_id<<EOF" >> $GITHUB_ENV
echo "$BASTION_INSTANCE_ID" >> $GITHUB_ENV
echo "EOF" >> $GITHUB_ENV
The RDS endpoint should not be a problem, since DBInstances[0].Endpoint.Address always resolves to a single-line string.
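Alternatively, if only one bastion instance is ever expected, the query itself can be pinned so the value stays single-line (a sketch; the [0] indexes assume the tag filter matches exactly one instance):

# Take only the first match so --output text yields one line
BASTION_INSTANCE_ID=$(aws ec2 describe-instances \
  --filters "Name=tag:env,Values=$SHARED_RESOURCES_ENV" \
  --query "Reservations[0].Instances[0].InstanceId" \
  --output text)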

Related

Pass MongoDB Atlas Operator env vars from Travis to Kubernetes deploy.sh

I am trying to adapt the quickstart guide for the MongoDB Atlas Operator (Atlas Operator Quickstart) to use secure env variables set in TravisCI.
I want to put the quickstart scripts into my deploy.sh, which is triggered from my travis.yaml file.
My travis.yaml already sets one global variable like this:
env:
  global:
    - SHA=$(git rev-parse HEAD)
Which is consumed by the deploy.sh file like this:
docker build -t mydocker/k8s-client:latest -t mydocker/k8s-client:$SHA -f ./client/Dockerfile ./client
but I'm not sure how to pass variables set in the Environment Variables section of the Travis repository settings through to deploy.sh.
This is the section of script I want to pass variables to:
kubectl create secret generic mongodb-atlas-operator-api-key \
--from-literal="orgId=$MY_ORG_ID" \
--from-literal="publicApiKey=$MY_PUBLIC_API_KEY" \
--from-literal="privateApiKey=$MY_PRIVATE_API_KEY" \
-n mongodb-atlas-system
I'm assuming the --from-literal syntax will just put in the literal string "orgId=$MY_ORG_ID", for example, and that I need to use pipe syntax instead - but can I do something along the lines of this?:
echo "$MY_ORG_ID" | kubectl create secret generic mongodb-atlas-operator-api-key --orgId-stdin
Or do I need to put something in my travis.yaml before_install script?
Looks like the echo approach is fine; I've found a similar use case to yours, have a look here.
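Worth noting: inside double quotes the shell expands $MY_ORG_ID before kubectl ever sees the value, and Travis exports repository-settings variables into the build environment, so the quickstart command should also work unchanged in deploy.sh (a sketch reusing the names above):

# deploy.sh - $MY_ORG_ID etc. come from the Travis repo settings
# Double quotes still allow expansion; kubectl receives orgId=<actual value>
kubectl create secret generic mongodb-atlas-operator-api-key \
  --from-literal="orgId=$MY_ORG_ID" \
  --from-literal="publicApiKey=$MY_PUBLIC_API_KEY" \
  --from-literal="privateApiKey=$MY_PRIVATE_API_KEY" \
  -n mongodb-atlas-system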

Export ARM_CLIENT_ID and ARM_CLIENT_SECRET from Service connection in pipeline.yaml

I have created a pipeline to import an existing Azure resource into Terraform. terraform import requires provider details or environment variables, which have to be extracted from the service connection.
steps:
  - task: AzureCLI@2
    displayName: Terraform Init
    inputs:
      azureSubscription: ${{ parameters.service_connection }}
      addSpnToEnvironment: true
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        export ARM_CLIENT_ID=$servicePrincipalId
        export ARM_CLIENT_SECRET=$servicePrincipalKey
        export ARM_SUBSCRIPTION_ID=$(az account show --query id | xargs)
        export ARM_TENANT_ID=$(az account show --query tenantId | xargs)
        ls
        terraform init -upgrade -input=false \
          -backend-config="subscription_id=${{ parameters.tf_state_subscription_id }}" \
          -backend-config="tenant_id=$tenantId" \
          -backend-config="client_id=$servicePrincipalId" \
          -backend-config="client_secret=$servicePrincipalKey" \
          -backend-config="resource_group_name=${{ parameters.resource_group_name }}" \
          -backend-config="storage_account_name=${{ parameters.storage_account_name }}" \
          -backend-config="container_name=${{ parameters.tf_state_key }}" \
          -backend-config="key=${{ parameters.tf_state_key }}.tfstate"
        if [ $(az resource list --name pytestkeyvault --query '[].id' -o tsv) != null ]
        then
          echo "using Keyvault $(az resource list --name pytestkeyvault --query '[].id' -o tsv)"
          terraform import azurerm_key_vault.this $(az resource list --name pytestkeyvault --query '[].id' -o tsv)
        else
          echo "Keyvault does not exist"
        fi
        echo $ARM_CLIENT_ID
The exported environment variable ARM_CLIENT_ID is empty. The variables below are not being exported as environment variables:
echo $ARM_CLIENT_ID
echo $ARM_CLIENT_SECRET
echo $ARM_SUBSCRIPTION_ID
echo $ARM_TENANT_ID
For my setup I could not access the service principal from Azure PowerShell, but I could from the Azure CLI.
This post pointed me in the right direction, check it out:
https://www.integration-playbook.io/docs/combining-az-cli-and-azure-powershell-az-modules-in-a-pipeline
In my experience of trying every possible variation of setting environment variables, it seems that ADO build agents don't allow persisting ARM_CLIENT_SECRET as an environment variable.
So the workaround I had to do was to set the environment variables at the task level (instead of at the shell/machine level):
- script: |
    terraform init # ...rest of your CLI arguments/backend-config flags
  env:
    ARM_CLIENT_SECRET: $(client_secret)
  displayName: Terraform Init
Edit:
IMO, just running terraform init yourself via the CLI is better than using the AzureCLI@2 task, which is a confusing black box that honestly makes it harder and more verbose to do the same thing than the plain CLI command.
Try using the system variables $env:servicePrincipalId, $env:servicePrincipalKey and $env:tenantId to get the SPN details.
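On a related note, exports made inside one script step don't survive into later tasks; if the goal is to hand the ARM_* values to subsequent steps, they can be persisted with the Azure DevOps setvariable logging command (a bash sketch, assuming an AzureCLI@2 inline script with addSpnToEnvironment: true):

# Inside the inlineScript: SPN details arrive as $servicePrincipalId etc.
export ARM_CLIENT_ID="$servicePrincipalId"   # visible only within this script
# Persist for later tasks; issecret=true masks the value in logs
echo "##vso[task.setvariable variable=ARM_CLIENT_ID;issecret=true]$servicePrincipalId"
echo "##vso[task.setvariable variable=ARM_CLIENT_SECRET;issecret=true]$servicePrincipalKey"

Secret variables set this way still need an explicit env: mapping on any later task that reads them.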

Initialise and pull terraform public modules using GitHub SSH private key

Context:
I have GitLab runners executing the terraform init command, which pulls all the necessary Terraform modules. Recently I started hitting GitHub throttling issues (60 calls to the GitHub API per hour), so I am trying to reconfigure my pipeline to use a GitHub user's private key.
Currently I have the following in my pipeline, but it still doesn't seem to work and the private key isn't used to pull the Terraform modules.
- GITHUB_SECRET=$(aws --region ${REGION} ssm get-parameters-by-path --path /github/umotifdev --with-decryption --query 'Parameters[*].{Name:Name,Value:Value}' --output json);
- PRIVATE_KEY=$(echo "${GITHUB_SECRET}" | jq -r '.[] | select(.Name == "/github/umotifdev/private_key").Value' | base64 -d);
- PUBLIC_KEY=$(echo "${GITHUB_SECRET}" | jq -r '.[] | select(.Name == "/github/umotifdev/public_key").Value' | base64 -d);
- mkdir -p ~/.ssh;
- echo "${PRIVATE_KEY}" | tr -d '\r' > ~/.ssh/id_rsa;
- chmod 700 ~/.ssh/id_rsa;
- eval $(ssh-agent -s);
- ssh-add ~/.ssh/id_rsa;
- ssh-keyscan -H 'github.com' >> ~/.ssh/known_hosts;
- ssh-keyscan github.com | sort -u - ~/.ssh/known_hosts -o ~/.ssh/known_host;
- echo -e "Host github.com\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config;
- echo ${PUBLIC_KEY} >> ~/.ssh/authorized_keys
The error I am seeing in my pipeline is something like this (which is basically throttling from GitHub):
Error: Failed to download module
Could not download module "vpc" (vpc.tf:17) source code from
"https://api.github.com/repos/terraform-aws-modules/terraform-aws-vpc/tarball/v2.21.0//*?archive=tar.gz":
bad response code: 403.
Can anyone advise how to resolve an issue where the private key isn't used to pull Terraform modules?
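One observation (not from the thread itself): the api.github.com tarball URL in the error suggests the modules are resolved as registry sources, and registry downloads go through the GitHub API regardless of any SSH setup; the key is only consulted when the module source is a Git URL. A sketch of forcing Git-over-SSH instead (the ref pins the same v2.21.0 release):

# Rewrite HTTPS GitHub clones to SSH for the CI user:
git config --global url."ssh://git@github.com/".insteadOf "https://github.com/"
# ...and reference the module by Git URL rather than the registry, e.g. in vpc.tf:
#   source = "git::ssh://git@github.com/terraform-aws-modules/terraform-aws-vpc.git?ref=v2.21.0"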

AWS CLI - Query output using an environment variable in PowerShell

I am trying to query the output of an AWS CLI command using an environment variable as the query string. This works fine for me with the AWS CLI on Linux, but in PowerShell I am having trouble getting the CLI to use the variable.
For example - this works for me in Linux:
SECGRP="RDP from Home"
aws ec2 describe-security-groups --query \
'SecurityGroups[?GroupName==`'"$SECGRP"'`].GroupId' --output text
If I run this in PowerShell:
$SECGRP="RDP from Home"
aws ec2 describe-security-groups --query \
'SecurityGroups[?GroupName==`'"$SECGRP"'`].GroupId' --output text
Error Details:
Bad value for --query SecurityGroups[?GroupName==`: Bad jmespath expression: Unclosed ` delimiter:
SecurityGroups[?GroupName==`
^
I have tried a few combinations of quotes inside the query expression but either get errors or no output.
I have also run the following to demonstrate that I can get the correct output using PowerShell (but not using a variable):
aws ec2 describe-security-groups --query \
'SecurityGroups[?GroupName==`RDP from Home`].GroupId' --output text
Try this:
$SECGRP="RDP from Home"
aws ec2 describe-security-groups --query "SecurityGroups[?GroupName=='$SECGRP'].GroupId" --output text
Double-quoting the whole expression lets PowerShell expand $SECGRP, and JMESPath accepts single-quoted raw string literals in place of the backtick-quoted form, so no backticks (which PowerShell would treat as its escape character) are needed.
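An alternative that sidesteps the JMESPath quoting entirely is to filter server-side with the EC2 group-name filter (a sketch; should behave the same in both shells):

# Let EC2 do the name matching; the query then just projects the IDs
aws ec2 describe-security-groups --filters "Name=group-name,Values=$SECGRP" --query "SecurityGroups[].GroupId" --output text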

ECS Service - Automating deploy with new Docker image

I want to automate the deployment of my application by having my ECS service launch with the latest Docker image. From what I've read, the way to deploy a new image version is as follows:
1. Create a new task revision (after updating the image on your Docker repository).
2. Update the service and specify the new revision.
This seems to work, but I want to do this all through the CLI so I can script it. Step 2 seems easy enough to do through the AWS CLI with update-service, but I don't see a way to do step 1 without specifying the entire task definition JSON all over again, as with register-task-definition (my JSON will include credentials in environment variables, so I want to have that in as few places as possible).
Is this how I should be automating deployment of my ECS Service updates? And if so, is there a "good" way to have the Task Definition launch a new revision (i.e. without duplicating everything)?
Yes, that is the correct approach.
And no, with the current API, you can't register a new revision of an existing task definition without duplicating it.
If you didn't use the CLI to generate the original task definition (or don't want to reuse the original commands that generated it), you could try something like the following through the CLI:
OLD_TASK_DEF=$(aws ecs describe-task-definition --task-definition <task_family_name>)
NEW_CONTAINER_DEFS=$(echo $OLD_TASK_DEF | jq '.taskDefinition.containerDefinitions' | jq '.[0].image="<new_image_name>"')
aws ecs register-task-definition --family <task_family_name> --container-definitions "'$(echo $NEW_CONTAINER_DEFS)'"
Not 100% secure, as the last command's --container-definitions argument (which includes "environment" entries) will still be visible to other processes via ps. One of the AWS SDKs would give better peace of mind.
The answer provided by Matt Callanan did not work for me: I received an error on this part:
--container-definitions "'$(echo $NEW_CONTAINER_DEFS)'"
Resulted in: Error parsing parameter '--container-definitions': Expected: '=', received: ''' for input:
'{ environment: [ { etc etc....
What I did to resolve it was:
TASK_FAMILY=<task family name>
DOCKER_IMAGE=<new_image_name>
LATEST_TASK_DEFINITION=$(aws ecs describe-task-definition --task-definition ${TASK_FAMILY})
echo $LATEST_TASK_DEFINITION \
| jq '{containerDefinitions: .taskDefinition.containerDefinitions, volumes: .taskDefinition.volumes}' \
| jq '.containerDefinitions[0].image='\"${DOCKER_IMAGE}\" \
> /tmp/tmp.json
aws ecs register-task-definition --family ${TASK_FAMILY} --cli-input-json file:///tmp/tmp.json
I take both the containerDefinitions and volumes elements from the original JSON document, because my containerDefinitions use these volumes (the volumes element isn't needed if you don't use volumes).
#!/bin/bash
SERVICE_NAME="your service name"
IMAGE_VERSION="v_"${BUILD_NUMBER}
TASK_FAMILY="your task defination name"
CLUSTER="your cluster name"
REGION="your region"
echo "=====================Create a new task definition for this build==========================="
sed -e "s;%BUILD_NUMBER%;${BUILD_NUMBER};g" taskdef.json > ${TASK_FAMILY}-${IMAGE_VERSION}.json
echo "=================Resgistring the task defination==========================================="
aws ecs register-task-definition --family ${TASK_FAMILY} --cli-input-json file://${TASK_FAMILY}-${IMAGE_VERSION}.json --region ${REGION}
echo "================Update the service with the new task definition and desired count================"
TASK_REVISION=`aws ecs describe-task-definition --task-definition ${TASK_FAMILY} --region ${REGION} | egrep "revision" | tr "/" " " | awk '{print $2}' | sed 's/"$//'`
DESIRED_COUNT=`aws ecs describe-services --cluster ${CLUSTER} --services ${SERVICE_NAME} --region ${REGION} | jq .services[].desiredCount`
if [ ${DESIRED_COUNT} = "0" ]; then
DESIRED_COUNT="1"
fi
echo "===============Updating the service=============================================================="
aws ecs update-service --cluster ${CLUSTER} --service ${SERVICE_NAME} --task-definition ${TASK_FAMILY}:${TASK_REVISION} --desired-count ${DESIRED_COUNT} --region ${REGION}
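If a script like this runs unattended, it can also help to block until the rollout settles; the AWS CLI ships a waiter for exactly that (a sketch reusing the variable names above):

# Block until the service reaches a steady state after update-service
aws ecs wait services-stable --cluster ${CLUSTER} --services ${SERVICE_NAME} --region ${REGION}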