Cloudbuild recursively creating multiple builds for app engine deployment on GCP - gcloud

Hope this question helps others struggling with GCP.
I am trying to automate deployments of my Strapi app to Google App Engine using Cloud Build. This is my cloudbuild.yaml:
steps:
- name: 'ubuntu'
  entrypoint: "bash"
  args:
  - "-c"
  - |
    rm -rf app.yaml
    touch app.yaml
    cat <<EOT >> app.yaml
    runtime: custom
    env: flex
    env_variables:
      HOST: '0.0.0.0'
      NODE_ENV: 'production'
      DATABASE_NAME: ${_DATABASE_NAME}
      DATABASE_USERNAME: ${_DATABASE_USERNAME}
      DATABASE_PASSWORD: ${_DATABASE_PASSWORD}
      INSTANCE_CONNECTION_NAME: ${_INSTANCE_CONNECTION_NAME}
    beta_settings:
      cloud_sql_instances: ${_CLOUD_SQL_INSTANCES}
    automatic_scaling:
      min_num_instances: 1
      max_num_instances: 2
    EOT
    cat app.yaml
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: 'bash'
  args: ['-c', 'gcloud app deploy app.yaml --project ecomm-backoffice']
If I understand correctly how CI/CD generally works, this file should create an app.yaml and then run the gcloud app deploy app.yaml --project ecomm-backoffice command.
However, Cloud Build creates nested, recursive builds once I push my changes to GitHub (triggers are enabled).
Can someone please help me with the right way to deploy Strapi/Node.js to App Engine using Cloud Build? I have searched for a lot of solutions but haven't had any luck so far.

Related

How to get a secrets "file" into Google Cloud Build so docker compose can read it?

How can I configure Google Cloud Build so that a docker-compose setup can use a secret file the same way it does when it runs locally on my machine and reads the file?
My docker-compose based setup uses a secrets entry to expose an API key to a backend component, like this (simplified for the example):
services:
  backend:
    build: docker_contexts/backend
    secrets:
      - API_KEY
    environment:
      - API_KEY_PATH=/run/secrets/api_key
secrets:
  API_KEY:
    file: ./secrets/api_key.json
From my understanding, docker-compose mounts any file listed in the secrets section at /run/secrets inside the container, which is why the target location is hard-coded to /run/secrets.
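For reference, compose's long-form secrets syntax makes that mount path explicit; a minimal sketch (the explicit target name is an assumption, not part of the original setup):

services:
  backend:
    build: docker_contexts/backend
    secrets:
      - source: API_KEY
        target: api_key   # mounted at /run/secrets/api_key inside the container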
I would like to run my docker-compose setup on Google Cloud Build with this configuration, but the only examples I've seen in the documentation load the secret as an environment variable. I have tried to store the secret in Secret Manager and copy it to a local file like this:
steps:
- name: gcr.io/cloud-builders/gcloud
  # copy to /workspace/secrets so docker-compose can find it
  entrypoint: 'bash'
  args: [ '-c', 'echo $API_KEY > /workspace/secrets/api_key.json' ]
  volumes:
  - name: 'secrets'
    path: /workspace/secrets
  secretEnv: ['API_KEY']
# running docker-compose
- name: 'docker/compose:1.29.2'
  args: ['up', '-d']
  volumes:
  - name: 'secrets'
    path: /workspace/secrets
availableSecrets:
  secretManager:
  - versionName: projects/ID/secrets/API_KEY/versions/1
    env: API_KEY
But when I run the job on Google Cloud Build, I get this error message after everything is built: ERROR: for backend Cannot create container for service backend: invalid mount config for type "bind": bind source path does not exist: /workspace/secrets/api_key.json.
Is there a way I can copy the API_KEY environment variable at the cloudbuild.yaml level so it is accessible to docker-compose, the way it is when I run it on my local filesystem?
If you want to have the value of API_KEY taken from Secret Manager and placed into a text file at /workspace/secrets/api_key.json then change your step to this:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: "bash"
  args: ["-c", "mkdir -p /workspace/secrets && echo $$API_KEY > /workspace/secrets/api_key.json"]
  secretEnv: ["API_KEY"]
This will:
Remove the volumes attribute, which is not needed because /workspace is already a volume that persists between steps
Make sure the directory exists before you try to put a file in it
Use the $$ syntax as described in Use secrets from Secret Manager so that it echoes the actual secret to the file
Note this section:
When specifying the secret in the args field, specify it using the environment variable prefixed with $$.
You can double-check that this is working by adding another step:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: "bash"
  args: ["-c", "cat /workspace/secrets/api_key.json"]
This should print the contents of the file in that step's build output, allowing you to confirm that:
The previous step read the secret
The previous step wrote the secret to the file
The file was written to a volume that persists across steps
From there you can configure docker-compose to read the contents of that persisted file.
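For example, a minimal sketch of how the compose file's secret source could be repointed at the persisted file for the Cloud Build run (the override file name is hypothetical):

# docker-compose.cloudbuild.yml -- hypothetical override used only in Cloud Build
secrets:
  API_KEY:
    file: /workspace/secrets/api_key.json

The docker/compose step could then be run with args: ['-f', 'docker-compose.yml', '-f', 'docker-compose.cloudbuild.yml', 'up', '-d'] so the override only applies in CI.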

Gitlab CI CD: keeping docker env files

So I need to store the .env files for docker-compose somewhere. One approach is to store their contents in masked variables in GitLab CI/CD, but that doesn't seem secure to me: hacking quite a lot of apps would only take someone cracking a GitLab account.
I would like to store the .env files in a directory on the server and copy them into the freshly pulled repository path in the first job of the pipeline. I tried artifacts for that, but they are uploaded to GitLab and can be viewed there, and I didn't manage to find them in the later jobs (ls in after_script didn't show them).
How could I copy the .env files into all jobs without uploading them to GitLab?
.gitlab-ci.yml
before_script:
  - docker info
  - docker compose --version

copy_env_files:
  script:
    - cp /home/myuser/myapp/env.* .
  rules:
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"
  artifacts:
    paths:
      - env.*

build_image:
  script:
    - docker-compose -f docker-compose.yml up -d --build
  rules:
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"
  artifacts:
    paths:
      - env.*

collect_static_files:
  script:
    - docker-compose exec web python manage.py collectstatic --no-input
  rules:
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"
  artifacts:
    paths:
      - env.*

migrate_database:
  script:
    - docker-compose exec web python manage.py migrate --no-input
  rules:
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"
  artifacts:
    paths:
      - env.*

after_script:
  - docker container ls
  - pwd
  - ls
How could I copy the .env files into all jobs without uploading them to GitLab?
By integrating your gitlab-ci job with an external vault, where sensitive data can reside securely.
For instance: "Authenticating and reading secrets with HashiCorp Vault", but that integration is for GitLab Premium only.
You can still use external secrets in CI (see the sketch after these steps):
Configure your vault and secrets.
Generate your JWT and provide it to your CI job.
The runner contacts HashiCorp Vault and authenticates using the JWT.
HashiCorp Vault verifies the JWT.
HashiCorp Vault checks the bound claims and attaches policies.
HashiCorp Vault returns the token.
The runner reads secrets from HashiCorp Vault.
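For example, a minimal sketch of a job that authenticates with the CI job JWT and reads a secret via the Vault CLI itself (the Vault address, role name, and secret path are placeholders):

read_secrets:
  image: vault:1.13.3
  script:
    # authenticate against Vault's JWT auth method with the CI job's JWT
    - export VAULT_ADDR="https://vault.example.com:8200"
    - export VAULT_TOKEN="$(vault write -field=token auth/jwt/login role=myproject-ro jwt=$CI_JOB_JWT)"
    # read a single field from a KV secret for use by later commands
    - export DB_PASSWORD="$(vault kv get -field=password kv/myproject/db)"

This keeps the secret out of GitLab's CI/CD variables entirely; only the short-lived JWT is exchanged.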
I should have added "cp /home/myuser/myapp/env.* ." to before_script instead of separating it out as a job.
I also fixed my Django --no-input errors (by adding -T to docker-compose exec), which occurred after the image was successfully built.
before_script:
  - docker info
  - docker compose --version
  - cp /home/myuser/myproject/env.* .

build_image:
  script:
    - docker-compose -f docker-compose.yml up -d --build
  rules:
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"

collect_static_files:
  script:
    - docker-compose exec -T web python manage.py collectstatic --no-input
  rules:
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"

migrate_database:
  script:
    - docker-compose exec -T web python manage.py migrate --no-input
  rules:
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"

after_script:
  - docker container ls

GCP Cloud Build tag release

I have a GCP Cloud Build YAML file that triggers on a new tag in GitHub.
I have configured the latest tag to be displayed as the App Engine version, but I need to configure the cloudbuild.yml file to replace the full stops in my tag with hyphens, otherwise it fails in the deployment phase.
- id: web:set-env
  name: 'gcr.io/cloud-builders/gcloud'
  env:
    - "VERSION=${TAG_NAME}"

# Deploy to Google Cloud App Engine
- id: web:deploy
  dir: "."
  name: "gcr.io/cloud-builders/gcloud"
  waitFor: ['web:build']
  args:
    [
      'app',
      'deploy',
      'app.web.yaml',
      "--version=${TAG_NAME}",
      '--no-promote',
    ]
I tried using --version=${TAG_NAME//./-}, but got an error in the deployment phase.
I managed to replace the full stops with hyphens by using the step below in the cloudbuild.yml file:
- id: tag:release
  name: 'gcr.io/cloud-builders/gcloud'
  args:
    - '-c'
    - |
      version=$TAG_NAME
      gcloud app deploy app.web.yaml --version=${version//./-} --no-promote
  entrypoint: bash

How to access a service in Github Actions CI/CD?

I'm trying to set up a CI/CD pipeline in GitHub Actions for my Elixir project.
I can fetch dependencies, compile them, check formatting, run Credo... but when the tests start, I'm not able to reach the PostgreSQL service declared in the YAML.
How can I link the two containers (Elixir and PostgreSQL)?
According to the logs shown in GitHub Actions, both containers are on the same Docker network, so they should be reachable from each other using their network aliases. However, when I try to connect to the postgres one, I get NXDOMAIN. Ping doesn't work either, as expected.
The content of my workflow:
name: Elixir CI

on: push

jobs:
  build:
    runs-on: ubuntu-18.04
    container:
      image: elixir:1.9.1
    services:
      postgres:
        image: postgres
        ports:
          - 5432:5432
        env:
          POSTGRES_USER: my_app
          POSTGRES_PASSWORD: my_app
          POSTGRES_DB: my_app_test
    steps:
      - uses: actions/checkout@v1
      - name: Install Dependencies
        env:
          MIX_ENV: test
        run: |
          cp config/test.secret.ci.exs config/test.secret.exs
          mix local.rebar --force
          mix local.hex --force
          apt-get update -qqq && apt-get install make gcc -y -qqq
          mix deps.get
      - name: Compile
        env:
          MIX_ENV: test
        run: mix compile --warnings-as-errors
      - name: Run formatter
        env:
          MIX_ENV: test
        run: mix format --check-formatted
      - name: Run Credo
        env:
          MIX_ENV: test
        run: mix credo
      - name: Run Tests
        env:
          MIX_ENV: test
        run: mix test
Also, in Elixir I have set up the test task to connect to postgres:5432, but it says the host does not exist.
According to some tutorials and examples I found on the Internet, this configuration looks valid, but nothing I did made it work.
You need to pass the name of the service ("postgres") as POSTGRES_HOST to the application and set the port with POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }} (spaces matter).
GitHub CI dynamically routes the port and host to it.
I wrote a blog post on the subject a couple of days ago.
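A minimal sketch of what that could look like on the test step (the variable names are assumptions; the Elixir config would need to read them):

      - name: Run Tests
        env:
          MIX_ENV: test
          # the service name doubles as the hostname on the shared Docker network
          POSTGRES_HOST: postgres
          # mapped port resolved by the runner for the postgres service
          POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }}
        run: mix test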

run kubernetes job in cloud builder

I want to create and remove a job using Google Cloud Build. Here's my configuration, which builds my Docker image and pushes it to GCR.
# cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/xyz/abc:latest', '-f', 'Dockerfile.ng-unit', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/xyz/abc:latest']
Now I want to create a job; I want to run something like
kubectl create -R -f ./kubernetes
which creates the jobs defined in the kubernetes folder.
I know Cloud Build has - name: 'gcr.io/cloud-builders/kubectl', but I can't figure out how to use it. Also, how can I authenticate it to run kubectl commands? How can I use service_key.json?
At first I wasn't able to connect and get cluster credentials. Here's what I did:
Go to IAM and add another role to xyz@cloudbuild.gserviceaccount.com. I used Project Editor.
Add this step to cloudbuild.yaml:
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['create', '-R', '-f', './dockertests/unit-tests/kubernetes']
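For reference, the kubectl builder is usually pointed at the cluster through environment variables so it can fetch credentials itself; a sketch with placeholder zone and cluster names:

- name: 'gcr.io/cloud-builders/kubectl'
  args: ['create', '-R', '-f', './dockertests/unit-tests/kubernetes']
  env:
    # placeholders: replace with your cluster's zone and name
    - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
    - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'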