I have a GCP Cloud Build YAML file that triggers on a new tag in GitHub.
I have configured the latest tag to display as the App Engine version, but I need to configure the cloudbuild.yml file to replace the full stops in my tag with hyphens, otherwise it fails in the deployment phase.
- id: web:set-env
  name: 'gcr.io/cloud-builders/gcloud'
  env:
    - "VERSION=${TAG_NAME}"
# Deploy to Google Cloud App Engine
- id: web:deploy
  dir: "."
  name: "gcr.io/cloud-builders/gcloud"
  waitFor: ['web:build']
  args:
    [
      'app',
      'deploy',
      'app.web.yaml',
      "--version=${TAG_NAME}",
      '--no-promote',
    ]
I tried using --version=${TAG_NAME//./-}, but I get an error in the deployment phase.
I managed to replace the full stop with a hyphen by using the step below in the cloudbuild.yml file:
- id: tag:release
  name: 'gcr.io/cloud-builders/gcloud'
  args:
    - '-c'
    - |
      version=$TAG_NAME
      gcloud app deploy app.web.yaml --version=${version//./-} --no-promote
  entrypoint: bash
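Applied to the original web:deploy step, the same workaround would look roughly like this; a sketch only, reusing the step id, app.web.yaml, and waitFor from the question:

- id: web:deploy
  dir: "."
  name: "gcr.io/cloud-builders/gcloud"
  waitFor: ['web:build']
  entrypoint: bash
  args:
    - '-c'
    - |
      # Cloud Build substitutes $TAG_NAME before bash runs;
      # the bash expansion below swaps dots for hyphens
      version=$TAG_NAME
      gcloud app deploy app.web.yaml --version=${version//./-} --no-promote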
How can I configure Google Cloud Build so that a docker-compose setup can use a secret file the same way it does when run locally on my machine?
My docker-compose based setup uses a secrets entry to expose an API key to a backend component, like this (simplified for the example):
services:
  backend:
    build: docker_contexts/backend
    secrets:
      - API_KEY
    environment:
      - API_KEY_PATH=/run/secrets/api_key
secrets:
  API_KEY:
    file: ./secrets/api_key.json
From my understanding, docker-compose places any files listed in the secrets section under /run/secrets in the container for access, which is why the target location is hard-coded to /run/secrets.
I would like to run my docker-compose setup on Google Cloud Build with this configuration, but the only examples I've seen in the documentation load the secret as an environment variable. I have tried providing this secret through Secret Manager and copying it to a local file so docker-compose can mount it under /run/secrets, like this:
steps:
  - name: gcr.io/cloud-builders/gcloud
    # copy to /workspace/secrets so docker-compose can find it
    entrypoint: 'bash'
    args: [ '-c', 'echo $API_KEY > /workspace/secrets/api_key.json' ]
    volumes:
      - name: 'secrets'
        path: /workspace/secrets
    secretEnv: ['API_KEY']
  # running docker-compose
  - name: 'docker/compose:1.29.2'
    args: ['up', '-d']
    volumes:
      - name: 'secrets'
        path: /workspace/secrets
availableSecrets:
  secretManager:
    - versionName: projects/ID/secrets/API_KEY/versions/1
      env: API_KEY
But when I run the job on Google Cloud Build, I get this error message after everything is built: ERROR: for backend Cannot create container for service backend: invalid mount config for type "bind": bind source path does not exist: /workspace/secrets/api_key.json.
Is there a way I can copy the API_KEY environment variable at the cloudbuild.yaml level so it is accessible to the docker-compose level like it is when I run it on my local filesystem?
If you want to have the value of API_KEY taken from Secret Manager and placed into a text file at /workspace/secrets/api_key.json then change your step to this:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: "bash"
  args: ["-c", "mkdir -p /workspace/secrets && echo $$API_KEY > /workspace/secrets/api_key.json"]
  secretEnv: ["API_KEY"]
This will:
Remove the volumes attribute, which is not needed because /workspace is already a volume that persists between steps
Make sure the directory exists before you try to put a file in it
Use the $$ syntax as described in Use secrets from Secret Manager so that it echoes the actual secret to the file
Note this section:
When specifying the secret in the args field, specify it using the environment variable prefixed with $$.
You can double-check that this is working by adding another step:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: "bash"
  args: ["-c", "cat /workspace/secrets/api_key.json"]
This should print the contents of the file in the build log, allowing you to confirm that:
The previous step read the secret
The previous step wrote the secret to the file
The file was written to a volume that persists across steps
From there you can configure docker-compose to read the contents of that persisted file.
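For example, with docker-compose invoked from /workspace, the relative path in the compose file resolves to the file written by the first step. This is only a sketch that reuses the service from the question, with an explicit target added so the mount path matches API_KEY_PATH:

services:
  backend:
    build: docker_contexts/backend
    secrets:
      - source: API_KEY
        target: api_key   # mounted at /run/secrets/api_key in the container
    environment:
      - API_KEY_PATH=/run/secrets/api_key
secrets:
  API_KEY:
    # ./secrets/api_key.json resolves to /workspace/secrets/api_key.json
    # when the build step runs docker-compose from /workspace
    file: ./secrets/api_key.json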
I've been attempting to use Tekton to deploy some AWS infrastructure via Terraform but am not having much success.
The pipeline clones a GitHub repo containing TF code, then attempts to use the terraform-cli task to provision the AWS infrastructure. For initial testing I just want to perform the initial TF init and provision the AWS VPC.
Expected behaviour
Clone Github Repo
Perform Terraform Init
Create the VPC using targeted TF apply
Actual Result
task terraform-init has failed: failed to create task run pod "my-infra-pipelinerun-terraform-init": Pod "my-infra-pipelinerun-terraform-init-pod" is invalid: spec.initContainers[1].name: Duplicate value: "step-init". Maybe missing or invalid Task default/terraform-cli
pod for taskrun my-infra-pipelinerun-terraform-init not available yet
Tasks Completed: 2 (Failed: 1, Cancelled 0), Skipped: 1
Steps to Reproduce the Problem
Prerequisites: install the Tekton command line tool and the git-clone and terraform-cli tasks.
Create this pipeline in Minikube:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: my-infra-pipeline
spec:
  description: Pipeline for TF deployment
  params:
    - name: repo-url
      type: string
      description: Git repository URL
    - name: branch-name
      type: string
      description: The git branch
  workspaces:
    - name: tf-config
      description: The workspace where the tf config code will be stored
  tasks:
    - name: clone-repo
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: tf-config
      params:
        - name: url
          value: $(params.repo-url)
        - name: revision
          value: $(params.branch-name)
    - name: terraform-init
      runAfter: ["clone-repo"]
      taskRef:
        name: terraform-cli
      workspaces:
        - name: source
          workspace: tf-config
      params:
        - name: terraform-secret
          value: "tf-auth"
        - name: ARGS
          value:
            - init
    - name: build-vpc
      runAfter: ["terraform-init"]
      taskRef:
        name: terraform-cli
      workspaces:
        - name: source
          workspace: tf-config
      params:
        - name: terraform-secret
          value: "tf-auth"
        - name: ARGS
          value:
            - apply
            - "-target=aws_vpc.vpc -auto-approve"
Run the pipeline by creating a PipelineRun resource in Kubernetes (a sketch follows below).
Review the logs: tkn pipelinerun logs my-tf-pipeline -a
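A PipelineRun for this pipeline could look roughly like the following; the repository URL is a placeholder, and backing the tf-config workspace with a volumeClaimTemplate is just one possible choice:

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: my-infra-pipelinerun
spec:
  pipelineRef:
    name: my-infra-pipeline
  params:
    - name: repo-url
      value: https://github.com/example/my-tf-repo.git  # placeholder repo
    - name: branch-name
      value: main
  workspaces:
    - name: tf-config
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi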
Additional Information
Pipeline version: v0.35.1
There is a known issue regarding "step-init" in some earlier versions - I suggest you upgrade to the latest version (0.36.0) and try again.
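If the pipelines were installed from the release manifests, upgrading is typically just a matter of re-applying the newer release file; the URL below assumes the standard Tekton release bucket:

kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.36.0/release.yaml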
Hope this question helps others struggling with GCP.
I am trying to automate deployments of my Strapi app to Google App Engine using Cloud Build. This is my cloudbuild.yaml:
steps:
  - name: 'ubuntu'
    entrypoint: "bash"
    args:
      - "-c"
      - |
        rm -rf app.yaml
        touch app.yaml
        cat <<EOT >> app.yaml
        runtime: custom
        env: flex
        env_variables:
          HOST: '0.0.0.0'
          NODE_ENV: 'production'
          DATABASE_NAME: ${_DATABASE_NAME}
          DATABASE_USERNAME: ${_DATABASE_USERNAME}
          DATABASE_PASSWORD: ${_DATABASE_PASSWORD}
          INSTANCE_CONNECTION_NAME: ${_INSTANCE_CONNECTION_NAME}
        beta_settings:
          cloud_sql_instances: ${_CLOUD_SQL_INSTANCES}
        automatic_scaling:
          min_num_instances: 1
          max_num_instances: 2
        EOT
        cat app.yaml
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: 'bash'
    args: ['-c', 'gcloud app deploy app.yaml --project ecomm-backoffice']
If I understand correctly how general CI/CD works, this file should create an app.yaml and then run the gcloud app deploy app.yaml --project ecomm-backoffice command.
However, Cloud Build is creating nested, recursive builds once I push my changes to GitHub (triggers are enabled).
Can someone please help me with the right way of deploying Strapi/Node.js to App Engine using Cloud Build? I have searched for solutions but haven't had any luck so far.
Hi, I am trying to deploy to my GKE cluster through Cloud Build, and the deployment itself works. But every time I push a new image, my cluster does not pick up the new image and keeps running the pod with the old one (nothing changes). When I delete the pod and trigger the Cloud Build, it picks up the new image. I have also added imagePullPolicy: Always.
Below is my cloudbuild.yaml file.
- id: 'build your instance'
  name: 'maven:3.6.0-jdk-8-slim'
  entrypoint: mvn
  args: ['clean', 'package', '-Dmaven.test.skip=true']
- id: "docker build"
  name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/PID/test', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/PID/test']
- id: 'Deploy image to kubernetes'
  name: 'gcr.io/cloud-builders/gke-deploy'
  args:
    - run
    - --filename=./run/helloworld/src
    - --location=us-central1-c
    - --cluster=cluster-2
My pod manifest looks like this.
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    app: hello
spec:
  containers:
    - name: private-reg-containers
      image: gcr.io/PID/test
      imagePullPolicy: "Always"
Any help is appreciated.
This is expected behavior, and you may be confusing the usage of imagePullPolicy: "Always". This is well explained in this answer:
Kubernetes is not watching for a new version of the image. The image pull policy specifies how to acquire the image to run the container. Always means it will try to pull a new version each time it's starting a container. To see the update you'd need to delete the Pod (not the Deployment) - the newly created Pod will run the new image.
There is no direct way to have Kubernetes automatically update running containers with new images. This would be part of a continuous delivery system (perhaps using kubectl set image with the new sha256sum or an image tag - but not latest).
This is why the pods get the newest image when you recreate them. So the answer to your question is to explicitly tell K8s to get the newest image. In the example I share with you, I use two tags: the classic latest, which is mostly used to share the image under a friendly name, and a tag using $BUILD_ID, which is used to update the image in GKE. In this example I update the image for a deployment, so adapting it to update a standalone pod is your little "homework" (there is a hint after the example).
steps:
  # Building image
  - name: 'gcr.io/cloud-builders/docker'
    id: build-loona
    args:
      - build
      - --tag=${_LOONA}:$BUILD_ID
      - --tag=${_LOONA}:latest
      - .
    dir: 'loona/'
    waitFor: ['-']
  # Pushing image (this pushes the image with both tags)
  - name: 'gcr.io/cloud-builders/docker'
    id: push-loona
    args:
      - push
      - ${_LOONA}
    waitFor:
      - build-loona
  # Deploying to GKE
  - name: "gcr.io/cloud-builders/gke-deploy"
    id: deploy-gke
    args:
      - run
      - --filename=k8s/
      - --location=${_COMPUTE_ZONE}
      - --cluster=${_CLUSTER_NAME}
  # Update image
  - name: 'gcr.io/cloud-builders/kubectl'
    id: update-loona
    args:
      - set
      - image
      - deployment/loona-deployment
      - loona=${_LOONA}:$BUILD_ID
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=${_COMPUTE_ZONE}'
      - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLUSTER_NAME}'
    waitFor:
      - deploy-gke
substitutions:
  _CLUSTER_NAME: my-cluster
  _COMPUTE_ZONE: us-central1
  _LOONA: gcr.io/${PROJECT_ID}/loona
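As a hint for that homework, the same idea can be pointed at the question's standalone pod with kubectl set image. This is only a rough sketch that reuses the pod, container, cluster, and zone names from the question and assumes the build step also tags the image with $BUILD_ID as above:

- name: 'gcr.io/cloud-builders/kubectl'
  id: update-pod
  args:
    - set
    - image
    - pod/test
    - private-reg-containers=gcr.io/PID/test:$BUILD_ID
  env:
    # lets the kubectl builder fetch credentials for the right cluster
    - 'CLOUDSDK_COMPUTE_ZONE=us-central1-c'
    - 'CLOUDSDK_CONTAINER_CLUSTER=cluster-2'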
I want to create and remove a job using Google Cloud Build. Here's my configuration, which builds my Docker image and pushes it to GCR.
# cloudbuild.yaml
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/xyz/abc:latest', '-f', 'Dockerfile.ng-unit', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/xyz/abc:latest']
Now I want to create a job. I want to run something like
kubectl create -R -f ./kubernetes
which creates the jobs defined in the kubernetes folder.
I know Cloud Build has - name: 'gcr.io/cloud-builders/kubectl', but I can't figure out how to use it. Also, how can I authenticate it to run kubectl commands? How can I use service_key.json?
I wasn't able to connect and get cluster credentials at first. Here's what I did:
Go to IAM and add another role to xyz@cloudbuild.gserviceaccount.com. I used Project Editor.
Add this step to cloudbuild.yaml:
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['create', '-R', '-f', './dockertests/unit-tests/kubernetes']
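For the credentials part, the kubectl builder can fetch them itself when told which cluster to target via environment variables; a minimal sketch, where the zone and cluster name are placeholders for your own values:

- name: 'gcr.io/cloud-builders/kubectl'
  args: ['create', '-R', '-f', './dockertests/unit-tests/kubernetes']
  env:
    # the kubectl builder runs gcloud container clusters get-credentials
    # for this cluster/zone before executing the kubectl command
    - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'   # placeholder zone
    - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster' # placeholder cluster name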