I want to create and remove a job using Google Cloud Build. Here's my configuration, which builds my Docker image and pushes it to GCR.
# cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/xyz/abc:latest', '-f', 'Dockerfile.ng-unit', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/xyz/abc:latest']
Now I want to create a job. I want to run something like
kubectl create -R -f ./kubernetes
which creates the jobs defined in the kubernetes folder.
I know Cloud Build has - name: 'gcr.io/cloud-builders/kubectl', but I can't figure out how to use it. Also, how can I authenticate it to run kubectl commands? How can I use service_key.json?
I wasn't able to connect and get cluster credentials. Here's what I did:
Went to IAM and added another role to xyz@cloudbuild.gserviceaccount.com. I used Project Editor.
Wrote this in cloudbuild.yaml:
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['create', '-R', '-f', './dockertests/unit-tests/kubernetes']
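What I think I'm still missing is pointing the kubectl builder at my cluster. My guess is something like the following, with placeholder cluster name and zone (untested):
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['create', '-R', '-f', './dockertests/unit-tests/kubernetes']
  env:
  # cluster zone and name below are placeholders for my actual cluster
  - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
  - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
But I'm not sure whether that is enough, or whether I still need a service_key.json.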
Related
How can I configure Google Cloud Build so that a docker-compose setup can use a secret file the same way it does when run locally on my machine?
My Docker-compose based setup uses a secrets entry to expose an API key to a backend component like this (simplified for example):
services:
  backend:
    build: docker_contexts/backend
    secrets:
      - API_KEY
    environment:
      - API_KEY_PATH=/run/secrets/api_key
secrets:
  API_KEY:
    file: ./secrets/api_key.json
From my understanding, docker-compose places any files listed in the secrets section under /run/secrets in the container, which is why the target location is hard-coded to /run/secrets.
I would like to run my docker-compose setup on Google Cloud Build with this configuration, but the only examples I've seen in the documentation load the secret as an environment variable. I have tried adding this secret to Secret Manager and copying it to a local file under /workspace/secrets like this:
steps:
- name: gcr.io/cloud-builders/gcloud
  # copy to /workspace/secrets so docker-compose can find it
  entrypoint: 'bash'
  args: [ '-c', 'echo $API_KEY > /workspace/secrets/api_key.json' ]
  volumes:
  - name: 'secrets'
    path: /workspace/secrets
  secretEnv: ['API_KEY']
# running docker-compose
- name: 'docker/compose:1.29.2'
  args: ['up', '-d']
  volumes:
  - name: 'secrets'
    path: /workspace/secrets
availableSecrets:
  secretManager:
  - versionName: projects/ID/secrets/API_KEY/versions/1
    env: API_KEY
But when I run the job on Google Cloud Build, I get this error message after everything is built: ERROR: for backend Cannot create container for service backend: invalid mount config for type "bind": bind source path does not exist: /workspace/secrets/api_key.json.
Is there a way I can copy the API_KEY environment variable at the cloudbuild.yaml level so it is accessible to the docker-compose level like it is when I run it on my local filesystem?
If you want to have the value of API_KEY taken from Secret Manager and placed into a text file at /workspace/secrets/api_key.json then change your step to this:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: "bash"
  args: ["-c", "mkdir -p /workspace/secrets && echo $$API_KEY > /workspace/secrets/api_key.json"]
  secretEnv: ["API_KEY"]
This will:
Remove the volumes attribute, which isn't needed because /workspace is already a volume that persists between steps
Make sure the directory exists before you try to put a file in it
Use the $$ syntax, as described in Use secrets from Secret Manager, so that the actual secret is echoed into the file
Note this section:
When specifying the secret in the args field, specify it using the environment variable prefixed with $$.
You can double-check that this is working by adding another step:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: "bash"
  args: ["-c", "cat /workspace/secrets/api_key.json"]
This step should echo the contents of the file to the build log, allowing you to confirm that:
The previous step read the secret
The previous step wrote the secret to the file
The file was written to a volume that persists across steps
From there you can configure docker-compose to read the contents of that persisted file.
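For example, the compose file's secrets entry could point at the file written by the first build step instead of the local ./secrets directory. A minimal sketch, assuming docker-compose runs from the default /workspace working directory:
secrets:
  API_KEY:
    # path written by the earlier gcloud step; it persists because /workspace is shared between steps
    file: /workspace/secrets/api_key.json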
I have a GCP Cloud Build YAML file that triggers on a new tag in GitHub.
I have configured the latest tag to display as the App Engine version, but I need the cloudbuild.yml file to replace the full stops in my tag with hyphens, otherwise it fails in the deployment phase.
- id: web:set-env
  name: 'gcr.io/cloud-builders/gcloud'
  env:
  - "VERSION=${TAG_NAME}"

# Deploy to Google Cloud App Engine
- id: web:deploy
  dir: "."
  name: "gcr.io/cloud-builders/gcloud"
  waitFor: ['web:build']
  args:
    [
      'app',
      'deploy',
      'app.web.yaml',
      "--version=${TAG_NAME}",
      "--no-promote",
    ]
I tried using --version=${TAG_NAME//./-}, but I get an error in the deployment phase.
I managed to replace the full stop with a hyphen by using the step below in the cloudbuild.yml file. Cloud Build's own substitutions don't support bash-style string replacement, so the replacement has to happen inside a bash step:
- id: tag:release
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: bash
  args:
    - '-c'
    - |
      version=$TAG_NAME
      gcloud app deploy app.web.yaml --version=${version//./-} --no-promote
Hope this helps others struggling with GCP.
I am trying to automate deployments of my Strapi app to Google App Engine using Cloud Build. This is my cloudbuild.yaml:
steps:
- name: 'ubuntu'
  entrypoint: "bash"
  args:
    - "-c"
    - |
      rm -rf app.yaml
      touch app.yaml
      cat <<EOT >> app.yaml
      runtime: custom
      env: flex
      env_variables:
        HOST: '0.0.0.0'
        NODE_ENV: 'production'
        DATABASE_NAME: ${_DATABASE_NAME}
        DATABASE_USERNAME: ${_DATABASE_USERNAME}
        DATABASE_PASSWORD: ${_DATABASE_PASSWORD}
        INSTANCE_CONNECTION_NAME: ${_INSTANCE_CONNECTION_NAME}
      beta_settings:
        cloud_sql_instances: ${_CLOUD_SQL_INSTANCES}
      automatic_scaling:
        min_num_instances: 1
        max_num_instances: 2
      EOT
      cat app.yaml
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: 'bash'
  args: ['-c', 'gcloud app deploy app.yaml --project ecomm-backoffice']
If I understand correctly how general CI/CD works, this file should create an app.yaml and then run the gcloud app deploy app.yaml --project ecomm-backoffice command.
However, Cloud Build is creating nested recursive builds once I push my changes to GitHub (triggers are enabled).
Can someone please help me with the right way to deploy Strapi/Node.js to App Engine using Cloud Build? I have searched for a lot of solutions but haven't had any luck so far.
Hi, I am trying to deploy to my GKE cluster through Cloud Build, and the deployment itself works. But every time I push a new image, my cluster does not pick it up and keeps running the pod with the old image (nothing changes). When I delete my pod and trigger the Cloud Build again, it picks up the new image. I have also added imagePullPolicy: Always.
Below is my cloudbuild.yaml file.
- id: 'build your instance'
  name: 'maven:3.6.0-jdk-8-slim'
  entrypoint: mvn
  args: ['clean', 'package', '-Dmaven.test.skip=true']
- id: "docker build"
  name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/PID/test', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/PID/test']
- id: 'Deploy image to kubernetes'
  name: 'gcr.io/cloud-builders/gke-deploy'
  args:
  - run
  - --filename=./run/helloworld/src
  - --location=us-central1-c
  - --cluster=cluster-2
My pod manifest looks like this.
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    app: hello
spec:
  containers:
  - name: private-reg-containers
    image: gcr.io/PID/test
    imagePullPolicy: "Always"
Any help is appreciated.
This is expected behavior, and you may be confusing the purpose of imagePullPolicy: "Always". This is well explained in this answer:
Kubernetes is not watching for a new version of the image. The image pull policy specifies how to acquire the image to run the container. Always means it will try to pull a new version each time it's starting a container. To see the update you'd need to delete the Pod (not the Deployment) - the newly created Pod will run the new image.
There is no direct way to have Kubernetes automatically update running containers with new images. This would be part of a continuous delivery system (perhaps using kubectl set image with the new sha256sum or an image tag - but not latest).
This is why the pods only get the newest image when you recreate them. So the answer to your question is to explicitly tell Kubernetes to use the newest image. In the example I share with you, I use two tags: the classic latest, which mostly gives the image a friendly name, and a tag based on $BUILD_ID, which is what actually updates the image in GKE. In this example I update the image of a Deployment, so changing it to update a standalone Pod is left as your little "homework".
steps:
# Building Image
- name: 'gcr.io/cloud-builders/docker'
  id: build-loona
  args:
  - build
  - --tag=${_LOONA}:$BUILD_ID
  - --tag=${_LOONA}:latest
  - .
  dir: 'loona/'
  waitFor: ['-']
# Pushing image (this pushes the image with both tags)
- name: 'gcr.io/cloud-builders/docker'
  id: push-loona
  args:
  - push
  - ${_LOONA}
  waitFor:
  - build-loona
# Deploying to GKE
- name: "gcr.io/cloud-builders/gke-deploy"
  id: deploy-gke
  args:
  - run
  - --filename=k8s/
  - --location=${_COMPUTE_ZONE}
  - --cluster=${_CLUSTER_NAME}
# Update Image
- name: 'gcr.io/cloud-builders/kubectl'
  id: update-loona
  args:
  - set
  - image
  - deployment/loona-deployment
  - loona=${_LOONA}:$BUILD_ID
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_COMPUTE_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLUSTER_NAME}'
  waitFor:
  - deploy-gke
substitutions:
  _CLUSTER_NAME: my-cluster
  _COMPUTE_ZONE: us-central1
  _LOONA: gcr.io/${PROJECT_ID}/loona
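For a standalone Pod like the one in the question, the same idea should translate to a kubectl set image step targeting the Pod instead of the Deployment. An untested sketch, reusing the names, zone and cluster from the question, and assuming the docker build step also tags the image with $BUILD_ID as above:
- name: 'gcr.io/cloud-builders/kubectl'
  args:
  # point the Pod's container at the freshly built tag
  - set
  - image
  - pod/test
  - private-reg-containers=gcr.io/PID/test:$BUILD_ID
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-central1-c'
  - 'CLOUDSDK_CONTAINER_CLUSTER=cluster-2'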
I have a small cloudbuild.yaml file where I build a Docker image, push it to Google Container Registry (GCR) and then apply the changes to my Kubernetes cluster. It looks like this:
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: [
    '-c',
    'docker pull gcr.io/$PROJECT_ID/frontend:latest || exit 0'
  ]
- name: "gcr.io/cloud-builders/docker"
  args:
    [
      "build",
      "-f",
      "./services/frontend/prod.Dockerfile",
      "-t",
      "gcr.io/$PROJECT_ID/frontend:$REVISION_ID",
      "-t",
      "gcr.io/$PROJECT_ID/frontend:latest",
      ".",
    ]
- name: "gcr.io/cloud-builders/docker"
  args: ["push", "gcr.io/$PROJECT_ID/frontend"]
- name: "gcr.io/cloud-builders/kubectl"
  args: ["apply", "-f", "kubernetes/gcp/frontend.yaml"]
  env:
  - "CLOUDSDK_COMPUTE_ZONE=europe-west3-a"
  - "CLOUDSDK_CONTAINER_CLUSTER=cents-ideas"
- name: "gcr.io/cloud-builders/kubectl"
  args: ["rollout", "restart", "deployment/frontend-deployment"]
  env:
  - "CLOUDSDK_COMPUTE_ZONE=europe-west3-a"
  - "CLOUDSDK_CONTAINER_CLUSTER=cents-ideas"
The build runs smoothly until the last step, args: ["rollout", "restart", "deployment/frontend-deployment"], which produces the following log output:
Already have image (with digest): gcr.io/cloud-builders/kubectl
Running: gcloud container clusters get-credentials --project="cents-ideas" --zone="europe-west3-a" "cents-ideas"
Fetching cluster endpoint and auth data.
kubeconfig entry generated for cents-ideas.
Running: kubectl rollout restart deployment/frontend-deployment
error: unknown command "restart deployment/frontend-deployment"
See 'kubectl rollout -h' for help and examples.
Allegedly, restart is an unknown command. But it works when I run kubectl rollout restart deployment/frontend-deployment manually.
How can I fix this problem?
Looking at the Kubernetes release notes, the kubectl rollout restart command was introduced in v1.15. In your case, it seems Cloud Build is using an older version where this command wasn't implemented yet.
After doing some tests, it appears Cloud Build picks a kubectl client version based on the cluster's server version. For example, when running the following build:
steps:
- name: "gcr.io/cloud-builders/kubectl"
  args: ["version"]
  env:
  - "CLOUDSDK_COMPUTE_ZONE=<cluster_zone>"
  - "CLOUDSDK_CONTAINER_CLUSTER=<cluster_name>"
If the cluster's master version is v1.14, Cloud Build uses a v1.14 kubectl client and returns the same unknown command "restart" error message. When the master's version is v1.15, Cloud Build uses a v1.15 kubectl client and the command runs successfully.
So in your case, I suspect your "cents-ideas" cluster's master version is < 1.15, which would explain the error you're getting. As for why it works when you run the command manually (locally, I assume), your local kubectl client is presumably v1.15 or newer, since it's the client version that determines whether the restart subcommand exists.
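If that turns out to be the case, one possible fix (assuming upgrading the cluster is acceptable) is to bring the master to 1.15 or later, so that the kubectl client Cloud Build selects also understands rollout restart. A sketch, with 1.15 as an illustrative target version:
# upgrade the master of the zonal cluster from the question (illustrative version number)
gcloud container clusters upgrade cents-ideas --master --cluster-version 1.15 --zone europe-west3-a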