GCP: Cloud Run preview build fails because of a missing tag "latest" - github

I have an issue where Cloud Build is failing to create a preview build for use in GitHub pull requests.
I have:
- a GitHub organization with the Cloud Build app installed,
- a Cloud Build set-up with triggers to deploy to Cloud Run,
- a functional build on master deploy (doesn't really matter here).
The following is my cloudbuild-preview.yaml file. The failing step is the last one: "link revision on pull request"
steps:
  - id: "build image"
    name: "gcr.io/cloud-builders/docker"
    args:
      [
        "build",
        "-t",
        "$_GCR_HOSTNAME/${PROJECT_ID}/${_SERVICE_NAME}:${_PR_NUMBER}-${SHORT_SHA}",
        ".",
      ]
  - id: "push image"
    name: "gcr.io/cloud-builders/docker"
    args:
      [
        "push",
        "$_GCR_HOSTNAME/${PROJECT_ID}/${_SERVICE_NAME}:${_PR_NUMBER}-${SHORT_SHA}",
      ]
  - id: "deploy revision with tag"
    name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
    entrypoint: "gcloud"
    args:
      [
        "beta",
        "run",
        "deploy",
        "${_SERVICE_NAME}",
        "--platform",
        "managed",
        "--region",
        "${_REGION}",
        "--image",
        "$_GCR_HOSTNAME/${PROJECT_ID}/${_SERVICE_NAME}:${_PR_NUMBER}-${SHORT_SHA}",
        "--tag",
        "pr-${_PR_NUMBER}",
        "--no-traffic",
      ]
  - id: "link revision on pull request"
    name: "$_GCR_HOSTNAME/${PROJECT_ID}/deployment-previews" # our custom builder
    args:
      [
        "set",
        "--project-id",
        "${PROJECT_ID}",
        "--region",
        "${_REGION}",
        "--service",
        "${_SERVICE_NAME}",
        "--pull-request",
        "${_PR_NUMBER}",
        "--repo-name",
        "${_GITHUB_REPO}",
        "--commit-sha",
        "${SHORT_SHA}",
      ]
timeout: 1400s
options:
  machineType: N1_HIGHCPU_8
substitutions:
  _GCR_HOSTNAME: eu.gcr.io
  _SERVICE_NAME: redacted-service
  _REGION: europe-west4
  _GITHUB_REPO: $(pull_request.pull_request.head.repo.full_name)
The execution fails with
Step #3 - "link revision on pull request": Error response from daemon: manifest for eu.gcr.io/redacted-org/deployment-previews:latest not found: manifest unknown: Failed to fetch "latest" from request "/v2/redacted-org/deployment-previews/manifests/latest".
Step #3 - "link revision on pull request": Using default tag: latest
Step #3 - "link revision on pull request": Pulling image: eu.gcr.io/redacted-org/deployment-previews
Starting Step #3 - "link revision on pull request"
What I don't understand is why the step is even looking for a :latest tag. There is none: the steps above don't create one, and the container registry does not contain one.
How do I tell that build step to use the proper image, tagged with ${_PR_NUMBER}-${SHORT_SHA}?
Where can I dive into the magic here? Where is the definition of this magic build step?!
Thank you very much for any ideas.

When you don't specify an image tag, container tools will always try to pull the :latest image. In Cloud Build, you can pin a builder image to a specific version by simply including the tag in the name of your build step:
- id: "link revision on pull request"
name: "$_GCR_HOSTNAME/${PROJECT_ID}/deployment-previews:${_PR_NUMBER}-${SHORT_SHA}" # our custom builder
args:
[
"set",
"--project-id",
"${PROJECT_ID}",
"--region",
"${_REGION}",
"--service",
"${_SERVICE_NAME}",
"--pull-request",
"${_PR_NUMBER}",
"--repo-name",
"${_GITHUB_REPO}",
"--commit-sha",
"${SHORT_SHA}",
]
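If you would rather keep referencing the builder without a tag, another option is to publish the builder image with an explicit :latest tag. A minimal sketch of a build config for the builder itself, assuming its Dockerfile sits at the root of its own repository (the org name follows the error log above; treat it as a placeholder):
steps:
  # Build the custom builder and tag it :latest explicitly.
  - id: "build builder"
    name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t", "eu.gcr.io/redacted-org/deployment-previews:latest", "."]
# Listing the image here makes Cloud Build push it once all steps succeed.
images:
  - "eu.gcr.io/redacted-org/deployment-previews:latest"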

Related

Why isn't Cloud Code honoring my cloudbuild.yaml file but gcloud beta builds submit is?

I am using Google's Cloud Code extension with Visual Studio Code to use GCP's Cloud Build and deploy to a local kubernetes cluster (Docker Desktop). I have directed Cloud Build to run unit tests after installing modules.
When I build using the command line gcloud beta builds submit, Cloud Build does the module install and then, as intended, fails the build because I intentionally wrote a failing unit test. So that's great.
However, when I try to build and deploy using the Cloud Code extension, it is not using my cloudbuild.yaml at all. I know this because:
1. The build succeeds even with the failing unit test.
2. No logging from the unit test appears in GCP logging.
3. I completely deleted cloudbuild.yaml and the build/deploy still succeeded, which seems to imply Cloud Code is using the Dockerfile.
What do I need to do to ensure Cloud Code uses cloudbuild.yaml for its build/deploy to a local instance of kubernetes?
Thanks!
cloudbuild.yaml
steps:
  - name: node
    entrypoint: npm
    args: ['install']
  - id: "test"
    name: node
    entrypoint: npm
    args: ['test']
options:
  logging: CLOUD_LOGGING_ONLY
skaffold.yaml
apiVersion: skaffold/v2beta19
kind: Config
build:
  tagPolicy:
    sha256: {}
  artifacts:
    - context: .
      image: genesys-gencloud-dev
deploy:
  kubectl:
    manifests:
      - kubernetes-manifests/**
profiles:
  - name: cloudbuild
    build:
      googleCloudBuild: {}
launch.json
{
  "configurations": [
    {
      "name": "Kubernetes: Run/Debug - cloudbuild",
      "type": "cloudcode.kubernetes",
      "request": "launch",
      "skaffoldConfig": "${workspaceFolder}\\skaffold.yaml",
      "profile": "cloudbuild",
      "watch": true,
      "cleanUp": false,
      "portForward": true,
      "internalConsoleOptions": "neverOpen",
      "imageRegistry": "gcr.io/my-gcp-project",
      "debug": [
        {
          "image": "my-image-dev",
          "containerName": "my-container-dev",
          "sourceFileMap": {
            "${workspaceFolder}": "/WORK_DIR"
          }
        }
      ]
    }
  ]
}
You will need to edit your skaffold.yaml file to use Cloud Build:
build:
  googleCloudBuild: {}
See https://skaffold.dev/docs/pipeline-stages/builders/#remotely-on-google-cloud-build for more details.
EDIT: It looks like your skaffold.yaml enables Cloud Build for the cloudbuild profile, but the profile isn't active.
Some options:
- Add "profile": "cloudbuild" to your launch.json for 'Run on Kubernetes'.
- Move the googleCloudBuild: {} to the top-level build: section (in other words, skip using the profile).
- Activate the profile using one of the other methods from https://skaffold.dev/docs/environment/profiles/#activation; a sketch of one such method follows.
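For instance, a profile can auto-activate when skaffold runs against a particular kube-context. A minimal sketch, where the docker-desktop context value is an assumption matching the asker's local cluster:
profiles:
  - name: cloudbuild
    # Assumed: auto-activate this profile whenever the current kube-context
    # is docker-desktop, so no -p flag or launch.json profile entry is needed.
    activation:
      - kubeContext: docker-desktop
    build:
      googleCloudBuild: {}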
UPDATE (from asker)
I needed to do the following:
Update skaffold.yaml as follows. In particular, note the image field under build > artifacts, and the projectId field under profiles > build.
apiVersion: skaffold/v2beta19
kind: Config
build:
  tagPolicy:
    sha256: {}
  artifacts:
    - context: .
      image: gcr.io/my-project-id/my-image
deploy:
  kubectl:
    manifests:
      - kubernetes-manifests/**
profiles:
  - name: cloudbuild
    build:
      googleCloudBuild:
        projectId: my-project-id
Run this command to activate the profile: skaffold dev -p cloudbuild

Auto-approve not automatically merged in GitHub

I have a GitHub workflow where a PR is created each day to update some dependencies.
The problem is that I have to merge it manually. The PR gets an auto-approve label.
Then I can click "Merge pull request". If I go to Settings > Branches > main and enable "Require approvals" (1), I can enable "Auto-merge" for the repo, but the PR still doesn't fully merge because it is waiting on an approval, so I have to turn the requirement off again.
I'm not sure how this flow is supposed to work; I just want the PR to get merged automatically. In my settings I have already enabled "Allow auto-merge".
This is a part of my upgrade-main.yml.
- name: Create Pull Request
  id: create-pr
  uses: peter-evans/create-pull-request@v3
  with:
    token: ${{ secrets.PROJEN_GITHUB_TOKEN }}
    commit-message: |-
      chore(deps): upgrade dependencies
      Upgrades project dependencies. See details in [workflow run].
      [Workflow Run]: https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}
      ------
      *Automatically created by projen via the "upgrade-main" workflow*
    branch: github-actions/upgrade-main
    title: "chore(deps): upgrade dependencies"
    labels: auto-approve
    body: |-
      Upgrades project dependencies. See details in [workflow run].
      [Workflow Run]: https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}
      ------
      *Automatically created by projen via the "upgrade-main" workflow*
    author: github-actions <github-actions@github.com>
    committer: github-actions <github-actions@github.com>
    signoff: true
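Not part of the question, but one way to close this loop is to explicitly enable auto-merge on the PR right after creating it, using the PR number that create-pull-request outputs. A sketch; the action choice, token, and merge method are assumptions to verify against your own setup:
- name: Enable auto-merge
  # Hypothetical follow-up step: turns on GitHub's auto-merge for the PR
  # created above, so it merges once required approvals/checks are satisfied.
  uses: peter-evans/enable-pull-request-automerge@v3
  with:
    token: ${{ secrets.PROJEN_GITHUB_TOKEN }}
    pull-request-number: ${{ steps.create-pr.outputs.pull-request-number }}
    merge-method: squash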

AWS ECS Blue/Green CodePipeline: Exception while trying to read the image artifact

I wanted to create a CodePipeline which builds a container image from a CodeCommit source and afterwards deploys the new image in a Blue/Green fashion to my ECS service (EC2 launch type).
- The source stage is CodeCommit, which already includes appspec.json as well as taskdef.json.
- The build stage builds the new container & pushes it to ECR successfully; the file imagedefinition.json is the BuildArtifact created at this step, containing the container and the recently created image with its tag corresponding to the CodeCommit commit-id.
- The deploy stage is made of the action "Amazon ECS (Blue/Green)", using the SourceArtifact and BuildArtifact as InputArtifacts, to take the appspec and taskdef from the SourceArtifact and the image description from the BuildArtifact, and finally deploy the new container in a Blue/Green manner.
The problem is with the image definition from the BuildArtifact. The pipeline fails in the Deploy phase with this error:
Invalid action configuration
Exception while trying to read the image artifact file from the artifact: BuildArtifact.
How do I properly configure the "Amazon ECS (Blue/Green)" deploy phase so that it can use the recently created image and deploy it, replacing the placeholder IMAGE_NAME inside taskdef.json?
Any hint highly appreciated :D
Answering my own question here; hopefully it helps others who are facing the same situation.
The file imagedefinitions.json is inappropriate for the deploy action "Amazon ECS Blue/Green". For that you have to create the file imageDetail.json within the build step and provide it as an artifact to the deploy step. How? This is how the bottom of my buildspec.yaml looks:
      - printf '{"ImageURI":"%s"}' $REPOSITORY_URI:$IMAGE_TAG > imageDetail.json
artifacts:
  files:
    - 'image*.json'
    - 'appspec.yaml'
    - 'taskdef.json'
  secondary-artifacts:
    DefinitionArtifact:
      files:
        - appspec.yaml
        - taskdef.json
    ImageArtifact:
      files:
        - imageDetail.json
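For context, a minimal sketch of how the whole buildspec.yaml could fit together; the repository URI, region, and tag variables are placeholder assumptions, not the asker's actual values:
version: 0.2
phases:
  pre_build:
    commands:
      # Placeholders: substitute your own ECR registry, repository and region.
      - REPOSITORY_URI=123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-repo
      - IMAGE_TAG=$CODEBUILD_RESOLVED_SOURCE_VERSION
      - aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com
  build:
    commands:
      - docker build -t $REPOSITORY_URI:$IMAGE_TAG .
  post_build:
    commands:
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      # Produce the file the Blue/Green deploy action reads (see above).
      - printf '{"ImageURI":"%s"}' $REPOSITORY_URI:$IMAGE_TAG > imageDetail.json
artifacts:
  files:
    - 'image*.json'
    - 'appspec.yaml'
    - 'taskdef.json'
  secondary-artifacts:
    DefinitionArtifact:
      files:
        - appspec.yaml
        - taskdef.json
    ImageArtifact:
      files:
        - imageDetail.json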
In the Deploy phase of CodePipeline, use DefinitionArtifact and ImageArtifact as Input Artifacts and configure them in the corresponding sections "Amazon ECS task definition" and "AWS CodeDeploy AppSpec file".
Ensure that your appspec.yaml contains a placeholder for the task definition. Here is my appspec.yaml:
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: <TASK_DEFINITION>
        LoadBalancerInfo:
          ContainerName: "my-test-container"
          ContainerPort: 8000
Also ensure that your taskdef.json contains a placeholder for the final image, like
...
"image": "<IMAGE1_NAME>",
...
Use that placeholder in the CodePipeline config of your Blue/Green deploy phase, in the section "Dynamically update task definition image - optional", by choosing "ImageArtifact" as the input artifact and <IMAGE1_NAME> as the placeholder.
The Amazon ECS Blue/Green (or CodeDeployToECS) CodePipeline action requires the TaskDefinitionTemplateArtifact parameter (see [1]).
In addition to the above, note that an imageDetail.json file is required for ECS Blue/Green deployments (not imagedefinition.json). The file structure and details are available at [2]. Add this file to the root of your deployment artifact/version control. If you do not want to add the file manually, you can add an ECR source action to the CodePipeline and configure it with the image you are using in the ECS service/taskdef.json. This is all discussed at [2] for clarity.
To see how this is all brought together, you can also follow the step-by-step instructions for ECS Blue/Green deployments at [3].
References:
[1] https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html#action-requirements : CodePipeline Pipeline Structure Reference - Action Structure Requirements in CodePipeline
[2] https://docs.aws.amazon.com/codepipeline/latest/userguide/file-reference.html#file-reference-ecs-bluegreen : Image Definitions File Reference - imageDetail.json File for Amazon ECS Blue/Green Deployment Actions
[3] https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-ecs-ecr-codedeploy.html : Tutorial: Create a Pipeline with an Amazon ECR Source and ECS-to-CodeDeploy Deployment
I ran into the same problem.
tl;dr: I was not passing the correct input artefact with the imageDetail.json to the pipeline's CodeDeployToECS action.
Summary:
Instead of checking in a version of the task definition with the '<IMAGE1_NAME>' placeholder, I'm dynamically generating the task definition input to CodeDeploy inside the pipeline.
The task definition early in the project is quite volatile, with new variables etc. being passed to the container. It's generated and registered within the pipeline (CloudFormation) and then read out via a CodeBuild project, which substitutes the image placeholder with '<IMAGE1_NAME>' and passes the result to the next stage in the pipeline via a pipeline artefact.
Fixing it:
I have a CodeBuild project within the pipeline that produces the imageDetail.json:
{"ImageURI":"########.dkr.ecr.eu-west-1.amazonaws.com/##/#####:2739511dd87d4e4e1f65ed69c9e779b63fb72e36-master-fbe73fdc-6213-4bd6-a784-dcc3d2ae7845"}
Its pipeline output is named 'BuildDockerOutput'.
I have another CodeBuild project that produces:
taskdef.json
{
  "containerDefinitions": [
    {
      "name": "ronantest1",
      "image": "<IMAGE1_NAME>"
    }
  ]
}
appspec.json
{
  "version": 0.0,
  "Resources": [
    {
      "TargetService": {
        "Type": "AWS::ECS::Service",
        "Properties": {
          "TaskDefinition": "<TASK_DEFINITION>",
          "LoadBalancerInfo": {
            "ContainerName": "ronantest1",
            "ContainerPort": "8080"
          }
        }
      }
    }
  ],
  "Hooks": [
    {
      "AfterAllowTestTraffic": "arn:aws:lambda:eu-west-1:######:function:code-deploy-after-allow-test-traffic"
    }
  ]
}
Its pipeline output is named 'PrepareCodeDeployOutputTesting'.
My final CodeDeploy action looks like the following:
- Name: BlueGreenDeploy
  InputArtifacts:
    - Name: BuildDockerOutput
    - Name: PrepareCodeDeployOutputTesting
  Region: !Ref DeployRegion1
  ActionTypeId:
    Category: Deploy
    Owner: AWS
    Version: '1'
    Provider: CodeDeployToECS
  RoleArn: !Sub arn:aws:iam::${TestingAccountId}:role/######/CrossAccountsDeploymentRole
  Configuration:
    AppSpecTemplateArtifact: PrepareCodeDeployOutputTesting
    AppSpecTemplatePath: appspec.json
    ApplicationName: !Ref ApplicationName
    DeploymentGroupName: !Ref ApplicationName
    TaskDefinitionTemplateArtifact: PrepareCodeDeployOutputTesting
    TaskDefinitionTemplatePath: taskdef.json
    Image1ArtifactName: BuildDockerOutput
    Image1ContainerName: "IMAGE1_NAME"
  RunOrder: 4
Note that the different parts of the CodeDeployToECS action take their artefacts from different InputArtifacts; in particular, Image1ArtifactName points at the artifact carrying imageDetail.json.
Thanks to all; this gives me some light on solving the issue.
I would like to add that when you use the AWS CLI, CloudFormation, or Terraform to configure CodePipeline, some parameters and options are not available in the console, and setting some variables in these tools to the empty string "" will cause an exception error.
Always check the CodePipeline settings in the console when you deploy using these tools.
So the error occurs when you have defined the Image Artifact but have not defined the placeholder.
imageDetail.json can be passed into CodeDeploy using the following methods:
- Git source (CodeCommit or GitHub): the file exists in your app codebase.
- ECR source: the file will be autogenerated by ECR, but it will use the SHA256 digest instead of the image tag (see the example below).
- CodeBuild source: you update the file in your CodeBuild buildspec.yml and pass it down to the CodeDeploy stage.
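For illustration, an ECR-sourced imageDetail.json references the image by digest rather than by tag; the account, region, repository, and <digest> (which stands in for a 64-character hex value) below are placeholders:
{
  "ImageURI": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-repo@sha256:<digest>"
}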

Unable to run Sonarqube analysis from cloudbuild.yaml with Google Cloud build

I have integrated my GitHub repo with Google Cloud Build to automatically build a Docker image after every commit in GitHub. This is working fine, but now I want to run SonarQube analysis on the code before the Docker image build. For that I have added the SonarQube part to the cloudbuild.yaml file, but I am not able to run it.
I have followed the steps provided in https://github.com/GoogleCloudPlatform/cloud-builders-community/tree/master/sonarqube and pushed the sonar-scanner image to Google Container Registry.
My SonarQube server is running on a GCP instance. On every commit in GitHub, Cloud Build is automatically triggered and starts the tasks mentioned in the cloudbuild.yaml file.
Dockerfile:
FROM nginx
COPY ./ /usr/share/nginx/html
cloudbuild.yaml:
steps:
- name: 'gcr.io/PROJECT_ID/sonar-scanner:latest'
    args:
    - '-Dsonar.host.url=sonarqube_url'
    - '-Dsonar.login=c2a7631a6e402c338739091ffbc30e5e3d66cf19'
    - '-Dsonar.projectKey=sample-project'
    - '-Dsonar.sources=.'
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'gcr.io/PROJECT_ID/html-css-website', '.' ]
images:
- 'gcr.io/PROJECT_ID/html-css-website'
Error:
Status: Build failed
Status detail: failed unmarshalling build config cloudbuild.yaml: yaml: line 3: did not find expected key
If the formatting you've pasted actually matches what you've got in your project, then your issue is that the args property within the first steps block is indented too far: it should be aligned with the name property above it.
---
steps:
  - name: "gcr.io/PROJECT_ID/sonar-scanner:latest"
    args:
      - "-Dsonar.host.url=sonarqube_url"
      - "-Dsonar.login=c2a7631a6e402c338739091ffbc30e5e3d66cf19"
      - "-Dsonar.projectKey=sample-project"
      - "-Dsonar.sources=."
  - name: "gcr.io/cloud-builders/docker"
    args:
      - "build"
      - "-t"
      - "gcr.io/PROJECT_ID/html-css-website"
      - "."
images:
  - "gcr.io/PROJECT_ID/html-css-website"

Concourse merge another branch

I'm trying to automate deployments using Concourse CI.
I have a Go application that is checked into a local GitLab with two branches (master and develop).
I have a pipeline set up for the develop branch that runs Go unit tests; if they pass, I want to automatically merge the changes from the develop branch to the master branch and tag it with the latest version.
Here is what I have so far:
jobs:
  - name: run-unit-tests
    public: true
    plan:
      - get: source-master
      - get: source
        trigger: true
      - put: discord
        params:
          channel: "((channel_id))"
          color: 6076508
          title: Concourse CI
          message: |
            Starting Unit tests for manageGameData
      - task: task-unit-tests
        file: source/ci/tasks/task-unit-tests.yml
        on_success:
          do:
            - put: discord
              params:
                channel: "((channel_id))"
                color: 6076508
                title: Concourse CI
                message: |
                  All Unit tests passed for manageGameData
            - put: version
              params:
                bump: minor
            - get: version
            - put: source-master
              params:
                merge: source
                repository: source-master
                tag: version/number
The problem is that this only tags the master branch with the new version.
Is there a way to merge the develop branch to master?
I guess I didn't understand the documentation at first, but the answer was pretty easy.
- get: source-master
- get: source
- put: source-master
  params:
    repository: source
First you have to get both branches, in this case master and develop. Then you push the source local repo (a folder on the Concourse worker) to master by using put.
There is no need for the merge parameter, and I had the wrong repository parameter.
Hope this helps someone else.
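For completeness, a minimal sketch of the git resource definitions a pipeline like this assumes; the URIs and repo name are placeholders, not from the original pipeline:
resources:
  - name: source            # develop branch; triggers the job
    type: git
    source:
      uri: https://gitlab.example.com/my-org/manage-game-data.git
      branch: develop
  - name: source-master     # master branch; target of the put
    type: git
    source:
      uri: https://gitlab.example.com/my-org/manage-game-data.git
      branch: master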
Alternatively, you could just use scripts for more complex git commands:
platform: linux
image_resource:
  type: docker-image
  source:
    repository: concourse/buildroot
    tag: git
run:
  path: /bin/bash
  args:
    - -c
    - |
      set -eux
      git clone https://user:passw@repo.git
      cd repo  # work inside the fresh clone
      git config --global user.name "UserName"
      git config --global user.email "email@your.com"
      git checkout master
      git merge hotfix
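Note that the merge above only exists in the worker's local clone; to persist it, the script would still need to push (remote and branch names assumed):
git push origin master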