Unable to run SonarQube analysis from cloudbuild.yaml with Google Cloud Build triggers

I have integrated my GitHub repo with Google Cloud Build to automatically build a Docker image after every commit on GitHub. This is working fine, but now I want to run SonarQube analysis on the code before the Docker image is built. For that I have integrated the SonarQube part into the cloudbuild.yaml file, but I'm not able to run it.
I have followed the steps provided at https://github.com/GoogleCloudPlatform/cloud-builders-community/tree/master/sonarqube
and pushed the sonar-scanner image to Google Container Registry.
My SonarQube server is running on a GCP instance. On every commit on GitHub, Cloud Build is automatically triggered and starts the tasks listed in the cloudbuild.yaml file.
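For reference, building and pushing the community sonar-scanner builder from that repository looks roughly like this (a sketch; gcloud pushes the resulting image to the current project's Container Registry):

```shell
# Clone the community builders and submit the sonar-scanner builder build
git clone https://github.com/GoogleCloudPlatform/cloud-builders-community.git
cd cloud-builders-community/sonarqube
gcloud builds submit . --config=cloudbuild.yaml
```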
Dockerfile:
FROM nginx
COPY ./ /usr/share/nginx/html
cloudbuild.yaml (indentation reproduced as in the project, which is what triggers the error below):
steps:
- name: 'gcr.io/PROJECT_ID/sonar-scanner:latest'
    args:
    - '-Dsonar.host.url=sonarqube_url'
    - '-Dsonar.login=c2a7631a6e402c338739091ffbc30e5e3d66cf19'
    - '-Dsonar.projectKey=sample-project'
    - '-Dsonar.sources=.'
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'gcr.io/PROJECT_ID/html-css-website', '.' ]
images:
- 'gcr.io/PROJECT_ID/html-css-website'
Error:
Status: Build failed
Status detail: failed unmarshalling build config cloudbuild.yaml: yaml: line 3: did not find expected key

If the formatting you've pasted actually matches what you've got in your project then your issue is that the args property within the first steps block is indented too far: it should be aligned with the name property above it.
---
steps:
- name: "gcr.io/PROJECT_ID/sonar-scanner:latest"
args:
- "-Dsonar.host.url=sonarqube_url"
- "-Dsonar.login=c2a7631a6e402c338739091ffbc30e5e3d66cf19"
- "-Dsonar.projectKey=sample-project"
- "-Dsonar.sources=."
- name: "gcr.io/cloud-builders/docker"
args:
- "build"
- "-t"
- "gcr.io/PROJECT_ID/html-css-website"
- "."
images:
- "gcr.io/PROJECT_ID/html-css-website"
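As a side note, PROJECT_ID does not need to be hardcoded: Cloud Build provides $PROJECT_ID as a default substitution. A sketch of the same config using it (the sonar.login token is omitted here; consider passing it via a substitution or secret rather than committing it):

```yaml
steps:
- name: "gcr.io/$PROJECT_ID/sonar-scanner:latest"
  args:
  - "-Dsonar.host.url=sonarqube_url"
  - "-Dsonar.projectKey=sample-project"
  - "-Dsonar.sources=."
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-t", "gcr.io/$PROJECT_ID/html-css-website", "."]
images:
- "gcr.io/$PROJECT_ID/html-css-website"
```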


Why isn't Cloud Code honoring my cloudbuild.yaml file but gcloud beta builds submit is?

I am using Google's Cloud Code extension with Visual Studio Code to use GCP's Cloud Build and deploy to a local kubernetes cluster (Docker Desktop). I have directed Cloud Build to run unit tests after installing modules.
When I build using the command line gcloud beta builds submit, the Cloud Build does the module install and successfully fails to build because I intentionally wrote a failing unit test. So that's great.
However, when I try to build and deploy using the Cloud Code extension, it is not using my cloudbuild.yaml at all. I know this because:
1.) The build succeeds even with the failing unit test
2.) No logging from the unit test appears in GCP logging
3.) I completely deleted cloudbuild.yaml and the build / deploy still succeeded, which seems to imply Cloud Code is using Dockerfile
What do I need to do to ensure Cloud Code uses cloudbuild.yaml for its build/deploy to a local instance of kubernetes?
Thanks!
cloudbuild.yaml
steps:
  - name: node
    entrypoint: npm
    args: ['install']
  - id: "test"
    name: node
    entrypoint: npm
    args: ['test']
options:
  logging: CLOUD_LOGGING_ONLY
skaffold.yaml
apiVersion: skaffold/v2beta19
kind: Config
build:
  tagPolicy:
    sha256: {}
  artifacts:
  - context: .
    image: genesys-gencloud-dev
deploy:
  kubectl:
    manifests:
    - kubernetes-manifests/**
profiles:
- name: cloudbuild
  build:
    googleCloudBuild: {}
launch.json
{
  "configurations": [
    {
      "name": "Kubernetes: Run/Debug - cloudbuild",
      "type": "cloudcode.kubernetes",
      "request": "launch",
      "skaffoldConfig": "${workspaceFolder}\\skaffold.yaml",
      "profile": "cloudbuild",
      "watch": true,
      "cleanUp": false,
      "portForward": true,
      "internalConsoleOptions": "neverOpen",
      "imageRegistry": "gcr.io/my-gcp-project",
      "debug": [
        {
          "image": "my-image-dev",
          "containerName": "my-container-dev",
          "sourceFileMap": {
            "${workspaceFolder}": "/WORK_DIR"
          }
        }
      ]
    }
  ]
}
You will need to edit your skaffold.yaml file to use Cloud Build:
build:
  googleCloudBuild: {}
See https://skaffold.dev/docs/pipeline-stages/builders/#remotely-on-google-cloud-build for more details.
EDIT: It looks like your skaffold.yaml enables Cloud Build for the cloudbuild profile, but the profile isn't active.
Some options:
Add "profile": "cloudbuild" to your launch.json for 'Run on Kubernetes'.
Move the googleCloudBuild: {} to the top-level build: section. (In other words, skip using the profile)
Activate the profile using one of the other methods from https://skaffold.dev/docs/environment/profiles/#activation
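For that third option, a profile can for instance be auto-activated by an environment variable (a sketch; the variable name USE_CLOUD_BUILD is made up):

```yaml
profiles:
- name: cloudbuild
  activation:
  - env: USE_CLOUD_BUILD=true   # profile activates when this env var matches
  build:
    googleCloudBuild: {}
```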
UPDATE (from asker)
I needed to do the following:
Update skaffold.yaml as follows. In particular, note the image field under build > artifacts, and the projectId field under profiles > build > googleCloudBuild.
apiVersion: skaffold/v2beta19
kind: Config
build:
  tagPolicy:
    sha256: {}
  artifacts:
  - context: .
    image: gcr.io/my-project-id/my-image
deploy:
  kubectl:
    manifests:
    - kubernetes-manifests/**
profiles:
- name: cloudbuild
  build:
    googleCloudBuild:
      projectId: my-project-id
Run this command to activate the profile: skaffold dev -p cloudbuild

Cloud Build can't access Artifact Registry when building a Cloud Run docker container

I'm using a package from Artifact Registry in my Cloud Run Node.js container.
When I run gcloud builds submit I get the following error:
Step #1: npm ERR! 403 403 Forbidden - GET https://us-east4-npm.pkg.dev/....
Step #1: npm ERR! 403 In most cases, you or one of your dependencies are requesting
Step #1: npm ERR! 403 a package version that is forbidden by your security policy.
Here is my cloudbuild.yaml:
steps:
- name: gcr.io/cloud-builders/npm
  args: ['run', 'artifactregistry-login']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/...', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/...']
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args:
  - 'run'
  - 'deploy'
  - 'admin-api'
  - '--image'
  - 'gcr.io/...'
  - '--region'
  - 'us-east4'
  - '--allow-unauthenticated'
images:
- 'gcr.io/....'
and Dockerfile
FROM node:14-slim
WORKDIR /usr/src/app
COPY --chown=node:node .npmrc ./
COPY package*.json ./
RUN npm install
COPY . ./
EXPOSE 8080
CMD [ "npm","run" ,"server" ]
.npmrc file:
@scope_xxx:registry=https://us-east4-npm.pkg.dev/project_xxx/repo_xxx/
//us-east4-npm.pkg.dev/project_xxx/repo_xxx/:always-auth=true
The Cloud Build service account already has the "Artifact Registry Reader" permission.
You have to connect the docker build to the Cloud Build network, which lets the build container reach the build's service account credentials. Like this:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/...', '--network=cloudbuild', '.']
I had the same root cause. My setup is close to @AmmAr's; after hours of trial and error I found a solution.
Disclaimer: this might not be the reason for your issue. The GCP 403 error message is vague, so you need to chip away and eliminate all possibilities; that is how I arrived on this page.
Compared to @AmmAr's setup above, these are the changes I made:
In the Node.js package.json, add these to the "scripts": {...} property:
"artifactregistry-login": "npx google-artifactregistry-auth",
"artifactregistry-auth-npmrc": "npx google-artifactregistry-auth .npmrc"
In cloudbuild.yaml, I added two steps prior to the build step. These steps should result in .npmrc getting appended with an access token, allowing npm to communicate with the GCP Artifact Registry; that resolved the 403 issue for my scenario.
steps:
- name: gcr.io/cloud-builders/npm
  args: ['run', 'artifactregistry-login']
- name: gcr.io/cloud-builders/npm
  args: ['run', 'artifactregistry-auth-npmrc']
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/...', '.']
#- next steps in your process...
In the Dockerfile, copy over .npmrc before package.json:
COPY .npmrc ./
COPY package*.json ./
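For reference, after the google-artifactregistry-auth steps run, the .npmrc that gets copied into the image should look roughly like this, with a short-lived token line appended (a sketch; the exact token-line format depends on the tool version):

```ini
@scope_xxx:registry=https://us-east4-npm.pkg.dev/project_xxx/repo_xxx/
//us-east4-npm.pkg.dev/project_xxx/repo_xxx/:always-auth=true
//us-east4-npm.pkg.dev/project_xxx/repo_xxx/:_authToken=<short-lived-access-token>
```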
Now run, and see if it gets past the build step where it pulls npm modules from Artifact Registry.
The solution that worked for me can be found in this blog post:
https://dev.to/brianburton/cloud-build-docker-and-artifact-registry-cicd-pipelines-with-private-packages-5ci2

GCP: Cloud Run preview build fails because of a missing tag "latest"

I have an issue where Cloud Build is failing to create a preview build for use in GitHub pull requests.
I have:
a GitHub organization with the Cloud Build app installed.
a Cloud Build setup with triggers to deploy to Cloud Run.
a functional build on master deploy (doesn't really matter here).
The following is my cloudbuild-preview.yaml file. The failing step is the last one: "link revision on pull request"
steps:
  - id: "build image"
    name: "gcr.io/cloud-builders/docker"
    args:
      [
        "build",
        "-t",
        "$_GCR_HOSTNAME/${PROJECT_ID}/${_SERVICE_NAME}:${_PR_NUMBER}-${SHORT_SHA}",
        ".",
      ]
  - id: "push image"
    name: "gcr.io/cloud-builders/docker"
    args:
      [
        "push",
        "$_GCR_HOSTNAME/${PROJECT_ID}/${_SERVICE_NAME}:${_PR_NUMBER}-${SHORT_SHA}",
      ]
  - id: "deploy revision with tag"
    name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
    entrypoint: "gcloud"
    args:
      [
        "beta",
        "run",
        "deploy",
        "${_SERVICE_NAME}",
        "--platform",
        "managed",
        "--region",
        "${_REGION}",
        "--image",
        "$_GCR_HOSTNAME/${PROJECT_ID}/${_SERVICE_NAME}:${_PR_NUMBER}-${SHORT_SHA}",
        "--tag",
        "pr-${_PR_NUMBER}",
        "--no-traffic",
      ]
  - id: "link revision on pull request"
    name: "$_GCR_HOSTNAME/${PROJECT_ID}/deployment-previews" # our custom builder
    args:
      [
        "set",
        "--project-id",
        "${PROJECT_ID}",
        "--region",
        "${_REGION}",
        "--service",
        "${_SERVICE_NAME}",
        "--pull-request",
        "${_PR_NUMBER}",
        "--repo-name",
        "${_GITHUB_REPO}",
        "--commit-sha",
        "${SHORT_SHA}",
      ]
timeout: 1400s
options:
  machineType: N1_HIGHCPU_8
substitutions:
  _GCR_HOSTNAME: eu.gcr.io
  _SERVICE_NAME: redacted-service
  _REGION: europe-west4
  _GITHUB_REPO: $(pull_request.pull_request.head.repo.full_name)
The execution fails with
Step #3 - "link revision on pull request": Error response from daemon: manifest for eu.gcr.io/redacted-org/deployment-previews:latest not found: manifest unknown: Failed to fetch "latest" from request "/v2/redacted-org/deployment-previews/manifests/latest".
Step #3 - "link revision on pull request": Using default tag: latest
Step #3 - "link revision on pull request": Pulling image: eu.gcr.io/redacted-org/deployment-previews
Starting Step #3 - "link revision on pull request"
What I don't understand is why the step is even looking for a :latest tag. There is none. The above steps don't create one. The container registry does not contain one.
How do I tell that build step to use the proper image tagged with ${_PR_NUMBER}-${SHORT_SHA}?
Where can I dive into the magic here? Where is the definition of this magic build step?
Thank you very much for any ideas.
When you don't specify an image tag, tools will always try to pull the :latest image. In Cloud Build, you can specify a specific version of a builder image by simply including the tag in the name for your build step:
- id: "link revision on pull request"
  name: "$_GCR_HOSTNAME/${PROJECT_ID}/deployment-previews:${_PR_NUMBER}-${SHORT_SHA}" # our custom builder
  args:
    [
      "set",
      "--project-id",
      "${PROJECT_ID}",
      "--region",
      "${_REGION}",
      "--service",
      "${_SERVICE_NAME}",
      "--pull-request",
      "${_PR_NUMBER}",
      "--repo-name",
      "${_GITHUB_REPO}",
      "--commit-sha",
      "${SHORT_SHA}",
    ]
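To verify which tags actually exist for the custom builder before pinning one, the registry can be inspected (a sketch; requires gcloud access to the image's project):

```shell
# Lists all tags pushed for the deployment-previews builder image
gcloud container images list-tags eu.gcr.io/redacted-org/deployment-previews
```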

Is there a way to put a lock on Concourse git-resource?

I have set up a pipeline in Concourse with some jobs that build Docker images.
After the build I push the image tag to the git repo.
The problem is that when the builds finish at the same time, one job pushes to git while the other has just pulled, and when the second job tries to push to git it gets an error.
error: failed to push some refs to 'git@github.com:*****/*****'
hint: Updates were rejected because the remote contains work that you do
hint: not have locally. This is usually caused by another repository pushing
hint: to the same ref. You may want to first integrate the remote changes
hint: (e.g., 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.
So is there any way to prevent concurrent pushes?
So far I've tried applying serial and serial_groups to the jobs.
It helps, but all the jobs get queued up, because we have a lot of builds.
I expect jobs to run concurrently and pause before doing operations on git if some other job holds a lock on it.
resources:
- name: backend-helm-repo
  type: git
  source:
    branch: master
    paths:
    - helm
    uri: git@github.com:******/******
- ...
jobs:
- ...
- name: some-hidden-api-build
  serial: true
  serial_groups:
  - build-alone
  plan:
  - get: some-hidden-api-repo
    trigger: true
  - get: golang
  - task: build-image
    file: somefile.yaml
  - put: some-hidden-api-image
  - get: backend-helm-repo
  - task: update-helm-tag
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: mikefarah/yq
          tag: latest
      run:
        path: /bin/sh
        args:
        - -xce
        - "file manipulations && git commit"
      inputs:
      - name: some-hidden-api-repo
      - name: backend-helm-repo
      outputs:
      - name: backend-helm-tag-bump
  - put: backend-helm-repo
    params:
      repository: backend-helm-tag-bump
  - put: some-hidden-api-status
    params:
      commit: some-hidden-api-repo
      state: success
- name: some-other-build
  serial: true
  serial_groups:
  - build-alone
  plan:
  - get: some-other-repo
    trigger: true
  - get: golang
  - task: build-image
    file: somefile.yaml
  - put: some-other-image
  - get: backend-helm-repo
  - task: update-helm-tag
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: mikefarah/yq
          tag: latest
      run:
        path: /bin/sh
        args:
        - -xce
        - "file manipulations && git commit"
      inputs:
      - name: some-other-repo
      - name: backend-helm-repo
      outputs:
      - name: backend-helm-tag-bump
  - put: backend-helm-repo
    params:
      repository: backend-helm-tag-bump
  - put: some-other-status
    params:
      commit: some-other-repo
      state: success
- ...
So if two jobs finish their image builds at the same time and make git commits in parallel, one pushes faster than the other, and the second one breaks.
Can someone help?
Note that your description is too vague to give a detailed answer.
"I expect jobs to run concurrently and pause before pushing to git if some other job has a lock on it."
This will not be enough: if they stop just before pushing, they are already referencing a git commit, which will become stale when the lock is released by the other job :-)
The jobs would have to stop, waiting on the lock, before cloning the git repo, so at the very beginning.
All this is speculation on my part, since again it is not clear what you want to do; for this kind of question, posting an as-small-as-possible pipeline image and as-small-as-possible configuration code is helpful.
You can consider https://github.com/concourse/pool-resource as a locking mechanism.
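A sketch of how the pool resource could be wired in. Everything here is illustrative: the lock repo URI, pool name, and key are placeholders, and the job is abbreviated. The key point is to acquire the lock before cloning backend-helm-repo and release it only after the push:

```yaml
resources:
- name: helm-repo-lock
  type: pool
  source:
    # A separate git repo that holds only lock files, one subdirectory per pool
    uri: git@github.com:******/ci-locks.git
    branch: master
    pool: backend-helm-repo
    private_key: ((lock-repo-key))

jobs:
- name: some-hidden-api-build
  plan:
  - put: helm-repo-lock          # blocks until the lock is acquired
    params: {acquire: true}
  - get: backend-helm-repo       # clone only while holding the lock
  - task: update-helm-tag
    file: somefile.yaml
  - put: backend-helm-repo
    params: {repository: backend-helm-tag-bump}
  - put: helm-repo-lock          # release so the next job can proceed
    params: {release: helm-repo-lock}
```

A lighter alternative: the git resource's put step accepts a rebase: true param, which rebases onto the remote and retries the push; for simple tag-bump commits that may avoid explicit locking altogether.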

How to set up SonarQube with Docker using SaltStack, and how to use it from CI

This post contains some information about how we integrated SonarQube into our workflow using Docker and SaltStack as Docker container configuration management.
It also contains the setup used with Gradle in Travis CI in order to execute analysis of code and analysis of pull requests on GitHub.
Also, if you see any improvements to this setup, please comment!
(If using Docker Compose, see https://github.com/SonarSource/docker-sonarqube. Feel free to maintain this answer here or copy it to a SCM.)
Requires Docker Engine 1.9
Setting up a SonarQube Server using Salt
Create this pillar file applicable for your SonarQube server:
sonar-qube:
  name: sonar-qube
  port: 9000
  version: <ENTER SOME VERSION>
  version_postgresql: <ENTER SOME VERSION>
  # Using a shared disk allows you to move the SonarQube container between different servers and still keep the data.
  host_storage_path: /some/shared/disk
Create this sonarqube.sls as your Docker state file.
(It requires a Docker network named sonarnet, configured in a state named sonarnet-config.)
{% set name = salt['pillar.get']('sonar-qube:name') %}
{% set port = salt['pillar.get']('sonar-qube:port') %}
{% set tag = salt['pillar.get']('sonar-qube:version') %}
{% set pg_tag = salt['pillar.get']('sonar-qube:version_postgresql') %}
{% set host_storage_path = salt['pillar.get']('sonar-qube:host_storage_path') %}

include:
  - <state file of the sonarnet-config network definition>

sonar-qube-image:
  dockerng.image_present:
    - name: sonarqube:{{tag}}

sonar-qube:
  dockerng.running:
    - name: {{name}}
    - image: sonarqube:{{tag}}
    - network_mode: sonarnet
    - port_bindings:
      - {{port}}:{{port}}
    - environment:
      - SONARQUBE_JDBC_URL: jdbc:postgresql://sonar-db:5432/sonar
    - binds:
      - {{host_storage_path}}/sonarqube/conf:/opt/sonarqube/conf
      - {{host_storage_path}}/sonarqube/data:/opt/sonarqube/data
      - {{host_storage_path}}/sonarqube/extensions:/opt/sonarqube/extensions
      - {{host_storage_path}}/sonarqube/lib/bundled-plugins:/opt/sonarqube/lib/bundled-plugins
    - require:
      - dockerng: sonarnet-config

sonar-db:
  dockerng.running:
    - image: postgres:{{pg_tag}}
    - network_mode: sonarnet
    - port_bindings:
      - 5432:5432
    - environment:
      - POSTGRES_USER: sonar
      - POSTGRES_PASSWORD: sonar
    - binds:
      - {{host_storage_path}}/postgresql:/var/lib/postgresql
      # This needs explicit mapping due to https://github.com/docker-library/postgres/blob/4e48e3228a30763913ece952c611e5e9b95c8759/Dockerfile.template#L52
      - {{host_storage_path}}/postgresql/data:/var/lib/postgresql/data
    - require:
      - dockerng: sonarnet-config
Use regular salt to start your containers.
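Starting the containers might then look like this (a sketch; the minion target and state name are assumptions based on the files above):

```shell
# Refresh pillar data, then apply the sonarqube state on the target minion
salt 'sonarqube*' saltutil.refresh_pillar
salt 'sonarqube*' state.apply sonarqube
```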
Once this SonarQube server is started, you should be able to reach the web gui of SonarQube.
Execute automated analysis (with Gradle in Travis CI)
These steps will be described one by one:
Enable the Gradle plugin
Create users at SonarQube and GitHub
Write a bash script that executes analysis
Invoke the bash script from Travis CI.
1) Enable the Gradle plugin
Enable the plugin according to documentation at https://plugins.gradle.org/plugin/org.sonarqube
plugins {
  id "org.sonarqube" version "2.0.1"
}
2) Set up users in GitHub and Sonar
GitHub requires a user with write access (soon only read access?) to the repo. Add a sonar-ci user to a team, and provide write access to the repo for the team. See this post: https://github.com/janinko/ghprb/issues/232#issuecomment-149649126 Then create an access token for that user; the access token must grant "Full control of private repositories".
Sonar requires a user that has permission to "Execute Analysis" and "Create Projects" under Global Permissions. It also needs the "BROWSE", "SEE SOURCE CODE" and "EXECUTE ANALYSIS" permissions under Project Permissions. Generate an access token for this user.
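The Sonar access token can also be generated through the web API instead of the GUI (a sketch; the admin credentials and token name are placeholders, and the user_tokens API requires SonarQube 5.3 or later):

```shell
# Generate a token named "travis-ci" for the authenticated user
curl -u admin:admin -X POST \
  "https://sonar.example.com/api/user_tokens/generate?name=travis-ci"
```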
3) Write the bash script
This script will do a full analysis and publish the result to the SonarQube web GUI when merged to the git branch master. This keeps track of how the code evolves over time. It will also analyze pull requests on GitHub and write its findings directly as review comments.
These env variables need to be set:
TRAVIS_* - set by Travis: see https://docs.travis-ci.com/user/environment-variables/
SONAR_TOKEN is the access token for the Sonar server
GITHUB_SONAR_TOKEN is the access token for the Sonar analysis user on GitHub
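In Travis CI these two tokens are best stored as encrypted environment variables rather than in plain text (a sketch using the travis CLI; the token values are placeholders):

```shell
travis encrypt SONAR_TOKEN=<sonar-token> --add env.global
travis encrypt GITHUB_SONAR_TOKEN=<github-token> --add env.global
```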
sonarqube.sh:
#!/bin/bash
SONAR_URL="https://sonar.example.com"

if [ -z "$SONAR_TOKEN" ] || [ -z "$GITHUB_SONAR_TOKEN" ]; then
  echo "Missing environment variable(s) for SonarQube. Make sure all environment variables are set."
  exit 1
fi

if [ "$TRAVIS_PULL_REQUEST" != "false" ]; then
  echo "Running SonarQube analysis for pull request nr $TRAVIS_PULL_REQUEST..."
  ./gradlew sonarqube \
    -Dsonar.host.url=$SONAR_URL \
    -Dsonar.login=$SONAR_TOKEN \
    -Dsonar.github.pullRequest=$TRAVIS_PULL_REQUEST \
    -Dsonar.github.repository=$TRAVIS_REPO_SLUG \
    -Dsonar.github.oauth=$GITHUB_SONAR_TOKEN \
    -Dsonar.analysis.mode=issues
elif [ "$TRAVIS_BRANCH" == "master" ]; then
  echo "Starting publish of SonarQube analysis results to $SONAR_URL"
  ./gradlew sonarqube \
    -Dsonar.host.url=$SONAR_URL \
    -Dsonar.login=$SONAR_TOKEN \
    -Dsonar.analysis.mode=publish
fi
4) Integrate from Travis CI
In the .travis.yml add:
after_success:
  - ./sonarqube.sh
before_cache:
  - rm -rf $HOME/.gradle/caches/*/gradle-sonarqube-plugin
cache:
  directories:
    - $HOME/.sonar