Cloud Build can't access Artifact Registry when building Cloud Run Docker container - gcloud

I'm using a package from Artifact Registry in my Cloud Run Node.js container.
When I run gcloud builds submit I get the following error:
Step #1: npm ERR! 403 403 Forbidden - GET https://us-east4-npm.pkg.dev/....
Step #1: npm ERR! 403 In most cases, you or one of your dependencies are requesting
Step #1: npm ERR! 403 a package version that is forbidden by your security policy.
Here is my cloudbuild.yaml:
steps:
- name: gcr.io/cloud-builders/npm
  args: ['run', 'artifactregistry-login']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/...', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/...']
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args:
  - 'run'
  - 'deploy'
  - 'admin-api'
  - '--image'
  - 'gcr.io/...'
  - '--region'
  - 'us-east4'
  - '--allow-unauthenticated'
images:
- 'gcr.io/....'
and my Dockerfile:
FROM node:14-slim
WORKDIR /usr/src/app
COPY --chown=node:node .npmrc ./
COPY package*.json ./
RUN npm install
COPY . ./
EXPOSE 8080
CMD [ "npm","run" ,"server" ]
.npmrc file:
@scope_xxx:registry=https://us-east4-npm.pkg.dev/project_xxx/repo_xxx/
//us-east4-npm.pkg.dev/project_xxx/repo_xxx/:always-auth=true
The Cloud Build service account already has the "Artifact Registry Reader" role.

You have to connect to the Cloud Build network in your docker build command, like this:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/...', '--network=cloudbuild', '.']
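For context, here is one way that flag might fit into the cloudbuild.yaml from the question; this is only a sketch, with the image names left as placeholders as in the original:

steps:
- name: gcr.io/cloud-builders/npm
  args: ['run', 'artifactregistry-login']
- name: 'gcr.io/cloud-builders/docker'
  # --network=cloudbuild lets processes inside the Docker build reach the
  # build's service account credentials through the metadata server
  args: ['build', '-t', 'gcr.io/...', '--network=cloudbuild', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/...']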

I had the same root cause; my setup is close to @AmmAr's. After hours of trial and error, I found a solution.
Disclaimer: this might not be the reason for your issue. The GCP 403 error message is vague, so you need to chip away and eliminate all possibilities; that is how I arrived on this page.
Compared to @AmmAr's setup above, the changes I made were:
In the Node.js package.json, add to the "scripts": {...} property:
"artifactregistry-login": "npx google-artifactregistry-auth",
"artifactregistry-auth-npmrc": "npx google-artifactregistry-auth .npmrc"
In cloudbuild.yaml, I added two steps prior to the build step. These steps should result in .npmrc getting appended with an access token, allowing npm to authenticate to the GCP Artifact Registry repository. That resolved the 403 issue for my scenario.
steps:
- name: gcr.io/cloud-builders/npm
  args: ['run', 'artifactregistry-login']
- name: gcr.io/cloud-builders/npm
  args: ['run', 'artifactregistry-auth-npmrc']
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/...', '.']
#- next steps in your process...
In the Dockerfile, copy over .npmrc before package*.json:
COPY .npmrc ./
COPY package*.json ./
[Screenshot of my Cloud Build config]
Now run it, and see if it gets past the build step where it pulls the npm module from Artifact Registry.
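For example, assuming the build config is named cloudbuild.yaml and you run from the source root, the build can be rerun with:

gcloud builds submit --config cloudbuild.yaml .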

The solution that worked for me can be found in this blog post:
https://dev.to/brianburton/cloud-build-docker-and-artifact-registry-cicd-pipelines-with-private-packages-5ci2

Related

Why isn't Cloud Code honoring my cloudbuild.yaml file but gcloud beta builds submit is?

I am using Google's Cloud Code extension with Visual Studio Code to use GCP's Cloud Build and deploy to a local kubernetes cluster (Docker Desktop). I have directed Cloud Build to run unit tests after installing modules.
When I build using the command line gcloud beta builds submit, Cloud Build does the module install and then fails the build as expected, because I intentionally wrote a failing unit test. So that's great.
However, when I try to build and deploy using the Cloud Code extension, it is not using my cloudbuild.yaml at all. I know this because:
1.) The build succeeds even with the failing unit test
2.) No logging from the unit test appears in GCP logging
3.) I completely deleted cloudbuild.yaml and the build/deploy still succeeded, which seems to imply Cloud Code is building from the Dockerfile
What do I need to do to ensure Cloud Code uses cloudbuild.yaml for its build/deploy to a local instance of kubernetes?
Thanks!
cloudbuild.yaml
steps:
- name: node
  entrypoint: npm
  args: ['install']
- id: "test"
  name: node
  entrypoint: npm
  args: ['test']
options:
  logging: CLOUD_LOGGING_ONLY
skaffold.yaml
apiVersion: skaffold/v2beta19
kind: Config
build:
  tagPolicy:
    sha256: {}
  artifacts:
  - context: .
    image: genesys-gencloud-dev
deploy:
  kubectl:
    manifests:
    - kubernetes-manifests/**
profiles:
- name: cloudbuild
  build:
    googleCloudBuild: {}
launch.json
{
  "configurations": [
    {
      "name": "Kubernetes: Run/Debug - cloudbuild",
      "type": "cloudcode.kubernetes",
      "request": "launch",
      "skaffoldConfig": "${workspaceFolder}\\skaffold.yaml",
      "profile": "cloudbuild",
      "watch": true,
      "cleanUp": false,
      "portForward": true,
      "internalConsoleOptions": "neverOpen",
      "imageRegistry": "gcr.io/my-gcp-project",
      "debug": [
        {
          "image": "my-image-dev",
          "containerName": "my-container-dev",
          "sourceFileMap": {
            "${workspaceFolder}": "/WORK_DIR"
          }
        }
      ]
    }
  ]
}
You will need to edit your skaffold.yaml file to use Cloud Build:
build:
  googleCloudBuild: {}
See https://skaffold.dev/docs/pipeline-stages/builders/#remotely-on-google-cloud-build for more details.
EDIT: It looks like your skaffold.yaml enables Cloud Build for the cloudbuild profile, but the profile isn't active.
Some options:
Add "profile": "cloudbuild" to your launch.json for 'Run on Kubernetes'.
Move the googleCloudBuild: {} to the top-level build: section (in other words, skip using the profile); see the sketch after this list.
Activate the profile using one of the other methods from https://skaffold.dev/docs/environment/profiles/#activation
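For the second option, a rough sketch of what the top-level build: section of the skaffold.yaml from the question might look like with the profile removed (image name kept from the original):

build:
  googleCloudBuild: {}
  tagPolicy:
    sha256: {}
  artifacts:
  - context: .
    image: genesys-gencloud-dev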
UPDATE (from asker)
I needed to do the following:
Update skaffold.yaml as follows. In particular, note the image field under build > artifacts, and the projectId field under profiles > build.
apiVersion: skaffold/v2beta19
kind: Config
build:
  tagPolicy:
    sha256: {}
  artifacts:
  - context: .
    image: gcr.io/my-project-id/my-image
deploy:
  kubectl:
    manifests:
    - kubernetes-manifests/**
profiles:
- name: cloudbuild
  build:
    googleCloudBuild:
      projectId: my-project-id
Run this command to activate the profile: skaffold dev -p cloudbuild

Skaffold is not detecting js file changes

When I change the index.js file inside the auth directory, Skaffold gets stuck on "Watching for changes...". I restarted it, but it gets stuck every time I make a change.
Syncing 1 files for test/test-auth:941b197143f22988459a0484809ee213e22b4366264d163fd8419feb07897d99
Watching for changes...
> auth
  > node_modules
  > src
    > signup
      signup.js
    index.js
  > .dockerignore
  > Dockerfile
  > package-lock.json
  > package.json
> infra
  > k8s
    auth-depl.yaml
    ingress-srv.yaml
> skaffold.yaml
My skaffold.yaml file is
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
    - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
  - image: test/test-auth
    docker:
      dockerfile: Dockerfile
    context: auth
    sync:
      manual:
      - src: '***/*.js'
        dest: src
If I make a change to signup.js or index.js, Skaffold gets stuck. Please help me!
Given the output you included above, I suspect that Skaffold is copying the file across:
Syncing 1 files for test/test-auth:941b197143f22988459a0484809ee213e22b4366264d163fd8419feb07897d99
Watching for changes...
but your app is not set up to respond to file changes. You need to use a tool like nodemon to watch for file changes and restart your app. The Skaffold hot-reload example shows one way to set this up.
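As an illustration only, a dev-oriented Dockerfile for the auth service might run the app under nodemon so that the files Skaffold syncs into the container trigger a restart; the base image and the src/index.js entrypoint are assumptions, adjust them to your project:

# dev-only image sketch: nodemon watches the synced files and restarts node on change
FROM node:alpine
WORKDIR /app
COPY package*.json ./
RUN npm install && npm install -g nodemon
COPY . .
CMD ["nodemon", "src/index.js"]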

Unable to run SonarQube analysis from cloudbuild.yaml with Google Cloud Build

I have integrated my GitHub repo with Google Cloud Build to automatically build a Docker image after every commit in GitHub. This is working fine, but now I want to run SonarQube analysis on the code before the Docker image build. For that I have added the SonarQube part to the cloudbuild.yaml file, but I am not able to run it.
I have followed the steps provided in this link: https://github.com/GoogleCloudPlatform/cloud-builders-community/tree/master/sonarqube
and pushed the sonar-scanner image to Google Container Registry.
My SonarQube server is running on a GCP instance. On every commit in GitHub, Cloud Build is automatically triggered and starts the tasks mentioned in the cloudbuild.yaml file.
Dockerfile:
FROM nginx
COPY ./ /usr/share/nginx/html
cloudbuild.yaml:
steps:
  - name: 'gcr.io/PROJECT_ID/sonar-scanner:latest'
      args:
      - '-Dsonar.host.url=sonarqube_url'
      - '-Dsonar.login=c2a7631a6e402c338739091ffbc30e5e3d66cf19'
      - '-Dsonar.projectKey=sample-project'
      - '-Dsonar.sources=.'
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'build', '-t', 'gcr.io/PROJECT_ID/html-css-website', '.' ]
images:
  - 'gcr.io/PROJECT_ID/html-css-website'
Error:
Status: Build failed
Status detail: failed unmarshalling build config cloudbuild.yaml: yaml: line 3: did not find expected key
If the formatting you've pasted actually matches what you've got in your project then your issue is that the args property within the first steps block is indented too far: it should be aligned with the name property above it.
---
steps:
  - name: "gcr.io/PROJECT_ID/sonar-scanner:latest"
    args:
      - "-Dsonar.host.url=sonarqube_url"
      - "-Dsonar.login=c2a7631a6e402c338739091ffbc30e5e3d66cf19"
      - "-Dsonar.projectKey=sample-project"
      - "-Dsonar.sources=."
  - name: "gcr.io/cloud-builders/docker"
    args:
      - "build"
      - "-t"
      - "gcr.io/PROJECT_ID/html-css-website"
      - "."
images:
  - "gcr.io/PROJECT_ID/html-css-website"

Is there a way to put a lock on Concourse git-resource?

I have set up a pipeline in Concourse with some jobs that build Docker images.
After the build, I push the image tag to the git repo.
The problem is that when the builds finish at the same time, one job pushes to git while the other has just pulled, and when the second job tries to push to git it gets an error.
error: failed to push some refs to 'git@github.com:*****/*****'
hint: Updates were rejected because the remote contains work that you do
hint: not have locally. This is usually caused by another repository pushing
hint: to the same ref. You may want to first integrate the remote changes
hint: (e.g., 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.
So is there any way to prevent concurrent pushes?
So far I've tried applying serial and serial_groups to the jobs.
It helps, but all the jobs get queued up, because we have a lot of builds.
I expect jobs to run concurrently and pause before doing operations on git if some other job has a lock on it.
resources:
- name: backend-helm-repo
  type: git
  source:
    branch: master
    paths:
    - helm
    uri: git@github.com:******/******
- ...
jobs:
- ...
- name: some-hidden-api-build
  serial: true
  serial_groups:
  - build-alone
  plan:
  - get: some-hidden-api-repo
    trigger: true
  - get: golang
  - task: build-image
    file: somefile.yaml
  - put: some-hidden-api-image
  - get: backend-helm-repo
  - task: update-helm-tag
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: mikefarah/yq
          tag: latest
      run:
        path: /bin/sh
        args:
        - -xce
        - "file manipulations && git commit"
      inputs:
      - name: some-hidden-api-repo
      - name: backend-helm-repo
      outputs:
      - name: backend-helm-tag-bump
  - put: backend-helm-repo
    params:
      repository: backend-helm-tag-bump
  - put: some-hidden-api-status
    params:
      commit: some-hidden-api-repo
      state: success
- name: some-other-build
  serial: true
  serial_groups:
  - build-alone
  plan:
  - get: some-other-repo
    trigger: true
  - get: golang
  - task: build-image
    file: somefile.yaml
  - put: some-other-image
  - get: backend-helm-repo
  - task: update-helm-tag
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: mikefarah/yq
          tag: latest
      run:
        path: /bin/sh
        args:
        - -xce
        - "file manipulations && git commit"
      inputs:
      - name: some-other-repo
      - name: backend-helm-repo
      outputs:
      - name: backend-helm-tag-bump
  - put: backend-helm-repo
    params:
      repository: backend-helm-tag-bump
  - put: some-other-status
    params:
      commit: some-other-repo
      state: success
- ...
So if the jobs finish the image build at the same time and make their git commits in parallel, one pushes faster than the second one, and the second one breaks.
Can someone help?
Note that your description is too vague to give a detailed answer.
I expect jobs to run concurrently and stop before pushing to git if some other job has a lock on git.
This will not be enough: if they stop just before pushing, they are already referencing a git commit, which will become stale when the lock is released by the other job :-)
The jobs would have to stop, waiting on the lock, before cloning the git repo, so at the very beginning.
All this is speculation on my part, since again it is not clear what you want to do; for this kind of question, posting an as-small-as-possible pipeline image and as-small-as-possible configuration code is helpful.
You can consider https://github.com/concourse/pool-resource as a locking mechanism.
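To make the idea concrete, here is a rough sketch of how the pool resource could be wired into one of the jobs above; the lock repo URI, pool name, and key variable are placeholders (assumptions), and the lock is acquired before backend-helm-repo is cloned so the ref stays fresh:

resource_types:
- name: pool
  type: registry-image
  source:
    repository: concourse/pool-resource

resources:
- name: helm-repo-lock
  type: pool
  source:
    uri: git@github.com:******/locks   # separate git repo holding the lock files (assumption)
    branch: master
    pool: backend-helm
    private_key: ((git-private-key))

jobs:
- name: some-hidden-api-build
  plan:
  # ... image build and push steps as before ...
  - put: helm-repo-lock          # blocks here until the lock is acquired
    params: {acquire: true}
  - get: backend-helm-repo       # clone only after holding the lock
  - task: update-helm-tag
    # (same inline config as in the original pipeline)
  - put: backend-helm-repo
    params: {repository: backend-helm-tag-bump}
  - put: helm-repo-lock          # release the lock for the other jobs
    params: {release: helm-repo-lock}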

How to install yum repository key with ansible?

I tried it two ways:
- name: Add repository
  yum_repository:
    # from https://oss-binaries.phusionpassenger.com/yum/definitions/el-passenger.repo
    name: passenger
    description: Passenger repository
    baseurl: https://oss-binaries.phusionpassenger.com/yum/passenger/el/$releasever/$basearch
    repo_gpgcheck: 1
    gpgcheck: 0
    enabled: 1
    gpgkey: https://packagecloud.io/gpg.key
    sslverify: 1
    sslcacert: /etc/pki/tls/certs/ca-bundle.crt
- name: Add repository key (option 1)
  rpm_key:
    key: https://packagecloud.io/gpg.key
- name: Add repository key (option 2)
  command: rpm --import https://packagecloud.io/gpg.key
- name: Install nginx with passenger
  yum: name={{ item }}
  with_items: [nginx, passenger]
But for it to work, I need to ssh to the machine, confirm importing the key (by running any yum command, e.g. yum list installed), and then continue provisioning. Is there a way to do it automatically?
UPD: here's what Ansible says:
TASK [nginx : Add repository key] **********************************************
changed: [default]
TASK [nginx : Install nginx with passenger] ************************************
failed: [default] (item=[u'nginx', u'passenger']) => {"failed": true, "item": ["nginx", "passenger"], "msg": "Failure talking
to yum: failure: repodata/repomd.xml from passenger: [Errno 256] No more mirrors to try.\nhttps://oss-binaries.phusionpassen
ger.com/yum/passenger/el/7/x86_64/repodata/repomd.xml: [Errno -1] repomd.xml signature could not be verified for passenger"}
So, the key is indeed imported in both cases, but to be used it must be confirmed.
Fixed it by running yum directly with the -y switch (and using the rpm_key module, if anything):
- name: Install nginx with passenger
  command: yum -y install {{ item }}
  with_items: [nginx, passenger]
After adding the repository and the repository key, just update that repo's metadata with:
- name: update repo cache for the new repo
  command: yum -q makecache -y --disablerepo=* --enablerepo=passenger
Then proceed with yum: name=... as before.
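Putting that second answer together with the key import, the overall task order could look something like this sketch (the repo definition is unchanged from the question and omitted here):

- name: Add repository key
  rpm_key:
    key: https://packagecloud.io/gpg.key
    state: present
- name: update repo cache for the new repo
  command: yum -q makecache -y --disablerepo=* --enablerepo=passenger
- name: Install nginx with passenger
  yum:
    name: "{{ item }}"
    state: present
  with_items: [nginx, passenger]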