Skaffold is not detecting js file changes - kubernetes

When I change the index.js file inside the auth directory, Skaffold gets stuck on "Watching for changes...". I restarted it, but it gets stuck every time I make a change:
Syncing 1 files for test/test-auth:941b197143f22988459a0484809ee213e22b4366264d163fd8419feb07897d99
Watching for changes...
auth
  node_modules
  src
    signup
      signup.js
    index.js
  .dockerignore
  Dockerfile
  package-lock.json
  package.json
infra
  k8s
    auth-depl.yaml
    ingress-srv.yaml
skaffold.yaml
My skaffold.yaml file is:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: test/test-auth
      docker:
        dockerfile: Dockerfile
      context: auth
      sync:
        manual:
          - src: '***/*.js'
            dest: src
If I make a change to signup.js or index.js, Skaffold gets stuck. Please help me!

Given the output you included above, I suspect that Skaffold is copying the file across:
Syncing 1 files for test/test-auth:941b197143f22988459a0484809ee213e22b4366264d163fd8419feb07897d99
Watching for changes...
but your app is not set up to respond to file changes. You need to use a tool like nodemon to watch for file changes and restart your app. The Skaffold hot-reload example shows one way to set this up.
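For instance, here is a minimal sketch of what that could look like in package.json (assuming your app's entrypoint is src/index.js and that nodemon is installed in the image; the script name and version are illustrative, not from the Skaffold example):
{
  "scripts": {
    "start": "nodemon src/index.js"
  },
  "dependencies": {
    "nodemon": "^2.0.0"
  }
}
If the Dockerfile's CMD runs npm start, nodemon restarts the Node process whenever Skaffold syncs a changed .js file into the container.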

Related

Why isn't Cloud Code honoring my cloudbuild.yaml file but gcloud beta builds submit is?

I am using Google's Cloud Code extension with Visual Studio Code to use GCP's Cloud Build and deploy to a local kubernetes cluster (Docker Desktop). I have directed Cloud Build to run unit tests after installing modules.
When I build from the command line with gcloud beta builds submit, Cloud Build installs the modules and then correctly fails the build, because I intentionally wrote a failing unit test. So that's great.
However, when I try to build and deploy using the Cloud Code extension, it is not using my cloudbuild.yaml at all. I know this because:
1.) The build succeeds even with the failing unit test.
2.) No logging from the unit test appears in GCP logging.
3.) I completely deleted cloudbuild.yaml and the build/deploy still succeeded, which seems to imply Cloud Code is using the Dockerfile.
What do I need to do to ensure Cloud Code uses cloudbuild.yaml for its build/deploy to a local instance of kubernetes?
Thanks!
cloudbuild.yaml
steps:
  - name: node
    entrypoint: npm
    args: ['install']
  - id: "test"
    name: node
    entrypoint: npm
    args: ['test']
options:
  logging: CLOUD_LOGGING_ONLY
skaffold.yaml
apiVersion: skaffold/v2beta19
kind: Config
build:
  tagPolicy:
    sha256: {}
  artifacts:
    - context: .
      image: genesys-gencloud-dev
deploy:
  kubectl:
    manifests:
      - kubernetes-manifests/**
profiles:
  - name: cloudbuild
    build:
      googleCloudBuild: {}
launch.json
{
  "configurations": [
    {
      "name": "Kubernetes: Run/Debug - cloudbuild",
      "type": "cloudcode.kubernetes",
      "request": "launch",
      "skaffoldConfig": "${workspaceFolder}\\skaffold.yaml",
      "profile": "cloudbuild",
      "watch": true,
      "cleanUp": false,
      "portForward": true,
      "internalConsoleOptions": "neverOpen",
      "imageRegistry": "gcr.io/my-gcp-project",
      "debug": [
        {
          "image": "my-image-dev",
          "containerName": "my-container-dev",
          "sourceFileMap": {
            "${workspaceFolder}": "/WORK_DIR"
          }
        }
      ]
    }
  ]
}
You will need to edit your skaffold.yaml file to use Cloud Build:
build:
  googleCloudBuild: {}
See https://skaffold.dev/docs/pipeline-stages/builders/#remotely-on-google-cloud-build for more details.
EDIT: It looks like your skaffold.yaml enables Cloud Build for the cloudbuild profile, but the profile isn't active.
Some options:
Add "profile": "cloudbuild" to your launch.json for 'Run on Kubernetes'.
Move the googleCloudBuild: {} to the top-level build: section. (In other words, skip using the profile)
Activate the profile using one of the other methods from https://skaffold.dev/docs/environment/profiles/#activation (a sketch follows below).
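For the third option, a minimal sketch of an activation stanza (the USE_CLOUDBUILD variable name is my own illustration, not from the answer):
profiles:
  - name: cloudbuild
    activation:
      - env: USE_CLOUDBUILD=true
    build:
      googleCloudBuild: {}
With this, running Skaffold (or Cloud Code) with USE_CLOUDBUILD=true in the environment activates the profile without passing -p.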
UPDATE (from asker)
I needed to do the following:
Update skaffold.yaml as follows. In particular, note the image field under build > artifacts, and the projectId field under profiles > build.
apiVersion: skaffold/v2beta19
kind: Config
build:
  tagPolicy:
    sha256: {}
  artifacts:
    - context: .
      image: gcr.io/my-project-id/my-image
deploy:
  kubectl:
    manifests:
      - kubernetes-manifests/**
profiles:
  - name: cloudbuild
    build:
      googleCloudBuild:
        projectId: my-project-id
Run this command to activate the profile: skaffold dev -p cloudbuild

Cloud Build can't access Artifact Registry when building a Cloud Run Docker container

I'm using a package from Artifact Registry in my Cloud Run Node.js container.
When I try to gcloud builds submit I get the following error:
Step #1: npm ERR! 403 403 Forbidden - GET https://us-east4-npm.pkg.dev/....
Step #1: npm ERR! 403 In most cases, you or one of your dependencies are requesting
Step #1: npm ERR! 403 a package version that is forbidden by your security policy.
Here is my cloudbuild.yaml:
steps:
  - name: gcr.io/cloud-builders/npm
    args: ['run', 'artifactregistry-login']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/...', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/...']
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args:
      - 'run'
      - 'deploy'
      - 'admin-api'
      - '--image'
      - 'gcr.io/...'
      - '--region'
      - 'us-east4'
      - '--allow-unauthenticated'
images:
  - 'gcr.io/....'
and the Dockerfile:
FROM node:14-slim
WORKDIR /usr/src/app
COPY --chown=node:node .npmrc ./
COPY package*.json ./
RUN npm install
COPY . ./
EXPOSE 8080
CMD [ "npm","run" ,"server" ]
.npmrc file:
@scope_xxx:registry=https://us-east4-npm.pkg.dev/project_xxx/repo_xxx/
//us-east4-npm.pkg.dev/project_xxx/repo_xxx/:always-auth=true
The Google Cloud Build service account already has the "Artifact Registry Reader" permission.
You have to connect the docker build to the Cloud Build network, like this (it lets the container running the build reach the build's service account credentials):
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/...', '--network=cloudbuild', '.']
I had the same root cause. My setup is close to @AmmAr's; after hours of trial and error, I found a solution.
Disclaimer: this might not be the reason for your issue. The GCP 403 error message is vague; you need to chip away and eliminate all possibilities, which is how I arrived on this page.
Compared to @AmmAr above, these are the changes I made:
In the Node.js package.json, add the following to the "scripts": {...} property:
"artifactregistry-login": "npx google-artifactregistry-auth",
"artifactregistry-auth-npmrc": "npx google-artifactregistry-auth .npmrc"
In cloudbuild.yaml, I added two steps prior to the build step. These steps should result in .npmrc getting appended with an access token, allowing it to communicate with the GCP artifact repository; that resolved the 403 issue in my scenario.
steps:
  - name: gcr.io/cloud-builders/npm
    args: ['run', 'artifactregistry-login']
  - name: gcr.io/cloud-builders/npm
    args: ['run', 'artifactregistry-auth-npmrc']
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/...', '.']
  # - next steps in your process...
In the Dockerfile, copy over .npmrc before package.json:
COPY .npmrc ./
COPY package*.json ./
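As a side note (my own suggestion, not part of the original answer): since the refreshed .npmrc now holds a short-lived access token, one option is to remove it in the same layer once the install is done, something like:
COPY .npmrc ./
COPY package*.json ./
# install using the token, then drop the credentials file from the image layer
RUN npm install && rm -f .npmrc
COPY . ./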
Now run it, and see if it gets past the build step where it pulls the npm module from Artifact Registry.
The solution that worked for me can be found in this blog post:
https://dev.to/brianburton/cloud-build-docker-and-artifact-registry-cicd-pipelines-with-private-packages-5ci2

Is it possible to run Karate test in a pod? If possible, then how?

I just want to know whether I can run a Karate test in a pod, or whether there is a better suggested way to run it.
I tried running the Karate test in a terminal and it works. I just want to know if I can run it from a Kubernetes pod. Nginx is also running in the pod.
You can run anything in a pod that you can run outside of it; a pod just runs containers inside it. So create a Dockerfile, build a Docker image from it, and start the Karate pod from that image.
You can write the Dockerfile like this:
FROM maven:3-jdk-8-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY settings.xml /usr/share/maven/ref/
COPY pom.xml /tmp/pom.xml
COPY . /usr/src/app
RUN mvn -B -f /tmp/pom.xml -s /usr/share/maven/ref/settings-docker.xml prepare-package -DskipTests
CMD ["/usr/src/app/maven_runner.sh"]
I found one example here: https://github.com/neillfontes/karate-sample
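To actually run it in a cluster rather than with docker run, one option is a Kubernetes Job, which runs the tests to completion instead of restarting them the way a long-lived pod would. A minimal sketch, assuming the image has been pushed to a registry the cluster can pull from (the karate_docker name is carried over from the community-wiki answer below):
apiVersion: batch/v1
kind: Job
metadata:
  name: karate-tests
spec:
  backoffLimit: 0          # do not retry failed test runs
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: karate
          image: karate_docker   # replace with your registry path
Apply the manifest with kubectl apply -f and read the results with kubectl logs job/karate-tests.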
Posting as Community Wiki for future use.
@Harsh Manvar provided a good example; however, if you just build it from the Dockerfile, you will receive errors. You have to download all the files mentioned in the GitHub repo. The correct order is:
$ git clone https://github.com/neillfontes/karate-sample.git
$ cd karate-sample
$ docker build -t karate_docker .
After the image is built you can check it:
$ docker images
REPOSITORY      TAG       IMAGE ID       CREATED              SIZE
karate_docker   latest    9dc6d7a5278a   About a minute ago   136MB
Later you can start testing using:
$ docker run karate_docker
START: Running tests...
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running demo.DemoTest
11:57:49.684 [main] DEBUG c.i.karate.cucumber.CucumberRunner - init test class: class demo.DemoTest
11:57:50.412 [main] DEBUG c.i.karate.cucumber.CucumberRunner - loading feature: /usr/src/app/target/test-classes/demo/features/get-token.feature
11:57:50.663 [main] DEBUG c.i.karate.cucumber.CucumberRunner - loading feature: /usr/src/app/target/test-classes/demo/features/make-request.feature
11:57:53.898 [main] INFO com.intuit.karate.ScriptBridge - karate.env system property was: null
11:57:54.867 [main] DEBUG c.i.k.h.a.RequestLoggingInterceptor -
1 > POST http://brentertainment.com/oauth2/lockdin/token
1 > Accept-Encoding: gzip,deflate
1 > Connection: Keep-Alive
1 > Content-Length: 96

Unable to run SonarQube analysis from cloudbuild.yaml with Google Cloud Build

I have integrated my GitHub repo with Google Cloud Build to automatically build a Docker image after every commit. This is working fine, but now I want to run SonarQube analysis on the code before the Docker image is built, so I have added the SonarQube part to the cloudbuild.yaml file. But I am not able to run it.
I have followed the steps provided in this link: https://github.com/GoogleCloudPlatform/cloud-builders-community/tree/master/sonarqube
and pushed the sonar-scanner image to Google Container Registry.
My SonarQube server is running on a GCP instance. On every commit to GitHub, Cloud Build is triggered automatically and runs the tasks described in the cloudbuild.yaml file.
Dockerfile:
FROM nginx
COPY ./ /usr/share/nginx/html
cloudbuild.yaml:
steps:
  - name: 'gcr.io/PROJECT_ID/sonar-scanner:latest'
      args:
        - '-Dsonar.host.url=sonarqube_url'
        - '-Dsonar.login=c2a7631a6e402c338739091ffbc30e5e3d66cf19'
        - '-Dsonar.projectKey=sample-project'
        - '-Dsonar.sources=.'
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'build', '-t', 'gcr.io/PROJECT_ID/html-css-website', '.' ]
images:
  - 'gcr.io/PROJECT_ID/html-css-website'
Error:
Status: Build failed
Status detail: failed unmarshalling build config cloudbuild.yaml: yaml: line 3: did not find expected key
If the formatting you've pasted actually matches what you've got in your project, then your issue is that the args property within the first steps block is indented too far: it should be aligned with the name property above it.
---
steps:
  - name: "gcr.io/PROJECT_ID/sonar-scanner:latest"
    args:
      - "-Dsonar.host.url=sonarqube_url"
      - "-Dsonar.login=c2a7631a6e402c338739091ffbc30e5e3d66cf19"
      - "-Dsonar.projectKey=sample-project"
      - "-Dsonar.sources=."
  - name: "gcr.io/cloud-builders/docker"
    args:
      - "build"
      - "-t"
      - "gcr.io/PROJECT_ID/html-css-website"
      - "."
images:
  - "gcr.io/PROJECT_ID/html-css-website"

Error reading manifest file in bluemix deploy

I'm having a hard time deploying this standard e-commerce project on Bluemix:
https://github.com/zallaricardo/ecommerce-devops
I've chosen to do it with a git repository and automatic deployment through the Bluemix pipeline service. After successfully building and fixing a lot of misconfigurations, the root challenge seems to be writing a correct version of the manifest.yml file for the project.
Without the manifest.yml file, the log shows the following error:
Downloading artifacts...DOWNLOAD SUCCESSFUL
Target: https://api.ng.bluemix.net
Updating app loja-virtual-devops in org pfc-devops / space Dev as [email account]...
OK
Uploading loja-virtual-devops...
Uploading app files from: /home/pipeline/d38f0184-33da-44da-ba16-4671b491988a
Uploading 384.1M, 1679 files
228.5M uploaded...
Done uploading
OK
Stopping app loja-virtual-devops in org pfc-devops / space Dev as [email account]...
OK
Starting app loja-virtual-devops in org pfc-devops / space Dev as[email account]...
-----> Downloaded app package (452M)
-----> Downloaded app buildpack cache (4.0K)
Staging failed: An application could not be detected by any available buildpack
FAILED
NoAppDetectedError
TIP: Buildpacks are detected when the "cf push" is executed from within the directory that contains the app source code.
Use 'cf buildpacks' to see a list of supported buildpacks.
Use 'cf logs loja-virtual-devops --recent' for more in depth log information.
And with the version of the manifest that I believe to be OK and sufficient (I'm new to this manifest stuff), the log shows:
Downloading artifacts...DOWNLOAD SUCCESSFUL
Target: https://api.ng.bluemix.net
FAILED
Error reading manifest file:
yaml: unmarshal errors:
line 2: cannot unmarshal !!seq into map[interface {}]interface {}
The manifest.yml file is currently written as follows:
---
- name: loja-virtual-devops
  memory: 512M
  buildpack: https://github.com/cloudfoundry/java-buildpack
  domain: mybluemix.net
I'll sincerely appreciate any hint on how to fix the manifest for this application, or another way to successfully deploy the project through Bluemix.
Try including the applications heading in your manifest.yml file.
Example:
applications:
  - name: appname
    host: app_hostname
    buildpack: java_buildpack
    instances: 2
    memory: 512M
    disk_quota: 512M
    path: .
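With that in place, pushing from the directory that contains both manifest.yml and the app source should pick the manifest up automatically, e.g.:
cd ecommerce-devops
cf push
(cf push reads ./manifest.yml by default; use cf push -f <path> to point at a manifest stored elsewhere.)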