I'm trying to deploy Puppeteer to IBM Cloud via CF with the https://github.com/cloudfoundry/nodejs-buildpack buildpack, but I always get the following error:
/chrome-linux/chrome: error while loading shared libraries: libX11-xcb.so.1: cannot open shared object file: No such file or directory
(node:131) UnhandledPromiseRejectionWarning: Error: Failed to launch chrome!
My manifest.yml looks like this:
applications:
- path: .
  domain: eu-gb.cf.appdomain.cloud
  command: npm start
  name: Name
  host: Name
  memory: 128M
  instances: 1
  disk_quota: 1024M
  buildpack: https://github.com/cloudfoundry/nodejs-buildpack
Puppeteer is initialised like this:
const browser = await puppeteer.launch({
  headless: true,
  defaultViewport: null,
  args: ['--no-sandbox', '--disable-setuid-sandbox'],
});
Any ideas how this could be resolved?
Thanks.
This is what worked for me. I created a Node.js app with the npm init command, added Puppeteer via the npm i puppeteer command, added an index.js with a basic Puppeteer sample, and then followed two approaches.
Approach 1:
Here's my manifest.yml file with buildpack and other parameters defined.
applications:
- name: puppeteer-test
  buildpack: https://github.com/cloudfoundry/nodejs-buildpack
  command: npm start
  disk_quota: 1024MB
  memory: 128MB
  instances: 1
Also, I ran npm install locally to create node_modules; cf push will pick up the node_modules folder. Make sure you have the right permissions on the folders.
I ran ibmcloud cf push instead of cf push.
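For reference, the full sequence on my machine looked roughly like this (the login endpoint is just an example; target your own org and space):
npm install
ibmcloud login -a https://cloud.ibm.com
ibmcloud target --cf
ibmcloud cf push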
Approach 2:
Don't define anything in the manifest.yml. Let Cloud Foundry decide the buildpack based on the files.
applications:
- name: puppeteer-test
Then run
ibmcloud cf push
There is a third approach where you define everything in .yml files, from dependencies to buildpacks. You can follow the instructions mentioned in the post here.
Note: As Henrik mentioned, you won't see anything when you visit the yourappname.bluemix.net URL, as there is no frontend attached to the Node.js app.
I'm having some issues when trying to use the HashiCorp Vault template (Kubernetes with Google Kubernetes Engine) with to.be.continuous.
Actually, when I use it with the Google Docker Kaniko layer I got an error message: ... wget: bad address 'vault-secrets-provider'.
It seems that Kaniko doesn't recognize the vault-secrets-provider layer. Would you please help me with this? Or perhaps point me to where I can ask for help?
This is a summary of my .gitlab-ci.yml:
# Kubernetes template
- project: 'to-be-continuous/kubernetes'
  ref: '2.0.4'
  file: '/templates/gitlab-ci-k8s.yml'
- project: "to-be-continuous/kubernetes"
  ref: "2.0.4"
  file: "templates/gitlab-ci-k8s-vault.yml"
...
K8S_DEFAULT_KUBE_CONFIG: "#url#http://vault-secrets-provider/api/secrets/noprod?field=kube_config"
VAULT_BASE_URL: "http://myvault.myserver.com/v1"
Error Message:
[ERROR] Failed getting secret K8S_DEFAULT_KUBE_CONFIG:
... wget: bad address 'vault-secrets-provider'
I tried many times directly without the Vault layer (i.e., without Vault secrets) and Kaniko works OK.
How can I accomplish this? I tried modifying the Kaniko template, but without success.
I would appreciate any help with this.
To fix your issue, first upgrade the Docker template to its latest version (2.3.0 at the time this response was written).
Then, depending on your case, you have two options:
If Docker needs to handle some of your secrets managed by Vault, then you shall also activate the Vault variant for Docker.
If Docker doesn't need to handle any secret managed by Vault, don't use the Vault variant for Docker; you'll get a warning message from Docker not being able to decode the secret (basically the same as the one you had, but not failing the build).
You shall simply use it in your .gitlab-ci.yml file:
include:
  # Docker template
  - project: 'to-be-continuous/docker'
    ref: '2.3.0'
    file: '/templates/gitlab-ci-docker.yml'
  # Vault variant for Docker (depending on your above case)
  - project: 'to-be-continuous/docker'
    ref: '2.3.0'
    file: '/templates/gitlab-ci-docker-vault.yml'
  # Kubernetes template
  - project: 'to-be-continuous/kubernetes'
    ref: '2.0.4'
    file: '/templates/gitlab-ci-k8s.yml'
  - project: "to-be-continuous/kubernetes"
    ref: "2.0.4"
    file: "/templates/gitlab-ci-k8s-vault.yml"

variables:
  K8S_DEFAULT_KUBE_CONFIG: "#url#http://vault-secrets-provider/api/secrets/noprod?field=kube_config"
  VAULT_BASE_URL: "http://myvault.myserver.com/v1"
I added a SonarQube task to my Azure build pipeline; in order to log in to my SonarQube server I need to run a command which uses an SSL trust store.
My pipeline looks like this:
- task: SonarSource.sonarqube.15B84CA1-B62F-4A2A-A403-89B77A063157.SonarQubePrepare@4
  displayName: 'Prepare analysis on SonarQube'
  inputs:
    SonarQube: abc-sonarqube
    scannerMode: CLI
    configMode: manual
    cliProjectKey: 'abc'
    cliProjectName: 'abc'
    cliSources: src
    extraProperties: |
      sonar.host.url=https://sonarqube.build.abcdef.com
      sonar.ce.javaAdditionalOpts=-Djavax.net.ssl.trustStore=mvn/sonar.truststore -Djavax.net.ssl.trustStorePassword=changeit
I am not sure if the property "sonar.ce.javaAdditionalOpts=-Djavax.net.ssl.trustStore=mvn/sonar.truststore -Djavax.net.ssl.trustStorePassword=changeit" is correct.
I got the error "API GET '/api/server/version' failed, error was: {"code":"UNABLE_TO_VERIFY_LEAF_SIGNATURE"}".
PS: my project is an Angular project.
Any solutions?
This issue is likely related to how the configure task works. Even if we add the certificate to the Java truststore, the task that sets the configuration uses a different runtime (not Java, at least) to communicate with the server; that's why you still get that certificate error.
To resolve this issue, you could try to:
Set a global variable, NODE_EXTRA_CA_CERTS, and set it to a copy of the root cert you have stored locally in a directory, as shown in the sketch below. See this article.
Check the related ticket for some more details.
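For example, a minimal sketch in the pipeline YAML (the certificate path is only an assumption; point it at wherever you keep a copy of your root CA):
variables:
  NODE_EXTRA_CA_CERTS: '$(Build.SourcesDirectory)/certs/root-ca.pem'  # hypothetical location of the root cert copy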
Hope this helps.
I am running a Cloud Build trigger on a cloudbuild.yaml file in which I build a Docker container and then deploy it to Cloud Run. The error stacktrace is as follows:
API [sql-component.googleapis.com] not enabled on project
The problem is that I have enabled both the SQL and SQL Admin APIs in both projects (one for Cloud Build and one for the database), which was confirmed in the console and in gcloud.
Here is the YAML code for the step I am referring to:
- name: 'gcr.io/cloud-builders/gcloud'
  args: [
    'beta',
    'run',
    'deploy',
    'MY_NAME',
    '--image', 'gcr.io/MY_PROJECT/MY_IMAGE',
    '--region', 'MY_REGION',
    '--platform', 'managed',
    '--set-cloudsql-instances', 'MY_CONNECTION_NAME',
    '--set-env-vars', 'NODE_ENV=production,INSTANCE_CONNECTION_NAME=MY_CONNECTION_NAME,SQL_USER=MY_USER,SQL_PASSWORD=MY_PASSWORD,SQL_NAME=MY_SCHEMA,TOPIC_NAME=MY_TOPIC'
  ]
Any suggestions?
Thanks.
P.S.: As per Eespinola's suggestion, I checked and confirmed I am running Google Cloud SDK 254.0.0.
P.S. 2: I have also tried to create a project from scratch but ended up with the same results.
OK, so as per the same thread eespinola posted (see above), the Cloud Build gcloud step will be updated to the Cloud SDK 254.0.0 release in the near future (the actual date may or may not be posted in the same thread). Until then, the alternative is to use the YAML file without the --add-cloudsql-instances flag and add the Cloud SQL connection manually in the UI (I still have not tried this, but it should work as per Google's development team).
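In other words, a sketch of the deploy step from above with the Cloud SQL flag removed (you then attach the instance by hand in the Cloud Run console):
- name: 'gcr.io/cloud-builders/gcloud'
  args: [
    'beta', 'run', 'deploy', 'MY_NAME',
    '--image', 'gcr.io/MY_PROJECT/MY_IMAGE',
    '--region', 'MY_REGION',
    '--platform', 'managed',
    # no Cloud SQL flag here; the connection is added manually in the UI instead
    '--set-env-vars', 'NODE_ENV=production,INSTANCE_CONNECTION_NAME=MY_CONNECTION_NAME,SQL_USER=MY_USER,SQL_PASSWORD=MY_PASSWORD,SQL_NAME=MY_SCHEMA,TOPIC_NAME=MY_TOPIC'
  ]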
I am trying to set up Stackdriver debugging using Go. Using the article and this great Medium post, I came up with this solution.
Key parts, in cloudbuild.yaml:
- name: gcr.io/cloud-builders/wget
  args: [
    "-O",
    "go-cloud-debug",
    "https://storage.googleapis.com/cloud-debugger/compute-go/go-cloud-debug"
  ]
...
In the Dockerfile I have:
...
COPY gopath/bin/stackdriver-demo /stackdriver-demo
ADD go-cloud-debug /
ADD source-context.json /
CMD ["/go-cloud-debug","-sourcecontext=./source-context.json", "-appmodule=go-errrep","-appversion=1.0","--","/stackdriver-demo"]
...
However, the pods keep crashing; the container logs show this error:
Error loading program: decoding dwarf section info at offset 0x0: too short
EDIT: Using https://storage.googleapis.com/cloud-debugger/compute-go/go-cloud-debug may be outdated, as I haven't seen it used outside Daz's Medium post. The official docs use the package cloud.google.com/go/cmd/go-cloud-debug-agent.
I have updated the cloudbuild.yaml file to install this package:
- name: 'gcr.io/cloud-builders/go'
  args: ["get", "-u", "cloud.google.com/go/cmd/go-cloud-debug-agent"]
  env: ['PROJECT_ROOT=github.com/roberson34/stackdriver-demo', 'CGO_ENABLED=0', 'GOOS=linux']
- name: 'gcr.io/cloud-builders/go'
  args: ["install", "cloud.google.com/go/cmd/go-cloud-debug-agent"]
  env: ['PROJECT_ROOT=github.com/roberson34/stackdriver-demo', 'CGO_ENABLED=0', 'GOOS=linux']
And in the Dockerfile I can get access to the binary in gopath/bin/go-cloud-debug-agent
When I execute the gopath/bin/go-cloud-debug-agent with my own program as an argument:
/go-cloud-debug-agent -sourcecontext=./source-context.json -appmodule=go-errrep -appversion=1.0 -- /stackdriver-demo
I get another opaque error:
Error loading program: AttrStmtList not present or not int64 for unit 88
So basically, the cloud-debug binary from https://storage.googleapis.com/cloud-debugger/compute-go/go-cloud-debug and the cloud-debug-agent binary from the package cloud.google.com/go/cmd/go-cloud-debug-agent both don't work, and they give different errors.
Would appreciate any tips on what I'm doing wrong and how to fix it.
OK :-)
Yes, you should follow the current Stackdriver documentation, e.g. go-cloud-debug-agent
Unfortunately, there are now various issues with my post including a (currently broken) gcr.io/cloud-builders/kubectl for regions.
I think your issue pertains to your use of golang:alpine. Alpine uses musl rather than the glibc that you find on most other Linux distros, and so you really must compile for Alpine to ensure your binaries reference the correct libc.
I'm able to get your solution working primarily by switching your Dockerfile to pull the Cloud Debug Agent while on Alpine and to compile your source on Alpine:
FROM golang:alpine
RUN apk add git
RUN go get -u cloud.google.com/go/cmd/go-cloud-debug-agent
ADD main.go src
RUN CGO_ENABLED=0 go build -gcflags=all='-N -l' src/main.go
ADD source-context.json /
CMD ["bin/go-cloud-debug-agent","-sourcecontext=/source-context.json", "-appmodule=stackdriver-demo","-appversion=1.0","--","main"]
I think that should get you beyond the errors that you documented and you should be able to deploy your container to Kubernetes.
I've made my version of your image publicly available (and will retain it for a few days for you):
gcr.io/dazwilkin-190402-55473323/roberson34#sha256:17cb45f1320e2fe04e0681310506f4c229896429192b0d1c2c8dc20ed54adb0d
You may wish to reference it (by that digest) in your deployment.yaml
NB For Error Reporting to be "interesting", your code needs to generate errors and, with your example, this is going to be challenging (usually a good thing). You may consider adding another errorful handler that always results in errors so that you may test the service.
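For example, a minimal standalone sketch of such an always-failing handler (the path, port, and messages are purely illustrative, not taken from your code):
package main

import (
	"errors"
	"log"
	"net/http"
)

func main() {
	// Hypothetical handler that always fails, useful for exercising Error Reporting.
	http.HandleFunc("/boom", func(w http.ResponseWriter, r *http.Request) {
		err := errors.New("intentional test error")
		log.Printf("errorful handler: %v", err) // surfaces in the container logs
		http.Error(w, err.Error(), http.StatusInternalServerError)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}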
I had an app running on one Bluemix account. I wanted to copy the app and run it in another Bluemix account, so I downloaded the code from the GitHub repo, but when I try to push this app to the other account I see the following error.
Note: I used cf push to push the downloaded app. Any help?
Log:
2014-12-16T14:49:15.41+0530 [API] OUT Updated app with guid e2fca26a-c62d-475d-8c21-8e959ae6632c ({"state"=>"STOPPED"})
2014-12-16T14:49:42.10+0530 [DEA] OUT Got staging request for app with id e2fca26a-c62d-475d-8c21-8e959ae6632c
2014-12-16T14:49:45.08+0530 [API] OUT Updated app with guid e2fca26a-c62d-475d-8c21-8e959ae6632c ({"state"=>"STARTED"})
2014-12-16T14:49:45.65+0530 [STG] OUT -----> Downloaded app package (4.6M)
2014-12-16T14:49:46.15+0530 [STG] OUT -----> Downloaded app buildpack cache (4.4M)
2014-12-16T14:49:48.62+0530 [STG] OUT Staging failed: An application could not be detected by any available buildpack
2014-12-16T14:49:49.37+0530 [API] ERR Encountered error: An app was not successfully detected by any available buildpack
Please check your manifest.yml. Either your app is missing it or it has a wrong entry; you can look for this file in your downloaded app. Also, don't forget to pass the buildpack name when running the push command.
This link could be helpful:
https://ibm.biz/BdEgub
What language is the app in? Sometimes Cloud Foundry can't detect the type of app you are running, and when you push the app you need to tell it what kind of app it is. We can do that with some of the following commands; I went ahead and posted them for a couple of different languages. More info here: https://www.ng.bluemix.net/docs/#starters/byob.html
To see all the "built-in" buildpacks, run the following command.
cf buildpacks
You will get something like the following.
Getting buildpacks...
buildpack position enabled locked filename
liberty-for-java 1 true false buildpack_liberty-for-java_v1.9-20141202-0947-yp.zip
sdk-for-nodejs 2 true false buildpack_sdk-for-nodejs_v1.9.1-20141208-1221-yp.zip
noop-buildpack 3 true false noop-buildpack-20140311-1519.zip
java_buildpack 4 true false java-buildpack-v2.5.zip
ruby_buildpack 5 true false ruby_buildpack-offline-v1.1.1.zip
nodejs_buildpack 6 true false nodejs_buildpack-offline-v1.0.4.zip
liberty-for-java_v1-8-20141118-1610 7 true false buildpack_liberty-for-java_v1.8-20141118-1610-yp.zip
liberty-for-java_v1-3-20140818-1538 8 true false buildpack_liberty-for-java_v1.3-20140818-1538.zip
sdk-for-nodejs_v1-8-20141104-1654 9 true false buildpack_sdk-for-nodejs_v1.8-20141104-1654-yp.zip
Java App:
cf push appname -b liberty-for-java
or cf push appname -b java_buildpack
Node.js:
cf push appname -b sdk-for-nodejs
or cf push appname -b nodejs_buildpack
Ruby:
cf push appname -b ruby_buildpack
There are a bunch of other languages supported as well.
For a list, head over to https://github.com/cloudfoundry-community/cf-docs-contrib/wiki/Buildpacks.
If, for example, you wanted to use PHP, you would do the following.
cf push -b https://github.com/cloudfoundry/php-buildpack.git
If you wanted to do Go, you would do the following.
cf push appname -b https://github.com/cloudfoundry/go-buildpack.git
Two ways to sort out this issue (assuming it's a Node.js app):
1. Run a command like the one below from the cf tool, mentioning the app name:
cf push testmyapp -b sdk-for-nodejs -n testmyapp -m 128M -c 'node main.js'
P.S. The "-n" option is used for the required hostname on Bluemix.
2. Mention the app name, service name, etc. explicitly in the manifest.yml file, like below:
applications:
- name: testmyapp
  host: testmyapp
  memory: 128M
  command: node main.js
P.S. You need to create the manifest.yml explicitly if you are using the 2nd method.
If you are still getting an error, please provide the output of "cf logs testmyapp --recent".
Alternatively, you can even push your app directly, like below.
For a Go application on Bluemix, you need to supply -b with the Go buildpack URL:
cf push appname -b https://github.com/cloudfoundry/go-buildpack.git
Similarly, you can do this for the other languages.
Looking at the error below, the correct SDK type is not being detected for your app:
2014-12-16T14:49:48.62+0530 [STG] OUT Staging failed: An application could not be detected by any available buildpack
2014-12-16T14:49:49.37+0530 [API] ERR Encountered error: An app was not successfully detected by any available buildpack
You need to check the correct SDK type and mention it while pushing, like below:
cf push myapp -b sdk-for-nodejs -n myapp -m 128M -c 'node main.js'
I gave a talk at the last Cloud Foundry Summit on all kinds of app push errors: their symptoms, how to diagnose them, and what the solutions are. See https://www.slideshare.net/greensight/10-common-errors-when-pushing-apps-to-cloud-foundry. Hopefully it will be helpful.
Given that the above answers are a bit stale, here is the latest. Buildpack versions do go out of support. You should check which version of the buildpack you are specifying, either on the command line using the -b option or in your manifest.yml.
See the commands to check these items in the IBM Cloud docs: https://cloud.ibm.com/docs/cloud-foundry-public?topic=cloud-foundry-public-using_buildpacks
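For example, a manifest.yml that pins the buildpack explicitly might look like this (the app name and buildpack are placeholders, not taken from the question):
applications:
- name: myapp
  memory: 128M
  buildpack: sdk-for-nodejs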
To see the buildpacks that are available by language:
ibmcloud cf buildpacks