cf push to IBM Cloud failed: Unable to install node: improper constraint: >=4.1.0 <5.5.0

I pushed an app to the IBM Cloud after a minor change (just some data, no code or dependencies).
cat: /VERSION: No such file or directory
-----> IBM SDK for Node.js Buildpack v4.0.1-20190930-1425
Based on Cloud Foundry Node.js Buildpack 1.6.53
-----> Installing binaries
engines.node (package.json): >=4.1.0 <5.5.0
engines.npm (package.json): unspecified (use default)
**WARNING** Dangerous semver range (>) in engines.node. See: http://docs.cloudfoundry.org/buildpacks/node/node-tips.html
**ERROR** Unable to install node: improper constraint: >=4.1.0 <5.5.0
Failed to compile droplet: Failed to run all supply scripts: exit status 14
Exit status 223
Cell 155a85d3-8d60-425c-8e39-3a1183bfec2a stopping instance 5aad9d60-87d7-4153-b1ac-c3847c9a7a83
Cell 155a85d3-8d60-425c-8e39-3a1183bfec2a destroying container for instance 5aad9d60-87d7-4153-b1ac-c3847c9a7a83
Cell 155a85d3-8d60-425c-8e39-3a1183bfec2a successfully destroyed container for instance 5aad9d60-87d7-4153-b1ac-c3847c9a7a83
FAILED
Error restarting application: BuildpackCompileFailed
An older version of the app is already running on IBM Cloud (from May 2019, I think), so I wonder what changed to make it stop working.

In IBM Cloud Foundry, the Node.js version must be specified like this:
  "engines": {
    "node": "12.x"
  }
or
  "engines": {
    "node": "12.10.x"
  }

You can also try removing the engines block from your package.json entirely:
  "engines": {
    "node": "6.15.1",
    "npm": "3.10.10"
  },
Here's a quick reference.
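As for what changed: the "improper constraint" error appears to come from a Go semver library used by newer buildpack releases which, unlike npm's node-semver, rejects space-separated AND ranges such as >=4.1.0 <5.5.0, so a range that staged fine in May 2019 can break after a buildpack update. A sketch of the comma-separated form that parser expects (illustrative only; the buildpack must also actually ship a Node version inside the range, and it almost certainly no longer ships 4.x/5.x, so pinning a current line like 12.x as above is the real fix):
  "engines": {
    "node": ">=4.1.0, <5.5.0"
  }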

Related

Beam SDK harness still trying to launch docker when I set environment_type to be `PROCESS`

According to the Beam harness documentation:
PROCESS: User code is executed by processes that are automatically started by the runner on each worker node.
args = [
    "--runner=portableRunner",
    "--streaming",
    "--sdk_worker_parallelism=2",
    "--environment_type=PROCESS",
    "--environment_config={\"command\": \"/opt/apache/beam/boot\"}",
]
consumer_config = {
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "AWS_MSK_IAM",
    "sasl.jaas.config": "software.amazon.msk.auth.iam.IAMLoginModule required;",
    "sasl.client.callback.handler.class": "software.amazon.msk.auth.iam.IAMClientCallbackHandler",
    "bootstrap.servers": bootstrap_servers,
}
with beam.Pipeline(options=PipelineOptions(args)) as p:
    data = p | "Reading messages from Kafka" >> ReadFromKafka(
        consumer_config=consumer_config,
        topics=topics,
        with_metadata=True,
    )
    data | 'Writing to stdout' >> beam.Map(logging.info)
But when I run the code (deployed to k8s using flinkk8soperator), it complains:
Caused by: java.io.IOException: Cannot run program "docker": error=2, No such file or directory
Am I misunderstanding something? Thanks!
After some digging, I finally got cross-language pipelines working without using DinD or DooD. Here are the steps:
Ensure both the job manager and the task manager mount a shared volume for artifact staging. (This is required; otherwise the task manager will complain that it cannot find the submitted jar.) A sketch of such a mount is shown below.
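This is a hypothetical snippet of that shared volume; exactly where these fields live depends on your flinkk8soperator version and pod template, and the PVC name and mount path are assumptions, not values from the original answer:
  volumes:
    - name: beam-staging
      persistentVolumeClaim:
        claimName: beam-staging-pvc          # assumed, pre-created PVC
  volumeMounts:
    - name: beam-staging
      mountPath: /tmp/beam-artifact-staging  # assumed staging dir; both pods must use the same path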
Ensure your Docker image can run both Java and Python Beam code. Here's what I did:
# python SDK
COPY --from=apache/beam_python3.7_sdk:2.41.0 /opt/apache/beam/ /opt/apache/beam/
# java SDK
COPY --from=apache/beam_java8_sdk:2.41.0 /opt/apache/beam/ /opt/apache/beam_java/
In the job, you'll need to start the expansion service with extra args, for example for KafkaIO:
from apache_beam.io.kafka import ReadFromKafka, default_io_expansion_service

ReadFromKafka(
    consumer_config=consumer_config,
    topics=[topic],
    with_metadata=False,
    expansion_service=default_io_expansion_service(
        append_args=[
            "--defaultEnvironmentType=PROCESS",
            "--defaultEnvironmentConfig={\"command\":\"/opt/apache/beam_java/boot\"}",
        ]
    ),
)
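With this layout, the Python SDK harness boots via the /opt/apache/beam/boot command given in --environment_config, while the Java transforms produced by the Kafka expansion service boot via /opt/apache/beam_java/boot from the second COPY, so nothing on the workers ever needs Docker.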
Your portable execution relies on cross-language (xLang) support, which by default starts the Java SDK harness in a Docker container, and your cluster doesn't have Docker installed.

Google Cloud Deployment Failure - Java 11, Cloud SDK 288.0.0, builder: java11_20200223_11_0_RC00

I'm trying to deploy a Java 11 Maven project to App Engine (standard environment) using the mvn appengine:deploy command.
It was successful until last week.
This week, after the Google Cloud SDK got updated to 288.0.0, the deployment fails with the error below.
I tried reverting the Cloud SDK version, but the issue persists.
The project ID is of the format google.com:abc-xyz.
Cloud Build log snippet below:
....
Finished Step #2 - "detector"
Starting Step #3 - "analyzer"
Step #3 - "analyzer": Already have image (with digest): us.gcr.io/gae-runtimes/buildpacks/java11/builder:java11_20200223_11_0_RC00
Step #3 - "analyzer": ERROR: failed to access previous image: could not parse reference: us.gcr.io/**google.com:abc-xyz**/app-engine-tmp/ttl-7d/default/buildpack-app:latest
Finished Step #3 - "analyzer"
ERROR
ERROR: build step 3 "us.gcr.io/gae-runtimes/buildpacks/java11/builder:java11_20200223_11_0_RC00" failed: step exited with non-zero status: 1
-Thanks
Your error appears to be a result of the project ID google.com:${ID}.
This causes an error for Container Registry because it wants e.g.
  [us.]gcr.io/${PROJECT}/${IMAGE}...
but it's getting
  [us.]gcr.io/google.com:${PROJECT}...
IIRC, google.com: is a defunct way of specifying projects; you should just use ${PROJECT} without any domain prefix.
Hmmm... you said that the project is in production and this used to work, so I think this is possibly (effectively) a breaking change for you. I'll Google it, but domain prefixes (google.com:) used to be a thing in GCP, and perhaps they are now formally deprecated. A guess (!?).
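For what it's worth, Container Registry does have a naming convention for domain-scoped projects: the colon is replaced with a slash. A reference of roughly this shape (the path after the project is illustrative, taken from your log) would be the parseable form:
  us.gcr.io/google.com/abc-xyz/app-engine-tmp/ttl-7d/default/buildpack-app:latest
The buildpack appears to be emitting the colon form instead, which an image-reference parser rejects because the colon reads as a registry port separator.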

Bazel Kubernetes Object Error: no objects passed to apply (Google Container Registry)

I have a k8s_object rule to apply a deployment to my Google Kubernetes Cluster. Here is my setup:
load("@io_bazel_rules_docker//nodejs:image.bzl", "nodejs_image")

nodejs_image(
    name = "image",
    data = [":lib", "//:package.json"],
    entry_point = ":index.ts",
)
load("@io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")

k8s_object(
    name = "k8s_deployment",
    template = ":gateway.deployment.yaml",
    kind = "deployment",
    cluster = "gke_cents-ideas_europe-west3-b_cents-ideas",
    images = {
        "gcr.io/cents-ideas/gateway:latest": ":image",
    },
)
But when I run bazel run //services/gateway:k8s_deployment.apply, I get the following error
INFO: Analyzed target //services/gateway:k8s_deployment.apply (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //services/gateway:k8s_deployment.apply up-to-date:
bazel-bin/services/gateway/k8s_deployment.apply
INFO: Elapsed time: 0.113s, Critical Path: 0.00s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
$ /snap/bin/kubectl --kubeconfig= --cluster=gke_cents-ideas_europe-west3-b_cents-ideas --context= --user= apply -f -
2020/02/12 14:52:44 Unable to publish images: unable to publish image gcr.io/cents-ideas/gateway:latest
error: no objects passed to apply
error: no objects passed to apply
It doesn't push the new image to the Google Container Registry.
Strangely, this worked a few days ago. But I didn't change anything.
Here is the full code if you need to take a closer look: https://github.com/flolude/cents-ideas/blob/069c773ade88dfa8aff492f024a1ade1f8ed282e/services/gateway/BUILD
Update
I don't know if this has something to do with the issue, but when I run
gcloud auth configure-docker
I get some warnings:
WARNING: `docker-credential-gcloud` not in system PATH.
gcloud's Docker credential helper can be configured but it will not work until this is corrected.
WARNING: Your config file at [/home/flolu/.docker/config.json] contains these credential helper entries:
{
  "credHelpers": {
    "asia.gcr.io": "gcloud",
    "staging-k8s.gcr.io": "gcloud",
    "us.gcr.io": "gcloud",
    "gcr.io": "gcloud",
    "marketplace.gcr.io": "gcloud",
    "eu.gcr.io": "gcloud"
  }
}
Adding credentials for all GCR repositories.
WARNING: A long list of credential helpers may cause delays running 'docker build'. We recommend passing the registry name to configure only the registry you are using.
gcloud credential helpers already registered correctly.
I had google-cloud-sdk installed via snap. What I did to make it work was to remove it via
  snap remove google-cloud-sdk
and then follow the installation instructions to reinstall it via
  sudo apt install google-cloud-sdk
Now it works fine. The snap confinement is most likely why docker-credential-gcloud was not on the PATH in the first place.
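To verify the fix before pushing again, a quick check along these lines should work (the registry name is just an example; the configure-docker hint comes from the warning above):
  which docker-credential-gcloud        # should now resolve to a real path
  gcloud auth configure-docker gcr.io   # register the helper for only the registry you use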

IBM Blockchain (Hyperledger) - "Error when deploying chaincode"

I'm following the instructions to deploy some chaincode to the IBM Hyperledger Blockchain, using the swagger API on the IBM Bluemix dashboard.
In order to deploy some chaincode, I need to submit a JSON request, which contains the path to the chaincode repository:
{
  "jsonrpc": "2.0",
  "method": "deploy",
  "params": {
    "type": 1,
    "chaincodeID": {
      "path": "https://github.com/series0ne/learn-chaincode/tree/master/finished"
    },
    "ctorMsg": {
      "function": "init",
      "args": [
        "Hello, world"
      ]
    },
    "secureContext": "user_type1_0"
  },
  "id": 0
}
I logged in as user_type1_0 before attempting to deploy, but this is the result I get:
{
  "jsonrpc": "2.0",
  "error": {
    "code": -32001,
    "message": "Deployment failure",
    "data": "Error when deploying chaincode: Error getting chaincode package bytes: Error getting code 'go get' failed with error: \"exit status 1\"\npackage github.com/series0ne/learn-chaincode/tree/master/finished: cannot find package \"github.com/series0ne/learn-chaincode/tree/master/finished\" in any of:\n\t/opt/go/src/github.com/series0ne/learn-chaincode/tree/master/finished (from $GOROOT)\n\t/opt/gopath/_usercode_/424324290/src/github.com/series0ne/learn-chaincode/tree/master/finished (from $GOPATH)\n\t/opt/gopath/src/github.com/series0ne/learn-chaincode/tree/master/finished\n"
  },
  "id": 0
}
Any ideas?
P.S. Currently running commit level 0.6.1 of the Hyperledger blockchain on Bluemix.
Try stripping out the 'tree/master' portion of your deployment URL. Notice that the example linked below does not include this portion of the URL:
https://github.com/IBM-Blockchain/learn-chaincode#deploying-the-chaincode
This URL is going to be passed into a go get <url> command inside the peer, which downloads the chaincode so that it can be compiled, so the URL must match the format accepted by that command.
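For example, a minimal sketch of the corrected chaincodeID from the deploy request above (this assumes your fork, like the upstream learn-chaincode repository, keeps the finished directory at the repository root):
  "chaincodeID": {
    "path": "https://github.com/series0ne/learn-chaincode/finished"
  }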
I tried the Learn Chaincode example, following Dale's advice to change the repository address from https://github.com/GitHub_ID/learn-chaincode/tree/master/finished to https://github.com/GitHub_ID/learn-chaincode/finished. The Blockchain network used for this test was running on Bluemix with version 0.6.1 of the Hyperledger Fabric. With the modified path, it was possible to deploy the chaincode from the APIs tab within the interface for the Blockchain network.
Following are some things to check.
The v2.0 branch from https://github.com/IBM-Blockchain/learn-chaincode should be used with a Blockchain network running Hyperledger Fabric version 0.6.1. Is your personal fork even with the v2.0 branch from https://github.com/IBM-Blockchain/learn-chaincode?
Was the chaincode deployment issued from the same validating peer used to register the user_type1_0 user? The validating peer can be selected at the top of the APIs tab. There is a note in the Learn Chaincode instructions indicating that the same validating peer must register the user and deploy the chaincode.
Your go get command is either unable to access the location of your package due to ACLs, or its parameters are invalid per the IBM documentation. Please recheck its format.

Cloud Foundry on Google Compute Engine can't create container

I am very new to Cloud Foundry. I have set up Cloud Foundry on the Google Compute Engine platform following these guides: source1 and source2.
Terraform was used to create the needed infrastructure. Everything seemed fine: I didn't get any errors while deploying Cloud Foundry itself, and the bosh cck command reports no problems. But when I tried to deploy my hello-world app, I got the following error message in the terminal after cf push:
Creating container
Failed to create container
FAILED
Error restarting application: StagingError.
After checking the log files, I found the following messages:
{
  "timestamp":"1474637304.026303530",
  "source":"garden-linux",
  "message":"garden-linux.loop-mounter.mount-file.mounting",
  "log_level":2,
  "data":{
    "destPath":"/var/vcap/data/garden/aufs_graph/aufs/diff/08829a3252c1d60729e3b5482b0fb109652c9ab5beff9724e4e4ae756a0bc3ce",
    "error":"exit status 32",
    "filePath":"/var/vcap/data/garden/aufs_graph/backing_stores/08829a3252c1d60729e3b5482b0fb109652c9ab5beff9724e4e4ae756a0bc3ce",
    "output":"mount: wrong fs type, bad option, bad superblock on /dev/loop0,\n missing codepage or helper program, or other error\n In some cases useful info is found in syslog - try\n dmesg | tail or so\n\n",
    "session":"2.276"
  }
}
{
  "timestamp":"1474637304.026949406",
  "source":"garden-linux",
  "message":"garden-linux.pool.acquire.provide-rootfs-failed",
  "log_level":2,
  "data":{
    "error":"mounting file: mounting file: exit status 32",
    "handle":"ec6e7469-0ef0-48a8-bcd0-82f4a2ea173f-5de2e641d9284aeea209ca447ffffb6d",
    "session":"9.545"
  }
}
{
  "timestamp":"1474637304.027062416",
  "source":"garden-linux",
  "message":"garden-linux.garden-server.create.failed",
  "log_level":2,
  "data":{
    "error":"mounting file: mounting file: exit status 32",
    "request":{
      "Handle":"ec6e7469-0ef0-48a8-bcd0-82f4a2ea173f-5de2e641d9284aeea209ca447ffffb6d",
      "GraceTime":0,
      "RootFSPath":"/var/vcap/packages/rootfs_cflinuxfs2/rootfs",
      "BindMounts":[
        {
          "src_path":"/var/vcap/data/executor_cache/6942123d3462ad9d21a45729c3cae183-1474475979582384649-1.d",
          "dst_path":"/tmp/lifecycle"
        }
      ],
      "Network":"",
      "Privileged":true,
      "Limits":{
        "bandwidth_limits":{
        },
        "cpu_limits":{
          "limit_in_shares":512
        },
        "disk_limits":{
          "inode_hard":200000,
          "byte_hard":6442450944,
          "scope":1
        },
        "memory_limits":{
          "limit_in_bytes":1073741824
        }
      }
    },
    "session":"11.44187"
  }
}
{
  "timestamp":"1474637304.034646988",
  "source":"garden-linux",
  "message":"garden-linux.garden-server.destroy.failed",
  "log_level":2,
  "data":{
    "error":"unknown handle: ec6e7469-0ef0-48a8-bcd0-82f4a2ea173f-5de2e641d9284aeea209ca447ffffb6d",
    "handle":"ec6e7469-0ef0-48a8-bcd0-82f4a2ea173f-5de2e641d9284aeea209ca447ffffb6d",
    "session":"11.44188"
  }
}
And meanwhile, dmesg | tail showed the following:
[161023.238082] aufs test_add:283:garden-linux[7681]: uid/gid/perm /var/vcap/data/garden/aufs_graph/aufs/diff/d350dcd30f6d6f8b37eabe06a3b73bcea0a87f9aff4edf15f12792269fc9f97c 4294967294/4294967294/0755, 0/0/0755
[161023.238109] aufs au_opts_verify:1597:garden-linux[7681]: dirperm1 breaks the protection by the permission bits on the lower branch
[161023.413392] device wtj3qdqhig0t-0 entered promiscuous mode
I'm not sure whether these messages are related, or whether they indicate a problem at all, but I'm posting them so I don't miss anything.
I don't know how to fix this problem, nor where to look for a solution: in the Terraform scripts or in the BOSH manifest files. We have a microservice architecture with three Node.js services and one Ruby service, so deployment is a very important question for us.
Here is my application's manifest.yml file:
---
applications:
- name: hello_cloud
  memory: 128M
  buildpack: https://github.com/cloudfoundry/nodejs-buildpack
  instances: 1
  random-route: true
  command: "node server.js"
My goal is to be able to deploy applications using Cloud Foundry. If you have any additional questions, or if I wrote something unclear, feel free to ask.
This issue is caused by a conflict between garden-linux and the 4.4 Linux kernel. To use the example Cloud Foundry manifest, use the following stemcell:
  bosh upload stemcell https://bosh.io/d/stemcells/bosh-google-kvm-ubuntu-trusty-go_agent?v=3262.19
  bosh deploy
You may need to delete your cf deployment before re-deploying due to quota issues.
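If you do need to delete it first, a minimal sketch with the BOSH v1 CLI, assuming your deployment is named cf (check with bosh deployments):
  bosh deployments           # confirm the deployment name
  bosh delete deployment cf  # then re-run bosh deploy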