170016 error pushing Flask app to Bluemix - ibm-cloud

I'm trying to push a Flask app to Bluemix, but I'm receiving the following error:
{
"code": 170016,
"description": "Runner error: desire app failed: 400",
"error_code": "CF-RunnerError"
}
The command:
CF_TRACE=true cf push
My cf CLI version:
$ cf --version
cf version 6.26.0+9c9a261fd.2017-04-06
The logs show:
API/0Created app with guid xxxx Apr 18, 2017 6:16:00 PM
API/0Updated app with guid xxxx ({"name"=>"movie-recommend-demo-dublin", "command"=>"PRIVATE DATA HIDDEN", "instances"=>0, "memory"=>128, "environment_json"=>"PRIVATE DATA HIDDEN", "health_check_timeout"=>180})Apr 18, 2017 6:21:31 PM
API/2Updated app with guid xxxx ({"route"=>"xxxx", :verb=>"add", :relation=>:routes, :related_guid=>"xxxx"})Apr 18, 2017 6:21:35 PM
API/1Updated app with guid xxxx ({"name"=>"movie-recommend-demo-dublin", "command"=>"PRIVATE DATA HIDDEN", "instances"=>0, "memory"=>128, "environment_json"=>"PRIVATE DATA HIDDEN", "health_check_timeout"=>180})Apr 18, 2017 6:22:28 PM
API/0Updated app with guid xxxx ({"name"=>"movie-recommend-demo-dublin", "command"=>"PRIVATE DATA HIDDEN", "instances"=>0, "memory"=>128, "environment_json"=>"PRIVATE DATA HIDDEN", "health_check_timeout"=>180})Apr 18, 2017 6:26:15 PM

As I was writing this question, I realised that I had a typo in my manifest.yml:
instances: 0
Changing instances back to a positive number fixed the push. Hopefully this will help other users find the issue more quickly if they have a similar typo.
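For reference, a minimal manifest.yml with the corrected value might look like the sketch below; the name and memory are taken from the log output above, and the rest is illustrative:

```yaml
# Hypothetical manifest.yml sketch; only "instances" is the actual fix.
applications:
- name: movie-recommend-demo-dublin
  memory: 128M
  instances: 1   # "instances: 0" is what triggered the CF-RunnerError 400
```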

Related

Using Render hosting app to deploy my project- Error Cannot find module

Following Heroku becoming a paid service, I am trying other options.
Does anyone have experience uploading a MongoDB server-based project to "Render" Application Hosting?
This is the repo I want to deploy:
https://github.com/myyoss/FUNDLE_A_WORDLE_CLONE
In the app settings I use "yarn" as the Build Command (and it seems to work, because I get a "Build successful" message).
But for the Start Command I can't work out the correct command/path.
I keep getting this error:
==> Build successful
Oct 7 06:41:33 PM ==> Deploying...
Oct 7 06:42:01 PM ==> Starting service with 'node dist/server.js'
Oct 7 06:42:06 PM internal/modules/cjs/loader.js:888
Oct 7 06:42:06 PM throw err;
Oct 7 06:42:06 PM ^
Oct 7 06:42:06 PM
Oct 7 06:42:06 PM Error: Cannot find module '../routes/userRoutes'
Can someone please take a look at the repo LINK and tell me what I'm doing wrong?
On another project I used "node dist/server.js" for the Start Command and it worked just fine.
I think your problem may be here, where you use "../routes/userRoutes". You want to use "./routes/userRoutes", since require() resolves the path relative to the directory of the requiring file, not the project root.
This answer may be helpful re: routing errors, and Render's quickstart for Node/Express is also worth a look.

Unable to run apache-beam pipeline using KafkaIO on Dataflow

I am trying to consume Kafka messages using Apache Beam on Dataflow. I wrote a simple pipeline using Apache Beam version 2.1.0:
public static void main(String[] args) {
    DrainOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().as(DrainOptions.class);
    options.setStreaming(true);

    Pipeline p = Pipeline.create(options);

    Map<String, Object> props = new HashMap<>();
    props.put("auto.offset.reset", "latest");
    props.put("group.id", "test-group");

    p.apply(KafkaIO.readBytes()
            .updateConsumerProperties(props)
            .withTopic(options.getTopic())
            .withBootstrapServers(options.getBootstrapServer()))
        .apply(ParDo.of(new GetValue()))
        .apply("ToString", ParDo.of(new ToString()))
        .apply("FixedWindow", Window.<String>into(FixedWindows.of(Duration.standardSeconds(30))))
        .apply(TextIO.write().to(options.getOutput()).withWindowedWrites().withNumShards(1));

    PipelineResult pipelineResult = p.run();
    pipelineResult.waitUntilFinish();
}
When I tried to run it using the Dataflow runner:
mvn compile exec:java -Dexec.mainClass=com.test.beamexample.Drain -Dexec.args="--project=my-project --gcpTempLocation=gs://my_bucket/tmp/drain --streaming=true --stagingLocation=gs://my_bucket/staging/drain --output=gs://my_bucket/output/staging/drainresult --bootstrapServer=kafka-broker:9092 --topic=test --runner=DataflowRunner" -Pdataflow-runner
The pipeline is successfully built and uploaded to the staging location, but before the Dataflow runner runs the pipeline, it seems to be executed locally and no Dataflow job is created, as if we were using the direct runner:
Nov 14, 2017 2:14:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 111 files. Enable logging at DEBUG level to see which files will be staged.
Nov 14, 2017 2:14:53 PM org.apache.beam.runners.dataflow.DataflowRunner run
INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
Nov 14, 2017 2:14:53 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
INFO: Uploading 111 files from PipelineOptions.filesToStage to staging location to prepare for execution.
Nov 14, 2017 2:14:59 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
INFO: Staging files complete: 111 files cached, 0 files newly uploaded
Nov 14, 2017 2:15:00 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding KafkaIO.Read/Read(UnboundedKafkaSource)/DataflowRunner.StreamingUnboundedRead.ReadWithIds as step s1
Nov 14, 2017 2:15:00 PM org.apache.kafka.common.config.AbstractConfig logAll
INFO: ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = latest
...
Is there something missing?

Launching a slave Jenkins node when a change is pushed to GitHub

I'm trying to build a project on Jenkins when a change is pushed to GitHub.
I'm using the GitHub plugin and building on a slave node.
If the slave is online, everything works fine. However, if the slave is offline, Jenkins doesn't try to launch the slave node and ignores the notification from GitHub.
The following log is output to jenkins.log when the slave is online:
May 30, 2016 4:16:31 PM org.jenkinsci.plugins.github.webhook.subscriber.DefaultPushGHEventSubscriber onEvent
INFO: Received POST for https://github.com/myname/myproject
May 30, 2016 4:16:31 PM org.jenkinsci.plugins.github.webhook.subscriber.DefaultPushGHEventSubscriber$1 run
INFO: Poked MyProject
May 30, 2016 4:16:33 PM com.cloudbees.jenkins.GitHubPushTrigger$1 run
INFO: SCM changes detected in MyProject. Triggering #22
However, if the slave is offline, the build is not triggered:
May 30, 2016 4:15:58 PM org.jenkinsci.plugins.github.webhook.subscriber.DefaultPushGHEventSubscriber onEvent
INFO: Received POST for https://github.com/myname/myproject
May 30, 2016 4:15:58 PM org.jenkinsci.plugins.github.webhook.subscriber.DefaultPushGHEventSubscriber$1 run
INFO: Poked MyProject
Starting a build by clicking "Build Now" always brings the slave node online. How can I trigger a build from GitHub changes on a slave node that might be offline?
Update:
I found the following message on "GitHub Hook Log" of the project:
Started on May 30, 2016 6:00:46 PM
We need to schedule a new build to get a workspace, but deferring 561ms in the hope that one will become available soon (all_suitable_nodes_are_offline)
Done. Took 0.23 sec
No changes
Update (2016/06/01):
This is the current slave setting:
Contents of /var/lib/jenkins/bin/start-slave:
#!/bin/bash -eux
gcloud compute instances start ci-slave --zone us-central1-f
ssh ci-slave /var/lib/jenkins/bin/start

Cannot see an already started task in the Task App in Alfresco Activiti

I am using the Activiti Designer to create and deploy a process in Activiti. My process gets deployed with no errors, but when I log in to the Activiti App, I am unable to start the process in my Task App.
Below are the logs from my Eclipse console:
Mar 18, 2016 2:22:31 PM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
INFO: Loading XML bean definitions from class path resource [activiti.cfg.xml]
Mar 18, 2016 2:22:34 PM org.activiti.engine.impl.ProcessEngineImpl
INFO: ProcessEngine default created
Mar 18, 2016 2:22:34 PM org.activiti.engine.impl.bpmn.deployer.BpmnDeployer deploy
INFO: Processing resource myProcess.bpmn20.xml
id 40005 myProcess:12:40004
Below is my JUnit test:
@Test
public void startProcess() throws Exception {
    RepositoryService repositoryService = activitiRule.getRepositoryService();
    repositoryService.createDeployment().name("working set")
        .addInputStream("myProcess.bpmn20.xml", new FileInputStream(filename))
        .deploy();

    RuntimeService runtimeService = activitiRule.getRuntimeService();
    ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("myProcess");
    assertNotNull(processInstance.getId());
    System.out.println("id " + processInstance.getId() + " "
        + processInstance.getProcessDefinitionId());
}
I have assigned the task to $INITIATOR. The process starts successfully, but when I log into activiti-app as an administrator, I am unable to see the tasks in the Task App. Why is that?

Unable to run ./cim.sh for ATG 10.1.2 using Weblogic App Server

Hi Everyone,
I have installed ATG 10.1.2, along with CRS, Search, and CSC, on my Linux machine. I'm using WebLogic as my application server. However, when I try to run ./cim.sh, I get an error which is as follows:
It says that it's unable to find the class weblogic.utils.classloaders.ClassFinder.
I have set my environment variables as follows :
export JAVA_HOME=/home/install/mediaStore/jdk1.6.0_41
export PATH=$JAVA_HOME/bin:/home/install/software/ant/apache-ant-1.8.2/bin:/home/install/Oracle11gR2/install/product/11.2.0/dbhome_1/bin:$PATH
export ANT_HOME=/home/install/software/ant/apache-ant-1.8.2
export PATH=$PATH:$ANT_HOME/bin
export DYNAMO_ROOT=/home/install/mediaStore/ATG/ATG10.1.2
export DYNAMO_HOME=$DYNAMO_ROOT/home
export ATGJRE=$JAVA_HOME/bin/java
export CLASSPATH=/home/install/Oracle11gR2/install/product/11.2.0/dbhome_1/jdbc/lib/ojdbc6.jar:$CLASSPATH
export WEBLOGIC_HOME=/home/install/mediaStore/Weblogic
export WEBLOGIC_SERVER=$WEBLOGIC_HOME/wlserver_12.1
[install#JJPLRHEL01 bin]$ ./cim.sh
The following installed ATG components are being used to launch:
ATGPlatform version 10.1.2 installed at /home/install/mediaStore/ATG/ATG10.1.2
Error Thu Feb 28 15:21:35 IST 2013 1362045095625 / atg.nucleus.NucleusResources->cantResolveComponent : Unable to resolve component /atg/dynamo/service/validation/JavaxValidatorFactory java.lang.NoClassDefFoundError: weblogic/utils/classloaders/ClassFinder
Error Thu Feb 28 15:21:35 IST 2013 1362045095625 / at javax.validation.Validation.byProvider(Validation.java:166)
Error Thu Feb 28 15:21:35 IST 2013 1362045095625 / Caused by: java.lang.ClassNotFoundException: weblogic.utils.classloaders.ClassFinder
Error Thu Feb 28 15:21:35 IST 2013 1362045095625 / at
Any help, views, or guidance would be highly appreciated.
Thanks ,
Aazim
I added the following to my classpath:
/home/install/mediaStore/Weblogic/wlserver_12.1/server/lib/wls-api.jar
Also, ATG 10.1.2 does not currently support WebLogic 12.1, so you won't be able to configure CRS with it.