I am using the Activiti Designer to create and deploy a process in Activiti. The process deploys with no errors, but when I log in to Activiti App, I am unable to start the process in my Task App.
Below are the logs from my Eclipse console:
Mar 18, 2016 2:22:31 PM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
INFO: Loading XML bean definitions from class path resource [activiti.cfg.xml]
Mar 18, 2016 2:22:34 PM org.activiti.engine.impl.ProcessEngineImpl
INFO: ProcessEngine default created
Mar 18, 2016 2:22:34 PM org.activiti.engine.impl.bpmn.deployer.BpmnDeployer deploy
INFO: Processing resource myProcess.bpmn20.xml
id 40005 myProcess:12:40004
Below is my JUnit test:
@Test
public void startProcess() throws Exception {
    // Deploy the process definition from the .bpmn20.xml file
    RepositoryService repositoryService = activitiRule.getRepositoryService();
    repositoryService.createDeployment().name("working set")
            .addInputStream("myProcess.bpmn20.xml", new FileInputStream(filename))
            .deploy();

    // Start an instance by the process definition key
    RuntimeService runtimeService = activitiRule.getRuntimeService();
    ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("myProcess");
    assertNotNull(processInstance.getId());
    System.out.println("id " + processInstance.getId() + " "
            + processInstance.getProcessDefinitionId());
}
I have assigned the task to $INITIATOR. The process starts successfully, but when I log into activiti-app as an administrator, I am unable to see the tasks in the Task App. Why is that?
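A quick way to rule out the engine itself is to query the TaskService right after starting the instance; this is a minimal sketch assuming the same activitiRule and processInstance as in the test above:
// Sketch: list the tasks the engine created for this instance
List<Task> tasks = activitiRule.getTaskService()
        .createTaskQuery()
        .processInstanceId(processInstance.getId())
        .list();
for (Task task : tasks) {
    // A null or unexpected assignee would explain why the task
    // does not show up for the admin user in the Task App
    System.out.println(task.getName() + " -> assignee: " + task.getAssignee());
}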
I am trying to consume Kafka messages using Apache Beam on Dataflow. I wrote a simple pipeline using Apache Beam version 2.1.0, shown below:
public static void main(String[] args) {
    DrainOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().as(DrainOptions.class);
    options.setStreaming(true);

    Pipeline p = Pipeline.create(options);

    Map<String, Object> props = new HashMap<>();
    props.put("auto.offset.reset", "latest");
    props.put("group.id", "test-group");

    p.apply(KafkaIO.readBytes()
            .updateConsumerProperties(props)
            .withTopic(options.getTopic())
            .withBootstrapServers(options.getBootstrapServer()))
        .apply(ParDo.of(new GetValue()))
        .apply("ToString", ParDo.of(new ToString()))
        .apply("FixedWindow", Window.<String>into(FixedWindows.of(Duration.standardSeconds(30))))
        .apply(TextIO.write().to(options.getOutput()).withWindowedWrites().withNumShards(1));

    PipelineResult pipelineResult = p.run();
    pipelineResult.waitUntilFinish();
}
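GetValue, ToString, and DrainOptions are not shown above; for reference they look roughly like this (sketches only, assuming the Kafka payloads are UTF-8 text):
// Sketch of the custom options interface used above
public interface DrainOptions extends DataflowPipelineOptions {
    String getTopic();
    void setTopic(String value);

    String getBootstrapServer();
    void setBootstrapServer(String value);

    String getOutput();
    void setOutput(String value);
}

// Extracts the record value from each Kafka record
static class GetValue extends DoFn<KafkaRecord<byte[], byte[]>, byte[]> {
    @ProcessElement
    public void processElement(ProcessContext c) {
        c.output(c.element().getKV().getValue());
    }
}

// Decodes the raw bytes as a UTF-8 string (assumption: payloads are text)
static class ToString extends DoFn<byte[], String> {
    @ProcessElement
    public void processElement(ProcessContext c) {
        c.output(new String(c.element(), StandardCharsets.UTF_8));
    }
}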
When I tried to run it using the Dataflow runner:
mvn compile exec:java -Dexec.mainClass=com.test.beamexample.Drain -Dexec.args="--project=my-project --gcpTempLocation=gs://my_bucket/tmp/drain --streaming=true --stagingLocation=gs://my_bucket/staging/drain --output=gs://my_bucket/output/staging/drainresult --bootstrapServer=kafka-broker:9092 --topic=test --runner=DataflowRunner" -Pdataflow-runner
The pipeline is built and uploaded to the staging location successfully, but before the Dataflow runner runs the pipeline, it gets executed locally, so no Dataflow job is created, just as if we were using the direct runner:
Nov 14, 2017 2:14:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 111 files. Enable logging at DEBUG level to see which files will be staged.
Nov 14, 2017 2:14:53 PM org.apache.beam.runners.dataflow.DataflowRunner run
INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
Nov 14, 2017 2:14:53 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
INFO: Uploading 111 files from PipelineOptions.filesToStage to staging location to prepare for execution.
Nov 14, 2017 2:14:59 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
INFO: Staging files complete: 111 files cached, 0 files newly uploaded
Nov 14, 2017 2:15:00 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding KafkaIO.Read/Read(UnboundedKafkaSource)/DataflowRunner.StreamingUnboundedRead.ReadWithIds as step s1
Nov 14, 2017 2:15:00 PM org.apache.kafka.common.config.AbstractConfig logAll
INFO: ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = latest
...
Is there something missing?
I'm trying to push a Flask app to Bluemix, but I am receiving the following error:
{
"code": 170016,
"description": "Runner error: desire app failed: 400",
"error_code": "CF-RunnerError"
}
The command:
CF_TRACE=true cf push
My cf tooling version:
$ cf --version
cf version 6.26.0+9c9a261fd.2017-04-06
The logs show:
API/0 Created app with guid xxxx Apr 18, 2017 6:16:00 PM
API/0 Updated app with guid xxxx ({"name"=>"movie-recommend-demo-dublin", "command"=>"PRIVATE DATA HIDDEN", "instances"=>0, "memory"=>128, "environment_json"=>"PRIVATE DATA HIDDEN", "health_check_timeout"=>180}) Apr 18, 2017 6:21:31 PM
API/2 Updated app with guid xxxx ({"route"=>"xxxx", :verb=>"add", :relation=>:routes, :related_guid=>"xxxx"}) Apr 18, 2017 6:21:35 PM
API/1 Updated app with guid xxxx ({"name"=>"movie-recommend-demo-dublin", "command"=>"PRIVATE DATA HIDDEN", "instances"=>0, "memory"=>128, "environment_json"=>"PRIVATE DATA HIDDEN", "health_check_timeout"=>180}) Apr 18, 2017 6:22:28 PM
API/0 Updated app with guid xxxx ({"name"=>"movie-recommend-demo-dublin", "command"=>"PRIVATE DATA HIDDEN", "instances"=>0, "memory"=>128, "environment_json"=>"PRIVATE DATA HIDDEN", "health_check_timeout"=>180}) Apr 18, 2017 6:26:15 PM
As I was writing this question, I realised that I had a typo in my manifest.yml:
instances: 0
Hopefully this will help other users find the issue more quickly if they have a similar typo.
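For reference, the corrected manifest.yml looks roughly like this (only the instances line is the actual fix; the other values are taken from the logs above):
applications:
- name: movie-recommend-demo-dublin
  memory: 128M
  instances: 1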
I'm trying to build a project on Jenkins when a change is pushed to GitHub.
I'm using the GitHub plugin and building on a slave node.
If the slave is online, everything works fine. However, if the slave is offline, Jenkins doesn't try to launch the slave node and ignores the notification from GitHub.
The following log is output to jenkins.log when the slave is online:
May 30, 2016 4:16:31 PM org.jenkinsci.plugins.github.webhook.subscriber.DefaultPushGHEventSubscriber onEvent
INFO: Received POST for https://github.com/myname/myproject
May 30, 2016 4:16:31 PM org.jenkinsci.plugins.github.webhook.subscriber.DefaultPushGHEventSubscriber$1 run
INFO: Poked MyProject
May 30, 2016 4:16:33 PM com.cloudbees.jenkins.GitHubPushTrigger$1 run
INFO: SCM changes detected in MyProject. Triggering #22
However, if the slave is offline, the build is not triggered:
May 30, 2016 4:15:58 PM org.jenkinsci.plugins.github.webhook.subscriber.DefaultPushGHEventSubscriber onEvent
INFO: Received POST for https://github.com/myname/myproject
May 30, 2016 4:15:58 PM org.jenkinsci.plugins.github.webhook.subscriber.DefaultPushGHEventSubscriber$1 run
INFO: Poked MyProject
Starting a build by clicking "Build Now" always brings the slave node online. How can I trigger a build from GitHub changes on a slave node which might be offline?
Update:
I found the following message on "GitHub Hook Log" of the project:
Started on May 30, 2016 6:00:46 PM
We need to schedule a new build to get a workspace, but deferring 561ms in the hope that one will become available soon (all_suitable_nodes_are_offline)
Done. Took 0.23 sec
No changes
Update (2016/06/01):
This is the current slave setting:
Contents of /var/lib/jenkins/bin/start-slave:
#!/bin/bash -eux
# Boot the GCE instance hosting the slave, then launch the agent on it
gcloud compute instances start ci-slave --zone us-central1-f
ssh ci-slave /var/lib/jenkins/bin/start
I am sending a REST POST request to the FIWARE CEP and expecting an output event in a file, but nothing appears in the file.
REST POST (Producer) -> CEP -> File Consumer
http://194.28.122.118:8080/ProtonOnWebServer/rest/events
{"Name":"TrafficReport", "volume":"9000"}
catalina.out:
Apr 3, 2015 4:54:19 PM com.ibm.hrl.proton.webapp.providers.EventJSONMessageReader readFrom
INFO: started event message body reader
Apr 3, 2015 4:54:19 PM com.ibm.hrl.proton.webapp.providers.EventJSONMessageReader readFrom
INFO: name value: TrafficReport looking for: Name
Apr 3, 2015 4:54:19 PM com.ibm.hrl.proton.webapp.providers.EventJSONMessageReader readFrom
INFO: finished event message body reader
Apr 3, 2015 4:54:19 PM com.ibm.hrl.proton.webapp.resources.EventResource submitNewEvent
INFO: starting submitNewEvent
Apr 3, 2015 4:54:19 PM com.ibm.hrl.proton.router.EventRouter routeTimedObject
INFO: routeTimedObject: forwarding event TrafficReport; Name=TrafficReport; Certainty=0.0; Cost=0.0; EventSource=; OccurrenceTime=null; Annotation=; Duration=0.0; volume=100000; EventId=f4aee2d0-2d4b-4c0c-a24f-ae452896fa75; ExpirationTime=null; Chronon=null; DetectionTime=1428072859603; to consumer...
Apr 3, 2015 4:54:19 PM com.ibm.hrl.proton.webapp.resources.EventResource submitNewEvent
INFO: events sent to proton runtime...
The reason might be that the path you specified as the Consumer's output file does not exist, or that Tomcat has no permission to write to this path or to the file you specified.
Look at the log file (logs/catalina.out) and check whether you see a warning like:
WARNING: initializeAdapters: failed to initialize adapter Output adapter for consumer: DoSAttackTRConsumer, reason: No such file or directory
I would also recommend using an absolute path rather than a relative path for the output file, since the Tomcat working directory can differ between operating systems.
You don't need to create the file, but you do need to create the directory and make sure Tomcat has permission to write to it (or, if the file already exists, to write to the file).
So here are my recommendations:
1. Stop Tomcat.
2. Delete catalina.out.
3. Start Tomcat.
4. In the CEP web UI, change the path of the Consumers to an absolute path, save the project, and export it to the repository.
5. Make sure the path you specified for the Consumers exists and that Tomcat has permission to write to the directory (and, if the file exists, to the file); see the example after this list.
6. Change the status of the CEP engine to Stop.
7. Change the status of the CEP engine to Start.
8. Send an input event.
9. Make sure you don't see the warning listed above in catalina.out.
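For step 5, the directory setup on Linux might look like this (the output path and the tomcat user/group are assumptions; adjust them to your install):
# Create the output directory and give the Tomcat user write access to it
sudo mkdir -p /var/opt/proton-output
sudo chown tomcat:tomcat /var/opt/proton-output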
I've created a simple sinatra app, but can't get sessions to work when running it as an executable war.
I've verified that it works when run via "jruby -S rackup", but when run with "java -jar myapp.war", I find that the session is reset on each request:
INFO: Winstone Servlet Engine v0.9.10 running: controlPort=disabled
session: {"session_id"=>"75936d3d21367f5c1896e749ba401d7715e41a5fd01317484faa44d80c8afaea", "csrf"=>"60367cb6c5ead39b2669668ed28db3a1", "tracking"=>{"HTTP_USER_AGENT"=>"9f3d63482f1fb48a317c5c9e2de6196f9cd239cc", "HTTP_ACCEPT_LANGUAGE"=>"66eae971492938c2dcc2fb1ddc8d7ec3196037da"}}
Jul 20, 2014 8:00:20 PM winstone.Logger logInternal
INFO: 0:0:0:0:0:0:0:1 - [20/Jul/2014 20:00:20] "GET / " 200 765 0.1670
session: {"session_id"=>"19d266ffb8ccb29108464961e68fa9e29f1c3b45e0097806b4cbc8db156d71d7", "csrf"=>"5ac12991c2ec8d4acf22180d79c494c2", "tracking"=>{"HTTP_USER_AGENT"=>"9f3d63482f1fb48a317c5c9e2de6196f9cd239cc", "HTTP_ACCEPT_LANGUAGE"=>"66eae971492938c2dcc2fb1ddc8d7ec3196037da"}, "name"=>"john"}
Jul 20, 2014 8:00:31 PM winstone.Logger logInternal
INFO: 0:0:0:0:0:0:0:1 - [20/Jul/2014 20:00:31] "GET /login/john " 200 9 0.0240
session: {"session_id"=>"60f161941822b4f0fae9085db58fe9ea30e86d56dc16fff2ea5859bb4008c58f", "csrf"=>"7dd3977bef9fca9c7ed9b77fdc774657", "tracking"=>{"HTTP_USER_AGENT"=>"9f3d63482f1fb48a317c5c9e2de6196f9cd239cc", "HTTP_ACCEPT_LANGUAGE"=>"66eae971492938c2dcc2fb1ddc8d7ec3196037da"}}
Jul 20, 2014 8:00:40 PM winstone.Logger logInternal
Other than enabling sessions, is there any special setup needed for sessions to work when the app is packaged with Warbler and run as an executable war?
Nothing special should be needed - I tried your sample and it worked fine.
It's probably a bug with the jruby-rack version you're using ... please try >= 1.1.15.
I would also recommend trying out the Jetty web server (you'll find an option in config/warbler.rb) ... I'll try to make sure Jetty is the default in a future Warbler version.
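For example (a sketch; option names may vary slightly between Warbler versions):
# Gemfile: require a fixed jruby-rack
gem 'jruby-rack', '>= 1.1.15'

# config/warbler.rb: use Jetty instead of the default Winstone
Warbler::Config.new do |config|
  config.webserver = 'jetty'
end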