PredictionIO training engine fails with error - WorkflowConfig is empty. Quitting - scala

I'm trying to deploy an engine. I'm following the docs. So I:
create the app,
download the engine,
update the app name in engine.json,
build it: pio build --verbose,
then train: pio train --verbose.
The build completes successfully. However, training always fails with this error:
[ERROR] [CreateWorkflow$] WorkflowConfig is empty. Quitting
I tried downloading another engine but the error is the same. There is nothing on the Internet about the WorkflowConfig. Does anyone have a clue what might be wrong?
I'm attaching pio.log contents below.
2015-07-07 07:20:06,128 INFO io.prediction.tools.console.Console$ [main] - Using existing engine manifest JSON at /home/vagrant/PredictionIO/mubuzz-similar-articles/manifest.json
2015-07-07 07:20:06,875 INFO org.elasticsearch.plugins [main] - [Jude the Entropic Man] loaded [], sites []
2015-07-07 07:20:07,706 INFO io.prediction.tools.Runner$ [main] - Submission command: /home/vagrant/PredictionIO/vendors/spark-1.3.1/bin/spark-submit --class io.prediction.workflow.CreateWorkflow --jars file:/home/vagrant/PredictionIO/mubuzz-similar-articles/target/scala-2.10/template-scala-parallel-similarproduct_2.10-0.1-SNAPSHOT.jar,file:/home/vagrant/PredictionIO/mubuzz-similar-articles/target/scala-2.10/template-scala-parallel-similarproduct-assembly-0.1-SNAPSHOT-deps.jar --files file:/home/vagrant/PredictionIO/conf/log4j.properties,file:/home/vagrant/PredictionIO/vendors/hbase-1.0.0/conf/hbase-site.xml --driver-class-path /home/vagrant/PredictionIO/conf:/home/vagrant/PredictionIO/vendors/hbase-1.0.0/conf file:/home/vagrant/PredictionIO/lib/pio-assembly-0.9.3.jar --engine-id sZTyLTTx277Kv58cgSQub4igE60DDagR --engine-version e7c5e07b70df531e8f7a92d278a16278c56d0581 --engine-variant file:/home/vagrant/PredictionIO/mubuzz-similar-articles/engine.json --verbosity 0 --verbose --json-extractor Both --env PIO_STORAGE_SOURCES_HBASE_TYPE=hbase,PIO_ENV_LOADED=1,PIO_STORAGE_REPOSITORIES_METADATA_NAME=pio_meta,PIO_FS_BASEDIR=/home/vagrant/.pio_store,PIO_STORAGE_SOURCES_HBASE_HOME=/home/vagrant/PredictionIO/vendors/hbase-1.0.0,PIO_HOME=/home/vagrant/PredictionIO,PIO_FS_ENGINESDIR=/home/vagrant/.pio_store/engines,PIO_STORAGE_SOURCES_LOCALFS_PATH=/home/vagrant/.pio_store/models,PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE=elasticsearch,PIO_STORAGE_REPOSITORIES_METADATA_SOURCE=ELASTICSEARCH,PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=LOCALFS,PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME=pio_event,PIO_STORAGE_SOURCES_ELASTICSEARCH_HOME=/home/vagrant/PredictionIO/vendors/elasticsearch-1.4.4,PIO_FS_TMPDIR=/home/vagrant/.pio_store/tmp,PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_model,PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE=HBASE,PIO_CONF_DIR=/home/vagrant/PredictionIO/conf,PIO_STORAGE_SOURCES_LOCALFS_TYPE=localfs --verbose
2015-07-07 07:20:08,903 ERROR io.prediction.workflow.CreateWorkflow$ [main] - WorkflowConfig is empty. Quitting

This is a known issue and is fixed in the next release.
See the JIRA ticket here: https://predictionio.atlassian.net/browse/PDIO-636
You just need to omit --verbose from the train command for now.
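As a rough illustration (engine directory taken from the log above, everything else unchanged), the workaround would look like this:
cd /home/vagrant/PredictionIO/mubuzz-similar-articles   # engine directory from the log
pio build --verbose   # building with --verbose still works
pio train             # drop --verbose here until the fixed release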

Related

Import realm in Keycloak:18.x

I cannot import any realms into Keycloak 18.0.0. That's the Quarkus distribution now, not the Wildfly one anymore. The documentation here says it should be pretty simple: by mounting my exported realm.json file into /opt/keycloak/data/import/...json it actually TRIES to import it, but it ends with:
"[org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Script upload is disabled".
The feature is known to be removed, and the old -Dkeycloak.profile.feature.upload_scripts=enabled won't work anymore. OK.
But then what is the way to import realms on startup? That would let me distribute a ready-made local stack without any handcrafting needed to launch it. I could do it by running SQL commands, but that's way too hacky for my taste.
Compose file:
cp-keycloak:
  image: quay.io/keycloak/keycloak:18.0.0
  environment:
    KC_DB: mysql
    KC_DB_URL: jdbc:mysql://cp-keycloak-database:3306/keycloak
    KC_DB_USERNAME: root
    KC_DB_PASSWORD: root
    KC_HOSTNAME: localhost
    KEYCLOAK_ADMIN: admin
    KEYCLOAK_ADMIN_PASSWORD: admin
  ports:
    - 8082:8080
  volumes:
    - ./data/local_stack/init.keycloak.json:/opt/keycloak/data/import/main-realm.json:ro
  entrypoint: "/opt/keycloak/bin/kc.sh start-dev --import-realm"
The output:
cp-keycloak_1 | 2022-05-05 14:07:26,801 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (development) mode
cp-keycloak_1 | 2022-05-05 14:07:26,802 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to import realm: Main-Realm
cp-keycloak_1 | 2022-05-05 14:07:26,803 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Script upload is disabled
Thanks
This might be caused by references inside your realm .json to some configuration that uses the deprecated upload-script feature.
Try to remove it, export the JSON, and then try to import it again (this time without the upload-script feature).
From the comments (credit to jfrantzius): see here for what you either need to remove or replace in your realm-export.json: https://github.com/keycloak/keycloak/issues/11664#issuecomment-1111062102. We had to replace the entries; see also https://github.com/keycloak/keycloak/discussions/12041#discussioncomment-2768768
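If it helps to locate the offending entries first, here is a quick sketch (assuming the export is named realm-export.json):
grep -n -i "script" realm-export.json   # list every line referencing the script feature
# common offenders are authorization policies with "type": "script" and script-based
# protocol mappers or authenticators; remove or replace those blocks, then import again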

Not able to run tests in docker/gatling in different folder path than /home/gatling

I'm trying to run Gatling tests in Docker, and everything works fine with the following command:
docker run -it --rm -v /c/CURRENTPATH/conf:/opt/gatling/conf -v /c/CURRENTPATH/user_files:/opt/gatling/user-files -v /c/CURRENTPATH/results:/opt/gatling/results -e JAVA_OPTS="-Ddebug=true" <IMAGE_NAME>
But when I change CURRENTPATH to DIFFERENT_PATH (I copied the same files from CURRENTPATH to DIFFERENT_PATH), instead of getting the simulations list in the Docker command line, I get:
08:40:51.109 [main] DEBUG io.gatling.compiler.ZincCompiler$ - All initially invalidated sources: Set()
08:40:51.117 [main] DEBUG io.gatling.compiler.ZincCompiler$ - Compilation successful
08:40:54.420 [GatlingSystem-akka.actor.default-dispatcher-3] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
There is no simulation script. Please check that your scripts are in user-files/simulations
08:40:54.893 [GatlingSystem-akka.actor.default-dispatcher-3] INFO akka.actor.CoordinatedShutdown - Starting coordinated shutdown from JVM shutdown hook
Did anyone get the same issue? Any ideas what could be wrong?

flower UI doesn't show workers

I have set up Airflow to execute workflows with the CeleryExecutor. I have started the webserver, scheduler and worker, and they run just fine. But the Flower UI doesn't show any workers.
The output of airflow worker is:
/usr/local/lib/python2.7/dist-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
[2018-08-02 11:29:09,827] {__init__.py:57} INFO - Using executor CeleryExecutor
[2018-08-02 11:29:09,983] {driver.py:124} INFO - Generating grammar tables from /usr/lib/python2.7/lib2to3/Grammar.txt
[2018-08-02 11:29:10,052] {driver.py:124} INFO - Generating grammar tables from /usr/lib/python2.7/lib2to3/PatternGrammar.txt
And the output of airflow flower is:
/usr/local/lib/python2.7/dist-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
[2018-08-02 11:29:35,574] {__init__.py:57} INFO - Using executor CeleryExecutor
[2018-08-02 11:29:35,739] {driver.py:124} INFO - Generating grammar tables from /usr/lib/python2.7/lib2to3/Grammar.txt
[2018-08-02 11:29:35,799] {driver.py:124} INFO - Generating grammar tables from /usr/lib/python2.7/lib2to3/PatternGrammar.txt
[I 180802 11:29:36 command:139] Visit me at http://0.0.0.0:5555
[I 180802 11:29:36 command:144] Broker: amqp://guest:**@localhost:5672//
[I 180802 11:29:36 command:147] Registered tasks:
[u'celery.accumulate',
u'celery.backend_cleanup',
u'celery.chain',
u'celery.chord',
u'celery.chord_unlock',
u'celery.chunks',
u'celery.group',
u'celery.map',
u'celery.starmap']
[I 180802 11:29:36 mixins:224] Connected to amqp://guest:**@localhost:5672//
But Flower doesn't show any information about workers or tasks, and it prints the following error on the CLI:
[E 180802 11:29:55 broker:82] RabbitMQ management API call failed:
[Errno 111] Connection refused
Any ideas as to what's wrong?
Well, I was able to solve the problem. It turns out I was supposed to add export C_FORCE_ROOT=true to my ~/.bashrc file before running the worker. This happens when you're executing the worker as root.
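A minimal sketch of that fix, assuming a bash shell and that the worker really is being run as root:
echo 'export C_FORCE_ROOT=true' >> ~/.bashrc   # let the Celery worker run as root
source ~/.bashrc                               # reload the shell configuration
airflow worker                                 # then start the worker again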

Voltdb init encountered an unrecoverable error and is exiting

I followed the official VoltDB documentation, but I encounter an error when running
voltdb init --config=deployment.xml
to initialize the VoltDB configuration file.
The error is:
ERROR: Deployment information could not be obtained from cluster node or locally
VoltDB has encountered an unrecoverable error and is exiting
The log may contain additional information.
My VoltDB version is voltdb-community-8.0.
Here is the relevant part of the log file volt.log:
2018-05-02 08:52:25,048 INFO [main] HOST: PID of this Volt process is 15950
2018-05-02 08:52:25,062 INFO [main] HOST: Command line arguments: org.voltdb.VoltDB initialize deployment deployment.xml
2018-05-02 08:52:25,063 INFO [main] HOST: Command line JVM arguments: -Xmx2048m -Xms2048m -XX:+AlwaysPreTouch -Djava.awt.headless=true -Djavax.security.auth.useSubjectCredsOnly=false -Dsun.net.inetaddr.ttl=300 -Dsun.net.inetaddr.negative.ttl=3600 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+UseTLAB -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCondCardMark -Dsun.rmi.dgc.server.gcInterval=9223372036854775807 -Dsun.rmi.dgc.client.gcInterval=9223372036854775807 -XX:CMSWaitDuration=120000 -XX:CMSMaxAbortablePrecleanTime=120000 -XX:+ExplicitGCInvokesConcurrent -XX:+CMSScavengeBeforeRemark -XX:+CMSClassUnloadingEnabled -Dlog4j.configuration=file:///usr/local/voltdb-community-8.0/voltdb/log4j.xml -Djava.library.path=default
2018-05-02 08:52:25,064 INFO [main] HOST: Command line JVM classpath: /usr/local/voltdb-community-8.0/voltdb/voltdb-8.0.jar:/usr/local/voltdb-community-8.0/lib/vmetrics.jar:/usr/local/voltdb-community-8.0/lib/commons-logging-1.1.3.jar:/usr/local/voltdb-community-8.0/lib/log4j-1.2.16.jar:/usr/local/voltdb-community-8.0/lib/jetty-io-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/avro-1.7.7.jar:/usr/local/voltdb-community-8.0/lib/lz4-1.2.0.jar:/usr/local/voltdb-community-8.0/lib/jetty-server-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/jline-2.10.jar:/usr/local/voltdb-community-8.0/lib/tomcat-juli.jar:/usr/local/voltdb-community-8.0/lib/jsch-0.1.51.jar:/usr/local/voltdb-community-8.0/lib/slf4j-nop-1.6.2.jar:/usr/local/voltdb-community-8.0/lib/kafka-clients-0.8.2.2.jar:/usr/local/voltdb-community-8.0/lib/httpcore-4.3.3.jar:/usr/local/voltdb-community-8.0/lib/super-csv-2.1.0.jar:/usr/local/voltdb-community-8.0/lib/felix.jar:/usr/local/voltdb-community-8.0/lib/commons-codec-1.6.jar:/usr/local/voltdb-community-8.0/lib/scala-xml_2.11-1.0.2.jar:/usr/local/voltdb-community-8.0/lib/jetty-util-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/slf4j-api-1.6.2.jar:/usr/local/voltdb-community-8.0/lib/jetty-servlet-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/snappy-java-1.1.1.7.jar:/usr/local/voltdb-community-8.0/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/voltdb-community-8.0/lib/kafka_2.11-0.8.2.2.jar:/usr/local/voltdb-community-8.0/lib/jetty-security-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/scala-library-2.11.5.jar:/usr/local/voltdb-community-8.0/lib/owner-1.0.9.jar:/usr/local/voltdb-community-8.0/lib/owner-java8-1.0.9.jar:/usr/local/voltdb-community-8.0/lib/snmp4j-2.5.2.jar:/usr/local/voltdb-community-8.0/lib/jetty-continuation-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/httpclient-4.3.6.jar:/usr/local/voltdb-community-8.0/lib/servlet-api-3.1.jar:/usr/local/voltdb-community-8.0/lib/jna.jar:/usr/local/voltdb-community-8.0/lib/jetty-http-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/metrics-core-2.2.0.jar:/usr/local/voltdb-community-8.0/lib/tomcat-jdbc.jar:/usr/local/voltdb-community-8.0/lib/httpasyncclient-4.0.2.jar:/usr/local/voltdb-community-8.0/lib/httpcore-nio-4.3.2.jar:/usr/local/voltdb-community-8.0/lib/protobuf-java-3.4.0.jar:/usr/local/voltdb-community-8.0/lib/scala-parser-combinators_2.11-1.0.2.jar:/usr/local/voltdb-community-8.0/lib/jackson-core-asl-1.9.13.jar:/usr/local/voltdb-community-8.0/lib/commons-lang3-3.0.jar:/usr/local/voltdb-community-8.0/lib/extension/voltdb-rabbitmq.jar
2018-05-02 08:52:25,064 ERROR [main] HOST: Deployment information could not be obtained from cluster node or locally
So it ends up unable to generate the configuration file. Please tell me what "Deployment information could not be obtained from cluster node or locally" means.
This error means that it could not find the specified deployment.xml file in the local directory. You can omit --config=deployment.xml and just run "voltdb init"; it will generate a default deployment.xml file for you. Then you can proceed to "voltdb start" if you just want a simple standalone instance with the default settings.
Or, if you want to modify the configuration settings, you could run "voltdb init" to get a default configuration, then run "voltdb get deployment" to retrieve the generated deployment.xml file from the voltdbroot directory into the local directory. Then you could delete the voltdbroot directory, modify this deployment.xml file, and start over. You could also start over using a deployment file you write manually or one copied from the examples/HOWTOs/deployment-file-examples folder.
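A minimal sketch of that workflow, assuming default settings and running from the directory that should hold voltdbroot:
voltdb init                           # generates voltdbroot with a default deployment.xml
voltdb get deployment                 # copies the generated deployment.xml to the local directory
# edit deployment.xml, delete the voltdbroot directory, then re-initialize and start:
voltdb init --config=deployment.xml
voltdb start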
(Disclosure: I work at VoltDB)

Selenium looping through jenkins and permission denied in cli

After struggling to get proper test suites, I'm now pretty disappointed: while following this tutorial as closely as possible (pretty straightforward, right?), Setting up Selenium server on a headless Jenkins CI build machine, Jenkins keeps looping on the current build, outputting:
So I decided to run a Selenium build by hand on the CI machine, and got this:
user#machine:/var/log$ export DISPLAY=":99" && java -jar /var/lib/selenium/selenium- server.jar -browserSessionReuse -htmlSuite *firefox http://staging.site.com /var/lib/jenkins/jobs/project/workspace/tests/selenium/testsuite.html /var/lib/jenkins/jobs/project/workspace/logs/selenium.html
24 janv. 2012 19:27:56 org.openqa.grid.selenium.GridLauncher main
INFO: Launching a standalone server
19:27:59.927 INFO - Java: Sun Microsystems Inc. 20.0-b11
19:27:59.929 INFO - OS: Linux 3.0.0-14-generic amd64
19:27:59.951 INFO - v2.17.0, with Core v2.17.0. Built from revision 15540
19:27:59.958 INFO - Will recycle browser sessions when possible.
19:28:00.143 INFO - RemoteWebDriver instances should connect to: http://127.0.0.1:4444/wd/hub
19:28:00.144 INFO - Version Jetty/5.1.x
19:28:00.145 INFO - Started HttpContext[/selenium-server/driver,/selenium-server/driver]
19:28:00.147 INFO - Started HttpContext[/selenium-server,/selenium-server]
19:28:00.147 INFO - Started HttpContext[/,/]
19:28:00.183 INFO - Started org.openqa.jetty.jetty.servlet.ServletHandler#16ba8602
19:28:00.184 INFO - Started HttpContext[/wd,/wd]
19:28:00.199 INFO - Started SocketListener on 0.0.0.0:4444
19:28:00.199 INFO - Started org.openqa.jetty.jetty.Server#6f7a29a1
HTML suite exception seen:
java.io.IOException: Permission denied
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:900)
at org.openqa.selenium.server.SeleniumServer.runHtmlSuite(SeleniumServer.java:603)
at org.openqa.selenium.server.SeleniumServer.boot(SeleniumServer.java:287)
at org.openqa.selenium.server.SeleniumServer.main(SeleniumServer.java:245)
at org.openqa.grid.selenium.GridLauncher.main(GridLauncher.java:54)
19:28:00.218 INFO - Shutting down...
19:28:00.220 INFO - Stopping Acceptor ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=4444]
While understanding the output isn't that hard, figuring out what to do to fix this issue is.
Any chance you have already faced this kind of thing? Thanks
I only just got past these problems myself, but I was able to run your command when I pointed it at my own .jar, test suite and report file. I'm thinking that perhaps the location of your files under
/var/lib/selenium
could be part of the problem. Try putting them somewhere your user has permission, perhaps under
/home/USERNAME/selenium
Other than that, the only thing I can say is to make sure your .jar, test suite and report file are valid.
Also (I assume this is a copy-and-paste error into Stack Overflow), this part of your command is incorrect:
/var/lib/selenium/selenium- server.jar
You are not getting the error I would expect from an incorrect jar location, so I assume something was lost when you pasted it to Stack Overflow.
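For example (USERNAME and paths are placeholders; the report is written somewhere the user can create files), the hand-run command might become:
# placeholders: adjust USERNAME and paths to your own setup
mkdir -p /home/USERNAME/selenium
cp /var/lib/selenium/selenium-server.jar /home/USERNAME/selenium/
# point the HTML report at a directory your user can write to
export DISPLAY=":99" && java -jar /home/USERNAME/selenium/selenium-server.jar -browserSessionReuse -htmlSuite *firefox http://staging.site.com /var/lib/jenkins/jobs/project/workspace/tests/selenium/testsuite.html /home/USERNAME/selenium/selenium.html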