VoltDB init encountered an unrecoverable error and is exiting - init

I followed the official VoltDB documentation, but I encountered an error when running
voltdb init --config=deployment.xml
to initialize the VoltDB configuration. The error is:
ERROR: Deployment information could not be obtained from cluster node or locally
VoltDB has encountered an unrecoverable error and is exiting
The log may contain additional information.
My VoltDB version is voltdb-community-8.0.
Here is the relevant part of the log file volt.log:
2018-05-02 08:52:25,048 INFO [main] HOST: PID of this Volt process is 15950
2018-05-02 08:52:25,062 INFO [main] HOST: Command line arguments: org.voltdb.VoltDB initialize deployment deployment.xml
2018-05-02 08:52:25,063 INFO [main] HOST: Command line JVM arguments: -Xmx2048m -Xms2048m -XX:+AlwaysPreTouch -Djava.awt.headless=true -Djavax.security.auth.useSubjectCredsOnly=false -Dsun.net.inetaddr.ttl=300 -Dsun.net.inetaddr.negative.ttl=3600 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+UseTLAB -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCondCardMark -Dsun.rmi.dgc.server.gcInterval=9223372036854775807 -Dsun.rmi.dgc.client.gcInterval=9223372036854775807 -XX:CMSWaitDuration=120000 -XX:CMSMaxAbortablePrecleanTime=120000 -XX:+ExplicitGCInvokesConcurrent -XX:+CMSScavengeBeforeRemark -XX:+CMSClassUnloadingEnabled -Dlog4j.configuration=file:///usr/local/voltdb-community-8.0/voltdb/log4j.xml -Djava.library.path=default
2018-05-02 08:52:25,064 INFO [main] HOST: Command line JVM classpath: /usr/local/voltdb-community-8.0/voltdb/voltdb-8.0.jar:/usr/local/voltdb-community-8.0/lib/vmetrics.jar:/usr/local/voltdb-community-8.0/lib/commons-logging-1.1.3.jar:/usr/local/voltdb-community-8.0/lib/log4j-1.2.16.jar:/usr/local/voltdb-community-8.0/lib/jetty-io-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/avro-1.7.7.jar:/usr/local/voltdb-community-8.0/lib/lz4-1.2.0.jar:/usr/local/voltdb-community-8.0/lib/jetty-server-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/jline-2.10.jar:/usr/local/voltdb-community-8.0/lib/tomcat-juli.jar:/usr/local/voltdb-community-8.0/lib/jsch-0.1.51.jar:/usr/local/voltdb-community-8.0/lib/slf4j-nop-1.6.2.jar:/usr/local/voltdb-community-8.0/lib/kafka-clients-0.8.2.2.jar:/usr/local/voltdb-community-8.0/lib/httpcore-4.3.3.jar:/usr/local/voltdb-community-8.0/lib/super-csv-2.1.0.jar:/usr/local/voltdb-community-8.0/lib/felix.jar:/usr/local/voltdb-community-8.0/lib/commons-codec-1.6.jar:/usr/local/voltdb-community-8.0/lib/scala-xml_2.11-1.0.2.jar:/usr/local/voltdb-community-8.0/lib/jetty-util-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/slf4j-api-1.6.2.jar:/usr/local/voltdb-community-8.0/lib/jetty-servlet-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/snappy-java-1.1.1.7.jar:/usr/local/voltdb-community-8.0/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/voltdb-community-8.0/lib/kafka_2.11-0.8.2.2.jar:/usr/local/voltdb-community-8.0/lib/jetty-security-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/scala-library-2.11.5.jar:/usr/local/voltdb-community-8.0/lib/owner-1.0.9.jar:/usr/local/voltdb-community-8.0/lib/owner-java8-1.0.9.jar:/usr/local/voltdb-community-8.0/lib/snmp4j-2.5.2.jar:/usr/local/voltdb-community-8.0/lib/jetty-continuation-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/httpclient-4.3.6.jar:/usr/local/voltdb-community-8.0/lib/servlet-api-3.1.jar:/usr/local/voltdb-community-8.0/lib/jna.jar:/usr/local/voltdb-community-8.0/lib/jetty-http-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/metrics-core-2.2.0.jar:/usr/local/voltdb-community-8.0/lib/tomcat-jdbc.jar:/usr/local/voltdb-community-8.0/lib/httpasyncclient-4.0.2.jar:/usr/local/voltdb-community-8.0/lib/httpcore-nio-4.3.2.jar:/usr/local/voltdb-community-8.0/lib/protobuf-java-3.4.0.jar:/usr/local/voltdb-community-8.0/lib/scala-parser-combinators_2.11-1.0.2.jar:/usr/local/voltdb-community-8.0/lib/jackson-core-asl-1.9.13.jar:/usr/local/voltdb-community-8.0/lib/commons-lang3-3.0.jar:/usr/local/voltdb-community-8.0/lib/extension/voltdb-rabbitmq.jar
2018-05-02 08:52:25,064 ERROR [main] HOST: Deployment information could not be obtained from cluster node or locally
So it fails to generate the configuration. Please tell me what "Deployment information could not be obtained from cluster node or locally" means.

This error means that VoltDB could not find the specified deployment.xml file in the local directory. You can omit --config=deployment.xml and just run "voltdb init"; it will generate a default deployment.xml for you. Then you can proceed to "voltdb start" if you just want a simple standalone instance with the default settings.
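In other words, for the default standalone setup the whole sequence is just:
voltdb init
voltdb start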
Or, if you want to modify the configuration settings, you can run "voltdb init" to get a default configuration, then run "voltdb get deployment" to retrieve the generated deployment.xml from the voltdbroot directory into the local directory. Then delete the voltdbroot directory, modify the deployment.xml file, and start over. You can also start over using a deployment file you write by hand or one copied from the examples/HOWTOs/deployment-file-examples folder.
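As a rough sketch of that workflow (assuming voltdbroot is created in the current directory, which is the default):
voltdb init                          # creates voltdbroot/ with a default configuration
voltdb get deployment                # copies the generated deployment.xml into the current directory
rm -rf voltdbroot                    # remove the root so init can run fresh
# ... edit deployment.xml ...
voltdb init --config=deployment.xml
voltdb start
And a minimal hand-written deployment.xml for a single node might look like the following (attribute values are illustrative; the deployment-file-examples folder has authoritative samples):
<?xml version="1.0"?>
<deployment>
    <cluster sitesperhost="8" kfactor="0"/>
</deployment>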
(Disclosure: I work at VoltDB)

Related

Unable to start the daemon process. This problem might be caused by incorrect configuration of the daemon

I have tried almost every solution I found on the Internet, but my problem is still not solved.
I opened gradle.properties and added the line below, but it didn't help:
org.gradle.jvmargs=-Xmx1024m -XX:MaxPermSize=512m
I also deleted the .gradle directory and tried building again, but the problem still exists. Can anyone help?
FAILURE: Build failed with an exception.
* What went wrong:
Unable to start the daemon process.
This problem might be caused by incorrect configuration of the daemon.
For example, an unrecognized jvm option is used.
Please refer to the User Manual chapter on the daemon at https://docs.gradle.org/6.7/userguide/gradle_daemon.html
Process command line: C:\Program Files\Java\jdk1.8.0_291\bin\java.exe -Xmx1024M -Dfile.encoding=windows-1252 -Duser.country=US -Duser.language=en -Duser.variant -cp C:\Users\likec\.gradle\wrapper\dists\gradle-6.7-all\cuy9mc7upwgwgeb72wkcrupxe\gradle-6.7\lib\gradle-launcher-6.7.jar org.gradle.launcher.daemon.bootstrap.GradleDaemon 6.7
Please read the following process output to find out more:
-----------------------
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 1048576 bytes for AllocateHeap
# An error report file with more information is saved as:
# C:\Users\likec\.gradle\daemon\6.7\hs_err_pid14492.log
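The process output shows the daemon JVM failing a native memory allocation, so one thing worth trying (an assumption based on that output, not a confirmed fix) is requesting a smaller heap in gradle.properties. Note that -XX:MaxPermSize is ignored on Java 8 anyway, since the permanent generation was removed:
# gradle.properties -- value is illustrative; pick a heap size the machine can actually back
org.gradle.jvmargs=-Xmx512m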

ERROR Exiting Kafka due to fatal exception (kafka.Kafka$) on Windows - Apache Kafka

I'm getting the below error while starting the Kafka server on a Windows machine. I downloaded Scala 2.11 - kafka_2.11-2.1.0.tgz from the link https://kafka.apache.org/downloads and did the following steps:
Go to the config folder in Apache Kafka (C:\Apache-Kafka\kafka_2.11-2.1.0\config) and edit "server.properties" using any text editor.
Find log.dirs and replace the value after "=" (/tmp/kafka-logs) with C:\Apache-Kafka\kafka_2.11-2.1.0\kafka-logs.
Now simply start the server:
>kafka-server-start.bat C:\Apache-Kafka\kafka_2.11-2.1.0\config
Error:
C:\Apache-Kafka\kafka_2.11-2.1.0\bin\windows>kafka-server-start.bat C:\Apache-Kafka\kafka_2.11-2.1.0\config
[2018-12-14 21:09:34,566] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2018-12-14 21:09:34,583] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
java.nio.file.AccessDeniedException: C:\Apache-Kafka\kafka_2.11-2.1.0\config
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at sun.nio.fs.WindowsFileSystemProvider.newByteChannel(WindowsFileSystemProvider.java:230)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.newByteChannel(Files.java:407)
at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)
at java.nio.file.Files.newInputStream(Files.java:152)
at org.apache.kafka.common.utils.Utils.loadProps(Utils.java:560)
at kafka.Kafka$.getPropsFromArgs(Kafka.scala:42)
at kafka.Kafka$.main(Kafka.scala:58)
at kafka.Kafka.main(Kafka.scala)
C:\Apache-Kafka\kafka_2.11-2.1.0\bin\windows>
Note: I've already set up Apache ZooKeeper on my Windows machine, and it's running on port 2181.
I ran the command as administrator.
The stack trace shows an AccessDeniedException on C:\Apache-Kafka\kafka_2.11-2.1.0\config because a directory was passed where kafka-server-start.bat expects the server.properties file. Pass the file instead; from bin\windows that is ..\..\config\server.properties (two levels of .., with a backslash between the pairs of dots).
In general, avoid using the C: drive to store kafka-logs; try a drive other than C: for storing the Kafka logs.
Change the property log.dirs={drive other than C:}/tmp/kafka-logs in KafkaHome/config/server.properties.
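Putting the two answers together, a working invocation plus an explicit log directory might look like this (paths are illustrative; Kafka accepts forward slashes in server.properties on Windows):
C:\Apache-Kafka\kafka_2.11-2.1.0\bin\windows>kafka-server-start.bat ..\..\config\server.properties
# in config\server.properties
log.dirs=D:/tmp/kafka-logs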

PredictionIO training engine fails with error - WorkflowConfig is empty. Quitting

I'm trying to deploy an engine. I'm following the docs. So I:
create the app,
download the engine,
update the app name in engine.json,
build it: pio build --verbose,
then train: pio train --verbose.
Everything works, building completes successfully. However, training always fails with error:
[ERROR] [CreateWorkflow$] WorkflowConfig is empty. Quitting
I tried downloading another engine but the error is the same. There is nothing on the Internet about the WorkflowConfig. Does anyone have a clue what might be wrong?
I'm attaching pio.log contents below.
2015-07-07 07:20:06,128 INFO io.prediction.tools.console.Console$ [main] - Using existing engine manifest JSON at /home/vagrant/PredictionIO/mubuzz-similar-articles/manifest.json
2015-07-07 07:20:06,875 INFO org.elasticsearch.plugins [main] - [Jude the Entropic Man] loaded [], sites []
2015-07-07 07:20:07,706 INFO io.prediction.tools.Runner$ [main] - Submission command: /home/vagrant/PredictionIO/vendors/spark-1.3.1/bin/spark-submit --class io.prediction.workflow.CreateWorkflow --jars file:/home/vagrant/PredictionIO/mubuzz-similar-articles/target/scala-2.10/template-scala-parallel-similarproduct_2.10-0.1-SNAPSHOT.jar,file:/home/vagrant/PredictionIO/mubuzz-similar-articles/target/scala-2.10/template-scala-parallel-similarproduct-assembly-0.1-SNAPSHOT-deps.jar --files file:/home/vagrant/PredictionIO/conf/log4j.properties,file:/home/vagrant/PredictionIO/vendors/hbase-1.0.0/conf/hbase-site.xml --driver-class-path /home/vagrant/PredictionIO/conf:/home/vagrant/PredictionIO/vendors/hbase-1.0.0/conf file:/home/vagrant/PredictionIO/lib/pio-assembly-0.9.3.jar --engine-id sZTyLTTx277Kv58cgSQub4igE60DDagR --engine-version e7c5e07b70df531e8f7a92d278a16278c56d0581 --engine-variant file:/home/vagrant/PredictionIO/mubuzz-similar-articles/engine.json --verbosity 0 --verbose --json-extractor Both --env PIO_STORAGE_SOURCES_HBASE_TYPE=hbase,PIO_ENV_LOADED=1,PIO_STORAGE_REPOSITORIES_METADATA_NAME=pio_meta,PIO_FS_BASEDIR=/home/vagrant/.pio_store,PIO_STORAGE_SOURCES_HBASE_HOME=/home/vagrant/PredictionIO/vendors/hbase-1.0.0,PIO_HOME=/home/vagrant/PredictionIO,PIO_FS_ENGINESDIR=/home/vagrant/.pio_store/engines,PIO_STORAGE_SOURCES_LOCALFS_PATH=/home/vagrant/.pio_store/models,PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE=elasticsearch,PIO_STORAGE_REPOSITORIES_METADATA_SOURCE=ELASTICSEARCH,PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=LOCALFS,PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME=pio_event,PIO_STORAGE_SOURCES_ELASTICSEARCH_HOME=/home/vagrant/PredictionIO/vendors/elasticsearch-1.4.4,PIO_FS_TMPDIR=/home/vagrant/.pio_store/tmp,PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_model,PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE=HBASE,PIO_CONF_DIR=/home/vagrant/PredictionIO/conf,PIO_STORAGE_SOURCES_LOCALFS_TYPE=localfs --verbose
2015-07-07 07:20:08,903 ERROR io.prediction.workflow.CreateWorkflow$ [main] - WorkflowConfig is empty. Quitting
This is a known issue, and it is fixed in the next release.
See the JIRA ticket, here: https://predictionio.atlassian.net/browse/PDIO-636
You just need to omit --verbose for now.
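In other words, until the fixed release is out, run the steps without the flag:
pio build
pio train
pio deploy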

ZooKeeper issue when setting up Kafka

To install Kafka, I downloaded the Kafka tar archive. To start the server I tried this command:
bin/zookeeper-server-start.sh config/zookeeper.properties
The following error occurred on entering the above command:
INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2014-08-21 11:53:55,748] FATAL Invalid config, exiting abnormally (org.apache.zookeeper.server.quorum.QuorumPeerMain)
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing config/zookeeper.properties
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:110)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:99)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:76)
Caused by: java.lang.IllegalArgumentException: config/zookeeper.properties file is missing
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:94)
... 2 more
Invalid config, exiting abnormally
Do I need to set up ZooKeeper separately? How can I resolve this?
For Windows:
Go to the kafka_2.11-2.0.0\bin\windows folder.
Then run: zookeeper-server-start.bat ../../config/zookeeper.properties
This works because the failure is basically caused by:
java.lang.IllegalArgumentException: config/zookeeper.properties file is missing
It would be really useful if you could share what exactly you have done so far. Also check that the file actually exists at the given location and that you are running the command from the correct location; it is supposed to be run from your $KAFKA_HOME folder (where you extracted the tar file).
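A quick sanity check, run from $KAFKA_HOME:
ls config/zookeeper.properties
If that does not list the file, you are either in the wrong directory or the config lives elsewhere (see the brew answers below).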
I too faced the same issue when I installed Kafka via brew on a MacBook.
This happens because with brew the zookeeper.properties file is not next to bin; it lives under libexec.
Follow these steps:
Enter the command: cd /usr/local/Cellar/kafka/2.3.0
Enter the command: cd libexec
Now enter the command: zookeeper-server-start config/zookeeper.properties
You will get the message: INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
Earlier I was getting this error:
$ zookeeper-server-start config/zookeeper.properties
[2019-10-02 14:35:20,159] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2019-10-02 14:35:20,160] ERROR Invalid config, exiting abnormally (org.apache.zookeeper.server.quorum.QuorumPeerMain)
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing config/zookeeper.properties
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:156)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:104)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:81)
Caused by: java.lang.IllegalArgumentException: config/zookeeper.properties file is missing
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:140)
... 2 more
Invalid config, exiting abnormally
When you run the above command, it doesn't pick up the config file. If you put the complete path instead, e.g. c:\Kafka\config\zookeeper.properties, it works.
I faced the exact same error, and after a while I realized the reason: the path wasn't correct. I had installed Kafka through brew, so the config folder was created inside libexec. Find where the config directory is, check for zookeeper.properties inside it, and give that path.
Had the same issue.
I was following this guide, and step 2 says to run this command:
bin/zookeeper-server-start.sh config/zookeeper.properties
I had two problems: first, I wasn't inside the root directory of the untarred archive; second, I didn't copy the complete command. Make sure both are correct and try again.
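That is, both of these must hold (the directory name depends on the version you untarred):
cd kafka_2.11-2.1.0
bin/zookeeper-server-start.sh config/zookeeper.properties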
Just make sure that the /config folder exists.
You can also try passing the properties file directly, e.g.: zookeeper-server-start zookeeper.properties
I installed it with Homebrew, and this works.
This happens because bin/windows is added to the path but kafka/config is not.
Just navigate to your Kafka folder and then try to run it.
You can use PowerShell as an alternative to CMD.
Say myKafka is your Kafka home directory; extract your Kafka tar file there.
The extracted folder (kafkaDir) will contain the internal folders bin, config, etc.
Now open a PowerShell prompt and go to the myKafka folder.
Run the command below:
.\kafkaDir\bin\windows\zookeeper-server-start.bat .\kafkaDir\config\zookeeper.properties
ZooKeeper will start.
You need to use the absolute path:
$KAFKA_HOME/config/zookeeper.properties
For me I used:
$KAFKA_HOME = /usr/local/kafka
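So the full command becomes (a sketch with that value expanded):
/usr/local/kafka/bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties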
In \bin\windows\kafka-run-class.bat, add the marked section to the classpath setup:
rem Classpath addition for release
for %%i in ("%BASE_DIR%\libs\*") do (
  call :concat "%%i"
)
rem --- section to add: classpath addition for LSB-style paths ---
if exist %BASE_DIR%\share\java\kafka\* (
  call :concat %BASE_DIR%\share\java\kafka\*
)
rem --- end of section to add ---
rem Classpath addition for core
for %%i in ("%BASE_DIR%\core\build\libs\kafka_%SCALA_BINARY_VERSION%*.jar") do (
  call :concat "%%i"
)
You have to run it from the Kafka home directory, but you are running it from bin.

Hadoop: FileNotFoundException on Windows

This problem seems to have been raised on Stack Overflow already, but my case is quite different. The file or folder location Hadoop is looking for is created in C:/tmp/hadoop-SYSTEM/mapred/local/taskTracker/jobcache/: job folders are created there while running the wordcount example. But even though the files and folders are available, it throws a FileNotFoundException; it seems the files are not being recognized. I even tried re-formatting the namenode, which is one of the solutions suggested in forums, but the problem still exists.
Note: Hadoop version 0.20.2
ERROR:
13/04/11 10:24:20 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
13/04/11 10:24:21 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/04/11 10:24:21 INFO input.FileInputFormat: Total input paths to process : 1
13/04/11 10:24:22 INFO mapred.JobClient: Running job: job_201304111023_0001
13/04/11 10:24:23 INFO mapred.JobClient: map 0% reduce 0%
13/04/11 10:24:34 INFO mapred.JobClient: Task Id : attempt_201304111023_0001_m_000002_0, Status : FAILED
java.io.FileNotFoundException: File C:/tmp/hadoop-SYSTEM/mapred/local/taskTracker/jobcache/job_201304111023_0001/attempt_201304111023_0001_m_000002_0/work/tmp does not exist.
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
at org.apache.hadoop.mapred.TaskRunner.setupWorkDir(TaskRunner.java:519)
at org.apache.hadoop.mapred.Child.main(Child.java:155)
Check whether the permissions on that folder have been set properly; this type of error can occur if write permissions are not granted on the folder.
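On Windows Vista/Server 2008 or later, one way to inspect and, if needed, grant full control on the job cache folder (an illustrative sketch; which account needs the grant depends on the user running the TaskTracker, here SYSTEM as the path suggests):
icacls C:\tmp\hadoop-SYSTEM
icacls C:\tmp\hadoop-SYSTEM /grant SYSTEM:(OI)(CI)F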