Error CREATEing SolrCore ... Specified config does not exist in Zookeeper:default

I used the following command:
./solr -e cloud -z localhost:2181 -noprompt
The final message is the following:
{
  "responseHeader":{
    "status":0,
    "QTime":1616},
  "failure":{
    "":"org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error CREATEing SolrCore 'gettingstarted_shard2_replica1': Unable to create core [gettingstarted_shard2_replica1] Caused by: Specified config does not exist in ZooKeeper:default",
    "":"org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error CREATEing SolrCore 'gettingstarted_shard1_replica1': Unable to create core [gettingstarted_shard1_replica1] Caused by: Specified config does not exist in ZooKeeper:default",
    "":"org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error CREATEing SolrCore 'gettingstarted_shard2_replica2': Unable to create core [gettingstarted_shard2_replica2] Caused by: Specified config does not exist in ZooKeeper:default",
    "":"org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error CREATEing SolrCore 'gettingstarted_shard1_replica2': Unable to create core [gettingstarted_shard1_replica2] Caused by: Specified config does not exist in ZooKeeper:default"}}
I confirmed ZooKeeper is running:
[zk: localhost:2181(CONNECTED) 0]
I have looked almost everywhere and cannot get past this. Can anyone help?

I recently downloaded and installed Solr 4.10.3 and was going through the official Quick Start using the command:
bin/solr start -e cloud -noprompt
and encountered the same-looking exceptions as you:
"org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error CREATEing SolrCore 'gettingstarted_shard2_replica2': Unable to create core [gettingstarted_shard2_replica2] Caused by: Specified config does not exist in ZooKeeper:default"
Scrolling up in the bash output, I also saw this error line:
bin/solr: line 1085: jar: command not found
This was the reason for the exceptions: the jar command was not on the PATH. After putting the jar command on my PATH, these exceptions no longer show up. It might be the same reason you are getting them.
I am on a Fedora machine and used the following guide to set up jar, java, javac, etc. via the alternatives command (but I think you could just add the JDK's bin directory to your PATH to solve the issue):
https://ask.fedoraproject.org/en/question/59412/cannot-find-oracle-jdk-on-fedora-21/
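If you want to check and fix the PATH directly, here is a minimal sketch (assuming the JDK is installed under /usr/java/jdk; adjust the path to your installation):
command -v jar || echo "jar not found on PATH"
# Assumption: the JDK lives under /usr/java/jdk
export JAVA_HOME=/usr/java/jdk
export PATH="$JAVA_HOME/bin:$PATH"
command -v jar   # should now print the full path to jar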

Related

Could not find or load main class kafka.Kafka

I'm trying to install Kafka on my Linux machine (Mint) following this tutorial: https://kafka.apache.org/quickstart
Here is what I did:
I've downloaded the binary (kafka_2.13-3.1.0.tgz), not the source
I moved the Kafka directory to $HOME.
Ran the first command from the quickstart link:
$ bin/kafka-server-start.sh config/server.properties
but received this error:
Error: Could not find or load main class kafka.Kafka
Caused by: java.lang.ClassNotFoundException: kafka.Kafka
So what could be the reason for that error?

OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "no such file or directory": unknown

I am trying to bring up my Fabric network.
I got my orderer organization started.
I got my peer organizations started.
I got my cli started.
After that, the request fails with:
OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "no such file or directory": unknown
The error means that either working_dir is undefined, or it does not exist.
Check the cli section in your docker-compose file for the above setting.
If you are working on Windows OS, a possible cause is the file encoding (should be in Unix format).
You could open this page:
https://hyperledger-fabric.readthedocs.io/en/latest/build_network.html
and search for "No such file or directory"; there is some related troubleshooting there.
Just a short description:
Ensure that the file in question is encoded in the Unix format. This was most likely caused by not setting core.autocrlf to false in your Git configuration. There are several ways of fixing this. If you have access to the vim editor, for instance, open the file:
vim ./path/to/the/related-file
Then change its format by executing the following vim command:
:set ff=unix
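If you prefer not to use vim, here is a sketch of two alternatives (assuming dos2unix is installed; the Git setting only affects files checked out afterwards):
dos2unix ./path/to/the/related-file    # convert CRLF line endings to LF in place
git config core.autocrlf false         # stop Git from converting line endings on checkout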

ZooKeeper issue when setting up Kafka

To install Kafka, I downloaded the Kafka tarball. To start the server, I tried this command:
bin/zookeeper-server-start.sh config/zookeeper.properties
The following error occurred on entering the above command:
INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2014-08-21 11:53:55,748] FATAL Invalid config, exiting abnormally (org.apache.zookeeper.server.quorum.QuorumPeerMain)
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing config/zookeeper.properties
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:110)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:99)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:76)
Caused by: java.lang.IllegalArgumentException: config/zookeeper.properties file is missing
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:94)
... 2 more
Invalid config, exiting abnormally
Do I need to set up ZooKeeper separately? How can I resolve this?
For Windows:
Go to the kafka_2.11-2.0.0\bin\windows folder, then run:
zookeeper-server-start.bat ../../config/zookeeper.properties
This is basically because of this error:
java.lang.IllegalArgumentException: config/zookeeper.properties file is missing
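Alternatively, running from the Kafka root folder also resolves the relative path (a sketch, using the same 2.11-2.0.0 layout):
cd kafka_2.11-2.0.0
bin\windows\zookeeper-server-start.bat config\zookeeper.properties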
It would be really useful if you could share what exactly you have done so far. Also check that the file exists at the stated location and that you are running the command from the correct location: it is supposed to be run from your $KAFKA_HOME folder (where you've extracted the tar file).
I too faced the same issue when I installed Kafka via Homebrew on a MacBook.
This happens because the zookeeper.properties file is not at config/ relative to where the command is run.
Follow these steps:
cd /usr/local/Cellar/kafka/2.3.0
cd libexec
zookeeper-server-start config/zookeeper.properties
You will then get the message: INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
Earlier I was getting this error:
$ zookeeper-server-start config/zookeeper.properties
[2019-10-02 14:35:20,159] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2019-10-02 14:35:20,160] ERROR Invalid config, exiting abnormally (org.apache.zookeeper.server.quorum.QuorumPeerMain)
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing config/zookeeper.properties
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:156)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:104)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:81)
Caused by: java.lang.IllegalArgumentException: config/zookeeper.properties file is missing
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:140)
... 2 more
Invalid config, exiting abnormally
I saw that when you run the above command it doesn't pick up the config file. So if you put the complete path, like c:\Kafka\config\zookeeper.properties, it works.
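For example (a sketch, assuming Kafka is extracted to C:\Kafka):
cd C:\Kafka
bin\windows\zookeeper-server-start.bat C:\Kafka\config\zookeeper.properties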
I faced the exact same error, and after a while I realized the reason: the zookeeper.properties file could not be found because the path wasn't correct. I installed Kafka through brew, so the config folder was created inside libexec. Find where the config directory is, check for zookeeper.properties inside it, and pass that path.
Had the same issue.
I was following this guide, and step 2 says to run this command:
bin/zookeeper-server-start.sh config/zookeeper.properties
I had two problems: the first was that I wasn't inside the root directory of the extracted archive, and the second was that I hadn't copied the complete command. Make sure both are correct and try again.
First make sure the /config folder actually exists.
Then try passing the properties file name directly, e.g. zookeeper-server-start zookeeper.properties.
I installed Kafka with Homebrew, and this works.
This happens because bin/windows is on the PATH but kafka/config is not.
Just navigate to your Kafka folder and then try to run it from there.
You can use Powershell as an alternative to CMD.
Say myKafka is your Kafka home directory, and extract your Kafka tar file there.
The extracted folder (kafkaDir) will contain bin, config, and other internal folders.
Now open a PowerShell prompt and go to the myKafka folder.
Run the command below:
.\kafkaDir\bin\windows\zookeeper-server-start.bat .\kafkaDir\config\zookeeper.properties
ZooKeeper will start.
You need to fix the absolute path to:
$KAFKA_HOME/config/zookeeper.properties
For me I used:
$KAFKA_HOME = /usr/local/kafka
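Putting that together, a sketch (adjust KAFKA_HOME to wherever you extracted Kafka):
export KAFKA_HOME=/usr/local/kafka
$KAFKA_HOME/bin/zookeeper-server-start.sh $KAFKA_HOME/config/zookeeper.properties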
In \bin\windows\kafka-run-class.bat, add the marked section below to the file content:
rem Classpath addition for release
for %%i in ("%BASE_DIR%\libs\*") do (
call :concat "%%i"
)
rem --- Section to add ---
rem Classpath addition for LSB style path
if exist %BASE_DIR%\share\java\kafka\* (
call :concat %BASE_DIR%\share\java\kafka\*
)
rem --- End of section to add ---
rem Classpath addition for core
for %%i in ("%BASE_DIR%\core\build\libs\kafka_%SCALA_BINARY_VERSION%*.jar") do (
call :concat "%%i"
)
You have to run it from the Kafka home directory, but you are running it from bin.

Error running hadoop application in Eclipse on Windows

I'm trying to set up an Eclipse environment for developing and debugging Hadoop. I'm following Tom White's Hadoop: The Definitive Guide, 3rd edition. What I would like to do is get the MaxTemperature app working locally on my Windows machine within Eclipse before moving it to my Hortonworks sandbox VM. The comment on page 158 about using the local job runner seems to be what I want. I don't want to set up a full Hadoop installation on Windows. I'm hoping that with the right config params I can convince it to run as a Java application inside Eclipse.
Windows: 7
Eclipse: Luna
Hadoop: 2.4.0
JDK: 7
When I set the Run configuration for MaxTemperatureDriver (source code on page 157) to:
inputfile outputdir foo (a deliberately bogus third parameter)
I get the usage message, so I know I'm running my program with those params.
If I remove the bogus third param, I get:
Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1255)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1251)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1250)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1279)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at mark.MaxTemperatureDriver.run(MaxTemperatureDriver.java:52)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at mark.MaxTemperatureDriver.main(MaxTemperatureDriver.java:56)
I've tried inserting -conf, but it seems to be ignored: there is no error message if I specify a nonexistent path.
I've tried inserting -fs file:/// -jt local, but it makes no difference.
I've tried inserting -D mapreduce.framework.name=local.
I've tried specifying the input and output with the file: scheme.
Note: I'm not asking how to configure Eclipse to connect to a remote Hadoop installation. I want the application to run within Eclipse.
Is this possible? Any ideas?
Additional info:
I turned on debugging. I saw:
582 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider
583 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Cannot pick org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider - returned null protocol
I'm wondering not why YarnClientProtocolProvider failed, but why it didn't try LocalClientProtocolProvider.
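(An assumption worth checking here: LocalClientProtocolProvider ships in the hadoop-mapreduce-client-common jar, so if that jar is missing from the project's build path, the local provider is never offered to the Cluster. A quick sketch to verify from a shell:)
# Assumption: HADOOP_HOME points at the Hadoop 2.4.0 distribution Eclipse uses.
find "$HADOOP_HOME" -name 'hadoop-mapreduce-client-common-*.jar'
# The provider is registered via a service file inside that jar:
unzip -p "$HADOOP_HOME"/share/hadoop/mapreduce/hadoop-mapreduce-client-common-*.jar META-INF/services/org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider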
New info:
It seems that this is an issue with Hadoop 2.4.0. I recreated my environment with Hadoop 1.2.1, followed the instructions in
http://gerrymcnicol.com/index.php/2014/01/02/hadoop-and-cassandra-part-4-writing-your-first-mapreduce-job/
added the Windows hack from
http://bigdatanerd.wordpress.com/2013/11/14/mapreduce-running-mapreduce-in-windows-file-system-debug-mapreduce-in-eclipse
and it all started working.
The following blog will be useful:
Running mapreduce in Windows filesystem

FATAL org.apache.hadoop.conf.Configuration - error parsing conf file: org.xml.sax.SAXParseException

I'm trying to run Pig locally (installed using Homebrew) to test a script. However, I get the following error when I attempt to run a simple dump from the interactive prompt pig -x local:
2012-07-16 23:20:40,447 [Thread-7] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
[Fatal Error] :63:85: Character reference "&#2" is an invalid XML character.
2012-07-16 23:20:40,688 [Thread-7] FATAL org.apache.hadoop.conf.Configuration - error parsing conf file: org.xml.sax.SAXParseException: Character reference "&#2" is an invalid XML character.
The same load/dump works fine on Elastic MapReduce.
I can't find any XML config files, and I've tried with both versions 0.9.2 and 0.10.0.
What am I missing?
Edit: I just checked a direct download (vs. Homebrew) and it doesn't seem to work either.
You should check that your Hadoop configuration files contain correct configuration data.
Have a look in your hadoop/conf directory, inside:
hdfs-site.xml
mapred-site.xml
core-site.xml
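A quick way to spot a malformed file, assuming xmllint is available on your machine:
xmllint --noout hadoop/conf/*.xml   # prints the line/column of the first parse error in any invalid file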
I finally worked out what the problem was. I ended up having to use dtruss -p on the pig/java process. This revealed a temporary directory and dynamically generated XML files. Once the temporary directory was discovered, everything quickly fell into place.
It was picking up the proxy excludes from my network connections, which had, as far as I can tell, &#2 (http://www.fileformat.info/info/unicode/char/02/index.htm) embedded in it. How this invalid value came to be in my network preferences in the first place, I haven't the faintest clue.
The value was then being pulled into dynamically generated files, for example /tmp/hadoop-vertis/mapred/staging/vertis-1005847898/.staging/job_local_0001/job.xml.
The offending lines:
<property><name>ftp.nonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
<property><name>socksNonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
<property><name>http.nonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
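If you need to hunt for such an invalid character yourself, here is a sketch that scans a generated file for XML-illegal control characters (the staging path is the example from above and will differ per job):
perl -ne 'print "$.: $_" if /[\x00-\x08\x0B\x0C\x0E-\x1F]/' /tmp/hadoop-vertis/mapred/staging/vertis-1005847898/.staging/job_local_0001/job.xml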