I've downloaded Hadoop from the Yahoo tutorial, started the Linux VM with Hadoop, and created a new DFS location in Eclipse (I entered the IP of my VM, Map/Reduce master port 9001, and DFS port 9000).
But in the node I got the error "Error: null".
What am I doing wrong?
I'm using Eclipse Europa 3.3.1 and Hadoop 0.18.0.
Thanks for helping.
You should check the additional properties that you are able to configure. I was also getting this problem, but the actual cause was that the hadoop.job.ugi property was not available to be set up. To fix this, go to "\workspace\.metadata\.plugins\org.apache.hadoop.eclipse\locations", open the XML file there, add the property "hadoop.job.ugi" with the value "hadoop-user,ABC", and then restart Eclipse. It worked for me.
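For orientation, a rough sketch of locating that file from a shell (the workspace path is just an example; the property itself still has to be added to the XML by hand):
# Example only: find the Hadoop Eclipse plugin's location definition files
# in the workspace, then edit the XML by hand to add "hadoop.job.ugi" with
# the value "hadoop-user,ABC" and restart Eclipse.
cd ~/workspace/.metadata/.plugins/org.apache.hadoop.eclipse/locations
ls *.xml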
I had configured Eclipse properly when I installed it in my VM where Hadoop was running. But due to incompatibilities between newer versions of Eclipse and the Eclipse Hadoop plugin, I gave up on using it, because it seems to me that the plugin doesn't offer any real benefits.
I was facing the same problem. I think the problem was that there were no folders or files in HDFS. I solved it with these steps:
hadoop dfs -mkdir /<folder-name>
hadoop fs -ls /
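For instance, a small sketch with example names so that the DFS browser has something to show:
# Example folder and file names only: create a directory in HDFS, upload a
# local file, and list it before refreshing the DFS location in Eclipse.
hadoop dfs -mkdir /user/hadoop-user/input
hadoop dfs -put /etc/hosts /user/hadoop-user/input/
hadoop fs -ls /user/hadoop-user/input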
I am not able to run an Apache Kafka service due to a failure while trying to start a Zookeeper instance. I have downloaded and tried all 3 available downloads from the official site (binaries and source). When I try to start Zookeeper with
./bin/zookeeper-server-start.sh config/zookeeper.properties
I always get the same error message:
Classpath is empty. Please build the project first e.g. by running
'./gradlew jar -PscalaVersion=2.11.12'
The same goes for the following command (after starting a separate Zookeeper instance, not the one built into Kafka):
./bin/kafka-server-start.sh config/server.properties
I have tried it under Ubuntu 17.04 and 18.04. When I try this on a virtual machine running Ubuntu 16.04, it works.
Unfortunately, all I found regarding this problem was for Windows.
Thank you for any help.
In my case it had nothing to do with binary vs. source, because both of them give the same "Classpath is empty. Please build the project first" error. It's because there is a space in the path where Kafka resides.
I had the same issue; the problem was that I was downloading the Kafka source. To make my Kafka server run, I downloaded the Kafka binaries instead and it worked for me.
Kafka binaries: http://mirror.cc.columbia.edu/pub/software/apache/kafka/1.1.0/
We need to download the Kafka binary, not the source.
Download the binary from a mirror:
http://mirrors.estointernet.in/apache/kafka/2.2.0/kafka_2.11-2.2.0.tgz
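For example, a sketch using the mirror URL above (any Apache mirror works; the version numbers are the ones from this answer):
# Download and unpack the pre-built binary release, then start Zookeeper
# from the extracted directory. No Gradle build step is needed for binaries.
wget http://mirrors.estointernet.in/apache/kafka/2.2.0/kafka_2.11-2.2.0.tgz
tar -xzf kafka_2.11-2.2.0.tgz
cd kafka_2.11-2.2.0
./bin/zookeeper-server-start.sh config/zookeeper.properties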
If you are building Kafka from source, go to your terminal and run:
$ ./gradlew jar -PscalaVersion=2.11.12
I had the same issue. I solved it by removing the white spaces from my folder name, e.g. "Kafka binary" -> "Kafka_binary".
I get the same message when I try bin/kafka-topics.sh.
It's just because you have a space in the full path.
Go to the folder and execute "pwd"; in the resulting path, you must replace any white space in folder names with an underscore or use camel case.
I changed the path:
~/Documents/Formation/Moi/Big Data/Logiciels/kafka_2.12-2.4.1
to
~/Documents/Formation/Moi/Logiciels/kafka_binary
and it works (with the binary distribution)
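A minimal sketch of such a rename, with hypothetical paths:
# Rename a Kafka directory whose path contains a space, then start the broker
# from the new location. The paths here are placeholders.
mv "$HOME/kafka binary" "$HOME/kafka_binary"
cd "$HOME/kafka_binary"
./bin/kafka-server-start.sh config/server.properties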
Try echo $CLASSPATH in the terminal and check whether Java is present on the system.
Or maybe you need to install Java.
Please check the Scala version installed on your system. It should be scalaVersion=2.11.12.
Otherwise, download the Kafka binary built for the Scala version you have installed.
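A quick way to check, assuming scala and java are on your PATH:
# Print the installed Scala and Java versions; pick a Kafka binary built for
# a matching Scala version (e.g. kafka_2.11-x.y.z for Scala 2.11).
scala -version
java -version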
For the past several days I've been experiencing this error while publishing to either JBoss EAP 6.3 or WildFly 8.2 from Eclipse.
Error renaming D:\Servers\wildfly-8.2.0.Final\standalone\tmp\tmp9064011157118650757.jar
to D:\Servers\wildfly-8.2.0.Final\standalone\deployments\BusinessService.war\WEB-INF\lib\spring-web-4.2.3.RELEASE.jar.
This may be caused by incorrect file permissions, or your server's temporary deploy
directory may be on a different filesystem than the final destination. You may adjust
these settings in the server editor.
The problem occurs when I use "Add and Remove..." to add projects to the server and then try to publish them so the server can start.
I've experienced this issue on two different machines (home (Wildfly) and work (JBoss EAP)).
I'm using:
Windows 7 / 10
Eclipse Mars / Luna
JBoss Tools plugin 4.3 / 4.2
JDK 1.8.0.66 / 1.8.0.65
Maven
Building with Maven from Eclipse and from the command line makes no difference. The server is configured to deploy projects as compressed archives. On both machines my user has administrator rights and full rights on the server directory.
So far I've tried:
recreating the server multiple times with different configurations
using a newly created workspace
reinstalling JBoss Tools
reinstalling Eclipse
using different JDK versions
I'm really at a loss here and I don't know how to proceed in resolving this issue. Please help.
If you are using Windows, the path could get too long and cause this error. A simple fix is to move WildFly closer to the root.
I had the same problem and solved it like this:
First of all, stop the server (Servers -> WildFly (right click) -> Stop), then clean it. After that you can run the server again.
I had this problem several times on the new Windows 10 machine that my employer gave me. Since I did not have admin rights, it was a hectic process to troubleshoot this issue. The simple fix would be moving JBOSS_HOME closer to the root. However, you need to do a proper restart of Eclipse; I would rather recommend a complete restart of your computer, because after all you are going to change JBOSS_HOME in the Windows environment variables.
This is related to a permissions issue on the WildFly folder. Allow full control on the WildFly folder.
https://issues.jboss.org/browse/JBIDE-18697
I moved the WildFly home to reduce the overall path length, and also removed any non-alphanumeric characters from the folder name (like "-" and "."). This worked for me; everything else (removing tmp and deployments, restarting WildFly, restarting Eclipse, rebooting the computer) failed.
I also suspect that the issue stemmed from running WildFly from a ConEmu and/or git bash shell. Running from a plain CMD shell seems more robust.
I also got stuck on the same problem. I tried the steps below and it worked:
Clear the deployments and tmp folders inside the standalone folder of the WildFly directory (see the sketch after these steps).
Delete the server and add it again.
Build the project and start the server after a successful build.
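A rough sketch of the first step, assuming a shell such as git bash and that you run it from the WildFly home directory (D:\Servers\wildfly-8.2.0.Final in the question):
# Remove stale deployment markers and temporary files before re-adding
# the server in Eclipse. Run this from the WildFly/JBoss home directory.
rm -rf standalone/deployments/* standalone/tmp/*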
This is a terribly annoying error that either the Eclipse team or Red Hat needs to fix.
The solution is to close Eclipse, right click on the icon -> Run As Administrator. This solved it for me.
I have installed Hadoop 1.2.1 (all on a single machine). I am getting an error in Eclipse that says
'The DFS browser cannot browse anything else but a distributed file system'.
Also tried changing the core-site.xml value
<value>hdfs://localhost:50040</value>
to
<value>hdfs://[your IP]:50040</value>
Please make sure your NameNode is running at port 50040. If you have not changed it, then it would be running on port 9000. Also, make sure that your Hadoop Eclipse plugin contains all the required jars.
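As a quick sanity check, a sketch assuming a standard Hadoop 1.x single-machine install with $HADOOP_HOME set:
# Confirm the NameNode process is running and see which address and port
# core-site.xml actually points the DFS at.
jps | grep -i namenode
grep -A1 "fs.default.name" $HADOOP_HOME/conf/core-site.xml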
I am very new to Java, Eclipse, and Hadoop, so pardon my mistake if my question seems too silly.
The question is:
I have a 3-node CDH4 cluster running RHEL5 on a cloud platform. The CDH4 setup has been completed and now I want to write some sample MapReduce programs to learn about it.
Here is my understanding of how to do it:
To write Java MapReduce programs I will have to install Eclipse on my main server, right? Which version of Eclipse should I go for?
And just installing Eclipse will not be enough; I will have to change some settings so that it can use my CDH cluster. What is needed to do this?
And last but not least, could you please suggest some sites where I can get more info on the same? Remember, I am just a beginner at all of this. :)
Thanks in advance...
pankaj
Pankaj, you can always visit the official page. Apart from this you might find these links helpful:
http://blog.cloudera.com/blog/2013/08/how-to-use-eclipse-with-mapreduce-in-clouderas-quickstart-vm/
http://cloudfront.blogspot.in/2012/07/how-to-run-mapreduce-programs-using.html#.UgH8OWT08Vk
It is not mandatory to have Eclipse on the main server (main server = master machine?). Any of the last 3 versions of Eclipse work perfectly fine; I don't know about earlier versions. You can either run your job through Eclipse directly, or you can write your job in Eclipse and export it as a jar. You can then copy this jar to your JobTracker machine and execute it there through the shell using the hadoop jar command. If you are running your job directly through Eclipse, you need to tell it the location of your NameNode and JobTracker machines through these properties:
Configuration conf = new Configuration();
conf.set("fs.default.name", "hdfs://NN_HOST:9000");
conf.set("mapred.job.tracker", "JT_HOST:9001");
(Change the hostnames and ports as per your configuration).
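If you instead go the export-a-jar route mentioned above, the cluster-side run step looks roughly like this (the jar name, main class, and HDFS paths are placeholders):
# Hypothetical jar, class, and path names; run on the JobTracker machine
# (or any node with the Hadoop client configured for the cluster).
hadoop jar WordCount.jar com.example.WordCount /user/pankaj/input /user/pankaj/output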
One quick suggestion, though: you can always search for these kinds of things before posting a question. A lot of info is available on the net and it is very easily accessible.
HTH
I am very new to Hadoop. I need to install it and play around with some samples.
So I referred to this tutorial. I have installed the Sandbox given in that tutorial. I need to configure Eclipse on Windows, specifying the VM location as shown in the image given in the tutorial.
I have installed Eclipse Europa and the Hadoop plugin.
Then in Map/Reduce Locations I entered the VM IP for the host name, the Linux user name in UserName, 9001 for the Map/Reduce port, and 9000 for the DFS port.
In the Advanced tab I set mapred.system.dir to /hadoop/mapred/system,
and there is no hadoop.job.ugi property in which to give the username.
After I click OK, I can't see the HDFS file system under my DFS Locations in Eclipse.
Please help me with this.
I also got the same problem. The problem here is not related to the Hadoop configuration but to Eclipse. To fix this, go to "\workspace\.metadata\.plugins\org.apache.hadoop.eclipse\locations", open the XML file there, add the property "hadoop.job.ugi" with the value "hadoop-user,ABC", and then restart Eclipse. It worked for me.
I tried giving just one value, i.e. without ABC, but it didn't work. I don't know the significance of this comma-separated value, but since I have just started the tutorial I hope to get that answer soon :)
I too ran into the same issue. I installed Red Hat Cygwin (the openssh and openssl packages) and updated the "Path" environment variable with the path to cygwin/bin (c:\rhcygwin\bin). Then my Eclipse DFS location was able to connect to Hadoop on the virtual machine. Once that was successful I saw the option "hadoop.job.ugi".
http://v-lad.org/Tutorials/Hadoop/00%20-%20Intro.html describes installing Cygwin.
Note: I am running the Hadoop VM on Windows Vista.
I spawned Eclipse from within Cygwin and it worked fine for me (i.e. I could see the "hadoop.job.ugi" parameter). Also, I didn't make any changes to my PATH environment variable.
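A minimal sketch of that approach, assuming Cygwin is installed at c:\rhcygwin as in the answer above and that Eclipse lives at a hypothetical C:\eclipse:
# From a Cygwin shell: make sure Cygwin's bin directory is on PATH, then
# launch Eclipse so it inherits that environment. The Eclipse path below
# is a placeholder.
export PATH=/cygdrive/c/rhcygwin/bin:$PATH
/cygdrive/c/eclipse/eclipse.exe &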