Hadoop plugin (1.0.3) for Eclipse

I'm new to Hadoop. Can anyone tell me how to build the Hadoop Eclipse plugin (version 1.0.3)? The prebuilt plugin was removed from /hadoop-x.x.x/contrib/ (in my case, x.x.x = 1.0.3).
There's an eclipse-plugin folder in /hadoop-x.x.x/src/contrib/.
By the way, what's the "typical way" to develop a MapReduce app (word count, for example) using Eclipse, in terms of:
Configuration (standalone, pseudo-distributed, ...)
Coding conventions (folder structure, coding, debugging, ...)

When you have the Hadoop Eclipse plugin installed and configured,
create a Map/Reduce project in Eclipse; it provides the required Hadoop dependencies and other jars.
Then you need to create a Main class, a Mapper class, and a Reducer class. In the Main class you configure the Job; a word-count example is sketched below.
Once done, you can run the main program with "Run on Hadoop"; there is no need to start Hadoop before running the program.
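A minimal word-count sketch of those three pieces (the class names here are illustrative; this uses the org.apache.hadoop.mapreduce API that ships with 1.0.3, with the Mapper, Reducer, and Job setup collapsed into one file for brevity):

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Mapper: emits (word, 1) for every token in the input line
        public static class TokenizerMapper
                extends Mapper<Object, Text, Text, IntWritable> {
            private final static IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        // Reducer: sums the counts emitted for each word
        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();

            @Override
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        // Main: configures and submits the Job; args are the input and output paths
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = new Job(conf, "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Run it with an input directory and an output directory as the two program arguments; note that the output directory must not already exist.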
For the 1.0.3 plugin:
Apache has removed the plugin from the Hadoop installation folder. Instead, you can find the Eclipse plugin source code with a build.xml file at "${HADOOP_HOME}\hadoop-1.0.3\src\contrib\eclipse-plugin", or you can simply download it from here.
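Building it with Ant is roughly (a sketch, assuming the stock build.xml from the 1.0.3 source tree; the exact property names can differ between releases):

    cd ${HADOOP_HOME}/src/contrib/eclipse-plugin
    ant jar -Dversion=1.0.3 -Declipse.home=/path/to/eclipse

The resulting hadoop-eclipse-plugin jar then goes into Eclipse's plugins (or dropins) directory.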

Related

How to add spark-core-assembly-0.7.0.jar to classpath in Ubuntu to run a Spark project

I am new to Spark. I am trying to run a simple Spark project on my local system.
Based on the tutorials, I ran 'sbt/sbt assembly', and the jar file was created at core/target/scala-2.9.2/spark-core-assembly-0.7.0.jar. To run the samples, could you please tell me where and how I have to add this jar to the classpath?
Regards,
Dinesh
The quick start guide in the Spark documentation covers developing standalone applications using Spark with Scala and Java. Those instructions show how to add a Spark dependency to your Maven or SBT project.
If you're not using Maven or SBT to build your project, you'll have to pass the appropriate flags to javac and java to add the Spark assembly JAR to your classpath, just as you would for any other JAR dependency.
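For example, on a Unix-like system (SimpleApp is a hypothetical application class; adjust the jar path to wherever your assembly was built):

    javac -cp core/target/scala-2.9.2/spark-core-assembly-0.7.0.jar SimpleApp.java
    java -cp .:core/target/scala-2.9.2/spark-core-assembly-0.7.0.jar SimpleApp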
As an aside, 0.7.0 is a pretty old version of Spark (it was released almost a year ago); I'd recommend using a newer version, such as 0.9.0.

Trying to connect to Hadoop 2.0.0 in Eclipse, error: server IPC version 7 cannot communicate with client version 3

I need to connect from Eclipse Juno on a Windows system to a Unix system running a Hadoop 2.0.0 database. I tried adding an Eclipse plug-in for an older version of Hadoop, but when I add a Map/Reduce location, I get the following error:
server IPC version 7 cannot communicate with client version 3
According to some blog posts I found through Google, a version mismatch is causing the issue.
Can anyone help? Please help me find the correct plugin, or point out where I am going wrong.
Unless I add this plug-in I am not able to connect to the database. Is there any workaround?
Thanks,
Hitz
A couple of things: Hadoop is not a database; it's an open-source framework for distributed computing. You can run MapReduce programs on Hadoop directly, without an Eclipse plugin. Simply package the classes into a jar, copy the jar to the Unix system, and use the command below to run it.
hadoop jar <Jar Name> <Name of Main Class> <Input Dir> <Output Dir>
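For example, with a hypothetical wordcount.jar whose main class is WordCount, reading and writing HDFS paths:

    hadoop jar wordcount.jar WordCount /user/hitz/input /user/hitz/output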
If the plugin you have is not compatible with your version of Hadoop or Eclipse, check the link for how to build your own plugin.

Using Hadoop 2.2.0 jar files in NetBeans

I was previously using Hadoop 1.2.1 in one of my NetBeans projects. I did this by including the various jar files from the 1.2.1 distribution I downloaded from Hadoop's website.
I was wondering, is a similar approach possible with Hadoop 2.2.0? Namely, can I just include a bunch of jar files in my NetBeans project and plug into Hadoop that way?
Thanks in advance!
You can. There are more jars in the 2.x distributions of Hadoop, but the same principle should work.
On a side note, you may also want to look into using Maven for dependency management; it will manage the list of included jars in NetBeans for you.
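For example, with Hadoop 2.2.0 a single hadoop-client dependency in your pom.xml pulls in the client-side jars and their transitive dependencies:

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.2.0</version>
    </dependency>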

Maven Eclipse Integration

I have recently started out with Maven. I am trying to integrate Maven + Eclipse (Juno) + Tomcat 7.
I have downloaded the m2e-wtp plugin for Eclipse and created a Maven project whose structure follows the standard Maven project structure. It is also configured as a dynamic web project.
It is a multi-module project with two Flex modules (f1 and f2) and one webapp module (w). I have configured all the plugins correctly, and there is no problem with the configuration of the POMs.
What I want to achieve is:
1. When I clean and build the project in Eclipse using Project > Clean, Eclipse does not build the war in the target folder of my web application project (w), nor does it copy any of the Flex resources to the target folder. However,
2. When I run the project as a Maven build, by right-clicking the web application project and running "maven install", it creates everything as expected.
My question is: is it possible to achieve what I mentioned in point (1), or is the way mentioned in point (2) the only correct one?
3. I am also not able to automatically deploy the files generated in step (2) to Tomcat. Do I need another Maven plugin for this?
Please note that this is my first experience with Maven + Eclipse. I have followed certain tutorials, so please be lenient when voting negatively.
From what I know, it is not possible to force Eclipse to use Maven directly (I would gladly be proven wrong).
Eclipse does not use Maven to build (point 1). Using the m2e plugin, it is possible to run Maven to perform the build, as you discovered (point 2).
If you are looking for that kind of tight integration, you can look at NetBeans or IntelliJ IDEA, which integrate Maven natively.
EDIT:
About (3): there is a Tomcat Maven plugin that can deploy the generated WAR file to a running Tomcat instance. Check its Usage page for more details.
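A minimal sketch of that configuration in the webapp module's pom.xml (the manager URL, server id, and context path are placeholders; the Tomcat manager application must be enabled, with the credentials stored under that server id in settings.xml):

    <plugin>
        <groupId>org.apache.tomcat.maven</groupId>
        <artifactId>tomcat7-maven-plugin</artifactId>
        <version>2.2</version>
        <configuration>
            <url>http://localhost:8080/manager/text</url>
            <server>TomcatServer</server>
            <path>/w</path>
        </configuration>
    </plugin>

With that in place, running mvn tomcat7:deploy builds the WAR and pushes it to the running instance.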

Launching OSGi from IDEA

I develop a Scala application using IntelliJ IDEA. I'd like my application modules to be OSGi bundles.
In Eclipse it is possible to create a project that is both a Scala project and a plug-in project. Eclipse also supports launching the Equinox platform and provides a great tool for configuring which bundles to start and how. But I can't use Eclipse because of its poor and slow Scala plugin, so I need to use IntelliJ IDEA.
In IDEA I tried Osmorc for running OSGi, but that solution is very immature and doesn't work well. What are the other ways of launching and configuring an OSGi application from IDEA?
Not an exact answer, but one possibility would be to:
set up a Scala project with sbt and IntelliJ
use bnd4sbt (it enables you to create OSGi bundles for your sbt projects)
use scalamodules (a domain-specific language for OSGi development)
(All thanks to the work of WeigleWilczek, including Heiko Seeberger, who contributes here.)
All the OSGi frameworks can be launched as standard Java processes. For example, to launch Felix:
java -jar path/to/felix.jar
To launch Equinox:
java -jar path/to/org.eclipse.osgi_version.jar
And so on.
Unfortunately, the initial configuration differs substantially between framework implementations. For Felix you need a config.properties file, which typically lives in the conf directory of the Felix installation (or you can set the felix.config.properties system property to point it elsewhere).
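A minimal config.properties sketch (these are the standard Felix launcher properties; the bundle directory shown is just the default):

    # install and start every bundle found in the ./bundle directory
    felix.auto.deploy.dir=bundle
    felix.auto.deploy.action=install,start
    # clear cached framework state when the framework is first initialized
    org.osgi.framework.storage.clean=onFirstInit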
I'm using Pax Runner from inside IntelliJ IDEA to provision (deploy) OSGi bundles to Apache Felix and run the framework, but this is very annoying: I have to run "mvn install" first, then stop the running Pax provisioning session, then restart it, for every change I make to the bundle. There has got to be a better way...