I am new to Elasticsearch and I am facing issues connecting to it. Please find the details below:
The HQ plugin and the head plugin are showing different results (screenshots of the HQ plugin output and the head plugin output omitted).
When I try to connect from my Scala code, I get the following error:
org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:305)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:200)
at org.elasticsearch.client.transport.support.InternalTransportClient.execute(InternalTransportClient.java:106)
at org.elasticsearch.client.support.AbstractClient.index(AbstractClient.java:102)
at org.elasticsearch.client.transport.TransportClient.index(TransportClient.java:340)
at com.sksamuel.elastic4s.IndexDsl$IndexDefinitionExecutable$$anonfun$apply$1.apply(IndexDsl.scala:23)
at com.sksamuel.elastic4s.IndexDsl$IndexDefinitionExecutable$$anonfun$apply$1.apply(IndexDsl.scala:23)
at com.sksamuel.elastic4s.Executable$class.injectFuture(Executable.scala:21)
at com.sksamuel.elastic4s.IndexDsl$IndexDefinitionExecutable$.injectFuture(IndexDsl.scala:20)
at com.sksamuel.elastic4s.IndexDsl$IndexDefinitionExecutable$.apply(IndexDsl.scala:23)
at com.sksamuel.elastic4s.IndexDsl$IndexDefinitionExecutable$.apply(IndexDsl.scala:20)
at com.sksamuel.elastic4s.ElasticClient.execute(ElasticClient.scala:28)
Here is the code I use for the connection:
val settings = ImmutableSettings.settingsBuilder()
  .put("cluster.name", "elasticsearch")
  .build()
val client = ElasticClient.remote(settings, ElasticsearchClientUri("elasticsearch://10.50.xxx.xxx:9300"))
I also checked my connection: I can successfully telnet to 10.50.xxx.xxx on both ports 9200 and 9300.
I read somewhere that the problem might be with http.cors, so I added the following lines to the /etc/elasticsearch/elasticsearch.yml file on the server:
http.cors.allow-origin: "/.*/"
http.cors.enabled: true
Please suggest what I am doing wrong.
-- Update --
Thanks @Evaldas Buinauskas, it was a version problem: I had installed Elasticsearch 2.0 but was using libraries and plugins for version 1.7. I downgraded Elasticsearch to 1.7 and everything worked!
The issue comes from mismatched Elasticsearch, head plugin, and Scala client versions.
Pre-2.0 Elasticsearch still supported the _status endpoint (deprecated in 1.2.0).
Version 2.0 dropped it completely and replaced it with _recovery.
Neither the head plugin nor the Scala client had been upgraded, so both tried to call the dropped API.
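A quick way to catch this kind of mismatch early is to ask the server for its version before picking client and plugin versions. A minimal Scala sketch (any HTTP client works; the host is the one from the question, and CheckEsVersion is just an illustrative name):

import scala.io.Source

// The cluster root endpoint returns JSON that includes "version": {"number": ...};
// the transport client and plugins should match that version.
object CheckEsVersion extends App {
  val info = Source.fromURL("http://10.50.xxx.xxx:9200/").mkString
  println(info) // compare version.number against your client/plugin versions
}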
Related
Using the steps documented in structured streaming pyspark, I'm unable to create a DataFrame in PySpark from the Azure Event Hub I have set up, in order to read the stream data.
The error message is:
java.util.ServiceConfigurationError: org.apache.spark.sql.sources.DataSourceRegister: Provider org.apache.spark.sql.eventhubs.EventHubsSourceProvider could not be instantiated
I have installed the following Maven libraries (com.microsoft.azure:azure-eventhubs-spark_2.11:2.3.12 is unavailable), but none appear to work:
com.microsoft.azure:azure-eventhubs-spark_2.11:2.3.15
com.microsoft.azure:azure-eventhubs-spark_2.11:2.3.6
I have also tried ehConf['eventhubs.connectionString'] = sc._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(connectionString), but the error message returned is:
java.lang.NoSuchMethodError: org.apache.spark.internal.Logging.$init$(Lorg/apache/spark/internal/Logging;)V
The connection string is correct, as it is also used by a console application that writes to the Azure Event Hub, and that works.
Can someone point me in the right direction, please? The code in use is as follows:
from pyspark.sql.functions import *
from pyspark.sql.types import *
# Event Hub Namespace Name
NAMESPACE_NAME = "*myEventHub*"
KEY_NAME = "*MyPolicyName*"
KEY_VALUE = "*MySharedAccessKey*"
# The connection string to your Event Hubs Namespace
connectionString = "Endpoint=sb://{0}.servicebus.windows.net/;SharedAccessKeyName={1};SharedAccessKey={2};EntityPath=ingestion".format(NAMESPACE_NAME, KEY_NAME, KEY_VALUE)
ehConf = {}
ehConf['eventhubs.connectionString'] = connectionString
# For 2.3.15 version and above, the configuration dictionary requires that connection string be encrypted.
# ehConf['eventhubs.connectionString'] = sc._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(connectionString)
df = spark \
  .readStream \
  .format("eventhubs") \
  .options(**ehConf) \
  .load()
To resolve the issue, I did the following:
Uninstalled the existing Azure Event Hubs library versions
Installed the com.microsoft.azure:azure-eventhubs-spark_2.11:2.3.15 library from Maven Central
Restarted the cluster
Validated by re-running the code provided in the question
I received this same error when installing libraries with the version number com.microsoft.azure:azure-eventhubs-spark_2.11:2.3.* on a Spark cluster running Spark 3.0 with Scala 2.12.
For anyone else finding this via Google: check that you have the correct Scala library version. In my case, my cluster is Spark v3 with Scala 2.12.
Changing the "2.11" in the library version from the tutorial I was using to "2.12", so that it matches my cluster runtime version, fixed the issue.
I had to take this a step further: in the format method I had to pass the full source provider class name directly:
.format("org.apache.spark.sql.eventhubs.EventHubsSourceProvider")
Check the cluster Scala version and the library version.
Uninstall the older libraries and install:
com.microsoft.azure:azure-eventhubs-spark_2.12:2.3.17
in the shared workspace (right-click and install library) and also on the cluster.
In one of our ActiveMQ instances (v5.13.3), when trying to create a queue manually using the web console, I get the error shown in this screenshot: https://i.stack.imgur.com/gNbCT.png
There are no errors in the ActiveMQ logs, only the warning below. The instance works fine for the existing queues and topics; the problem only occurs when creating a new queue, and even a restart doesn't help. With a similar setup it works fine on other instances.
2018-07-05 14:42:49,700 | WARN | /admin/createDestination.action | org.eclipse.jetty.servlet.ServletHandler | qtp2076549461-5084797
java.io.EOFException
at java.io.RandomAccessFile.readFully(RandomAccessFile.java:438)[:1.8.0_112]
at java.io.RandomAccessFile.readFully(RandomAccessFile.java:416)[:1.8.0_112]
at org.apache.activemq.util.RecoverableRandomAccessFile.readFully(RecoverableRandomAccessFile.java:75)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.disk.page.PageFile.readPage(PageFile.java:878)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.disk.page.Transaction$2.readPage(Transaction.java:456)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.disk.page.Transaction$2.<init>(Transaction.java:447)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.disk.page.Transaction.openInputStream(Transaction.java:444)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.disk.page.Transaction.load(Transaction.java:420)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.disk.page.Transaction.load(Transaction.java:377)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.disk.index.BTreeIndex.loadNode(BTreeIndex.java:266)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.disk.index.BTreeNode.getChild(BTreeNode.java:233)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.disk.index.BTreeNode.getLast(BTreeNode.java:624)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.disk.index.BTreeIndex.getLast(BTreeIndex.java:248)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.MessageDatabase$MessageOrderIndex.addLast(MessageDatabase.java:3298)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.MessageDatabase$MessageOrderIndex.configureLast(MessageDatabase.java:3289)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.MessageDatabase.loadStoredDestination(MessageDatabase.java:2346)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.MessageDatabase.getStoredDestination(MessageDatabase.java:2298)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore$7.execute(KahaDBStore.java:722)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore$7.execute(KahaDBStore.java:716)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.disk.page.Transaction.execute(Transaction.java:802)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore.recoverMessageStoreStatistics(KahaDBStore.java:716)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.AbstractMessageStore.start(AbstractMessageStore.java:45)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore.start(KahaDBStore.java:671)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.ProxyMessageStore.start(ProxyMessageStore.java:75)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.broker.region.Queue.initialize(Queue.java:385)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.broker.region.DestinationFactoryImpl.createDestination(DestinationFactoryImpl.java:87)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.broker.region.AbstractRegion.createDestination(AbstractRegion.java:629)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.broker.jmx.ManagedQueueRegion.createDestination(ManagedQueueRegion.java:56)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.broker.region.AbstractRegion.addDestination(AbstractRegion.java:155)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.broker.region.RegionBroker.addDestination(RegionBroker.java:348)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.broker.BrokerFilter.addDestination(BrokerFilter.java:173)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.advisory.AdvisoryBroker.addDestination(AdvisoryBroker.java:239)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.broker.BrokerFilter.addDestination(BrokerFilter.java:173)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.broker.BrokerFilter.addDestination(BrokerFilter.java:173)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.broker.MutableBrokerFilter.addDestination(MutableBrokerFilter.java:178)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.broker.jmx.BrokerView.addQueue(BrokerView.java:405)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.web.DestinationFacade.addDestination(DestinationFacade.java:56)[activemq-web-5.13.3.jar:5.13.3]
at org.apache.activemq.web.controller.CreateDestination.handleRequest(CreateDestination.java:38)[file:/C:/Talend/6.2.1/esb/activemq/webapps/admin/WEB-INF/classes/:]
at org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle(SimpleControllerHandlerAdapter.java:50)[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:959)[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:893)[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:965)[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:867)[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:648)[tomcat-servlet-api-8.0.24.jar:]
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:841)[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)[tomcat-servlet-api-8.0.24.jar:]
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:808)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1669)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.apache.activemq.web.AuditFilter.doFilter(AuditFilter.java:59)[activemq-web-5.13.3.jar:5.13.3]
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99)[spring-web-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)[spring-web-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.apache.activemq.web.filter.ApplicationContextFilter.doFilter(ApplicationContextFilter.java:102)[file:/C:/Talend/6.2.1/esb/activemq/webapps/admin/WEB-INF/classes/:]
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:542)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:542)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.server.Server.handle(Server.java:499)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at java.lang.Thread.run(Thread.java:745)[:1.8.0_112]
This is most likely caused by a Jetty version that is incompatible with JDK 1.8. You can resolve it either by downgrading the JDK to 1.7 (and setting JAVA_HOME accordingly) or by upgrading Jetty to a version compatible with JDK 1.8.
I had a similar issue with ActiveMQ 5.10.0: the web console worked fine with JDK 1.7 but threw an exception under JDK 1.8.
The root cause of the problem was corrupted KahaDB database files in ActiveMQ. I copied the kahadb folder from another runtime instance (with a similar setup) and restarted the ActiveMQ service. The issue is now fixed.
I've been installing Ceilometer for OpenStack Pike on Ubuntu 16.04 LTS using this install guide.
Everything went OK up to the moment when I tried to restart gnocchi-api and got the message:
Failed to start gnocchi-api.service: Unit gnocchi-api.service not found.
I checked /etc/init.d and there is no gnocchi-api script (although gnocchi-metricd is there and works properly). I tried reinstalling the gnocchi-api package, but it didn't help. When I start gnocchi-api manually from the command line it works, although it prints a bunch of warnings (but I think they are common).
I'm looking for a way to make it run normally: as a service, using the conf file. The command-line output:
2017-11-27 20:01:40.593 6059 INFO gnocchi.rest.app [-] WSGI config used: /usr/lib/python2.7/dist-packages/gnocchi/rest/api-paste.ini
2017-11-27 20:01:40.753 6059 WARNING keystonemiddleware._common.config [-] The option "__file__" in conf is not known to auth_token
2017-11-27 20:01:40.759 6059 WARNING keystonemiddleware._common.config [-] The option "configkey" in conf is not known to auth_token
2017-11-27 20:01:40.760 6059 WARNING keystonemiddleware._common.config [-] The option "here" in conf is not known to auth_token
2017-11-27 20:01:40.762 6059 WARNING keystonemiddleware.auth_token [-] AuthToken middleware is set with keystone_authtoken.service_token_roles_required set to False. This is backwards compatible but deprecated behaviour. Please set this to True.
2017-11-27 20:01:40.768 6059 WARNING keystonemiddleware.auth_token [-] Configuring auth_uri to point to the public identity endpoint is required; clients may not be able to authenticate against an admin endpoint
STARTING test server gnocchi.rest.app.build_wsgi_app
Available at http://127.0.1.1:8000/
DANGER! For testing only, do not use in production
apt-get currently pulls version 3.1.9 of gnocchi-api. If you manually install gnocchi-api 3.1.2, the service file is very much there in it.
service gnocchi-api start works fine with that version.
But I am not sure whether the functionality is OK or whether this is an intended change in 3.1.9; I still have to check.
The same happens with the latest version on Ubuntu 16.04 / gnocchi version 4.2.0.
Confirmed as a bug as of now: https://bugs.launchpad.net/ceilometer/+bug/1750933
The gnocchi-api.service unit cannot be started because it has never been created.
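Until the package ships a proper unit again, one workaround is to hand-write a unit that runs the same command that already works from the shell. This is a sketch only: the paths, port, and user below are assumptions to adapt to your install, and since it runs the built-in test server, the "for testing only" warning from the question still applies.

# /etc/systemd/system/gnocchi-api.service -- hand-written workaround unit
[Unit]
Description=Gnocchi API
After=network.target

[Service]
User=gnocchi
ExecStart=/usr/bin/gnocchi-api --port 8041 -- --config-file /etc/gnocchi/gnocchi.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target

After creating it, run systemctl daemon-reload and then systemctl start gnocchi-api. For production, the usual recommendation is to serve the Gnocchi WSGI application behind uwsgi or Apache mod_wsgi instead.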
I'm still a newbie in Apache Spark development.
I'm using Apache Spark to query data from Google AdWords using spark-google-adwords, but I always get this error: org.apache.axis.AxisFault: (404)Not Found
I'm using Scala 2.11 and the latest stable Apache Spark. I've tried to look for a solution to this problem, but I still couldn't find the cause.
This issue was resolved by adding a copy of axis2.xml to the classpath and overriding a few connection manager params, as follows:
import org.apache.axis2.context.ConfigurationContext;
import org.apache.axis2.context.ConfigurationContextFactory;
import org.apache.axis2.transport.http.HTTPConstants;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
import org.apache.commons.httpclient.params.HttpConnectionManagerParams;

// Allow more concurrent connections per host than the Axis2 default
HttpConnectionManagerParams params = new HttpConnectionManagerParams();
params.setDefaultMaxConnectionsPerHost(20); // set value based on your requirements/load testing etc.
MultiThreadedHttpConnectionManager multiThreadedHttpConnectionManager = new MultiThreadedHttpConnectionManager();
multiThreadedHttpConnectionManager.setParams(params);
HttpClient httpClient = new HttpClient(multiThreadedHttpConnectionManager);
// Load the local copy of axis2.xml and reuse the pooled HttpClient for Axis2 calls
ConfigurationContext configurationContext = ConfigurationContextFactory.createConfigurationContextFromFileSystem("**PATH TO COPY OF AXIS2.XML**");
configurationContext.setProperty(HTTPConstants.CACHED_HTTP_CLIENT, httpClient);
Credit: https://issues.apache.org/jira/browse/AXIS2-4807
I'm trying to set up an Eclipse environment for developing and debugging Hadoop. I'm following Tom White's Hadoop: The Definitive Guide, 3rd edition. What I would like to do is get the MaxTemperature app working locally on my Windows machine within Eclipse before moving it to my Hortonworks sandbox VM. The comment on page 158 about using the local job runner seems to be what I want. I don't want to set up a full Hadoop installation on Windows; I'm hoping that with the right config params I can convince it to run as a Java application inside Eclipse.
Windows: 7
Eclipse: Luna
Hadoop: 2.4.0
JDK: 7
When I set the run configuration for MaxTemperatureDriver (source code on page 157) to
inputfile outputdir foo (a deliberately bogus 3rd parameter)
I get the usage message, so I know I'm running my program with those params.
If I remove the bogus third param, I get:
Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1255)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1251)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1250)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1279)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at mark.MaxTemperatureDriver.run(MaxTemperatureDriver.java:52)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at mark.MaxTemperatureDriver.main(MaxTemperatureDriver.java:56)
I've tried inserting -conf, but it seems to be ignored: there is no error message if I specify a nonexistent path.
I've tried inserting -fs file:/// -jt local, but it makes no difference.
I've tried inserting -D mapreduce.framework.name=local.
I've tried specifying the input and output with the file: format.
Note: I'm not asking how to configure Eclipse to connect to a remote Hadoop installation. I want the application to run within Eclipse.
Is this possible? Any ideas?
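For completeness, the programmatic equivalent of the flags I've been passing is something like the sketch below (written in Scala against the Hadoop 2.x API; the two property names are the standard ones, everything else is illustrative):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.mapreduce.Job

// Force the local job runner and the local filesystem instead of a cluster.
val conf = new Configuration()
conf.set("mapreduce.framework.name", "local")
conf.set("fs.defaultFS", "file:///")
val job = Job.getInstance(conf, "Max temperature")
// ...then set the mapper, reducer, and input/output paths as in the book's driver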
Additional info:
I turned on debugging and saw:
582 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider
583 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Cannot pick org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider - returned null protocol
I'm wondering not why YarnClientProtocolProvider failed, but why it didn't try LocalClientProtocolProvider.
New info:
It seems that this is an issue with Hadoop 2.4.0. I recreated my environment with Hadoop 1.2.1, followed the instructions in
http://gerrymcnicol.com/index.php/2014/01/02/hadoop-and-cassandra-part-4-writing-your-first-mapreduce-job/
added the Windows hack from
http://bigdatanerd.wordpress.com/2013/11/14/mapreduce-running-mapreduce-in-windows-file-system-debug-mapreduce-in-eclipse
and it all started working.
The following blog post will be useful:
Running mapreduce in Windows filesystem