Apache Beam exception while running WordCount example in Eclipse

Downloaded the Maven dependencies in Eclipse using:
<dependency>
    <groupId>org.apache.beam</groupId>
    <artifactId>beam-runners-direct-java</artifactId>
    <version>0.2.0-incubating</version>
</dependency>
When I download and run the WordCount example after changing the gs:// path to C://examples//misc.txt, I get the exception below. I did not pass any runner. How do I pass the runner option and output parameters when running from Eclipse?
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Exception in thread "main" java.lang.IllegalStateException: Failed to validate C://examples//misc.txt
at org.apache.beam.sdk.io.TextIO$Read$Bound.apply(TextIO.java:288)
at org.apache.beam.sdk.io.TextIO$Read$Bound.apply(TextIO.java:195)
at org.apache.beam.sdk.runners.PipelineRunner.apply(PipelineRunner.java:76)
at org.apache.beam.runners.direct.DirectRunner.apply(DirectRunner.java:205)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:401)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:324)
at org.apache.beam.sdk.values.PBegin.apply(PBegin.java:59)
at org.apache.beam.sdk.Pipeline.apply(Pipeline.java:174)
at org.apache.beam.examples.WordCount.main(WordCount.java:206)
Caused by: java.io.IOException: Unable to find handler for C://examples//misc.txt
at org.apache.beam.sdk.util.IOChannelUtils.getFactory(IOChannelUtils.java:188)
at org.apache.beam.sdk.io.TextIO$Read$Bound.apply(TextIO.java:283)
... 8 more

I believe Apache Beam parses the syntax C://directory/file the same way as http://domain/file -- it treats C as a protocol name and directory as a domain. The inner exception is saying that C is an unknown protocol.
Please avoid the :// separator when referring to local files. I'd suggest using the regular Windows form: C:\directory\file.
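As for passing the runner and output parameters from Eclipse: they can be supplied as program arguments (Run > Run Configurations... > Arguments tab). A minimal sketch, assuming the option names used by the Beam WordCount example (--inputFile and --output; check the Options interface in your copy of WordCount.java, since the names have varied across versions) and that beam-runners-direct-java is on the classpath:
--inputFile=C:\examples\misc.txt --output=C:\examples\counts --runner=DirectRunner
Inside main, the example turns those arguments into options roughly like this:
// Parse the command-line arguments into the example's options interface
WordCountOptions options = PipelineOptionsFactory.fromArgs(args)
    .withValidation()
    .as(WordCountOptions.class);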

Related

Maven Spring Boot app starts up from mvn spring-boot:run, but fails with WebSphere errors running Spring Boot from Eclipse

I work on quite a few services with similar architectures, with small differences. They all use Java 8, Maven, Spring Boot, and Jersey.
I normally debug them in Eclipse (currently on 2021-06) using "Run As->SpringBoot". This works perfectly fine. I can also run them from a command line using "mvn spring-boot:run", but that's just an academic exercise, because I prefer to run them from Eclipse.
When I run it from mvn, it successfully starts up, and I can hit listener endpoints (test case is actuator/info right now) with no problem.
When I run it from Eclipse, I get the following mystifying error:
BeanCreationException: Error creating bean with name 'mbeanExporter' defined in class path resource [org/springframework/boot/autoconfigure/jmx/JmxAutoConfiguration.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.jmx.export.annotation.AnnotationMBeanExporter]: Factory method 'mbeanExporter' threw exception; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mbeanServer' defined in class path resource [org/springframework/boot/autoconfigure/jmx/JmxAutoConfiguration.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [javax.management.MBeanServer]: Factory method 'mbeanServer' threw exception; nested exception is org.springframework.jmx.MBeanServerNotFoundException: Could not access WebSphere's AdminServiceFactory.getMBeanFactory/getMBeanServer method; nested exception is java.lang.NullPointerException
Notice the last couple of phrases in that message.
This service uses an EJB client class that is configured to connect to WebSphere EJBs. One side effect of this is that JMX is not available, so I have to set spring.jmx.enabled=false.
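For reference, that property typically lives in src/main/resources/application.properties (a hedged sketch; the question doesn't say where the author applies it):
# application.properties -- assumed placement, for illustration only
spring.jmx.enabled=false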
Note that I went to the trouble of saving the log files from both runs, and I painstakingly compared them, verifying that they logged the same information (varying only by timestamps). The stack trace above is the first place where they truly diverge.
Curiously, although the Eclipse run shows this error and the listeners do not respond, it doesn't terminate the startup attempt. The service just sits there, sort of brain-dead.
I'm sure what I've provided isn't enough information, but I'm not sure what else would be useful.

Apache Phoenix not getting started

I have installed HBase 1.1.3 in a multi-node cluster configuration and wanted to run Apache Phoenix on top of it. I downloaded Phoenix 4.7 and installed it per the guidelines here: https://phoenix.apache.org/installation.html
But when I run the following command: sqlline.py
it hangs at the point shown below.
hadoop@hostname:~$ sqlline.py hostname
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:localhost none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:localhost
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/phoenix-4.7.0-HBase-1.1-bin/phoenix-4.7.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
16/05/10 13:06:18 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
It seems that the Phoenix client is unable to connect to the HBase znode in the ZooKeeper cluster. Please do the following:
Check that ZooKeeper is up.
Check under what name HBase is registered in ZooKeeper. If the name is not hbase, you need to tell the client; in that case the command would look like sqlline.py hostname:2181:/znode-for-hbase-name (see the hbase-site.xml sketch after this list).
Check that you have added phoenix-[version]-server.jar to the lib folder on all HBase nodes, then try again.
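One way to find the registered znode name is to look at zookeeper.znode.parent in hbase-site.xml (a sketch; /hbase-unsecure is just an example value used by some distributions):
<property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase-unsecure</value>
</property>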
You need to add the following jars to the HBase lib directory:
phoenix-spark-4.7.0-HBase-1.1.jar
phoenix-4.7.0-HBase-1.1-server.jar
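A minimal sketch of that step, assuming Phoenix was unpacked under /usr/local/phoenix-4.7.0-HBase-1.1-bin and HBase lives at /usr/local/hbase (adjust the paths to your layout):
# On every HBase node: copy the Phoenix jars into HBase's lib directory
cp /usr/local/phoenix-4.7.0-HBase-1.1-bin/phoenix-4.7.0-HBase-1.1-server.jar /usr/local/hbase/lib/
cp /usr/local/phoenix-4.7.0-HBase-1.1-bin/phoenix-spark-4.7.0-HBase-1.1.jar /usr/local/hbase/lib/
# Restart HBase so the region servers pick up the new jars
/usr/local/hbase/bin/stop-hbase.sh
/usr/local/hbase/bin/start-hbase.sh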

Typesafe Activator UI not launching

play.api.Application$$anon$1: Execution exception[[IllegalArgumentException: requirement failed: Source file 'C:\Users\shriv_000\.activator\1.3.6\templates\index.db_e25b80033130c08.tmp' is a directory.]]
    at play.api.Application$class.handleError(Application.scala:296) ~[play_2.11-2.3.9.jar:2.3.9]
    at play.api.DefaultApplication.handleError(Application.scala:402) [play_2.11-2.3.9.jar:2.3.9]
    at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$14$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:205) [play_2.11-2.3.9.jar:2.3.9]
    at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$14$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:202) [play_2.11-2.3.9.jar:2.3.9]
    at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36) [scala-library.jar:0.13.8]
Caused by: java.lang.IllegalArgumentException: requirement failed: Source file 'C:\Users\shriv_000\.activator\1.3.6\templates\index.db_e25b80033130c08.tmp' is a directory.
    at scala.Predef$.require(Predef.scala:219) ~[scala-library.jar:0.13.8]
    at sbt.IO$.copyFile(IO.scala:584) ~[client-all-2-11-0.3.5.jar:0.3.5]
    at sbt.IO$.move(IO.scala:786) ~[client-all-2-11-0.3.5.jar:0.3.5]
    at activator.package$RichIO$.createViaTemporary$extension(package.scala:30) ~[activator-templates-cache-1.0-a0afb008ea619bf9d87dc010156cddffa8a6f880.jar:1.3.6]
    at activator.templates.repository.UriRemoteTemplateRepository$$anonfun$resolveIndexTo$1.apply(UriRemoteTemplateRepository.scala:228) ~[activator-templates-cache-1.0-a0afb008ea619bf9d87dc010156cddffa8a6f880.jar:1.0-a0afb008ea619bf9d87dc010156cddffa8a6f880]
[info] application - onStop received closing down the app
I am on Activator 1.3.6.
I have seen TypeSafe Activator installation error and it might be different than my problem. I have been able to create and launch a Play project.
The solution provided there did not work for me.
I see this problem frequently.
I found a workaround: try executing Activator as an administrator.

Scalding Tutorial with HDFS: Data is missing from one or more paths in: List(tutorial/data/hello.txt)

After configuring ssh and rsync, when I try to run the Scalding tutorial (https://github.com/Cascading/scalding-tutorial/) with the command:
$ scripts/scald.rb --hdfs tutorial/Tutorial0.scala
I get the following error:
com.twitter.scalding.InvalidSourceException: [com.twitter.scalding.TextLineWrappedArray(tutorial/data/hello.txt)] Data is missing from one or more paths in: List(tutorial/data/hello.txt)
This error happens even though the file tutorial/data/hello.txt really exists.
How can I fix this?
Stdout:
$ scripts/scald.rb --hdfs tutorial/Tutorial0.scala
scripts/scald.rb:194: warning: already initialized constant SCALA_LIB_DIR
dkondratev@hadoop-n002.maxus.lan's password:
dkondratev@hadoop-n002.maxus.lan's password:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/phoenix/phoenix-4.0.0.2.1.2.1-471-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
14/07/07 19:05:45 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
14/07/07 19:05:45 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
Exception in thread "main" java.lang.Throwable: GUESS: Data is missing from the path you provided.
If you know what exactly caused this error, please consider contributing to GitHub via following link.
https://github.com/twitter/scalding/wiki/Common-Exceptions-and-possible-reasons#comtwitterscaldinginvalidsourceexception
at com.twitter.scalding.Tool$.main(Tool.scala:132)
at com.twitter.scalding.Tool.main(Tool.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
I think the problem you are having is that you are telling Scalding to run on HDFS, but the file you are providing as input is in your local file system, not in HDFS. Before running the example, upload the file to HDFS:
hadoop fs -mkdir tutorial
hadoop fs -mkdir tutorial/data
hadoop fs -put tutorial/data/hello.txt tutorial/data/hello.txt
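To double-check that the upload landed where Scalding expects it (a hypothetical verification step, not part of the original answer):
hadoop fs -ls tutorial/data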
Try packing your job into a fat jar using the Maven Shade plugin (see the pom.xml sketch below) and then running your Scalding job via the hadoop command:
hadoop jar your-uber.jar com.twitter.scalding.Tool bar.foo.MyClassJob --hdfs --input ... --output ...
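For reference, a minimal sketch of the Shade plugin configuration in pom.xml (the version shown is an assumption; pin whatever is current for your build):
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <!-- assumed version, for illustration -->
    <version>2.4.3</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
        </execution>
    </executions>
</plugin>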

JBoss server log file

I am trying to run JBoss.
But I get the following error:
[javac] C:\Program Files\jbpm-5.0-try3\jbpm-installer\build.xml:518: warning: 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds
[java] SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
[java] SLF4J: Defaulting to no-operation (NOP) logger implementation
[java] SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
[java] Task service started correctly !
[java] Task service running ...
Also, it does not run on port 8080.
What could be the problem?
How can I see the log file?
About the log files:
JBoss should have the log files in the folder %JBOSS%\server\default\log.
About port 8080:
I would check whether some other server is already configured to listen on 8080. Especially if you are doing a lot of experimental installations, you can end up with several JBosses, Tomcats, and Glassfishes listening on the same port, so most of the servers won't receive their requests. At least it has happened to me once (but don't tell anybody).
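A quick way to see what is already listening on the port (hypothetical commands for illustration; the Windows variant matches the paths shown in the question):
REM On Windows: list the process ID bound to port 8080
netstat -ano | findstr :8080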
It could be that logging failed to initialize and further errors prevented your application from deploying correctly.
Check this first:
Failed to load class org.slf4j.impl.StaticLoggerBinder
This error is reported when the org.slf4j.impl.StaticLoggerBinder class could not be loaded into memory. This happens when no appropriate SLF4J binding could be found on the class path. Placing one (and only one) of slf4j-nop.jar, slf4j-simple.jar, slf4j-log4j12.jar, slf4j-jdk14.jar or logback-classic.jar on the class path should solve the problem.
As of SLF4J version 1.6, in the absence of a binding, SLF4J will default to a no-operation (NOP) logger implementation.
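For example, with Maven, placing a single binding on the class path is a one-dependency change (a sketch; the version is an assumption, match it to your slf4j-api):
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <!-- assumed version; match your slf4j-api -->
    <version>1.7.36</version>
</dependency>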