Apache Phoenix not getting started - apache-phoenix

I have installed HBase 1.1.3 in a multi-node cluster configuration and wanted to run Apache Phoenix on top of it. I downloaded Phoenix 4.7 and installed it per the guidelines mentioned here: https://phoenix.apache.org/installation.html
But when I run the following command: sqlline.py
it hangs at the point shown below.
hadoop@hostname:~$ sqlline.py hostname
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:localhost none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:localhost
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/phoenix-4.7.0-HBase-1.1-bin/phoenix-4.7.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
16/05/10 13:06:18 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

It seems that the Phoenix client is unable to connect to the HBase znode in the ZooKeeper cluster. Please do the following:
Check that ZooKeeper is up (see the sketch below for a quick way to inspect it).
Check under what name you have registered HBase in ZooKeeper. If the name is not hbase, you need to tell the client; in that case the command would look like sqlline.py hostname:2181:/znode-for-hbase-name.
Check that you have added phoenix-[version]-server.jar to the lib folder on all HBase nodes, then try again.
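A minimal sketch of the first two checks using ZooKeeper's own CLI, assuming a default install with zkCli.sh on the PATH and ZooKeeper listening on port 2181:
$ zkCli.sh -server hostname:2181
# once connected, list the root znodes and look for the HBase entry
[zk: hostname:2181(CONNECTED) 0] ls /
# expect something like [hbase, zookeeper]; some distributions register
# HBase under a different name, such as /hbase-unsecure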

You need to add the following JARs to the HBase lib directory:
phoenix-spark-4.7.0-HBase-1.1.jar
phoenix-4.7.0-HBase-1.1-server.jar
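For example, a sketch of copying them and restarting HBase, assuming HBase lives under /usr/local/hbase and Phoenix was unpacked to the path shown in the log above (adjust both paths for your layout):
$ cp /usr/local/phoenix-4.7.0-HBase-1.1-bin/phoenix-4.7.0-HBase-1.1-server.jar /usr/local/hbase/lib/
$ cp /usr/local/phoenix-4.7.0-HBase-1.1-bin/phoenix-spark-4.7.0-HBase-1.1.jar /usr/local/hbase/lib/
# the region servers only pick up the new JARs after a restart
$ /usr/local/hbase/bin/stop-hbase.sh && /usr/local/hbase/bin/start-hbase.sh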

Related

How do I install the Google Cloud JDBC driver manually for the Flyway CLI?

Looking at this reference: https://docs.kony.com/konylibrary/konyfabric/kony_fabric_manual_install_guide/Content/FlywayNew.htm
It says the Google Cloud SQL drivers need to be installed manually for the Flyway CLI, but how do I install them manually? I can't find any documentation on it.
EDIT:
I added this to my flyway.conf: flyway.jarDirs=/Users/my/flyway
Then I downloaded the driver into that folder:
mvn dependency:get -Ddest=/Users/my/flyway -Dartifact=com.google.cloud.sql:postgres-socket-factory:1.3.0
but when I try to use it I get this error:
The SocketFactory class provided com.google.cloud.sql.postgres.SocketFactory could not be instantiated.
EDIT: this is the debug output when running flyway baseline against my DB
Flyway Community Edition 7.10.0 by Redgate
DEBUG: AWS SDK available: false
DEBUG: Google Cloud Storage available: true
DEBUG: Scanning for filesystem resources at 'sql'
ERROR: Skipping filesystem location:sql (not found).
ERROR: Unexpected error
org.flywaydb.core.internal.exception.FlywaySqlException:
Unable to obtain connection from database (jdbc:postgresql:///mydb ?unixSocketPath=/var/run/cloudsql/ &cloudSqlInstance=my-gcp-project:us-central1:my-cloudsql-instance &socketFactory=com.google.cloud.sql.postgres.SocketFactory &user=jenkins-flyway &password=******************************************) for user 'null': The SocketFactory class provided com.google.cloud.sql.postgres.SocketFactory could not be instantiated.
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL State : 08006
Error Code : 0
Message : The SocketFactory class provided com.google.cloud.sql.postgres.SocketFactory could not be instantiated.
at org.flywaydb.core.internal.jdbc.JdbcUtils.openConnection(JdbcUtils.java:71)
at org.flywaydb.core.internal.jdbc.JdbcConnectionFactory.<init>(JdbcConnectionFactory.java:68)
at org.flywaydb.core.Flyway.execute(Flyway.java:510)
at org.flywaydb.core.Flyway.baseline(Flyway.java:406)
at org.flywaydb.commandline.Main.executeOperation(Main.java:226)
at org.flywaydb.commandline.Main.main(Main.java:149)
Caused by: org.postgresql.util.PSQLException: The SocketFactory class provided com.google.cloud.sql.postgres.SocketFactory could not be instantiated.
at org.postgresql.core.SocketFactoryFactory.getSocketFactory(SocketFactoryFactory.java:43)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:182)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:223)
at org.postgresql.Driver.makeConnection(Driver.java:465)
at org.postgresql.Driver.connect(Driver.java:264)
at org.flywaydb.core.internal.jdbc.DriverDataSource.getConnectionFromDriver(DriverDataSource.java:268)
at org.flywaydb.core.internal.jdbc.DriverDataSource.getConnection(DriverDataSource.java:232)
at org.flywaydb.core.internal.jdbc.JdbcUtils.openConnection(JdbcUtils.java:55)
... 5 more
Caused by: java.lang.ClassNotFoundException: com.google.cloud.sql.postgres.SocketFactory
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:636)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:182)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:519)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:375)
at org.postgresql.util.ObjectFactory.instantiate(ObjectFactory.java:46)
at org.postgresql.core.SocketFactoryFactory.getSocketFactory(SocketFactoryFactory.java:39)
... 13 more
This is for Postgres on Google Cloud, yes?
The intention is that you put driver JARs into the drivers folder inside the Flyway product (not 100% sure for v4.0 as that's very elderly now - 7.11 is current). jarDirs should work too, but its intended use is for Java-based migrations.
However, it looks like a problem within the driver. Are you making sure to put all of the driver's dependencies there too? Could you provide a full debug log (that is, flyway migrate -X) with exception details so we can take a look?
EDIT: Looking at that debug log, it's definitely a loading problem around the driver - the underlying ClassNotFoundException means com.google.cloud.sql.postgres.SocketFactory was never found on the classpath, which usually means the JAR or one of its dependencies is missing from the scanned folder.
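One way to get the socket factory together with all of its transitive dependencies into one folder is the Maven dependency plugin's copy-dependencies goal; note that the mvn dependency:get call above only copies the single artifact to -Ddest. A sketch, assuming /Users/my/flyway is the folder Flyway scans and using a throwaway pom (the pom coordinates here are placeholders):
$ cat > pom.xml <<'EOF'
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>tmp</groupId>
  <artifactId>flyway-drivers</artifactId>
  <version>1</version>
  <dependencies>
    <dependency>
      <groupId>com.google.cloud.sql</groupId>
      <artifactId>postgres-socket-factory</artifactId>
      <version>1.3.0</version>
    </dependency>
  </dependencies>
</project>
EOF
# copy the artifact plus every transitive dependency into the scanned folder
$ mvn dependency:copy-dependencies -DoutputDirectory=/Users/my/flyway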

kafka-connect file-pulse connector standalone could not start (ClassNotFoundException on OffsetManager)

I want to use the FilePulse connector to load XML files into Kafka.
Below is my environment:
Win10 WSL with Ubuntu installed
downloaded Confluent Platform 5.5.1 (see https://www.confluent.io/download/) and unpacked it
downloaded the 1.5.2 zip from GitHub (https://github.com/streamthoughts/kafka-connect-file-pulse/releases) and unzipped it
modified connect-standalone.properties under the Confluent path (etc/kafka/connect-standalone.properties) to include the path /home/min/streamthoughts-kafka-connect-file-pulse-1.5.2/lib
I did not create any topic.
I started ZooKeeper and Kafka, and tried to start Kafka Connect standalone as below:
$ zookeeper-server-start etc/kafka/zookeeper.properties
$ kafka-server-start etc/kafka/server.properties
$ connect-standalone \
etc/kafka/connect-standalone.properties \
/home/min/streamthoughts-kafka-connect-file-pulse-1.5.2/etc/quickstart-connect-file-pulse-csv.properties
but it fails as shown below:
[2020-09-08 15:57:45,522] INFO Loading plugin from: /home/min/streamthoughts-kafka-connect-file-pulse-1.5.2/lib/kafka-connect-file-pulse-expression-1.5.2.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:239)
[2020-09-08 15:57:45,541] INFO Registered loader: PluginClassLoader{pluginLocation=file:/home/min/streamthoughts-kafka-connect-file-pulse-1.5.2/lib/kafka-connect-file-pulse-expression-1.5.2.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:262)
[2020-09-08 15:57:45,541] INFO Loading plugin from: /home/min/streamthoughts-kafka-connect-file-pulse-1.5.2/lib/kafka-connect-file-pulse-filters-1.5.2.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:239)
[2020-09-08 15:57:45,553] INFO Registered loader: PluginClassLoader{pluginLocation=file:/home/min/streamthoughts-kafka-connect-file-pulse-1.5.2/lib/kafka-connect-file-pulse-filters-1.5.2.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:262)
[2020-09-08 15:57:45,554] INFO Loading plugin from: /home/min/streamthoughts-kafka-connect-file-pulse-1.5.2/lib/kafka-connect-file-pulse-plugin-1.5.2.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:239)
[2020-09-08 15:57:45,575] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectStandalone:130)
java.lang.NoClassDefFoundError: io/streamthoughts/kafka/connect/filepulse/offset/OffsetManager
at java.lang.Class.getDeclaredConstructors0(Native Method)
at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671)
at java.lang.Class.getConstructor0(Class.java:3075)
at java.lang.Class.newInstance(Class.java:412)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.versionFor(DelegatingClassLoader.java:385)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.getPluginDesc(DelegatingClassLoader.java:355)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanPluginPath(DelegatingClassLoader.java:328)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanUrlsAndAddPlugins(DelegatingClassLoader.java:261)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.registerPlugin(DelegatingClassLoader.java:253)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initPluginLoader(DelegatingClassLoader.java:222)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:199)
at org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:60)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:79)
Caused by: java.lang.ClassNotFoundException: io.streamthoughts.kafka.connect.filepulse.offset.OffsetManager
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:104)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 13 more
Question
Can you please help me figure out the fix?
P.S. Below is what I tried for the JDBC connector, just to check whether other connectors work; it ran with no exceptions.
connect-standalone \
etc/kafka/connect-standalone.properties \
/home/min/confluent-5.5.1/etc/kafka-connect-jdbc/sink-quickstart-sqlite.properties
P.P.S. Below is the content of the FilePulse connector properties file (I did not make any changes):
connector.class=io.streamthoughts.kafka.connect.filepulse.source.FilePulseSourceConnector
topic=connect-file-pulse-quickstart-csv
tasks.max=1
filters=ParseDelimitedRow
# Delimited Row filter
filters.ParseDelimitedRow.extractColumnName=headers
filters.ParseDelimitedRow.trimColumn=true
filters.ParseDelimitedRow.type=io.streamthoughts.kafka.connect.filepulse.filter.DelimitedRowFilter
skip.headers=1
task.reader.class=io.streamthoughts.kafka.connect.filepulse.reader.RowFileInputReader
# File scanning
fs.cleanup.policy.class=io.streamthoughts.kafka.connect.filepulse.clean.LogCleanupPolicy
fs.scanner.class=io.streamthoughts.kafka.connect.filepulse.scanner.local.LocalFSDirectoryWalker
fs.scan.directory.path=/tmp/kafka-connect/examples/
fs.scan.interval.ms=10000
# Internal Reporting
internal.kafka.reporter.bootstrap.servers=localhost:9092
internal.kafka.reporter.id=connect-file-pulse-quickstart-csv
internal.kafka.reporter.topic=connect-file-pulse-status
# Track file by name and hash
offset.strategy=name+hash
P.P.P.S. Below is the key info from the connect-standalone.properties file (I added /home/min/streamthoughts-kafka-connect-file-pulse-1.5.2/lib):
# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins
# (connectors, converters, transformations). The list should consist of top level directories that include
# any combination of:
# a) directories immediately containing jars with plugins and their dependencies
# b) uber-jars with plugins and their dependencies
# c) directories immediately containing the package directory structure of classes of plugins and their dependencies
# Note: symlinks will be followed to discover dependencies or plugins.
# Examples:
# plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors,
plugin.path=/usr/share/java,/home/min/streamthoughts-kafka-connect-file-pulse-1.5.2/lib
# plugin.path=/usr/share/java,/home/min/confluent-5.5.1/share/java/kafka-connect-jdbc
I would not call this a perfect answer, but it is just the way I got it working.
I found that the plugin.path property for the file-pulse connector has to be the parent of the folder that holds the JAR files, not that folder itself: when plugin.path pointed straight at lib, Connect loaded each JAR as a separate isolated plugin, so classes living in the sibling JARs (such as OffsetManager) could not be found. Pointing it one level up makes Connect treat lib as a single plugin directory whose JARs share a class loader (see the layout sketch after the quoted instructions below).
So the below worked.
$ cat etc/kafka/connect-standalone.properties
key info extracted:
plugin.path=/user/share/java,/home/min/streamthoughts-kafka-connect-file-pulse-1.5.2
# plugin.path=/usr/share/java,/home/min/confluent-5.5.1/share/java/kafka-connect-jdbc
Before, it was like below and threw the exception I mentioned:
plugin.path=/user/share/java,/home/min/streamthoughts-kafka-connect-file-pulse-1.5.2/lib
# plugin.path=/usr/share/java,/home/min/confluent-5.5.1/share/java/kafka-connect-jdbc
Maybe I did not understand the instructions below from the connect-standalone.properties file, or maybe the file-pulse connector does not follow the standards/instructions stated there. Certainly for the JDBC connector the path is the one directly containing the JAR files.
# a) directories immediately containing jars with plugins and their dependencies
# b) uber-jars with plugins and their dependencies
# c) directories immediately containing the package directory structure of classes of plugins and their dependencies
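For reference, a sketch of the layout that makes case (a) above apply: plugin.path names the distribution root, so lib becomes the directory immediately containing the JARs and their shared dependencies (file names taken from the log above, abridged):
plugin.path=/home/min/streamthoughts-kafka-connect-file-pulse-1.5.2
# on disk:
# /home/min/streamthoughts-kafka-connect-file-pulse-1.5.2/
#   lib/
#     kafka-connect-file-pulse-expression-1.5.2.jar
#     kafka-connect-file-pulse-filters-1.5.2.jar
#     kafka-connect-file-pulse-plugin-1.5.2.jar
#     ... (remaining dependency JARs)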

Apache Beam exception while running the WordCount example in Eclipse

Downloaded the Maven dependencies in Eclipse using:
<dependency>
  <groupId>org.apache.beam</groupId>
  <artifactId>beam-runners-direct-java</artifactId>
  <version>0.2.0-incubating</version>
</dependency>
When I download and run the WordCount example after changing the gs:// path to C://examples//misc.txt, I get the exception below. I did not pass any runner. How do I pass the runner option and output params when running from Eclipse?
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Exception in thread "main" java.lang.IllegalStateException: Failed to validate C://examples//misc.txt
at org.apache.beam.sdk.io.TextIO$Read$Bound.apply(TextIO.java:288)
at org.apache.beam.sdk.io.TextIO$Read$Bound.apply(TextIO.java:195)
at org.apache.beam.sdk.runners.PipelineRunner.apply(PipelineRunner.java:76)
at org.apache.beam.runners.direct.DirectRunner.apply(DirectRunner.java:205)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:401)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:324)
at org.apache.beam.sdk.values.PBegin.apply(PBegin.java:59)
at org.apache.beam.sdk.Pipeline.apply(Pipeline.java:174)
at org.apache.beam.examples.WordCount.main(WordCount.java:206)
Caused by: java.io.IOException: Unable to find handler for C://examples//misc.txt
at org.apache.beam.sdk.util.IOChannelUtils.getFactory(IOChannelUtils.java:188)
at org.apache.beam.sdk.io.TextIO$Read$Bound.apply(TextIO.java:283)
... 8 more
I believe Apache Beam parses the syntax C://directory/file like http://domain/file -- it thinks C is a protocol name and directory is a domain. The inner exception is saying that C is an unknown protocol.
Try to avoid the :// separator when referring to local files. I'd suggest the regular Windows form: C:\directory\file.
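As for passing the runner and the output parameter from Eclipse: Beam pipeline options are plain program arguments, so you can set them under Run > Run Configurations... > Arguments. A sketch, assuming the option names used by the Beam WordCount example (--inputFile and --output; check your WordCount.java, since option names vary between versions):
--runner=DirectRunner --inputFile=C:\examples\misc.txt --output=C:\examples\counts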

Scalding Tutorial with HDFS: Data is missing from one or more paths in: List(tutorial/data/hello.txt)

After configuring ssh and rsync, when I try to run the Scalding tutorial (https://github.com/Cascading/scalding-tutorial/) with the command:
$ scripts/scald.rb --hdfs tutorial/Tutorial0.scala
I get the following error:
com.twitter.scalding.InvalidSourceException: [com.twitter.scalding.TextLineWrappedArray(tutorial/data/hello.txt)] Data is missing from one or more paths in: List(tutorial/data/hello.txt)
This error happens even though the file tutorial/data/hello.txt really exists.
How can I fix this?
Stdout:
$ scripts/scald.rb --hdfs tutorial/Tutorial0.scala
scripts/scald.rb:194: warning: already initialized constant SCALA_LIB_DIR
dkondratev@hadoop-n002.maxus.lan's password:
dkondratev@hadoop-n002.maxus.lan's password:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/phoenix/phoenix-4.0.0.2.1.2.1-471-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
14/07/07 19:05:45 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
14/07/07 19:05:45 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
Exception in thread "main" java.lang.Throwable: GUESS: Data is missing from the path you provided.
If you know what exactly caused this error, please consider contributing to GitHub via following link.
https://github.com/twitter/scalding/wiki/Common-Exceptions-and-possible-reasons#comtwitterscaldinginvalidsourceexception
at com.twitter.scalding.Tool$.main(Tool.scala:132)
at com.twitter.scalding.Tool.main(Tool.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
I think the problem is that you are telling Scalding to run on HDFS, but the file you are providing as input is in your local file system, not in HDFS. Before running the example, upload the file to HDFS:
hadoop fs -mkdir tutorial
hadoop fs -mkdir tutorial/data
hadoop fs -put tutorial/data/hello.txt tutorial/data/hello.txt
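You can then verify the file is visible at the path Scalding will resolve before re-running:
hadoop fs -ls tutorial/data/hello.txt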
Try packing your job into a fat jar using the Maven Shade plugin and then run your Scalding job via the hadoop command:
hadoop jar your-uber.jar com.twitter.scalding.Tool bar.foo.MyClassJob --hdfs --input ... --output ...
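A minimal sketch of the Shade plugin configuration for building that fat jar (the plugin version here is an assumption; any recent release should work):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.4.3</version>
  <executions>
    <execution>
      <!-- bundle all dependencies into the jar at package time -->
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>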

jboss server log file

I am trying to run JBoss, but I get the following error:
[javac] C:\Program Files\jbpm-5.0-try3\jbpm-installer\build.xml:518: warning
: 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to
false for repeatable builds
[java] SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
[java] SLF4J: Defaulting to no-operation (NOP) logger implementation
[java] SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for fu
rther details.
[java] Task service started correctly !
[java] Task service running ...
Also, it does not run on port 8080.
What could be the problem?
How can I see the log file?
About the log files:
JBoss should have the log files in the folder %JBOSS%\server\default\log.
About port 8080:
I would check whether some other server is already configured to listen on 8080 (a quick check is sketched below). Especially if you are doing a lot of experimental installations, you can end up with several JBosses, Tomcats and Glassfishes listening on the same port, so most of the servers won't receive their requests. At least it has happened to me once (but don't tell anybody).
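On Windows (which this install appears to be, given the C:\Program Files path) you can see what already owns the port like this:
netstat -ano | findstr :8080
rem the last column is the owning PID; identify the process with:
tasklist /FI "PID eq <pid-from-above>"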
It could be that logging failed to initialize and further errors prevented your application from deploying correctly.
Check this first:
Failed to load class org.slf4j.impl.StaticLoggerBinder
This error is reported when the org.slf4j.impl.StaticLoggerBinder class could not be loaded into memory. This happens when no appropriate SLF4J binding could be found on the class path. Placing one (and only one) of slf4j-nop.jar, slf4j-simple.jar, slf4j-log4j12.jar, slf4j-jdk14.jar or logback-classic.jar on the class path should solve the problem.
As of SLF4J version 1.6, in the absence of a binding, SLF4J will default to a no-operation (NOP) logger implementation.
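If you want the [java] tasks above to produce real log output, the usual fix is to drop exactly one SLF4J binding JAR onto the classpath the installer uses. A sketch; the lib folder and the JAR version below are assumptions, so point them at wherever the installer's JARs actually live:
rem destination folder is an assumption based on the build.xml path in the log
copy slf4j-simple-1.6.1.jar "C:\Program Files\jbpm-5.0-try3\jbpm-installer\lib"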