While I'm trying to run Apache Atlas, I'm facing an HBase error (I'm using embedded HBase and Solr) - apache-atlas

My Apache Atlas server has started, but I found errors in my application.log file.
The UI for Apache Atlas is also not running.
I've followed each and every step from the Apache website. All went well.
I gave all permissions in the atlas-env.sh and application-properties files.
Can anyone help me figure it out?
2019-10-25 12:25:49,366 WARN - [main:] ~ Running setup per configuration atlas.server.run.setup.on.start. (SetupSteps$SetupRequired:186)
2019-10-25 12:25:50,104 WARN - [main:] ~ Retrieve cluster id failed (ConnectionImplementation:551)
java.util.concurrent.ExecutionException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/hbaseid
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at org.apache.hadoop.hbase.client.ConnectionImplementation.retrieveClusterId(ConnectionImplementation.java:549)
at org.apache.hadoop.hbase.client.ConnectionImplementation.<init>(ConnectionImplementation.java:287)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:219)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:114)
at org.janusgraph.diskstorage.hbase2.HBaseCompat2_0.createConnection(HBaseCompat2_0.java:46)
at org.janusgraph.diskstorage.hbase2.HBaseStoreManager.<init>(HBaseStoreManager.java:314)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:58)
at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:476)
at org.janusgraph.diskstorage.Backend.getStorageManager(Backend.java:408)
at org.janusgraph.graphdb.configuration.GraphDatabaseC

When HBase starts, the HBase Master creates the znode "/hbase/hbaseid" in ZooKeeper.
1. Check the processes.
Check whether HBase and ZooKeeper are running with 'jps -m'.
If you configured HBase to manage ZooKeeper internally, you will not see a separate ZooKeeper process with jps; in that case, check its port with 'netstat -nt | grep ZK_PORT' (normally it uses 2181).
netstat -nt | grep 2181
2. Check the ZooKeeper node
If you run a ZooKeeper cluster independently, you can check the node "/hbase/hbaseid" with the ZooKeeper CLI like this:
ZOOKEEPER/bin/zkCli.sh
[zk: ...] ls /
[zk: ...] get /hbase/hbaseid
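For reference, a healthy check might look roughly like this (illustrative only; the -server address assumes a local single-node setup, and the actual cluster id bytes vary per installation):
$ ZOOKEEPER/bin/zkCli.sh -server localhost:2181
[zk: localhost:2181(CONNECTED) 0] ls /
[hbase, zookeeper]
[zk: localhost:2181(CONNECTED) 1] get /hbase/hbaseid
<binary cluster id written by the HBase Master>
If "hbase" is missing from ls /, or get /hbase/hbaseid returns NoNode, that is exactly the KeeperErrorCode = NoNode from the log above: the HBase Master never registered itself in this ZooKeeper ensemble (or Atlas is pointing at a different one).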

I hope this could help you.
install atlas
You can download the source code of Atlas v2.0.0 or the master branch from here and build it:
$ export MAVEN_OPTS="-Xms2g -Xmx2g"
$ mvn clean install
$ mvn clean package -Pdist
If you build the master branch, you can find the server package under /SOURCE_CODE/distro/target/apache-atlas-3.0.0-SNAPSHOT-server/apache-atlas-3.0.0-SNAPSHOT.
You should configure the server prior to running it.
Here are the minimum settings. Please find the atlas-application.properties file in the conf directory.
atlas.graph.storage.hostname=xxx.xxx.xxx.xxx:xxxx => zookeeper addr and port for hbase
atlas.graph.index.search.backend=[solr or elasticsearch] => choose one you want to use.
atlas.graph.index.hostname=xxx.xxx.xxx.xxx => solr or elasticsearch server's addr
atlas.kafka.zookeeper.connect=xxx.xxx.xxx.xxx:xxxx => zookeeper addr and port for Kafka
atlas.kafka.bootstrap.servers=xxx.xxx.xxx.xxx:xxxx => kafka addr
atlas.audit.hbase.zookeeper.quorum=xxx.xxx.xxx.xxx:xxxx => zookeeper addr and port for hbase
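As a concrete illustration only (the hosts and ports below are made-up placeholders for a setup where ZooKeeper, Solr, and Kafka all run on the same node; substitute your own addresses), the block above might end up looking like:
atlas.graph.storage.hostname=localhost:2181
atlas.graph.index.search.backend=solr
atlas.graph.index.hostname=localhost
atlas.kafka.zookeeper.connect=localhost:2181
atlas.kafka.bootstrap.servers=localhost:9092
atlas.audit.hbase.zookeeper.quorum=localhost:2181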
To run the server,
$ bin/atlas_start.py
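Once it is up, you can sanity-check the server before trying the UI. Assuming the default port 21000 and the default admin/admin credentials (adjust if you changed them):
$ curl -u admin:admin http://localhost:21000/api/atlas/admin/version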
install zookeeper
Actually, there is almost nothing to do to install ZooKeeper; just follow the standard steps (a minimal sketch is below).
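A minimal standalone sketch, assuming a generic ZooKeeper release and a writable data directory of your choosing (both are placeholders):
$ tar xzf apache-zookeeper-X.Y.Z-bin.tar.gz
$ cd apache-zookeeper-X.Y.Z-bin
$ cat > conf/zoo.cfg <<EOF
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
EOF
$ bin/zkServer.sh start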
In this case, you should change your HBase environment in hbase-env.sh:
export HBASE_MANAGES_ZK=false
If you see a warning in the HBase log file like 'Could not start ZK at requested port of 2181.', check the hbase-site.xml file and set hbase.cluster.distributed to true.
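For example, in hbase-site.xml:
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>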

Related

Kafka Sink: ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectStandalone:130)

I am trying to stream data from one file to another. It was working earlier, but suddenly it started giving the error ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectStandalone:130). I have restarted ZooKeeper, the Kafka server, Schema Registry, and the source and sink connectors, but I am still facing the same issue and unable to resolve it. Any suggestion would be helpful.
Source connector:
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=/home/jimmacaulay/Desktop/ETL/Kafka/confluent-5.5.1/data/data/Jim_Source.csv
topic=Jim
Sink connector:
name=local-file-sink
connector.class=FileStreamSink
tasks.max=1
file=/home/jimmacaulay/Desktop/ETL/Kafka/confluent-5.5.1/data/data/Jim_Sink.csv
topics=Jim
Error:
[2020-08-18 06:25:50,482] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectStandalone:130)
org.apache.kafka.connect.errors.ConnectException: Unable to initialize REST server
at org.apache.kafka.connect.runtime.rest.RestServer.initializeServer(RestServer.java:217)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:87)
Caused by: java.io.IOException: Failed to bind to 0.0.0.0/0.0.0.0:8083
at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:346)
at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:307)
at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:231)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
at org.eclipse.jetty.server.Server.doStart(Server.java:385)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
at org.apache.kafka.connect.runtime.rest.RestServer.initializeServer(RestServer.java:215)
... 1 more
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:220)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:85)
at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:342)
... 8 more
Resolved the error by starting connect-standalone with the source and sink properties together.
sh connect-standalone ../config/connect-avro-standalone.properties ../config/connect-file-source_Topic_Jim.properties ../config/connect-file-sink_Topic_Jim.properties
Earlier I was starting them separately, as below:
sh connect-standalone ../config/connect-avro-standalone.properties ../config/connect-file-source_Topic_Jim.properties
sh connect-standalone ../config/connect-avro-standalone.properties ../config/connect-file-sink_Topic_Jim.properties
Cause of the issue:
When starting them separately, connect-standalone starts first with the source properties and binds the REST port 8083. When the sink properties are then started, the second worker tries to use the same port and fails.
Solutions:
Pass both the source and sink properties when starting connect-standalone, so they share the same worker (and port).
Or define different REST port numbers in the worker properties files and start them separately (see the sketch below).
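A sketch of the second option, assuming you make a copy of the worker config for the sink worker (the file name is hypothetical; listeners is the current property for the REST port, rest.port is the older one):
# connect-avro-standalone-sink.properties (copy of connect-avro-standalone.properties)
listeners=http://0.0.0.0:8084
# on older Connect versions: rest.port=8084
Then start each worker with its own config:
sh connect-standalone ../config/connect-avro-standalone.properties ../config/connect-file-source_Topic_Jim.properties
sh connect-standalone ../config/connect-avro-standalone-sink.properties ../config/connect-file-sink_Topic_Jim.properties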

Kafka connect doesn't find available brokers when volume attached

Symptom: A modified Bitnami Kafka image contains the kafka-connect jars, and they work fine.
But once I add a volume for persistence, it can't find the existing brokers.
Details:
I modded the Bitnami image to copy the connect jars and launch connect-distributed.sh.
It works fine; connectors can consume and produce from/to the topics.
But once I add a persistent volume to the Kafka image, the first startup is OK but the next ones onwards don't work. connect.log says:
"[2020-05-21 15:59:34,786] ERROR [Worker clientId=connect-1, groupId=my-group1] Uncaught exception in herder work thread, exiting: (org.apache.kafka.connect.runtime.distributed.DistributedHerder:297)
org.apache.kafka.common.KafkaException: Unexpected error fetching metadata for topic connect-offsets
at org.apache.kafka.clients.consumer.internals.Fetcher.getTopicMetadata(Fetcher.java:403)
at org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1965)
at org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1933)
at org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:138)
at org.apache.kafka.connect.storage.KafkaOffsetBackingStore.start(KafkaOffsetBackingStore.java:109)
at org.apache.kafka.connect.runtime.Worker.start(Worker.java:186)
at org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:123)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:284)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor is below 1 or larger than the number of available brokers."
Kafka itself still works well; every topic is present (replication factor of 1) and I can consume/produce messages by hand. And I can also launch the connector system by hand successfully.
edit: My guess is that without the PV it starts the connectors after Kafka is up, but with the PV it sees immediately that connectors are already present and tries to load them before Kafka has started.
edit2:
modded image:
FROM bitnami/kafka
# copying connect jars..
ADD connect-distributed.properties /opt/prop/connect-distributed.properties
ADD modded-kafka-run.sh /opt/bitnami/scripts/kafka/run.sh
RUN chmod 755 /opt/bitnami/scripts/kafka/run.sh
modded run.sh (I just added connect-distributed.sh and the curl calls to it):
info "** Starting Kafka **"
/opt/bitnami/kafka/bin/connect-distributed.sh -daemon /opt/prop/connect-distributed.properties
# .. adding the connectors with curl
if am_i_root; then
exec gosu "$KAFKA_DAEMON_USER" "${START_COMMAND[@]}"
else
exec "${START_COMMAND[@]}"
fi
original run.sh: https://github.com/bitnami/bitnami-docker-kafka/blob/master/2/debian-10/rootfs/opt/bitnami/scripts/kafka/run.sh
Hard to tell what the issue is, but note that the ENTRYPOINT that starts Kafka actually runs after any RUN command.
It's not clear why you need to create your own Kafka Connect image when at least two already exist.
You should be using docker-compose to start three separate ZooKeeper, Kafka, and Connect services; a minimal sketch is below.
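A docker-compose sketch of that layout, assuming the Confluent images and single-broker replication factors (image tags, hostnames, and the group/topic names are placeholders to adapt):
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.1.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:6.1.0
    depends_on: [zookeeper]
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    volumes:
      - kafka-data:/var/lib/kafka/data
  connect:
    image: confluentinc/cp-kafka-connect:6.1.0
    depends_on: [kafka]
    environment:
      CONNECT_BOOTSTRAP_SERVERS: kafka:9092
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_GROUP_ID: my-group1
      CONNECT_CONFIG_STORAGE_TOPIC: connect-configs
      CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-status
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
volumes:
  kafka-data:
This keeps the broker's data volume independent of the Connect worker, and depends_on at least orders container startup (it does not wait for the broker to be ready, so the Connect image's own retry logic still matters).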

Kafka Failed to create new KafkaAdminClient on Kerberized Cluster

I have a Kerberized cluster with Kafka on it.
I want to use Confluent Schema Registry with Kafka on the cluster.
Launching the Schema Registry from my local PC, everything works just fine.
But when I uploaded it to a machine in the cluster and tried to run it from there, I get:
Error
ERROR Server died unexpectedly: (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain:50)
org.apache.kafka.common.KafkaException: Failed to create new KafkaAdminClient
...
Caused by: org.apache.kafka.common.KafkaException: javax.security.auth.login.LoginException: Message stream modified (41)
...
Caused by: javax.security.auth.login.LoginException: Message stream modified (41)
I also tried to run it from another machine on the cluster and I get the same result.
Schema-registry.properties
listeners=http://0.0.0.0:8081
kafkastore.connection.url=master01.domain.ext:2181,master02.domain.ext:2181
kafkastore.bootstrap.servers=SASL_PLAINTEXT://xxx.domain.ext:6667,SASL_PLAINTEXT://xxx.domain.ext:6667
kafkastore.topic=_schemas
debug=true
kafkastore.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
useKeyTab=true \
storeKey=true \
keyTab="/path/schema-registry/etc/schema-registry/my-keytab.keytab" \
principal="kafka/xxx.domain.ext@DOMAIN.EXT";
kafkastore.sasl.kerberos.service.name=kafka
kafkastore.security.protocol=SASL_PLAINTEXT
kafkastore.sasl.mechanism=GSSAPI
EXECUTION COMMAND:
sudo bash bin/schema-registry-start ./etc/schema-registry/schema-registry.properties
QUESTIONS:
Why does it work on my local PC and not on a machine in the cluster?
What should I change?
P.S. I get the same result even when trying to run the CMAK Yahoo kafka-manager (using the same jaas.config and the same keytab).

Error trying to start zookeeper server- Confluent setup

I am trying to setup Confluent-4.1.1 on Ubuntu 16.04. To start the ZooKeeper server, I ran ./bin/zookeeper-server-start ./etc/kafka/zookeeper.properties.txt from the root directory of Confluent by following this tutorial.
The error that comes up is-
log4j:ERROR Could not read configuration file from URL [file:./bin/../config/log4j.properties].
java.io.FileNotFoundException: ./bin/../config/log4j.properties (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:146)
at java.io.FileInputStream.<init>(FileInputStream.java:101)
at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.<clinit>(QuorumPeerMain.java:64)
log4j:ERROR Ignoring configuration file [file:./bin/../config/log4j.properties].
log4j:WARN No appenders could be found for logger (org.apache.zookeeper.server.quorum.QuorumPeerConfig).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
I am new to kafka, and I have no clue what this means. Any help in resolving this would be appreciated.
The link you're following is only for Apache Kafka, not Confluent, though they should work similarly, at least for starting Zookeeper.
If you've downloaded the Confluent distribution, though, and want a single node cluster, you can use the Confluent CLI
To start Zookeeper, Kafka, and the rest of the Confluent Platform, run
./bin/confluent start
Otherwise, the Zookeeper startup script doesn't use a txt file, and it might be unable to detect where you've extracted the tarball, so instead you can use apt like a normal software package
https://docs.confluent.io/current/installation/installing_cp/deb-ubuntu.html
According to the documentation in the link:
1: Run these commands after replacing path-to-confluent with your path (they make the "confluent" command recognizable from the terminal):
export CONFLUENT_HOME=
export PATH="${CONFLUENT_HOME}/bin:$PATH"
2: Run the command below (it starts all the services, including ZooKeeper, Kafka, Schema Registry, etc.):
confluent local services start

Kafka Connect failed to start

I installed Kafka Confluent OSS 4.0 on a fresh Linux CentOS 7 box, but Kafka Connect failed to start.
Steps to reproduce:
- Install Oracle JDK 8
- Copy the confluent-4.0.0 folder to /opt/confluent-4.0.0
- Run /opt/confluent-4.0.0/confluent start
Result :
Starting zookeeper
zookeeper is [UP]
Starting kafka
kafka is [UP]
Starting schema-registry
schema-registry is [UP]
Starting kafka-rest
kafka-rest is [UP]
Starting connect
\Kafka Connect failed to start
connect is [DOWN]
Error Log (connect.stderr) :
Exception in thread "main" java.lang.NoClassDefFoundError: io/confluent/connect/storage/StorageSinkConnectorConfig
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:54)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.getDeclaredConstructors0(Native Method)
at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671)
at java.lang.Class.getConstructor0(Class.java:3075)
at java.lang.Class.newInstance(Class.java:412)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.getPluginDesc(DelegatingClassLoader.java:279)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanPluginPath(DelegatingClassLoader.java:260)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanUrlsAndAddPlugins(DelegatingClassLoader.java:201)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.registerPlugin(DelegatingClassLoader.java:193)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:153)
at org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:47)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:70)
Caused by: java.lang.ClassNotFoundException: io.confluent.connect.storage.StorageSinkConnectorConfig
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:62)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 22 more
Additional information:
Java version :
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
Centos version :
centos-release-7-4.1708.el7.centos.x86_64
[Edit: 30/11/2017]
Editing the plugin.path variable in every properties file didn't fix the problem.
List of files containing the 'plugin.path' variable:
./etc/schema-registry/connect-avro-distributed.properties:84:plugin.path=/opt/confluent-4.0.0/share/java
./etc/schema-registry/connect-avro-standalone.properties:51:plugin.path=/opt/confluent-4.0.0/share/java
./etc/kafka/connect-distributed.properties:95:plugin.path=/opt/confluent-4.0.0/share/java
./etc/kafka/connect-standalone.properties:50:plugin.path=/opt/confluent-4.0.0/share/java
With Confluent 4.0.0, classloading isolation with plugin.path is enabled by default for Kafka Connect.
When you install Confluent Platform from deb or rpm packages the default location of your plugin.path is known beforehand.
However, when you download and extract the zip or tar.gz archive of Confluent Platform somewhere in your filesystem, it's set to:
plugin.path=share/java
This is a relative path, because when you download Confluent Platform as an archive (zip or tar.gz), the location where you extract the archive is not known (in your example above it's /opt/confluent-4.0.0/).
The CLI or Connect's bin scripts will be able to guess this location if you run it from the directory where you extracted Confluent platform:
For instance, in the example above:
cd /opt/confluent-4.0.0
./bin/confluent start
In order for you to be able to start Connect from any directory within your filesystem, given that the bin directory for Confluent Platform is in your PATH, you will need to set the property plugin.path to the absolute path location of your plugins:
To use Confluent CLI edit:
etc/schema-registry/connect-avro-distributed.properties
and set your plugin.path appropriately (here: plugin.path=/opt/confluent-4.0.0/share/java)
For the regular bin scripts edit:
./etc/kafka/connect-distributed.properties
and
./etc/kafka/connect-standalone.properties
and set your plugin.path as above (again, in your example: plugin.path=/opt/confluent-4.0.0/share/java).
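For example (a sketch using the paths from the question; adjust if your installation directory differs):
sed -i 's|^plugin.path=.*|plugin.path=/opt/confluent-4.0.0/share/java|' \
  /opt/confluent-4.0.0/etc/schema-registry/connect-avro-distributed.properties \
  /opt/confluent-4.0.0/etc/schema-registry/connect-avro-standalone.properties \
  /opt/confluent-4.0.0/etc/kafka/connect-distributed.properties \
  /opt/confluent-4.0.0/etc/kafka/connect-standalone.properties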
Given that you used the tar installation (rather than the Docker image approach): long story short, you need to be inside the Confluent distribution directory, in my example confluent-6.1.0.
When you run the command confluent local services start from the root directory, Connect fails, and anything after that failure (e.g. ksqlDB, Control Center, etc.) doesn't even get a chance to start.
When you run the same command inside of confluent-6.1.0, everything works out; a minimal sketch is below.
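In other words (the path is illustrative):
cd /path/to/confluent-6.1.0
./bin/confluent local services start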