Kafka start-up failed due to NoAuth issue - apache-kafka

I have a single-node setup of Apache kafka_2.12-2.5.1 (with embedded ZooKeeper) on one host. SSL is enabled on ZooKeeper, and ZooKeeper starts just fine. Kafka, however, throws a fatal "NoAuth for /brokers/ids" error at start-up.
Please help out if you have any pointers.
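A "NoAuth" error on /brokers/ids usually means the broker reached ZooKeeper without the identity the znode ACLs demand. As a sketch only (the paths and passwords below are placeholders, not known values from this setup), Kafka 2.5 can talk TLS to ZooKeeper with settings along these lines in server.properties:

```properties
# server.properties -- paths and passwords are placeholders
# The Netty client socket is required for TLS connections to ZooKeeper
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
zookeeper.ssl.client.enable=true
zookeeper.ssl.keystore.location=/path/to/kafka.keystore.jks
zookeeper.ssl.keystore.password=changeit
zookeeper.ssl.truststore.location=/path/to/kafka.truststore.jks
zookeeper.ssl.truststore.password=changeit
```

If the znodes were previously created with zookeeper.set.acl=true under a different identity, the ACLs themselves may need to be migrated; Kafka ships a bin/zookeeper-security-migration.sh tool for that.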

Related

No active Drillbit endpoint found from ZooKeeper

I am working on a simple project that queries messages from an Apache Kafka topic using Apache Drill. I am encountering an error when I start the Apache Drill cluster with this command:
sqlline.bat -u "jdbc:drill:zk=localhost:2181"
The error I encounter is:
No active Drillbit endpoint found from ZooKeeper. Check connection parameters
I am using the single-node ZooKeeper instance that ships with Apache Kafka.
Can anyone help me with this problem? Is it OK to use the ZooKeeper from the Apache Kafka installation with Drill?
The sqlline.bat -u "jdbc:drill:zk=localhost:2181" command only connects to a running Drillbit. If Drill is running in distributed mode, replace localhost with the IP address of the node where ZooKeeper is running, and update the port if needed.
If you want to start Drill in embedded mode, try running drill-embedded.bat or the sqlline.bat -u "jdbc:drill:zk=local" command.
For more details please refer to https://drill.apache.org/docs/starting-drill-on-windows/.
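One quick way to check whether any Drillbit has actually registered is to inspect ZooKeeper directly, for example with the shell that ships with Kafka. The cluster id drillbits1 below is the Drill default and an assumption here; check drill-override.conf if yours differs.

```shell
# List Drillbit registrations in ZooKeeper (Windows; run from the Kafka dir)
bin\windows\zookeeper-shell.bat localhost:2181 ls /drill/drillbits1
# An empty list means no Drillbit is running or registered, which matches
# the "No active Drillbit endpoint found" error.
```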

unable to start Kafka sandbox HDP

I am trying to run Kafka from the Ambari UI on the HDP sandbox. It does not start. I checked the log; the error is:
java.lang.IllegalArgumentException: Error creating broker listeners from 'sandbox-hdp.hortonworks.com:9092': Unable to parse sandbox-hdp.hortonworks.com:9092 to a broker $.
It seems this was resolved on the Cloudera Community. The key conclusion:
It worked for me after changing my listener IP from localhost to
the exact IP of the VirtualBox VM.
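Note also that the string in the error, sandbox-hdp.hortonworks.com:9092, has no security-protocol prefix, which is likely why Kafka's listener parser rejects it; the parser expects PROTOCOL://host:port. A working form looks like the following (the address is a placeholder for the VirtualBox VM's IP):

```properties
# server.properties -- 192.168.56.101 is a placeholder for the VM's IP
listeners=PLAINTEXT://192.168.56.101:9092
# the address clients outside the VM use to reach the broker
advertised.listeners=PLAINTEXT://192.168.56.101:9092
```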

Kafka - zookeeper doesn't run with others

I have a problem with Apache Kafka.
I have 4 servers where I want to install Kafka instances. On 3 of them it works: they can produce and consume messages between each other, and the ZooKeepers run fine. But on the 4th server I cannot run ZooKeeper connected to the other ZooKeepers. If I set only the local server (0.0.0.0:2888:3888) in zoo.cfg, ZooKeeper runs in standalone mode, but if I add the other servers I get this error:
./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /etc/zookeeper/conf/zoo.cfg
Error contacting service. It is probably not running.
How can I fix this error? I should add that I can ping the servers, so they can see each other.
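For reference, a sketch of an ensemble zoo.cfg (IPs and paths are placeholders). Two things that commonly cause exactly this symptom are a missing or wrong myid file on the failing node and firewalled peer ports: ping only proves ICMP reachability, not that TCP 2888/3888 are open between the nodes.

```properties
# zoo.cfg -- identical on every node; IPs and paths are placeholders
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=10.0.0.1:2888:3888
server.2=10.0.0.2:2888:3888
server.3=10.0.0.3:2888:3888
server.4=10.0.0.4:2888:3888
# each node must also have a dataDir/myid file containing just its own
# server number (1-4), matching its server.N line above
```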

Implementing KafkaConnect on EC2 using HDP2.3

I am following the steps in http://www.confluent.io/blog/how-to-build-a-scalable-etl-pipeline-with-kafka-connect to install Kafka Connect on EC2 running the HDP 2.3 platform.
But I am getting the error :
ERROR Failed to flush WorkerSourceTask{id=test-mysql-jdbc-0}, timed out while waiting for producer to flush outstanding messages, 1 left
The complete error can be seen in the attached image.
Is this a Kafka issue or an HDP issue? I did the same thing on AWS EMR and it worked.
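For what it's worth, that flush timeout is governed by the Connect worker setting offset.flush.timeout.ms (default 5000 ms). Raising it only helps if the producer is genuinely slow; if the worker cannot reach the broker at all (for example, EC2 security groups or a wrong advertised listener), the flush will time out regardless of the value. A sketch of the relevant worker settings, with illustrative values:

```properties
# Connect worker properties -- values are illustrative, not recommendations
# how long a flush may wait for outstanding produce requests before failing
offset.flush.timeout.ms=60000
# how often offsets are committed (and flushes attempted)
offset.flush.interval.ms=60000
```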

Kafka scheduler in Vertica 7.2 is running and working, but produce errors

When I run /opt/vertica/packages/kafka/bin/vkconfig launch I get this warning:
Unable to determine hostname, defaulting to 'unknown' in scheduler history
But the scheduler keeps working fine and consuming messages from Kafka. What does it mean?
The next strange thing is that I find the following records in /home/dbadmin/events/dbLog (I think it is the Kafka consumer log file):
%3|14470569%3|1446726706.945|FAIL|vertica#consumer-1|
localhost:4083/bootstrap: Failed to connect to broker at
[localhost]:4083: Connection refused
%3|1446726706.945|ERROR|vertica#consumer-1| localhost:4083/bootstrap:
Failed to connect to broker at [localhost]:4083: Connection refused
%3|1446726610.267|ERROR|vertica#consumer-1| 1/1 brokers are down
As I mentioned, the scheduler does eventually start, but these records periodically appear in the logs. What is this localhost:4083? Normally my broker runs on port 9092 on a separate server, which is described in the kafka_config.kafka_scheduler table.
For the scheduler history table, the hostname is obtained using Java:
InetAddress.getLocalHost().getHostAddress();
This will sometimes result in an UnknownHostException for various reasons (see the documentation: https://docs.oracle.com/javase/7/docs/api/java/net/UnknownHostException.html).
If this occurs, the hostname defaults to "unknown" in that table. Luckily, the schedulers coordinate by locking with your Vertica database, so knowing exactly which host a scheduler runs on is unnecessary for functionality (it only matters for monitoring).
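A common trigger for that UnknownHostException is a local hostname that does not resolve. A quick check (on Linux; getent is assumed to be available):

```shell
# Print the machine's hostname, then see whether it resolves to an address
hostname
getent hosts "$(hostname)" || echo "hostname does not resolve"
# If it does not resolve, map it in /etc/hosts, e.g.:
#   10.0.0.5   myhost.example.com myhost
# (10.0.0.5 and myhost are placeholders)
```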
The Kafka-related logging in dbLog is probably the standard output from librdkafka (https://github.com/edenhill/librdkafka). I'm not sure what is going on with that log message, unfortunately; Vertica should only be using the configured broker list.