I have been trying to run my Apache Kafka node; however, I keep getting the error message "Invalid config, exiting abnormally" - apache-kafka

I cannot start my node server in Apache Kafka. My id is not displayed; it only shows [myid: ].
I tried changing the port numbers, yet I still have the same problem. Please find the attached copy of the error message.

Related

Apache ZooKeeper

I am trying to start Kafka, but my ZooKeeper id is not displayed; it only shows [myid:].
I tried changing the port numbers, yet I still have the same problem. Please find the attached copy of the error message.
I also tried changing the file name and the port number, and removing extra spaces in server.properties, yet I still could not run my Kafka cluster.
Please find the image of the config file and the error message attached.

I cannot start my Kafka node / ZooKeeper cluster

I am trying to start the ZooKeeper node for my Kafka server. I keep getting the error "Invalid config, exiting abnormally".
I tried changing the name of the directory and created a new directory, yet I am still unable to run my Kafka server.
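An empty [myid:] in the ZooKeeper log usually means ZooKeeper could not read a server id from the myid file in its data directory. A minimal sketch of setting that file up (the data directory path and the id value 1 below are assumptions; match them to the dataDir and server.N entries in your own zoo.cfg / zookeeper.properties):

```shell
# Assumed data directory for illustration; use the dataDir from your config.
DATADIR=/tmp/zookeeper-demo
mkdir -p "$DATADIR"

# The myid file must contain only this server's numeric id (1-255),
# matching a server.N line in the config, with no extra spaces or lines.
echo 1 > "$DATADIR/myid"

cat "$DATADIR/myid"
```

Trailing whitespace or a wrong id here is a common cause of "Invalid config, exiting abnormally" in quorum setups.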

Kafka consumer error: ERROR Unknown error when running consumer: (kafka.tools.ConsoleConsumer)

In my case, I have the Kafka binary kafka_2.11-1.0.0 installed on both the server and client side, but after creating the topic my consumer does not work when I use --bootstrap-server instead of --zookeeper.
I changed the command as the deprecation warning suggested. Could you please explain why the consumer does not work the new way but does work with the old way of invoking the consumer?
As mentioned in the comments as well:
2181 is the default ZooKeeper port number.
It seems you updated the command but not the URL. Make sure to use the actual broker URL and port, rather than pointing the new command at the ZooKeeper host and port.
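To make the fix concrete, here is a hedged sketch of the two invocations (the host names, topic name, and the 9092 listener port are assumptions; check your broker's configured listeners):

```shell
# Old, deprecated style: the consumer talked to ZooKeeper on its port (2181):
#   kafka-console-consumer.sh --zookeeper localhost:2181 --topic my-topic
#
# New style: --bootstrap-server must point at a *broker* listener, not ZooKeeper.
BROKER=localhost:9092   # broker listener port, not ZooKeeper's 2181
TOPIC=my-topic
# Print the corrected command (drop the leading 'echo' to actually run it):
echo kafka-console-consumer.sh --bootstrap-server "$BROKER" --topic "$TOPIC" --from-beginning
```

Passing localhost:2181 to --bootstrap-server makes the consumer speak the Kafka protocol to ZooKeeper, which fails with opaque errors like the one above.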

Lagom Kafka Unexpected Close Error

In the Lagom dev environment, after starting Kafka with lagomKafkaStart,
it sometimes shows "KafkaServer Closed Unexpectedly", after which I need to run the clean command to get it running again.
Please advise whether this is the expected behaviour.
This can happen if you forcibly shut down sbt and the ZooKeeper data becomes corrupted.
Other than running the clean command, you can manually delete the target/lagom-dynamic-projects/lagom-internal-meta-project-kafka/ directory.
This will clear your local data from Kafka, but not from any other database (Cassandra or RDBMS). If you are using Lagom's message broker API, it will automatically repopulate the Kafka topic from the source database when you restart your service.
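The manual cleanup described above can be done in one step from the project root; a minimal sketch (the directory path is the one given in the answer):

```shell
# Remove Lagom's embedded Kafka/ZooKeeper state after a corrupted shutdown.
# This deletes only the local Kafka data, not Cassandra or RDBMS data.
KAFKA_META=target/lagom-dynamic-projects/lagom-internal-meta-project-kafka
rm -rf "$KAFKA_META"
```

On the next lagomKafkaStart, Lagom recreates this directory with fresh ZooKeeper and Kafka data.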

Kafka scheduler in Vertica 7.2 is running and working, but produces errors

When I run /opt/vertica/packages/kafka/bin/vkconfig launch I get this warning:
Unable to determine hostname, defaulting to 'unknown' in scheduler history
But the scheduler continues working fine and consuming messages from Kafka. What does it mean?
The next strange thing is that I found the following records in /home/dbadmin/events/dbLog (I think it is a Kafka consumer log file):
%3|1446726706.945|FAIL|vertica#consumer-1| localhost:4083/bootstrap: Failed to connect to broker at [localhost]:4083: Connection refused
%3|1446726706.945|ERROR|vertica#consumer-1| localhost:4083/bootstrap: Failed to connect to broker at [localhost]:4083: Connection refused
%3|1446726610.267|ERROR|vertica#consumer-1| 1/1 brokers are down
As I mentioned, the scheduler does eventually start, but these records appear in the logs periodically. What is this localhost:4083? Normally my broker runs on port 9092 on a separate server, which is configured in the kafka_config.kafka_scheduler table.
To populate the scheduler history table, the scheduler attempts to determine the hostname using Java:
InetAddress.getLocalHost().getHostAddress();
This can sometimes throw an UnknownHostException for various reasons (you can check the documentation here: https://docs.oracle.com/javase/7/docs/api/java/net/UnknownHostException.html).
If this occurs, the hostname defaults to "unknown" in that table. Luckily, the schedulers coordinate by taking locks in your Vertica database, so knowing exactly which scheduler host ran is unnecessary for functionality; it only matters for monitoring.
The Kafka-related logging in dbLog is probably the standard output of rdkafka (https://github.com/edenhill/librdkafka). I'm not sure what is going on with that log message, unfortunately; Vertica should only be using the configured broker list.