When configuring authentication for Kafka, the documentation mentions that JVM parameters need to be added when starting the Kafka server, for example:
-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
Since we are using bin/kafka-server-start.sh to start the server, the documentation doesn't say where to specify the JVM parameters.
Modifying kafka-server-start.sh or kafka-run-class.sh is not a good idea, so what is the right way to add the parameter at startup?
I'd recommend using the KAFKA_OPTS environment variable for this.
This environment variable is recognized by Kafka and defaults to the empty string (i.e. no extra settings). See the following snippet from bin/kafka-run-class.sh in the Kafka source code:
# Generic jvm settings you want to add
if [ -z "$KAFKA_OPTS" ]; then
  KAFKA_OPTS=""
fi
So, for example, you can do:
$ export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
$ bin/kafka-server-start.sh config/server.properties
or
$ KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf" bin/kafka-server-start.sh config/server.properties
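If you put this in a small wrapper script, it is safer to append the JAAS setting to whatever KAFKA_OPTS already contains rather than overwrite it. A minimal sketch, using the paths from the question (the echo is only there to make the result visible):

```shell
#!/usr/bin/env bash
# Append the JAAS config to any pre-existing KAFKA_OPTS instead of overwriting it.
JAAS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
export KAFKA_OPTS="${KAFKA_OPTS:+$KAFKA_OPTS }$JAAS"
echo "KAFKA_OPTS=$KAFKA_OPTS"
# Then start the broker as usual:
# exec bin/kafka-server-start.sh config/server.properties
```

The ${VAR:+...} expansion inserts the old value plus a separating space only when the variable was already non-empty, so the wrapper works whether or not other JVM options were set beforehand.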
I am using multiple Kafka connectors, but every connector writes its log to the same connect.log file. I want each connector to write to its own log file, which means changing the default /etc/kafka/connect-log4j.properties file at startup, but I have been unable to change it.
Sample Start Script:
/usr/bin/connect-standalone ../properties/sample-worker.properties ../properties/sample-connector.properties > /dev/null 2>&1 &
Is there any way to change the default /etc/kafka/connect-log4j.properties file during the startup of the connectors?
Kafka uses log4j and provides an environment variable for overriding its configuration:
export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:///some/other/log4j.properties"
connect-standalone.sh ...
Generally, it would be best to use connect-distributed together with a log-aggregation tool such as the ELK stack to separate the log events of the different connectors.
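Putting the two together, a launcher can start each connector in its own standalone worker with its own log4j file. This is only a sketch: the connector names and the per-connector log4j files (copies of /etc/kafka/connect-log4j.properties, each with a different log file path) are assumptions, not part of the question's setup:

```shell
#!/usr/bin/env bash
# Start one standalone worker per connector, each pointed at its own
# log4j config so every connector logs to a separate file.
for name in sample other; do
  KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:///etc/kafka/connect-log4j-$name.properties"
  export KAFKA_LOG4J_OPTS
  echo "launching $name with $KAFKA_LOG4J_OPTS"
  # /usr/bin/connect-standalone ../properties/$name-worker.properties \
  #   ../properties/$name-connector.properties > /dev/null 2>&1 &
done
```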
I am trying to set up a cron job that sends a Kafka message once a day. The Kafka broker is running on a remote instance.
I am currently accomplishing this by installing the entire Kafka package and having the cron job call a script that looks like:
less msg.txt | <kafka_path>/bin/kafka-console-producer.sh --broker-list <AWS_INSTANCE_IP> --topic <topic> --producer.config <property.file>
Is it possible to isolate the JAR(s) that kafka-console-producer.sh requires, so I can do this without dragging the rest of the Kafka directory (i.e. the broker-related parts) onto my system, since they aren't needed? Is there a pre-existing solution for this?
Thank you.
If you must use a JVM solution, you'd have to build your own tooling with a dependency on org.apache.kafka:kafka-clients.
If you can use other solutions, kafkacat is a standalone, portable tool.
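With kafkacat, the whole cron job can shrink to one line. A sketch that keeps the placeholders from the question (-P produces, -b names the broker, -t the topic, and -l sends each line of the file as a separate message; port 9092 is Kafka's default and is an assumption here):

```shell
#!/usr/bin/env bash
# Build the one-line producer command; it is only echoed here so the
# script can be inspected on a machine without kafkacat installed.
BROKER="<AWS_INSTANCE_IP>:9092"
CMD="kafkacat -P -b $BROKER -t <topic> -l msg.txt"
echo "$CMD"
# $CMD   # uncomment on the machine that has kafkacat
```

From cron, a daily entry could then be as simple as: 0 6 * * * /path/to/this-script.sh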
Is there any way to define Kafka topics in the Kafka/ZooKeeper configuration files before starting the services, so that once they start, the topics are already in place?
I have looked inside bin/kafka-topics.sh and found that, in the end, it executes a command against a live server. But since the server is here, its config files are here, and ZooKeeper with its configs is also here, is there a way to predefine topics in advance?
Unfortunately, I haven't found any existing config keys for this.
The servers need to be running in order to allocate metadata and log directories for the topics, so no.
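A common workaround is to move the "predefined topics" into a script that runs right after startup: wait for the broker, then create the topics from a fixed list. A sketch, where the topic names and settings are examples and the commands are only echoed:

```shell
#!/usr/bin/env bash
# Create a known set of topics once the broker is up; --if-not-exists
# makes the script safe to re-run on every start.
TOPICS="orders payments audit"
for t in $TOPICS; do
  echo "bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic $t --if-not-exists"
done
```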
I am using the following command to get the list of all topics in Kafka:
./bin/kafka-topics.sh --list --zookeeper localhost:2181
But I am getting the following error
Error: Exception thrown by the agent : java.rmi.server.ExportException: Port already in use: 9999; nested exception is:
java.net.BindException: Address already in use (Bind failed)
My Kafka version is 0.10.1.0.
Sorry I'm coming to this so late, but I had this issue today and this post helped me figure out a working solution, so I thought I'd post my workaround. I have tested this with Kafka v2.0 and v1.0 under Cloudera (formerly Hortonworks) HDP v3.1 and v2.6.5, respectively.
As suggested in the earlier answers, kafka-run-class.sh needs to be updated. You indicate that you're running ./bin/kafka-topics.sh; the run-class script should be alongside the topics script in that same directory. On my Hadoop brokers, the path is /usr/hdp/current/kafka-broker/bin/.
Find this section of the script:
# JMX port to use
if [ $ISKAFKASERVER = "true" ]; then
JMX_REMOTE_PORT=$JMX_PORT
else
JMX_REMOTE_PORT=$CLIENT_JMX_PORT
fi
And change it to:
# JMX port to use
if [ $ISKAFKASERVER = "true" ]; then
JMX_REMOTE_PORT=$JMX_PORT
else
JMX_REMOTE_PORT=$CLIENT_JMX_PORT
unset KAFKA_OPTS
fi
This assumes, of course, that the JMX port is set upstream via $KAFKA_OPTS and that you don't need any special settings.
I haven't tested it, but in theory unsetting JMX_PORT would work as well, and you could always override the KAFKA_OPTS value with something like KAFKA_OPTS="my special options".
Port 9999 sounds to me like the JMX port. Have you enabled JMX on the broker side (by setting the JMX_PORT and KAFKA_JMX_OPTS env vars)? If so, have you set these env vars system-wide?
The kafka-topics.sh command, like all the other tools, uses kafka-run-class.sh internally, which launches the Java class (TopicCommand in your case). If the above env vars are set in the console where you launch the tool, it tries to enable JMX there as well, conflicting with the JMX port already bound by the broker. Can you confirm whether these variables are set?
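One way to confirm (and work around) the conflict is to clear the JMX variables just for the tool's invocation, so only the broker keeps port 9999. The first line below is the command from the question; the rest shows, in plain shell, how a one-off empty assignment hides an exported variable from a child process:

```shell
# Clear the JMX settings for this single invocation:
# JMX_PORT= KAFKA_JMX_OPTS= ./bin/kafka-topics.sh --list --zookeeper localhost:2181

# The same mechanism demonstrated with plain shell: the child sees an
# empty JMX_PORT even though the parent has it set and exported.
JMX_PORT=9999
export JMX_PORT
RESULT=$(JMX_PORT= sh -c 'echo "JMX_PORT=[$JMX_PORT]"')
echo "$RESULT"   # prints JMX_PORT=[]
```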
I am a newbie to Kafka. I want to consume remote Kafka messages in a shell script. Basically, I have a Linux machine on which I cannot run any web server (for some strange reasons); the only thing I can do is use crontab/shell scripts to listen for messages from a remotely hosted Kafka. Is it possible to write a simple shell script that consumes a Kafka message, parses it, and takes a corresponding action?
Kafka clients are available in multiple languages. You can use any of them; you don't need a web server or a browser for it.
You may use a shell script for consuming and parsing messages, but that script has to use one of the Kafka clients provided here, because currently there is no client written in pure shell script.
Kafka also provides a console producer and a console consumer; you can use those as well:
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
Follow the documentation for the details.
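Note that --zookeeper applies to older releases; on newer Kafka versions the console consumer connects to a broker directly, and the ZooKeeper option was eventually removed. A sketch of the equivalent modern invocation (host and port are examples; the command is only echoed so it can be checked without a broker running):

```shell
# Consume from the beginning, talking to a broker instead of ZooKeeper:
ARGS="--bootstrap-server localhost:9092 --topic test --from-beginning"
echo "bin/kafka-console-consumer.sh $ARGS"
# bin/kafka-console-consumer.sh $ARGS   # run where Kafka is installed
```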
You could also use the kafkacat tool, which is documented, for example, here.
This is a very powerful and fast tool for reading data out of Kafka from the console, and it is open source: https://github.com/edenhill/kafkacat.
Many examples are provided on GitHub; one is shown below:
kafkacat -C -b mybroker -t mytopic