Apache Kafka adding username and password SCRAM-SHA - apache-kafka

I am currently using Kafka 2.6.0
I am trying to add SCRAM credential to zookeeper by following steps here:
https://docs.confluent.io/platform/current/kafka/authentication_sasl/authentication_sasl_scram.html
However, the command
bin/kafka-configs --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]' --entity-type users --entity-name Alice
returns below warning and successfully adds the credential for the user Alice
Warning: --zookeeper is deprecated and will be removed in a future version of Kafka.
Use --bootstrap-server instead to specify a broker to connect to.
Completed updating config for entity: user-principal 'Alice'
I have tried using --bootstrap-server, but I get the following message and the credential is not added.
bin/kafka-configs --bootstrap-server localhost:9094 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=bob-secret]' --entity-type users --entity-name Bob
Only quota configs can be added for 'users' using --bootstrap-server. Unexpected config names: Set(SCRAM-SHA-512)
The Kafka Broker and Zookeeper are up and running and I can currently produce/consume messages successfully with Alice's credential.
Is there a way to add SCRAM credentials in zookeeper using bootstrap-server?
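For what it's worth: in Kafka 2.6.0 the tool only accepts quota configs for users over --bootstrap-server, as the error above shows. Broker-side SCRAM credential management was added in Kafka 2.7.0 via KIP-554; on a 2.7+ broker the same --add-config syntax should be accepted against the brokers. A sketch assuming a 2.7+ broker on localhost:9094:
bin/kafka-configs --bootstrap-server localhost:9094 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=bob-secret],SCRAM-SHA-512=[password=bob-secret]' --entity-type users --entity-name Bob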

Related

How to delete a user in Apache Kafka?

I want to delete a Kafka user.
Is this something that can be achieved?
Honestly, I'm not even sure I actually understand the concept of Kafka users.
I started by reading this, but I wasn't able to draw any conclusions from it.
Thank you
A Kafka user, per se, is just the sum of its permissions; there is no "real" Kafka user object in Kafka.
Let's say that Kafka only stores the definitions of the authorisations that the user/principal possesses.
You can remove the principal's permissions with the appropriate command, for example kafka-acls.sh with the --remove parameter.
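For instance, a minimal sketch removing a single ACL for a principal, assuming a broker at 127.0.0.1:9092 and a hypothetical topic some-topic (--force skips the confirmation prompt):
kafka-acls.sh --bootstrap-server 127.0.0.1:9092 --remove --allow-principal User:USERNAME --operation Read --topic some-topic --force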
You can remove SCRAM password and/or quota config from kafka with a command like:
kafka-configs.sh --bootstrap-server 127.0.0.1:9092 --alter --delete-config "SCRAM-SHA-256" --entity-type users --entity-name USERNAME
And describe existing user configs with:
kafka-configs.sh --bootstrap-server 127.0.0.1:9092 --describe --entity-type users
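Quota overrides can be removed the same way; a sketch using the standard user quota config names (producer_byte_rate, consumer_byte_rate):
kafka-configs.sh --bootstrap-server 127.0.0.1:9092 --alter --delete-config 'producer_byte_rate,consumer_byte_rate' --entity-type users --entity-name USERNAME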

Get current config in Kafka

I was trying to get the config for one of the kafka clusters we have. After doing a config change through puppet, I want to know if kafka has reloaded the config, or if we need to restart the service for that.
I have tried with ./kafka-configs.sh --describe --zookeeper my-zookeeper:2181 --entity-type brokers but I only get empty output.
I have also tried to find the config by browsing inside ZooKeeper, but I found nothing.
Is there any way to retrieve which config is being used?
Here's the full working command to list all configs for the broker with id=1:
./bin/kafka-configs.sh --bootstrap-server <broker-host>:9092 --entity-type brokers --entity-name 1 --all --describe
As suggested by @LijuJohn, I found the config in the server.log file. Thanks a lot!
Since Kafka 2.5.0 (see issue here), you can now specify the --all flag to list all configs (not just the dynamic ones) when using ./kafka-configs.sh
Have you tried with parameter --entity-name 0 where 0 is the id of the broker?
This is required at least for my cluster.
On CentOS 7, Kafka version 3.3.2:
You should be able to find all the configurations via:
bin/kafka-configs.sh --describe --bootstrap-server <advertised.listeners>:9092 --entity-type brokers --entity-name <broker-id> --all
#example:
bin/kafka-configs.sh --describe --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --all
Note:
Both <advertised.listeners> and the broker id can be found in config/server.properties.
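For example, a quick way to pull both values out of the properties file (assuming the default config location):
grep -E 'advertised.listeners|broker.id' config/server.properties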

How to alter the TTL for a particular topic in Kafka

I would like to update the TTL of a specific Kafka topic to 10 days.
How can I do that?
You previously asked about that and I already replied here: Update TTL for a particular topic in kafka using Java
Unless you are asking to do that using Kafka tools? (And not in Java)
In this case there is the kafka-topics.sh command line tool, allowing you to do that using the --alter option.
bin/kafka-topics.sh --alter --zookeeper localhost:2181 --topic test --config retention.ms=10000
Because altering topic configs via the kafka-topics script may be removed in a future release, you should use the kafka-configs script instead:
bin/kafka-configs.sh --zookeeper localhost:2181 --alter --entity-type topics --entity-name test --add-config retention.ms=5000
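For the 10 days asked about, retention.ms would be 864000000. On newer Kafka versions the same change can be made through the brokers instead of ZooKeeper; a sketch assuming a broker at localhost:9092:
bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --entity-type topics --entity-name test --add-config retention.ms=864000000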

Getting error while consuming kafka message

I am getting the error below while consuming a message from the Kafka broker. Can someone please suggest what I am doing wrong or what I am missing? I have put down the steps I am following to create a topic, produce a message and then consume the message (FYI, this is on HDP 2.5.5 and Kafka 0.10.x).
export BK="node1:6667,node1:6667,node1:6667"
export ZK="zk1:2181,zk1:2181,zk1:2181"
Created a topic:
kinit as the kafka user
bin/kafka-topics.sh --create --zookeeper zk1:2181,zk1:2181,zk1:2181 --replication-factor 3 --partitions 1 --topic test3
List the topics:
bin/kafka-topics.sh --list --zookeeper zk1:2181,zk1:2181,zk1:2181 localhost:2181
Produce a message on a topic:
bin/kafka-console-producer.sh --broker-list $BK --topic test3
I can produce message
or with port 9092
bin/kafka-console-producer.sh --broker-list node1:9092,node2:9092,node2:9092 --topic test3
Consume the message:
bin/kafka-console-consumer.sh --zookeeper $ZK --bootstrap-server $BK --topic test3 --from-beginning
I also tried with --security-protocol PLAINTEXTSASL and am getting this error:
[2017-06-21 02:09:09,620] WARN Could not login: the client is being asked for a password, but the Zookeeper client code does not currently support obtaining a password from the user. Make sure that the client is configured to use a ticket cache (using the JAAS configuration setting 'useTicketCache=true)' and restart the client. If you still get this message after that, the TGT in the ticket cache has expired and must be manually refreshed. To do so, first determine if you are using a password or a keytab. If the former, run kinit in a Unix shell in the environment of the user who is running this Zookeeper client using the command 'kinit <princ>' (where <princ> is the name of the client's Kerberos principal). If the latter, do 'kinit -k -t <keytab> <princ>' (where <princ> is the name of the Kerberos principal, and <keytab> is the location of the keytab file). After manually refreshing your cache, restart this client. If you continue to see this message after manually refreshing your cache, ensure that your KDC host's clock is in sync with this host's clock. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2017-06-21 02:09:09,622] WARN SASL configuration failed: javax.security.auth.login.LoginException: No password provided Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. (org.apache.zookeeper.ClientCnxn) No brokers found in ZK.
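For reference, on a Kerberized cluster the console consumer is normally pointed at the brokers only (not ZooKeeper), with the security settings supplied through a consumer properties file. A sketch under those assumptions (file name and property values are illustrative):
# consumer.properties (assumed name)
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
bin/kafka-console-consumer.sh --bootstrap-server $BK --topic test3 --from-beginning --consumer.config consumer.properties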

Delete topic in Kafka 0.8.1.1

I need to delete the topic test in Apache Kafka 0.8.1.1.
As expressed in the documentation here, I have executed:
bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic test
However, this results in the following message:
Command must include exactly one action: --list, --describe, --create or --alter
How can I delete this topic?
Deleting a topic doesn't always work in 0.8.1.1.
Deletion should work in the next release, 0.8.2.
kafka-topics.sh --delete --zookeeper localhost:2181 --topic your_topic_name
Topic your_topic_name is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
You may also pass in a bootstrap server instead of zookeeper:
kafka-topics.sh --bootstrap-server kafka:9092 --delete --topic your_topic_name
Is it possible to delete a topic?
Jira KAFKA-1397
It seems that the deletion command was not officially documented in Kafka 0.8.1.x because of a known bug (https://issues.apache.org/jira/browse/KAFKA-1397).
Nevertheless, the command was still shipped in the code and can be executed as:
bin/kafka-run-class.sh kafka.admin.DeleteTopicCommand --zookeeper localhost:2181 --topic test
In the meantime, the bug got fixed and the deletion command is now officially available from Kafka 0.8.2.0 as:
bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic test
Add below line in ${kafka_home}/config/server.properties
delete.topic.enable=true
Restart the kafka server with new config:
${kafka_home}/bin/kafka-server-start.sh ~/kafka/config/server.properties
Delete the topics you wish to:
${kafka_home}/bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic daemon12
Andrea is correct. We can do it using the command line.
And we can still do it programmatically, with:
import org.I0Itec.zkclient.ZkClient;
import kafka.utils.ZkUtils;
ZkClient zkClient = new ZkClient("localhost:2181", 10000);
zkClient.deleteRecursive(ZkUtils.getTopicPath("test2"));
Actually, I do not recommend deleting topics on Kafka 0.8.1.1. I can delete a topic by this method, but if you check the ZooKeeper log, the deletion messes it up.
Steps to Delete 1 or more Topics in Kafka
#To delete topics in kafka the delete option needs to be enabled in Kafka server.
1. Go to {kafka_home}/config/server.properties
2. Uncomment delete.topic.enable=true
#Delete one Topic in Kafka enter the following command
kafka-topics.sh --delete --zookeeper localhost:2181 --topic <your_topic_name>
#To Delete more than one topic from kafka
(good for testing purposes, where I created multiple topics and had to delete them for different scenarios)
Stop the Kafka server and ZooKeeper
Go to the server folder where the logs are stored (defined in their config files) and delete the kafka-logs and zookeeper folders manually (see the sketch after this list)
Restart ZooKeeper and the Kafka server and try to list topics:
bin/kafka-topics.sh --list --zookeeper localhost:2181
If no topics are listed, then all topics have been deleted successfully. If topics are still listed, then the delete was not successful; try the above steps again or restart your computer.
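A sketch of the manual cleanup step, assuming the stock data directories from server.properties and zookeeper.properties (adjust the paths to whatever log.dirs and dataDir actually point to):
rm -rf /tmp/kafka-logs /tmp/zookeeper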
You can delete a specific Kafka topic (example: test) from the ZooKeeper shell (zookeeper-shell.sh). Use the command below to delete the topic:
rmr {path of the topic}
example:
rmr /brokers/topics/test
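On newer ZooKeeper versions rmr has been replaced by deleteall, so the equivalent there would be:
deleteall /brokers/topics/test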
These steps will delete all topics and data:
Stop the Kafka server and the ZooKeeper server
Remove the data directories of both services; by default on Windows they are
C:/tmp/kafka-logs and C:/tmp/zookeeper.
Then start the ZooKeeper server and the Kafka server again
The command:
bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic test
unfortunately only marks the topic for deletion.
The deletion itself does not happen.
That causes trouble when testing scripts that prepare the Kafka configuration.
Connected threads:
Purge Kafka Queue
Is there a way to delete all the data from a topic or delete the topic before every run?
As mentioned in the doc here:
The topic deletion option is disabled by default. To enable it, set the server config
delete.topic.enable=true
Kafka does not currently support reducing the number of partitions for a topic or changing the replication factor.
Make sure delete.topic.enable=true
Adding to the above answers: one also has to delete the metadata associated with that topic in the ZooKeeper consumer offset path.
bin/zookeeper-shell.sh zookeeperhost:port
rmr /consumers/<sample-consumer-1>/offsets/<deleted-topic>
Otherwise the lag will show as negative in Kafka monitoring tools that are based on ZooKeeper.
First, you run this command to delete your topic:
$ bin/kafka-topics.sh --delete --bootstrap-server localhost:9092 --topic <topic_name>
List active topics to check that the deletion completed:
$ bin/kafka-topics.sh --list --bootstrap-server localhost:9092
If you have issues deleting the topics, try to delete the topic using:
$KAFKA_HOME/bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic your_topic_name
command. Then, in order to verify the deletion, go to the Kafka logs directory, which is normally located under /tmp/kafka-logs/, and delete the your_topic_name folder via the rm -rf your_topic_name command.
Remember to monitor the whole process via a Kafka management tool like Kafka Tool.
The process mentioned above will remove the topics without a Kafka server restart.
There is actually a solution without touching those bin/kafka-*.sh: If you have installed kafdrop, then simply do:
curl -XPOST http://your-kafdrop-domain/topic/THE-TOPIC-YOU-WANT-TO-DELETE/delete
bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic <topic-name>
Step 1: Make sure you are connected to ZooKeeper and that Kafka is running
Step 2: To delete the Kafka topic, run the kafka-topics script with the ZooKeeper address and port, --topic with the name of your topic, and --delete; it will just delete the topic.
# Delete the kafka topic
bin/kafka-topics.sh --zookeeper 127.0.0.1:2181 --topic name_of_topic --delete
This worked for me:
kafka-topics --delete --bootstrap-server localhost:9092 --topic user-commands
I am using the Confluent CLI and run this from the directory where I installed it.
Recent Kafka versions are removing the ZooKeeper dependency. Therefore, you should instead reference the brokers (through --bootstrap-server):
kafka-topics \
--bootstrap-server localhost:9092,localhost:9093,localhost:9094 \
--delete \
--topic topic_for_deletion
For Confluent Cloud use:
$ confluent kafka topic delete my_topic