I am getting the below error while consuming a message from the Kafka broker. Can someone please suggest what I am doing wrong or missing? I have put down the steps I am following to create a topic, produce a message, and then consume the message (FYI, this is on HDP 2.5.5 with Kafka 0.10.x).
export BK="node1:6667,node2:6667,node3:6667"
export ZK="zk1:2181,zk2:2181,zk3:2181"
Create a topic:
kinit as the kafka user, then:
bin/kafka-topics.sh --create --zookeeper zk1:2181,zk2:2181,zk3:2181 --replication-factor 3 --partitions 1 --topic test3
List the topics:
bin/kafka-topics.sh --list --zookeeper zk1:2181,zk2:2181,zk3:2181
Produce a message on a topic:
bin/kafka-console-producer.sh --broker-list $BK --topic test3
I can produce messages this way, or with port 9092:
bin/kafka-console-producer.sh --broker-list node1:9092,node2:9092,node3:9092 --topic test3
Consume the message:
bin/kafka-console-consumer.sh --zookeeper $ZK --bootstrap-server $BK --topic test3 --from-beginning
I also tried with --security-protocol PLAINTEXTSASL and am getting this error:
[2017-06-21 02:09:09,620] WARN Could not login: the client is being asked for a password, but the Zookeeper client code does not currently support obtaining a password from the user. Make sure that the client is configured to use a ticket cache (using the JAAS configuration setting 'useTicketCache=true)' and restart the client. If you still get this message after that, the TGT in the ticket cache has expired and must be manually refreshed. To do so, first determine if you are using a password or a keytab. If the former, run kinit in a Unix shell in the environment of the user who is running this Zookeeper client using the command 'kinit <princ>' (where <princ> is the name of the client's Kerberos principal). If the latter, do 'kinit -k -t <keytab> <princ>' (where <princ> is the name of the Kerberos principal, and <keytab> is the location of the keytab file). After manually refreshing your cache, restart this client. If you continue to see this message after manually refreshing your cache, ensure that your KDC host's clock is in sync with this host's clock. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2017-06-21 02:09:09,622] WARN SASL configuration failed: javax.security.auth.login.LoginException: No password provided Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. (org.apache.zookeeper.ClientCnxn)
No brokers found in ZK.
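For reference, the warning is about the client-side JAAS configuration. A minimal sketch of a kafka_client_jaas.conf that uses the ticket cache, as the message suggests, might look like this (the file path and the KAFKA_OPTS wiring are assumptions for illustration, not taken from the question):
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true;
};
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true;
};
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf"
The KafkaClient section covers connections to the brokers; the Client section covers the ZooKeeper connection that is failing in the log above.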
Related
I am currently using Kafka 2.6.0
I am trying to add a SCRAM credential to ZooKeeper by following the steps here:
https://docs.confluent.io/platform/current/kafka/authentication_sasl/authentication_sasl_scram.html
However, the command
bin/kafka-configs --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]' --entity-type users --entity-name Alice
returns the warning below but still successfully adds the credential for the user Alice:
Warning: --zookeeper is deprecated and will be removed in a future version of Kafka.
Use --bootstrap-server instead to specify a broker to connect to.
Completed updating config for entity: user-principal 'Alice'
I have tried using --bootstrap-server, but I get this error and it does not add the credential:
bin/kafka-configs --bootstrap-server localhost:9094 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=bob-secret]' --entity-type users --entity-name Bob
Only quota configs can be added for 'users' using --bootstrap-server. Unexpected config names: Set(SCRAM-SHA-512)
The Kafka broker and ZooKeeper are up and running, and I can currently produce/consume messages successfully with Alice's credentials.
Is there a way to add SCRAM credentials in ZooKeeper using --bootstrap-server?
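For what it's worth, you can at least confirm what the --zookeeper variant wrote by describing the user config; this is a standard kafka-configs invocation (the localhost:2181 address is reused from the question):
bin/kafka-configs --zookeeper localhost:2181 --describe --entity-type users --entity-name Alice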
I have installed Docker on my Windows 10 machine and also installed Kafka. I have created a "test" topic inside a Kafka cluster. Now I want to secure the topic with a simple username and password. I am super new to Kafka; any help would really be appreciated.
To run Kafka commands, I am using Windows PowerShell.
I have tried running a few commands on the command line.
To create topics:
kafka-topics --create --topic test --partitions 1 --replication-factor 1 --if-not-exists --zookeeper zookeeper:2181
To secure the topic, I used this command:
kafka-acls --add --allow-principal User:alice --producer --topic test --authorizer-properties zookeeper.connect=zookeeper:2181
Unfortunately, it says "bash: afka-acl: command not found"
Do I need to include anything in the Kafka configuration file? Or is it possible to just run commands from PowerShell to secure the topic?
Is securing with a username and password the same as ACLs, or different?
Kafka supports authentication of connections to brokers from clients (producers and consumers) using:
SSL
SASL (Kerberos) and SASL/PLAIN
This needs configuration changes on both the broker and the clients.
What you are asking for sounds like SASL/PLAIN. However, as mentioned above, this cannot be done from the CLI alone and requires configuration changes. If you follow the steps in the documentation link, it is pretty straightforward.
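To give a rough idea of those changes for SASL/PLAIN (values are illustrative placeholders, not taken from the question): in server.properties you enable a SASL listener and the PLAIN mechanism,
listeners=SASL_PLAINTEXT://0.0.0.0:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
and define the users in the broker's JAAS file, e.g.:
KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin-secret"
  user_admin="admin-secret"
  user_alice="alice-secret";
};
Clients then need security.protocol=SASL_PLAINTEXT, sasl.mechanism=PLAIN, and their own username/password in a client JAAS entry or sasl.jaas.config.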
ACLs are authorization: they define which user has access to which topics. See this link.
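For example, once an authorizer is enabled on the broker, ACLs on the test topic can be listed like this (standard kafka-acls usage, with the zookeeper:2181 host reused from the question):
kafka-acls --authorizer-properties zookeeper.connect=zookeeper:2181 --list --topic test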
I have a local Apache Kafka setup with a total of 2 brokers (ids 0 and 1) on ports 9092 and 9093.
I created a topic and published the messages using this command:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
Then I consumed the messages on other terminal using the command:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test
Till now everything is fine.
But when I type this command:
bin/kafka-console-producer.sh --broker-list localhost:9093 --topic test
and write some messages, they show up in the 2nd terminal where I typed this command:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test
Why are messages produced to port 9093 showing up in a consumer connected to 9092?
Your cluster consists of two brokers. It is not important which host you use for the initial connection; with a Kafka client you don't specify which broker you consume from or produce to. Those hostnames are only used to discover the whole list of Kafka brokers (the cluster).
According to documentation:
https://kafka.apache.org/documentation/#producerconfigs
https://kafka.apache.org/documentation/#consumerconfigs
bootstrap.servers:
A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers.
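To illustrate with the ports from the question, these two commands behave the same way, because the client discovers both brokers after the initial bootstrap connection:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test
bin/kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test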
How can I produce and consume messages from different servers?
I tried the Quickstart tutorial, but there are no instructions on how to set it up for multi-server clusters.
My Steps
Server A
1)bin/zookeeper-server-start.sh config/zookeeper.properties
2)bin/kafka-server-start.sh config/server.properties
3)bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
4)bin/kafka-console-producer.sh --broker-list SERVER-a.IP:9092 --topic test
Server B
1A)bin/kafka-console-consumer.sh --bootstrap-server SERVER-a.IP:9092 --topic test --from-beginning
1B)bin/kafka-console-consumer.sh --bootstrap-server SERVER-a.IP:2181 --topic test --from-beginning
When I run the 1A consumer and enter messages into the producer, no messages appear in the consumer. It's just blank.
When I run the 1B consumer instead, I get a huge and very fast stream of error logs on Server A until I Ctrl+C the consumer. See below.
Error log on Server A, streaming at hundreds of lines per second:
WARN Exception causing close of session 0x0 due to java.io.EOFException (org.apache.zookeeper.server.NIOServerCnxn)
INFO Closed socket connection for client /188.166.178.40:51168 (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
Thanks
Yes, if you want to have your producer on Server A and your consumer on Server B, you are headed in the right direction.
You need to run a broker on Server A to make it work:
bin/kafka-server-start.sh config/server.properties
The other commands are correct.
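One caveat worth checking (an assumption about this setup, since the question does not show server.properties): the broker on Server A must advertise an address that Server B can reach, roughly:
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://SERVER-a.IP:9092
Otherwise the consumer can bootstrap but then tries to fetch from whatever hostname the broker advertises, which may not be resolvable from Server B.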
If anyone is looking for a similar topic for a kafka-streams application, it appears that multiple Kafka clusters are not supported yet:
Here is the documentation from Kafka: https://kafka.apache.org/10/documentation/streams/developer-guide/config-streams.html#bootstrap-servers
bootstrap.servers
(Required) The Kafka bootstrap servers. This is the same setting that is used by the underlying producer and consumer clients to connect to the Kafka cluster. Example: "kafka-broker1:9092,kafka-broker2:9092".
Tip:
Kafka Streams applications can only communicate with a single Kafka
cluster specified by this config value. Future versions of Kafka
Streams will support connecting to different Kafka clusters for
reading input streams and writing output streams.
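In configuration terms, a Streams application therefore gets exactly one bootstrap.servers value, with all listed brokers belonging to the same cluster. A minimal properties sketch, reusing the broker names from the docs example (the application.id is made up for illustration):
application.id=my-streams-app
bootstrap.servers=kafka-broker1:9092,kafka-broker2:9092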
My problem is: how can I send data from a Kafka producer to the broker?
The diagram below explains my network configuration:
I have a producer in a VM located on server A, and my broker is also in a VM, located on server B.
I use an SSH connection from my producer VM to server B with port forwarding:
ssh -L 9092:192.168.56.101:9092 xx@IP1
I use the Kafka console to test:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
Thanks.
You need to set --broker-list to wherever the broker resides. In your command, you are saying that you want to produce a message and send it to a broker on the localhost machine at port 9092. Try:
bin/kafka-console-producer.sh --broker-list 192.168.56.101:9092 --topic test
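A further note (an assumption, since the broker config is not shown): the broker on server B must also advertise an address reachable from the producer VM, otherwise the initial connection succeeds but the subsequent produce requests go to an unreachable host. In server.properties on the broker that would be something like:
advertised.listeners=PLAINTEXT://192.168.56.101:9092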