I am having an issue with a Kafka client configuration that uses Kerberos to authenticate from one realm to the realm of the Kafka brokers.
I receive the error:

[Krb5LoginModule] authentication failed
Message stream modified (41)
I found advice on the internet to edit the krb5.conf file and delete the renew_lifetime property. Once I do that, the call to the Kafka brokers times out, even though the Kerberos login commit completes successfully.
I am using the same principal that other Kafka clients in the same realm use to obtain service from the same Kafka brokers, so I don't understand why it should behave differently.
I tried adding the option sun.security.krb5.disablereferrals=true to the client's java.security file, but nothing changed.
Can you help me? Any ideas?
Sorry, I am new to this and have only a little experience.
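For context, here is a minimal sketch of the kind of client configuration being described, assuming a SASL_PLAINTEXT/GSSAPI setup; the broker address, keytab path, and principal names are placeholders, not values from the post:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class KerberosClientSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker.kafka-realm.example:9092"); // placeholder
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());
            // Kerberos (GSSAPI) over SASL; the service name must match the brokers' principal.
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "GSSAPI");
            props.put("sasl.kerberos.service.name", "kafka");
            // Inline equivalent of a JAAS file entry for Krb5LoginModule; paths are placeholders.
            props.put("sasl.jaas.config",
                "com.sun.security.auth.module.Krb5LoginModule required "
                + "useKeyTab=true storeKey=true "
                + "keyTab=\"/etc/security/keytabs/client.keytab\" "
                + "principal=\"client@CLIENT.REALM.EXAMPLE\";");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // A send from here only succeeds once the cross-realm ticket is accepted.
            }
        }
    }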
We have a Spring Boot application producing messages to an AWS MSK Kafka cluster. Every now and then our MSK cluster gets an automatic security update (or something similar), and afterwards our KafkaTemplate producer loses its connection to the cluster, so all sends end up timing out. The producer doesn't recover from this automatically and keeps trying to send messages. The subsequent idempotent sends throw an exception:
org.apache.kafka.common.errors.ClusterAuthorizationException: The producer is not authorized to do idempotent sends
Restarting the producer application fixes the issue. Our producer is a very simple application that uses KafkaTemplate to send messages, without any custom retry logic.
One suggestion was to add a producer reset call to the error handler, but testing that solution is hard since there seems to be no reliable way to reproduce the issue.
https://docs.spring.io/spring-kafka/api/org/springframework/kafka/core/ProducerFactory.html#reset()
Any ideas why this happens and what is the best way to fix it?
We have an open issue to close the producer on any timeout...
https://github.com/spring-projects/spring-kafka/issues/2251
Contributions are welcome.
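For what it's worth, here is a minimal sketch of the reset-on-failure idea from the question, assuming spring-kafka 2.x (where send() returns a ListenableFuture) and the ProducerFactory#reset() method linked above; the class and wiring are illustrative, not a tested fix:

    import org.apache.kafka.common.errors.ClusterAuthorizationException;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.kafka.core.ProducerFactory;

    public class ResettingSender {

        private final KafkaTemplate<String, String> template;
        private final ProducerFactory<String, String> producerFactory;

        public ResettingSender(KafkaTemplate<String, String> template,
                               ProducerFactory<String, String> producerFactory) {
            this.template = template;
            this.producerFactory = producerFactory;
        }

        public void send(String topic, String payload) {
            template.send(topic, payload).addCallback(
                result -> { /* send succeeded */ },
                ex -> {
                    // If idempotent-send authorization was revoked (e.g. after a
                    // cluster update), discard the cached producer so the next
                    // send creates a fresh, re-authenticated one.
                    if (ex.getCause() instanceof ClusterAuthorizationException) {
                        producerFactory.reset();
                    }
                });
        }
    }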
Apologies if this is a very basic question.
I'm just starting to get to grips with Kafka. I've been given a Kafka endpoint and topic to push messages to, but I'm not actually sure where, when writing the consumer, to specify that endpoint. So far I've only had experience writing a consumer and producer for a broker running locally on my machine, where I could simply set the bootstrap server to my local host and port.
I have an inkling that it may have something to do with the advertised listeners setting, but I am unsure how that works.
Again, sorry if this seems like a very basic question, but I couldn't find the answer.
Thank you!
Advertised listeners are a broker setting. If someone else set up Kafka, then all you need to do is change the bootstrap address.
If it's "public" over the internet, then chances are you will also need to configure certificates and authentication.
Connecting to a public cluster is the same as connecting to a local deployment.
I'm assuming that you were provided with an FQDN for the cluster and the topic name.
You need to add the FQDN to the bootstrap.servers property of your consumer and subscribe to the topics using subscribe().
You might want to look into the client.dns.lookup property if you want to change the discovery strategy.
Additionally, you might have to configure a keystore and a truststore, depending on the security configuration of the cluster; see the sketch below.
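A minimal consumer sketch along those lines; the bootstrap address, group id, topic, and truststore path are placeholders, and the SSL lines only apply if the cluster requires TLS:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class RemoteConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // The FQDN you were given goes here instead of localhost:9092.
            props.put("bootstrap.servers", "kafka.example.com:9092");
            props.put("group.id", "my-group");  // placeholder
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());
            // Only if the cluster is TLS-secured; paths and passwords are placeholders.
            // props.put("security.protocol", "SSL");
            // props.put("ssl.truststore.location", "/path/to/truststore.jks");
            // props.put("ssl.truststore.password", "changeit");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("my-topic"));  // placeholder topic
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                records.forEach(r -> System.out.println(r.value()));
            }
        }
    }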
I'm using spring-kafka-2.2.7.RELEASE and have a producer and consumer. My cluster has ZooKeeper nodes, brokers, and a Schema Registry to handle Avro schema validation, so in my producer configuration I pass in both the brokers' URL and the Schema Registry URL. Now I have a couple of questions:
1. When publishing a message, does the producer make two different connections, one to the broker and one to the Schema Registry, or just one connection to the broker, with the broker then communicating with the Schema Registry?
2. If it opens only one connection, how long does that connection stay open? Can the producer reuse the same connection to produce multiple messages, or does it have to open a new connection for each message?
3. If there is a connection open, does it use the HTTP/HTTPS protocol to communicate?
The Schema Registry has nothing to do with the Kafka protocol; the client makes a separate HTTP(S) connection directly to the registry, and the broker never talks to the registry on the producer's behalf.
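To make the two endpoints concrete, here is a hedged sketch assuming Confluent's KafkaAvroSerializer; the URLs are placeholders. The broker connection is a long-lived TCP connection speaking the Kafka wire protocol, while schema.registry.url points at a separate HTTP(S) service the serializer calls:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;

    public class AvroProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Kafka wire-protocol connection, reused across many sends.
            props.put("bootstrap.servers", "broker1.example.com:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            // The Avro serializer talks to the registry over HTTP(S), separately from the broker.
            props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
            props.put("schema.registry.url", "http://schema-registry.example.com:8081");
            try (KafkaProducer<String, Object> producer = new KafkaProducer<>(props)) {
                // Schema lookup/registration happens over HTTP on first use and is cached;
                // the records themselves go to the broker over the Kafka connection, e.g.:
                // producer.send(new ProducerRecord<>("my-topic", "key", avroRecord));
            }
        }
    }

In other words: the broker connection is pooled and reused across sends, and the registry is only consulted over HTTP when a schema isn't already in the serializer's cache.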
I have Kafka installed on an Ubuntu server, and Node-RED is on my personal laptop. I want to send data from Node-RED to a Kafka topic.
I tried using the Kafka node in Node-RED to connect, but I am getting an error like "Client is not a constructor". I am also a bit confused about the difference between the listeners and advertised.listeners configuration. How should I configure the server.properties file for this, and which nodes should I include in the Node-RED flow to achieve it? Please suggest some approaches.
I expect that if I send a message from Node-RED, I should be able to see it in the Kafka topic.
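On the listeners question, a hedged server.properties sketch; the hostname below is a placeholder and must be an address your laptop can actually resolve and reach:

    # Bind on all interfaces of the Ubuntu server.
    listeners=PLAINTEXT://0.0.0.0:9092
    # The address handed back to clients; Node-RED on the laptop will connect here,
    # so it must be the server's reachable hostname or IP, not localhost.
    advertised.listeners=PLAINTEXT://ubuntu-server.example.com:9092

In short, listeners is where the broker binds, while advertised.listeners is the address it returns to clients, which is why a localhost value there breaks remote clients.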
I am trying to enable SASL username and password authentication for a Kafka cluster with no SSL. I followed the steps in this Stack Overflow post:
Kafka SASL zookeeper authentication
SASL authentication seems to be working for the Kafka brokers: consumers and producers have to authenticate before writing to or reading from a topic. So far so good.
The problem is with creating and deleting topics on Kafka. When I use the following command, for example:
~/kafka/bin/kafka-topics.sh --list --zookeeper 10.x.y.z:2181
I am able to list all topics in the Kafka cluster, and to create or delete any topic, with no authentication at all.
I tried to follow the steps here:
Super User Authentication and Authorization
but nothing seems to work.
Any help in this matter is really appreciated.
Thanks & Regards,
Firas Khasawneh
You need to add zookeeper.set.acl=true to your Kafka server.properties so that Kafka creates everything in ZooKeeper with an ACL set. For topics that already exist there will be no ACL, so anyone can remove them directly from ZooKeeper.
Actually, because of that mess, I had to delete everything from my ZooKeeper and Kafka and start from scratch.
But once everything is set up, you can open a ZooKeeper shell to verify that the ACL is indeed set:
KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/your/jaas.conf" bin/zookeeper-shell.sh XXXXX:2181
From the shell you can run getAcl /brokers/topics and check that the world scheme (anyone) does not have cdrwa permissions.
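For illustration, the output on a correctly secured node might look like this (a sketch, assuming the brokers authenticate to ZooKeeper as the SASL principal kafka; Kafka's secure ZooKeeper ACLs typically give the SASL identity full rights and leave world read-only):

    getAcl /brokers/topics
    'sasl,'kafka
    : cdrwa
    'world,'anyone
    : r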
On a side note, the link you provided doesn't seem to reflect how the current version of Kafka stores information in ZooKeeper. I briefly looked at the code, and for those kafka-topics.sh commands the topic information comes from /brokers/topics rather than /config/topics.