RabbitMQ Connector gives "TimeoutException: License topic could not be created" - apache-kafka

My RabbitMQ connector works fine when I run it on a server with no SASL. It was actually working on a SASL-enabled server too, but after restarting the Kafka Connect service it no longer starts. The error is:
org.apache.kafka.common.errors.TimeoutException: License topic could not be created
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
Is it a licensing issue? I don't think we purchased a license for it, and it says there is a 30-day free trial. But I also think we had been using it for more than 30 days.
Edit: Found this in the connect.log file:
INFO [AdminClient clientId=RabbitMQSinkConnector2-license-manager] Metadata update failed
Edit 2: It has something to do with SASL. After enabling SASL for my broker, the RabbitMQ connector started giving this error.

Solved it by adding a little extra configuration for a SASL-enabled broker, like I did for a Debezium connector. You need to add these lines to your connector config:
"confluent.topic.sasl.mechanism": "PLAIN",
"confluent.topic.security.protocol": "SASL_PLAINTEXT",
"confluent.topic.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=admin password=secret;"
If you don't want to expose your SASL credentials, you can save them in a file and change the last line like this:
"confluent.topic.sasl.jaas.config": "${file:/kafka/vty/pass.properties:sasl}"
But of course you need to enable reading from a file first, as sketched below.
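For reference, a minimal sketch of what that involves, assuming the file path from above. The secret goes into the properties file, and the Connect worker config has to declare Kafka's FileConfigProvider:
# /kafka/vty/pass.properties
sasl=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="secret";
# Connect worker config (e.g. connect-distributed.properties)
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
After a worker restart, the ${file:...} placeholder resolves to the value of the sasl key in that file.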

Related

kafka start up failed due to no auth issue

I have a single-node cluster setup of Apache kafka_2.12-2.5.1 (embedded ZooKeeper) on the same host. I enabled SSL on ZooKeeper and it starts just fine. Kafka, however, throws the fatal error "NoAuth for /brokers/ids" at startup.
Please help if you have any pointers.

Message Stream Modified (41) or Timeout for kafka client

I am having an issue with a Kafka client configuration that uses Kerberos to authenticate from one realm to the realm of the Kafka brokers.
I receive the error:
[Krb5LoginModule] authentication failed
Message stream modified (41)
I found advice online to edit the krb5.conf file and delete the renew_lifetime property. Once I do that, the call to the Kafka brokers times out, even though the Kerberos commit completes successfully.
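For reference, the edit in question looks roughly like this (a sketch; the realm and lifetime values are illustrative, not my real ones):
# /etc/krb5.conf -- [libdefaults] section (illustrative values)
[libdefaults]
    default_realm = EXAMPLE.COM
    ticket_lifetime = 24h
#   renew_lifetime = 7d    # deleting/commenting this line avoids "Message stream modified (41)"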
I am using the same principal that other Kafka clients in the same realm use to obtain service from the same Kafka brokers, so I don't understand why it should behave differently.
I tried adding the option sun.security.krb5.disablereferrals=true to the client's java.security file, but nothing changed.
Can you help me? Any ideas?
Sorry, I am new and have only a little experience.

unable to start Kafka sandbox HDP

I'm trying to run Kafka from the Ambari UI on the HDP sandbox, but it does not start. I checked the log; the error is:
java.lang.IllegalArgumentException: Error creating broker listeners from 'sandbox-hdp.hortonworks.com:9092': Unable to parse sandbox-hdp.hortonworks.com:9092 to a broker endpoint.
It seems this was resolved on the Cloudera Community. The key conclusion:
It worked for me after changing my listener IP from localhost to the exact IP of the VirtualBox VM.
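A sketch of what the corrected entry might look like in server.properties; the IP is an illustrative VirtualBox address, and note the PLAINTEXT:// security-protocol prefix, which the listener string must include to parse:
# server.properties (illustrative values)
listeners=PLAINTEXT://192.168.56.101:9092
advertised.listeners=PLAINTEXT://192.168.56.101:9092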

Kafka connector exception ERROR Failed to send HTTP request to endpoint: http://localhost:8081/subjects/jdbc-source-accounts-value/versions

I'm using Confluent Platform 3.3 to run a Kafka connector. When starting the connector with the below command,
./bin/connect-standalone ./etc/schema-registry/connect-avro-standalone.properties ./etc/kafka-connect-jdbc/connect-jdbc-source.properties
I get the below error:
ERROR Server died unexpectedly: (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain:52)
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
[2017-10-30 13:49:56,178] ERROR Failed to send HTTP request to endpoint: http://localhost:8081/subjects/jdbc-source-accounts-value/versions
(io.confluent.kafka.schemaregistry.client.rest.RestService:156)
ZooKeeper is running on the client port 2181, and I tried to start the Schema Registry with the below command:
./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties &
It didn't show any error messages, but port 8081 never came up. Please help me to sort this out.
If you're using Confluent Platform 3.3, I would recommend using the Confluent CLI, since it's part of the download you've already got and makes life much simpler. Then you can easily check the status of the components:
confluent start
confluent status kafka
etc
Check out this video: https://vimeo.com/228505612
In terms of the issue you've got, I would check the log for Schema Registry. You can do that with the Confluent CLI:
confluent log schema-registry
I have also seen the same issue while running the producer with Spring Cloud Stream. Replacing localhost with the actual IP address will help.
This might help someone; I faced a similar issue. The solution that worked for me was to change localhost in my connector settings (in my schema.registry.url) and replace it with the IP address of the container running Schema Registry. For example, I had set it up like this:
"value.converter.schema.registry.url": "http://localhost:8081"
"key.converter.schema.registry.url": "http://localhost:8081"
and I changed it to the following:
"value.converter.schema.registry.url": "http://172.19.0.4:8081",
"key.converter.schema.registry.url": "http://172.19.0.4:8081"

Implementing KafkaConnect on EC2 using HDP2.3

I am following the steps given at http://www.confluent.io/blog/how-to-build-a-scalable-etl-pipeline-with-kafka-connect to install Kafka Connect on EC2 running the HDP 2.3 platform.
But I am getting the error:
ERROR Failed to flush WorkerSourceTask{id=test-mysql-jdbc-0}, timed out while waiting for producer to flush outstanding messages, 1 left
The complete error can be seen in the attached screenshot.
Is this a Kafka issue or an HDP issue? I did the same thing on AWS EMR and it worked.