Kafka Admin Client giving Timeout Error for ListTopic - scala

Hi, I am trying to run the code below on an EC2 Azkaban instance. It works fine on one instance, but on another instance it fails with the error shown below.
private val adminprops = new Properties()
adminprops.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "Kafka Endpoint")
private val admin = AdminClient.create(adminprops)

// Returns true if the given topic already exists on the cluster
def topicExist(topicName: String): Boolean = {
  val result = admin.listTopics.names.get.contains(topicName)
  result
}
"Kafka Exception java.util.concurrent.ExecutionException:
org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node
assignment. Call: listTopics"

This is either a network problem or a configuration problem.
Make sure "bootstrap.servers" is correct, with the right host and port.
Make sure the network path between the client and the broker is open.
The default request timeout of the AdminClient is 120000 ms (AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG); normally, listTopics returns almost immediately.
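If you just need the check to fail fast while you debug connectivity, you can lower that timeout. A minimal sketch (the broker endpoint is a placeholder):

import java.util.Properties
import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig, ListTopicsOptions}

val props = new Properties()
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-host:9092") // placeholder endpoint
// Fail after 5 s instead of the default 120000 ms
props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "5000")
val admin = AdminClient.create(props)
// A per-call timeout can also be set on the request itself
val names = admin.listTopics(new ListTopicsOptions().timeoutMs(5000)).names.get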

Related

Consuming from a Kafka topic that requires authentication using reactor kafka

I have a micro-service that consumes from a Kafka topic that requires authentication. Below is the code I wrote for that. I am fetching the username and password from environment variables, which I am sure works as expected.
val receiverOptions = ReceiverOptions.create<ByteBuffer, ByteBuffer>(defaultKafkaBrokerConfig.getAsProperties())
val kafkaJaasConfig = String.format(
    "org.apache.kafka.common.security.scram.ScramLoginModule required username='%s' password='%s';",
    kafkaUsername,
    kafkaPassword
)
val schedulerKafkaConsumer = Schedulers.newSingle("consumer")
val options = receiverOptions
    .subscription(topicConfig.topics)
    .pollTimeout(Duration.ofMillis(topicConfig.pollWaitTimeoutMs.toLong()))
    .consumerProperty(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-512")
    .consumerProperty(SaslConfigs.SASL_JAAS_CONFIG, kafkaJaasConfig)
    .consumerProperty(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT")
return KafkaReceiver.create(options)
    .receive()
    .subscribeOn(schedulerKafkaConsumer)
    .map { record: ReceiverRecord<ByteBuffer, ByteBuffer> -> handleConsumerRecord(record) }
    .onErrorContinue { throwable: Throwable?, _: Any? ->
        log.error(
            "Error consuming and deserializing messages",
            throwable
        )
    }
The code works fine when I run it locally. However, in the GCP development environment, I get the following error:
Bootstrap broker <<some_ip>>:9093 (id: -2 rack: null) disconnected
org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-service_gcp-edge_6ef14fcc-3443-4782-aa38-0910a5aea9b9-2, groupId=service_gcp-edge_6ef14fcc-3443-4782-aa38-0910a5aea9b9] Connection to node -1 (kafka0-data-europe-west4-kafka.internal/<<some_ip>>:9093) terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue.
After shelling into the cluster, I could connect to the topic and consume messages with a command-line tool, which rules out any infrastructure-related issue.
Can someone please help me figure out what I am doing wrong here and how I can fix it?
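One thing worth checking (an assumption, since the broker's listener configuration is not shown): port 9093 is conventionally a TLS listener, in which case the client needs SASL_SSL rather than SASL_PLAINTEXT. A minimal sketch of the relevant consumer properties, written in Scala:

import java.util.Properties
import org.apache.kafka.clients.CommonClientConfigs
import org.apache.kafka.common.config.SaslConfigs

val props = new Properties()
// Assumption: the 9093 listener is TLS, so use SASL_SSL instead of SASL_PLAINTEXT
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL")
props.put(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-512")
props.put(SaslConfigs.SASL_JAAS_CONFIG,
  "org.apache.kafka.common.security.scram.ScramLoginModule required " +
    "username=\"user\" password=\"pass\";") // placeholder credentials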

Flink with kafka issue: Timeout expired while fetching topic metadata

I tried submitting a simple Flink job to consume messages from Kafka, but within less than a minute of submitting it, the job fails with the Kafka exception below. I have Kafka 2.12 running on my local machine, and I have configured the topic this job consumes from.
public static void main(String[] args) throws Exception {
    Properties properties = new Properties();
    properties.setProperty("bootstrap.servers", "127.0.0.1:9092");

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    DataStream<String> kafkaData = env
        .addSource(new FlinkKafkaConsumer<String>("test-topic",
            new SimpleStringSchema(), properties));
    kafkaData.print();
    env.execute("Aggregation Job");
}
Here's the exception:
Job has been submitted with JobID 5cc30fe72f685406126e2f5a26f10341
------------------------------------------------------------
The program finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: org.apache.flink.client.program.ProgramInvocationException: Job failed (JobID: 5cc30fe72f685406126e2f5a26f10341)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
...
Caused by: org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
I saw another question on Stack Overflow, but it does not resolve the problem. I have not configured any SSL on the Kafka broker. Any suggestions would be appreciated.
I had this same issue today. In my case, the problem was that I had failed to put my Flink application in a VPC (my MSK cluster lives in a VPC). After editing the Flink application and moving it into the appropriate VPC, the problem went away.
I realize this question is a few months old, but I figured I'd post my findings in case anyone else comes across this from a Google search like I did.
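A related sanity check: before digging into Flink itself, it can help to confirm that the broker endpoint is even reachable from the environment where the job runs. A minimal sketch in Scala (host and port are placeholders):

import java.net.{InetSocketAddress, Socket}

// Probe the broker endpoint with a 5-second connect timeout
val socket = new Socket()
try {
  socket.connect(new InetSocketAddress("broker-host", 9092), 5000)
  println("Broker endpoint is reachable")
} finally {
  socket.close()
}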

Kafka 1.0.0 admin client cannot create topic with EOFException

Using the 1.0.0 Kafka admin client, I wish to programmatically create a topic on the broker. I happen to be using Scala. I've tried the following code to either create a topic on the Kafka broker or simply list the available topics:
import java.util.Properties
import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig, ListTopicsOptions, NewTopic}
import scala.collection.JavaConverters._

val zkServer = "localhost:2181"
val topic = "test1"
val zookeeperConnect = zkServer
val sessionTimeoutMs = 10 * 1000
val connectionTimeoutMs = 8 * 1000
val partitions = 1
val replication: Short = 1
val topicConfig = new Properties() // add per-topic configuration settings here

val config = new Properties
config.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, zkServer) // NB: zkServer points at ZooKeeper
val admin = AdminClient.create(config)

val existing = admin.listTopics(new ListTopicsOptions().timeoutMs(500).listInternal(true))
val nms = existing.names()
nms.get().asScala.foreach(nm => println(nm)) // nms.get() fails

val newTopic = new NewTopic(topic, partitions, replication)
newTopic.configs(Map[String, String]().asJava)
val ret = admin.createTopics(List(newTopic).asJavaCollection)
ret.all().get() // Also fails

admin.close()
With either command, the ZooKeeper (3.4.10) side throws an EOFException and closes the connection. Debugging the ZooKeeper side itself, it seems it is unable to deserialize the message that the admin client is sending (it runs out of bytes while reading).
Has anyone been able to make the 1.0.0 Kafka admin client work for creating or listing topics?
The AdminClient directly connects to Kafka and does not need access to Zookeeper.
You need to set AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG to point to your Kafka brokers (for example localhost:9092) instead of Zookeeper.
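In other words, the fix is a one-line change to the configuration above (localhost:9092 assumes the default broker port):

val config = new Properties
// Point at the Kafka broker, not ZooKeeper
config.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
val admin = AdminClient.create(config)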

Apache Kafka: Fetching topic metadata with correlation id 0

I sent a single message to Kafka using the following code:
def getHealthSink(kafkaHosts: String, zkHosts: String) = {
  val kafkaHealth: Subscriber[String] = kafka.publish(ProducerProperties(
    brokerList = kafkaHosts,
    topic = "health_check",
    encoder = new StringEncoder()
  ))
  Sink.fromSubscriber(kafkaHealth).runWith(Source.single("test"))
}

val kafkaHealth = getHealthSink(kafkaHosts, zkHosts)
and I got the following error message:
ERROR kafka.utils.Utils$ fetching topic metadata for topics
[Set(health_check)] from broker
[ArrayBuffer(id:0,host:****,port:9092)] failed
kafka.common.KafkaException: fetching topic metadata for topics
[Set(health_check)] from broker
[ArrayBuffer(id:0,host:****,port:9092)] failed
Do you have any idea what can be the problem?
The error message is admittedly unclear, but basically "fetching topic metadata" is the first thing a producer does, which means this is where it first establishes a connection to Kafka.
There's a good chance that either the broker you are trying to connect to is down, or there is another connectivity issue (ports, firewalls, DNS, etc.).
In unrelated news: you seem to be using the old, deprecated Scala producer. We recommend moving to the new Java producer (org.apache.kafka.clients.producer.KafkaProducer).
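For reference, a minimal sketch of the same health-check send using the Java producer from Scala (the broker endpoint is a placeholder; the topic name is taken from the question):

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092") // placeholder broker endpoint
props.put("key.serializer", classOf[StringSerializer].getName)
props.put("value.serializer", classOf[StringSerializer].getName)

// Send a single "test" message to the health_check topic, then close
val producer = new KafkaProducer[String, String](props)
producer.send(new ProducerRecord[String, String]("health_check", "test"))
producer.close()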

Is there any way to check if kafka is up and running from kafka-net

I am using the kafka-net client to send messages to Kafka. I'm wondering if there is any way to check whether the Kafka server is up and can receive messages. When I shut Kafka down, the producer is still created successfully, and SendMessageAsync just freezes for quite a long time. I've tried passing a timeout, but it doesn't change anything. I use kafka-net 0.9.
It works just fine when the Kafka server is up and running.
A broker's id is registered in ZooKeeper (/brokers/ids/[brokerId]) as an ephemeral node, which allows other brokers and consumers to detect failures. (Right now the definition of health is fairly naive: if a broker is registered under /brokers/ids/[brokerId] in ZooKeeper it is healthy, otherwise it is dead.)
A ZooKeeper ephemeral node exists only as long as the broker's session is active.
You can check whether a broker is up via ZkUtils.getSortedBrokerList(zkClient), which returns all active broker ids under /brokers/ids:
import kafka.utils.ZkUtils;
import kafka.utils.ZKStringSerializer$;
import org.I0Itec.zkclient.ZkClient;

// Connect to the ZooKeeper quorum and list the ids of all live brokers
ZkClient zkClient = new ZkClient(properties.getProperty("zkQuorum"),
        zkSessionTimeout, zkConnectionTimeout, ZKStringSerializer$.MODULE$);
ZkUtils.getSortedBrokerList(zkClient);
Reference
Kafka data structures in Zookeeper
Try this.
In your constructor, put
options = new KafkaOptions(uri);
var endpoint = new DefaultKafkaConnectionFactory().Resolve(options.KafkaServerUri.First(), options.Log);
client = new KafkaTcpSocket(new DefaultTraceLog(), endpoint);
and then before you send each message,
// test if the broker is alive
var request = new MetadataRequest { Topics = new List<string>() { Topic } };
var task1 = client.WriteAsync(request.Encode()).ConfigureAwait(false);
Task<KafkaDataPayload> task2 = Task.Factory.StartNew(() => task1.GetAwaiter().GetResult());
if (task2.Wait(30000) == false)
{
    throw new TimeoutException("Timeout while sending message to kafka broker!");
}
If you have a high volume of messages, this is going to be a performance hit, but with a low volume of messages it shouldn't matter.