Kafka-Spark Batch Streaming: WARN clients.NetworkClient: Bootstrap broker disconnected - scala

I am trying to write the rows of a DataFrame to a Kafka topic. The Kafka cluster is Kerberized, and I am providing the jaas.conf via the --conf arguments so that I can authenticate and connect to the cluster. Below is my code:
object app {
  val conf = new SparkConf().setAppName("Kerberos kafka")
  val spark = SparkSession.builder().config(conf).enableHiveSupport().getOrCreate()
  System.setProperty("java.security.auth.login.config", "path to jaas.conf")
  spark.sparkContext.setLogLevel("ERROR")

  def main(args: Array[String]): Unit = {
    val test = spark.sql("select * from testing.test")
    test.show()
    println("publishing to kafka...")
    val test_final = test.selectExpr("cast(to_json(struct(*)) as string) AS value")
    test_final.show()
    test_final.write.format("kafka")
      .option("kafka.bootstrap.servers", "XXXXXXXXX:9093")
      .option("topic", "test")
      .option("security.protocol", "SASL_SSL")
      .option("sasl.kerberos.service.name", "kafka")
      .save()
  }
}
When I run the above code, it fails with this error:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
When I looked into the error logs on executors, I see this:
18/08/20 22:06:05 INFO producer.ProducerConfig: ProducerConfig values:
compression.type = none
metric.reporters = []
metadata.max.age.ms = 300000
metadata.fetch.timeout.ms = 60000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [xxxxxxxxx:9093]
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
max.block.ms = 60000
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
ssl.truststore.password = null
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
client.id =
ssl.endpoint.identification.algorithm = null
ssl.protocol = TLS
request.timeout.ms = 30000
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
acks = 1
batch.size = 16384
ssl.keystore.location = null
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
retries = 0
max.request.size = 1048576
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
send.buffer.bytes = 131072
linger.ms = 0
18/08/20 22:06:05 INFO utils.AppInfoParser: Kafka version : 0.9.0-kafka-2.0.2
18/08/20 22:06:05 INFO utils.AppInfoParser: Kafka commitId : unknown
18/08/20 22:06:05 INFO datasources.FileScanRDD: Reading File path: hdfs://nameservice1/user/test5/dt=2017-08-04/5a8bb121-3cab-4bed-a32b-9d0fae4a4e8b.parquet, range: 0-142192, partition values: [2017-08-04]
18/08/20 22:06:05 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 4
18/08/20 22:06:05 INFO memory.MemoryStore: Block broadcast_4_piece0 stored as bytes in memory (estimated size 33.9 KB, free 5.2 GB)
18/08/20 22:06:05 INFO broadcast.TorrentBroadcast: Reading broadcast variable 4 took 224 ms
18/08/20 22:06:05 INFO memory.MemoryStore: Block broadcast_4 stored as values in memory (estimated size 472.2 KB, free 5.2 GB)
18/08/20 22:06:06 WARN clients.NetworkClient: Bootstrap broker xxxxxxxxx:9093:9093 disconnected
18/08/20 22:06:06 WARN clients.NetworkClient: Bootstrap broker xxxxxxxxx:9093:9093 disconnected
18/08/20 22:06:07 WARN clients.NetworkClient: Bootstrap broker xxxxxxxxx:9093:9093 disconnected
18/08/20 22:06:07 WARN clients.NetworkClient: Bootstrap broker xxxxxxxxx:9093:9093 disconnected
18/08/20 22:06:07 WARN clients.NetworkClient: Bootstrap broker xxxxxxxxx:9093:9093 disconnected
18/08/20 22:06:08 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 4
18/08/20 22:06:08 INFO executor.Executor: Running task 1.0 in stage 2.0 (TID 4)
18/08/20 22:06:08 WARN clients.NetworkClient: Bootstrap broker xxxxxxxxx:9093:9093 disconnected
18/08/20 22:06:08 INFO datasources.FileScanRDD: Reading File path: hdfs://nameservice1/user/test5/dt=2017-08-10/2175e5d9-e969-41e9-8aa2-f329b5df06bf.parquet, range: 0-77484, partition values: [2017-08-10]
18/08/20 22:06:08 WARN clients.NetworkClient: Bootstrap broker xxxxxxxxx:9093:9093 disconnected
18/08/20 22:06:09 WARN clients.NetworkClient: Bootstrap broker xxxxxxxxx:9093:9093 disconnected
18/08/20 22:06:09 WARN clients.NetworkClient: Bootstrap broker xxxxxxxxx:9093:9093 disconnected
18/08/20 22:06:10 WARN clients.NetworkClient: Bootstrap broker xxxxxxxxx:9093:9093 disconnected
18/08/20 22:06:10 WARN clients.NetworkClient: Bootstrap broker xxxxxxxxx:9093:9093 disconnected
In the log above, I see three conflicting entries:
security.protocol = PLAINTEXT
sasl.kerberos.service.name = null
INFO utils.AppInfoParser: Kafka version : 0.9.0-kafka-2.0.2
I am setting the security.protocol and sasl.kerberos.service.name values in my test_final.write options. Does that mean the configs are not being passed? The Kafka dependency I am using in my jar is:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.10.2.1</version>
</dependency>
Does the 0.10.2.1 version conflict with 0.9.0-kafka-2.0.2, and could this be causing the issue?
Here is my jaas.conf:
/* $Id$ */
kinit {
  com.sun.security.auth.module.Krb5LoginModule required;
};

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  doNotPrompt=true
  useTicketCache=true
  useKeyTab=true
  principal="user@CORP.COM"
  serviceName="kafka"
  keyTab="/data/home/keytabs/user.keytab"
  client=true;
};
Here is my spark-submit command:
spark2-submit --master yarn --class app --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=path to jaas.conf" --conf "spark.driver.extraJavaOptions=-Djava.security.auth.login.config=path to jaas.conf" --files path to jaas.conf --conf "spark.driver.extraClassPath=path to spark-sql-kafka-0-10_2.11-2.2.0.jar" --conf "spark.executor.extraClassPath=path to spark-sql-kafka-0-10_2.11-2.2.0.jar" --num-executors 2 --executor-cores 4 --executor-memory 10g --driver-memory 5g ./KerberizedKafkaConnect-1.0-SNAPSHOT-shaded.jar
Any help would be appreciated. Thank you!

I am not completely sure if this will fix your specific issue, but in Spark's Kafka integration (both Structured Streaming and batch writes), the security options must be prefixed with kafka. — that includes security.protocol and sasl.kerberos.service.name.
So you will have the following:
import org.apache.kafka.clients.CommonClientConfigs
import org.apache.kafka.common.config.SslConfigs

// `security` on the right-hand side is your own settings object holding the
// SSL values; the map is renamed here so it doesn't shadow that object.
val securityOptions = Map(
  CommonClientConfigs.SECURITY_PROTOCOL_CONFIG -> security.securityProtocol,
  SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG -> security.sslTrustStoreLocation,
  SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG -> security.sslTrustStorePassword,
  SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG -> security.sslKeyStoreLocation,
  SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG -> security.sslKeyStorePassword,
  SslConfigs.SSL_KEY_PASSWORD_CONFIG -> security.sslKeyPassword
).map { case (k, v) => ("kafka." + k) -> v }

test_final.write.format("kafka")
  .option("kafka.bootstrap.servers", "XXXXXXXXX:9093")
  .option("topic", "test")
  .options(securityOptions)
  .save()
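To see why the prefix matters: Spark's Kafka source and sink forward only the options that begin with kafka. to the underlying Kafka client (with the prefix stripped), so unprefixed keys such as security.protocol never reach the producer — which is exactly why the executor log above shows security.protocol = PLAINTEXT. A minimal sketch of that filtering, written in Python for brevity (a simplified illustration, not Spark's actual implementation):

```python
def kafka_params(options):
    """Simplified model of how Spark's Kafka sink selects producer config:
    keep only keys with the 'kafka.' prefix, and strip the prefix."""
    prefix = "kafka."
    return {k[len(prefix):]: v for k, v in options.items() if k.startswith(prefix)}

# Options as written in the question: the security settings lack the prefix,
# so only bootstrap.servers reaches the producer.
opts = {
    "kafka.bootstrap.servers": "XXXXXXXXX:9093",
    "topic": "test",
    "security.protocol": "SASL_SSL",          # ignored: no 'kafka.' prefix
    "sasl.kerberos.service.name": "kafka",    # ignored: no 'kafka.' prefix
}
print(kafka_params(opts))  # {'bootstrap.servers': 'XXXXXXXXX:9093'}
```

With the prefixed form (`kafka.security.protocol`, `kafka.sasl.kerberos.service.name`), all three settings would survive the filter.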

Related

Structured Streaming - suddenly giving error while writing to (Strimzi)Kafka topic

I have a Structured Streaming job which reads data from a Kafka topic (on a VM) and writes to another Kafka topic on GKE (I should be using MirrorMaker for this, but have not implemented it yet). It suddenly stopped working (it had been working fine for many months), giving the following error:
22/10/18 19:02:35 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
22/10/18 19:03:42 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) (stream2kafka2-w-1.c.versa-sml-googl.internal executor 2): org.apache.kafka.common.errors.TimeoutException: Topic syslog.ueba-us4.v1.versa.demo4 not present in metadata after 60000 ms.
22/10/18 19:03:42 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.1 in stage 0.0 (TID 1) (stream2kafka2-w-1.c.versa-sml-googl.internal executor 2): org.apache.spark.sql.execution.streaming.continuous.ContinuousTaskRetryException: Continuous execution does not support task retry
at org.apache.spark.sql.execution.streaming.continuous.ContinuousDataSourceRDD.compute(ContinuousDataSourceRDD.scala:76)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD.$anonfun$compute$1(ContinuousWriteRDD.scala:53)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1473)
at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD.compute(ContinuousWriteRDD.scala:84)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Suppressed: java.lang.NullPointerException
at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD.$anonfun$compute$7(ContinuousWriteRDD.scala:84)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1495)
... 11 more
The code is very simple and has been working for many months now:
class ReadFromKafka:
    def readAndWrite(self):
        df = spark \
            .readStream \
            .format('kafka') \
            .option("kafka.bootstrap.servers", kafkaBrokersSrc) \
            .option("subscribe", srcTopic) \
            .option("startingOffsets", "latest") \
            .option("failOnDataLoss", "false") \
            .load()

        query = df.selectExpr("CAST(value AS STRING)", "cast(key AS String)") \
            .writeStream \
            .format("kafka") \
            .option("checkpointLocation", checkpoint) \
            .option("outputMode", "append") \
            .option("truncate", "false") \
            .option("kafka.security.protocol", security_protocol) \
            .option("kafka.ssl.truststore.location", ssl_truststore_location) \
            .option("kafka.ssl.truststore.password", ssl_truststore_password) \
            .option("kafka.ssl.keystore.location", ssl_keystore_location) \
            .option("kafka.ssl.keystore.password", ssl_keystore_password) \
            .option("kafka.bootstrap.servers", kafkaBrokersTgt) \
            .option("topic", tgtTopic) \
            .option("kafka.ssl.keystore.type", "PKCS12") \
            .option("kafka.ssl.truststore.type", "PKCS12") \
            .trigger(continuous='5 seconds') \
            .start()

        query.awaitTermination()
I'm running this on Google Dataproc:
gcloud dataproc jobs submit pyspark /Users/karanalang/PycharmProjects/Kafka/versa-movedata2kafka/StructuredStreaming-readFromKafka-versa-sml-googl-v1.py --cluster stream2kafka --properties spark.jars.packages=org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2,spark.dynamicAllocation.enabled=true,spark.shuffle.service.enabled=true --files gs://kafka-certs/versa-kafka-gke-ca.p12,gs://kafka-certs/syslog-vani-noacl.p12 --region us-east1
Any ideas on what the issue might be and how to debug it? Thanks in advance!
Update:
I'm able to read from and write to the Kafka topic when I use a plain Python Kafka producer/consumer, but the Structured Streaming code is failing.
Update:
I'm able to read the topic from GKE using spark-submit (batch and streaming mode); the SSL certs are stored on my local Mac, from where spark-submit is run.
So it seems Spark itself is behaving correctly.
However, when I try reading from the Kafka topic on GKE using gcloud dataproc jobs submit, it gives an error saying the broker is not found (shown below).
The SSL certs are stored in a storage bucket, and I'm passing them as '--files gs://kafka-certs/versa-kafka-gke-ca.p12,gs://kafka-certs/syslog-vani-noacl.p12'.
In the PySpark code, I access them using the bare file names. This had been working earlier; however, I suspect it might be causing the issue.
Question: is this the correct way to access the certs when using Dataproc?
Command:
gcloud dataproc jobs submit pyspark /Users/karanalang/PycharmProjects/Kafka/versa-movedata2kafka/StructuredStream-stream-readfrom-versa-sml-googl-certs-gs.py --cluster stream2kafka2 --properties spark.jars.packages=org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2,spark.dynamicAllocation.enabled=true,spark.shuffle.service.enabled=true --files gs://kafka-certs/versa-kafka-gke-ca.p12,gs://kafka-certs/syslog-vani-noacl.p12 --region us-east1
Code :
kafkaBrokersTgt = 'IP:port'
tgtTopic = "syslog.ueba-us4.v1.versa.demo3"
checkpoint = 'gs://versa-move2syslogdemo3/'
security_protocol = "SSL"
ssl_truststore_location = "versa-kafka-gke-ca.p12"
ssl_truststore_password = 'xxxx'
ssl_keystore_location = 'syslog-vani-noacl.p12'
ssl_keystore_password = 'yyyy'

print(" reading from Kafka topic syslog-demo3 on versa-sml-googl, certs on gs storage ")

df_reader = spark.readStream.format('kafka') \
    .option("kafka.bootstrap.servers", kafkaBrokersTgt) \
    .option("kafka.security.protocol", security_protocol) \
    .option("kafka.ssl.truststore.location", ssl_truststore_location) \
    .option("kafka.ssl.truststore.password", ssl_truststore_password) \
    .option("kafka.ssl.keystore.location", ssl_keystore_location) \
    .option("kafka.ssl.keystore.password", ssl_keystore_password) \
    .option("subscribe", tgtTopic) \
    .option("startingOffsets", "earliest") \
    .option("maxOffsetsPerTrigger", 20) \
    .option("kafka.max.poll.records", 20) \
    .option("kafka.ssl.keystore.type", "PKCS12") \
    .option("kafka.ssl.truststore.type", "PKCS12") \
    .load()
# .option("kafka.group.id", "ss.consumer1") \

query = df_reader.selectExpr("CAST(value AS STRING)", "cast(key AS String)") \
    .writeStream \
    .format("console") \
    .option("numRows", 500) \
    .option("outputMode", "complete") \
    .option("truncate", "false") \
    .trigger(processingTime='3 minutes') \
    .option("checkpointLocation", checkpoint) \
    .start()

query.awaitTermination()
Error :
reading from Kafka topic syslog-demo3 on versa-sml-googl, certs on gs storage
22/10/20 04:37:48 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
22/10/20 04:37:50 INFO org.apache.kafka.clients.consumer.ConsumerConfig: ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [34.138.213.152:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = consumer-spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0-1
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 1
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = SSL
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = syslog-vani-noacl.p12
ssl.keystore.password = [hidden]
ssl.keystore.type = PKCS12
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = versa-kafka-gke-ca.p12
ssl.truststore.password = [hidden]
ssl.truststore.type = PKCS12
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
22/10/20 04:37:50 INFO org.apache.kafka.common.utils.AppInfoParser: Kafka version: 2.6.0
22/10/20 04:37:50 INFO org.apache.kafka.common.utils.AppInfoParser: Kafka commitId: 62abe01bee039651
22/10/20 04:37:50 INFO org.apache.kafka.common.utils.AppInfoParser: Kafka startTimeMs: 1666240670692
22/10/20 04:37:50 INFO org.apache.kafka.clients.consumer.KafkaConsumer: [Consumer clientId=consumer-spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0-1, groupId=spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0] Subscribed to topic(s): syslog.ueba-us4.v1.versa.demo3
22/10/20 04:40:01 WARN org.apache.kafka.clients.NetworkClient: [Consumer clientId=consumer-spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0-1, groupId=spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0] Connection to node -1 (/34.138.213.152:9094) could not be established. Broker may not be available.
22/10/20 04:40:01 WARN org.apache.kafka.clients.NetworkClient: [Consumer clientId=consumer-spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0-1, groupId=spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0] Bootstrap broker 34.138.213.152:9094 (id: -1 rack: null) disconnected
22/10/20 04:42:12 WARN org.apache.kafka.clients.NetworkClient: [Consumer clientId=consumer-spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0-1, groupId=spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0] Connection to node -1 (/34.138.213.152:9094) could not be established. Broker may not be available.
22/10/20 04:42:12 WARN org.apache.kafka.clients.NetworkClient: [Consumer clientId=consumer-spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0-1, groupId=spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0] Bootstrap broker 34.138.213.152:9094 (id: -1 rack: null) disconnected
22/10/20 04:44:23 WARN org.apache.kafka.clients.NetworkClient: [Consumer clientId=consumer-spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0-1, groupId=spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0] Connection to node -1 (/34.138.213.152:9094) could not be established. Broker may not be available.
22/10/20 04:44:23 WARN org.apache.kafka.clients.NetworkClient: [Consumer clientId=consumer-spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0-1, groupId=spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0] Bootstrap broker 34.138.213.152:9094 (id: -1 rack: null) disconnected
22/10/20 04:46:34 WARN org.apache.kafka.clients.NetworkClient: [Consumer clientId=consumer-spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0-1, groupId=spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0] Connection to node -1 (/34.138.213.152:9094) could not be established. Broker may not be available.
22/10/20 04:46:34 WARN org.apache.kafka.clients.NetworkClient: [Consumer clientId=consumer-spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0-1, groupId=spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0] Bootstrap broker 34.138.213.152:9094 (id: -1 rack: null) disconnected
22/10/20 04:48:45 WARN org.apache.kafka.clients.NetworkClient: [Consumer clientId=consumer-spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0-1, groupId=spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0] Connection to node -1 (/34.138.213.152:9094) could not be established. Broker may not be available.
22/10/20 04:48:45 WARN org.apache.kafka.clients.NetworkClient: [Consumer clientId=consumer-spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0-1, groupId=spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0] Bootstrap broker 34.138.213.152:9094 (id: -1 rack: null) disconnected
22/10/20 04:50:56 WARN org.apache.kafka.clients.NetworkClient: [Consumer clientId=consumer-spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0-1, groupId=spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0] Connection to node -1 (/34.138.213.152:9094) could not be established. Broker may not be available.
22/10/20 04:50:56 WARN org.apache.kafka.clients.NetworkClient: [Consumer clientId=consumer-spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0-1, groupId=spark-kafka-source-10bf0d29-761e-4b5a-95c6-308e036ca6f9-764682263-driver-0] Bootstrap broker 34.138.213.152:9094 (id: -1 rack: null) disconnected
Update:
Per the comment from @Dagang, when I use SparkFiles.get(filename), here is the error I get:
java.nio.file.NoSuchFileException: /hadoop/spark/tmp/spark-19943d8b-d8c7-4406-b5cf-c352837ad71e/userFiles-32e5ebe3-7013-44f2-a0bd-9fe7bb774985/syslog-vani-noacl.p12
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
at sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
at sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.Files.getLastModifiedTime(Files.java:2266)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory$SecurityStore.lastModifiedMs(DefaultSslEngineFactory.java:312)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory$SecurityStore.<init>(DefaultSslEngineFactory.java:284)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory.createKeystore(DefaultSslEngineFactory.java:255)
@Dagang, @OneCricketeer - I logged onto the worker nodes and I see the SSL certs uploaded (when I pass the certs via --files gs:// in gcloud dataproc jobs submit).
How do I access them in code? SparkFiles.get('cert') is not working, since the path SparkFiles returns is not the same.
SSL certs on The Worker Node :
------------------------------
root@stream2kafka2-w-0:/# find . -name versa-kafka-gke-ca.p12
./hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1666127856693_0016/container_e01_1666127856693_0016_01_000002/versa-kafka-gke-ca.p12
./hadoop/yarn/nm-local-dir/usercache/root/filecache/67/versa-kafka-gke-ca.p12
./hadoop/yarn/nm-local-dir/usercache/root/filecache/39/versa-kafka-gke-ca.p12
./hadoop/yarn/nm-local-dir/usercache/root/filecache/165/versa-kafka-gke-ca.p12
./hadoop/yarn/nm-local-dir/usercache/root/filecache/194/versa-kafka-gke-ca.p12
./hadoop/yarn/nm-local-dir/usercache/root/filecache/109/versa-kafka-gke-ca.p12
./hadoop/yarn/nm-local-dir/usercache/root/filecache/81/versa-kafka-gke-ca.p12
./hadoop/yarn/nm-local-dir/usercache/root/filecache/53/versa-kafka-gke-ca.p12
./hadoop/yarn/nm-local-dir/usercache/root/filecache/208/versa-kafka-gke-ca.p12
./hadoop/yarn/nm-local-dir/usercache/root/filecache/179/versa-kafka-gke-ca.p12
./hadoop/yarn/nm-local-dir/usercache/root/filecache/151/versa-kafka-gke-ca.p12
./hadoop/yarn/nm-local-dir/usercache/root/filecache/137/versa-kafka-gke-ca.p12
./hadoop/yarn/nm-local-dir/usercache/root/filecache/23/versa-kafka-gke-ca.p12
./hadoop/yarn/nm-local-dir/usercache/root/filecache/95/versa-kafka-gke-ca.p12
./hadoop/yarn/nm-local-dir/usercache/root/filecache/123/versa-kafka-gke-ca.p12
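As the find output above shows, YARN localizes files shipped with --files into each container's working directory, which is why referencing the bare filename worked on executors, while SparkFiles.get returns a driver-side temp path that does not exist on the workers. A small sketch of that resolution order, under the assumption that the bare filename should be tried relative to the process's current working directory first (resolve_localized is a hypothetical helper, not a Spark API):

```python
import os

def resolve_localized(filename, extra_dirs=()):
    """Look for a YARN-localized file: first in the current working
    directory (where --files land inside each container), then in any
    extra candidate directories supplied by the caller."""
    for d in (os.getcwd(), *extra_dirs):
        candidate = os.path.join(d, filename)
        if os.path.exists(candidate):
            return candidate
    raise FileNotFoundError(filename)
```

With this approach, the keystore/truststore options would be given as plain filenames (as in the code above), and each executor resolves them against its own container directory.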

java.nio.file.InvalidPathException : "Illegal char <:> at index 2" error with Kafka Connector in Mule 4 Munits

I am facing a very tough situation running MUnit on my project. I am using Mule 4.3.0 and Anypoint Studio 7.4.
Apparently, the error occurs while loading a certs/cacerts file as a property used in an Apache Kafka Connector TLS configuration.
It works absolutely fine when running the normal Mule code (with the TLS context), but fails when running MUnit.
I have tried many ways to resolve this with my team, but couldn't fix it. Although similar errors have been reported by other developers occasionally, I couldn't conclude whether this is a bug in the Mule runtime or specifically a Kafka Connector problem.
Finally, after removing the TLS context in the Kafka config, MUnit works fine. But without TLS enabled, my project is basically useless.
I need your help resolving this and making my tests work with the TLS configuration in its rightful place. Also, please take a look at this question in the forums: Mule Kafka Problem
Given below are two snapshots of the configuration used and the error reported in the MUnit console:
Kafka Consumer TLS Configuration Snapshot:
Error Snapshot:
Complete error report given below:
INFO 2020-10-21 18:44:12,077 [main] org.mule.munit.remote.container.SuiteRunDispatcher: Suite errortopic-db-test-suite.xml will not be deployed: Suite was filtered from running
INFO 2020-10-21 18:44:12,078 [munit.01] org.mule.munit.runner.remote.api.server.RunnerServer: Waiting for client connection
INFO 2020-10-21 18:44:12,086 [munit.01] org.mule.munit.runner.remote.api.server.RunnerServer: Client connection received from 127.0.0.1 - true
WARN 2020-10-21 18:44:19,766 [munit.01] org.mule.runtime.core.internal.security.tls.TlsProperties: File tls-default.conf not found, using default configuration.
INFO 2020-10-21 18:44:19,767 [munit.01] org.mule.runtime.api.tls.AbstractTlsContextFactoryBuilderFactory: Loaded TlsContextFactoryBuilderFactory implementation 'org.mule.runtime.module.tls.api.DefaultTlsContextFactoryBuilderFactory' from classloader 'java.net.URLClassLoader@7fd8c559'
INFO 2020-10-21 18:44:21,684 [munit.01] org.mule.runtime.core.privileged.lifecycle.AbstractLifecycleManager: Initialising Bean: org.mule.runtime.module.extension.internal.runtime.config.ConfigurationProviderToolingAdapter-HTTP_Request_configuration_oauth
INFO 2020-10-21 18:44:21,736 [munit.01] org.mule.runtime.core.privileged.lifecycle.AbstractLifecycleManager: Initialising Bean: org.mule.runtime.module.extension.internal.runtime.config.ConfigurationProviderToolingAdapter-HTTP_Request_configuration-By
WARN 2020-10-21 18:44:21,800 [munit.01] org.mule.runtime.core.internal.security.tls.TlsProperties: File tls-default.conf not found, using default configuration.
INFO 2020-10-21 18:44:21,802 [munit.01] org.mule.runtime.core.privileged.lifecycle.AbstractLifecycleManager: Initialising Bean: org.mule.runtime.module.extension.internal.runtime.config.ConfigurationProviderToolingAdapter-Apache_Kafka_Consumer_configuration
WARN 2020-10-21 18:44:21,810 [munit.01] org.mule.runtime.core.internal.security.tls.TlsProperties: File tls-default.conf not found, using default configuration.
INFO 2020-10-21 18:44:21,872 [munit.01] org.mule.runtime.core.privileged.lifecycle.AbstractLifecycleManager: Initialising Bean: org.mule.runtime.module.extension.internal.runtime.config.ConfigurationProviderToolingAdapter-Apache_Kafka_Producer_configuration
WARN 2020-10-21 18:44:21,879 [munit.01] org.mule.runtime.core.internal.security.tls.TlsProperties: File tls-default.conf not found, using default configuration.
INFO 2020-10-21 18:44:21,917 [munit.01] org.apache.kafka.clients.producer.ProducerConfig: ProducerConfig values:
acks = -1
batch.size = 16384
bootstrap.servers = [hiding intentional]
buffer.memory = 1024000
client.dns.lookup = default
client.id = producer-1
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class com.mulesoft.connectors.kafka.internal.model.serializer.InputStreamSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metadata.max.idle.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 1
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = [hidden]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = PLAIN
security.protocol = SASL_SSL
security.providers = null
send.buffer.bytes = 131072
ssl.cipher.suites = [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_RSA_WITH_AES_256_CBC_SHA256, TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256, TLS_DHE_DSS_WITH_AES_256_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384, TLS_DHE_RSA_WITH_AES_256_GCM_SHA384, TLS_DHE_DSS_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256, TLS_DHE_RSA_WITH_AES_128_GCM_SHA256, TLS_DHE_DSS_WITH_AES_128_GCM_SHA256, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = SunJSSE
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = \C:/Users/xxx/AppData/Local/Temp/munit-temp-dir/munitworkingdir5345007588634892776/container/apps/app/cacerts
ssl.truststore.password = [hidden]
ssl.truststore.type = jks
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class com.mulesoft.connectors.kafka.internal.model.serializer.InputStreamSerializer
INFO 2020-10-21 18:44:21,985 [munit.01] org.apache.kafka.common.security.authenticator.AbstractLogin: Successfully logged in.
INFO 2020-10-21 18:44:21,996 [munit.01] org.apache.kafka.clients.producer.KafkaProducer: [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 0 ms.
org.mule.runtime.api.exception.MuleRuntimeException: org.mule.runtime.api.lifecycle.InitialisationException: The consumer has an invalid configuration
Caused by: org.mule.runtime.api.lifecycle.InitialisationException: The consumer has an invalid configuration
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka producer
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:434)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:298)
at com.mulesoft.connectors.kafka.internal.connection.provider.ProducerConnectionProvider.initialise(ProducerConnectionProvider.java:437)
at com.mulesoft.connectors.kafka.internal.connection.provider.KafkaConnectionProvider.initialise(KafkaConnectionProvider.java:129)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:56)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:117)
at org.mule.runtime.core.internal.connection.AbstractConnectionProviderWrapper.initialise(AbstractConnectionProviderWrapper.java:113)
at org.mule.runtime.module.extension.internal.runtime.config.ClassLoaderConnectionProviderWrapper.initialise(ClassLoaderConnectionProviderWrapper.java:96)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:56)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:117)
at org.mule.runtime.core.internal.connection.AbstractConnectionProviderWrapper.initialise(AbstractConnectionProviderWrapper.java:113)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:56)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:117)
at org.mule.runtime.core.internal.connection.AbstractConnectionProviderWrapper.initialise(AbstractConnectionProviderWrapper.java:113)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:56)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:117)
at org.mule.runtime.module.extension.internal.runtime.config.LifecycleAwareConfigurationInstance.doInitialise(LifecycleAwareConfigurationInstance.java:297)
at org.mule.runtime.module.extension.internal.runtime.config.LifecycleAwareConfigurationInstance.initialise(LifecycleAwareConfigurationInstance.java:145)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:56)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:117)
at org.mule.runtime.module.extension.internal.runtime.config.LifecycleAwareConfigurationProvider.lambda$null$0(LifecycleAwareConfigurationProvider.java:83)
at org.mule.runtime.core.privileged.lifecycle.AbstractLifecycleManager.invokePhase(AbstractLifecycleManager.java:132)
at org.mule.runtime.core.internal.lifecycle.DefaultLifecycleManager.fireInitialisePhase(DefaultLifecycleManager.java:46)
at org.mule.runtime.module.extension.internal.runtime.config.LifecycleAwareConfigurationProvider.lambda$initialise$1(LifecycleAwareConfigurationProvider.java:81)
at org.mule.runtime.core.api.util.ExceptionUtils.tryExpecting(ExceptionUtils.java:224)
at org.mule.runtime.core.api.util.ClassUtils.withContextClassLoader(ClassUtils.java:966)
at org.mule.runtime.module.extension.internal.runtime.config.LifecycleAwareConfigurationProvider.initialise(LifecycleAwareConfigurationProvider.java:80)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:56)
at org.mule.runtime.core.api.util.func.CheckedConsumer.accept(CheckedConsumer.java:19)
at org.mule.runtime.core.internal.lifecycle.phases.DefaultLifecyclePhase.applyLifecycle(DefaultLifecyclePhase.java:115)
at org.mule.runtime.core.internal.lifecycle.phases.MuleContextInitialisePhase.applyLifecycle(MuleContextInitialisePhase.java:73)
at org.mule.runtime.config.internal.SpringRegistryLifecycleManager$SpringContextInitialisePhase.applyLifecycle(SpringRegistryLifecycleManager.java:128)
at org.mule.runtime.core.internal.lifecycle.RegistryLifecycleManager.doApplyLifecycle(RegistryLifecycleManager.java:175)
at org.mule.runtime.core.internal.lifecycle.RegistryLifecycleManager.applyPhase(RegistryLifecycleManager.java:146)
at org.mule.runtime.config.internal.SpringRegistry.applyLifecycle(SpringRegistry.java:289)
at org.mule.runtime.core.internal.registry.MuleRegistryHelper.applyLifecycle(MuleRegistryHelper.java:339)
at org.mule.runtime.config.internal.LazyMuleArtifactContext.initializeComponents(LazyMuleArtifactContext.java:287)
at org.mule.runtime.config.internal.LazyMuleArtifactContext.lambda$applyLifecycle$4(LazyMuleArtifactContext.java:250)
at org.mule.runtime.core.internal.context.DefaultMuleContext.withLifecycleLock(DefaultMuleContext.java:531)
at org.mule.runtime.config.internal.LazyMuleArtifactContext.applyLifecycle(LazyMuleArtifactContext.java:248)
at org.mule.runtime.config.internal.LazyMuleArtifactContext.initializeComponents(LazyMuleArtifactContext.java:329)
at org.mule.runtime.config.internal.LazyMuleArtifactContext.initializeComponents(LazyMuleArtifactContext.java:317)
at org.mule.munit.runner.config.TestComponentLocator.initializeComponents(TestComponentLocator.java:63)
at org.mule.munit.runner.model.builders.SuiteBuilder.build(SuiteBuilder.java:78)
at org.mule.munit.runner.remote.api.server.RunMessageHandler.buildSuite(RunMessageHandler.java:108)
at org.mule.munit.runner.remote.api.server.RunMessageHandler.parseSuiteMessage(RunMessageHandler.java:94)
at org.mule.munit.runner.remote.api.server.RunMessageHandler.parseAndRun(RunMessageHandler.java:81)
at org.mule.munit.runner.remote.api.server.RunMessageHandler.handle(RunMessageHandler.java:75)
at org.mule.munit.runner.remote.api.server.RunnerServer.handleClientMessage(RunnerServer.java:145)
at org.mule.munit.runner.remote.api.server.RunnerServer.run(RunnerServer.java:91)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.mule.service.scheduler.internal.AbstractRunnableFutureDecorator.doRun(AbstractRunnableFutureDecorator.java:111)
at org.mule.service.scheduler.internal.RunnableFutureDecorator.run(RunnableFutureDecorator.java:54)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.KafkaException: java.nio.file.InvalidPathException: Illegal char <:> at index 2: \C:/Users/xxx/AppData/Local/Temp/munit-temp-dir/munitworkingdir5345007588634892776/container/apps/app/cacerts
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:172)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:157)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:73)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:105)
at org.apache.kafka.clients.producer.KafkaProducer.newSender(KafkaProducer.java:442)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:423)
... 56 more
Caused by: java.nio.file.InvalidPathException: Illegal char <:> at index 2: \C:/Users/xxx/AppData/Local/Temp/munit-temp-dir/munitworkingdir5345007588634892776/container/apps/app/cacerts
at sun.nio.fs.WindowsPathParser.normalize(WindowsPathParser.java:182)
at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:153)
at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:77)
at sun.nio.fs.WindowsPath.parse(WindowsPath.java:94)
at sun.nio.fs.WindowsFileSystem.getPath(WindowsFileSystem.java:255)
at java.nio.file.Paths.get(Paths.java:84)
at org.apache.kafka.common.security.ssl.SslEngineBuilder$SecurityStore.lastModifiedMs(SslEngineBuilder.java:298)
at org.apache.kafka.common.security.ssl.SslEngineBuilder$SecurityStore.<init>(SslEngineBuilder.java:275)
at org.apache.kafka.common.security.ssl.SslEngineBuilder.createTruststore(SslEngineBuilder.java:182)
at org.apache.kafka.common.security.ssl.SslEngineBuilder.<init>(SslEngineBuilder.java:100)
at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:95)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:168)
... 61 more
This has been identified as a bug in Anypoint Studio.
MuleSoft Support has suggested the following:
Upgrade Studio to the latest patch, 4.3.0-20200925, which includes all the fixes.
The patches are cumulative, so all fixes from the August release are also available in the latest one.
The distributions are stored in the MuleSoft Nexus EE repository, so make sure you have the credentials configured in your settings.xml.
Follow the Patching documentation, which explains how to patch MUnit runs using Maven or Studio.
Note: the first execution may take a while because the runtime artifacts will be downloaded to your local repository.
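The stack trace shows the actual defect: the truststore location picked up a stray leading backslash (`\C:/...`), which `Paths.get` rejects on Windows with `InvalidPathException`. Purely as an illustration of the malformed path (the real fix is the Studio patch above), a hypothetical helper that detects and repairs it:

```python
def strip_stray_backslash(path: str) -> str:
    # Drop a stray leading backslash before a Windows drive letter,
    # e.g. '\C:/Users/xxx/cacerts' -> 'C:/Users/xxx/cacerts'.
    if len(path) >= 3 and path[0] == "\\" and path[1].isalpha() and path[2] == ":":
        return path[1:]
    return path
```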

Kafka Spark Structured Streaming with SASL_SSL authentication

I have been trying to use the Spark Structured Streaming API to connect to a Kafka cluster with SASL_SSL. I passed the jaas.conf file to the executors, but it seems I could not get the keystore and truststore values picked up.
I tried passing the values as mentioned in this Spark link,
and also tried passing them through the code as in this link.
Still no luck.
Here is the log:
20/02/28 10:00:53 INFO streaming.StreamExecution: Starting [id = e176f5e7-7157-4df5-93ce-1e267bae6125, runId = 03225a69-ec00-45d9-8092-1467da34980f]. Use flight/checkpoint to store the query checkpoint.
20/02/28 10:00:53 INFO yarn.ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
20/02/28 10:00:53 INFO spark.SparkContext: Invoking stop() from shutdown hook
20/02/28 10:00:53 INFO server.AbstractConnector: Stopped Spark#46202f7b{HTTP/1.1,[http/1.1]}{0.0.0.0:0}
20/02/28 10:00:53 INFO consumer.ConsumerConfig: ConsumerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
max.partition.fetch.bytes = 1048576
bootstrap.servers = [broker1:9093, broker2:9093]
ssl.keystore.type = JKS
enable.auto.commit = false
sasl.mechanism = GSSAPI
interceptor.classes = null
exclude.internal.topics = true
ssl.truststore.password = null
client.id =
ssl.endpoint.identification.algorithm = null
max.poll.records = 1
check.crcs = true
request.timeout.ms = 40000
heartbeat.interval.ms = 3000
auto.commit.interval.ms = 5000
receive.buffer.bytes = 65536
ssl.truststore.type = JKS
ssl.truststore.location = null
ssl.keystore.password = null
fetch.min.bytes = 1
send.buffer.bytes = 131072
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
group.id = spark-kafka-source-93d170e9-977c-40fc-9e5d-790d253fcff5-409016337-driver-0
retry.backoff.ms = 100
ssl.secure.random.implementation = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
ssl.key.password = null
fetch.max.wait.ms = 500
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
session.timeout.ms = 30000
metrics.num.samples = 2
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = SASL_SSL
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
auto.offset.reset = earliest
20/02/28 10:00:53 INFO ui.SparkUI: Stopped Spark web UI at http://<Server>:41037
20/02/28 10:00:53 ERROR streaming.StreamExecution: Query [id = e176f5e7-7157-4df5-93ce-1e267bae6125, runId = 03225a69-ec00-45d9-8092-1467da34980f] terminated with error
org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:702)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:557)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:540)
at org.apache.spark.sql.kafka010.SubscribeStrategy.createConsumer(ConsumerStrategy.scala:62)
at org.apache.spark.sql.kafka010.KafkaOffsetReader.createConsumer(KafkaOffsetReader.scala:297)
at org.apache.spark.sql.kafka010.KafkaOffsetReader.<init>(KafkaOffsetReader.scala:78)
at org.apache.spark.sql.kafka010.KafkaSourceProvider.createSource(KafkaSourceProvider.scala:88)
at org.apache.spark.sql.execution.datasources.DataSource.createSource(DataSource.scala:243)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2$$anonfun$applyOrElse$1.apply(StreamExecution.scala:158)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2$$anonfun$applyOrElse$1.apply(StreamExecution.scala:155)
at scala.collection.mutable.MapLike$class.getOrElseUpdate(MapLike.scala:194)
at scala.collection.mutable.AbstractMap.getOrElseUpdate(Map.scala:80)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2.applyOrElse(StreamExecution.scala:155)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2.applyOrElse(StreamExecution.scala:153)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:267)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:267)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:266)
at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:256)
at org.apache.spark.sql.execution.streaming.StreamExecution.logicalPlan$lzycompute(StreamExecution.scala:153)
at org.apache.spark.sql.execution.streaming.StreamExecution.logicalPlan(StreamExecution.scala:147)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches(StreamExecution.scala:276)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:206)
Caused by: org.apache.kafka.common.KafkaException: javax.security.auth.login.LoginException: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. not available to garner authentication information from the user
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:86)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:70)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:83)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:623)
... 22 more
Caused by: javax.security.auth.login.LoginException: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. not available to garner authentication information from the user
at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Unknown Source)
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Unknown Source)
at com.sun.security.auth.module.Krb5LoginModule.login(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at javax.security.auth.login.LoginContext.invoke(Unknown Source)
at javax.security.auth.login.LoginContext.access$000(Unknown Source)
at javax.security.auth.login.LoginContext$4.run(Unknown Source)
at javax.security.auth.login.LoginContext$4.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(Unknown Source)
at javax.security.auth.login.LoginContext.login(Unknown Source)
at org.apache.kafka.common.security.authenticator.AbstractLogin.login(AbstractLogin.java:69)
at org.apache.kafka.common.security.kerberos.KerberosLogin.login(KerberosLogin.java:110)
at org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:46)
at org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:68)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:78)
... 25 more
20/02/28 10:00:53 INFO cluster.YarnClusterSchedulerBackend: Shutting down all executors
20/02/28 10:00:53 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
20/02/28 10:00:53 INFO cluster.SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
20/02/28 10:00:53 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/02/28 10:00:53 INFO memory.MemoryStore: MemoryStore cleared
20/02/28 10:00:53 INFO storage.BlockManager: BlockManager stopped
20/02/28 10:00:53 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
20/02/28 10:00:53 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/02/28 10:00:53 INFO spark.SparkContext: Successfully stopped SparkContext
20/02/28 10:00:53 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
20/02/28 10:00:53 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
20/02/28 10:00:53 INFO yarn.ApplicationMaster: Deleting staging directory hdfs://nameservice1/user/hasif.subair/.sparkStaging/application_1582866369627_0029
20/02/28 10:00:53 INFO util.ShutdownHookManager: Shutdown hook called
20/02/28 10:00:53 INFO util.ShutdownHookManager: Deleting directory /yarn/nm/usercache/hasif.subair/appcache/application_1582866369627_0029/spark-5addfec0-a99f-49e1-b9d1-671c331efb40
Code
val rawData = spark.readStream.format("kafka")
.option("kafka.bootstrap.servers", "broker1:9093, broker2:9093")
.option("subscribe", "hasif_test")
.option("spark.executor.extraJavaOptions", "-Djava.security.auth.login.config=jaas.conf")
.option("kafka.security.protocol", "SASL_SSL")
.option("ssl.truststore.location", "/etc/connect_ts/truststore.jks")
.option("ssl.truststore.password", "<PASSWORD>")
.option("ssl.keystore.location", "/etc/connect_ts/keystore.jks")
.option("ssl.keystore.password", "<PASSWORD>")
.option("ssl.key.password", "<PASSWORD>")
.load()
rawData.writeStream.option("path", "flight/output")
.option("checkpointLocation", "flight/checkpoint").format("csv").start()
spark-submit
spark2-submit --master yarn --deploy-mode cluster \
--conf spark.yarn.keytab=hasif.subair.keytab \
--conf spark.yarn.principal=hasif.subair@TEST.ABC \
--files /home/hasif.subair/jaas.conf \
--conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=./jaas.conf" \
--conf "spark.driver.extraJavaOptions=-Djava.security.auth.login.config=./jaas.conf" \
--conf "spark.kafka.clusters.hasif.ssl.truststore.location=/etc/ts/truststore.jks" \
--conf "spark.kafka.clusters.hasif.ssl.truststore.password=testcluster" \
--conf "spark.kafka.clusters.hasif.ssl.keystore.location=/etc/ts/keystore.jks" \
--conf "spark.kafka.clusters.hasif.ssl.keystore.password=testcluster" \
--conf "spark.kafka.clusters.hasif.ssl.key.password=testcluster" \
--jars spark-sql-kafka-0-10_2.11-2.2.0.jar \
--class TestApp test_app_2.11-0.1.jar \
jaas.conf
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=true
principal="hasif.subair@TEST.ABC"
useKeyTab=true
serviceName="kafka"
keyTab="hasif.subair.keytab"
client=true;
};
Any help will be deeply appreciated.
Kafka's own configurations can be set via DataStreamReader.option with the kafka. prefix, e.g.
spark.readStream.format("kafka")
  .option("kafka.ssl.truststore.location", "/etc/connect_ts/truststore.jks")
Use kafka.ssl.truststore.location instead of ssl.truststore.location.
Similarly, prefix the other SSL properties with kafka. and try again.
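The advice above relies on how Spark's Kafka source treats options: keys beginning with kafka. have the prefix stripped and are passed straight to the underlying Kafka client, while everything else (e.g. subscribe) is interpreted by Spark itself and never reaches the client. A minimal sketch of that mapping (an illustration, not Spark's actual implementation):

```python
def kafka_client_params(options: dict) -> dict:
    # Options prefixed with "kafka." are forwarded to the Kafka
    # consumer/producer with the prefix removed; unprefixed keys are
    # consumed by Spark itself and never reach the Kafka client.
    prefix = "kafka."
    return {k[len(prefix):]: v for k, v in options.items() if k.startswith(prefix)}
```

Applied to the question's options, ssl.truststore.location (no prefix) is filtered out, which is why the log shows it as null.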
I suspect the SSL values are not getting picked up. As you can see in your log, they are shown as null:
ssl.truststore.location = null
ssl.truststore.password = null
ssl.keystore.password = null
ssl.keystore.location = null
If the values were set properly, the log would instead show:
ssl.truststore.location = /etc/connect_ts/truststore.jks
ssl.truststore.password = [hidden]
ssl.keystore.password = [hidden]
ssl.keystore.location = /etc/connect_ts/keystore.jks
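One more thing worth checking (an assumption on my part, not something the log proves): in cluster mode, any file that jaas.conf references by a relative path, such as keyTab="hasif.subair.keytab", must itself be shipped with --files, because each executor resolves relative paths against its own working directory. If the keytab is missing, the Krb5LoginModule falls back to prompting for a password, which is consistent with the LoginException above. A hypothetical helper to list those references:

```python
import re

def jaas_keytab_refs(jaas_text: str) -> list:
    # Collect keyTab paths referenced in a JAAS config; any relative
    # path found here must be distributed alongside jaas.conf (e.g.
    # via --files) so executors can resolve it.
    return re.findall(r'keyTab\s*=\s*"([^"]+)"', jaas_text)
```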

Problem with sending message from flume to kafka

I have two hosts in two different VMs:
Flume on a CentOS 7 host and Kafka on a Cloudera host.
I connected the two and configured Kafka on the Cloudera host as the Flume sink, as shown in the configuration below:
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /root/Bureau/V1/outputCV
a1.sources.r1.fileHeader = true
a1.sources.r1.interceptors = timestampInterceptor
a1.sources.r1.interceptors.timestampInterceptor.type = timestamp
# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = flume-topic
a1.sinks.k1.kafka.bootstrap.servers = 192.168.5.129:9090,192.168.5.129:9091,192.168.5.129:9092
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
a1.sinks.k1.kafka.producer.compression.type = snappy
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
I get this error when Flume tries to send messages to Kafka:
[root@localhost V1]# /root/flume/bin/flume-ng agent --name a1 --conf-file /root/Bureau/V1/flumeconf/flumetest.conf
Warning: No configuration directory set! Use --conf <dir> to override.
Warning: JAVA_HOME is not set!
Info: Including Hadoop libraries found via (/root/hadoop/bin/hadoop) for HDFS access
Info: Including Hive libraries found via () for Hive access
+ exec /usr/bin/java -Xmx20m -cp '/root/flume/lib/*:/root/hadoop/etc/hadoop:/root/hadoop/share/hadoop/common/lib/*:/root/hadoop/share/hadoop/common/*:/root/hadoop/share/hadoop/hdfs:/root/hadoop/share/hadoop/hdfs/lib/*:/root/hadoop/share/hadoop/hdfs/*:/root/hadoop/share/hadoop/mapreduce/lib/*:/root/hadoop/share/hadoop/mapreduce/*:/root/hadoop/share/hadoop/yarn:/root/hadoop/share/hadoop/yarn/lib/*:/root/hadoop/share/hadoop/yarn/*:/lib/*' -Djava.library.path=:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib org.apache.flume.node.Application --name a1 --conf-file /root/Bureau/V1/flumeconf/flumetest.conf
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/flume/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/root/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2019-05-07 19:04:41,363 INFO node.PollingPropertiesFileConfigurationProvider: Configuration provider starting
2019-05-07 19:04:41,366 INFO node.PollingPropertiesFileConfigurationProvider: Reloading configuration file:/root/Bureau/V1/flumeconf/flumetest.conf
2019-05-07 19:04:41,374 INFO conf.FlumeConfiguration: Processing:k1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:c1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:r1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:k1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:r1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:k1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Added sinks: k1 Agent: a1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:k1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:k1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:r1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:k1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:r1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:r1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:c1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:r1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:k1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:c1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:k1
2019-05-07 19:04:41,375 WARN conf.FlumeConfiguration: Agent configuration for 'a1' has no configfilters.
2019-05-07 19:04:41,392 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [a1]
2019-05-07 19:04:41,392 INFO node.AbstractConfigurationProvider: Creating channels
2019-05-07 19:04:41,397 INFO channel.DefaultChannelFactory: Creating instance of channel c1 type memory
2019-05-07 19:04:41,400 INFO node.AbstractConfigurationProvider: Created channel c1
2019-05-07 19:04:41,401 INFO source.DefaultSourceFactory: Creating instance of source r1, type spooldir
2019-05-07 19:04:41,415 INFO sink.DefaultSinkFactory: Creating instance of sink: k1, type: org.apache.flume.sink.kafka.KafkaSink
2019-05-07 19:04:41,420 INFO kafka.KafkaSink: Using the static topic flume-topic. This may be overridden by event headers
2019-05-07 19:04:41,426 INFO node.AbstractConfigurationProvider: Channel c1 connected to [r1, k1]
2019-05-07 19:04:41,432 INFO node.Application: Starting new configuration:{ sourceRunners:{r1=EventDrivenSourceRunner: { source:Spool Directory source r1: { spoolDir: /root/Bureau/V1/outputCV } }} sinkRunners:{k1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor#492cf580 counterGroup:{ name:null counters:{} } }} channels:{c1=org.apache.flume.channel.MemoryChannel{name: c1}} }
2019-05-07 19:04:41,432 INFO node.Application: Starting Channel c1
2019-05-07 19:04:41,492 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: c1: Successfully registered new MBean.
2019-05-07 19:04:41,492 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: c1 started
2019-05-07 19:04:41,492 INFO node.Application: Starting Sink k1
2019-05-07 19:04:41,494 INFO node.Application: Starting Source r1
2019-05-07 19:04:41,494 INFO source.SpoolDirectorySource: SpoolDirectorySource source starting with directory: /root/Bureau/V1/outputCV
2019-05-07 19:04:41,525 INFO producer.ProducerConfig: ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [192.168.5.129:9090, 192.168.5.129:9091, 192.168.5.129:9092]
buffer.memory = 33554432
client.id =
compression.type = snappy
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 1
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
2019-05-07 19:04:41,536 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SOURCE, name: r1: Successfully registered new MBean.
2019-05-07 19:04:41,536 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: r1 started
2019-05-07 19:04:41,591 INFO utils.AppInfoParser: Kafka version : 2.0.1
2019-05-07 19:04:41,591 INFO utils.AppInfoParser: Kafka commitId : fa14705e51bd2ce5
2019-05-07 19:04:41,592 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: k1: Successfully registered new MBean.
2019-05-07 19:04:41,592 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: k1 started
2019-05-07 19:05:01,780 INFO avro.ReliableSpoolingFileEventReader: Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2019-05-07 19:05:01,781 INFO avro.ReliableSpoolingFileEventReader: Preparing to move file /root/Bureau/V1/outputCV/cv.txt to /root/Bureau/V1/outputCV/cv.txt.COMPLETED
2019-05-07 19:05:03,673 WARN clients.NetworkClient: [Producer clientId=producer-1] Connection to node -2 could not be established. Broker may not be available.
2019-05-07 19:05:03,776 WARN clients.NetworkClient: [Producer clientId=producer-1] Connection to node -2 could not be established. Broker may not be available.
2019-05-07 19:05:03,856 WARN clients.NetworkClient: [Producer clientId=producer-1] Connection to node -2 could not be established. Broker may not be available.
2019-05-07 19:05:03,910 INFO clients.Metadata: Cluster ID: F_Byx5toQju8jaLb3zFwAA
2019-05-07 19:05:04,084 WARN clients.NetworkClient: [Producer clientId=producer-1] Error connecting to node quickstart.cloudera:9092 (id: 0 rack: null)
java.io.IOException: Can't resolve address: quickstart.cloudera:9092
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:235)
at org.apache.kafka.common.network.Selector.connect(Selector.java:214)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:864)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:265)
at org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:266)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:238)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:101)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622)
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:233)
... 7 more
(the same WARN and UnresolvedAddressException stack trace repeats for each subsequent connection attempt)
2019-05-07 19:05:04,393 WARN clients.NetworkClient: [Producer clientId=producer-1] Error connecting to node quickstart.cloudera:9092 (id: 0 rack: null)
The "Error connecting to node" warning shows up only after messages are sent.
Can anyone help me solve this problem?
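The `UnresolvedAddressException` means the client was told by the cluster metadata to connect to `quickstart.cloudera:9092`, a hostname it cannot resolve via DNS. Two common workarounds, sketched below (the IP address is a placeholder; substitute the actual address of your broker host):

```shell
# Option 1: map the advertised hostname to the broker's IP on the client
# machine (192.168.56.101 is an example IP -- use your broker's real one):
echo "192.168.56.101 quickstart.cloudera" | sudo tee -a /etc/hosts

# Option 2: on the broker side, set advertised.listeners in
# server.properties to a host/IP that clients can actually resolve:
# advertised.listeners=PLAINTEXT://<resolvable-host-or-ip>:9092
```

Note that fixing `bootstrap.servers` alone is not enough: after the initial bootstrap connection, clients reconnect using whatever address the broker advertises in its metadata, which is why the error only appears after the first messages are sent.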

Kafka Connect Distributed is not working

I am trying to run Kafka Connect in distributed mode to read from a topic and write to HDFS.
The process starts successfully with the required properties files, but I am not able to commit the events to HDFS.
Please see the full logs below.
Note: Broker details have been masked and all other details are mentioned in the logs.
[2018-06-26 11:49:48,474] INFO ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [****:****, ****:****]
check.crcs = true
client.id = consumer-3
connections.max.idle.ms = 540000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = connect-cluster
heartbeat.interval.ms = 3000
interceptor.classes = null
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
(org.apache.kafka.clients.consumer.ConsumerConfig:180)
[2018-06-26 11:49:48,476] WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:188)
[2018-06-26 11:49:48,476] WARN The configuration 'status.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:188)
[2018-06-26 11:49:48,476] WARN The configuration 'internal.key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:188)
[2018-06-26 11:49:48,476] WARN The configuration 'rest.port' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:188)
[2018-06-26 11:49:48,476] WARN The configuration 'value.converter.schema.registry.url' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:188)
[2018-06-26 11:49:48,476] WARN The configuration 'internal.key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:188)
[2018-06-26 11:49:48,476] WARN The configuration 'internal.value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:188)
[2018-06-26 11:49:48,476] WARN The configuration 'internal.value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:188)
[2018-06-26 11:49:48,476] WARN The configuration 'offset.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:188)
[2018-06-26 11:49:48,476] WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:188)
[2018-06-26 11:49:48,476] WARN The configuration 'key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:188)
[2018-06-26 11:49:48,477] WARN The configuration 'key.converter.schema.registry.url' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:188)
[2018-06-26 11:49:48,477] INFO Kafka version : 0.10.1.0-IBM-8 (org.apache.kafka.common.utils.AppInfoParser:83)
[2018-06-26 11:49:48,477] INFO Kafka commitId : unknown (org.apache.kafka.common.utils.AppInfoParser:84)
[2018-06-26 11:49:48,682] INFO Discovered coordinator ****:**** (id: 2147483644 rack: null) for group connect-cluster. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:555)
[2018-06-26 11:49:48,686] INFO Finished reading KafkaBasedLog for topic connect-test-configs (org.apache.kafka.connect.util.KafkaBasedLog:148)
[2018-06-26 11:49:48,686] INFO Started KafkaBasedLog for topic connect-test-configs (org.apache.kafka.connect.util.KafkaBasedLog:150)
[2018-06-26 11:49:48,686] INFO Started KafkaConfigBackingStore (org.apache.kafka.connect.storage.KafkaConfigBackingStore:261)
[2018-06-26 11:49:48,687] INFO Herder started (org.apache.kafka.connect.runtime.distributed.DistributedHerder:171)
[2018-06-26 11:49:48,689] INFO Discovered coordinator ****:**** (id: 2147483644 rack: null) for group connect-cluster. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:555)
[2018-06-26 11:49:48,691] INFO (Re-)joining group connect-cluster (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:381)
[2018-06-26 11:49:48,702] INFO Successfully joined group connect-cluster with generation 3 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:349)
[2018-06-26 11:49:48,702] INFO Joined group and got assignment: Assignment{error=0, leader='connect-1-67ad9053-4b20-41dd-9a6a-ec8e4f4d622e', leaderUrl='http://****:****/', offset=-1, connectorIds=[], taskIds=[]} (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1009)
[2018-06-26 11:49:48,702] INFO Starting connectors and tasks using config offset -1 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:745)
[2018-06-26 11:49:48,702] INFO Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:752)
[2018-06-26 11:49:55,748] INFO Reflections took 7022 ms to scan 225 urls, producing 15521 keys and 98977 values (org.reflections.Reflections:229)
but I am not able to commit the events to HDFS
When you start distributed mode, the worker doesn't take any individual connector properties; you load those later through the REST API.
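For context, a distributed worker is started with only the worker-level properties file (bootstrap servers, group.id, internal storage topics, converters). A minimal sketch, assuming a stock Kafka layout (file paths vary by installation):

```shell
# Start a distributed worker. connect-distributed.properties holds
# worker-level settings only; connector configs are submitted later
# via the REST API, not on this command line.
bin/connect-distributed.sh config/connect-distributed.properties
```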
Just want to know: where should we give the properties for the topics, HDFS URL, and tasks in distributed mode?
You POST JSON into it.
Make a connect-hdfs.json file, for example
{
  "name": "quickstart-hdfs-sink",
  "config": {
    "store.url": "hdfs:///apps/kafka-connect",
    "hadoop.conf.dir": "/etc/hadoop/conf",
    "topics": ...
  }
}
Send it to Connect:
curl -X POST -H "Content-Type: application/json" -d @connect-hdfs.json http://connect-server:8083/connectors
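After POSTing, you can confirm the worker accepted the connector through the same REST API (the `connect-server:8083` host/port is whatever your worker listens on):

```shell
# List all connectors registered with the cluster
curl http://connect-server:8083/connectors

# Check the state of the sink and each of its tasks
# (should report RUNNING; a FAILED task includes a stack trace)
curl http://connect-server:8083/connectors/quickstart-hdfs-sink/status
```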