Spark Streaming NoClassDefFoundError - Scala

I am trying to create a Spark-Kafka-Cassandra integration. I am able to connect to Cassandra, but the problem appears when I create the StreamingContext object using
val ssc = new StreamingContext(sparkConf, Seconds(60))
I am able to import and write the above code, but when I build and run it, I am facing the error below:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/SparkConf
at KafkaSparkCassandra$.main(KafkaSparkCassandra.scala:38)
at KafkaSparkCassandra.main(KafkaSparkCassandra.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.SparkConf
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 7 more
Now I am unable to understand why I cannot create the StreamingContext object at runtime. Please help; I am new to Scala and the whole lambda-architecture stack.
Below is the configuration inside build.sbt:
libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-core_2.10" % "1.4.1",
  "com.datastax.spark" % "spark-cassandra-connector_2.10" % "1.4.0",
  "org.apache.spark" % "spark-sql_2.10" % "1.4.1",
  "mysql" % "mysql-connector-java" % "5.1.12")
libraryDependencies += "org.apache.spark" %% "spark-streaming" % "1.6.0" % "provided"
libraryDependencies += ("org.apache.spark" %% "spark-streaming-kafka" % "1.6.0").exclude("org.spark-project.spark", "unused")
/*
libraryDependencies += "org.apache.spark" %% "spark-streaming-kafka" % "1.6.0" % "provided"
*/
javaOptions ++= Seq("-Xmx5G", "-XX:MaxPermSize=5G", "-XX:+CMSClassUnloadingEnabled")
Below are the logs. I am now unable to print the word count or to store it into the Cassandra DB.
log4j:WARN No appenders could be found for logger (com.datastax.driver.core.SystemProperties).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/11/15 18:54:52 INFO SparkContext: Running Spark version 1.6.0
16/11/15 18:54:52 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/11/15 18:54:52 INFO SecurityManager: Changing view acls to: romit.srivastava
16/11/15 18:54:52 INFO SecurityManager: Changing modify acls to: romit.srivastava
16/11/15 18:54:52 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(romit.srivastava); users with modify permissions: Set(romit.srivastava)
16/11/15 18:54:53 INFO Utils: Successfully started service 'sparkDriver' on port 53789.
16/11/15 18:54:53 INFO Slf4jLogger: Slf4jLogger started
16/11/15 18:54:53 INFO Remoting: Starting remoting
16/11/15 18:54:53 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.56.1:53802]
16/11/15 18:54:53 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 53802.
16/11/15 18:54:54 INFO SparkEnv: Registering MapOutputTracker
16/11/15 18:54:54 INFO SparkEnv: Registering BlockManagerMaster
16/11/15 18:54:54 INFO DiskBlockManager: Created local directory at C:\Users\romit.srivastava\AppData\Local\Temp\blockmgr-c60aeba8-a317-4066-99ce-71ec3595bdf3
16/11/15 18:54:54 INFO MemoryStore: MemoryStore started with capacity 2.4 GB
16/11/15 18:54:54 INFO SparkEnv: Registering OutputCommitCoordinator
16/11/15 18:54:54 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
16/11/15 18:54:54 WARN Utils: Service 'SparkUI' could not bind on port 4041. Attempting port 4042.
16/11/15 18:54:54 WARN Utils: Service 'SparkUI' could not bind on port 4042. Attempting port 4043.
16/11/15 18:54:54 WARN Utils: Service 'SparkUI' could not bind on port 4043. Attempting port 4044.
16/11/15 18:54:54 WARN Utils: Service 'SparkUI' could not bind on port 4044. Attempting port 4045.
16/11/15 18:54:54 INFO Utils: Successfully started service 'SparkUI' on port 4045.
16/11/15 18:54:54 INFO SparkUI: Started SparkUI at http://192.168.56.1:4045
16/11/15 18:54:54 INFO Executor: Starting executor ID driver on host localhost
16/11/15 18:54:54 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 53814.
16/11/15 18:54:54 INFO NettyBlockTransferService: Server created on 53814
16/11/15 18:54:54 INFO BlockManagerMaster: Trying to register BlockManager
16/11/15 18:54:54 INFO BlockManagerMasterEndpoint: Registering block manager localhost:53814 with 2.4 GB RAM, BlockManagerId(driver, localhost, 53814)
16/11/15 18:54:54 INFO BlockManagerMaster: Registered BlockManager
16/11/15 18:54:55 INFO VerifiableProperties: Verifying properties
16/11/15 18:54:55 INFO VerifiableProperties: Property group.id is overridden to
16/11/15 18:54:55 INFO VerifiableProperties: Property zookeeper.connect is overridden to
16/11/15 18:54:58 INFO Cluster: New Cassandra host /136.243.174.23:9042 added
16/11/15 18:54:58 INFO CassandraConnector: Connected to Cassandra cluster: Test Cluster
16/11/15 18:55:00 INFO ForEachDStream: metadataCleanupDelay = -1
16/11/15 18:55:00 INFO MappedDStream: metadataCleanupDelay = -1
16/11/15 18:55:00 INFO ShuffledDStream: metadataCleanupDelay = -1
16/11/15 18:55:00 INFO MappedDStream: metadataCleanupDelay = -1
16/11/15 18:55:00 INFO FilteredDStream: metadataCleanupDelay = -1
16/11/15 18:55:00 INFO FlatMappedDStream: metadataCleanupDelay = -1
16/11/15 18:55:00 INFO MappedDStream: metadataCleanupDelay = -1
16/11/15 18:55:00 INFO DirectKafkaInputDStream: metadataCleanupDelay = -1
16/11/15 18:55:00 INFO DirectKafkaInputDStream: Slide time = 60000 ms
16/11/15 18:55:00 INFO DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/11/15 18:55:00 INFO DirectKafkaInputDStream: Checkpoint interval = null
16/11/15 18:55:00 INFO DirectKafkaInputDStream: Remember duration = 60000 ms
16/11/15 18:55:00 INFO DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@2d1a0e90
16/11/15 18:55:00 INFO MappedDStream: Slide time = 60000 ms
16/11/15 18:55:00 INFO MappedDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/11/15 18:55:00 INFO MappedDStream: Checkpoint interval = null
16/11/15 18:55:00 INFO MappedDStream: Remember duration = 60000 ms
16/11/15 18:55:00 INFO MappedDStream: Initialized and validated org.apache.spark.streaming.dstream.MappedDStream@678a042d
16/11/15 18:55:00 INFO FlatMappedDStream: Slide time = 60000 ms
16/11/15 18:55:00 INFO FlatMappedDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/11/15 18:55:00 INFO FlatMappedDStream: Checkpoint interval = null
16/11/15 18:55:00 INFO FlatMappedDStream: Remember duration = 60000 ms
16/11/15 18:55:00 INFO FlatMappedDStream: Initialized and validated org.apache.spark.streaming.dstream.FlatMappedDStream@7d8e7cf5
16/11/15 18:55:00 INFO FilteredDStream: Slide time = 60000 ms
16/11/15 18:55:00 INFO FilteredDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/11/15 18:55:00 INFO FilteredDStream: Checkpoint interval = null
16/11/15 18:55:00 INFO FilteredDStream: Remember duration = 60000 ms
16/11/15 18:55:00 INFO FilteredDStream: Initialized and validated org.apache.spark.streaming.dstream.FilteredDStream@183e79df
16/11/15 18:55:00 INFO MappedDStream: Slide time = 60000 ms
16/11/15 18:55:00 INFO MappedDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/11/15 18:55:00 INFO MappedDStream: Checkpoint interval = null
16/11/15 18:55:00 INFO MappedDStream: Remember duration = 60000 ms
16/11/15 18:55:00 INFO MappedDStream: Initialized and validated org.apache.spark.streaming.dstream.MappedDStream@652d8ac6
16/11/15 18:55:00 INFO ShuffledDStream: Slide time = 60000 ms
16/11/15 18:55:00 INFO ShuffledDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/11/15 18:55:00 INFO ShuffledDStream: Checkpoint interval = null
16/11/15 18:55:00 INFO ShuffledDStream: Remember duration = 60000 ms
16/11/15 18:55:00 INFO ShuffledDStream: Initialized and validated org.apache.spark.streaming.dstream.ShuffledDStream@52b15122
16/11/15 18:55:00 INFO MappedDStream: Slide time = 60000 ms
16/11/15 18:55:00 INFO MappedDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/11/15 18:55:00 INFO MappedDStream: Checkpoint interval = null
16/11/15 18:55:00 INFO MappedDStream: Remember duration = 60000 ms
16/11/15 18:55:00 INFO MappedDStream: Initialized and validated org.apache.spark.streaming.dstream.MappedDStream@5c56f655
16/11/15 18:55:00 INFO ForEachDStream: Slide time = 60000 ms
16/11/15 18:55:00 INFO ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/11/15 18:55:00 INFO ForEachDStream: Checkpoint interval = null
16/11/15 18:55:00 INFO ForEachDStream: Remember duration = 60000 ms
16/11/15 18:55:00 INFO ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@37cd8c81
16/11/15 18:55:00 INFO ForEachDStream: metadataCleanupDelay = -1
16/11/15 18:55:00 INFO MappedDStream: metadataCleanupDelay = -1
16/11/15 18:55:00 INFO ShuffledDStream: metadataCleanupDelay = -1
16/11/15 18:55:00 INFO MappedDStream: metadataCleanupDelay = -1
16/11/15 18:55:00 INFO FilteredDStream: metadataCleanupDelay = -1
16/11/15 18:55:00 INFO FlatMappedDStream: metadataCleanupDelay = -1
16/11/15 18:55:00 INFO MappedDStream: metadataCleanupDelay = -1
16/11/15 18:55:00 INFO DirectKafkaInputDStream: metadataCleanupDelay = -1
16/11/15 18:55:00 INFO DirectKafkaInputDStream: Slide time = 60000 ms
16/11/15 18:55:00 INFO DirectKafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/11/15 18:55:00 INFO DirectKafkaInputDStream: Checkpoint interval = null
16/11/15 18:55:00 INFO DirectKafkaInputDStream: Remember duration = 60000 ms
16/11/15 18:55:00 INFO DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.DirectKafkaInputDStream@2d1a0e90
16/11/15 18:55:00 INFO MappedDStream: Slide time = 60000 ms
16/11/15 18:55:00 INFO MappedDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/11/15 18:55:00 INFO MappedDStream: Checkpoint interval = null
16/11/15 18:55:00 INFO MappedDStream: Remember duration = 60000 ms
16/11/15 18:55:00 INFO MappedDStream: Initialized and validated org.apache.spark.streaming.dstream.MappedDStream@678a042d
16/11/15 18:55:00 INFO FlatMappedDStream: Slide time = 60000 ms
16/11/15 18:55:00 INFO FlatMappedDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/11/15 18:55:00 INFO FlatMappedDStream: Checkpoint interval = null
16/11/15 18:55:00 INFO FlatMappedDStream: Remember duration = 60000 ms
16/11/15 18:55:00 INFO FlatMappedDStream: Initialized and validated org.apache.spark.streaming.dstream.FlatMappedDStream@7d8e7cf5
16/11/15 18:55:00 INFO FilteredDStream: Slide time = 60000 ms
16/11/15 18:55:00 INFO FilteredDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/11/15 18:55:00 INFO FilteredDStream: Checkpoint interval = null
16/11/15 18:55:00 INFO FilteredDStream: Remember duration = 60000 ms
16/11/15 18:55:00 INFO FilteredDStream: Initialized and validated org.apache.spark.streaming.dstream.FilteredDStream@183e79df
16/11/15 18:55:00 INFO MappedDStream: Slide time = 60000 ms
16/11/15 18:55:00 INFO MappedDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/11/15 18:55:00 INFO MappedDStream: Checkpoint interval = null
16/11/15 18:55:00 INFO MappedDStream: Remember duration = 60000 ms
16/11/15 18:55:00 INFO MappedDStream: Initialized and validated org.apache.spark.streaming.dstream.MappedDStream@652d8ac6
16/11/15 18:55:00 INFO ShuffledDStream: Slide time = 60000 ms
16/11/15 18:55:00 INFO ShuffledDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/11/15 18:55:00 INFO ShuffledDStream: Checkpoint interval = null
16/11/15 18:55:00 INFO ShuffledDStream: Remember duration = 60000 ms
16/11/15 18:55:00 INFO ShuffledDStream: Initialized and validated org.apache.spark.streaming.dstream.ShuffledDStream@52b15122
16/11/15 18:55:00 INFO MappedDStream: Slide time = 60000 ms
16/11/15 18:55:00 INFO MappedDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/11/15 18:55:00 INFO MappedDStream: Checkpoint interval = null
16/11/15 18:55:00 INFO MappedDStream: Remember duration = 60000 ms
16/11/15 18:55:00 INFO MappedDStream: Initialized and validated org.apache.spark.streaming.dstream.MappedDStream@5c56f655
16/11/15 18:55:00 INFO ForEachDStream: Slide time = 60000 ms
16/11/15 18:55:00 INFO ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/11/15 18:55:00 INFO ForEachDStream: Checkpoint interval = null
16/11/15 18:55:00 INFO ForEachDStream: Remember duration = 60000 ms
16/11/15 18:55:00 INFO ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@3e3f4b04
16/11/15 18:55:00 INFO RecurringTimer: Started timer for JobGenerator at time 1479216360000
16/11/15 18:55:00 INFO JobGenerator: Started JobGenerator at 1479216360000 ms
16/11/15 18:55:00 INFO JobScheduler: Started JobScheduler
16/11/15 18:55:00 INFO StreamingContext: StreamingContext started
16/11/15 18:55:00 INFO CassandraConnector: Disconnected from Cassandra cluster: Test Cluster
16/11/15 18:55:30 INFO JobGenerator: Stopping JobGenerator immediately
16/11/15 18:55:30 INFO RecurringTimer: Stopped timer for JobGenerator after time -1
16/11/15 18:55:30 INFO JobGenerator: Stopped JobGenerator
16/11/15 18:55:30 INFO JobScheduler: Stopped JobScheduler
16/11/15 18:55:30 INFO StreamingContext: StreamingContext stopped successfully
16/11/15 18:55:30 INFO SparkUI: Stopped Spark web UI at http://192.168.56.1:4045
16/11/15 18:55:30 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/11/15 18:55:30 INFO MemoryStore: MemoryStore cleared
16/11/15 18:55:30 INFO BlockManager: BlockManager stopped
16/11/15 18:55:30 INFO BlockManagerMaster: BlockManagerMaster stopped
16/11/15 18:55:30 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/11/15 18:55:30 INFO SparkContext: Successfully stopped SparkContext
16/11/15 18:55:30 WARN StreamingContext: StreamingContext has already been stopped
16/11/15 18:55:30 INFO SparkContext: SparkContext already stopped.
16/11/15 18:55:30 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/11/15 18:55:30 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/11/15 18:55:30 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.

This was primarily due to mixing different versions of the Spark modules. After fixing the versions I was able to run the code.
I was then also able to do the word count and save it to Cassandra.
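For reference, a sketch of what an aligned build.sbt could look like, assuming everything is standardized on the Spark 1.6.x line (the connector version is an assumption; check the spark-cassandra-connector compatibility matrix for your cluster):

// Hypothetical build.sbt: every Spark module pinned to the same version
val sparkVersion = "1.6.0"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-sql" % sparkVersion,
  "org.apache.spark" %% "spark-streaming" % sparkVersion,
  "org.apache.spark" %% "spark-streaming-kafka" % sparkVersion,
  "com.datastax.spark" %% "spark-cassandra-connector" % "1.6.0", // assumption: the 1.6.x line pairs with Spark 1.6
  "mysql" % "mysql-connector-java" % "5.1.12")

Note that the "provided" scope is also dropped here: a "provided" dependency is not on the runtime classpath when the app is launched straight from the IDE, which by itself can produce exactly this kind of NoClassDefFoundError.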

Related

Strimzi Kafka - Kafka Connect not getting installed

I have Strimzi Kafka set up on GKE, and it is working fine.
I have a requirement to set up MirrorMaker2 to push data from a source Kafka topic to a target Kafka topic.
From what I understand, MirrorMaker2 requires KafkaConnect.
I am trying to install KafkaConnect on the GKE cluster in namespace kafka-connect; the YAML used is the following:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  # annotations:
  #   strimzi.io/use-connector-resources: "true"
spec:
  version: 3.0.0
  replicas: 1
  bootstrapServers: versa-kafka-gke-kafka-bootstrap:9093
  tls:
    trustedCertificates:
      - secretName: versa-kafka-gke-cluster-ca-cert
        certificate: ca.crt
  config:
    group.id: connect-cluster
    offset.storage.topic: connect-cluster-offsets
    config.storage.topic: connect-cluster-configs
    status.storage.topic: connect-cluster-status
    # -1 means it will use the default replication factor configured in the broker
    config.storage.replication.factor: -1
    offset.storage.replication.factor: -1
    status.storage.replication.factor: -1
Command run to install:
kubectl apply -f kafka-connect.yaml -n kafka-connect
Note: Strimzi Kafka is installed in namespace 'kafka', version 3.0.0.
I was expecting KafkaConnect to get installed and the pods to come up as well; however, the KafkaConnect resource is created but never shows as Ready.
Also, the pods are not getting created.
(base) Karans-MacBook-Pro:kafkaConnect karanalang$ kc get kafkaconnect my-connect-cluster -n kafka-connect
NAME                 DESIRED REPLICAS   READY
my-connect-cluster   1
On describing my-connect-cluster, here is the output (it does not show any errors):
(base) Karans-MacBook-Pro:kafkaConnect karanalang$ kc describe kafkaconnect my-connect-cluster -n kafka-connect
Name:         my-connect-cluster
Namespace:    kafka-connect
Labels:       <none>
Annotations:  <none>
API Version:  kafka.strimzi.io/v1beta2
Kind:         KafkaConnect
Metadata:
  Creation Timestamp:  2023-02-16T06:38:41Z
  Generation:          1
  Managed Fields:
    API Version:  kafka.strimzi.io/v1beta2
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:bootstrapServers:
        f:config:
          .:
          f:config.storage.replication.factor:
          f:config.storage.topic:
          f:group.id:
          f:offset.storage.replication.factor:
          f:offset.storage.topic:
          f:status.storage.replication.factor:
          f:status.storage.topic:
        f:replicas:
        f:tls:
          .:
          f:trustedCertificates:
        f:version:
    Manager:         kubectl-client-side-apply
    Operation:       Update
    Time:            2023-02-16T06:38:41Z
  Resource Version:  266457732
  UID:               bdd697f8-e38a-466b-8ddf-81ed1ae54efe
Spec:
  Bootstrap Servers:  versa-kafka-gke-kafka-bootstrap:9093
  Config:
    config.storage.replication.factor:  -1
    config.storage.topic:               connect-cluster-configs
    group.id:                           connect-cluster
    offset.storage.replication.factor:  -1
    offset.storage.topic:               connect-cluster-offsets
    status.storage.replication.factor:  -1
    status.storage.topic:               connect-cluster-status
  Replicas:  1
  Tls:
    Trusted Certificates:
      Certificate:  ca.crt
      Secret Name:  versa-kafka-gke-cluster-ca-cert
  Version:  3.0.0
Events:     <none>
How do I debug/fix this?
Thanks in advance!
Update:
Based on the note from Jakub, I re-installed KafkaConnect in the same namespace as Strimzi Kafka (i.e. namespace kafka), and the pods are coming up now. However, the logs show the error below:
(base) Karans-MacBook-Pro:kafkaConnect karanalang$ kc logs -f pod/my-connect-cluster-connect-67f76f5d89-nv9sj -n kafka
Preparing truststore
Certificate was added to keystore
Preparing truststore is complete
Starting Kafka Connect with configuration:
# Bootstrap servers
bootstrap.servers=versa-kafka-gke-w-kafka-bootstrap:9093
# REST Listeners
rest.port=8083
rest.advertised.host.name=10.6.0.199
rest.advertised.port=8083
# Plugins
plugin.path=/opt/kafka/plugins
# Provided configuration
offset.storage.topic=connect-cluster-offsets
value.converter=org.apache.kafka.connect.json.JsonConverter
config.storage.topic=connect-cluster-configs
key.converter=org.apache.kafka.connect.json.JsonConverter
group.id=connect-cluster
status.storage.topic=connect-cluster-status
config.storage.replication.factor=-1
offset.storage.replication.factor=-1
status.storage.replication.factor=-1
security.protocol=SSL
producer.security.protocol=SSL
consumer.security.protocol=SSL
admin.security.protocol=SSL
# TLS / SSL
ssl.truststore.location=/tmp/kafka/cluster.truststore.p12
ssl.truststore.password=[hidden]
ssl.truststore.type=PKCS12
producer.ssl.truststore.location=/tmp/kafka/cluster.truststore.p12
producer.ssl.truststore.password=[hidden]
consumer.ssl.truststore.location=/tmp/kafka/cluster.truststore.p12
consumer.ssl.truststore.password=[hidden]
admin.ssl.truststore.location=/tmp/kafka/cluster.truststore.p12
admin.ssl.truststore.password=[hidden]
# Additional configuration
client.rack=
2023-02-17 06:43:08,952 INFO WorkerInfo values:
jvm.args = -Xms128M, -XX:+UseG1GC, -XX:MaxGCPauseMillis=20, -XX:InitiatingHeapOccupancyPercent=35, -XX:+ExplicitGCInvokesConcurrent, -XX:MaxInlineLevel=15, -Djava.awt.headless=true, -Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.ssl=false, -Dkafka.logs.dir=/opt/kafka, -Dlog4j.configuration=file:/opt/kafka/custom-config/log4j.properties
jvm.spec = Red Hat, Inc., OpenJDK 64-Bit Server VM, 11.0.12, 11.0.12+7-LTS
jvm.classpath = /opt/kafka/bin/../libs/accessors-smart-2.4.7.jar:/opt/kafka/bin/../libs/activation-1.1.1.jar:/opt/kafka/bin/../libs/annotations-13.0.jar:/opt/kafka/bin/../libs/aopalliance-repackaged-2.6.1.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/kafka/bin/../libs/automaton-1.11-8.jar:/opt/kafka/bin/../libs/checker-qual-3.5.0.jar:/opt/kafka/bin/../libs/commons-cli-1.4.jar:/opt/kafka/bin/../libs/commons-lang-2.6.jar:/opt/kafka/bin/../libs/commons-lang3-3.8.1.jar:/opt/kafka/bin/../libs/connect-api-3.0.0.jar:/opt/kafka/bin/../libs/connect-basic-auth-extension-3.0.0.jar:/opt/kafka/bin/../libs/connect-file-3.0.0.jar:/opt/kafka/bin/../libs/connect-json-3.0.0.jar:/opt/kafka/bin/../libs/connect-mirror-3.0.0.jar:/opt/kafka/bin/../libs/connect-mirror-client-3.0.0.jar:/opt/kafka/bin/../libs/connect-runtime-3.0.0.jar:/opt/kafka/bin/../libs/connect-transforms-
......
2023-02-17 06:43:24,944 INFO Added alias 'InsertHeader' to plugin 'org.apache.kafka.connect.transforms.InsertHeader' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added alias 'RegexRouter' to plugin 'org.apache.kafka.connect.transforms.RegexRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added alias 'TimestampRouter' to plugin 'org.apache.kafka.connect.transforms.TimestampRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added alias 'ValueToKey' to plugin 'org.apache.kafka.connect.transforms.ValueToKey' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added alias 'HasHeaderKey' to plugin 'org.apache.kafka.connect.transforms.predicates.HasHeaderKey' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added alias 'RecordIsTombstone' to plugin 'org.apache.kafka.connect.transforms.predicates.RecordIsTombstone' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added alias 'TopicNameMatches' to plugin 'org.apache.kafka.connect.transforms.predicates.TopicNameMatches' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added alias 'BasicAuthSecurityRestExtension' to plugin 'org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added aliases 'AllConnectorClientConfigOverridePolicy' and 'All' to plugin 'org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added aliases 'NoneConnectorClientConfigOverridePolicy' and 'None' to plugin 'org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added aliases 'PrincipalConnectorClientConfigOverridePolicy' and 'Principal' to plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:25,347 INFO DistributedConfig values:
access.control.allow.methods =
access.control.allow.origin =
admin.listeners = null
bootstrap.servers = [versa-kafka-gke-w-kafka-bootstrap:9093]
client.dns.lookup = use_all_dns_ips
client.id =
config.providers = []
config.storage.replication.factor = -1
config.storage.topic = connect-cluster-configs
connect.protocol = sessioned
connections.max.idle.ms = 540000
connector.client.config.override.policy = All
group.id = connect-cluster
header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter
heartbeat.interval.ms = 3000
inter.worker.key.generation.algorithm = HmacSHA256
inter.worker.key.size = null
inter.worker.key.ttl.ms = 3600000
inter.worker.signature.algorithm = HmacSHA256
inter.worker.verification.algorithms = [HmacSHA256]
key.converter = class org.apache.kafka.connect.json.JsonConverter
listeners = [http://:8083]
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
offset.flush.interval.ms = 60000
offset.flush.timeout.ms = 5000
offset.storage.partitions = 25
offset.storage.replication.factor = -1
offset.storage.topic = connect-cluster-offsets
plugin.path = [/opt/kafka/plugins]
rebalance.timeout.ms = 60000
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 40000
response.http.headers.config =
rest.advertised.host.name = 10.6.0.199
rest.advertised.listener = null
rest.advertised.port = 8083
rest.extension.classes = []
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
scheduled.rebalance.max.delay.ms = 300000
security.protocol = SSL
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = /tmp/kafka/cluster.truststore.p12
ssl.truststore.password = [hidden]
ssl.truststore.type = PKCS12
status.storage.partitions = 5
status.storage.replication.factor = -1
status.storage.topic = connect-cluster-status
task.shutdown.graceful.timeout.ms = 5000
topic.creation.enable = true
topic.tracking.allow.reset = true
topic.tracking.enable = true
value.converter = class org.apache.kafka.connect.json.JsonConverter
worker.sync.timeout.ms = 3000
worker.unsync.backoff.ms = 300000
(org.apache.kafka.connect.runtime.distributed.DistributedConfig) [main]
2023-02-17 06:43:25,355 INFO Creating Kafka admin client (org.apache.kafka.connect.util.ConnectUtils) [main]
2023-02-17 06:43:25,359 INFO AdminClientConfig values:
bootstrap.servers = [versa-kafka-gke-w-kafka-bootstrap:9093]
client.dns.lookup = use_all_dns_ips
client.id =
connections.max.idle.ms = 300000
default.api.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = SSL
security.providers = null
send.buffer.bytes = 131072
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = /tmp/kafka/cluster.truststore.p12
ssl.truststore.password = [hidden]
ssl.truststore.type = PKCS12
(org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,848 WARN The configuration 'producer.ssl.truststore.password' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,849 WARN The configuration 'group.id' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,849 WARN The configuration 'rest.advertised.port' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,849 WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,850 WARN The configuration 'admin.security.protocol' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,850 WARN The configuration 'consumer.ssl.truststore.location' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,850 WARN The configuration 'producer.ssl.truststore.location' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,850 WARN The configuration 'status.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,850 WARN The configuration 'offset.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,850 WARN The configuration 'consumer.security.protocol' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,851 WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,851 WARN The configuration 'key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,851 WARN The configuration 'admin.ssl.truststore.password' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,851 WARN The configuration 'consumer.ssl.truststore.password' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,851 WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,851 WARN The configuration 'producer.security.protocol' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,852 WARN The configuration 'rest.advertised.host.name' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,852 WARN The configuration 'status.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,852 WARN The configuration 'client.rack' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,852 WARN The configuration 'rest.port' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,853 WARN The configuration 'config.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,853 WARN The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,855 WARN The configuration 'admin.ssl.truststore.location' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,856 INFO Kafka version: 3.0.0 (org.apache.kafka.common.utils.AppInfoParser) [main]
2023-02-17 06:43:27,857 INFO Kafka commitId: 8cb0a5e9d3441962 (org.apache.kafka.common.utils.AppInfoParser) [main]
2023-02-17 06:43:27,857 INFO Kafka startTimeMs: 1676616207856 (org.apache.kafka.common.utils.AppInfoParser) [main]
2023-02-17 06:43:30,857 INFO [AdminClient clientId=adminclient-1] Failed re-authentication with versa-kafka-gke-w-kafka-bootstrap/10.6.131.57 (Failed to process post-handshake messages) (org.apache.kafka.common.network.Selector) [kafka-admin-client-thread | adminclient-1]
2023-02-17 06:43:30,867 ERROR [AdminClient clientId=adminclient-1] Connection to node -1 (versa-kafka-gke-w-kafka-bootstrap/10.6.131.57:9093) failed authentication due to: Failed to process post-handshake messages (org.apache.kafka.clients.NetworkClient) [kafka-admin-client-thread | adminclient-1]
2023-02-17 06:43:30,870 WARN [AdminClient clientId=adminclient-1] Metadata update failed due to authentication error (org.apache.kafka.clients.admin.internals.AdminMetadataManager) [kafka-admin-client-thread | adminclient-1]
org.apache.kafka.common.errors.SslAuthenticationException: Failed to process post-handshake messages
Caused by: javax.net.ssl.SSLException: Tag mismatch!
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:133)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:349)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:292)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:287)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:123)
at java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:681)
at java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:636)
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:454)
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:433)
at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:637)
at org.apache.kafka.common.network.SslTransportLayer.read(SslTransportLayer.java:567)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:95)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:551)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1389)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1320)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: javax.crypto.AEADBadTagException: Tag mismatch!
at java.base/com.sun.crypto.provider.GaloisCounterMode.decryptFinal(GaloisCounterMode.java:623)
at java.base/com.sun.crypto.provider.CipherCore.finalNoPadding(CipherCore.java:1116)
at java.base/com.sun.crypto.provider.CipherCore.fillOutputBuffer(CipherCore.java:1053)
at java.base/com.sun.crypto.provider.CipherCore.doFinal(CipherCore.java:941)
at java.base/com.sun.crypto.provider.AESCipher.engineDoFinal(AESCipher.java:491)
at java.base/javax.crypto.CipherSpi.bufferCrypt(CipherSpi.java:779)
at java.base/javax.crypto.CipherSpi.engineDoFinal(CipherSpi.java:730)
at java.base/javax.crypto.Cipher.doFinal(Cipher.java:2497)
at java.base/sun.security.ssl.SSLCipher$T13GcmReadCipherGenerator$GcmReadCipher.decrypt(SSLCipher.java:1903)
at java.base/sun.security.ssl.SSLEngineInputRecord.decodeInputRecord(SSLEngineInputRecord.java:240)
at java.base/sun.security.ssl.SSLEngineInputRecord.decode(SSLEngineInputRecord.java:197)
at java.base/sun.security.ssl.SSLEngineInputRecord.decode(SSLEngineInputRecord.java:160)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:111)
... 16 more
2023-02-17 06:43:30,946 INFO App info kafka.admin.client for adminclient-1 unregistered (org.apache.kafka.common.utils.AppInfoParser) [kafka-admin-client-thread | adminclient-1]
2023-02-17 06:43:30,947 INFO [AdminClient clientId=adminclient-1] Metadata update failed (org.apache.kafka.clients.admin.internals.AdminMetadataManager) [kafka-admin-client-thread | adminclient-1]
org.apache.kafka.common.errors.TimeoutException: The AdminClient thread has exited. Call: fetchMetadata
2023-02-17 06:43:30,948 INFO [AdminClient clientId=adminclient-1] Metadata update failed (org.apache.kafka.clients.admin.internals.AdminMetadataManager) [kafka-admin-client-thread | adminclient-1]
org.apache.kafka.common.errors.TimeoutException: The AdminClient thread has exited. Call: fetchMetadata
2023-02-17 06:43:30,949 INFO [AdminClient clientId=adminclient-1] Timed out 2 remaining operation(s) during close. (org.apache.kafka.clients.admin.KafkaAdminClient) [kafka-admin-client-thread | adminclient-1]
2023-02-17 06:43:30,959 INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics) [kafka-admin-client-thread | adminclient-1]
2023-02-17 06:43:30,960 INFO Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics) [kafka-admin-client-thread | adminclient-1]
2023-02-17 06:43:30,961 INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics) [kafka-admin-client-thread | adminclient-1]
2023-02-17 06:43:30,961 ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed) [main]
org.apache.kafka.connect.errors.ConnectException: Failed to connect to and describe Kafka cluster. Check worker's broker connection and security properties.
at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:70)
at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:51)
at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:97)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:80)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.SslAuthenticationException: Failed to process post-handshake messages
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:64)
... 3 more
Caused by: org.apache.kafka.common.errors.SslAuthenticationException: Failed to process post-handshake messages
Caused by: javax.net.ssl.SSLException: Tag mismatch!
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:133)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:349)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:292)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:287)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:123)
at java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:681)
at java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:636)
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:454)
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:433)
at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:637)
at org.apache.kafka.common.network.SslTransportLayer.read(SslTransportLayer.java:567)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:95)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:551)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1389)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1320)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: javax.crypto.AEADBadTagException: Tag mismatch!
at java.base/com.sun.crypto.provider.GaloisCounterMode.decryptFinal(GaloisCounterMode.java:623)
at java.base/com.sun.crypto.provider.CipherCore.finalNoPadding(CipherCore.java:1116)
at java.base/com.sun.crypto.provider.CipherCore.fillOutputBuffer(CipherCore.java:1053)
at java.base/com.sun.crypto.provider.CipherCore.doFinal(CipherCore.java:941)
at java.base/com.sun.crypto.provider.AESCipher.engineDoFinal(AESCipher.java:491)
at java.base/javax.crypto.CipherSpi.bufferCrypt(CipherSpi.java:779)
at java.base/javax.crypto.CipherSpi.engineDoFinal(CipherSpi.java:730)
at java.base/javax.crypto.Cipher.doFinal(Cipher.java:2497)
at java.base/sun.security.ssl.SSLCipher$T13GcmReadCipherGenerator$GcmReadCipher.decrypt(SSLCipher.java:1903)
at java.base/sun.security.ssl.SSLEngineInputRecord.decodeInputRecord(SSLEngineInputRecord.java:240)
at java.base/sun.security.ssl.SSLEngineInputRecord.decode(SSLEngineInputRecord.java:197)
at java.base/sun.security.ssl.SSLEngineInputRecord.decode(SSLEngineInputRecord.java:160)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:111)
... 16 more
Please note:
The Strimzi Kafka version is 3.0.0, hence I have changed the version of KafkaConnect to 3.0.0 as well.
You say you want to use MirrorMaker2? Then you should be using that kind, not KafkaConnect.
https://strimzi.io/blog/2020/03/30/introducing-mirrormaker2/
And, as commented, ensure the operator is watching the namespace where you install any resources. Just because you can get/describe the resource doesn't mean the operator knows about it, or is processing it (look at its logs)
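For illustration, a minimal KafkaMirrorMaker2 sketch; the names, bootstrap addresses and topic pattern below are placeholders, not values taken from the question:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker-2
spec:
  version: 3.0.0
  replicas: 1
  connectCluster: "target"  # must match one of the aliases under clusters
  clusters:
    - alias: "source"
      bootstrapServers: source-cluster-kafka-bootstrap:9092
    - alias: "target"
      bootstrapServers: target-cluster-kafka-bootstrap:9092
  mirrors:
    - sourceCluster: "source"
      targetCluster: "target"
      sourceConnector: {}
      topicsPattern: ".*"  # mirror everything; narrow as needed

As with KafkaConnect, this resource only does something if it is created in a namespace the Cluster Operator is watching.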

SparkStream unable to read data from Kafka topic

I am a beginner with Kafka and I am trying to read data from a Kafka topic using Spark in Scala.
My main function is:
def main(args: Array[String]): Unit = {
  val spark = SparkSession
    .builder()
    .appName("testKafka")
    .master("local[*]")
    .getOrCreate()

  import spark.implicits._

  val df = spark
    .readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "sample_topic")
    .load()

  df.printSchema()

  df
    .writeStream
    .outputMode("append")
    .format("com.databricks.spark.csv")
    .option("checkpointLocation", "/home/wintersoldier/Desktop/checkpoint")
    .option("path", "/home/wintersoldier/Documents/tookitaki/sparkTest/src/main/scala/kafka_out/outCSV")
    .start()
    .awaitTermination()

  spark.stop()
  spark.close()
}
Then I am sending messages through the Kafka console producer with:
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic sample_topic
The application runs without any error, but no CSV file with the topic message data is created.
Instead of writing to CSV I also tried printing to the terminal with df.writeStream.format("console"), but I still could not get any output.
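For the console check, the sink still has to go through writeStream and the query has to be started; a minimal sketch of that variant:

// Print each micro-batch to stdout instead of writing CSV files
df.writeStream
  .outputMode("append")
  .format("console")
  .option("truncate", "false") // show full message values
  .start()
  .awaitTermination()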
My kafka version is : kafka_2.11-0.9.0.0
My build.sbt contains:
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.4.5",
  "org.apache.spark" %% "spark-mllib" % "2.4.5",
  "org.apache.spark" %% "spark-sql" % "2.4.5",
  "org.apache.spark" %% "spark-hive" % "2.4.5",
  "org.apache.spark" %% "spark-streaming" % "2.4.5",
  "org.apache.spark" %% "spark-graphx" % "2.4.5",
  "org.apache.spark" %% "spark-streaming-kafka" % "1.6.3",
  "org.apache.spark" %% "spark-sql-kafka-0-10" % "2.4.7",
)
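One thing that stands out in the list above is the version mix: spark-streaming-kafka 1.6.3 targets the old DStream API from Spark 1.x, and spark-sql-kafka-0-10 is pinned to 2.4.7 while the other modules are on 2.4.5. A sketch of a consistent set for Structured Streaming, assuming Spark 2.4.5 (note that the kafka-0-10 source also expects brokers on Kafka 0.10 or newer, while kafka_2.11-0.9.0.0 is older than that):

// Hypothetical build.sbt: one Spark version everywhere; Structured Streaming
// only needs spark-sql plus the matching Kafka source module
val sparkVersion = "2.4.5"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-sql" % sparkVersion,
  "org.apache.spark" %% "spark-sql-kafka-0-10" % sparkVersion)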
UPDATE:
The terminal LOGS:
[info] Running com.test.kafka.testKafka
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/11/29 16:35:52 INFO SparkContext: Running Spark version 2.4.5
20/11/29 16:35:52 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/11/29 16:35:52 INFO SparkContext: Submitted application: testKafka
20/11/29 16:35:52 INFO SecurityManager: Changing view acls to: wintersoldier
20/11/29 16:35:52 INFO SecurityManager: Changing modify acls to: wintersoldier
20/11/29 16:35:52 INFO SecurityManager: Changing view acls groups to:
20/11/29 16:35:52 INFO SecurityManager: Changing modify acls groups to:
20/11/29 16:35:52 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(wintersoldier); groups with view permissions: Set(); users with modify permissions: Set(wintersoldier); groups with modify permissions: Set()
20/11/29 16:35:53 INFO Utils: Successfully started service 'sparkDriver' on port 46047.
20/11/29 16:35:53 INFO SparkEnv: Registering MapOutputTracker
20/11/29 16:35:53 INFO SparkEnv: Registering BlockManagerMaster
20/11/29 16:35:53 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/11/29 16:35:53 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/11/29 16:35:53 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-8579f1c4-621e-428c-ba2f-aaa457a9b1d4
20/11/29 16:35:53 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
20/11/29 16:35:53 INFO SparkEnv: Registering OutputCommitCoordinator
20/11/29 16:35:53 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/11/29 16:35:53 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://anonymouspirates:4040
20/11/29 16:35:53 INFO Executor: Starting executor ID driver on host localhost
20/11/29 16:35:53 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 35529.
20/11/29 16:35:53 INFO NettyBlockTransferService: Server created on anonymouspirates:35529
20/11/29 16:35:53 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/11/29 16:35:53 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, anonymouspirates, 35529, None)
20/11/29 16:35:53 INFO BlockManagerMasterEndpoint: Registering block manager anonymouspirates:35529 with 366.3 MB RAM, BlockManagerId(driver, anonymouspirates, 35529, None)
20/11/29 16:35:53 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, anonymouspirates, 35529, None)
20/11/29 16:35:53 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, anonymouspirates, 35529, None)
20/11/29 16:35:54 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/home/wintersoldier/Documents/tookitaki/sparkTest/spark-warehouse').
20/11/29 16:35:54 INFO SharedState: Warehouse path is 'file:/home/wintersoldier/Documents/tookitaki/sparkTest/spark-warehouse'.
20/11/29 16:35:54 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
root
|-- key: binary (nullable = true)
|-- value: binary (nullable = true)
|-- topic: string (nullable = true)
|-- partition: integer (nullable = true)
|-- offset: long (nullable = true)
|-- timestamp: timestamp (nullable = true)
|-- timestampType: integer (nullable = true)
20/11/29 16:35:56 INFO MicroBatchExecution: Starting [id = 3918834a-7b1d-41d6-9069-60bfb807019f, runId = 9a8756a8-de0e-4bc1-8e37-91fc02bfaeb0]. Use file:///home/wintersoldier/Desktop/checkpoint to store the query checkpoint.
20/11/29 16:35:56 INFO MicroBatchExecution: Using MicroBatchReader [KafkaV2[Subscribe[sample_topic]]] from DataSourceV2 named 'kafka' [org.apache.spark.sql.kafka010.KafkaSourceProvider@ebf74a2]
20/11/29 16:35:56 INFO MicroBatchExecution: Starting new streaming query.
20/11/29 16:35:56 INFO MicroBatchExecution: Stream started from {}
20/11/29 16:35:56 INFO ConsumerConfig: ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [localhost:9092]
check.crcs = true
client.id =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = spark-kafka-source-dba9525c-4dfe-4cde-bcc7-0d54fde3e897-810093942-driver-0
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 1
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
20/11/29 16:35:56 INFO AppInfoParser: Kafka version : 2.0.0
20/11/29 16:35:56 INFO AppInfoParser: Kafka commitId : 3402a8361b734732

Kafka Spark Structured Streaming with SASL_SSL authentication

I have been trying to use the Spark Structured Streaming API to connect to a Kafka cluster with SASL_SSL. I have passed the jaas.conf file to the executors, but it seems I could not set the keystore and truststore authentication values.
I tried passing the values as mentioned in this Spark link.
I also tried passing them through the code as in this link.
Still no luck.
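For context, the Structured Streaming Kafka source forwards every option prefixed with kafka. to the underlying consumer, so the SASL/SSL properties can be set next to the subscription. A minimal sketch with placeholder values (the jaas.conf still has to reach the driver and executor JVMs, e.g. via -Djava.security.auth.login.config):

// Placeholder brokers, topic, path and password -- adjust for your cluster
val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker1:9093,broker2:9093")
  .option("subscribe", "my_topic")
  .option("kafka.security.protocol", "SASL_SSL")
  .option("kafka.sasl.kerberos.service.name", "kafka")
  .option("kafka.ssl.truststore.location", "/path/to/truststore.jks")
  .option("kafka.ssl.truststore.password", "********")
  .load()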
Here is the log
20/02/28 10:00:53 INFO streaming.StreamExecution: Starting [id = e176f5e7-7157-4df5-93ce-1e267bae6125, runId = 03225a69-ec00-45d9-8092-1467da34980f]. Use flight/checkpoint to store the query checkpoint.
20/02/28 10:00:53 INFO yarn.ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
20/02/28 10:00:53 INFO spark.SparkContext: Invoking stop() from shutdown hook
20/02/28 10:00:53 INFO server.AbstractConnector: Stopped Spark@46202f7b{HTTP/1.1,[http/1.1]}{0.0.0.0:0}
20/02/28 10:00:53 INFO consumer.ConsumerConfig: ConsumerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
max.partition.fetch.bytes = 1048576
bootstrap.servers = [broker1:9093, broker2:9093]
ssl.keystore.type = JKS
enable.auto.commit = false
sasl.mechanism = GSSAPI
interceptor.classes = null
exclude.internal.topics = true
ssl.truststore.password = null
client.id =
ssl.endpoint.identification.algorithm = null
max.poll.records = 1
check.crcs = true
request.timeout.ms = 40000
heartbeat.interval.ms = 3000
auto.commit.interval.ms = 5000
receive.buffer.bytes = 65536
ssl.truststore.type = JKS
ssl.truststore.location = null
ssl.keystore.password = null
fetch.min.bytes = 1
send.buffer.bytes = 131072
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
group.id = spark-kafka-source-93d170e9-977c-40fc-9e5d-790d253fcff5-409016337-driver-0
retry.backoff.ms = 100
ssl.secure.random.implementation = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
ssl.key.password = null
fetch.max.wait.ms = 500
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
session.timeout.ms = 30000
metrics.num.samples = 2
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = SASL_SSL
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
auto.offset.reset = earliest
20/02/28 10:00:53 INFO ui.SparkUI: Stopped Spark web UI at http://<Server>:41037
20/02/28 10:00:53 ERROR streaming.StreamExecution: Query [id = e176f5e7-7157-4df5-93ce-1e267bae6125, runId = 03225a69-ec00-45d9-8092-1467da34980f] terminated with error
org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:702)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:557)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:540)
at org.apache.spark.sql.kafka010.SubscribeStrategy.createConsumer(ConsumerStrategy.scala:62)
at org.apache.spark.sql.kafka010.KafkaOffsetReader.createConsumer(KafkaOffsetReader.scala:297)
at org.apache.spark.sql.kafka010.KafkaOffsetReader.<init>(KafkaOffsetReader.scala:78)
at org.apache.spark.sql.kafka010.KafkaSourceProvider.createSource(KafkaSourceProvider.scala:88)
at org.apache.spark.sql.execution.datasources.DataSource.createSource(DataSource.scala:243)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2$$anonfun$applyOrElse$1.apply(StreamExecution.scala:158)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2$$anonfun$applyOrElse$1.apply(StreamExecution.scala:155)
at scala.collection.mutable.MapLike$class.getOrElseUpdate(MapLike.scala:194)
at scala.collection.mutable.AbstractMap.getOrElseUpdate(Map.scala:80)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2.applyOrElse(StreamExecution.scala:155)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2.applyOrElse(StreamExecution.scala:153)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:267)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:267)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:266)
at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:256)
at org.apache.spark.sql.execution.streaming.StreamExecution.logicalPlan$lzycompute(StreamExecution.scala:153)
at org.apache.spark.sql.execution.streaming.StreamExecution.logicalPlan(StreamExecution.scala:147)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches(StreamExecution.scala:276)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:206)
Caused by: org.apache.kafka.common.KafkaException: javax.security.auth.login.LoginException: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. not available to garner authentication information from the user
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:86)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:70)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:83)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:623)
... 22 more
Caused by: javax.security.auth.login.LoginException: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. not available to garner authentication information from the user
at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Unknown Source)
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Unknown Source)
at com.sun.security.auth.module.Krb5LoginModule.login(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at javax.security.auth.login.LoginContext.invoke(Unknown Source)
at javax.security.auth.login.LoginContext.access$000(Unknown Source)
at javax.security.auth.login.LoginContext$4.run(Unknown Source)
at javax.security.auth.login.LoginContext$4.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(Unknown Source)
at javax.security.auth.login.LoginContext.login(Unknown Source)
at org.apache.kafka.common.security.authenticator.AbstractLogin.login(AbstractLogin.java:69)
at org.apache.kafka.common.security.kerberos.KerberosLogin.login(KerberosLogin.java:110)
at org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:46)
at org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:68)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:78)
... 25 more
20/02/28 10:00:53 INFO cluster.YarnClusterSchedulerBackend: Shutting down all executors
20/02/28 10:00:53 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
20/02/28 10:00:53 INFO cluster.SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
20/02/28 10:00:53 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/02/28 10:00:53 INFO memory.MemoryStore: MemoryStore cleared
20/02/28 10:00:53 INFO storage.BlockManager: BlockManager stopped
20/02/28 10:00:53 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
20/02/28 10:00:53 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/02/28 10:00:53 INFO spark.SparkContext: Successfully stopped SparkContext
20/02/28 10:00:53 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
20/02/28 10:00:53 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
20/02/28 10:00:53 INFO yarn.ApplicationMaster: Deleting staging directory hdfs://nameservice1/user/hasif.subair/.sparkStaging/application_1582866369627_0029
20/02/28 10:00:53 INFO util.ShutdownHookManager: Shutdown hook called
20/02/28 10:00:53 INFO util.ShutdownHookManager: Deleting directory /yarn/nm/usercache/hasif.subair/appcache/application_1582866369627_0029/spark-5addfec0-a99f-49e1-b9d1-671c331efb40
Code
val rawData = spark.readStream.format("kafka")
.option("kafka.bootstrap.servers", "broker1:9093, broker2:9093")
.option("subscribe", "hasif_test")
.option("spark.executor.extraJavaOptions", "-Djava.security.auth.login.config=jaas.conf")
.option("kafka.security.protocol", "SASL_SSL")
.option("ssl.truststore.location", "/etc/connect_ts/truststore.jks")
.option("ssl.truststore.password", "<PASSWORD>")
.option("ssl.keystore.location", "/etc/connect_ts/keystore.jks")
.option("ssl.keystore.password", "<PASSWORD>")
.option("ssl.key.password", "<PASSWORD>")
.load()
rawData.writeStream.option("path", "flight/output")
.option("checkpointLocation", "flight/checkpoint").format("csv").start()
spark-submit
spark2-submit --master yarn --deploy-mode cluster \
--conf spark.yarn.keytab=hasif.subair.keytab \
--conf spark.yarn.principal=hasif.subair@TEST.ABC \
--files /home/hasif.subair/jaas.conf \
--conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=./jaas.conf" \
--conf "spark.driver.extraJavaOptions=-Djava.security.auth.login.config=./jaas.conf" \
--conf "spark.kafka.clusters.hasif.ssl.truststore.location=/etc/ts/truststore.jks" \
--conf "spark.kafka.clusters.hasif.ssl.truststore.password=testcluster" \
--conf "spark.kafka.clusters.hasif.ssl.keystore.location=/etc/ts/keystore.jks" \
--conf "spark.kafka.clusters.hasif.ssl.keystore.password=testcluster" \
--conf "spark.kafka.clusters.hasif.ssl.key.password=testcluster" \
--jars spark-sql-kafka-0-10_2.11-2.2.0.jar \
--class TestApp test_app_2.11-0.1.jar
jaas.conf
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=true
principal="hasif.subair@TEST.ABC"
useKeyTab=true
serviceName="kafka"
keyTab="hasif.subair.keytab"
client=true;
};
Any help will be deeply appreciated.
Kafka's own configurations can be set via DataStreamReader.option with the kafka. prefix, e.g.
.option("kafka.ssl.keystore.location", "/etc/connect_ts/keystore.jks")
Use kafka.ssl.truststore.location instead of ssl.truststore.location, and prefix the other SSL properties with kafka. in the same way. The same rule applies to the spark.kafka.clusters.hasif.* settings in your spark-submit: the Kafka property keeps its kafka. segment there too, e.g. spark.kafka.clusters.hasif.kafka.ssl.truststore.location.
I suspect the SSL values are not getting picked up; as you can see in your log, they are shown as null.
ssl.truststore.location = null
ssl.truststore.password = null
ssl.keystore.password = null
ssl.keystore.location = null
If the values were set properly, they would show up as:
ssl.truststore.location = /etc/connect_ts/truststore.jks
ssl.truststore.password = [hidden]
ssl.keystore.password = [hidden]
ssl.keystore.location = /etc/connect_ts/keystore.jks
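Putting it together, a minimal sketch of the corrected reader, reusing the topic, brokers, and store paths from the question (the extraJavaOptions entry is omitted here, since it is a Spark conf already passed via spark-submit, not a reader option):
// every Kafka client property is handed to the source with the "kafka." prefix
val rawData = spark.readStream.format("kafka")
  .option("kafka.bootstrap.servers", "broker1:9093,broker2:9093")
  .option("subscribe", "hasif_test")
  .option("kafka.security.protocol", "SASL_SSL")
  .option("kafka.ssl.truststore.location", "/etc/connect_ts/truststore.jks")
  .option("kafka.ssl.truststore.password", "<PASSWORD>")
  .option("kafka.ssl.keystore.location", "/etc/connect_ts/keystore.jks")
  .option("kafka.ssl.keystore.password", "<PASSWORD>")
  .option("kafka.ssl.key.password", "<PASSWORD>")
  .load()
If the prefixed values are picked up, the consumer config dump at startup should show the truststore/keystore paths instead of null.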

Problem with sending messages from Flume to Kafka

I have two hosts in two different VMs: Flume on a CentOS 7 host and Kafka on a Cloudera host.
I connected the two and configured Kafka on the Cloudera host as the sink in Flume, as the configuration below shows:
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /root/Bureau/V1/outputCV
a1.sources.r1.fileHeader = true
a1.sources.r1.interceptors = timestampInterceptor
a1.sources.r1.interceptors.timestampInterceptor.type = timestamp
# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = flume-topic
a1.sinks.k1.kafka.bootstrap.servers = 192.168.5.129:9090,192.168.5.129:9091,192.168.5.129:9092
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
a1.sinks.k1.kafka.producer.compression.type = snappy
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
I get this error when Flume tries to send messages to Kafka:
[root@localhost V1]# /root/flume/bin/flume-ng agent --name a1 --conf-file /root/Bureau/V1/flumeconf/flumetest.conf
Warning: No configuration directory set! Use --conf <dir> to override.
Warning: JAVA_HOME is not set!
Info: Including Hadoop libraries found via (/root/hadoop/bin/hadoop) for HDFS access
Info: Including Hive libraries found via () for Hive access
+ exec /usr/bin/java -Xmx20m -cp '/root/flume/lib/*:/root/hadoop/etc/hadoop:/root/hadoop/share/hadoop/common/lib/*:/root/hadoop/share/hadoop/common/*:/root/hadoop/share/hadoop/hdfs:/root/hadoop/share/hadoop/hdfs/lib/*:/root/hadoop/share/hadoop/hdfs/*:/root/hadoop/share/hadoop/mapreduce/lib/*:/root/hadoop/share/hadoop/mapreduce/*:/root/hadoop/share/hadoop/yarn:/root/hadoop/share/hadoop/yarn/lib/*:/root/hadoop/share/hadoop/yarn/*:/lib/*' -Djava.library.path=:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib org.apache.flume.node.Application --name a1 --conf-file /root/Bureau/V1/flumeconf/flumetest.conf
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/flume/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/root/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2019-05-07 19:04:41,363 INFO node.PollingPropertiesFileConfigurationProvider: Configuration provider starting
2019-05-07 19:04:41,366 INFO node.PollingPropertiesFileConfigurationProvider: Reloading configuration file:/root/Bureau/V1/flumeconf/flumetest.conf
2019-05-07 19:04:41,374 INFO conf.FlumeConfiguration: Processing:k1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:c1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:r1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:k1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:r1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:k1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Added sinks: k1 Agent: a1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:k1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:k1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:r1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:k1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:r1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:r1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:c1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:r1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:k1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:c1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:k1
2019-05-07 19:04:41,375 WARN conf.FlumeConfiguration: Agent configuration for 'a1' has no configfilters.
2019-05-07 19:04:41,392 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [a1]
2019-05-07 19:04:41,392 INFO node.AbstractConfigurationProvider: Creating channels
2019-05-07 19:04:41,397 INFO channel.DefaultChannelFactory: Creating instance of channel c1 type memory
2019-05-07 19:04:41,400 INFO node.AbstractConfigurationProvider: Created channel c1
2019-05-07 19:04:41,401 INFO source.DefaultSourceFactory: Creating instance of source r1, type spooldir
2019-05-07 19:04:41,415 INFO sink.DefaultSinkFactory: Creating instance of sink: k1, type: org.apache.flume.sink.kafka.KafkaSink
2019-05-07 19:04:41,420 INFO kafka.KafkaSink: Using the static topic flume-topic. This may be overridden by event headers
2019-05-07 19:04:41,426 INFO node.AbstractConfigurationProvider: Channel c1 connected to [r1, k1]
2019-05-07 19:04:41,432 INFO node.Application: Starting new configuration:{ sourceRunners:{r1=EventDrivenSourceRunner: { source:Spool Directory source r1: { spoolDir: /root/Bureau/V1/outputCV } }} sinkRunners:{k1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@492cf580 counterGroup:{ name:null counters:{} } }} channels:{c1=org.apache.flume.channel.MemoryChannel{name: c1}} }
2019-05-07 19:04:41,432 INFO node.Application: Starting Channel c1
2019-05-07 19:04:41,492 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: c1: Successfully registered new MBean.
2019-05-07 19:04:41,492 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: c1 started
2019-05-07 19:04:41,492 INFO node.Application: Starting Sink k1
2019-05-07 19:04:41,494 INFO node.Application: Starting Source r1
2019-05-07 19:04:41,494 INFO source.SpoolDirectorySource: SpoolDirectorySource source starting with directory: /root/Bureau/V1/outputCV
2019-05-07 19:04:41,525 INFO producer.ProducerConfig: ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [192.168.5.129:9090, 192.168.5.129:9091, 192.168.5.129:9092]
buffer.memory = 33554432
client.id =
compression.type = snappy
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 1
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
2019-05-07 19:04:41,536 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SOURCE, name: r1: Successfully registered new MBean.
2019-05-07 19:04:41,536 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: r1 started
2019-05-07 19:04:41,591 INFO utils.AppInfoParser: Kafka version : 2.0.1
2019-05-07 19:04:41,591 INFO utils.AppInfoParser: Kafka commitId : fa14705e51bd2ce5
2019-05-07 19:04:41,592 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: k1: Successfully registered new MBean.
2019-05-07 19:04:41,592 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: k1 started
2019-05-07 19:05:01,780 INFO avro.ReliableSpoolingFileEventReader: Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2019-05-07 19:05:01,781 INFO avro.ReliableSpoolingFileEventReader: Preparing to move file /root/Bureau/V1/outputCV/cv.txt to /root/Bureau/V1/outputCV/cv.txt.COMPLETED
2019-05-07 19:05:03,673 WARN clients.NetworkClient: [Producer clientId=producer-1] Connection to node -2 could not be established. Broker may not be available.
2019-05-07 19:05:03,776 WARN clients.NetworkClient: [Producer clientId=producer-1] Connection to node -2 could not be established. Broker may not be available.
2019-05-07 19:05:03,856 WARN clients.NetworkClient: [Producer clientId=producer-1] Connection to node -2 could not be established. Broker may not be available.
2019-05-07 19:05:03,910 INFO clients.Metadata: Cluster ID: F_Byx5toQju8jaLb3zFwAA
2019-05-07 19:05:04,084 WARN clients.NetworkClient: [Producer clientId=producer-1] Error connecting to node quickstart.cloudera:9092 (id: 0 rack: null)
java.io.IOException: Can't resolve address: quickstart.cloudera:9092
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:235)
at org.apache.kafka.common.network.Selector.connect(Selector.java:214)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:864)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:265)
at org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:266)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:238)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:101)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622)
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:233)
... 7 more
2019-05-07 19:05:04,140 WARN clients.NetworkClient: [Producer clientId=producer-1] Error connecting to node quickstart.cloudera:9092 (id: 0 rack: null)
[the same warning and "Can't resolve address: quickstart.cloudera:9092" stack trace repeat for every reconnection attempt]
The "Error connecting to node" warnings show up only after Flume starts sending messages.
Can anyone help me solve this problem?
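A plausible reading of the log, offered as an assumption rather than a confirmed fix: the initial metadata request over 192.168.5.129 succeeds (the Cluster ID is logged), but the broker then advertises itself as quickstart.cloudera:9092, a hostname the CentOS VM cannot resolve. Two sketches of how that could be addressed, assuming 192.168.5.129 is the Cloudera VM:
# On the Flume host, map the advertised hostname (append to /etc/hosts):
192.168.5.129   quickstart.cloudera
# Or, in the broker's server.properties, advertise a resolvable address:
advertised.listeners=PLAINTEXT://192.168.5.129:9092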

What parameters should I pass for the schema-registry to run in non-master mode?

I want to run the schema-registry in non-master mode in Kubernetes, so I passed the environment variable master.eligibility=false; however, a master is still being elected.
Please point me to wherever else I should change the configuration! There are no errors suggesting the environment value is wrong.
cmd:
helm install helm-test-0.1.0.tgz --set env.name.SCHEMA_REGISTRY_KAFKASTORE_BOOTSERVERS="PLAINTEXT://xx.xx.xx.xx:9092\,PLAINTEXT://xx.xx.xx.xx:9092\,PLAINTEXT://xx.xx.xx.xx:9092" --set env.name.SCHEMA_REGISTRY_LISTENERS="http://0.0.0.0:8083" --set env.name.SCHEMA_REGISTRY_MASTER_ELIGIBILITY=false
Details:
replicaCount: 1
image:
  repository: confluentinc/cp-schema-registry
  tag: "5.0.0"
  pullPolicy: IfNotPresent
env:
  name:
    SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: "PLAINTEXT://xx.xxx.xx.xx:9092, PLAINTEXT://xx.xxx.xx.xx:9092, PLAINTEXT://xx.xxx.xx.xx:9092"
    SCHEMA_REGISTRY_LISTENERS: "http://0.0.0.0:8883"
    SCHEMA_REGISTRY_HOST_NAME: localhost
    SCHEMA_REGISTRY_MASTER_ELIGIBILITY: false
Pod - schema-registry properties:
root@test-app-788455bb47-tjlhw:/# cat /etc/schema-registry/schema-registry.properties
master.eligibility=false
listeners=http://0.0.0.0:8883
host.name=xx.xx.xxx.xx
kafkastore.bootstrap.servers=PLAINTEXT://xx.xx.xx.xx:9092,PLAINTEXT://xx.xx.xx.xx:9092,PLAINTEXT://xx.xx.xx.xx:9092
echo "===> Launching ... "
+ echo '===> Launching ... '
exec /etc/confluent/docker/launch
+ exec /etc/confluent/docker/launch
===> Launching ...
===> Launching schema-registry ...
[2018-10-15 18:52:45,993] INFO SchemaRegistryConfig values:
resource.extension.class = []
metric.reporters = []
kafkastore.sasl.kerberos.kinit.cmd = /usr/bin/kinit
response.mediatype.default = application/vnd.schemaregistry.v1+json
kafkastore.ssl.trustmanager.algorithm = PKIX
inter.instance.protocol = http
authentication.realm =
ssl.keystore.type = JKS
kafkastore.topic = _schemas
metrics.jmx.prefix = kafka.schema.registry
kafkastore.ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
kafkastore.topic.replication.factor = 3
ssl.truststore.password = [hidden]
kafkastore.timeout.ms = 500
host.name = xx.xxx.xx.xx
kafkastore.bootstrap.servers = [PLAINTEXT://xx.xxx.xx.xx:9092, PLAINTEXT://xx.xxx.xx.xx:9092, PLAINTEXT://xx.xxx.xx.xx:9092]
schema.registry.zk.namespace = schema_registry
kafkastore.sasl.kerberos.ticket.renew.window.factor = 0.8
kafkastore.sasl.kerberos.service.name =
schema.registry.resource.extension.class = []
ssl.endpoint.identification.algorithm =
compression.enable = false
kafkastore.ssl.truststore.type = JKS
avro.compatibility.level = backward
kafkastore.ssl.protocol = TLS
kafkastore.ssl.provider =
kafkastore.ssl.truststore.location =
response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
kafkastore.ssl.keystore.type = JKS
authentication.skip.paths = []
ssl.truststore.type = JKS
kafkastore.ssl.truststore.password = [hidden]
access.control.allow.origin =
ssl.truststore.location =
ssl.keystore.password = [hidden]
port = 8081
kafkastore.ssl.keystore.location =
metrics.tag.map = {}
master.eligibility = false
Logs of the schema-registry pod:
(org.apache.kafka.clients.consumer.ConsumerConfig)
[2018-10-15 18:52:48,571] INFO Kafka version : 2.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2018-10-15 18:52:48,571] INFO Kafka commitId : 4b1dd33f255ddd2f (org.apache.kafka.common.utils.AppInfoParser)
[2018-10-15 18:52:48,599] INFO Cluster ID: V-MGQtptQnuWK_K9-wot1Q (org.apache.kafka.clients.Metadata)
[2018-10-15 18:52:48,602] INFO Initialized last consumed offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2018-10-15 18:52:48,605] INFO [kafka-store-reader-thread-_schemas]: Starting (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2018-10-15 18:52:48,715] INFO [Consumer clientId=KafkaStore-reader-_schemas, groupId=schema-registry-10.100.4.189-8083] Resetting offset for partition _schemas-0 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2018-10-15 18:52:48,721] INFO Cluster ID: V-MGQtptQnuWK_K9-wot1Q (org.apache.kafka.clients.Metadata)
[2018-10-15 18:52:48,775] INFO Wait to catch up until the offset of the last message at 228 (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2018-10-15 18:52:49,831] INFO Joining schema registry with Kafka-based coordination (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
[2018-10-15 18:52:49,852] INFO Kafka version : 2.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2018-10-15 18:52:49,852] INFO Kafka commitId : 4b1dd33f255ddd2f (org.apache.kafka.common.utils.AppInfoParser)
[2018-10-15 18:52:49,909] INFO Cluster ID: V-MGQtptQnuWK_K9-wot1Q (org.apache.kafka.clients.Metadata)
[2018-10-15 18:52:49,915] INFO [Schema registry clientId=sr-1, groupId=schema-registry] Discovered group coordinator ip-10-150-4-5.ec2.internal:9092 (id: 2147483647 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2018-10-15 18:52:49,919] INFO [Schema registry clientId=sr-1, groupId=schema-registry] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2018-10-15 18:52:52,975] INFO [Schema registry clientId=sr-1, groupId=schema-registry] Successfully joined group with generation 92 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2018-10-15 18:52:52,980] INFO Finished rebalance with master election result: Assignment{version=1, error=0, master='sr-1-abcd4cf2-8a02-4105-8361-9aa82107acd8', masterIdentity=version=1,host=ip-xx-xxx-xx-xx.ec2.internal,port=8083,scheme=http,masterEligibility=true} (io.confluent.kafka.schemaregistry.masterelector.kafka.KafkaGroupMasterElector)
[2018-10-15 18:52:53,088] INFO Adding listener: http://0.0.0.0:8083 (io.confluent.rest.Application)
[2018-10-15 18:52:53,347] INFO jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b01 (org.eclipse.jetty.server.Server)
[2018-10-15 18:52:53,428] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
[2018-10-15 18:52:53,429] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
[2018-10-15 18:52:53,432] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session)
Oct 15, 2018 6:52:54 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SubjectsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SubjectsResource will be ignored.
Oct 15, 2018 6:52:54 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.ConfigResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.ConfigResource will be ignored.
Oct 15, 2018 6:52:54 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SchemasResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SchemasResource will be ignored.
Oct 15, 2018 6:52:54 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource will be ignored.
Oct 15, 2018 6:52:54 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.CompatibilityResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.CompatibilityResource will be ignored.
[2018-10-15 18:52:54,364] INFO HV000001: Hibernate Validator 5.1.3.Final (org.hibernate.validator.internal.util.Version)
[2018-10-15 18:52:54,587] INFO Started o.e.j.s.ServletContextHandler#764faa6{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
[2018-10-15 18:52:54,619] INFO Started o.e.j.s.ServletContextHandler#14a50707{/ws,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
[2018-10-15 18:52:54,642] INFO Started NetworkTrafficServerConnector#62656be4{HTTP/1.1,[http/1.1]}{0.0.0.0:8083} (org.eclipse.jetty.server.AbstractConnector)
[2018-10-15 18:52:54,644] INFO Started #9700ms (org.eclipse.jetty.server.Server)
[2018-10-15 18:52:54,644] INFO Server started, listening for requests... (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain)
I checked, and your configs look good. I believe it is, in fact, starting as a follower; the log line is simply reporting the result of the group's master election, i.e. who the master is:
Assignment{version=1, error=0, master='sr-1-abcd4cf2-8a02-4105-8361-9aa82107acd8', masterIdentity=version=1,host=ip-xx-xxx-xx-xx.ec2.internal,port=8083,scheme=http,masterEligibility=true}
The masterIdentity shown here (masterEligibility=true) describes the instance that won the election, not your pod. An instance started with master.eligibility=false still joins the election group; it just can never be elected.
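If you want to sanity-check the role from inside the pod, one hedged sketch (assuming the REST listener on port 8883 from your values file) relies on the fact that a follower serves reads locally while forwarding writes to the elected master, so both calls succeed on a healthy follower:
# reads are served by this instance directly
curl -s http://localhost:8883/subjects
# writes are forwarded to the elected master (subject name here is hypothetical)
curl -s -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data '{"schema": "{\"type\": \"string\"}"}' \
  http://localhost:8883/subjects/test-subject/versions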