Kafka JDBCSinkConnector Schema exception: JsonConverter with schemas.enable requires "schema" and "payload" - postgresql

I'm trying to transfer data from a Kafka topic to Postgres using the JdbcSinkConnector. After creating the topic, creating the stream, creating the sink connector with its configuration, and producing data into the topic through Python, the Connect log returns the following:
Caused by: org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration
This is the JSON schema (sch.json):
{
  "schema": {
    "type": "struct",
    "fields": [
      {
        "type": "int32",
        "optional": false,
        "field": "id"
      },
      {
        "type": "string",
        "optional": false,
        "field": "url"
      }
    ],
    "optional": false,
    "name": "test_data"
  },
  "payload": {
    "id": 12,
    "url": "some_url"
  }
}
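Note that JsonConverter requires the envelope to contain exactly the two top-level fields "schema" and "payload", and no others. A quick stdlib sanity check (a sketch; the envelope is inlined here rather than read from sch.json):

```python
import json

# Inlined copy of the envelope; in practice this would come from sch.json.
doc = '''
{
  "schema": {
    "type": "struct",
    "fields": [
      {"type": "int32", "optional": false, "field": "id"},
      {"type": "string", "optional": false, "field": "url"}
    ],
    "optional": false,
    "name": "test_data"
  },
  "payload": {"id": 12, "url": "some_url"}
}
'''

msg = json.loads(doc)  # raises ValueError if the JSON is malformed
# JsonConverter rejects envelopes with missing or extra top-level fields.
assert set(msg) == {"schema", "payload"}
```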
This is the kafka-connect configuration:
curl -X PUT http://localhost:8083/connectors/sink-jdbc-postgre-01/config \
-H "Content-Type: application/json" -d '{
"connector.class" : "io.confluent.connect.jdbc.JdbcSinkConnector",
"connection.url" : "jdbc:postgresql://postgres:5432/",
"topics" : "test_topic06",
"key.converter" : "org.apache.kafka.connect.json.JsonConverter",
"key.converter.schemas.enable" : "true",
"value.converter" : "org.apache.kafka.connect.json.JsonConverter",
"value.converter.schemas.enable" : "true",
"connection.user" : "postgres",
"connection.password" : "*******",
"auto.create" : true,
"auto.evolve" : true,
"insert.mode" : "insert",
"pk.mode" : "record_key",
"pk.fields" : "MESSAGE_KEY"
}'
This is the Python code for producing data to Kafka:
from kafka import KafkaProducer
import json

producer = KafkaProducer(bootstrap_servers=['127.0.0.1:9092'],
                         value_serializer=lambda v: json.dumps(v).encode('utf-8'))
with open("../data/sch.json", 'r') as file:
    read = file.read()
for i in range(1):
    producer.send("test_topic06", value=read)
producer.close()
Then I tried changing "key.converter.schemas.enable" and "value.converter.schemas.enable" to false, but the log shows the same result.
Full log:
[2021-04-01 09:20:41,342] INFO MonitoringInterceptorConfig values:
connect | confluent.monitoring.interceptor.publishMs = 15000
connect | confluent.monitoring.interceptor.topic = _confluent-monitoring
connect | (io.confluent.monitoring.clients.interceptor.MonitoringInterceptorConfig)
connect | [2021-04-01 09:20:41,344] INFO ProducerConfig values:
connect | acks = -1
connect | batch.size = 16384
connect | bootstrap.servers = [broker:29092]
connect | buffer.memory = 33554432
connect | client.dns.lookup = default
connect | client.id = confluent.monitoring.interceptor.connector-consumer-sink-jdbc-postgre-01-0
connect | compression.type = lz4
connect | connections.max.idle.ms = 540000
connect | delivery.timeout.ms = 120000
connect | enable.idempotence = false
connect | interceptor.classes = []
connect | key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
connect | linger.ms = 500
connect | max.block.ms = 60000
connect | max.in.flight.requests.per.connection = 1
connect | max.request.size = 10485760
connect | metadata.max.age.ms = 300000
connect | metadata.max.idle.ms = 300000
connect | metric.reporters = []
connect | metrics.num.samples = 2
connect | metrics.recording.level = INFO
connect | metrics.sample.window.ms = 30000
connect | partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
connect | receive.buffer.bytes = 32768
connect | reconnect.backoff.max.ms = 1000
connect | reconnect.backoff.ms = 50
connect | request.timeout.ms = 30000
connect | retries = 10
connect | retry.backoff.ms = 500
connect | sasl.client.callback.handler.class = null
connect | sasl.jaas.config = null
connect | sasl.kerberos.kinit.cmd = /usr/bin/kinit
connect | sasl.kerberos.min.time.before.relogin = 60000
connect | sasl.kerberos.service.name = null
connect | sasl.kerberos.ticket.renew.jitter = 0.05
connect | sasl.kerberos.ticket.renew.window.factor = 0.8
connect | sasl.login.callback.handler.class = null
connect | sasl.login.class = null
connect | sasl.login.refresh.buffer.seconds = 300
connect | sasl.login.refresh.min.period.seconds = 60
connect | sasl.login.refresh.window.factor = 0.8
connect | sasl.login.refresh.window.jitter = 0.05
connect | sasl.mechanism = GSSAPI
connect | security.protocol = PLAINTEXT
connect | security.providers = null
connect | send.buffer.bytes = 131072
connect | ssl.cipher.suites = null
connect | ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
connect | ssl.endpoint.identification.algorithm = https
connect | ssl.key.password = null
connect | ssl.keymanager.algorithm = SunX509
connect | ssl.keystore.location = null
connect | ssl.keystore.password = null
connect | ssl.keystore.type = JKS
connect | ssl.protocol = TLS
connect | ssl.provider = null
connect | ssl.secure.random.implementation = null
connect | ssl.trustmanager.algorithm = PKIX
connect | ssl.truststore.location = null
connect | ssl.truststore.password = null
connect | ssl.truststore.type = JKS
connect | transaction.timeout.ms = 60000
connect | transactional.id = null
connect | value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
connect | (org.apache.kafka.clients.producer.ProducerConfig)
connect | [2021-04-01 09:20:41,349] INFO Kafka version: 5.5.0-ce (org.apache.kafka.common.utils.AppInfoParser)
connect | [2021-04-01 09:20:41,349] INFO Kafka commitId: 6068e5d52c5e294e (org.apache.kafka.common.utils.AppInfoParser)
connect | [2021-04-01 09:20:41,349] INFO Kafka startTimeMs: 1617268841349 (org.apache.kafka.common.utils.AppInfoParser)
connect | [2021-04-01 09:20:41,349] INFO interceptor=confluent.monitoring.interceptor.connector-consumer-sink-jdbc-postgre-01-0 created for client_id=connector-consumer-sink-jdbc-postgre-01-0 client_type=CONSUMER session= cluster=K4nfs8sOSWCoI2_jEFzZ1Q group=connect-sink-jdbc-postgre-01 (io.confluent.monitoring.clients.interceptor.MonitoringInterceptor)
connect | [2021-04-01 09:20:41,361] ERROR WorkerSinkTask{id=sink-jdbc-postgre-01-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
connect | org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
connect | at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
connect | at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:492)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:469)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:325)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:228)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:196)
connect | at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
connect | at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
connect | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
connect | at java.util.concurrent.FutureTask.run(FutureTask.java:266)
connect | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
connect | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
connect | at java.lang.Thread.run(Thread.java:748)
connect | Caused by: org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
connect | at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:359)
connect | at org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:87)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$2(WorkerSinkTask.java:492)
connect | at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
connect | at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
connect | ... 13 more
connect | [2021-04-01 09:20:41,363] ERROR WorkerSinkTask{id=sink-jdbc-postgre-01-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask)
connect | [2021-04-01 09:20:41,364] INFO Stopping task (io.confluent.connect.jdbc.sink.JdbcSinkTask)
connect | [2021-04-01 09:20:41,366] INFO [Producer clientId=confluent.monitoring.interceptor.connector-consumer-sink-jdbc-postgre-01-0] Cluster ID: K4nfs8sOSWCoI2_jEFzZ1Q (org.apache.kafka.clients.Metadata)
connect | [2021-04-01 09:20:41,370] INFO [Consumer clientId=connector-consumer-sink-jdbc-postgre-01-0, groupId=connect-sink-jdbc-postgre-01] Revoke previously assigned partitions test_topic06-2, test_topic06-0, test_topic06-1 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
connect | [2021-04-01 09:20:41,370] INFO [Consumer clientId=connector-consumer-sink-jdbc-postgre-01-0, groupId=connect-sink-jdbc-postgre-01] Member connector-consumer-sink-jdbc-postgre-01-0-a6013ad5-a778-4372-a9ab-a0c77119150b sending LeaveGroup request to coordinator broker:29092 (id: 2147483646 rack: null) due to the consumer is being closed (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
connect | [2021-04-01 09:20:41,379] INFO Publish thread interrupted for client_id=connector-consumer-sink-jdbc-postgre-01-0 client_type=CONSUMER session= cluster=K4nfs8sOSWCoI2_jEFzZ1Q group=connect-sink-jdbc-postgre-01 (io.confluent.monitoring.clients.interceptor.MonitoringInterceptor)
connect | [2021-04-01 09:20:41,396] INFO Publishing Monitoring Metrics stopped for client_id=connector-consumer-sink-jdbc-postgre-01-0 client_type=CONSUMER session= cluster=K4nfs8sOSWCoI2_jEFzZ1Q group=connect-sink-jdbc-postgre-01 (io.confluent.monitoring.clients.interceptor.MonitoringInterceptor)
connect | [2021-04-01 09:20:41,397] INFO [Producer clientId=confluent.monitoring.interceptor.connector-consumer-sink-jdbc-postgre-01-0] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
connect | [2021-04-01 09:20:41,403] INFO Closed monitoring interceptor for client_id=connector-consumer-sink-jdbc-postgre-01-0 client_type=CONSUMER session= cluster=K4nfs8sOSWCoI2_jEFzZ1Q group=connect-sink-jdbc-postgre-01 (io.confluent.monitoring.clients.interceptor.MonitoringInterceptor)

You are configuring the connector to parse a JSON key:
"key.converter" : "org.apache.kafka.connect.json.JsonConverter",
"key.converter.schemas.enable" : "true",
but you're not producing any key:
producer.send("test_topic06", value=read)
You could either:
set your key.converter to org.apache.kafka.connect.storage.StringConverter
or
pass a key with the same {"schema": {...}, "payload": {...}} structure:
producer.send("test_topic06", key=key_value, value=read)
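For the second option, here is a sketch of what the key envelope could look like. The field name MESSAGE_KEY mirrors the connector's pk.fields setting; the payload value and the key_serializer shown in the comments are assumptions, not part of the original question:

```python
import json

# Hypothetical key envelope: with key.converter.schemas.enable=true, the key
# needs its own "schema"/"payload" wrapper, just like the value. The field
# name "MESSAGE_KEY" mirrors the connector's pk.fields configuration.
key_value = {
    "schema": {
        "type": "struct",
        "fields": [
            {"type": "int32", "optional": False, "field": "MESSAGE_KEY"}
        ],
        "optional": False,
        "name": "test_key",
    },
    "payload": {"MESSAGE_KEY": 12},
}

# The producer would then need a key_serializer mirroring the value_serializer:
#   producer = KafkaProducer(bootstrap_servers=['127.0.0.1:9092'],
#                            key_serializer=lambda v: json.dumps(v).encode('utf-8'),
#                            value_serializer=lambda v: json.dumps(v).encode('utf-8'))
#   producer.send("test_topic06", key=key_value, value=read)
encoded = json.dumps(key_value).encode("utf-8")  # what the broker receives
decoded = json.loads(encoded)
```

One related pitfall: value=read passes the file contents as a string, so the value_serializer JSON-encodes it a second time and the topic ends up holding a JSON string rather than an object; parsing the file with json.loads first avoids the double encoding.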

Related

Spring Cloud Stream project with Failed to obtain partition information Error

When I use this configuration:
spring:
  cloud:
    stream:
      kafka:
        binder:
          min-partition-count: 1
          replication-factor: 1
  kafka:
    producer:
      transaction-id-prefix: tx-
      retries: 1
      acks: all
My application starts correctly, but the transactional.id that I see in the console output shows null.
I applied this extra (transaction) configuration to spring-cloud-stream, in order to get the correct transactional.id:
spring:
  cloud:
    stream:
      kafka:
        binder:
          min-partition-count: 1
          replication-factor: 1
          transaction:
            transaction-id-prefix: txl-
  kafka:
    producer:
      transaction-id-prefix: tx-
      retries: 1
      acks: all
But the service does not start successfully, and the console output shows this:
app_poc.1.nqc57nvh0qhr#ms-poc-02 | 2020-11-25 20:27:49.435 INFO [poc,,,] 1 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.5.1
app_poc.1.nqc57nvh0qhr#ms-poc-02 | 2020-11-25 20:27:49.437 INFO [poc,,,] 1 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 0efa8fb0f4c73d92
app_poc.1.nqc57nvh0qhr#ms-poc-02 | 2020-11-25 20:27:49.437 INFO [poc,,,] 1 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1606336069435
app_poc.1.nqc57nvh0qhr#ms-poc-02 | 2020-11-25 20:27:49.597 INFO [poc,,,] 1 --- [ main] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
app_poc.1.nqc57nvh0qhr#ms-poc-02 | acks = -1
app_poc.1.nqc57nvh0qhr#ms-poc-02 | batch.size = 16384
app_poc.1.nqc57nvh0qhr#ms-poc-02 | bootstrap.servers = [kafka:29092]
app_poc.1.nqc57nvh0qhr#ms-poc-02 | buffer.memory = 33554432
app_poc.1.nqc57nvh0qhr#ms-poc-02 | client.dns.lookup = default
app_poc.1.nqc57nvh0qhr#ms-poc-02 | client.id = producer-txl-1
app_poc.1.nqc57nvh0qhr#ms-poc-02 | compression.type = none
app_poc.1.nqc57nvh0qhr#ms-poc-02 | connections.max.idle.ms = 540000
app_poc.1.nqc57nvh0qhr#ms-poc-02 | delivery.timeout.ms = 120000
app_poc.1.nqc57nvh0qhr#ms-poc-02 | enable.idempotence = true
app_poc.1.nqc57nvh0qhr#ms-poc-02 | interceptor.classes = []
app_poc.1.nqc57nvh0qhr#ms-poc-02 | key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
app_poc.1.nqc57nvh0qhr#ms-poc-02 | linger.ms = 0
app_poc.1.nqc57nvh0qhr#ms-poc-02 | max.block.ms = 60000
app_poc.1.nqc57nvh0qhr#ms-poc-02 | max.in.flight.requests.per.connection = 5
app_poc.1.nqc57nvh0qhr#ms-poc-02 | max.request.size = 1048576
app_poc.1.nqc57nvh0qhr#ms-poc-02 | metadata.max.age.ms = 300000
app_poc.1.nqc57nvh0qhr#ms-poc-02 | metadata.max.idle.ms = 300000
app_poc.1.nqc57nvh0qhr#ms-poc-02 | metric.reporters = []
app_poc.1.nqc57nvh0qhr#ms-poc-02 | metrics.num.samples = 2
app_poc.1.nqc57nvh0qhr#ms-poc-02 | metrics.recording.level = INFO
app_poc.1.nqc57nvh0qhr#ms-poc-02 | metrics.sample.window.ms = 30000
app_poc.1.nqc57nvh0qhr#ms-poc-02 | partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
app_poc.1.nqc57nvh0qhr#ms-poc-02 | receive.buffer.bytes = 32768
app_poc.1.nqc57nvh0qhr#ms-poc-02 | reconnect.backoff.max.ms = 1000
app_poc.1.nqc57nvh0qhr#ms-poc-02 | reconnect.backoff.ms = 50
app_poc.1.nqc57nvh0qhr#ms-poc-02 | request.timeout.ms = 30000
app_poc.1.nqc57nvh0qhr#ms-poc-02 | retries = 1
app_poc.1.nqc57nvh0qhr#ms-poc-02 | retry.backoff.ms = 100
app_poc.1.nqc57nvh0qhr#ms-poc-02 | sasl.client.callback.handler.class = null
app_poc.1.nqc57nvh0qhr#ms-poc-02 | sasl.jaas.config = null
app_poc.1.nqc57nvh0qhr#ms-poc-02 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
app_poc.1.nqc57nvh0qhr#ms-poc-02 | sasl.kerberos.min.time.before.relogin = 60000
app_poc.1.nqc57nvh0qhr#ms-poc-02 | sasl.kerberos.service.name = null
app_poc.1.nqc57nvh0qhr#ms-poc-02 | sasl.kerberos.ticket.renew.jitter = 0.05
app_poc.1.nqc57nvh0qhr#ms-poc-02 | sasl.kerberos.ticket.renew.window.factor = 0.8
app_poc.1.nqc57nvh0qhr#ms-poc-02 | sasl.login.callback.handler.class = null
app_poc.1.nqc57nvh0qhr#ms-poc-02 | sasl.login.class = null
app_poc.1.nqc57nvh0qhr#ms-poc-02 | sasl.login.refresh.buffer.seconds = 300
app_poc.1.nqc57nvh0qhr#ms-poc-02 | sasl.login.refresh.min.period.seconds = 60
app_poc.1.nqc57nvh0qhr#ms-poc-02 | sasl.login.refresh.window.factor = 0.8
app_poc.1.nqc57nvh0qhr#ms-poc-02 | sasl.login.refresh.window.jitter = 0.05
app_poc.1.nqc57nvh0qhr#ms-poc-02 | sasl.mechanism = GSSAPI
app_poc.1.nqc57nvh0qhr#ms-poc-02 | security.protocol = PLAINTEXT
app_poc.1.nqc57nvh0qhr#ms-poc-02 | security.providers = null
app_poc.1.nqc57nvh0qhr#ms-poc-02 | send.buffer.bytes = 131072
app_poc.1.nqc57nvh0qhr#ms-poc-02 | ssl.cipher.suites = null
app_poc.1.nqc57nvh0qhr#ms-poc-02 | ssl.enabled.protocols = [TLSv1.2]
app_poc.1.nqc57nvh0qhr#ms-poc-02 | ssl.endpoint.identification.algorithm = https
app_poc.1.nqc57nvh0qhr#ms-poc-02 | ssl.key.password = null
app_poc.1.nqc57nvh0qhr#ms-poc-02 | ssl.keymanager.algorithm = SunX509
app_poc.1.nqc57nvh0qhr#ms-poc-02 | ssl.keystore.location = null
app_poc.1.nqc57nvh0qhr#ms-poc-02 | ssl.keystore.password = null
app_poc.1.nqc57nvh0qhr#ms-poc-02 | ssl.keystore.type = JKS
app_poc.1.nqc57nvh0qhr#ms-poc-02 | ssl.protocol = TLSv1.2
app_poc.1.nqc57nvh0qhr#ms-poc-02 | ssl.provider = null
app_poc.1.nqc57nvh0qhr#ms-poc-02 | ssl.secure.random.implementation = null
app_poc.1.nqc57nvh0qhr#ms-poc-02 | ssl.trustmanager.algorithm = PKIX
app_poc.1.nqc57nvh0qhr#ms-poc-02 | ssl.truststore.location = null
app_poc.1.nqc57nvh0qhr#ms-poc-02 | ssl.truststore.password = null
app_poc.1.nqc57nvh0qhr#ms-poc-02 | ssl.truststore.type = JKS
app_poc.1.nqc57nvh0qhr#ms-poc-02 | transaction.timeout.ms = 60000
app_poc.1.nqc57nvh0qhr#ms-poc-02 | transactional.id = txl-1
app_poc.1.nqc57nvh0qhr#ms-poc-02 | value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
app_poc.1.nqc57nvh0qhr#ms-poc-02 |
app_poc.1.nqc57nvh0qhr#ms-poc-02 | 2020-11-25 20:27:49.599 INFO [poc,,,] 1 --- [ main] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-txl-1, transactionalId=txl-1] Instantiated a transactional producer.
app_poc.1.nqc57nvh0qhr#ms-poc-02 | 2020-11-25 20:27:49.623 INFO [poc,,,] 1 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.5.1
app_poc.1.nqc57nvh0qhr#ms-poc-02 | 2020-11-25 20:27:49.624 INFO [poc,,,] 1 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 0efa8fb0f4c73d92
app_poc.1.nqc57nvh0qhr#ms-poc-02 | 2020-11-25 20:27:49.624 INFO [poc,,,] 1 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1606336069623
app_poc.1.nqc57nvh0qhr#ms-poc-02 | 2020-11-25 20:27:49.626 INFO [poc,,,] 1 --- [ main] o.a.k.c.p.internals.TransactionManager : [Producer clientId=producer-txl-1, transactionalId=txl-1] Invoking InitProducerId for the first time in order to acquire a producer ID
app_poc.1.nqc57nvh0qhr#ms-poc-02 | 2020-11-25 20:27:49.637 INFO [poc,,,] 1 --- [ producer-txl-1] org.apache.kafka.clients.Metadata : [Producer clientId=producer-txl-1, transactionalId=txl-1] Cluster ID: 3wV8FW9yTfKSVhNwNMoC2Q
app_poc.1.nqc57nvh0qhr#ms-poc-02 | 2020-11-25 20:28:49.630 ERROR [poc,,,] 1 --- [ main] o.s.c.s.b.k.p.KafkaTopicProvisioner : Failed to obtain partition information
app_poc.1.nqc57nvh0qhr#ms-poc-02 |
app_poc.1.nqc57nvh0qhr#ms-poc-02 | org.apache.kafka.common.errors.TimeoutException: Timeout expired after 60000milliseconds while awaiting InitProducerId
Failed to obtain partition information
I think I am doing something wrong with my configuration (definitely).
My intention is to have exactly-once semantics, in order to avoid duplicates; that's why I want to see that transactional.id.
Extra info:
My consumer is transactional, using JPA and Kafka transactions together (transaction synchronization via ChainedKafkaTransactionManager).
EDITED:
In a @Configuration class I have these beans:
@Bean
@Primary
fun transactionManager(em: EntityManagerFactory): JpaTransactionManager {
    return JpaTransactionManager(em)
}

@Bean
fun kafkaTransactionManager(producerFactory: ProducerFactory<Any, Any>): KafkaTransactionManager<*, *> {
    return KafkaTransactionManager(producerFactory)
}

@Bean
fun chainedTransactionManager(
    kafkaTransactionManager: KafkaTransactionManager<String, String>,
    transactionManager: JpaTransactionManager,
): ChainedKafkaTransactionManager<Any, Any> {
    return ChainedKafkaTransactionManager(kafkaTransactionManager, transactionManager)
}

@Bean
fun kafkaListenerContainerFactory(
    configurer: ConcurrentKafkaListenerContainerFactoryConfigurer,
    kafkaConsumerFactory: ConsumerFactory<Any, Any>,
    chainedKafkaTransactionManager: ChainedKafkaTransactionManager<Any, Any>,
): ConcurrentKafkaListenerContainerFactory<*, *> {
    val factory = ConcurrentKafkaListenerContainerFactory<Any, Any>()
    configurer.configure(factory, kafkaConsumerFactory)
    factory.containerProperties.transactionManager = chainedKafkaTransactionManager
    return factory
}
And my processor class with the corresponding @Transactional:
@EnableKafka
@EnableBinding(Channels::class)
@Service
@Transactional
class EventProcessor()
...
As far as I can tell, with the first configuration shown, transaction synchronization works.
I used this logging configuration to confirm the "Initializing transaction synchronization" and "Clearing transaction synchronization" messages from TransactionSynchronizationManager:
logging:
  level:
    org.springframework.kafka: trace
    org.springframework.transaction: trace
See this answer.
You most likely don't have enough replicas or in-sync replicas for the transaction log topic.
Regarding ChainedKafkaTransactionManager: that is only supported out of the box in spring-cloud-stream for producer-only transactions. For consume->process->publish operations, you must use @Transactional on the listener, with just the JPA transaction manager; the result is similar to transaction synchronization.
Alternatively, you must inject a properly configured ChainedKafkaTransactionManager into the binding's listener container.
You need to show your code and the rest of the configuration.
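The InitProducerId timeout in the log is consistent with the replica explanation: if the broker cannot satisfy the transaction state log's replication requirements, the producer blocks until it times out. For a single-broker development cluster, the broker-side defaults can be lowered; this is a sketch for dev use only, production should keep the defaults:

```properties
# server.properties (broker side) - development/single-broker only.
# The defaults are replication.factor=3 and min.isr=2, which a one- or
# two-broker cluster cannot satisfy, so InitProducerId never completes.
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
```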

How can I enable SASL in Kafka-Connect (within Cluster)

I have downloaded cp-kafka-connect and deployed it in my k8s cluster with a Kafka broker which accepts secure connections (SASL).
I would like to enable security (SASL) for Kafka Connect.
I am using a ConfigMap to mount the configuration file named connect-distributed.properties into the cp-kafka-connect container (in /etc/kafka).
Here is the part of configuration file:
sasl.mechanism=SCRAM-SHA-256
# Configure SASL_SSL if SSL encryption is enabled, otherwise configure SASL_PLAINTEXT
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin" password="password-secret";
But it fails to start with an error.
Here are the logs:
kubectl logs test-cp-kafka-connect-846f4b745f-hx2mp
===> ENV Variables ...
ALLOW_UNSIGNED=false
COMPONENT=kafka-connect
CONFLUENT_DEB_VERSION=1
CONFLUENT_PLATFORM_LABEL=
CONFLUENT_VERSION=5.5.0
CONNECT_BOOTSTRAP_SERVERS=PLAINTEXT://test-kafka:9092
CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR=3
CONNECT_CONFIG_STORAGE_TOPIC=test-cp-kafka-connect-config
CONNECT_GROUP_ID=test
CONNECT_INTERNAL_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
CONNECT_INTERNAL_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
CONNECT_KEY_CONVERTER=io.confluent.connect.avro.AvroConverter
CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE=false
CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL=http://test-cp-schema-registry:8081
CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR=3
CONNECT_OFFSET_STORAGE_TOPIC=test-cp-kafka-connect-offset
CONNECT_PLUGIN_PATH=/usr/share/java,/usr/share/confluent-hub-components
CONNECT_REST_ADVERTISED_HOST_NAME=10.233.85.127
CONNECT_REST_PORT=8083
CONNECT_STATUS_STORAGE_REPLICATION_FACTOR=3
CONNECT_STATUS_STORAGE_TOPIC=test-cp-kafka-connect-status
CONNECT_VALUE_CONVERTER=io.confluent.connect.avro.AvroConverter
CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE=false
CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL=http://test-cp-schema-registry:8081
CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
HOME=/root
HOSTNAME=test-cp-kafka-connect-846f4b745f-hx2mp
KAFKA_ADVERTISED_LISTENERS=
KAFKA_HEAP_OPTS=-Xms512M -Xmx512M
KAFKA_JMX_PORT=5555
KAFKA_VERSION=
KAFKA_ZOOKEEPER_CONNECT=
KUBERNETES_PORT=tcp://10.233.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443
KUBERNETES_PORT_443_TCP_ADDR=10.233.0.1
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_HOST=10.233.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
LANG=C.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
PYTHON_PIP_VERSION=8.1.2
PYTHON_VERSION=2.7.9-1
SCALA_VERSION=2.12
SHLVL=1
TEST_0_EXTERNAL_PORT=tcp://10.233.13.164:19092
TEST_0_EXTERNAL_PORT_19092_TCP=tcp://10.233.13.164:19092
TEST_0_EXTERNAL_PORT_19092_TCP_ADDR=10.233.13.164
TEST_0_EXTERNAL_PORT_19092_TCP_PORT=19092
TEST_0_EXTERNAL_PORT_19092_TCP_PROTO=tcp
TEST_0_EXTERNAL_SERVICE_HOST=10.233.13.164
TEST_0_EXTERNAL_SERVICE_PORT=19092
TEST_0_EXTERNAL_SERVICE_PORT_EXTERNAL_BROKER=19092
TEST_CP_KAFKA_CONNECT_PORT=tcp://10.233.38.137:8083
TEST_CP_KAFKA_CONNECT_PORT_8083_TCP=tcp://10.233.38.137:8083
TEST_CP_KAFKA_CONNECT_PORT_8083_TCP_ADDR=10.233.38.137
TEST_CP_KAFKA_CONNECT_PORT_8083_TCP_PORT=8083
TEST_CP_KAFKA_CONNECT_PORT_8083_TCP_PROTO=tcp
TEST_CP_KAFKA_CONNECT_SERVICE_HOST=10.233.38.137
TEST_CP_KAFKA_CONNECT_SERVICE_PORT=8083
TEST_CP_KAFKA_CONNECT_SERVICE_PORT_KAFKA_CONNECT=8083
TEST_KAFKA_EXPORTER_PORT=tcp://10.233.5.215:9308
TEST_KAFKA_EXPORTER_PORT_9308_TCP=tcp://10.233.5.215:9308
TEST_KAFKA_EXPORTER_PORT_9308_TCP_ADDR=10.233.5.215
TEST_KAFKA_EXPORTER_PORT_9308_TCP_PORT=9308
TEST_KAFKA_EXPORTER_PORT_9308_TCP_PROTO=tcp
TEST_KAFKA_EXPORTER_SERVICE_HOST=10.233.5.215
TEST_KAFKA_EXPORTER_SERVICE_PORT=9308
TEST_KAFKA_EXPORTER_SERVICE_PORT_KAFKA_EXPORTER=9308
TEST_KAFKA_MANAGER_PORT=tcp://10.233.7.186:9000
TEST_KAFKA_MANAGER_PORT_9000_TCP=tcp://10.233.7.186:9000
TEST_KAFKA_MANAGER_PORT_9000_TCP_ADDR=10.233.7.186
TEST_KAFKA_MANAGER_PORT_9000_TCP_PORT=9000
TEST_KAFKA_MANAGER_PORT_9000_TCP_PROTO=tcp
TEST_KAFKA_MANAGER_SERVICE_HOST=10.233.7.186
TEST_KAFKA_MANAGER_SERVICE_PORT=9000
TEST_KAFKA_MANAGER_SERVICE_PORT_KAFKA_MANAGER=9000
TEST_KAFKA_PORT=tcp://10.233.12.237:9092
TEST_KAFKA_PORT_8001_TCP=tcp://10.233.12.237:8001
TEST_KAFKA_PORT_8001_TCP_ADDR=10.233.12.237
TEST_KAFKA_PORT_8001_TCP_PORT=8001
TEST_KAFKA_PORT_8001_TCP_PROTO=tcp
TEST_KAFKA_PORT_9092_TCP=tcp://10.233.12.237:9092
TEST_KAFKA_PORT_9092_TCP_ADDR=10.233.12.237
TEST_KAFKA_PORT_9092_TCP_PORT=9092
TEST_KAFKA_PORT_9092_TCP_PROTO=tcp
TEST_KAFKA_SERVICE_HOST=10.233.12.237
TEST_KAFKA_SERVICE_PORT=9092
TEST_KAFKA_SERVICE_PORT_BROKER=9092
TEST_KAFKA_SERVICE_PORT_KAFKASHELL=8001
TEST_ZOOKEEPER_PORT=tcp://10.233.1.144:2181
TEST_ZOOKEEPER_PORT_2181_TCP=tcp://10.233.1.144:2181
TEST_ZOOKEEPER_PORT_2181_TCP_ADDR=10.233.1.144
TEST_ZOOKEEPER_PORT_2181_TCP_PORT=2181
TEST_ZOOKEEPER_PORT_2181_TCP_PROTO=tcp
TEST_ZOOKEEPER_SERVICE_HOST=10.233.1.144
TEST_ZOOKEEPER_SERVICE_PORT=2181
TEST_ZOOKEEPER_SERVICE_PORT_CLIENT=2181
ZULU_OPENJDK_VERSION=8=8.38.0.13
_=/usr/bin/env
appID=dAi5R82Pf9xC38kHkGeAFaOknIUImdmS-1589882527
cluster=test
datacenter=testx
namespace=mynamespace
workspace=8334431b-ef82-414f-9348-a8de032dfca7
===> User
uid=0(root) gid=0(root) groups=0(root)
===> Configuring ...
===> Running preflight checks ...
===> Check if Kafka is healthy ...
[main] INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values:
bootstrap.servers = [PLAINTEXT://test-kafka:9092]
client.dns.lookup = default
client.id =
connections.max.idle.ms = 300000
default.api.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 5.5.0-ccs
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 785a156634af5f7e
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1589883940496
[kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=adminclient-1] Metadata update failed
org.apache.kafka.common.errors.TimeoutException: Call(callName=fetchMetadata, deadlineMs=1589883970509) timed out at 1589883970510 after 281 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
The error is:
[kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=adminclient-1] Metadata update failed
org.apache.kafka.common.errors.TimeoutException: Call(callName=fetchMetadata, deadlineMs=1589883970509) timed out at 1589883970510 after 281 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node
Refer to this approach:
sasl-scram-connect-workers
Can someone help me resolve this issue?
Change your bootstrapServers parameter to point to the SASL listener. For example:
SASL_SSL://test-kafka:9093
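The ENV listing above shows CONNECT_BOOTSTRAP_SERVERS=PLAINTEXT://test-kafka:9092, which is why the preflight AdminClient logs security.protocol = PLAINTEXT and times out against the SASL broker. A sketch of the worker settings pointing at the SASL listener (the port and credentials here are assumptions; the consumer./producer. prefixed copies cover Connect's embedded clients):

```properties
# connect-distributed.properties - the worker's own connection to the brokers
bootstrap.servers=test-kafka:9093
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="admin" password="password-secret";

# Connect's embedded consumer and producer clients need the same settings,
# under their own prefixes:
consumer.security.protocol=SASL_SSL
consumer.sasl.mechanism=SCRAM-SHA-256
consumer.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="admin" password="password-secret";
producer.security.protocol=SASL_SSL
producer.sasl.mechanism=SCRAM-SHA-256
producer.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="admin" password="password-secret";
```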

Trouble connecting to MSK over SSL using Kafka-Connect

I'm having trouble using the AWS MSK TLS endpoints in the Confluent Kafka Connect image, as it times out creating/reading topics. Everything works fine when I pass the plaintext endpoints.
I tried referencing the JKS store path available on that docker image, but it still doesn't work; I'm not quite sure if I'm missing any other configs. From what I read in the AWS docs, Amazon MSK brokers use public AWS Certificate Manager certificates; therefore, any truststore that trusts Amazon Trust Services also trusts the certificates of Amazon MSK brokers.
Error:
org.apache.kafka.connect.errors.ConnectException: Timed out while checking for or creating topic(s) '_confluent-command'. This could indicate a connectivity issue, unavailable topic partitions, or if this is your first use of the topic it may have taken too long to create.
Attaching the kafka-connect config I'm using; any help would be great :)
INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values:
bootstrap.servers = [**.us-east-1.amazonaws.com:9094,*.us-east-1.amazonaws.com:9094]
client.dns.lookup = default
client.id =
connections.max.idle.ms = 300000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 120000
retries = 5
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = SSL
security.providers = null
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = JKSStorePath
ssl.truststore.password = ***
ssl.truststore.type = JKS
I used the Java cacerts in the Docker image at /usr/lib/jvm/zulu-8-amd64/jre/lib/security/cacerts as the truststore. With keytool, you can inspect the certs:
keytool -list -v -keystore /usr/lib/jvm/zulu-8-amd64/jre/lib/security/cacerts | grep Amazon
It will list out the Amazon CAs.
I then started the container using:
docker run -d \
--name=kafka-connect-avro-ssl \
--net=host \
-e CONNECT_BOOTSTRAP_SERVERS=<msk_broker1>:9094,<msk_broker2>:9094,<msk_broker3>:9094 \
-e CONNECT_REST_PORT=28083 \
-e CONNECT_GROUP_ID="quickstart-avro" \
-e CONNECT_CONFIG_STORAGE_TOPIC="avro-config" \
-e CONNECT_OFFSET_STORAGE_TOPIC="avro-offsets" \
-e CONNECT_STATUS_STORAGE_TOPIC="avro-status" \
-e CONNECT_KEY_CONVERTER="io.confluent.connect.avro.AvroConverter" \
-e CONNECT_VALUE_CONVERTER="io.confluent.connect.avro.AvroConverter" \
-e CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL="<hostname of EC2 instance>:8081" \
-e CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL="http://<hostname of EC2 instance>:8081" \
-e CONNECT_INTERNAL_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_INTERNAL_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_REST_ADVERTISED_HOST_NAME="<hostname of EC2 instance>" \
-e CONNECT_LOG4J_ROOT_LOGLEVEL=DEBUG \
-e CONNECT_SECURITY_PROTOCOL=SSL \
-e CONNECT_SSL_TRUSTSTORE_LOCATION=/usr/lib/jvm/zulu-8-amd64/jre/lib/security/cacerts \
-e CONNECT_SSL_TRUSTSTORE_PASSWORD=changeit \
confluentinc/cp-kafka-connect:latest
With that, it started successfully. I was also able to connect to the container, create topics, and produce and consume from within the container. If you're unable to create topics, it could be a network connectivity issue, possibly the security group attached to the MSK cluster blocking port 2181 and the TLS port 9094.
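To separate a security-group or routing problem from a Kafka/Connect misconfiguration, a plain TCP reachability probe against the broker and ZooKeeper ports is a quick first check. A minimal Python sketch (the broker host name below is a placeholder, not a real endpoint):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    A timeout or refusal on the MSK TLS port (9094) or ZooKeeper (2181)
    usually points at a security group or routing issue rather than a
    Connect configuration problem.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical broker host; substitute your own MSK bootstrap endpoints.
# port_reachable("b-1.mycluster.us-east-1.amazonaws.com", 9094)
```

If this returns False for all brokers, fix the network path first; no truststore setting will help until the port is reachable.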

Why is camel kafka producer very slow?

I am using Apache Camel Kafka as the client for producing messages. What I observed is that the Kafka producer takes 1 ms to push a message; if I merge messages into a batch using Camel aggregation, then it takes 100 ms to push a single message.
Brief description of installation
3-node Kafka cluster, 16 cores and 32 GB RAM each
Sample Code
String endpoint = "kafka:test?topic=test&brokers=nodekfa:9092,nodekfb:9092,nodekfc:9092&lingerMs=0&maxInFlightRequest=1&producerBatchSize=65536";
Message message = new Message();
String payload = new ObjectMapper().writeValueAsString(message);
StopWatch stopWatch = new StopWatch();
stopWatch.watch();
for (int i = 0; i < size; i++) {
    producerTemplate.sendBody(endpoint, ExchangePattern.InOnly, payload);
}
logger.info("Time taken to push {} messages is {}", size, stopWatch.getElapsedTime());
camel producer endpoint
kafka:[topic]?topic=[topic]&brokers=[brokers]&maxInFlightRequest=1
I am getting a throughput of 1000 msg/s, though the Kafka documentation advertises producer throughput of around 100,000 msg/s.
Let me know if there is a bug in camel-kafka or in Kafka itself.
Producer config
acks = 1
batch.size = 65536
bootstrap.servers = [nodekfa:9092, nodekfb:9092, nodekfc:9092]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 1
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retries = 0
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
Test Logs
DEBUG [2019-06-02 17:30:46,781] c.g.p.f.u.AuditEventNotifier: >>> Took 3 millis for the exchange on the route : null
DEBUG [2019-06-02 17:30:46,781] c.g.p.f.u.AuditEventNotifier: >>> Took 3 millis to send to external system : kafka://test?brokers=nodekfa%3A9092%2Cnodekfb%3A9092%2Cnodekfc%3A9092&lingerMs=0&maxInFlightRequest=1&producerBatchSize=65536&topic=test by thead http-nio-8551-exec-6
DEBUG [2019-06-02 17:30:46,783] c.g.p.f.u.AuditEventNotifier: >>> Took 2 millis for the exchange on the route : null
DEBUG [2019-06-02 17:30:46,783] c.g.p.f.u.AuditEventNotifier: >>> Took 2 millis to send to external system : kafka://test?brokers=nodekfa%3A9092%2Cnodekfb%3A9092%2Cnodekfc%3A9092&lingerMs=0&maxInFlightRequest=1&producerBatchSize=65536&topic=test by thead http-nio-8551-exec-6
DEBUG [2019-06-02 17:30:46,784] c.g.p.f.u.AuditEventNotifier: >>> Took 1 millis for the exchange on the route : null
DEBUG [2019-06-02 17:30:46,785] c.g.p.f.u.AuditEventNotifier: >>> Took 2 millis to send to external system : kafka://test?brokers=nodekfa%3A9092%2Cnodekfb%3A9092%2Cnodekfc%3A9092&lingerMs=0&maxInFlightRequest=1&producerBatchSize=65536&topic=test by thead http-nio-8551-exec-6
DEBUG [2019-06-02 17:30:46,786] c.g.p.f.u.AuditEventNotifier: >>> Took 1 millis for the exchange on the route : null
DEBUG [2019-06-02 17:30:46,786] c.g.p.f.u.AuditEventNotifier: >>> Took 1 millis to send to external system : kafka://test?brokers=nodekfa%3A9092%2Cnodekfb%3A9092%2Cnodekfc%3A9092&lingerMs=0&maxInFlightRequest=1&producerBatchSize=65536&topic=test by thead http-nio-8551-exec-6
DEBUG [2019-06-02 17:30:46,788] c.g.p.f.u.AuditEventNotifier: >>> Took 2 millis for the exchange on the route : null
DEBUG [2019-06-02 17:30:46,788] c.g.p.f.u.AuditEventNotifier: >>> Took 2 millis to send to external system : kafka://test?brokers=nodekfa%3A9092%2Cnodekfb%3A9092%2Cnodekfc%3A9092&lingerMs=0&maxInFlightRequest=1&producerBatchSize=65536&topic=test by thead http-nio-8551-exec-6
INFO [2019-06-02 17:30:46,788] c.g.p.f.a.MessageApiController: Time taken to push 5 message is 10ms
It is clearly taking a minimum of 1 ms per message; the default worker pool max size is 20, and if I set the compression codec to snappy, performance gets even worse.
Let me know what I am missing!
I am facing the same issue. Following this email thread https://camel.465427.n5.nabble.com/Kafka-Producer-Performance-tp5785767p5785860.html I used the aggregate EIP https://camel.apache.org/manual/latest/aggregate-eip.html to create batches and got better performance:
from("direct:dp.events")
    .aggregate(constant(true), new ArrayListAggregationStrategy())
        .completionSize(3)
    .to(kafkaUri)
    .to("log:out?groupInterval=1000&groupDelay=500")
    .end();
I get :
INFO Received: 1670 new messages, with total 13949 so far. Last group took: 998 millis which is: 1,673.347 messages per second. average: 1,262.696
This is using one Azure Event Hub with the Kafka protocol and one partition. The weird thing is that when I use another Event Hub with 5 partitions, I get worse performance compared to the 1-partition example...
Multiple partitions (UPDATE)
I was able to get 3K messages per second by increasing workerPoolCoreSize and workerPoolMaxSize, adding partition keys to the messages, and adding aggregation before sending to the Kafka endpoint.
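The numbers above are consistent with a simple fixed-overhead model: if every Camel exchange costs roughly a constant latency, throughput scales with how many messages each exchange carries, which is exactly why aggregation helps. A purely illustrative back-of-envelope sketch (not a Camel API):

```python
def modeled_throughput(per_exchange_ms: float, msgs_per_exchange: int) -> float:
    """Messages per second if each Camel exchange takes per_exchange_ms
    and carries msgs_per_exchange messages.

    This ignores broker-side batching and network variance; it only shows
    how amortizing a fixed per-send cost raises throughput.
    """
    return msgs_per_exchange * 1000.0 / per_exchange_ms

# 1 ms per send, one message per exchange: ~1000 msg/s, the observed ceiling.
modeled_throughput(1.0, 1)
# Aggregating several messages per exchange amortizes the per-send cost.
modeled_throughput(1.0, 3)
```

Under this model, a synchronous 1 ms round trip per exchange caps an unbatched producer at about 1000 msg/s regardless of broker capacity, so the fix has to come from batching or concurrency on the client side.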

Confluent Schema Registry Kubernetess hangs

I am trying to run a Schema Registry server using the Helm charts from GitHub, and it hangs during startup when I deploy to Kubernetes, even though Kafka and ZooKeeper are up. I tried adding DEBUG=true for more info, but nothing prints. It was working before, and I don't know what is happening. After the hang, Kubernetes just restarts the application and the same thing happens again. Kindly asking for help: how can I get more logs or information?
Also, if I run this stack using docker-compose there is no issue, so I guess it is a Kubernetes configuration issue.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
vaultify-trade-dev-v1-s-kafka-0 1/1 Running 0 5m
vaultify-trade-dev-v1-s-kafka-1 1/1 Running 0 4m
vaultify-trade-dev-v1-s-schema-registry-6b4c57f998-kq5vv 0/1 CrashLoopBackOff 5 5m
internal-controller-54cb494qdxg 1/1 Running 0 5m
internal-controller 1/1 Running 0 5m
vaultify-trade-dev-v1-s-zookeeper-0 1/1 Running 0 5m
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d
vaultify-trade-dev-v1-s-kafka ClusterIP 10.109.226.220 <none> 9092/TCP 8m
vaultify-trade-dev-v1-s-kafka-headless ClusterIP None <none> 9092/TCP 8m
vaultify-trade-dev-v1-s-schema-registry ClusterIP 10.98.201.198 <none> 8081/TCP 8m
internal-controller LoadBalancer 10.100.119.227 localhost 80:31323/TCP,443:31073/TCP 8m
internal-backend ClusterIP 10.100.74.127 <none> 80/TCP 8m
vaultify-trade-dev-v1-s-zookeeper ClusterIP 10.109.184.236 <none> 2181/TCP 8m
vaultify-trade-dev-v1-s-zookeeper-headless ClusterIP None <none> 2181/TCP,3888/TCP,2888/TCP 8m
https://github.com/helm/charts/tree/master/incubator/schema-registry
===> Launching ...
===> Launching schema-registry ...
[2019-02-27 09:59:25,341] INFO SchemaRegistryConfig values:
resource.extension.class = []
metric.reporters = []
kafkastore.sasl.kerberos.kinit.cmd = /usr/bin/kinit
response.mediatype.default = application/vnd.schemaregistry.v1+json
resource.extension.classes = []
kafkastore.ssl.trustmanager.algorithm = PKIX
inter.instance.protocol = http
authentication.realm =
ssl.keystore.type = JKS
kafkastore.topic = _schemas
metrics.jmx.prefix = kafka.schema.registry
kafkastore.ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
kafkastore.topic.replication.factor = 3
ssl.truststore.password = [hidden]
kafkastore.timeout.ms = 500
host.name = 10.1.2.67
kafkastore.bootstrap.servers = [PLAINTEXT://vaultify-trade-dev-v1-s-kafka-headless:9092]
schema.registry.zk.namespace = schema_registry
kafkastore.sasl.kerberos.ticket.renew.window.factor = 0.8
kafkastore.sasl.kerberos.service.name =
schema.registry.resource.extension.class = []
ssl.endpoint.identification.algorithm =
compression.enable = true
kafkastore.ssl.truststore.type = JKS
avro.compatibility.level = backward
kafkastore.ssl.protocol = TLS
kafkastore.ssl.provider =
kafkastore.ssl.truststore.location =
response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
kafkastore.ssl.keystore.type = JKS
authentication.skip.paths = []
ssl.truststore.type = JKS
websocket.servlet.initializor.classes = []
kafkastore.ssl.truststore.password = [hidden]
access.control.allow.origin =
ssl.truststore.location =
ssl.keystore.password = [hidden]
port = 8081
access.control.allow.headers =
kafkastore.ssl.keystore.location =
metrics.tag.map = {}
master.eligibility = true
ssl.client.auth = false
kafkastore.ssl.keystore.password = [hidden]
rest.servlet.initializor.classes = []
websocket.path.prefix = /ws
kafkastore.security.protocol = PLAINTEXT
ssl.trustmanager.algorithm =
authentication.method = NONE
request.logger.name = io.confluent.rest-utils.requests
ssl.key.password = [hidden]
kafkastore.zk.session.timeout.ms = 30000
kafkastore.sasl.mechanism = GSSAPI
kafkastore.sasl.kerberos.ticket.renew.jitter = 0.05
kafkastore.ssl.key.password = [hidden]
zookeeper.set.acl = false
schema.registry.inter.instance.protocol =
authentication.roles = [*]
metrics.num.samples = 2
ssl.protocol = TLS
schema.registry.group.id = schema-registry
kafkastore.ssl.keymanager.algorithm = SunX509
kafkastore.connection.url =
debug = false
listeners = []
kafkastore.group.id = vaultify-trade-dev-v1-s
ssl.provider =
ssl.enabled.protocols = []
shutdown.graceful.ms = 1000
ssl.keystore.location =
ssl.cipher.suites = []
kafkastore.ssl.endpoint.identification.algorithm =
kafkastore.ssl.cipher.suites =
access.control.allow.methods =
kafkastore.sasl.kerberos.min.time.before.relogin = 60000
ssl.keymanager.algorithm =
metrics.sample.window.ms = 30000
kafkastore.init.timeout.ms = 60000
(io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig)
[2019-02-27 09:59:25,379] INFO Logging initialized #381ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
[2019-02-27 09:59:25,614] WARN DEPRECATION warning: `listeners` configuration is not configured. Falling back to the deprecated `port` configuration. (io.confluent.rest.Application)
[2019-02-27 09:59:25,734] WARN DEPRECATION warning: `listeners` configuration is not configured. Falling back to the deprecated `port` configuration. (io.confluent.rest.Application)
[2019-02-27 09:59:25,734] INFO Initializing KafkaStore with broker endpoints: PLAINTEXT://vaultify-trade-dev-v1-s-kafka-headless:9092 (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2019-02-27 09:59:25,750] INFO AdminClientConfig values:
bootstrap.servers = [PLAINTEXT://vaultify-trade-dev-v1-s-kafka-headless:9092]
client.dns.lookup = default
client.id =
connections.max.idle.ms = 300000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 120000
retries = 5
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
(org.apache.kafka.clients.admin.AdminClientConfig)
[2019-02-27 09:59:25,813] WARN The configuration 'group.id' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
[2019-02-27 09:59:25,817] INFO Kafka version : 2.1.1-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2019-02-27 09:59:25,817] INFO Kafka commitId : 9aa84c2aaa91e392 (org.apache.kafka.common.utils.AppInfoParser)
[2019-02-27 09:59:25,973] INFO Validating schemas topic _schemas (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2019-02-27 09:59:25,981] WARN The replication factor of the schema topic _schemas is less than the desired one of 3. If this is a production environment, it's crucial to add more brokers and increase the replication factor of the topic. (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2019-02-27 09:59:26,010] INFO ProducerConfig values:
acks = -1
batch.size = 16384
bootstrap.servers = [PLAINTEXT://vaultify-trade-dev-v1-s-kafka-headless:9092]
buffer.memory = 33554432
client.dns.lookup = default
client.id =
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
(org.apache.kafka.clients.producer.ProducerConfig)
[2019-02-27 09:59:26,046] WARN The configuration 'group.id' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-02-27 09:59:26,046] INFO Kafka version : 2.1.1-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2019-02-27 09:59:26,046] INFO Kafka commitId : 9aa84c2aaa91e392 (org.apache.kafka.common.utils.AppInfoParser)
[2019-02-27 09:59:26,062] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2019-02-27 09:59:26,098] INFO Kafka store reader thread starting consumer (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2019-02-27 09:59:26,107] INFO ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [PLAINTEXT://vaultify-trade-dev-v1-s-kafka-headless:9092]
check.crcs = true
client.dns.lookup = default
client.id = KafkaStore-reader-_schemas
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = vaultify-trade-dev-v1-s
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
(org.apache.kafka.clients.consumer.ConsumerConfig)
[2019-02-27 09:59:26,154] INFO Kafka version : 2.1.1-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2019-02-27 09:59:26,154] INFO Kafka commitId : 9aa84c2aaa91e392 (org.apache.kafka.common.utils.AppInfoParser)
[2019-02-27 09:59:26,164] INFO Cluster ID: yST0jB3rQhmxVsWCEKf7mg (org.apache.kafka.clients.Metadata)
[2019-02-27 09:59:26,168] INFO Initialized last consumed offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2019-02-27 09:59:26,170] INFO [kafka-store-reader-thread-_schemas]: Starting (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2019-02-27 09:59:26,200] INFO [Consumer clientId=KafkaStore-reader-_schemas, groupId=vaultify-trade-dev-v1-s] Resetting offset for partition _schemas-0 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-02-27 09:59:26,228] INFO Cluster ID: yST0jB3rQhmxVsWCEKf7mg (org.apache.kafka.clients.Metadata)
[2019-02-27 09:59:26,304] INFO Wait to catch up until the offset of the last message at 17 (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2019-02-27 09:59:26,359] INFO Joining schema registry with Kafka-based coordination (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
[2019-02-27 09:59:26,366] INFO Kafka version : 2.1.1-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2019-02-27 09:59:26,366] INFO Kafka commitId : 9aa84c2aaa91e392 (org.apache.kafka.common.utils.AppInfoParser)
[2019-02-27 09:59:26,377] INFO Cluster ID: yST0jB3rQhmxVsWCEKf7mg (org.apache.kafka.clients.Metadata)
This is my Kubernetes deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vaultify-trade-dev-v1-s-schema-registry
  labels:
    app: schema-registry
    chart: schema-registry-1.1.2
    release: vaultify-trade-dev-v1-s
    heritage: Tiller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: schema-registry
        release: vaultify-trade-dev-v1-s
    spec:
      containers:
        - name: schema-registry
          image: "confluentinc/cp-schema-registry:5.1.2"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8081
            - containerPort: 5555
              name: jmx
          livenessProbe:
            httpGet:
              path: /
              port: 8081
            initialDelaySeconds: 10
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /
              port: 8081
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 3
          env:
            - name: SCHEMA_REGISTRY_HOST_NAME
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
              value: PLAINTEXT://vaultify-trade-dev-v1-s-kafka-headless:9092
            - name: SCHEMA_REGISTRY_KAFKASTORE_GROUP_ID
              value: vaultify-trade-dev-v1-s
            - name: SCHEMA_REGISTRY_MASTER_ELIGIBILITY
              value: "true"
            - name: JMX_PORT
              value: "5555"
          resources:
            {}
          volumeMounts:
      volumes:
More..
If I tell Kubernetes not to restart it, I get this error:
[2019-02-27 10:29:07,601] INFO Wait to catch up until the offset of the last message at 8 (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2019-02-27 10:29:07,675] INFO Joining schema registry with Kafka-based coordination (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
[2019-02-27 10:29:07,681] INFO Kafka version : 2.0.1-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2019-02-27 10:29:07,681] INFO Kafka commitId : 815feb8a888d39d9 (org.apache.kafka.common.utils.AppInfoParser)
[2019-02-27 10:29:07,696] INFO Cluster ID: HoNdEGzXTCqHb_Ba6_toaA (org.apache.kafka.clients.Metadata)
.
[2019-02-27 10:30:07,681] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryTimeoutException: Timed out waiting for join group to complete
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:220)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:63)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:41)
at io.confluent.rest.Application.createServer(Application.java:169)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:43)
Caused by: io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryTimeoutException: Timed out waiting for join group to complete
at io.confluent.kafka.schemaregistry.masterelector.kafka.KafkaGroupMasterElector.init(KafkaGroupMasterElector.java:202)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:215)
... 4 more
[2019-02-27 10:30:07,682] INFO Shutting down schema registry (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
[2019-02-27 10:30:07,685] INFO [kafka-store-reader-thread-_schemas]: Shutting down (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2019-02-27 10:30:07,687] INFO [kafka-store-reader-thread-_schemas]: Stopped (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2019-02-27 10:30:07,688] INFO [kafka-store-reader-thread-_schemas]: Shutdown completed (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2019-02-27 10:30:07,692] INFO KafkaStoreReaderThread shutdown complete. (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2019-02-27 10:30:07,692] INFO [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-02-27 10:30:07,710] ERROR Unexpected exception in schema registry group processing thread (io.confluent.kafka.schemaregistry.masterelector.kafka.KafkaGroupMasterElector)
org.apache.kafka.common.errors.WakeupException
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeTriggerWakeup(ConsumerNetworkClient.java:498)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:284)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:242)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:218)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:230)
at io.confluent.kafka.schemaregistry.masterelector.kafka.SchemaRegistryCoordinator.ensureCoordinatorReady(SchemaRegistryCoordinator.java:207)
at io.confluent.kafka.schemaregistry.masterelector.kafka.SchemaRegistryCoordinator.poll(SchemaRegistryCoordinator.java:97)
at io.confluent.kafka.schemaregistry.masterelector.kafka.KafkaGroupMasterElector$1.run(KafkaGroupMasterElector.java:192)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
As tolga_kavukcu mentioned in the comments:
The default replication factor for topics is 3 in the Kafka Helm chart.
On a 1-node cluster, schema-registry cannot get its topic created on the Kafka side, and this error happens.
Just change the default replication factor if this is the case.
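The failure mode reduces to a simple invariant: Kafka refuses to create a topic whose replication factor exceeds the number of live brokers. A minimal sketch of that check (schema-registry's default kafkastore.topic.replication.factor is 3, which the startup log above also warns about):

```python
def can_create_topic(live_brokers: int, replication_factor: int) -> bool:
    """Kafka rejects topic creation when the requested replication factor
    exceeds the number of live brokers. That is why schema-registry's
    default replication factor of 3 for _schemas cannot be satisfied on a
    single-broker dev cluster, and startup times out waiting for the topic.
    """
    return 1 <= replication_factor <= live_brokers

can_create_topic(1, 3)  # False on a one-node dev cluster
can_create_topic(3, 3)  # True once three brokers are up
```

So either scale the Kafka StatefulSet to at least 3 brokers or lower the replication factor (e.g. via SCHEMA_REGISTRY_KAFKASTORE_TOPIC_REPLICATION_FACTOR) to match the broker count.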