Strimzi Kafka: Kafka Connect not getting installed

I have a Strimzi Kafka setup on GKE, and it is working fine.
I have a requirement to set up MirrorMaker2 to push data from a source Kafka topic to a target Kafka topic.
From what I understand, MirrorMaker2 requires Kafka Connect.
I'm trying to install KafkaConnect on the GKE cluster in the namespace kafka-connect; the YAML used is the following:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  # annotations:
  #   strimzi.io/use-connector-resources: "true"
spec:
  version: 3.0.0
  replicas: 1
  bootstrapServers: versa-kafka-gke-kafka-bootstrap:9093
  tls:
    trustedCertificates:
      - secretName: versa-kafka-gke-cluster-ca-cert
        certificate: ca.crt
  config:
    group.id: connect-cluster
    offset.storage.topic: connect-cluster-offsets
    config.storage.topic: connect-cluster-configs
    status.storage.topic: connect-cluster-status
    # -1 means it will use the default replication factor configured in the broker
    config.storage.replication.factor: -1
    offset.storage.replication.factor: -1
    status.storage.replication.factor: -1
Command run to install:
kubectl apply -f kafka-connect.yaml -n kafka-connect
Note: Strimzi Kafka is installed in the namespace 'kafka', version 3.0.0.
I was expecting the KafkaConnect resource to be installed and the pods to come up as well; however, the KafkaConnect resource is created but not showing as Ready, and the pods are not getting created.
(base) Karans-MacBook-Pro:kafkaConnect karanalang$ kc get kafkaconnect my-connect-cluster -n kafka-connect
NAME                 DESIRED REPLICAS   READY
my-connect-cluster   1
On describing my-connect-cluster, here is the output (not showing any errors):
(base) Karans-MacBook-Pro:kafkaConnect karanalang$ kc describe kafkaconnect my-connect-cluster -n kafka-connect
Name:         my-connect-cluster
Namespace:    kafka-connect
Labels:       <none>
Annotations:  <none>
API Version:  kafka.strimzi.io/v1beta2
Kind:         KafkaConnect
Metadata:
  Creation Timestamp:  2023-02-16T06:38:41Z
  Generation:          1
  Managed Fields:
    API Version:  kafka.strimzi.io/v1beta2
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:bootstrapServers:
        f:config:
          .:
          f:config.storage.replication.factor:
          f:config.storage.topic:
          f:group.id:
          f:offset.storage.replication.factor:
          f:offset.storage.topic:
          f:status.storage.replication.factor:
          f:status.storage.topic:
        f:replicas:
        f:tls:
          .:
          f:trustedCertificates:
        f:version:
    Manager:         kubectl-client-side-apply
    Operation:       Update
    Time:            2023-02-16T06:38:41Z
  Resource Version:  266457732
  UID:               bdd697f8-e38a-466b-8ddf-81ed1ae54efe
Spec:
  Bootstrap Servers:  versa-kafka-gke-kafka-bootstrap:9093
  Config:
    config.storage.replication.factor:  -1
    config.storage.topic:               connect-cluster-configs
    group.id:                           connect-cluster
    offset.storage.replication.factor:  -1
    offset.storage.topic:               connect-cluster-offsets
    status.storage.replication.factor:  -1
    status.storage.topic:               connect-cluster-status
  Replicas:  1
  Tls:
    Trusted Certificates:
      Certificate:  ca.crt
      Secret Name:  versa-kafka-gke-cluster-ca-cert
  Version:  3.0.0
Events:  <none>
How do I debug/fix this?
Thanks in advance!
Update:
Based on the note from Jakub, I re-installed KafkaConnect in the same namespace as Strimzi Kafka (i.e. namespace kafka), and the pods are coming up now. However, the logs show an error, as shown below:
(base) Karans-MacBook-Pro:kafkaConnect karanalang$ kc logs -f pod/my-connect-cluster-connect-67f76f5d89-nv9sj -n kafka
Preparing truststore
Certificate was added to keystore
Preparing truststore is complete
Starting Kafka Connect with configuration:
# Bootstrap servers
bootstrap.servers=versa-kafka-gke-w-kafka-bootstrap:9093
# REST Listeners
rest.port=8083
rest.advertised.host.name=10.6.0.199
rest.advertised.port=8083
# Plugins
plugin.path=/opt/kafka/plugins
# Provided configuration
offset.storage.topic=connect-cluster-offsets
value.converter=org.apache.kafka.connect.json.JsonConverter
config.storage.topic=connect-cluster-configs
key.converter=org.apache.kafka.connect.json.JsonConverter
group.id=connect-cluster
status.storage.topic=connect-cluster-status
config.storage.replication.factor=-1
offset.storage.replication.factor=-1
status.storage.replication.factor=-1
security.protocol=SSL
producer.security.protocol=SSL
consumer.security.protocol=SSL
admin.security.protocol=SSL
# TLS / SSL
ssl.truststore.location=/tmp/kafka/cluster.truststore.p12
ssl.truststore.password=[hidden]
ssl.truststore.type=PKCS12
producer.ssl.truststore.location=/tmp/kafka/cluster.truststore.p12
producer.ssl.truststore.password=[hidden]
consumer.ssl.truststore.location=/tmp/kafka/cluster.truststore.p12
consumer.ssl.truststore.password=[hidden]
admin.ssl.truststore.location=/tmp/kafka/cluster.truststore.p12
admin.ssl.truststore.password=[hidden]
# Additional configuration
client.rack=
2023-02-17 06:43:08,952 INFO WorkerInfo values:
jvm.args = -Xms128M, -XX:+UseG1GC, -XX:MaxGCPauseMillis=20, -XX:InitiatingHeapOccupancyPercent=35, -XX:+ExplicitGCInvokesConcurrent, -XX:MaxInlineLevel=15, -Djava.awt.headless=true, -Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.ssl=false, -Dkafka.logs.dir=/opt/kafka, -Dlog4j.configuration=file:/opt/kafka/custom-config/log4j.properties
jvm.spec = Red Hat, Inc., OpenJDK 64-Bit Server VM, 11.0.12, 11.0.12+7-LTS
jvm.classpath = /opt/kafka/bin/../libs/accessors-smart-2.4.7.jar:/opt/kafka/bin/../libs/activation-1.1.1.jar:/opt/kafka/bin/../libs/annotations-13.0.jar:/opt/kafka/bin/../libs/aopalliance-repackaged-2.6.1.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/kafka/bin/../libs/automaton-1.11-8.jar:/opt/kafka/bin/../libs/checker-qual-3.5.0.jar:/opt/kafka/bin/../libs/commons-cli-1.4.jar:/opt/kafka/bin/../libs/commons-lang-2.6.jar:/opt/kafka/bin/../libs/commons-lang3-3.8.1.jar:/opt/kafka/bin/../libs/connect-api-3.0.0.jar:/opt/kafka/bin/../libs/connect-basic-auth-extension-3.0.0.jar:/opt/kafka/bin/../libs/connect-file-3.0.0.jar:/opt/kafka/bin/../libs/connect-json-3.0.0.jar:/opt/kafka/bin/../libs/connect-mirror-3.0.0.jar:/opt/kafka/bin/../libs/connect-mirror-client-3.0.0.jar:/opt/kafka/bin/../libs/connect-runtime-3.0.0.jar:/opt/kafka/bin/../libs/connect-transforms-
......
2023-02-17 06:43:24,944 INFO Added alias 'InsertHeader' to plugin 'org.apache.kafka.connect.transforms.InsertHeader' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added alias 'RegexRouter' to plugin 'org.apache.kafka.connect.transforms.RegexRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added alias 'TimestampRouter' to plugin 'org.apache.kafka.connect.transforms.TimestampRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added alias 'ValueToKey' to plugin 'org.apache.kafka.connect.transforms.ValueToKey' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added alias 'HasHeaderKey' to plugin 'org.apache.kafka.connect.transforms.predicates.HasHeaderKey' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added alias 'RecordIsTombstone' to plugin 'org.apache.kafka.connect.transforms.predicates.RecordIsTombstone' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added alias 'TopicNameMatches' to plugin 'org.apache.kafka.connect.transforms.predicates.TopicNameMatches' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added alias 'BasicAuthSecurityRestExtension' to plugin 'org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added aliases 'AllConnectorClientConfigOverridePolicy' and 'All' to plugin 'org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added aliases 'NoneConnectorClientConfigOverridePolicy' and 'None' to plugin 'org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added aliases 'PrincipalConnectorClientConfigOverridePolicy' and 'Principal' to plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:25,347 INFO DistributedConfig values:
access.control.allow.methods =
access.control.allow.origin =
admin.listeners = null
bootstrap.servers = [versa-kafka-gke-w-kafka-bootstrap:9093]
client.dns.lookup = use_all_dns_ips
client.id =
config.providers = []
config.storage.replication.factor = -1
config.storage.topic = connect-cluster-configs
connect.protocol = sessioned
connections.max.idle.ms = 540000
connector.client.config.override.policy = All
group.id = connect-cluster
header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter
heartbeat.interval.ms = 3000
inter.worker.key.generation.algorithm = HmacSHA256
inter.worker.key.size = null
inter.worker.key.ttl.ms = 3600000
inter.worker.signature.algorithm = HmacSHA256
inter.worker.verification.algorithms = [HmacSHA256]
key.converter = class org.apache.kafka.connect.json.JsonConverter
listeners = [http://:8083]
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
offset.flush.interval.ms = 60000
offset.flush.timeout.ms = 5000
offset.storage.partitions = 25
offset.storage.replication.factor = -1
offset.storage.topic = connect-cluster-offsets
plugin.path = [/opt/kafka/plugins]
rebalance.timeout.ms = 60000
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 40000
response.http.headers.config =
rest.advertised.host.name = 10.6.0.199
rest.advertised.listener = null
rest.advertised.port = 8083
rest.extension.classes = []
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
scheduled.rebalance.max.delay.ms = 300000
security.protocol = SSL
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = /tmp/kafka/cluster.truststore.p12
ssl.truststore.password = [hidden]
ssl.truststore.type = PKCS12
status.storage.partitions = 5
status.storage.replication.factor = -1
status.storage.topic = connect-cluster-status
task.shutdown.graceful.timeout.ms = 5000
topic.creation.enable = true
topic.tracking.allow.reset = true
topic.tracking.enable = true
value.converter = class org.apache.kafka.connect.json.JsonConverter
worker.sync.timeout.ms = 3000
worker.unsync.backoff.ms = 300000
(org.apache.kafka.connect.runtime.distributed.DistributedConfig) [main]
2023-02-17 06:43:25,355 INFO Creating Kafka admin client (org.apache.kafka.connect.util.ConnectUtils) [main]
2023-02-17 06:43:25,359 INFO AdminClientConfig values:
bootstrap.servers = [versa-kafka-gke-w-kafka-bootstrap:9093]
client.dns.lookup = use_all_dns_ips
client.id =
connections.max.idle.ms = 300000
default.api.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = SSL
security.providers = null
send.buffer.bytes = 131072
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = /tmp/kafka/cluster.truststore.p12
ssl.truststore.password = [hidden]
ssl.truststore.type = PKCS12
(org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,848 WARN The configuration 'producer.ssl.truststore.password' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,849 WARN The configuration 'group.id' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,849 WARN The configuration 'rest.advertised.port' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,849 WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,850 WARN The configuration 'admin.security.protocol' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,850 WARN The configuration 'consumer.ssl.truststore.location' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,850 WARN The configuration 'producer.ssl.truststore.location' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,850 WARN The configuration 'status.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,850 WARN The configuration 'offset.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,850 WARN The configuration 'consumer.security.protocol' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,851 WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,851 WARN The configuration 'key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,851 WARN The configuration 'admin.ssl.truststore.password' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,851 WARN The configuration 'consumer.ssl.truststore.password' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,851 WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,851 WARN The configuration 'producer.security.protocol' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,852 WARN The configuration 'rest.advertised.host.name' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,852 WARN The configuration 'status.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,852 WARN The configuration 'client.rack' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,852 WARN The configuration 'rest.port' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,853 WARN The configuration 'config.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,853 WARN The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,855 WARN The configuration 'admin.ssl.truststore.location' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,856 INFO Kafka version: 3.0.0 (org.apache.kafka.common.utils.AppInfoParser) [main]
2023-02-17 06:43:27,857 INFO Kafka commitId: 8cb0a5e9d3441962 (org.apache.kafka.common.utils.AppInfoParser) [main]
2023-02-17 06:43:27,857 INFO Kafka startTimeMs: 1676616207856 (org.apache.kafka.common.utils.AppInfoParser) [main]
2023-02-17 06:43:30,857 INFO [AdminClient clientId=adminclient-1] Failed re-authentication with versa-kafka-gke-w-kafka-bootstrap/10.6.131.57 (Failed to process post-handshake messages) (org.apache.kafka.common.network.Selector) [kafka-admin-client-thread | adminclient-1]
2023-02-17 06:43:30,867 ERROR [AdminClient clientId=adminclient-1] Connection to node -1 (versa-kafka-gke-w-kafka-bootstrap/10.6.131.57:9093) failed authentication due to: Failed to process post-handshake messages (org.apache.kafka.clients.NetworkClient) [kafka-admin-client-thread | adminclient-1]
2023-02-17 06:43:30,870 WARN [AdminClient clientId=adminclient-1] Metadata update failed due to authentication error (org.apache.kafka.clients.admin.internals.AdminMetadataManager) [kafka-admin-client-thread | adminclient-1]
org.apache.kafka.common.errors.SslAuthenticationException: Failed to process post-handshake messages
Caused by: javax.net.ssl.SSLException: Tag mismatch!
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:133)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:349)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:292)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:287)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:123)
at java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:681)
at java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:636)
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:454)
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:433)
at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:637)
at org.apache.kafka.common.network.SslTransportLayer.read(SslTransportLayer.java:567)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:95)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:551)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1389)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1320)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: javax.crypto.AEADBadTagException: Tag mismatch!
at java.base/com.sun.crypto.provider.GaloisCounterMode.decryptFinal(GaloisCounterMode.java:623)
at java.base/com.sun.crypto.provider.CipherCore.finalNoPadding(CipherCore.java:1116)
at java.base/com.sun.crypto.provider.CipherCore.fillOutputBuffer(CipherCore.java:1053)
at java.base/com.sun.crypto.provider.CipherCore.doFinal(CipherCore.java:941)
at java.base/com.sun.crypto.provider.AESCipher.engineDoFinal(AESCipher.java:491)
at java.base/javax.crypto.CipherSpi.bufferCrypt(CipherSpi.java:779)
at java.base/javax.crypto.CipherSpi.engineDoFinal(CipherSpi.java:730)
at java.base/javax.crypto.Cipher.doFinal(Cipher.java:2497)
at java.base/sun.security.ssl.SSLCipher$T13GcmReadCipherGenerator$GcmReadCipher.decrypt(SSLCipher.java:1903)
at java.base/sun.security.ssl.SSLEngineInputRecord.decodeInputRecord(SSLEngineInputRecord.java:240)
at java.base/sun.security.ssl.SSLEngineInputRecord.decode(SSLEngineInputRecord.java:197)
at java.base/sun.security.ssl.SSLEngineInputRecord.decode(SSLEngineInputRecord.java:160)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:111)
... 16 more
2023-02-17 06:43:30,946 INFO App info kafka.admin.client for adminclient-1 unregistered (org.apache.kafka.common.utils.AppInfoParser) [kafka-admin-client-thread | adminclient-1]
2023-02-17 06:43:30,947 INFO [AdminClient clientId=adminclient-1] Metadata update failed (org.apache.kafka.clients.admin.internals.AdminMetadataManager) [kafka-admin-client-thread | adminclient-1]
org.apache.kafka.common.errors.TimeoutException: The AdminClient thread has exited. Call: fetchMetadata
2023-02-17 06:43:30,948 INFO [AdminClient clientId=adminclient-1] Metadata update failed (org.apache.kafka.clients.admin.internals.AdminMetadataManager) [kafka-admin-client-thread | adminclient-1]
org.apache.kafka.common.errors.TimeoutException: The AdminClient thread has exited. Call: fetchMetadata
2023-02-17 06:43:30,949 INFO [AdminClient clientId=adminclient-1] Timed out 2 remaining operation(s) during close. (org.apache.kafka.clients.admin.KafkaAdminClient) [kafka-admin-client-thread | adminclient-1]
2023-02-17 06:43:30,959 INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics) [kafka-admin-client-thread | adminclient-1]
2023-02-17 06:43:30,960 INFO Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics) [kafka-admin-client-thread | adminclient-1]
2023-02-17 06:43:30,961 INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics) [kafka-admin-client-thread | adminclient-1]
2023-02-17 06:43:30,961 ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed) [main]
org.apache.kafka.connect.errors.ConnectException: Failed to connect to and describe Kafka cluster. Check worker's broker connection and security properties.
at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:70)
at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:51)
at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:97)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:80)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.SslAuthenticationException: Failed to process post-handshake messages
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:64)
... 3 more
Caused by: org.apache.kafka.common.errors.SslAuthenticationException: Failed to process post-handshake messages
Caused by: javax.net.ssl.SSLException: Tag mismatch!
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:133)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:349)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:292)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:287)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:123)
at java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:681)
at java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:636)
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:454)
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:433)
at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:637)
at org.apache.kafka.common.network.SslTransportLayer.read(SslTransportLayer.java:567)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:95)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:551)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1389)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1320)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: javax.crypto.AEADBadTagException: Tag mismatch!
at java.base/com.sun.crypto.provider.GaloisCounterMode.decryptFinal(GaloisCounterMode.java:623)
at java.base/com.sun.crypto.provider.CipherCore.finalNoPadding(CipherCore.java:1116)
at java.base/com.sun.crypto.provider.CipherCore.fillOutputBuffer(CipherCore.java:1053)
at java.base/com.sun.crypto.provider.CipherCore.doFinal(CipherCore.java:941)
at java.base/com.sun.crypto.provider.AESCipher.engineDoFinal(AESCipher.java:491)
at java.base/javax.crypto.CipherSpi.bufferCrypt(CipherSpi.java:779)
at java.base/javax.crypto.CipherSpi.engineDoFinal(CipherSpi.java:730)
at java.base/javax.crypto.Cipher.doFinal(Cipher.java:2497)
at java.base/sun.security.ssl.SSLCipher$T13GcmReadCipherGenerator$GcmReadCipher.decrypt(SSLCipher.java:1903)
at java.base/sun.security.ssl.SSLEngineInputRecord.decodeInputRecord(SSLEngineInputRecord.java:240)
at java.base/sun.security.ssl.SSLEngineInputRecord.decode(SSLEngineInputRecord.java:197)
at java.base/sun.security.ssl.SSLEngineInputRecord.decode(SSLEngineInputRecord.java:160)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:111)
... 16 more
Please note:
The Strimzi Kafka version is 3.0.0, hence I've changed the version of KafkaConnect to 3.0.0 as well.
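One hedged observation on the logs above: the worker now connects to versa-kafka-gke-w-kafka-bootstrap, while the truststore is still built from the versa-kafka-gke-cluster-ca-cert secret. Strimzi names each cluster's CA secret <cluster-name>-cluster-ca-cert, so if versa-kafka-gke-w is a different Kafka cluster, the spec would presumably need that cluster's CA instead. A sketch, assuming the target cluster is actually named versa-kafka-gke-w:
spec:
  bootstrapServers: versa-kafka-gke-w-kafka-bootstrap:9093
  tls:
    trustedCertificates:
      # assumption: the CA secret of the *same* cluster as bootstrapServers
      - secretName: versa-kafka-gke-w-cluster-ca-cert
        certificate: ca.crt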

You say you want to use MirrorMaker2? Then you should be using the KafkaMirrorMaker2 kind, not KafkaConnect.
https://strimzi.io/blog/2020/03/30/introducing-mirrormaker2/
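For reference, a minimal KafkaMirrorMaker2 sketch along the lines of the Strimzi examples; the source alias and its bootstrap address are placeholders, and the target reuses the values from the question:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker-2
spec:
  version: 3.0.0
  replicas: 1
  connectCluster: "target"   # MM2 runs its Connect workers against this cluster
  clusters:
    - alias: "source"
      bootstrapServers: my-source-cluster-kafka-bootstrap:9092   # placeholder
    - alias: "target"
      bootstrapServers: versa-kafka-gke-kafka-bootstrap:9093
      tls:
        trustedCertificates:
          - secretName: versa-kafka-gke-cluster-ca-cert
            certificate: ca.crt
      config:
        config.storage.replication.factor: -1
        offset.storage.replication.factor: -1
        status.storage.replication.factor: -1
  mirrors:
    - sourceCluster: "source"
      targetCluster: "target"
      sourceConnector:
        config:
          replication.factor: -1
      topicsPattern: ".*"
      groupsPattern: ".*"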
And, as commented, ensure the operator is watching the namespace where you install any resources. Just because you can get/describe the resource doesn't mean the operator knows about it or is processing it (look at its logs).
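A hedged way to verify both points, assuming the default deployment name strimzi-cluster-operator and that the operator runs in the kafka namespace:
# which namespace(s) is the operator configured to watch?
kubectl describe deployment strimzi-cluster-operator -n kafka | grep STRIMZI_NAMESPACE
# does the operator log anything about the resource?
kubectl logs deployment/strimzi-cluster-operator -n kafka | grep -i my-connect-cluster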

Related

java.nio.file.InvalidPathException: "Illegal char <:> at index 2" error with Kafka Connector in Mule 4 MUnits

I am facing a very tough situation running MUnit on my project. I am using Mule 4.3.0 and Anypoint Studio 7.4.
Apparently, the error occurs while loading a certs/cacerts file as a property used in an Apache Kafka Connector TLS configuration.
It's working absolutely fine when running the normal Mule code (having the TLS context).
But it fails to work when running MUnit.
I have tried many ways to get this resolved with my team, but couldn't fix it. Although similar errors have been reported by other developers occasionally, I couldn't conclude whether this is a bug in the Mule runtime or specifically a Kafka Connector problem.
Finally, once I removed the TLS context in the Kafka config, MUnit worked fine. But without TLS enabled, my project is basically useless.
I need your help in resolving this and making my test work, but with the TLS configuration present in its rightful place. Also, please take a look at this question in the forums: Mule Kafka Problem
Given below are two snapshots of the configuration used and the error reported in the MUnit console:
Kafka Consumer TLS Configuration Snapshot:
Error Snapshot:
Complete error report given below:
INFO 2020-10-21 18:44:12,077 [main] org.mule.munit.remote.container.SuiteRunDispatcher: Suite errortopic-db-test-suite.xml will not be deployed: Suite was filtered from running
INFO 2020-10-21 18:44:12,078 [munit.01] org.mule.munit.runner.remote.api.server.RunnerServer: Waiting for client connection
INFO 2020-10-21 18:44:12,086 [munit.01] org.mule.munit.runner.remote.api.server.RunnerServer: Client connection received from 127.0.0.1 - true
WARN 2020-10-21 18:44:19,766 [munit.01] org.mule.runtime.core.internal.security.tls.TlsProperties: File tls-default.conf not found, using default configuration.
INFO 2020-10-21 18:44:19,767 [munit.01] org.mule.runtime.api.tls.AbstractTlsContextFactoryBuilderFactory: Loaded TlsContextFactoryBuilderFactory implementation 'org.mule.runtime.module.tls.api.DefaultTlsContextFactoryBuilderFactory' from classloader 'java.net.URLClassLoader@7fd8c559'
INFO 2020-10-21 18:44:21,684 [munit.01] org.mule.runtime.core.privileged.lifecycle.AbstractLifecycleManager: Initialising Bean: org.mule.runtime.module.extension.internal.runtime.config.ConfigurationProviderToolingAdapter-HTTP_Request_configuration_oauth
INFO 2020-10-21 18:44:21,736 [munit.01] org.mule.runtime.core.privileged.lifecycle.AbstractLifecycleManager: Initialising Bean: org.mule.runtime.module.extension.internal.runtime.config.ConfigurationProviderToolingAdapter-HTTP_Request_configuration-By
WARN 2020-10-21 18:44:21,800 [munit.01] org.mule.runtime.core.internal.security.tls.TlsProperties: File tls-default.conf not found, using default configuration.
INFO 2020-10-21 18:44:21,802 [munit.01] org.mule.runtime.core.privileged.lifecycle.AbstractLifecycleManager: Initialising Bean: org.mule.runtime.module.extension.internal.runtime.config.ConfigurationProviderToolingAdapter-Apache_Kafka_Consumer_configuration
WARN 2020-10-21 18:44:21,810 [munit.01] org.mule.runtime.core.internal.security.tls.TlsProperties: File tls-default.conf not found, using default configuration.
INFO 2020-10-21 18:44:21,872 [munit.01] org.mule.runtime.core.privileged.lifecycle.AbstractLifecycleManager: Initialising Bean: org.mule.runtime.module.extension.internal.runtime.config.ConfigurationProviderToolingAdapter-Apache_Kafka_Producer_configuration
WARN 2020-10-21 18:44:21,879 [munit.01] org.mule.runtime.core.internal.security.tls.TlsProperties: File tls-default.conf not found, using default configuration.
INFO 2020-10-21 18:44:21,917 [munit.01] org.apache.kafka.clients.producer.ProducerConfig: ProducerConfig values:
acks = -1
batch.size = 16384
bootstrap.servers = [hiding intentional]
buffer.memory = 1024000
client.dns.lookup = default
client.id = producer-1
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class com.mulesoft.connectors.kafka.internal.model.serializer.InputStreamSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metadata.max.idle.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 1
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = [hidden]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = PLAIN
security.protocol = SASL_SSL
security.providers = null
send.buffer.bytes = 131072
ssl.cipher.suites = [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_RSA_WITH_AES_256_CBC_SHA256, TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256, TLS_DHE_DSS_WITH_AES_256_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384, TLS_DHE_RSA_WITH_AES_256_GCM_SHA384, TLS_DHE_DSS_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256, TLS_DHE_RSA_WITH_AES_128_GCM_SHA256, TLS_DHE_DSS_WITH_AES_128_GCM_SHA256, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = SunJSSE
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = \C:/Users/xxx/AppData/Local/Temp/munit-temp-dir/munitworkingdir5345007588634892776/container/apps/app/cacerts
ssl.truststore.password = [hidden]
ssl.truststore.type = jks
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class com.mulesoft.connectors.kafka.internal.model.serializer.InputStreamSerializer
INFO 2020-10-21 18:44:21,985 [munit.01] org.apache.kafka.common.security.authenticator.AbstractLogin: Successfully logged in.
INFO 2020-10-21 18:44:21,996 [munit.01] org.apache.kafka.clients.producer.KafkaProducer: [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 0 ms.
org.mule.runtime.api.exception.MuleRuntimeException: org.mule.runtime.api.lifecycle.InitialisationException: The consumer has an invalid configuration
Caused by: org.mule.runtime.api.lifecycle.InitialisationException: The consumer has an invalid configuration
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka producer
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:434)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:298)
at com.mulesoft.connectors.kafka.internal.connection.provider.ProducerConnectionProvider.initialise(ProducerConnectionProvider.java:437)
at com.mulesoft.connectors.kafka.internal.connection.provider.KafkaConnectionProvider.initialise(KafkaConnectionProvider.java:129)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:56)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:117)
at org.mule.runtime.core.internal.connection.AbstractConnectionProviderWrapper.initialise(AbstractConnectionProviderWrapper.java:113)
at org.mule.runtime.module.extension.internal.runtime.config.ClassLoaderConnectionProviderWrapper.initialise(ClassLoaderConnectionProviderWrapper.java:96)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:56)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:117)
at org.mule.runtime.core.internal.connection.AbstractConnectionProviderWrapper.initialise(AbstractConnectionProviderWrapper.java:113)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:56)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:117)
at org.mule.runtime.core.internal.connection.AbstractConnectionProviderWrapper.initialise(AbstractConnectionProviderWrapper.java:113)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:56)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:117)
at org.mule.runtime.module.extension.internal.runtime.config.LifecycleAwareConfigurationInstance.doInitialise(LifecycleAwareConfigurationInstance.java:297)
at org.mule.runtime.module.extension.internal.runtime.config.LifecycleAwareConfigurationInstance.initialise(LifecycleAwareConfigurationInstance.java:145)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:56)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:117)
at org.mule.runtime.module.extension.internal.runtime.config.LifecycleAwareConfigurationProvider.lambda$null$0(LifecycleAwareConfigurationProvider.java:83)
at org.mule.runtime.core.privileged.lifecycle.AbstractLifecycleManager.invokePhase(AbstractLifecycleManager.java:132)
at org.mule.runtime.core.internal.lifecycle.DefaultLifecycleManager.fireInitialisePhase(DefaultLifecycleManager.java:46)
at org.mule.runtime.module.extension.internal.runtime.config.LifecycleAwareConfigurationProvider.lambda$initialise$1(LifecycleAwareConfigurationProvider.java:81)
at org.mule.runtime.core.api.util.ExceptionUtils.tryExpecting(ExceptionUtils.java:224)
at org.mule.runtime.core.api.util.ClassUtils.withContextClassLoader(ClassUtils.java:966)
at org.mule.runtime.module.extension.internal.runtime.config.LifecycleAwareConfigurationProvider.initialise(LifecycleAwareConfigurationProvider.java:80)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:56)
at org.mule.runtime.core.api.util.func.CheckedConsumer.accept(CheckedConsumer.java:19)
at org.mule.runtime.core.internal.lifecycle.phases.DefaultLifecyclePhase.applyLifecycle(DefaultLifecyclePhase.java:115)
at org.mule.runtime.core.internal.lifecycle.phases.MuleContextInitialisePhase.applyLifecycle(MuleContextInitialisePhase.java:73)
at org.mule.runtime.config.internal.SpringRegistryLifecycleManager$SpringContextInitialisePhase.applyLifecycle(SpringRegistryLifecycleManager.java:128)
at org.mule.runtime.core.internal.lifecycle.RegistryLifecycleManager.doApplyLifecycle(RegistryLifecycleManager.java:175)
at org.mule.runtime.core.internal.lifecycle.RegistryLifecycleManager.applyPhase(RegistryLifecycleManager.java:146)
at org.mule.runtime.config.internal.SpringRegistry.applyLifecycle(SpringRegistry.java:289)
at org.mule.runtime.core.internal.registry.MuleRegistryHelper.applyLifecycle(MuleRegistryHelper.java:339)
at org.mule.runtime.config.internal.LazyMuleArtifactContext.initializeComponents(LazyMuleArtifactContext.java:287)
at org.mule.runtime.config.internal.LazyMuleArtifactContext.lambda$applyLifecycle$4(LazyMuleArtifactContext.java:250)
at org.mule.runtime.core.internal.context.DefaultMuleContext.withLifecycleLock(DefaultMuleContext.java:531)
at org.mule.runtime.config.internal.LazyMuleArtifactContext.applyLifecycle(LazyMuleArtifactContext.java:248)
at org.mule.runtime.config.internal.LazyMuleArtifactContext.initializeComponents(LazyMuleArtifactContext.java:329)
at org.mule.runtime.config.internal.LazyMuleArtifactContext.initializeComponents(LazyMuleArtifactContext.java:317)
at org.mule.munit.runner.config.TestComponentLocator.initializeComponents(TestComponentLocator.java:63)
at org.mule.munit.runner.model.builders.SuiteBuilder.build(SuiteBuilder.java:78)
at org.mule.munit.runner.remote.api.server.RunMessageHandler.buildSuite(RunMessageHandler.java:108)
at org.mule.munit.runner.remote.api.server.RunMessageHandler.parseSuiteMessage(RunMessageHandler.java:94)
at org.mule.munit.runner.remote.api.server.RunMessageHandler.parseAndRun(RunMessageHandler.java:81)
at org.mule.munit.runner.remote.api.server.RunMessageHandler.handle(RunMessageHandler.java:75)
at org.mule.munit.runner.remote.api.server.RunnerServer.handleClientMessage(RunnerServer.java:145)
at org.mule.munit.runner.remote.api.server.RunnerServer.run(RunnerServer.java:91)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.mule.service.scheduler.internal.AbstractRunnableFutureDecorator.doRun(AbstractRunnableFutureDecorator.java:111)
at org.mule.service.scheduler.internal.RunnableFutureDecorator.run(RunnableFutureDecorator.java:54)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.KafkaException: java.nio.file.InvalidPathException: Illegal char <:> at index 2: \C:/Users/xxx/AppData/Local/Temp/munit-temp-dir/munitworkingdir5345007588634892776/container/apps/app/cacerts
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:172)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:157)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:73)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:105)
at org.apache.kafka.clients.producer.KafkaProducer.newSender(KafkaProducer.java:442)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:423)
... 56 more
Caused by: java.nio.file.InvalidPathException: Illegal char <:> at index 2: \C:/Users/xxx/AppData/Local/Temp/munit-temp-dir/munitworkingdir5345007588634892776/container/apps/app/cacerts
at sun.nio.fs.WindowsPathParser.normalize(WindowsPathParser.java:182)
at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:153)
at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:77)
at sun.nio.fs.WindowsPath.parse(WindowsPath.java:94)
at sun.nio.fs.WindowsFileSystem.getPath(WindowsFileSystem.java:255)
at java.nio.file.Paths.get(Paths.java:84)
at org.apache.kafka.common.security.ssl.SslEngineBuilder$SecurityStore.lastModifiedMs(SslEngineBuilder.java:298)
at org.apache.kafka.common.security.ssl.SslEngineBuilder$SecurityStore.<init>(SslEngineBuilder.java:275)
at org.apache.kafka.common.security.ssl.SslEngineBuilder.createTruststore(SslEngineBuilder.java:182)
at org.apache.kafka.common.security.ssl.SslEngineBuilder.<init>(SslEngineBuilder.java:100)
at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:95)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:168)
... 61 more
This has been identified as a bug in Anypoint Studio.
MuleSoft Support has suggested the following:
Upgrade the Studio version to the latest patch, 4.3.0-20200925, which has all the fixes included.
The patches are cumulative, so all fixes from the August release will be available in the latest too.
The distributions are stored in the MuleSoft Nexus EE repo, so you will need to ensure you have the credentials configured in your settings.xml.
Follow the Patching documentation, which explains how to patch MUnit using Maven or Studio.
Note: the first execution may take a while because the runtime artifacts will be downloaded to your local repo.
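For reference, a sketch of pinning the patched runtime for MUnit via Maven. This is not taken from the support ticket; it assumes the munit-maven-plugin's runtimeVersion parameter applies to your setup:
<plugin>
  <groupId>com.mulesoft.munit.tools</groupId>
  <artifactId>munit-maven-plugin</artifactId>
  <version>${munit.version}</version>
  <configuration>
    <!-- assumption: runtimeVersion points MUnit at the patched runtime -->
    <runtimeVersion>4.3.0-20200925</runtimeVersion>
  </configuration>
</plugin>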

Problem with sending messages from Flume to Kafka

I have two hosts in two different VMs:
Flume on a CentOS 7 host, and Kafka on a Cloudera host.
I connected the two and configured Kafka on the Cloudera host as the sink in Flume, as shown in the config below:
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /root/Bureau/V1/outputCV
a1.sources.r1.fileHeader = true
a1.sources.r1.interceptors = timestampInterceptor
a1.sources.r1.interceptors.timestampInterceptor.type = timestamp
# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = flume-topic
a1.sinks.k1.kafka.bootstrap.servers = 192.168.5.129:9090,192.168.5.129:9091,192.168.5.129:9092
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
a1.sinks.k1.kafka.producer.compression.type = snappy
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
I got this error when Flume tries to send messages to Kafka:
[root@localhost V1]# /root/flume/bin/flume-ng agent --name a1 --conf-file /root/Bureau/V1/flumeconf/flumetest.conf
Warning: No configuration directory set! Use --conf <dir> to override.
Warning: JAVA_HOME is not set!
Info: Including Hadoop libraries found via (/root/hadoop/bin/hadoop) for HDFS access
Info: Including Hive libraries found via () for Hive access
+ exec /usr/bin/java -Xmx20m -cp '/root/flume/lib/*:/root/hadoop/etc/hadoop:/root/hadoop/share/hadoop/common/lib/*:/root/hadoop/share/hadoop/common/*:/root/hadoop/share/hadoop/hdfs:/root/hadoop/share/hadoop/hdfs/lib/*:/root/hadoop/share/hadoop/hdfs/*:/root/hadoop/share/hadoop/mapreduce/lib/*:/root/hadoop/share/hadoop/mapreduce/*:/root/hadoop/share/hadoop/yarn:/root/hadoop/share/hadoop/yarn/lib/*:/root/hadoop/share/hadoop/yarn/*:/lib/*' -Djava.library.path=:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib org.apache.flume.node.Application --name a1 --conf-file /root/Bureau/V1/flumeconf/flumetest.conf
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/flume/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/root/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2019-05-07 19:04:41,363 INFO node.PollingPropertiesFileConfigurationProvider: Configuration provider starting
2019-05-07 19:04:41,366 INFO node.PollingPropertiesFileConfigurationProvider: Reloading configuration file:/root/Bureau/V1/flumeconf/flumetest.conf
2019-05-07 19:04:41,374 INFO conf.FlumeConfiguration: Processing:k1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:c1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:r1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:k1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:r1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:k1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Added sinks: k1 Agent: a1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:k1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:k1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:r1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:k1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:r1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:r1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:c1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:r1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:k1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:c1
2019-05-07 19:04:41,375 INFO conf.FlumeConfiguration: Processing:k1
2019-05-07 19:04:41,375 WARN conf.FlumeConfiguration: Agent configuration for 'a1' has no configfilters.
2019-05-07 19:04:41,392 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [a1]
2019-05-07 19:04:41,392 INFO node.AbstractConfigurationProvider: Creating channels
2019-05-07 19:04:41,397 INFO channel.DefaultChannelFactory: Creating instance of channel c1 type memory
2019-05-07 19:04:41,400 INFO node.AbstractConfigurationProvider: Created channel c1
2019-05-07 19:04:41,401 INFO source.DefaultSourceFactory: Creating instance of source r1, type spooldir
2019-05-07 19:04:41,415 INFO sink.DefaultSinkFactory: Creating instance of sink: k1, type: org.apache.flume.sink.kafka.KafkaSink
2019-05-07 19:04:41,420 INFO kafka.KafkaSink: Using the static topic flume-topic. This may be overridden by event headers
2019-05-07 19:04:41,426 INFO node.AbstractConfigurationProvider: Channel c1 connected to [r1, k1]
2019-05-07 19:04:41,432 INFO node.Application: Starting new configuration:{ sourceRunners:{r1=EventDrivenSourceRunner: { source:Spool Directory source r1: { spoolDir: /root/Bureau/V1/outputCV } }} sinkRunners:{k1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@492cf580 counterGroup:{ name:null counters:{} } }} channels:{c1=org.apache.flume.channel.MemoryChannel{name: c1}} }
2019-05-07 19:04:41,432 INFO node.Application: Starting Channel c1
2019-05-07 19:04:41,492 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: c1: Successfully registered new MBean.
2019-05-07 19:04:41,492 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: c1 started
2019-05-07 19:04:41,492 INFO node.Application: Starting Sink k1
2019-05-07 19:04:41,494 INFO node.Application: Starting Source r1
2019-05-07 19:04:41,494 INFO source.SpoolDirectorySource: SpoolDirectorySource source starting with directory: /root/Bureau/V1/outputCV
2019-05-07 19:04:41,525 INFO producer.ProducerConfig: ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [192.168.5.129:9090, 192.168.5.129:9091, 192.168.5.129:9092]
buffer.memory = 33554432
client.id =
compression.type = snappy
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 1
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
2019-05-07 19:04:41,536 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SOURCE, name: r1: Successfully registered new MBean.
2019-05-07 19:04:41,536 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: r1 started
2019-05-07 19:04:41,591 INFO utils.AppInfoParser: Kafka version : 2.0.1
2019-05-07 19:04:41,591 INFO utils.AppInfoParser: Kafka commitId : fa14705e51bd2ce5
2019-05-07 19:04:41,592 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: k1: Successfully registered new MBean.
2019-05-07 19:04:41,592 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: k1 started
2019-05-07 19:05:01,780 INFO avro.ReliableSpoolingFileEventReader: Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2019-05-07 19:05:01,781 INFO avro.ReliableSpoolingFileEventReader: Preparing to move file /root/Bureau/V1/outputCV/cv.txt to /root/Bureau/V1/outputCV/cv.txt.COMPLETED
2019-05-07 19:05:03,673 WARN clients.NetworkClient: [Producer clientId=producer-1] Connection to node -2 could not be established. Broker may not be available.
2019-05-07 19:05:03,776 WARN clients.NetworkClient: [Producer clientId=producer-1] Connection to node -2 could not be established. Broker may not be available.
2019-05-07 19:05:03,856 WARN clients.NetworkClient: [Producer clientId=producer-1] Connection to node -2 could not be established. Broker may not be available.
2019-05-07 19:05:03,910 INFO clients.Metadata: Cluster ID: F_Byx5toQju8jaLb3zFwAA
2019-05-07 19:05:04,084 WARN clients.NetworkClient: [Producer clientId=producer-1] Error connecting to node quickstart.cloudera:9092 (id: 0 rack: null)
java.io.IOException: Can't resolve address: quickstart.cloudera:9092
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:235)
at org.apache.kafka.common.network.Selector.connect(Selector.java:214)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:864)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:265)
at org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:266)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:238)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:101)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622)
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:233)
... 7 more
2019-05-07 19:05:04,140 WARN clients.NetworkClient: [Producer clientId=producer-1] Error connecting to node quickstart.cloudera:9092 (id: 0 rack: null)
java.io.IOException: Can't resolve address: quickstart.cloudera:9092
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:235)
at org.apache.kafka.common.network.Selector.connect(Selector.java:214)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:864)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:265)
at org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:266)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:238)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:101)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622)
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:233)
... 7 more
2019-05-07 19:05:04,231 WARN clients.NetworkClient: [Producer clientId=producer-1] Error connecting to node quickstart.cloudera:9092 (id: 0 rack: null)
java.io.IOException: Can't resolve address: quickstart.cloudera:9092
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:235)
at org.apache.kafka.common.network.Selector.connect(Selector.java:214)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:864)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:265)
at org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:266)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:238)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:101)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622)
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:233)
... 7 more
2019-05-07 19:05:04,393 WARN clients.NetworkClient: [Producer clientId=producer-1] Error connecting to node quickstart.cloudera:9092 (id: 0 rack: null)
The "Error connecting to node" warnings show up only after sending messages.
Can anyone help me solve this problem?
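The UnresolvedAddressException points to a DNS problem rather than a broker outage: the machine running the Flume agent cannot resolve the hostname quickstart.cloudera that the broker advertises. A minimal sketch of a workaround, assuming you know the broker's IP (192.168.1.10 below is only a placeholder), is to map the hostname in /etc/hosts on the client machine:
# Hypothetical /etc/hosts entry on the host running the producer;
# replace 192.168.1.10 with the real IP of the Cloudera quickstart VM.
192.168.1.10    quickstart.cloudera
Alternatively, advertised.listeners on the broker can be set to a hostname the clients can actually resolve.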

Kafka-Spark Batch Streaming: WARN clients.NetworkClient: Bootstrap broker disconnected

I am trying to write the rows of a Dataframe into a Kafka topic. The kafka cluster is Kerberized and I am providing the jaas.conf in the --conf arguments to be able to authenticate and connect to the cluster. Below is my code:
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

object app {
  val conf = new SparkConf().setAppName("Kerberos kafka")
  val spark = SparkSession.builder().config(conf).enableHiveSupport().getOrCreate()
  System.setProperty("java.security.auth.login.config", "path to jaas.conf")
  spark.sparkContext.setLogLevel("ERROR")

  def main(args: Array[String]): Unit = {
    val test = spark.sql("select * from testing.test")
    test.show()

    println("publishing to kafka...")
    // Serialize every row as a JSON string in the "value" column.
    val test_final = test.selectExpr("cast(to_json(struct(*)) as string) AS value")
    test_final.show()

    test_final.write.format("kafka")
      .option("kafka.bootstrap.servers", "XXXXXXXXX:9093")
      .option("topic", "test")
      .option("security.protocol", "SASL_SSL")
      .option("sasl.kerberos.service.name", "kafka")
      .save()
  }
}
When I run the above code, it fails with this error:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
When I looked into the error logs on executors, I see this:
18/08/20 22:06:05 INFO producer.ProducerConfig: ProducerConfig values:
compression.type = none
metric.reporters = []
metadata.max.age.ms = 300000
metadata.fetch.timeout.ms = 60000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [xxxxxxxxx:9093]
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
max.block.ms = 60000
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
ssl.truststore.password = null
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
client.id =
ssl.endpoint.identification.algorithm = null
ssl.protocol = TLS
request.timeout.ms = 30000
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
acks = 1
batch.size = 16384
ssl.keystore.location = null
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
retries = 0
max.request.size = 1048576
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
send.buffer.bytes = 131072
linger.ms = 0
18/08/20 22:06:05 INFO utils.AppInfoParser: Kafka version : 0.9.0-kafka-2.0.2
18/08/20 22:06:05 INFO utils.AppInfoParser: Kafka commitId : unknown
18/08/20 22:06:05 INFO datasources.FileScanRDD: Reading File path: hdfs://nameservice1/user/test5/dt=2017-08-04/5a8bb121-3cab-4bed-a32b-9d0fae4a4e8b.parquet, range: 0-142192, partition values: [2017-08-04]
18/08/20 22:06:05 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 4
18/08/20 22:06:05 INFO memory.MemoryStore: Block broadcast_4_piece0 stored as bytes in memory (estimated size 33.9 KB, free 5.2 GB)
18/08/20 22:06:05 INFO broadcast.TorrentBroadcast: Reading broadcast variable 4 took 224 ms
18/08/20 22:06:05 INFO memory.MemoryStore: Block broadcast_4 stored as values in memory (estimated size 472.2 KB, free 5.2 GB)
18/08/20 22:06:06 WARN clients.NetworkClient: Bootstrap broker xxxxxxxxx:9093:9093 disconnected
18/08/20 22:06:06 WARN clients.NetworkClient: Bootstrap broker xxxxxxxxx:9093:9093 disconnected
18/08/20 22:06:07 WARN clients.NetworkClient: Bootstrap broker xxxxxxxxx:9093:9093 disconnected
18/08/20 22:06:07 WARN clients.NetworkClient: Bootstrap broker xxxxxxxxx:9093:9093 disconnected
18/08/20 22:06:07 WARN clients.NetworkClient: Bootstrap broker xxxxxxxxx:9093:9093 disconnected
18/08/20 22:06:08 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 4
18/08/20 22:06:08 INFO executor.Executor: Running task 1.0 in stage 2.0 (TID 4)
18/08/20 22:06:08 WARN clients.NetworkClient: Bootstrap broker xxxxxxxxx:9093:9093 disconnected
18/08/20 22:06:08 INFO datasources.FileScanRDD: Reading File path: hdfs://nameservice1/user/test5/dt=2017-08-10/2175e5d9-e969-41e9-8aa2-f329b5df06bf.parquet, range: 0-77484, partition values: [2017-08-10]
18/08/20 22:06:08 WARN clients.NetworkClient: Bootstrap broker xxxxxxxxx:9093:9093 disconnected
18/08/20 22:06:09 WARN clients.NetworkClient: Bootstrap broker xxxxxxxxx:9093:9093 disconnected
18/08/20 22:06:09 WARN clients.NetworkClient: Bootstrap broker xxxxxxxxx:9093:9093 disconnected
18/08/20 22:06:10 WARN clients.NetworkClient: Bootstrap broker xxxxxxxxx:9093:9093 disconnected
18/08/20 22:06:10 WARN clients.NetworkClient: Bootstrap broker xxxxxxxxx:9093:9093 disconnected
In the above log, I see three entries that conflict with what I set:
security.protocol = PLAINTEXT
sasl.kerberos.service.name = null
INFO utils.AppInfoParser: Kafka version : 0.9.0-kafka-2.0.2
I am setting the security.protocol and sasl.kerberos.service.name values in the test_final.write options above. Does that mean the configs are not being passed? The Kafka dependency I am using in my jar is:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.10.2.1</version>
</dependency>
Does the 0.10.2.1 version conflict with 0.9.0-kafka-2.0.2, and could this be causing the issue?
Here is my jaas.conf:
/* $Id$ */
kinit {
com.sun.security.auth.module.Krb5LoginModule required;
};
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
doNotPrompt=true
useTicketCache=true
useKeyTab=true
principal="user#CORP.COM"
serviceName="kafka"
keyTab="/data/home/keytabs/user.keytab"
client=true;
};
Here is my spark-submit command:
spark2-submit --master yarn --class app --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=path to jaas.conf" --conf "spark.driver.extraJavaOptions=-Djava.security.auth.login.config=path to jaas.conf" --files path to jaas.conf --conf "spark.driver.extraClassPath=path to spark-sql-kafka-0-10_2.11-2.2.0.jar" --conf "spark.executor.extraClassPath=path to spark-sql-kafka-0-10_2.11-2.2.0.jar" --num-executors 2 --executor-cores 4 --executor-memory 10g --driver-memory 5g ./KerberizedKafkaConnect-1.0-SNAPSHOT-shaded.jar
Any help would be appreciated. Thank you!
I am not completely sure if this will fix your specific issue, but in Spark Structured Streaming the security options must carry the "kafka." prefix.
So you will have the following:
import org.apache.kafka.clients.CommonClientConfigs
import org.apache.kafka.common.config.SslConfigs

// `security` is assumed to be your own settings holder (e.g. a case class with
// the keystore/truststore values); the val is named differently so it does not
// shadow that holder.
val kafkaSecurityOptions = Map(
  CommonClientConfigs.SECURITY_PROTOCOL_CONFIG -> security.securityProtocol,
  SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG -> security.sslTrustStoreLocation,
  SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG -> security.sslTrustStorePassword,
  SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG -> security.sslKeyStoreLocation,
  SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG -> security.sslKeyStorePassword,
  SslConfigs.SSL_KEY_PASSWORD_CONFIG -> security.sslKeyPassword
).map { case (key, value) => "kafka." + key -> value }

test_final.write.format("kafka")
  .option("kafka.bootstrap.servers", "XXXXXXXXX:9093")
  .option("topic", "test")
  .options(kafkaSecurityOptions)
  .save()
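Applied to the Kerberos setup from the question, a minimal sketch looks like this (the broker address and service name are taken from the question; SASL_SSL is assumed to match the cluster's listener):
// Same write as in the question, but with the security options carrying the
// "kafka." prefix so Spark forwards them to the underlying Kafka producer.
test_final.write.format("kafka")
  .option("kafka.bootstrap.servers", "XXXXXXXXX:9093")
  .option("topic", "test")
  .option("kafka.security.protocol", "SASL_SSL")
  .option("kafka.sasl.kerberos.service.name", "kafka")
  .save()
Without the prefix, the options are silently ignored by the Kafka data source, which is consistent with the executor log showing security.protocol = PLAINTEXT.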

Kafka Connect Distributed is not working

I am trying to run Kafka Connect in distributed mode to read from a topic and write to HDFS.
The process starts successfully with the required properties files, but I am not able to commit the events to HDFS.
Please see the full logs below.
Note: Broker details have been masked and all other details are mentioned in the logs.
[2018-06-26 11:49:48,474] INFO ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [****:****, ****:****]
check.crcs = true
client.id = consumer-3
connections.max.idle.ms = 540000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = connect-cluster
heartbeat.interval.ms = 3000
interceptor.classes = null
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
(org.apache.kafka.clients.consumer.ConsumerConfig:180)
[2018-06-26 11:49:48,476] WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:188)
[2018-06-26 11:49:48,476] WARN The configuration 'status.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:188)
[2018-06-26 11:49:48,476] WARN The configuration 'internal.key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:188)
[2018-06-26 11:49:48,476] WARN The configuration 'rest.port' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:188)
[2018-06-26 11:49:48,476] WARN The configuration 'value.converter.schema.registry.url' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:188)
[2018-06-26 11:49:48,476] WARN The configuration 'internal.key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:188)
[2018-06-26 11:49:48,476] WARN The configuration 'internal.value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:188)
[2018-06-26 11:49:48,476] WARN The configuration 'internal.value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:188)
[2018-06-26 11:49:48,476] WARN The configuration 'offset.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:188)
[2018-06-26 11:49:48,476] WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:188)
[2018-06-26 11:49:48,476] WARN The configuration 'key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:188)
[2018-06-26 11:49:48,477] WARN The configuration 'key.converter.schema.registry.url' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:188)
[2018-06-26 11:49:48,477] INFO Kafka version : 0.10.1.0-IBM-8 (org.apache.kafka.common.utils.AppInfoParser:83)
[2018-06-26 11:49:48,477] INFO Kafka commitId : unknown (org.apache.kafka.common.utils.AppInfoParser:84)
[2018-06-26 11:49:48,682] INFO Discovered coordinator ****:**** (id: 2147483644 rack: null) for group connect-cluster. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:555)
[2018-06-26 11:49:48,686] INFO Finished reading KafkaBasedLog for topic connect-test-configs (org.apache.kafka.connect.util.KafkaBasedLog:148)
[2018-06-26 11:49:48,686] INFO Started KafkaBasedLog for topic connect-test-configs (org.apache.kafka.connect.util.KafkaBasedLog:150)
[2018-06-26 11:49:48,686] INFO Started KafkaConfigBackingStore (org.apache.kafka.connect.storage.KafkaConfigBackingStore:261)
[2018-06-26 11:49:48,687] INFO Herder started (org.apache.kafka.connect.runtime.distributed.DistributedHerder:171)
[2018-06-26 11:49:48,689] INFO Discovered coordinator ****:**** (id: 2147483644 rack: null) for group connect-cluster. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:555)
[2018-06-26 11:49:48,691] INFO (Re-)joining group connect-cluster (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:381)
[2018-06-26 11:49:48,702] INFO Successfully joined group connect-cluster with generation 3 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:349)
[2018-06-26 11:49:48,702] INFO Joined group and got assignment: Assignment{error=0, leader='connect-1-67ad9053-4b20-41dd-9a6a-ec8e4f4d622e', leaderUrl='http://****:****/', offset=-1, connectorIds=[], taskIds=[]} (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1009)
[2018-06-26 11:49:48,702] INFO Starting connectors and tasks using config offset -1 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:745)
[2018-06-26 11:49:48,702] INFO Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:752)
[2018-06-26 11:49:55,748] INFO Reflections took 7022 ms to scan 225 urls, producing 15521 keys and 98977 values (org.reflections.Reflections:229)
but I am not able to commit the events to HDFS
When you start distributed mode, it doesn't take any individual connector properties; you load those later.
Just want to know: where should we give the properties for the topics, HDFS URL, and tasks in distributed mode?
You POST JSON into it.
Make a connect-hdfs.json file, for example:
{
  "name": "quickstart-hdfs-sink",
  "config": {
    "store.url": "hdfs:///apps/kafka-connect",
    "hadoop.conf.dir": "/etc/hadoop/conf",
    "topics": ...
  }
}
Send it to Connect:
curl -X POST -H "Content-Type: application/json" -d @connect-hdfs.json http://connect-server:8083/connectors
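Once submitted, the same REST API can be used to verify the connector (connect-server and the connector name are the placeholders from the example above):
# List registered connectors, then inspect the status of the new sink.
curl http://connect-server:8083/connectors
curl http://connect-server:8083/connectors/quickstart-hdfs-sink/status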

Spring Cloud Data Flow > Stream > Source throws UNKNOWN_TOPIC_OR_PARTITION when deploying in K8s but working in local deployment

I'm trying to run a "Hello, world" Spring Cloud Data Flow stream based on the very simple example explained at http://cloud.spring.io/spring-cloud-dataflow/. I'm able to create a simple source and sink and run it on my local SCDF server using Kafka, so up to this point everything works.
Now, I'm trying to deploy it in my private cloud based on the instructions listed at http://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/current-SNAPSHOT/reference/htmlsingle/#_getting_started. Using this deployment I'm able to deploy a simple "time | log" out-of-the-box stream with no problems, but my example fails.
Specific versions are:
Docker version 1.13.1, build 092cba3
Hyperkube 1.5.5
SCDF 1.2.0.M2
zookeeper 3.4.9-1757313, built on 08/23/2016 06:50 GMT
Kafka 0.10.1.1
Source artifact logs are:
2017-04-06T11:05:07.429204866Z 2017-04-06 11:05:07,428 INFO main-SendThread(10.0.0.181:2181) o.a.z.ClientCnxn:876 - Socket connection established to 10.0.0.181/10.0.0.181:2181, initiating session
2017-04-06T11:05:07.440381666Z 2017-04-06 11:05:07,439 INFO main-SendThread(10.0.0.181:2181) o.a.z.ClientCnxn:1299 - Session establishment complete on server 10.0.0.181/10.0.0.181:2181, sessionid = 0x15b155ef61e014a, negotiated timeout = 10000
2017-04-06T11:05:07.740130495Z 2017-04-06 11:05:07,737 INFO main o.a.k.c.p.ProducerConfig:180 - ProducerConfig values:
2017-04-06T11:05:07.740160464Z acks = 1
2017-04-06T11:05:07.740163408Z batch.size = 16384
2017-04-06T11:05:07.740165226Z block.on.buffer.full = false
2017-04-06T11:05:07.740166942Z bootstrap.servers = [10.0.0.213:9092]
2017-04-06T11:05:07.740168741Z buffer.memory = 33554432
2017-04-06T11:05:07.740170545Z client.id =
2017-04-06T11:05:07.740172245Z compression.type = none
2017-04-06T11:05:07.740173971Z connections.max.idle.ms = 540000
2017-04-06T11:05:07.740175706Z interceptor.classes = null
2017-04-06T11:05:07.744179899Z reconnect.backoff.ms = 50
2017-04-06T11:05:07.744181600Z request.timeout.ms = 30000
2017-04-06T11:05:07.744183356Z retries = 0
2017-04-06T11:05:07.744185083Z retry.backoff.ms = 100
2017-04-06T11:05:07.744186754Z sasl.kerberos.kinit.cmd = /usr/bin/kinit
2017-04-06T11:05:07.744188494Z sasl.kerberos.min.time.before.relogin = 60000
2017-04-06T11:05:07.744190205Z sasl.kerberos.service.name = null
2017-04-06T11:05:07.744191916Z sasl.kerberos.ticket.renew.jitter = 0.05
2017-04-06T11:05:07.744193763Z sasl.kerberos.ticket.renew.window.factor = 0.8
2017-04-06T11:05:07.744195432Z sasl.mechanism = GSSAPI
2017-04-06T11:05:07.744197163Z security.protocol = PLAINTEXT
2017-04-06T11:05:07.744198789Z send.buffer.bytes = 131072
2017-04-06T11:05:07.744200522Z ssl.cipher.suites = null
2017-04-06T11:05:07.744202328Z ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
2017-04-06T11:05:07.744204161Z ssl.endpoint.identification.algorithm = null
2017-04-06T11:05:07.744205837Z ssl.key.password = null
2017-04-06T11:05:07.744207544Z ssl.keymanager.algorithm = SunX509
2017-04-06T11:05:07.744212464Z ssl.keystore.location = null
2017-04-06T11:05:07.744214272Z ssl.keystore.password = null
2017-04-06T11:05:07.744216025Z ssl.keystore.type = JKS
2017-04-06T11:05:07.744217647Z ssl.protocol = TLS
2017-04-06T11:05:07.744219234Z ssl.provider = null
2017-04-06T11:05:07.744220987Z ssl.secure.random.implementation = null
2017-04-06T11:05:07.744222666Z ssl.trustmanager.algorithm = PKIX
2017-04-06T11:05:07.744224359Z ssl.truststore.location = null
2017-04-06T11:05:07.744226022Z ssl.truststore.password = null
2017-04-06T11:05:07.744228171Z ssl.truststore.type = JKS
2017-04-06T11:05:07.744230006Z timeout.ms = 30000
2017-04-06T11:05:07.744231705Z value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
2017-04-06T11:05:07.744233544Z
2017-04-06T11:05:07.837193978Z 2017-04-06 11:05:07,834 WARN main o.a.k.c.p.ProducerConfig:188 - The configuration 'key.deserializer' was supplied but isn't a known config.
2017-04-06T11:05:07.837221870Z 2017-04-06 11:05:07,835 WARN main o.a.k.c.p.ProducerConfig:188 - The configuration 'value.deserializer' was supplied but isn't a known config.
2017-04-06T11:05:07.929207703Z 2017-04-06 11:05:07,926 INFO main o.a.k.c.u.AppInfoParser:83 - Kafka version : 0.10.1.1
2017-04-06T11:05:07.929239636Z 2017-04-06 11:05:07,927 INFO main o.a.k.c.u.AppInfoParser:84 - Kafka commitId : f10ef2720b03b247
2017-04-06T11:05:08.228817026Z 2017-04-06 11:05:08,228 WARN kafka-producer-network-thread | producer-1 o.a.k.c.NetworkClient:600 - Error while fetching metadata with correlation id 0 : {output=UNKNOWN_TOPIC_OR_PARTITION}
2017-04-06T11:05:08.436574800Z 2017-04-06 11:05:08,435 WARN kafka-producer-network-thread | producer-1 o.a.k.c.NetworkClient:600 - Error while fetching metadata with correlation id 1 : {output=UNKNOWN_TOPIC_OR_PARTITION}
And the ZooKeeper logs are:
2017-04-06T11:04:38.000953447Z 2017-04-06 11:04:38,000 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x15b155ef61e0148
2017-04-06T11:05:04.939356606Z 2017-04-06 11:05:04,938 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /10.1.98.5:48180
2017-04-06T11:05:04.940666418Z 2017-04-06 11:05:04,939 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /10.1.98.5:48180
2017-04-06T11:05:04.943859474Z 2017-04-06 11:05:04,943 [myid:] - INFO [SyncThread:0:ZooKeeperServer@673] - Established session 0x15b155ef61e0149 with negotiated timeout 10000 for client /10.1.98.5:48180
2017-04-06T11:05:07.325929074Z 2017-04-06 11:05:07,325 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x15b155ef61e0149
2017-04-06T11:05:07.342876962Z 2017-04-06 11:05:07,341 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1008] - Closed socket connection for client /10.1.98.5:48180 which had sessionid 0x15b155ef61e0149
2017-04-06T11:05:07.429909440Z 2017-04-06 11:05:07,429 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /10.1.98.5:48182
2017-04-06T11:05:07.429933377Z 2017-04-06 11:05:07,429 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /10.1.98.5:48182
2017-04-06T11:05:07.441158222Z 2017-04-06 11:05:07,439 [myid:] - INFO [SyncThread:0:ZooKeeperServer@673] - Established session 0x15b155ef61e014a with negotiated timeout 10000 for client /10.1.98.5:48182
2017-04-06T11:05:29.695276997Z 2017-04-06 11:05:29,694 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of stream exception
2017-04-06T11:05:29.695325790Z EndOfStreamException: Unable to read additional data from client sessionid 0x15b155ef61e014a, likely client has closed socket
2017-04-06T11:05:29.695328912Z at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
2017-04-06T11:05:29.695331119Z at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:203)
2017-04-06T11:05:29.695333009Z at java.lang.Thread.run(Thread.java:745)
2017-04-06T11:05:29.696333706Z 2017-04-06 11:05:29,696 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1008] - Closed socket connection for client
I don't find any log in Kafka at the moment of the exception.
Code snippet for the source class is:
package xxxx;

import java.text.SimpleDateFormat;
import java.util.Date;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.core.MessageSource;
import org.springframework.messaging.support.GenericMessage;

@SpringBootApplication
@EnableBinding(Source.class)
public class HelloNitesApplication
{
    public static void main(String[] args)
    {
        SpringApplication.run(HelloNitesApplication.class, args);
    }

    @Bean
    @InboundChannelAdapter(value = Source.OUTPUT)
    public MessageSource<String> timerMessageSource()
    {
        return () -> new GenericMessage<>("Hello " + new SimpleDateFormat().format(new Date()));
    }
}
So, the pod containing the stream source keeps crashing in a loop.
The problem seems to be that the property "spring.cloud.stream.bindings.output.destination=XXX" was ignored by my implementation; I had deleted the topic "output" before the execution because I expected it to write to the topic specified by the property.
After I redeployed everything, the source works and the topic is created properly, although it inserts the messages into the "output" topic instead of the one specified by the property I defined.
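For what it's worth, a minimal sketch of setting the destination per application at deployment time, using the usual app.<appName>. prefix from the SCDF shell (my-stream, hello-nites, and my-topic are placeholder names, not from the original post):
stream deploy my-stream --properties "app.hello-nites.spring.cloud.stream.bindings.output.destination=my-topic"
If the property is instead baked into the app, it must be visible to the binder at startup (e.g. in application.properties of the source), otherwise the binding falls back to the default channel name "output".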