Kafka: SASL_SSL Cluster authorization failed - apache-kafka

We are using Kafka SASL_SSL authentication and are able to make connections using the User Principal Name, but when sending data to the topic we get the error below:
Executing step
Autopopulated Producer.Connection with Connection (Kafka Connection)
Adjusting Runtime Scopes
Adjusting Runtime Scopes
Creating SSL Context with protocol: TLSv1.2
Opening producer on sdr01kbr01.uscc.com:9093,sdr01kbr02.uscc.com:9093,sdr02kbr03.uscc.com:9093,sdr02kbr04.uscc.com:9093
Sending message to topic dev01-oms
Creating SSL Context with protocol: TLSv1.2
Error: Error waiting for acknowledgement after sending message to topic dev01-oms: org.apache.kafka.common.errors.ClusterAuthorizationException: Cluster authorization failed.
============================================================================
| Exception:
============================================================================
| Message:     Error waiting for acknowledgement after sending message to topic dev01-oms: org.apache.kafka.common.errors.ClusterAuthorizationException: Cluster authorization failed.
----------------------------------------------------------------------------
| Trapped Exception: org.apache.kafka.common.errors.ClusterAuthorizationException: Cluster authorization failed.
| Trapped Message:   java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.ClusterAuthorizationException: Cluster authorization failed.
----------------------------------------------------------------------------
STACK TRACE
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.ClusterAuthorizationException: Cluster authorization failed.
    at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:97)
    at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:65)
    at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:30)
    at javax.swing.SwingWorker$1.call(SwingWorker.java:295)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at javax.swing.SwingWorker.run(SwingWorker.java:334)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.errors.ClusterAuthorizationException: Cluster authorization failed.
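
A ClusterAuthorizationException on send, even though the SASL_SSL connection itself succeeds, usually means the authenticated principal is missing an ACL the producer needs; the most common case is the cluster-level IdempotentWrite permission requested by an idempotent producer. A rough sketch of the grants to check, where the principal name and the admin client config file are placeholders for your environment:

# placeholders: substitute your own principal and admin client config
bin/kafka-acls.sh --bootstrap-server sdr01kbr01.uscc.com:9093 \
  --command-config admin-client.properties \
  --add --allow-principal "User:<your-upn>" \
  --operation IdempotentWrite --cluster

bin/kafka-acls.sh --bootstrap-server sdr01kbr01.uscc.com:9093 \
  --command-config admin-client.properties \
  --add --allow-principal "User:<your-upn>" \
  --operation Write --topic dev01-oms

Alternatively, setting enable.idempotence=false on the producer avoids the cluster-level check, at the cost of idempotent delivery.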

Related

MongoDB Debezium Fails to connect due to ssl handshake failure

I'm running a MongoDB Debezium Kafka connector on AWS MSK, and the connector goes to the FAILED status. The MongoDB server logs Error receiving request from client: SSLHandshakeFailed: The server is configured to only allow SSL connections, and the Debezium logs show com.mongodb.MongoSocketReadException: Prematurely reached end of stream.
Below is my Debezium configuration; I have enabled mongodb.ssl.enabled=true and also mongodb.ssl.invalid.hostname.allowed=true, but that didn't fix the issue.
Does anybody know if I'm missing something from the configuration?
connector.class=io.debezium.connector.mongodb.MongoDbConnector
mongodb.ssl.enabled=true
collection.include.list=***
mongodb.password=***
tasks.max=2
mongodb.user=***
mongodb.ssl.invalid.hostname.allowed=true
mongodb.hosts=***
database.include.list=***
Debezium stack trace:
at com.mongodb.connection.BaseCluster.getDescription(BaseCluster.java:160)
at com.mongodb.Mongo.getClusterDescription(Mongo.java:378)
at com.mongodb.Mongo.getReplicaSetStatus(Mongo.java:414)
at io.debezium.connector.mongodb.ConnectionContext.clientForPrimary(ConnectionContext.java:335)
at io.debezium.connector.mongodb.ConnectionContext.lambda$primaryClientFor$1(ConnectionContext.java:179)
at io.debezium.connector.mongodb.ConnectionContext.lambda$primaryClientFor$2(ConnectionContext.java:188)
at io.debezium.connector.mongodb.ConnectionContext$MongoPrimary.execute(ConnectionContext.java:258)
at io.debezium.connector.mongodb.ConnectionContext$MongoPrimary.databaseNames(ConnectionContext.java:296)
at io.debezium.connector.mongodb.MongoDbConnectorConfig$DatabaseRecommender.lambda$validValues$1(MongoDbConnectorConfig.java:239)
at java.base/java.util.HashMap$Values.forEach(HashMap.java:977)
at io.debezium.connector.mongodb.ReplicaSets.onEachReplicaSet(ReplicaSets.java:102)
at io.debezium.connector.mongodb.MongoDbConnectorConfig$DatabaseRecommender.validValues(MongoDbConnectorConfig.java:236)
at io.debezium.config.Field.validate(Field.java:567)
at io.debezium.config.Field.lambda$validate$7(Field.java:583)
at java.base/java.util.Arrays$ArrayList.forEach(Arrays.java:4390)
at io.debezium.config.Field.validate(Field.java:580)
at io.debezium.config.Configuration.lambda$validate$25(Configuration.java:1653)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177)
at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
at java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497)
at io.debezium.config.Field$Set.forEachTopLevelField(Field.java:127)
at io.debezium.config.Configuration.validate(Configuration.java:1652)
at io.debezium.connector.mongodb.MongoDbConnector.validate(MongoDbConnector.java:194)
at org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:375)
at org.apache.kafka.connect.runtime.AbstractHerder.lambda$validateConnectorConfig$1(AbstractHerder.java:326)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
[2022-04-14 03:41:56,279] INFO Closing all connections to (io.debezium.connector.mongodb.ConnectionContext:75)
[2022-04-14 03:41:56,280] ERROR Uncaught exception in REST call to /connectors (org.apache.kafka.connect.runtime.rest.errors.ConnectExceptionMapper:61)
org.apache.kafka.connect.errors.ConnectException: Unable to connect to primary node of 'atlas-:27017' after 2 failed attempts
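
One thing worth checking beyond the connector properties above: mongodb.ssl.enabled only asks the driver to use TLS, and the connection can still fail if the setting never takes effect in the deployed configuration or if the Connect worker's JVM does not trust the CA that signed the MongoDB server certificate. A sketch of importing the CA into a truststore and handing it to a self-managed Connect worker, where the file names and paths are placeholders:

# placeholders: substitute your own CA file, truststore path and password
keytool -importcert -alias mongodb-ca \
  -file mongodb-ca.pem \
  -keystore /etc/kafka/mongodb-truststore.jks \
  -storepass changeit -noprompt

# picked up by connect-distributed.sh when you run the worker yourself
export KAFKA_OPTS="-Djavax.net.ssl.trustStore=/etc/kafka/mongodb-truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"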

Failed to connect to kafka cluster When running Kafka and Kafka-connect on different servers

I run Kafka and Kafka Connect on different servers (let's say serverA and serverB).
serverA, for Kafka Connect:
# vi /home/kafka/config/connect-distributed.properties
bootstrap.servers=serverB:9092
rest.host.name=localhost
rest.port=8083
serverB, for Kafka:
# vi server.properties
broker.id=1
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://serverA:9092
delete.topic.enable = true
But when I run Kafka Connect on serverA, I get this error:
[2020-04-30 16:59:37,053] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed:84)
org.apache.kafka.connect.errors.ConnectException: Failed to connect to and describe Kafka cluster. Check worker's broker connection and security properties.
at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:64)
at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:45)
at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:95)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:78)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Call(callName=listNodes, deadlineMs=1588233577048) timed out at 1588233577049 after 1 attempt(s)
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:58)
... 3 more
Caused by: org.apache.kafka.common.errors.TimeoutException: Call(callName=listNodes, deadlineMs=1588233577048) timed out at 1588233577049 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
FYI, if I run Kafka Connect on the Kafka server (serverB), it works, but I want to run them on different servers.
How can I connect Kafka Connect to Kafka?
In your server.properties you have
advertised.listeners=PLAINTEXT://serverA:9092
but Kafka Connect uses
bootstrap.servers=serverB:9092
instead of
bootstrap.servers=serverA:9092
The advertised listener and the worker's bootstrap.servers need to point at the same broker host; see the sketch below.
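
Since the question states that Kafka runs on serverB and Connect on serverA, a consistent pair of files would look roughly like this (a sketch, assuming plaintext listeners on the default port):

# serverB: server.properties -- the broker advertises its own reachable address
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://serverB:9092

# serverA: connect-distributed.properties -- the worker bootstraps against the broker host
bootstrap.servers=serverB:9092

With that in place the worker on serverA can both reach the broker and follow the address it advertises back.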

Connection refused to Schema Registry

I have installed the new version of Confluent, i.e. 5.4, and since then I am unable to connect to it; my Schema Registry also terminates unexpectedly.
Today when I started Confluent and tried to produce data, I received the following error:
[2020-03-05 12:25:00,453] ERROR Failed to send HTTP request to endpoint: http://localhost:8081/subjects/avro-key/versions (io.confluent.kafka.schemaregistry.client.rest.RestService:245)
java.net.ConnectException: Connection refused (Connection refused)
at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
at java.base/java.net.Socket.connect(Socket.java:609)
at java.base/sun.net.NetworkClient.doConnect(NetworkClient.java:177)
at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:474)
at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:569)
at java.base/sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at java.base/sun.net.www.http.HttpClient.New(HttpClient.java:341)
at java.base/sun.net.www.http.HttpClient.New(HttpClient.java:362)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1248)
at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1187)
at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1081)
at java.base/sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:1015)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1362)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1337)
at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:241)
at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:322)
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:422)
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:414)
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:400)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:140)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:196)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:172)
at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:71)
at io.confluent.kafka.formatter.AvroMessageReader.readMessage(AvroMessageReader.java:199)
at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:55)
at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
Updated the question with the Schema-registry logs:
INFO Logging initialized #865ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log:169)
[2020-03-09 12:35:51,851] INFO Adding listener: http://0.0.0.0:8081 (io.confluent.rest.ApplicationServer:316)
[2020-03-09 12:35:52,366] INFO Created schema registry namespace localhost:2181 /schema_registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig:709)
[2020-03-09 12:35:53,329] INFO Initializing KafkaStore with broker endpoints: PLAINTEXT://LAP-LIN-897:9092 (io.confluent.kafka.schemaregistry.storage.KafkaStore:108)
[2020-03-09 12:38:03,215] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:77)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:248)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:75)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.configureBaseApplication(SchemaRegistryRestApplication.java:90)
at io.confluent.rest.Application.configureHandler(Application.java:217)
at io.confluent.rest.ApplicationServer.doStart(ApplicationServer.java:185)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:43)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: Timed out trying to create or validate schema topic configuration
at io.confluent.kafka.schemaregistry.storage.KafkaStore.createOrVerifySchemaTopic(KafkaStore.java:177)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:119)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:246)
... 6 more
Caused by: java.util.concurrent.TimeoutException
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:272)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.createOrVerifySchemaTopic(KafkaStore.java:170)
... 8 more
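The registry log shows the underlying KafkaStore pointed at PLAINTEXT://LAP-LIN-897:9092 and then timing out while creating or validating its schema topic, so the registry never finishes starting, which is why the later producer call to http://localhost:8081 is refused. A sketch of the schema-registry.properties entries worth verifying, assuming the default topic name:

# must point at a broker address that is reachable from this host
kafkastore.bootstrap.servers=PLAINTEXT://LAP-LIN-897:9092
# internal topic the registry creates or validates on startup
kafkastore.topic=_schemas
listeners=http://0.0.0.0:8081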

Confluent Replicator Failed to Reconfigure Connectors Task?

I have used MirrorMaker in the past but not Replicator, and I am getting an error but am not sure where to start debugging it.
Here is the error:
[2019-08-12 18:04:09,672] ERROR Failed to reconfigure connector's tasks, retrying after backoff: (org.apache.kafka.connect.runtime.distributed.DistributedHerder:958)
org.apache.kafka.connect.errors.ConnectException: Could not obtain timely topic metadata update from source cluster
at io.confluent.connect.replicator.TopicMonitorThreadWithZk.assignments(TopicMonitorThreadWithZk.java:138)
at io.confluent.connect.replicator.ReplicatorSourceConnector.taskConfigs(ReplicatorSourceConnector.java:99)
at org.apache.kafka.connect.runtime.Worker.connectorTaskConfigs(Worker.java:317)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnector(DistributedHerder.java:997)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnectorTasksWithRetry(DistributedHerder.java:950)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:914)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1300(DistributedHerder.java:110)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:924)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:920)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
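
The exception comes from Replicator's topic monitor, which has to fetch topic metadata from the source cluster before it can hand out task configs, so the source-cluster connection settings in the connector config are the first thing to check. A sketch with placeholder hosts (the exact keys depend on your Replicator version):

# source cluster that the topic monitor polls (placeholder hosts)
src.kafka.bootstrap.servers=source-broker:9092
src.zookeeper.connect=source-zk:2181
# destination cluster
dest.kafka.bootstrap.servers=dest-broker:9092
topic.whitelist=my-topic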

Login Failure: Pool is empty and connection creation failed

When I try to SSO using Shibboleth IdP, a login error occurs when the username and password are submitted: Login Failure: Pool is empty and connection creation failed.
My error logs are as follows:
==> /opt/shibboleth-idp/logs/idp-warn.log <==
at org.ldaptive.provider.jndi.JndiConnectionFactory.createInternal(JndiConnectionFactory.java:102)
Caused by: javax.naming.CommunicationException: localhost:10389
at com.sun.jndi.ldap.Connection.<init>(Connection.java:216)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
2018-08-13 09:32:53,752 - WARN [org.ldaptive.pool.BlockingConnectionPool:600] - unable to create active connection
2018-08-13 09:32:53,753 - ERROR [org.ldaptive.pool.BlockingConnectionPool:197] - Could not service check out request
2018-08-13 09:32:53,754 - WARN [net.shibboleth.idp.authn.impl.ValidateUsernamePasswordAgainstLDAP:192] - Profile Action ValidateUsernamePasswordAgainstLDAP: Login by admin produced exception
org.ldaptive.pool.PoolExhaustedException: Pool is empty and connection creation failed
at org.ldaptive.pool.BlockingConnectionPool.getConnection(BlockingConnectionPool.java:198)
Can anyone suggest a way to solve this?
Old question, but answering for the benefit of people arriving from Google.
Check /opt/shibboleth-idp/conf/ldap.properties to confirm that your domain/IP and port are correct.
In my case I had missed that the bitnami/openldap image uses port 1389 by default.
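
For reference, the relevant entry lives in /opt/shibboleth-idp/conf/ldap.properties; a sketch with a placeholder hostname, using the non-default bitnami/openldap port mentioned above:

# placeholder host; the point is the non-default port 1389
idp.authn.LDAP.ldapURL = ldap://openldap:1389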