Debezium connector fails after "Searching for WAL resume position" - apache-kafka

I'm using the Debezium Postgres connector to capture a few tables' records in Kafka. The connector shut down after logging "Searching for WAL resume position". I've included the error below. Any help would be greatly appreciated.
2022-12-12 07:46:51,834 INFO Postgres|postgres|streaming Searching for WAL resume position [io.debezium.connector.postgresql.PostgresStreamingChangeEventSource]
2022-12-12 07:46:56,548 ERROR || WorkerSourceTask{id=ksqldb-connector-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted [org.apache.kafka.connect.runtime.WorkerTask]
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:223)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:149)
at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.convertTransformedRecord(AbstractWorkerSourceTask.java:474)
at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.sendRecords(AbstractWorkerSourceTask.java:387)
at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.execute(AbstractWorkerSourceTask.java:354)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:189)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:244)
at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.run(AbstractWorkerSourceTask.java:72)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: io.apicurio.registry.rest.client.exception.RestClientException
at io.apicurio.registry.rest.client.impl.ErrorHandler.parseError(ErrorHandler.java:95)
at io.apicurio.rest.client.JdkHttpClient.sendRequest(JdkHttpClient.java:205)
at io.apicurio.registry.rest.client.impl.RegistryClientImpl.createArtifact(RegistryClientImpl.java:240)
at io.apicurio.registry.rest.client.RegistryClient.createArtifact(RegistryClient.java:143)
at io.apicurio.registry.resolver.DefaultSchemaResolver.lambda$handleAutoCreateArtifact$2(DefaultSchemaResolver.java:236)
at io.apicurio.registry.resolver.ERCache.lambda$getValue$0(ERCache.java:142)
at io.apicurio.registry.resolver.ERCache.retry(ERCache.java:181)
at io.apicurio.registry.resolver.ERCache.getValue(ERCache.java:141)
at io.apicurio.registry.resolver.ERCache.getByContent(ERCache.java:121)
at io.apicurio.registry.resolver.DefaultSchemaResolver.handleAutoCreateArtifact(DefaultSchemaResolver.java:234)
at io.apicurio.registry.resolver.DefaultSchemaResolver.getSchemaFromRegistry(DefaultSchemaResolver.java:115)
at io.apicurio.registry.resolver.DefaultSchemaResolver.resolveSchema(DefaultSchemaResolver.java:88)
at io.apicurio.registry.utils.converter.ExtJsonConverter.fromConnectData(ExtJsonConverter.java:97)
at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.lambda$convertTransformedRecord$5(AbstractWorkerSourceTask.java:474)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:173)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:207)
... 12 more
2022-12-12 07:46:56,548 INFO || Stopping down connector [io.debezium.connector.common.BaseSourceTask]
2022-12-12 07:46:56,585 INFO Postgres|postgres|streaming WAL resume position 'null' discovered [io.debezium.connector.postgresql.PostgresStreamingChangeEventSource]
2022-12-12 07:46:56,588 INFO Postgres|postgres|streaming Connection gracefully closed [io.debezium.jdbc.JdbcConnection]
2022-12-12 07:46:56,691 INFO Postgres|postgres|streaming Connection gracefully closed [io.debezium.jdbc.JdbcConnection]

It turned out to be an Apicurio error: all the containers were running on one network, but the Apicurio container had not been added to it. After adding Apicurio to the same network, the problem was resolved.
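For reference, here is a minimal sketch of that kind of fix, assuming the containers are run with plain Docker and that the network and container names below (connect-net, apicurio-registry) are placeholders for your own:

# create one shared network for Kafka, Connect, Postgres and the registry (if it doesn't exist yet)
docker network create connect-net
# attach the already-running Apicurio Registry container to it
docker network connect connect-net apicurio-registry

The registry URL in the converter configuration then has to use a hostname that resolves inside that network, e.g. http://apicurio-registry:8080, rather than localhost.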

Related

wildfly failed to start up due to exception in infinispan

We are seeing the following error, which only happens on a few testbeds:
18-Jan-2023 15:26:15,846 CST WARN [TCP] (TQ-Bundler-7,ejb,cdada7bd-7d38-41d0-afa9-9b820c587a29) JGRP000032: cdada7bd-7d38-41d0-afa9-9b820c587a29: no physical address for f4315409-4d0f-148d-8d7d-8fbc74f11179, dropping message
18-Jan-2023 15:26:20,774 CST WARN [ClusterTopologyManagerImpl] (MSC service thread 1-5) ISPN000329: Unable to read rebalancing status from coordinator 8096d1bd-2e66-4fd0-9a98-ac78dfd9d171: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 10 from 8096d1bd-2e66-4fd0-9a98-ac78dfd9d171
at org.inf...#9.4.18.Final//org.infinispan.remoting.transport.impl.SingleTargetRequest.onTimeout(SingleTargetRequest.java:65)
at org.inf...#9.4.18.Final//org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:87)
at org.inf...#9.4.18.Final//org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:22)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
We also noticed that this issue sometimes went away after a reboot, but other times it persisted.
The WildFly version is 19.1.0.
Can someone shed some light on this issue?
Thanks.

Kafka S3 sink connector failing with 22 topics

I am trying to use the Kafka S3 sink connector to push data from 22 topics to an S3 bucket.
While doing this, I am getting the following error:
ERROR [prod-partnerbilling-sink-v3|task-2] WorkerSinkTask{id=prod-partnerbilling-sink-v3-2} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:193)
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:609)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:329)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:232)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:186)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:241)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.OutOfMemoryError: Java heap space
But when I split the 22 topics into 2 batches of 11 and use the same connector config for each batch, it works fine.
I want to understand the root cause of this error.
The issue was resolved after I lowered my s3.part.size from 50MB to 25MB.
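For context, a hedged sketch of the relevant part of such an S3 sink connector configuration (the property names are the Confluent S3 sink connector's; the topic list, bucket, and flush.size are placeholders). s3.part.size is specified in bytes, and each in-flight multipart-upload part is buffered in the worker's heap, so with many topics and partitions those buffers add up:

connector.class=io.confluent.connect.s3.S3SinkConnector
topics=<the 22 topics>
s3.bucket.name=<bucket>
flush.size=10000
# 25 MB parts instead of 50 MB (52428800); the value is in bytes
s3.part.size=26214400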

Kafka-Kinesis-Connector Commit of offsets threw an unexpected exception for sequence number: null

I have a Java project that uses the Kafka-Kinesis-Connector with Kafka Connect to publish messages from Kafka to an Amazon Kinesis stream, which in turn triggers a Lambda. The service was using the Kafka Kinesis client library 1.7.3 and the Amazon Kinesis producer 0.14.7. In order to migrate to the latest version of AWS SDK for Java v1, these two libraries were updated to versions 1.14.8 and …
[2022-10-16 21:59:15,481] ERROR WorkerSinkTask{id=fm-sync-loads-analytics-nprod-test-01-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. Error: The child process has been shutdown and can no longer accept messages. (org.apache.kafka.connect.runtime.WorkerSinkTask)
com.amazonaws.services.kinesis.producer.DaemonException: The child process has been shutdown and can no longer accept messages.
at com.amazonaws.services.kinesis.producer.Daemon.add(Daemon.java:173)
at com.amazonaws.services.kinesis.producer.KinesisProducer.addUserRecord(KinesisProducer.java:625)
at com.amazonaws.services.kinesis.producer.KinesisProducer.addUserRecord(KinesisProducer.java:535)
at com.amazonaws.services.kinesis.producer.KinesisProducer.addUserRecord(KinesisProducer.java:411)
at com.amazon.kinesis.kafka.AmazonKinesisSinkTask.addUserRecord(AmazonKinesisSinkTask.java:235)
at com.amazon.kinesis.kafka.AmazonKinesisSinkTask.put(AmazonKinesisSinkTask.java:143)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:545)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:325)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:228)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:200)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.base/java.lang.Thread.run(Thread.java:844)
[2022-10-16 21:59:15,483] WARN WorkerSinkTask{id=fm-sync-loads-analytics-nprod-test-01-0} Offset commit failed during close (org.apache.kafka.connect.runtime.WorkerSinkTask)
[2022-10-16 21:59:15,483] ERROR WorkerSinkTask{id=fm-sync-loads-analytics-nprod-test-01-0} Commit of offsets threw an unexpected exception for sequence number 1: null (org.apache.kafka.connect.runtime.WorkerSinkTask)
com.amazonaws.services.kinesis.producer.DaemonException: The child process has been shutdown and can no longer accept messages.
at com.amazonaws.services.kinesis.producer.Daemon.add(Daemon.java:173)
at com.amazonaws.services.kinesis.producer.KinesisProducer.flush(KinesisProducer.java:916)
at com.amazonaws.services.kinesis.producer.KinesisProducer.flush(KinesisProducer.java:936)
at com.amazonaws.services.kinesis.producer.KinesisProducer.flushSync(KinesisProducer.java:962)
at com.amazon.kinesis.kafka.AmazonKinesisSinkTask.lambda$flush$0(AmazonKinesisSinkTask.java:108)
at java.base/java.util.HashMap$Values.forEach(HashMap.java:981)
at com.amazon.kinesis.kafka.AmazonKinesisSinkTask.flush(AmazonKinesisSinkTask.java:106)
at org.apache.kafka.connect.sink.SinkTask.preCommit(SinkTask.java:125)
at org.apache.kafka.connect.runtime.WorkerSinkTask.commitOffsets(WorkerSinkTask.java:382)
at org.apache.kafka.connect.runtime.WorkerSinkTask.closePartitions(WorkerSinkTask.java:597)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.base/java.lang.Thread.run(Thread.java:844)
[2022-10-16 21:59:16,452] ERROR WorkerSinkTask{id=fm-sync-loads-analytics-nprod-test-01-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:567)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:325)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:228)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:200)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: com.amazonaws.services.kinesis.producer.DaemonException: The child process has been shutdown and can no longer accept messages.

MongoDB Debezium Fails to connect due to ssl handshake failure

I'm running a MongoDB Debezium Kafka connector on AWS MSK. The connector goes into the FAILED status, with this error on the MongoDB server: "Error receiving request from client: SSLHandshakeFailed: The server is configured to only allow SSL connections", and "com.mongodb.MongoSocketReadException: Prematurely reached end of stream" in the Debezium logs.
Below is my Debezium configuration; I have set mongodb.ssl.enabled=true.
Does anybody know if I'm missing something from the configuration?
I also enabled mongodb.ssl.invalid.hostname.allowed, but that didn't fix the issue.
connector.class=io.debezium.connector.mongodb.MongoDbConnector
mongodb.ssl.enabled=true
collection.include.list=***
mongodb.password=***
tasks.max=2
mongodb.user=***
mongodb.ssl.invalid.hostname.allowed=true
mongodb.hosts=***
database.include.list=***
Debezium stack trace:
at com.mongodb.connection.BaseCluster.getDescription(BaseCluster.java:160)
at com.mongodb.Mongo.getClusterDescription(Mongo.java:378)
at com.mongodb.Mongo.getReplicaSetStatus(Mongo.java:414)
at io.debezium.connector.mongodb.ConnectionContext.clientForPrimary(ConnectionContext.java:335)
at io.debezium.connector.mongodb.ConnectionContext.lambda$primaryClientFor$1(ConnectionContext.java:179)
at io.debezium.connector.mongodb.ConnectionContext.lambda$primaryClientFor$2(ConnectionContext.java:188)
at io.debezium.connector.mongodb.ConnectionContext$MongoPrimary.execute(ConnectionContext.java:258)
at io.debezium.connector.mongodb.ConnectionContext$MongoPrimary.databaseNames(ConnectionContext.java:296)
at io.debezium.connector.mongodb.MongoDbConnectorConfig$DatabaseRecommender.lambda$validValues$1(MongoDbConnectorConfig.java:239)
at java.base/java.util.HashMap$Values.forEach(HashMap.java:977)
at io.debezium.connector.mongodb.ReplicaSets.onEachReplicaSet(ReplicaSets.java:102)
at io.debezium.connector.mongodb.MongoDbConnectorConfig$DatabaseRecommender.validValues(MongoDbConnectorConfig.java:236)
at io.debezium.config.Field.validate(Field.java:567)
at io.debezium.config.Field.lambda$validate$7(Field.java:583)
at java.base/java.util.Arrays$ArrayList.forEach(Arrays.java:4390)
at io.debezium.config.Field.validate(Field.java:580)
at io.debezium.config.Configuration.lambda$validate$25(Configuration.java:1653)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177)
at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
at java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497)
at io.debezium.config.Field$Set.forEachTopLevelField(Field.java:127)
at io.debezium.config.Configuration.validate(Configuration.java:1652)
at io.debezium.connector.mongodb.MongoDbConnector.validate(MongoDbConnector.java:194)
at org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:375)
at org.apache.kafka.connect.runtime.AbstractHerder.lambda$validateConnectorConfig$1(AbstractHerder.java:326)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
[2022-04-14 03:41:56,279] INFO Closing all connections to (io.debezium.connector.mongodb.ConnectionContext:75)
[2022-04-14 03:41:56,280] ERROR Uncaught exception in REST call to /connectors (org.apache.kafka.connect.runtime.rest.errors.ConnectExceptionMapper:61)
org.apache.kafka.connect.errors.ConnectException: Unable to connect to primary node of 'atlas-:27017' after 2 failed attempts

windows kafka java.nio.file.FileSystemException

This error occurs very frequently on Windows Server 2012.
Kafka version: 2.3.1
The error log:
[2019-12-05 03:57:51,567] ERROR Uncaught exception in scheduled task 'kafka-log-retention' (kafka.utils.KafkaScheduler)
org.apache.kafka.common.errors.KafkaStorageException: Error while deleting segments for MetadataLog-0 in dir D:\GpsPlatform\kafka\.\tmp\kafka-logs
Caused by: java.nio.file.FileSystemException: D:\GpsPlatform\kafka\.\tmp\kafka-logs\MetadataLog-0\00000000000003368617.index -> D:\GpsPlatform\kafka\.\tmp\kafka-logs\MetadataLog-0\00000000000003368617.index.deleted: The process cannot access the file because it is being used by another process.
at java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:92)
at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103)
at java.base/sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:395)
at java.base/sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:292)
at java.base/java.nio.file.Files.move(Files.java:1425)
at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:815)
at kafka.log.AbstractIndex.renameTo(AbstractIndex.scala:209)
at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:509)
at kafka.log.Log.asyncDeleteSegment(Log.scala:1982)
at kafka.log.Log.deleteSegment(Log.scala:1967)
at kafka.log.Log.$anonfun$deleteSegments$3(Log.scala:1493)
at kafka.log.Log.$anonfun$deleteSegments$3$adapted(Log.scala:1493)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at kafka.log.Log.$anonfun$deleteSegments$2(Log.scala:1493)
at scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.java:23)
at kafka.log.Log.maybeHandleIOException(Log.scala:2085)
at kafka.log.Log.deleteSegments(Log.scala:1484)
at kafka.log.Log.deleteOldSegments(Log.scala:1479)
at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1557)
at kafka.log.Log.deleteOldSegments(Log.scala:1547)
at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:914)
at kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:911)
at scala.collection.immutable.List.foreach(List.scala:392)
at kafka.log.LogManager.cleanupLogs(LogManager.scala:911)
at kafka.log.LogManager.$anonfun$startup$2(LogManager.scala:395)
at kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:114)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:65)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:830)
Suppressed: java.nio.file.FileSystemException: D:\GpsPlatform\kafka\.\tmp\kafka-logs\MetadataLog-0\00000000000003368617.index -> D:\GpsPlatform\kafka\.\tmp\kafka-logs\MetadataLog-0\00000000000003368617.index.deleted: The process cannot access the file because it is being used by another process.
at java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:92)
at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103)
at java.base/sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:309)
at java.base/sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:292)
at java.base/java.nio.file.Files.move(Files.java:1425)
at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:812)
... 29 more
After running for a period of time, a similar exception is thrown, causing Kafka to crash. How can this exception be resolved for good?
If you have to run Kafka in a Windows environment, you have to disable log retention.
In Kafka's server.properties:
log.retention.hours=-1
log.cleaner.enable=false
# Remove any other lines that start with log.retention.*
To run Kafka on Windows, it's recommended to do so using WSL2, as detailed here. Otherwise you will keep encountering the kind of problems described above.
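As an illustrative sketch (the Kafka version and paths are just examples), running the broker inside a WSL2 shell instead of natively on Windows looks like this:

# inside a WSL2 Ubuntu shell
wget https://archive.apache.org/dist/kafka/2.3.1/kafka_2.12-2.3.1.tgz
tar -xzf kafka_2.12-2.3.1.tgz && cd kafka_2.12-2.3.1
bin/zookeeper-server-start.sh config/zookeeper.properties &
bin/kafka-server-start.sh config/server.properties

Keeping the log directories on the Linux filesystem avoids the Windows file-locking behaviour that breaks the segment rename shown in the FileSystemException above.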