A Kafka instance in another process or thread is using this directory - apache-kafka

I get the following error when running Kafka Connect in distributed mode:
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /home/hadoop/kafka/bin/../logs/server.log (Permission denied)
[2022-09-27 14:03:29,076] ERROR [KafkaServer id=1] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /home/hadoop/kafka/kafka-data/kafka_logs. A Kafka instance in another process or thread is using this directory.
log4j:ERROR Either File or DatePattern options are not set for appender [kafkaAppender].
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /home/hadoop/kafka/bin/../logs/controller.log (Permission denied)

Deleting the log directory is not a solution;
we get this error when a Kafka broker is already running and holding the lock on that data directory.
In my case, I just had to use the right command to start Connect in distributed mode.
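For reference, a minimal sketch of the two start commands, using the scripts and property files that ship with Kafka (adjust the paths to your installation):

# starts a broker; it takes the .lock file in log.dirs, so anything else holding that lock
# makes it fail with "another process or thread is using this directory"
bin/kafka-server-start.sh config/server.properties

# starts Kafka Connect in distributed mode; it talks to the broker over the network
# and never touches the broker's data directory
bin/connect-distributed.sh config/connect-distributed.properties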

Related

KAFKA - ERROR Disk error while locking directory

I got the error "ERROR Disk error while locking directory" while trying to start Kafka with kafka-server-start.sh config/server.properties:
[2020-05-21 23:44:11,323] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-05-21 23:44:11,323] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-05-21 23:44:11,324] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-05-21 23:44:11,340] ERROR Disk error while locking directory /opt/kafka2.13/kafka_2.13-2.5.0/data_log_tu_tao/kafka_data (kafka.server.LogDirFailureChannel)
java.nio.file.AccessDeniedException: /opt/kafka2.13/kafka_2.13-2.5.0/data_log_tu_tao/kafka_data/.lock
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177)
at java.nio.channels.FileChannel.open(FileChannel.java:287)
at java.nio.channels.FileChannel.open(FileChannel.java:335)
at kafka.utils.FileLock.<init>(FileLock.scala:31)
at kafka.log.LogManager.$anonfun$lockLogDirs$1(LogManager.scala:235)
at scala.collection.StrictOptimizedIterableOps.flatMap(StrictOptimizedIterableOps.scala:118)
at scala.collection.StrictOptimizedIterableOps.flatMap$(StrictOptimizedIterableOps.scala:105)
at scala.collection.mutable.ArraySeq.flatMap(ArraySeq.scala:38)
at kafka.log.LogManager.lockLogDirs(LogManager.scala:233)
at kafka.log.LogManager.<init>(LogManager.scala:104)
at kafka.log.LogManager$.apply(LogManager.scala:1084)
at kafka.server.KafkaServer.startup(KafkaServer.scala:253)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
at kafka.Kafka$.main(Kafka.scala:82)
at kafka.Kafka.main(Kafka.scala)
[2020-05-21 23:44:11,344] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.nio.file.AccessDeniedException: /opt/kafka2.13/kafka_2.13-2.5.0/data_log_tu_tao/kafka_data/recovery-point-offset-checkpoint
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.createFile(Files.java:632)
at kafka.server.checkpoints.CheckpointFile.<init>(CheckpointFile.scala:45)
at kafka.server.checkpoints.OffsetCheckpointFile.<init>(OffsetCheckpointFile.scala:57)
at kafka.log.LogManager.$anonfun$recoveryPointCheckpoints$1(LogManager.scala:106)
at scala.collection.StrictOptimizedIterableOps.map(StrictOptimizedIterableOps.scala:100)
at scala.collection.StrictOptimizedIterableOps.map$(StrictOptimizedIterableOps.scala:87)
at scala.collection.mutable.ArraySeq.map(ArraySeq.scala:38)
at kafka.log.LogManager.<init>(LogManager.scala:105)
at kafka.log.LogManager$.apply(LogManager.scala:1084)
at kafka.server.KafkaServer.startup(KafkaServer.scala:253)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
at kafka.Kafka$.main(Kafka.scala:82)
at kafka.Kafka.main(Kafka.scala)
This is a known issue with the Kafka distribution for Windows. Refer to https://issues.apache.org/jira/browse/KAFKA-13391.
Either use Kafka 2.8 (kafka_2.12-2.8.1.tgz) or wait for Kafka 3.0.1 or Kafka 3.1.0.
For people using Kafka on Windows and hitting a related error,
java.nio.file.AccessDeniedException:
this is a common error when log retention kicks in.
Kafka does not have good support for the Windows filesystem.
You can use WSL2 or Docker to work around these limitations.
Try kafka_2.12-2.8.1.tgz; this resolved the issue for me.
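If you go the Docker route, a minimal sketch of running a broker in a container (the apache/kafka image and the 3.7.0 tag are assumptions; check Docker Hub for a current tag):

# single broker in KRaft mode with the image's default configuration (image/tag assumed)
docker run -d --name kafka -p 9092:9092 apache/kafka:3.7.0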
As the error states, the user that starts the Kafka Server process does not have access to your log.dirs:
[2020-05-21 23:44:11,344] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.nio.file.AccessDeniedException: /opt/kafka2.13/kafka_2.13-2.5.0/data_log_tu_tao/kafka_data/recovery-point-offset-checkpoint
You can either:
Change log.dirs (make sure NOT to use /tmp/)
Or grant read/write access to /opt/kafka2.13/kafka_2.13-2.5.0/data_log_tu_tao/kafka_data/
If neither of the above options works for you, it might be worth checking whether the directory actually exists. If not, simply create it by running
mkdir -p /opt/kafka2.13/kafka_2.13-2.5.0/data_log_tu_tao/kafka_data
As a side note, I wouldn't say that /opt/ is the best place to store data.
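A sketch of the "grant access" option, assuming the broker runs under a dedicated kafka account (the user and group names are assumptions; use whichever account launches kafka-server-start.sh):

# hand the data directory to the broker's user (user/group assumed to be kafka:kafka)
sudo chown -R kafka:kafka /opt/kafka2.13/kafka_2.13-2.5.0/data_log_tu_tao/kafka_data
sudo chmod -R u+rwX /opt/kafka2.13/kafka_2.13-2.5.0/data_log_tu_tao/kafka_data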

Kafka Error : Kafka process is going down frequently. Showing below error when trying to restart this

Thanks in advance. Please help me resolve the Kafka error below.
00000000000000.txnindex and rebuilding index... (kafka.log.Log)
[2018-09-25 12:48:05,462] ERROR There was an error in one of the threads during logs loading: java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code (kafka.log.LogManager)
[2018-09-25 12:48:05,469] FATAL [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at org.apache.kafka.common.record.FileLogInputStream$FileChannelRecordBatch.loadBatchWithSize(FileLogInputStream.java:209)
at org.apache.kafka.common.record.FileLogInputStream$FileChannelRecordBatch.loadFullBatch(FileLogInputStream.java:192)
at org.apache.kafka.common.record.FileLogInputStream$FileChannelRecordBatch.ensureValid(FileLogInputStream.java:164)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:263)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:262)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
My guess is that the disk holding your Kafka log directory (log.dirs in server.properties) has run out of space. When Kafka attempts to rebuild the index, there is not enough free space, and therefore the Kafka broker cannot be started.
Assuming log.dirs=/var/lib/kafka
df -hT /var/lib/kafka
will display storage usage for your log directory.
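If the filesystem is indeed full, a quick way to see which topic partitions are consuming the space (the path is the assumed log.dirs from above):

du -sh /var/lib/kafka/* | sort -h   # per-partition directory sizes, largest last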

Kafka can not access the file because it is being used by another process

I have a problem installing Kafka on Windows.
I installed a Kafka cluster of 3 instances on 3 different servers (each server contains one Kafka broker and one ZooKeeper).
Everything works well, but when a Kafka instance stops (or fails) and I try to restart it,
I get this error message:
[2017-11-10 15:17:53,999] INFO [ThrottledRequestReaper-Fetch]: Starting
(kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2017-11-10 15:17:53,999] INFO [ThrottledRequestReaper-Produce]: Starting
(kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2017-11-10 15:17:53,999] INFO [ThrottledRequestReaper-Request]: Starting
(kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2017-11-10 15:17:54,109] INFO Loading logs. (kafka.log.LogManager)
[2017-11-10 15:17:54,171] WARN Found a corrupted index file due to
requirement failed: Corrupt index found, index file (C:\Tools\Kafka\kafka-
logs\hubone.dataCollect.orbiwise.ArchiveQueue-0\00000000000000000015.index)
has non-zero size but the last offset is 15 which is no larger than the base
offset 15.}. deleting C:\Tools\Kafka\kafka-
logs\hubone.dataCollect.orbiwise.ArchiveQueue-
0\00000000000000000015.timeindex, C:\Tools\Kafka\kafka-
logs\hubone.dataCollect.orbiwise.ArchiveQueue-0\00000000000000000015.index,
and C:\Tools\Kafka\kafka-logs\hubone.dataCollect.orbiwise.ArchiveQueue-
0\00000000000000000015.txnindex and rebuilding index... (kafka.log.Log)
[2017-11-10 15:17:54,171] ERROR There was an error in one of the threads
during logs loading: java.nio.file.FileSystemException:
C:\Tools\Kafka\kafka-logs\hubone.dataCollect.orbiwise.ArchiveQueue-
0\00000000000000000015.timeindex: The process cannot access the file because
it is being used by another process.
(kafka.log.LogManager)
[2017-11-10 15:17:54,171] FATAL [Kafka Server 1], Fatal error during
KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.nio.file.FileSystemException: C:\Tools\Kafka\kafka-
logs\hubone.dataCollect.orbiwise.ArchiveQueue-
0\00000000000000000015.timeindex: The process cannot access the file because
it is being used by another process.
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
at sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
at java.nio.file.Files.deleteIfExists(Files.java:1165)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:318)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:279)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at kafka.log.Log.loadSegmentFiles(Log.scala:279)
at kafka.log.Log.loadSegments(Log.scala:383)
at kafka.log.Log.<init>(Log.scala:186)
at kafka.log.Log$.apply(Log.scala:1609)
at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$5$$anonfun$apply$12$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:172)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
When I delete kafka-logs, the offsets are corrupted.
Could you help me fix this problem?
I opened an issue in Kafka:
https://issues.apache.org/jira/browse/KAFKA-6200
I had a similar issue.
I simply deleted the files in the ..\tmp\kafka-logs\ directory.
Then I restarted the services and it worked like a charm.
In my case, the issue was resolved by emptying the Recycle Bin on Windows after deleting the logs from the tmp and log directories.

Unable to start zookeeper services exiting with "Severe unrecoverable error"

I'm unable to start zookeeper services. Please see the stack traces.
Trace 1 : org.apache.zookeeper.server.ZooKeeperServer:
Severe unrecoverable error, exiting
java.io.FileNotFoundException: /var/lib/zookeeper/version-2/snapshot.40003a3c3 (Permission denied)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
at org.apache.zookeeper.server.persistence.FileSnap.serialize(FileSnap.java:225)
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.save(FileTxnSnapLog.java:275)
at org.apache.zookeeper.server.ZooKeeperServer.takeSnapshot(ZooKeeperServer.java:270)
at org.apache.zookeeper.server.SyncRequestProcessor$1.run(SyncRequestProcessor.java:123)
Trace 2: org.apache.zookeeper.server.SyncRequestProcessor:
Severe unrecoverable error, exiting
java.io.FileNotFoundException: /var/lib/zookeeper/version-2/log.40003a3c5 (Permission denied)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
at org.apache.zookeeper.server.persistence.FileTxnLog.append(FileTxnLog.java:205)
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.append(FileTxnSnapLog.java:347)
at org.apache.zookeeper.server.ZKDatabase.append(ZKDatabase.java:476)
at org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:110)
I've tried:
Changing ownership to zookeeper:zookeeper.
Changing the permission level to 755 and, as a last resort, 777 for /var/lib/zookeeper.
Overriding the default dataLogDir and dataDir to /var/lib/zookeeper2.
Removing the ZooKeeper role services from the server and adding them back again.
Decommissioning the full server and adding it back to the cluster.
Do you need to use the /var/lib/ folder as dataDir? /var/lib is a special folder and is generally used by OS packages.
If you change dataDir and dataLogDir to a user-specific location, it will work.
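A minimal sketch of that change (the new paths and the zookeeper service account are assumptions; pick any location the ZooKeeper user owns):

# zoo.cfg
dataDir=/data/zookeeper
dataLogDir=/data/zookeeper/txnlog

# create the directories and hand them to the account that runs ZooKeeper (assumed zookeeper:zookeeper)
sudo mkdir -p /data/zookeeper/txnlog
sudo chown -R zookeeper:zookeeper /data/zookeeper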

zookeeper Supervisor returned FATAL. Please check the role log file, stderr, or stdout

I'm trying to run ZooKeeper, but I get an error: Failed to start role.
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:196)
at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223)
at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:417)
at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:409)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:156)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:79)
2016-03-01 10:55:38,873 ERROR org.apache.zookeeper.server.quorum.QuorumPeerMain: Unexpected exception, exiting abnormally
java.lang.RuntimeException: Unable to run quorum server
at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:454)
at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:409)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:156)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:79)
Caused by: java.io.FileNotFoundException: /var/lib/zookeeper/version-2/log.d00015690 (Permission denied)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:146)
at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.createInputArchive(FileTxnLog.java:574)
at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.goToNextLog(FileTxnLog.java:543)
at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:625)
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:196)
at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223)
at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:417)
Any help please!
It looks like you don't have permission to open this file. Have you tried running ZooKeeper as root / with sudo?
Caused by: java.io.FileNotFoundException: /var/lib/zookeeper/version-2/log.d00015690 (Permission denied)
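Before resorting to root, it can help to confirm which account the ZooKeeper JVM actually runs as and who owns the file it cannot open (the file path comes from the stack trace above):

ps aux | grep QuorumPeerMain                      # the first column is the user running ZooKeeper
ls -l /var/lib/zookeeper/version-2/log.d00015690  # compare against the file's owner and mode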