Kafka server hanging with Java 17 and security.inter.broker.protocol=SSL - apache-kafka

Kafka server hangs on startup with Java 17 when the security.inter.broker.protocol=SSL property is set. Below is the last line it prints before hanging:
[2023-02-07 21:35:47,883] INFO Awaiting socket connections on localhost:9093. (kafka.network.DataPlaneAcceptor)
I am running the latest Kafka 3.3.2 binary with Java 17.0.6. It works fine when I switch to Java 11.
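While it hangs, a thread dump should show which thread is blocked; for example (both tools ship with the JDK, <pid> is the broker JVM's process id):
jps -l | grep kafka.Kafka     # find the broker JVM's pid
jstack <pid>                  # or: jcmd <pid> Thread.print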
Broker Config:
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
listeners=PLAINTEXT://localhost:9092, SSL://localhost:9093
ssl.keystore.type=PKCS12
ssl.keystore.location=/config/kafka-local-test.local.pfx
ssl.keystore.password=hello
ssl.key.password=hello
ssl.client.auth=required
security.inter.broker.protocol=SSL
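One sanity check worth trying: confirm the PKCS12 keystore opens cleanly under the same Java 17 runtime, using its keytool (path and password taken from the config above):
keytool -list -v -storetype PKCS12 -keystore /config/kafka-local-test.local.pfx -storepass hello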
Logs:
hello@world bin % ./kafka-server-start.sh ../config/server.properties
[2023-02-07 21:35:46,981] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2023-02-07 21:35:47,130] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2023-02-07 21:35:47,179] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
[2023-02-07 21:35:47,180] INFO starting (kafka.server.KafkaServer)
[2023-02-07 21:35:47,180] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2023-02-07 21:35:47,187] INFO [ZooKeeperClient Kafka server] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient)
[2023-02-07 21:35:47,193] INFO Client environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT (org.apache.zookeeper.ZooKeeper)
[2023-02-07 21:35:47,193] INFO Client environment:host.name=hello.com (org.apache.zookeeper.ZooKeeper)
[2023-02-07 21:35:47,193] INFO Client environment:java.version=17.0.6 (org.apache.zookeeper.ZooKeeper)
[2023-02-07 21:35:47,193] INFO Client environment:java.vendor=Homebrew (org.apache.zookeeper.ZooKeeper)
[2023-02-07 21:35:47,193] INFO Client environment:java.home=/opt/homebrew/Cellar/openjdk@17/17.0.6/libexec/openjdk.jdk/Contents/Home (org.apache.zookeeper.ZooKeeper)
[2023-02-07 21:35:47,193] INFO Client environment:java.class.path=/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/activation-1.1.1.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/aopalliance-repackaged-2.6.1.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/argparse4j-0.7.0.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/audience-annotations-0.5.0.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/commons-cli-1.4.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/commons-lang3-3.12.0.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/commons-lang3-3.8.1.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/connect-api-3.3.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/connect-basic-auth-extension-3.3.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/connect-json-3.3.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/connect-mirror-3.3.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/connect-mirror-client-3.3.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/connect-runtime-3.3.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/connect-transforms-3.3.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/hk2-api-2.6.1.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/hk2-locator-2.6.1.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/hk2-utils-2.6.1.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jackson-annotations-2.13.4.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jackson-core-2.13.4.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jackson-databind-2.13.4.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jackson-dataformat-csv-2.13.4.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jackson-datatype-jdk8-2.13.4.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jackson-jaxrs-base-2.13.4.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jackson-jaxrs-json-provider-2.13.4.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jackson-module-jaxb-annotations-2.13.4.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jackson-module-scala_2.13-2.13.4.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jakarta.activation-api-1.2.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jakarta.annotation-api-1.3.5.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jakarta.inject-2.6.1.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jakarta.validation-api-2.0.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jakarta.xml.bind-api-2.3.3.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/javassist-3.27.0-GA.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/javax.servlet-api-3.1.0.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/javax.ws.rs-api-2.1.1.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jaxb-api-2.3.0.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jersey-client-2.34.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jersey-common-2.34.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jersey-container-servlet-2.34.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jersey-container-servlet-core-2.34.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jersey-hk2-2.34.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jersey-server-2.34.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jetty-client-9.4.48.v20220622.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/.
./libs/jetty-continuation-9.4.48.v20220622.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jetty-http-9.4.48.v20220622.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jetty-io-9.4.48.v20220622.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jetty-security-9.4.48.v20220622.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jetty-server-9.4.48.v20220622.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jetty-servlet-9.4.48.v20220622.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jetty-servlets-9.4.48.v20220622.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jetty-util-9.4.48.v20220622.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jetty-util-ajax-9.4.48.v20220622.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jline-3.21.0.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jopt-simple-5.0.4.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/jose4j-0.7.9.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/kafka-clients-3.3.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/kafka-log4j-appender-3.3.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/kafka-metadata-3.3.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/kafka-raft-3.3.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/kafka-server-common-3.3.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/kafka-shell-3.3.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/kafka-storage-3.3.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/kafka-storage-api-3.3.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/kafka-streams-3.3.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/kafka-streams-examples-3.3.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/kafka-streams-scala_2.13-3.3.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/kafka-streams-test-utils-3.3.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/kafka-tools-3.3.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/kafka_2.13-3.3.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/lz4-java-1.8.0.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/maven-artifact-3.8.4.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/metrics-core-2.2.0.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/metrics-core-4.1.12.1.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/netty-buffer-4.1.78.Final.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/netty-codec-4.1.78.Final.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/netty-common-4.1.78.Final.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/netty-handler-4.1.78.Final.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/netty-resolver-4.1.78.Final.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/netty-transport-4.1.78.Final.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/netty-transport-classes-epoll-4.1.78.Final.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/netty-transport-native-epoll-4.1.78.Final.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/netty-transport-native-unix-common-4.1.78.Final.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/osgi-resource-locator-1.0.3.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/paranamer-2.8.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/plexus-utils-3.3.0.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/reflections-0.9.12.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/reload4j-1.2.19.jar:/Users/hello/Downloads/kafka_2
.13-3.3.2/bin/../libs/rocksdbjni-7.1.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/scala-collection-compat_2.13-2.6.0.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/scala-java8-compat_2.13-1.0.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/scala-library-2.13.8.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/scala-logging_2.13-3.9.4.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/scala-reflect-2.13.8.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/slf4j-api-1.7.36.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/slf4j-reload4j-1.7.36.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/snappy-java-1.1.8.4.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/swagger-annotations-2.2.0.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/trogdor-3.3.2.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/zookeeper-3.6.3.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/zookeeper-jute-3.6.3.jar:/Users/hello/Downloads/kafka_2.13-3.3.2/bin/../libs/zstd-jni-1.5.2-1.jar (org.apache.zookeeper.ZooKeeper)
[2023-02-07 21:35:47,193] INFO Client environment:java.library.path=/Users/hello/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:. (org.apache.zookeeper.ZooKeeper)
[2023-02-07 21:35:47,193] INFO Client environment:java.io.tmpdir=/var/folders/3k/xxhlnq1j0y3_d6pvvtyy3_m80000gp/T/ (org.apache.zookeeper.ZooKeeper)
[2023-02-07 21:35:47,193] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2023-02-07 21:35:47,193] INFO Client environment:os.name=Mac OS X (org.apache.zookeeper.ZooKeeper)
[2023-02-07 21:35:47,194] INFO Client environment:os.arch=aarch64 (org.apache.zookeeper.ZooKeeper)
[2023-02-07 21:35:47,194] INFO Client environment:os.version=13.1 (org.apache.zookeeper.ZooKeeper)
[2023-02-07 21:35:47,194] INFO Client environment:user.name=hello (org.apache.zookeeper.ZooKeeper)
[2023-02-07 21:35:47,194] INFO Client environment:user.home=/Users/hello (org.apache.zookeeper.ZooKeeper)
[2023-02-07 21:35:47,194] INFO Client environment:user.dir=/Users/hello/Downloads/kafka_2.13-3.3.2/bin (org.apache.zookeeper.ZooKeeper)
[2023-02-07 21:35:47,194] INFO Client environment:os.memory.free=987MB (org.apache.zookeeper.ZooKeeper)
[2023-02-07 21:35:47,194] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
[2023-02-07 21:35:47,194] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
[2023-02-07 21:35:47,195] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@6b0d80ed (org.apache.zookeeper.ZooKeeper)
[2023-02-07 21:35:47,200] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
[2023-02-07 21:35:47,202] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
[2023-02-07 21:35:47,203] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2023-02-07 21:35:47,210] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2023-02-07 21:35:47,211] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2023-02-07 21:35:47,223] INFO Opening socket connection to server localhost/127.0.0.1:2181. (org.apache.zookeeper.ClientCnxn)
[2023-02-07 21:35:47,223] INFO SASL config status: Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2023-02-07 21:35:47,228] INFO Socket connection established, initiating session, client: /127.0.0.1:49225, server: localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn)
[2023-02-07 21:35:47,313] INFO Session establishment complete on server localhost/127.0.0.1:2181, session id = 0x3000405075a003f, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
[2023-02-07 21:35:47,315] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2023-02-07 21:35:47,472] INFO Cluster ID = YZq8o9BBQ4y0HWo2u1vpGQ (kafka.server.KafkaServer)
[2023-02-07 21:35:47,502] INFO KafkaConfig values:
advertised.listeners = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.heartbeat.interval.ms = 2000
broker.id = -1
broker.id.generation.enable = true
broker.rack = null
broker.session.timeout.ms = 9000
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
connections.max.reauth.ms = 0
control.plane.listener.name = null
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.listener.names = null
controller.quorum.append.linger.ms = 25
controller.quorum.election.backoff.max.ms = 1000
controller.quorum.election.timeout.ms = 1000
controller.quorum.fetch.timeout.ms = 2000
controller.quorum.request.timeout.ms = 2000
controller.quorum.retry.backoff.ms = 20
controller.quorum.voters = []
controller.quota.window.num = 11
controller.quota.window.size.seconds = 1
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delegation.token.secret.key = null
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
early.start.listeners = null
fetch.max.bytes = 57671680
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 0
group.max.session.timeout.ms = 1800000
group.max.size = 2147483647
group.min.session.timeout.ms = 6000
initial.broker.registration.timeout.ms = 60000
inter.broker.listener.name = null
inter.broker.protocol.version = 3.3-IV3
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = PLAINTEXT://localhost:9092, SSL://localhost:9093
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.max.compaction.lag.ms = 9223372036854775807
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /tmp/kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 3.0-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connection.creation.rate = 2147483647
max.connections = 2147483647
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1048588
metadata.log.dir = null
metadata.log.max.record.bytes.between.snapshots = 20971520
metadata.log.segment.bytes = 1073741824
metadata.log.segment.min.bytes = 8388608
metadata.log.segment.ms = 604800000
metadata.max.idle.interval.ms = 500
metadata.max.retention.bytes = -1
metadata.max.retention.ms = 604800000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
node.id = -1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
process.roles = []
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.window.num = 11
quota.window.size.seconds = 1
remote.log.index.file.cache.total.size.bytes = 1073741824
remote.log.manager.task.interval.ms = 30000
remote.log.manager.task.retry.backoff.max.ms = 30000
remote.log.manager.task.retry.backoff.ms = 500
remote.log.manager.task.retry.jitter = 0.2
remote.log.manager.thread.pool.size = 10
remote.log.metadata.manager.class.name = null
remote.log.metadata.manager.class.path = null
remote.log.metadata.manager.impl.prefix = null
remote.log.metadata.manager.listener.name = null
remote.log.reader.max.pending.tasks = 100
remote.log.reader.threads = 10
remote.log.storage.manager.class.name = null
remote.log.storage.manager.class.path = null
remote.log.storage.manager.impl.prefix = null
remote.log.storage.system.enable = false
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 30000
replica.selector.class = null
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.connect.timeout.ms = null
sasl.login.read.timeout.ms = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.login.retry.backoff.max.ms = 10000
sasl.login.retry.backoff.ms = 100
sasl.mechanism.controller.protocol = GSSAPI
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.oauthbearer.clock.skew.seconds = 30
sasl.oauthbearer.expected.audience = null
sasl.oauthbearer.expected.issuer = null
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
sasl.oauthbearer.jwks.endpoint.url = null
sasl.oauthbearer.scope.claim.name = scope
sasl.oauthbearer.sub.claim.name = sub
sasl.oauthbearer.token.endpoint.url = null
sasl.server.callback.handler.class = null
sasl.server.max.receive.size = 524288
security.inter.broker.protocol = SSL
security.providers = null
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
socket.listen.backlog.size = 50
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = required
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = /config/kafka-local-test.local.pfx
ssl.keystore.password = [hidden]
ssl.keystore.type = PKCS12
ssl.principal.mapping.rules = DEFAULT
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.clientCnxnSocket = null
zookeeper.connect = localhost:2181
zookeeper.connection.timeout.ms = 18000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 18000
zookeeper.set.acl = false
zookeeper.ssl.cipher.suites = null
zookeeper.ssl.client.enable = false
zookeeper.ssl.crl.enable = false
zookeeper.ssl.enabled.protocols = null
zookeeper.ssl.endpoint.identification.algorithm = HTTPS
zookeeper.ssl.keystore.location = null
zookeeper.ssl.keystore.password = null
zookeeper.ssl.keystore.type = null
zookeeper.ssl.ocsp.enable = false
zookeeper.ssl.protocol = TLSv1.2
zookeeper.ssl.truststore.location = null
zookeeper.ssl.truststore.password = null
zookeeper.ssl.truststore.type = null
(kafka.server.KafkaConfig)
[2023-02-07 21:35:47,520] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2023-02-07 21:35:47,520] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2023-02-07 21:35:47,521] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2023-02-07 21:35:47,522] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2023-02-07 21:35:47,554] INFO Loading logs from log dirs ArraySeq(/tmp/kafka-logs) (kafka.log.LogManager)
[2023-02-07 21:35:47,555] INFO Skipping recovery for all logs in /tmp/kafka-logs since clean shutdown file was found (kafka.log.LogManager)
[2023-02-07 21:35:47,564] INFO Loaded 0 logs in 10ms. (kafka.log.LogManager)
[2023-02-07 21:35:47,564] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2023-02-07 21:35:47,565] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2023-02-07 21:35:47,625] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
[2023-02-07 21:35:47,635] INFO [MetadataCache brokerId=0] Updated cache from existing <empty> to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=1). (kafka.server.metadata.ZkMetadataCache)
[2023-02-07 21:35:47,737] INFO [BrokerToControllerChannelManager broker=0 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread)
[2023-02-07 21:35:47,869] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2023-02-07 21:35:47,872] INFO Awaiting socket connections on localhost:9092. (kafka.network.DataPlaneAcceptor)
[2023-02-07 21:35:47,883] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
[2023-02-07 21:35:47,883] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
########### HANGS HERE [2023-02-07 21:35:47,883] INFO Awaiting socket connections on localhost:9093. (kafka.network.DataPlaneAcceptor)
^C[2023-02-07 21:36:34,604] INFO Terminating process due to signal SIGINT (org.apache.kafka.common.utils.LoggingSignalHandler)
[2023-02-07 21:36:34,605] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)
[2023-02-07 21:36:34,607] ERROR [KafkaServer id=0] Fatal error during KafkaServer shutdown. (kafka.server.KafkaServer)
java.lang.IllegalStateException: Kafka server is still starting up, cannot shut down!
at kafka.server.KafkaServer.shutdown(KafkaServer.scala:708)
at kafka.Kafka$.$anonfun$main$3(Kafka.scala:100)
at kafka.utils.Exit$.$anonfun$addShutdownHook$1(Exit.scala:38)
at java.base/java.lang.Thread.run(Thread.java:833)
[2023-02-07 21:36:34,609] ERROR Halting Kafka. (kafka.Kafka$)
Does anyone know what I need to set to make it work?

Related

Error connecting to node olfkafkademo-0.olfkafkademo-headless.default.svc.cluster.local:9092

I am trying to run Kafka using the bitnami/kafka Helm chart, version 7.0.2.
I am able to run all commands from inside the container, but when I try to access the container using port forwarding or a NodePort, it gives the error below:
[2022-04-13 13:07:51,461] WARN [AdminClient clientId=adminclient-1] Error connecting to node olfkafkademo-0.olfkafkademo-headless.default.svc.cluster.local:9092 (id: 1001 rack: null) (org.apache.kafka.clients.NetworkClient)
java.net.UnknownHostException: olfkafkademo-0.olfkafkademo-headless.default.svc.cluster.local: Name or service not known
at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929)
at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1519)
at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848)
at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1509)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1368)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1302)
at org.apache.kafka.clients.DefaultHostResolver.resolve(DefaultHostResolver.java:27)
at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:110)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:511)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:468)
at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:173)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:984)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:301)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:1128)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1388)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1331)
at java.base/java.lang.Thread.run(Thread.java:829)
The error occurs on running kafka-topics.sh --bootstrap-server 192.168.49.2:30009 --list. Output of kubectl get svc:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h54m
olfkafkademo NodePort 10.98.75.114 <none> 9092:30009/TCP 3h41m
olfkafkademo-headless ClusterIP None <none> 9092/TCP 3h41m
olfkafkademo-zookeeper ClusterIP 10.108.34.205 <none> 2181/TCP,2888/TCP,3888/TCP 3h41m
olfkafkademo-zookeeper-headless ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 3h41m
(base) fa059037@aadarsh-ubuntu:~/Documents/intern/kafka/project$ kubectl get pods
NAME READY STATUS RESTARTS AGE
olfkafkademo-0 1/1 Running 1 (3h41m ago) 3h41m
olfkafkademo-zookeeper-0 1/1 Running 0 3h41m
[2022-04-13 07:34:28,460] INFO Opening socket connection to server olfkafkademo-zookeeper/10.108.34.205:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2022-04-13 07:34:28,463] INFO Socket connection established to olfkafkademo-zookeeper/10.108.34.205:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2022-04-13 07:34:28,490] INFO Session establishment complete on server olfkafkademo-zookeeper/10.108.34.205:2181, sessionid = 0x10000656a410000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2022-04-13 07:34:28,494] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2022-04-13 07:34:28,778] INFO Cluster ID = _pi14GQ-RpKLqpG0HqVW9A (kafka.server.KafkaServer)
[2022-04-13 07:34:28,780] WARN No meta.properties file under dir /bitnami/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2022-04-13 07:34:28,821] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = PLAINTEXT://olfkafkademo-0.olfkafkademo-headless.default.svc.cluster.local:9092
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = -1
broker.id.generation.enable = true
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
connections.max.reauth.ms = 0
control.plane.listener.name = null
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = false
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 0
group.max.session.timeout.ms = 1800000
group.max.size = 2147483647
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = null
inter.broker.protocol.version = 2.3-IV1
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = PLAINTEXT://:9092
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.max.compaction.lag.ms = 9223372036854775807
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /bitnami/kafka/data
log.flush.interval.messages = 10000
log.flush.interval.ms = 1000
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.3-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = 1073741824
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections = 2147483647
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.principal.mapping.rules = [DEFAULT]
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = olfkafkademo-zookeeper
zookeeper.connection.timeout.ms = 6000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2022-04-13 07:34:28,828] INFO KafkaConfig values: (identical block logged a second time, omitted)
(kafka.server.KafkaConfig)
[2022-04-13 07:34:28,846] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2022-04-13 07:34:28,846] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2022-04-13 07:34:28,847] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2022-04-13 07:34:28,867] INFO Loading logs. (kafka.log.LogManager)
[2022-04-13 07:34:28,871] INFO Logs loading complete in 4 ms. (kafka.log.LogManager)
[2022-04-13 07:34:28,880] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2022-04-13 07:34:28,881] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2022-04-13 07:34:29,146] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2022-04-13 07:34:29,213] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : EndPoint(null,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)
[2022-04-13 07:34:29,215] INFO [SocketServer brokerId=1001] Started 1 acceptor threads for data-plane (kafka.network.SocketServer)
[2022-04-13 07:34:29,235] INFO [ExpirationReaper-1001-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2022-04-13 07:34:29,236] INFO [ExpirationReaper-1001-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2022-04-13 07:34:29,237] INFO [ExpirationReaper-1001-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2022-04-13 07:34:29,237] INFO [ExpirationReaper-1001-ElectPreferredLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2022-04-13 07:34:29,246] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2022-04-13 07:34:29,266] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
[2022-04-13 07:34:29,279] INFO Stat of the created znode at /brokers/ids/1001 is: 25,25,1649835269274,1649835269274,1,0,0,72058029612269568,294,0,25
(kafka.zk.KafkaZkClient)
[2022-04-13 07:34:29,280] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(olfkafkademo-0.olfkafkademo-headless.default.svc.cluster.local,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 25 (kafka.zk.KafkaZkClient)
[2022-04-13 07:34:29,294] WARN No meta.properties file under dir /bitnami/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2022-04-13 07:34:29,332] INFO [ExpirationReaper-1001-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2022-04-13 07:34:29,336] INFO [ExpirationReaper-1001-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2022-04-13 07:34:29,336] INFO [ExpirationReaper-1001-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2022-04-13 07:34:29,346] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
[2022-04-13 07:34:29,348] INFO [GroupCoordinator 1001]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2022-04-13 07:34:29,349] INFO [GroupCoordinator 1001]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2022-04-13 07:34:29,351] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 2 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2022-04-13 07:34:29,360] INFO [ProducerId Manager 1001]: Acquired new producerId block (brokerId:1001,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
[2022-04-13 07:34:29,418] INFO [TransactionCoordinator id=1001] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2022-04-13 07:34:29,420] INFO [Transaction Marker Channel Manager 1001]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2022-04-13 07:34:29,420] INFO [TransactionCoordinator id=1001] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2022-04-13 07:34:29,531] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2022-04-13 07:34:29,540] INFO [SocketServer brokerId=1001] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
[2022-04-13 07:34:29,542] INFO Kafka version: 2.3.1 (org.apache.kafka.common.utils.AppInfoParser)
[2022-04-13 07:34:29,542] INFO Kafka commitId: 18a913733fb71c01 (org.apache.kafka.common.utils.AppInfoParser)
[2022-04-13 07:34:29,542] INFO Kafka startTimeMs: 1649835269540 (org.apache.kafka.common.utils.AppInfoParser)
[2022-04-13 07:34:29,544] INFO [KafkaServer id=1001] started (kafka.server.KafkaServer)
[2022-04-13 07:36:36,478] INFO Creating topic first with configuration {} and initial partition assignment Map(0 -> ArrayBuffer(1001)) (kafka.zk.AdminZkClient)
[2022-04-13 07:36:36,495] INFO [KafkaApi-1001] Auto creation of topic first with 1 partitions and replication factor 1 is successful (kafka.server.KafkaApis)
[2022-04-13 07:36:36,570] INFO [ReplicaFetcherManager on broker 1001] Removed fetcher for partitions Set(first-0) (kafka.server.ReplicaFetcherManager)
[2022-04-13 07:36:36,637] INFO [Log partition=first-0, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2022-04-13 07:36:36,641] INFO [Log partition=first-0, dir=/bitnami/kafka/data] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 42 ms (kafka.log.Log)
[2022-04-13 07:36:36,642] INFO Created log for partition first-0 in /bitnami/kafka/data with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 1000, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 10000, message.format.version -> 2.3-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> 1073741824, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2022-04-13 07:36:36,643] INFO [Partition first-0 broker=1001] No checkpointed highwatermark is found for partition first-0 (kafka.cluster.Partition)
[2022-04-13 07:36:36,644] INFO Replica loaded for partition first-0 with initial high watermark 0 (kafka.cluster.Replica)
[2022-04-13 07:36:36,646] INFO [Partition first-0 broker=1001] first-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2022-04-13 07:44:29,349] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2022-04-13 07:54:29,349] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
The problem seems similar to this issue:
StackOverFlowLink
On running dnslookup olfkafkademo-0.olfkafkademo-headless.default.svc.cluster.local, the name resolves correctly to an IP address, the same as above.
The same issue pops up with all bitnami/kafka Helm chart versions below 10.
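For comparison, the same lookup run from inside the cluster (throwaway pod; busybox is just an example image that bundles nslookup):
kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- nslookup olfkafkademo-0.olfkafkademo-headless.default.svc.cluster.local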
advertised.listeners = PLAINTEXT://olfkafkademo-0.olfkafkademo-headless.default.svc.cluster.local:9092
This is the only address that clients will be able to resolve.
try to access the container using port forwarding or NodePort
Your host machine will not be able to resolve the cluster's service DNS names. You either need an Ingress, or you need to update the advertised listeners to include the cluster IP + NodePort on each Kafka pod. The headless service is only meant to be used internally within the cluster.
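A sketch of that dual-listener idea in plain Kafka config terms (the listener names INTERNAL/EXTERNAL are arbitrary, the node IP and NodePort come from the kubectl output above, and the EXTERNAL container port 9094 assumes a NodePort service targeting it; with the Bitnami image these would be set via KAFKA_CFG_* environment variables or chart values rather than edited by hand):
listeners=INTERNAL://:9092,EXTERNAL://:9094
advertised.listeners=INTERNAL://olfkafkademo-0.olfkafkademo-headless.default.svc.cluster.local:9092,EXTERNAL://192.168.49.2:30009
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL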
It seems you didn't set externalAccess.enabled=true, which defaults to exposing port 9094 outside the cluster and should not require any kubectl port-forward.
https://github.com/bitnami/charts/tree/master/bitnami/kafka#traffic-exposure-parameters
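For example (parameter names taken from the linked README; whether your chart version 7.0.2 supports them needs checking, since externalAccess is a newer addition):
helm upgrade --install olfkafkademo bitnami/kafka \
  --set externalAccess.enabled=true \
  --set externalAccess.service.type=NodePort \
  --set 'externalAccess.service.nodePorts[0]=30010'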
I'd suggest using an operator rather than just deploying a Helm chart - https://strimzi.io - and following its own documentation for setting up external access.

Running a new Kafka server throws an exception saying port 9092 is already in use, even though I changed the port in the properties file

In server-1.properties I changed the port, log directory, and broker id, but when I start server-1 it throws:
KafkaException: Socket server failed to bind to 0.0.0.0:9092: Address already in use.
In the logs I can see the other properties have changed, but the port is still 9092.
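One thing to note: in the file below, the listeners line is still commented out (#listeners=PLAINTEXT://:9093), so the broker falls back to the default port 9092, which the first broker already holds. To see which process currently owns the port (Linux/macOS):
lsof -nP -iTCP:9092 -sTCP:LISTEN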
This is the server-1.properties file:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1
############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9093
#listeners=PLAINTEXT://:9093
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9093
# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs-1
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
############################# Group Coordinator Settings #############################
# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
And this is what happens in the terminal:
webwerks@webwerks-H81M-S:~/kafka_2.12-2.2.0$ bin/kafka-server-start.sh config/server-1.properties &
[3] 16578
[2] Exit 1 bin/kafka-server-start.sh config/server-1.properties
webwerks@webwerks-H81M-S:~/kafka_2.12-2.2.0$
[2019-07-05 14:42:15,874] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2019-07-05 14:42:16,188] INFO Cluster ID = JNX6PuhSQqWjkA785Kl5tQ (kafka.server.KafkaServer)
[2019-07-05 14:42:16,190] WARN No meta.properties file under dir /tmp/kafka-logs-1/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2019-07-05 14:42:16,249] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = null
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = 1
broker.id.generation.enable = true
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
connections.max.reauth.ms = 0
control.plane.listener.name = null
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 0
group.max.session.timeout.ms = 300000
group.max.size = 2147483647
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = null
inter.broker.protocol.version = 2.2-IV1
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = null
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /tmp/kafka-logs-1
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.2-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.principal.mapping.rules = [DEFAULT]
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = localhost:2181
zookeeper.connection.timeout.ms = 6000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2019-07-05 14:42:16,278] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-07-05 14:42:16,278] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-07-05 14:42:16,279] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-07-05 14:42:16,305] INFO Loading logs. (kafka.log.LogManager)
[2019-07-05 14:42:16,312] INFO Logs loading complete in 6 ms. (kafka.log.LogManager)
[2019-07-05 14:42:16,327] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2019-07-05 14:42:16,331] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2019-07-05 14:42:16,573] ERROR [KafkaServer id=1] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Socket server failed to bind to 0.0.0.0:9092: Address already in use.
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:573)
at kafka.network.Acceptor.<init>(SocketServer.scala:451)
at kafka.network.SocketServer.createAcceptor(SocketServer.scala:245)
at kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:215)
at kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:214)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:214)
at kafka.network.SocketServer.startup(SocketServer.scala:114)
at kafka.server.KafkaServer.startup(KafkaServer.scala:253)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:75)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:569)
... 13 more
[2019-07-05 14:42:16,575] INFO [KafkaServer id=1] shutting down (kafka.server.KafkaServer)
[2019-07-05 14:42:16,576] INFO [SocketServer brokerId=1] Stopping socket server request processors (kafka.network.SocketServer)
[2019-07-05 14:42:16,578] INFO [SocketServer brokerId=1] Stopped socket server request processors (kafka.network.SocketServer)
[2019-07-05 14:42:16,580] INFO Shutting down. (kafka.log.LogManager)
[2019-07-05 14:42:16,601] INFO Shutdown complete. (kafka.log.LogManager)
[2019-07-05 14:42:16,602] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2019-07-05 14:42:16,625] INFO Session: 0x10000131261000d closed (org.apache.zookeeper.ZooKeeper)
[2019-07-05 14:42:16,626] INFO EventThread shut down for session: 0x10000131261000d (org.apache.zookeeper.ClientCnxn)
[2019-07-05 14:42:16,626] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2019-07-05 14:42:16,626] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-07-05 14:42:17,278] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-07-05 14:42:17,278] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-07-05 14:42:17,279] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-07-05 14:42:18,279] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-07-05 14:42:18,279] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-07-05 14:42:18,279] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-07-05 14:42:19,279] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-07-05 14:42:19,279] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-07-05 14:42:19,282] INFO [SocketServer brokerId=1] Shutting down socket server (kafka.network.SocketServer)
[2019-07-05 14:42:19,320] INFO [SocketServer brokerId=1] Shutdown completed (kafka.network.SocketServer)
[2019-07-05 14:42:19,324] INFO [KafkaServer id=1] shut down completed (kafka.server.KafkaServer)
[2019-07-05 14:42:19,324] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2019-07-05 14:42:19,325] INFO [KafkaServer id=1] shutting down (kafka.server.Ka
You need to remove the hash (#) at the beginning of the line; otherwise the line is a comment and the default value (port 9092) is picked up!
Change:
#listeners=PLAINTEXT://:9093
#advertised.listeners=PLAINTEXT://your.host.name:9093
To:
listeners=PLAINTEXT://:9093
advertised.listeners=PLAINTEXT://your.host.name:9093
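If the bind error persists after fixing the listener line, something is genuinely holding port 9092, usually the first broker or a stale run that never exited. A quick check, sketched here for Linux (the PID in the output is whatever process owns the socket and will differ on your machine):
$ ss -lntp | grep 9092        # or: lsof -i :9092
$ kill <PID>                  # only if it is a stale broker process
Remember that every broker on the same host needs its own port in listeners (e.g. 9092, 9093, 9094), plus a unique broker.id and log.dirs, as in the properties file above.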

Kafka logs deleted even after setting log.retention.ms = -1

I'm running a Hyperledger Fabric 1.2.0 network with 5 Kafkas and 3 Zookeepers.
The issue I'm facing is that even after setting log.retention.ms = -1, Kafka deleted a few of the initial logs. To my knowledge, setting this value to -1 should guarantee that logs are persisted forever.
I set the below config for Kafkas in docker-compose.yaml.
kafka0:
  image: hyperledger/fabric-kafka:amd64-0.4.13
  container_name: kafka0
  environment:
    - KAFKA_LOG_RETENTION_MS=-1
    - KAFKA_MESSAGE_MAX_BYTES=103809024
    - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024
    - KAFKA_BROKER_ID=0
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
    - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
    - KAFKA_DEFAULT_REPLICATION_FACTOR=3
    - KAFKA_MIN_INSYNC_REPLICAS=2
  ports:
    - 9000:9092
  networks:
    - byfn
I use a similar config for the other Kafka containers. Below is the config that I see when I restart any of my Kafka containers.
background.threads = 10
broker.id = 3
broker.id.generation.enable = true
compression.type = producer
connections.max.idle.ms = 600000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 3
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 0
group.max.session.timeout.ms = 300000
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = null
inter.broker.protocol.version = 1.0-IV0
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = null
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /tmp/kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.format.version = 1.0-IV0
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = -1
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
message.max.bytes = 103809024
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 2
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 1440
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 103809024
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
zookeeper.connection.timeout.ms = 6000
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
My Kafka version is 1.13.0. Below are the logs that may be helpful and relevant to the question.
INFO Incrementing log start offset of partition mychannel-0 to 30457 in dir /tmp/kafka-logs (kafka.log.Log)
INFO Cleared earliest 0 entries from epoch cache based on passed offset 30457 leaving 6 in EpochFile for partition mychannel-0 (kafka.server.epoch.LeaderEpochFileCache)
INFO Updated PartitionLeaderEpoch. New: {epoch:13, offset:8621}, Current: {epoch:12, offset:8618} for Partition: mychannel1-0. Cache now contains 5 entries. (kafka.server.epoch.LeaderEpochFileCache)
INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
INFO Found deletable segments with base offsets [0,4883,7464,22368,25071,26892] due to log start offset 30457 breach (kafka.log.Log)
INFO Scheduling log segment 0 for log mychannel-0 for deletion. (kafka.log.Log)
INFO Scheduling log segment 4883 for log mychannel-0 for deletion. (kafka.log.Log)
INFO Scheduling log segment 7464 for log mychannel-0 for deletion. (kafka.log.Log)
INFO Scheduling log segment 22368 for log mychannel-0 for deletion. (kafka.log.Log)
INFO Scheduling log segment 25071 for log mychannel-0 for deletion. (kafka.log.Log)
INFO Scheduling log segment 26892 for log mychannel-0 for deletion. (kafka.log.Log)
INFO Deleting segment 0 from log mychannel-0. (kafka.log.Log)
INFO Deleting segment 4883 from log mychannel-0. (kafka.log.Log)
INFO Deleting segment 7464 from log mychannel-0. (kafka.log.Log)
INFO Deleting segment 22368 from log mychannel-0. (kafka.log.Log)
INFO Deleting segment 25071 from log mychannel-0. (kafka.log.Log)
INFO Deleting segment 26892 from log mychannel-0. (kafka.log.Log)
INFO Deleting index /tmp/kafka-logs/mychannel-0/00000000000000026892.index.deleted (kafka.log.OffsetIndex)
INFO Deleting index /tmp/kafka-logs/mychannel-0/00000000000000022368.index.deleted (kafka.log.OffsetIndex)
INFO Deleting index /tmp/kafka-logs/mychannel-0/00000000000000026892.timeindex.deleted (kafka.log.TimeIndex)
INFO Deleting index /tmp/kafka-logs/mychannel-0/00000000000000022368.timeindex.deleted (kafka.log.TimeIndex)
INFO Deleting index /tmp/kafka-logs/mychannel-0/00000000000000025071.index.deleted (kafka.log.OffsetIndex)
INFO Deleting index /tmp/kafka-logs/mychannel-0/00000000000000000000.index.deleted (kafka.log.OffsetIndex)
INFO Deleting index /tmp/kafka-logs/mychannel-0/00000000000000007464.index.deleted (kafka.log.OffsetIndex)
INFO Deleting index /tmp/kafka-logs/mychannel-0/00000000000000004883.index.deleted (kafka.log.OffsetIndex)
INFO Deleting index /tmp/kafka-logs/mychannel-0/00000000000000025071.timeindex.deleted (kafka.log.TimeIndex)
INFO Deleting index /tmp/kafka-logs/mychannel-0/00000000000000000000.timeindex.deleted (kafka.log.TimeIndex)
INFO Deleting index /tmp/kafka-logs/mychannel-0/00000000000000007464.timeindex.deleted (kafka.log.TimeIndex)
INFO Deleting index /tmp/kafka-logs/mychannel-0/00000000000000004883.timeindex.deleted (kafka.log.TimeIndex)
According to the Kafka docs:
log.cleaner.enable: Enable the log cleaner process to run on the server. Should be enabled if using any topics with a cleanup.policy=compact including the internal offsets topic. If disabled those topics will not be compacted and continually grow in size.
I would suggest setting
log.cleaner.enable=false
and restarting your Kafka cluster.
However, this will have an effect even on internal Kafka topics. Alternatively, you could try to disable retention only for the topic of interest. Note that retention.hours is not a valid per-topic config; the per-topic equivalents of the broker settings are retention.ms and retention.bytes:
$ bin/kafka-topics.sh --zookeeper zookeeperHost:2181 --alter --topic mytopic --config retention.ms=-1
$ bin/kafka-topics.sh --zookeeper zookeeperHost:2181 --alter --topic mytopic --config retention.bytes=-1
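To confirm the override took effect, you could describe the topic afterwards (a sketch; mytopic and zookeeperHost are the same placeholders as in the commands above):
$ bin/kafka-topics.sh --zookeeper zookeeperHost:2181 --describe --topic mytopic
Any per-topic overrides show up in the Configs: field of the output.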
PS: In your logs I can see that log.retention.hours is set to 168; maybe that is why log.retention.ms=-1 is not taking effect.

Kafka server does not start

I installed Kafka and Zookeeper on my OSX machine using Homebrew, and I'm trying to launch Zookeeper and Kafka-server following this blog post.
zookeeper-server-start /usr/local/etc/kafka/zookeeper.properties works fine, as confirmed with telnet localhost 2181. Launching kafka-server-start /usr/local/etc/kafka/server.properties results in the following output (error at the end). What should I do to get the Kafka server to launch successfully?
$ kafka-server-start /usr/local/etc/kafka/server.properties
[2018-11-16 13:58:53,513] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2018-11-16 13:58:54,002] INFO starting (kafka.server.KafkaServer)
[2018-11-16 13:58:54,003] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2018-11-16 13:58:54,024] INFO [ZooKeeperClient] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient)
[2018-11-16 13:58:54,034] INFO Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,034] INFO Client environment:host.name=martinas-mbp.fritz.box (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,034] INFO Client environment:java.version=1.8.0_192 (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,035] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,035] INFO Client environment:java.home=/Library/Java/JavaVirtualMachines/jdk1.8.0_192.jdk/Contents/Home/jre (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,035] INFO Client environment:java.class.path=/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/activation-1.1.1.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/aopalliance-repackaged-2.5.0-b42.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/argparse4j-0.7.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/audience-annotations-0.5.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/commons-lang3-3.5.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/connect-api-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/connect-basic-auth-extension-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/connect-file-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/connect-json-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/connect-runtime-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/connect-transforms-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/guava-20.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/hk2-api-2.5.0-b42.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/hk2-locator-2.5.0-b42.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/hk2-utils-2.5.0-b42.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jackson-annotations-2.9.6.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jackson-core-2.9.6.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jackson-databind-2.9.6.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jackson-jaxrs-base-2.9.6.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jackson-jaxrs-json-provider-2.9.6.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jackson-module-jaxb-annotations-2.9.6.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/javassist-3.22.0-CR2.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/javax.annotation-api-1.2.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/javax.inject-1.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/javax.inject-2.5.0-b42.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/javax.servlet-api-3.1.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/javax.ws.rs-api-2.1.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jaxb-api-2.3.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jersey-client-2.27.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jersey-common-2.27.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jersey-container-servlet-2.27.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jersey-container-servlet-core-2.27.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jersey-hk2-2.27.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jersey-media-jaxb-2.27.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jersey-server-2.27.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-client-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-continuation-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-http-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-io-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-security-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-server-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-servlet-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-servlets-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-util-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/
jopt-simple-5.0.4.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka-clients-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka-log4j-appender-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka-streams-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka-streams-examples-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka-streams-scala_2.12-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka-streams-test-utils-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka-tools-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka_2.12-2.0.0-sources.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka_2.12-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/log4j-1.2.17.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/lz4-java-1.4.1.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/maven-artifact-3.5.3.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/metrics-core-2.2.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/osgi-resource-locator-1.0.1.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/plexus-utils-3.1.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/reflections-0.9.11.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/rocksdbjni-5.7.3.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/scala-library-2.12.6.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/scala-logging_2.12-3.9.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/scala-reflect-2.12.6.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/slf4j-api-1.7.25.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/slf4j-log4j12-1.7.25.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/snappy-java-1.1.7.1.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/validation-api-1.1.0.Final.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/zkclient-0.10.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/zookeeper-3.4.13.jar (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:java.library.path=/Users/michelangelo/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:. (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:java.io.tmpdir=/var/folders/s_/_q9gnhkn0816xyzxh3sd7vdh0000gp/T/ (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:os.name=Mac OS X (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:os.arch=x86_64 (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:os.version=10.12.6 (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:user.name=michelangelo (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:user.home=/Users/michelangelo (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:user.dir=/bin (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,038] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@6ef888f6 (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,055] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2018-11-16 13:58:54,055] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-11-16 13:58:54,069] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2018-11-16 13:58:54,078] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x10000041838000b, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2018-11-16 13:58:54,082] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2018-11-16 13:58:54,277] INFO Cluster ID = 8TON7fHXTUuVjzYM9iHZHQ (kafka.server.KafkaServer)
[2018-11-16 13:58:54,352] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = null
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connections.max.idle.ms = 600000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 0
group.max.session.timeout.ms = 300000
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = null
inter.broker.protocol.version = 2.0-IV1
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = null
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /usr/local/var/lib/kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.0-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = localhost:2181
zookeeper.connection.timeout.ms = 6000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2018-11-16 13:58:54,384] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:54,384] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:54,385] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:54,411] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /usr/local/var/lib/kafka-logs. A Kafka instance in another process or thread is using this directory.
at kafka.log.LogManager.$anonfun$lockLogDirs$1(LogManager.scala:241)
at scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:241)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at scala.collection.TraversableLike.flatMap(TraversableLike.scala:241)
at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:238)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
at kafka.log.LogManager.lockLogDirs(LogManager.scala:236)
at kafka.log.LogManager.<init>(LogManager.scala:97)
at kafka.log.LogManager$.apply(LogManager.scala:968)
at kafka.server.KafkaServer.startup(KafkaServer.scala:237)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:75)
at kafka.Kafka.main(Kafka.scala)
[2018-11-16 13:58:54,413] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)
[2018-11-16 13:58:54,417] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2018-11-16 13:58:54,420] INFO Session: 0x10000041838000b closed (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,421] INFO EventThread shut down for session: 0x10000041838000b (org.apache.zookeeper.ClientCnxn)
[2018-11-16 13:58:54,422] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2018-11-16 13:58:54,423] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:55,390] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:55,390] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:55,390] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:56,393] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:56,393] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:56,393] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:57,398] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:57,398] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:57,407] INFO [KafkaServer id=0] shut down completed (kafka.server.KafkaServer)
[2018-11-16 13:58:57,408] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2018-11-16 13:58:57,411] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)
This is the issue:
org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /usr/local/var/lib/kafka-logs. A Kafka instance in another process or thread is using this directory.
There's another instance of Kafka already running; kill it first. You should be able to identify it with:
lsof /usr/local/var/lib/kafka-logs/.lock
EDIT:
If Kafka was installed via Homebrew, try brew services stop kafka first.
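For reference, a minimal recovery sequence, assuming a Homebrew install and the log directory shown in the error message:
# Find the process holding the lock file
lsof /usr/local/var/lib/kafka-logs/.lock
# If Kafka runs as a Homebrew service, stop it cleanly
brew services stop kafka
# Otherwise, terminate the PID that lsof reported (replace <pid>)
kill <pid>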

Kafka starts, but won't restart after shutdown

Under Ubuntu Desktop 16.04, I installed Confluent Open Source using apt.
I run ZooKeeper.
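(With the Confluent apt packages, starting ZooKeeper is typically a command along these lines; the exact properties path is an assumption:)
sudo zookeeper-server-start /etc/kafka/zookeeper.properties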
Then I run Kafka:
idf@DESKTOP-QVGBOPK:~$ sudo kafka-server-start /etc/kafka/server.properties
[2018-10-07 01:48:12,378] INFO KafkaConfig values:
advertised.host.name = null
metric.reporters = []
quota.producer.default = 9223372036854775807
offsets.topic.num.partitions = 50
log.flush.interval.messages = 9223372036854775807
auto.create.topics.enable = true
controller.socket.timeout.ms = 30000
log.flush.interval.ms = null
principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
replica.socket.receive.buffer.bytes = 65536
min.insync.replicas = 1
replica.fetch.wait.max.ms = 500
num.recovery.threads.per.data.dir = 1
ssl.keystore.type = JKS
sasl.mechanism.inter.broker.protocol = GSSAPI
default.replication.factor = 1
ssl.truststore.password = null
log.preallocate = false
sasl.kerberos.principal.to.local.rules = [DEFAULT]
fetch.purgatory.purge.interval.requests = 1000
ssl.endpoint.identification.algorithm = null
replica.socket.timeout.ms = 30000
message.max.bytes = 1000012
num.io.threads = 8
offsets.commit.required.acks = -1
log.flush.offset.checkpoint.interval.ms = 60000
delete.topic.enable = false
quota.window.size.seconds = 1
ssl.truststore.type = JKS
offsets.commit.timeout.ms = 5000
quota.window.num = 11
zookeeper.connect = localhost:2181
authorizer.class.name =
num.replica.fetchers = 1
log.retention.ms = null
log.roll.jitter.hours = 0
log.cleaner.enable = true
offsets.load.buffer.size = 5242880
log.cleaner.delete.retention.ms = 86400000
ssl.client.auth = none
controlled.shutdown.max.retries = 3
queued.max.requests = 500
offsets.topic.replication.factor = 3
log.cleaner.threads = 1
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
socket.request.max.bytes = 104857600
ssl.trustmanager.algorithm = PKIX
zookeeper.session.timeout.ms = 6000
log.retention.bytes = -1
log.message.timestamp.type = CreateTime
sasl.kerberos.min.time.before.relogin = 60000
zookeeper.set.acl = false
connections.max.idle.ms = 600000
offsets.retention.minutes = 1440
replica.fetch.backoff.ms = 1000
inter.broker.protocol.version = 0.10.0-IV1
log.retention.hours = 168
num.partitions = 1
broker.id.generation.enable = true
listeners = null
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
log.roll.ms = null
log.flush.scheduler.interval.ms = 9223372036854775807
ssl.cipher.suites = null
log.index.size.max.bytes = 10485760
ssl.keymanager.algorithm = SunX509
security.inter.broker.protocol = PLAINTEXT
replica.fetch.max.bytes = 1048576
advertised.port = null
log.cleaner.dedupe.buffer.size = 134217728
replica.high.watermark.checkpoint.interval.ms = 5000
log.cleaner.io.buffer.size = 524288
sasl.kerberos.ticket.renew.window.factor = 0.8
zookeeper.connection.timeout.ms = 6000
controlled.shutdown.retry.backoff.ms = 5000
log.roll.hours = 168
log.cleanup.policy = delete
host.name =
log.roll.jitter.ms = null
max.connections.per.ip = 2147483647
offsets.topic.segment.bytes = 104857600
background.threads = 10
quota.consumer.default = 9223372036854775807
request.timeout.ms = 30000
log.message.format.version = 0.10.0-IV1
log.index.interval.bytes = 4096
log.dir = /tmp/kafka-logs
log.segment.bytes = 1073741824
log.cleaner.backoff.ms = 15000
offset.metadata.max.bytes = 4096
ssl.truststore.location = null
group.max.session.timeout.ms = 300000
ssl.keystore.password = null
zookeeper.sync.time.ms = 2000
port = 9092
log.retention.minutes = null
log.segment.delete.delay.ms = 60000
log.dirs = /var/lib/kafka
controlled.shutdown.enable = true
compression.type = producer
max.connections.per.ip.overrides =
log.message.timestamp.difference.max.ms = 9223372036854775807
sasl.kerberos.kinit.cmd = /usr/bin/kinit
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
auto.leader.rebalance.enable = true
leader.imbalance.check.interval.seconds = 300
log.cleaner.min.cleanable.ratio = 0.5
replica.lag.time.max.ms = 10000
num.network.threads = 3
ssl.key.password = null
reserved.broker.max.id = 1000
metrics.num.samples = 2
socket.send.buffer.bytes = 102400
ssl.protocol = TLS
socket.receive.buffer.bytes = 102400
ssl.keystore.location = null
replica.fetch.min.bytes = 1
broker.rack = null
unclean.leader.election.enable = true
sasl.enabled.mechanisms = [GSSAPI]
group.min.session.timeout.ms = 6000
log.cleaner.io.buffer.load.factor = 0.9
offsets.retention.check.interval.ms = 600000
producer.purgatory.purge.interval.requests = 1000
metrics.sample.window.ms = 30000
broker.id = 0
offsets.topic.compression.codec = 0
log.retention.check.interval.ms = 300000
advertised.listeners = null
leader.imbalance.per.broker.percentage = 10
(kafka.server.KafkaConfig)
[2018-10-07 01:48:12,540] WARN Please note that the support metrics collection feature ("Metrics") of Proactive Support is enabled. With Metrics enabled, this broker is configured to collect and report certain broker and cluster metadata ("Metadata") about your use of the Confluent Platform 2.0 (including without limitation, your remote internet protocol address) to Confluent, Inc. ("Confluent") or its parent, subsidiaries, affiliates or service providers every 24hours. This Metadata may be transferred to any country in which Confluent maintains facilities. For a more in depth discussion of how Confluent processes such information, please read our Privacy Policy located at http://www.confluent.io/privacy. By proceeding with `confluent.support.metrics.enable=true`, you agree to all such collection, transfer, storage and use of Metadata by Confluent. You can turn the Metrics feature off by setting `confluent.support.metrics.enable=false` in the broker configuration and restarting the broker. See the Confluent Platform documentation for further information. (io.confluent.support.metrics.SupportedServerStartable)
[2018-10-07 01:48:12,543] INFO starting (kafka.server.KafkaServer)
[2018-10-07 01:48:12,554] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2018-10-07 01:48:12,569] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2018-10-07 01:48:12,577] INFO Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,578] INFO Client environment:host.name=DESKTOP-QVGBOPK.localdomain (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,580] INFO Client environment:java.version=1.8.0_181 (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,581] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,584] INFO Client environment:java.home=/usr/lib/jvm/java-8-oracle/jre (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,585] INFO Client environment:java.class.path=:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.4.0-b34.jar:/usr/bin/../share/java/kafka/argparse4j-0.5.0.jar:/usr/bin/../share/java/kafka/avro-1.7.7.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.8.3.jar:/usr/bin/../share/java/kafka/commons-codec-1.9.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.1.jar:/usr/bin/../share/java/kafka/commons-compress-1.4.1.jar:/usr/bin/../share/java/kafka/commons-digester-1.8.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.1.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/commons-validator-1.4.1.jar:/usr/bin/../share/java/kafka/connect-api-0.10.0.1-cp1.jar:/usr/bin/../share/java/kafka/connect-file-0.10.0.1-cp1.jar:/usr/bin/../share/java/kafka/connect-json-0.10.0.1-cp1.jar:/usr/bin/../share/java/kafka/connect-runtime-0.10.0.1-cp1.jar:/usr/bin/../share/java/kafka/guava-18.0.jar:/usr/bin/../share/java/kafka/hk2-api-2.4.0-b34.jar:/usr/bin/../share/java/kafka/hk2-locator-2.4.0-b34.jar:/usr/bin/../share/java/kafka/hk2-utils-2.4.0-b34.jar:/usr/bin/../share/java/kafka/httpclient-4.5.1.jar:/usr/bin/../share/java/kafka/httpcore-4.4.3.jar:/usr/bin/../share/java/kafka/httpmime-4.5.1.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.6.0.jar:/usr/bin/../share/java/kafka/jackson-core-2.6.3.jar:/usr/bin/../share/java/kafka/jackson-core-asl-1.9.13.jar:/usr/bin/../share/java/kafka/jackson-databind-2.6.3.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.6.3.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.6.3.jar:/usr/bin/../share/java/kafka/jackson-mapper-asl-1.9.13.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.6.3.jar:/usr/bin/../share/java/kafka/javassist-3.18.2-GA.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.2.jar:/usr/bin/../share/java/kafka/javax.inject-1.jar:/usr/bin/../share/java/kafka/javax.inject-2.4.0-b34.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.0.1.jar:/usr/bin/../share/java/kafka/jersey-client-2.22.2.jar:/usr/bin/../share/java/kafka/jersey-common-2.22.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.22.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.22.2.jar:/usr/bin/../share/java/kafka/jersey-guava-2.22.2.jar:/usr/bin/../share/java/kafka/jersey-media-jaxb-2.22.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.22.2.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.2.15.v20160210.jar:/usr/bin/../share/java/kafka/jetty-http-9.2.15.v20160210.jar:/usr/bin/../share/java/kafka/jetty-io-9.2.15.v20160210.jar:/usr/bin/../share/java/kafka/jetty-security-9.2.15.v20160210.jar:/usr/bin/../share/java/kafka/jetty-server-9.2.15.v20160210.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.2.15.v20160210.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.2.15.v20160210.jar:/usr/bin/../share/java/kafka/jetty-util-9.2.15.v20160210.jar:/usr/bin/../share/java/kafka/jopt-simple-4.9.jar:/usr/bin/../share/java/kafka/kafka-clients-0.10.0.1-cp1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-0.10.0.1-cp1.jar:/usr/bin/../share/java/kafka/kafka-streams-0.10.0.1-cp1.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-0.10.0.1-cp1.jar:/usr/bin/../share/java/kafka/kafka-tools-0.10.0.1-cp1.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka_2.11-0.10.0.1-cp1-javadoc.jar:/usr/bin/../share/java/kafka/kafka_2.11-0.10.0.1-cp1-scaladoc.jar:/usr/bin/../share/java/kafka/kafka_2.11-0.10.0.1-cp1-sources.jar:/usr/bin/../share/java/kafka/kafka_2.11-0.10.0.1-cp1-test-sources.jar:/usr/bin/../share/java/kafka/kafka_2.11-0.10.0.1-cp1-test.jar:/usr/bin/../share/java/kafka/kafka_2.11-0.10.0.1-cp1.jar:/usr/bin/../share/java/kafka/log4j-1.2.17.jar:/usr/bin/../share/java/kafka/lz4-1.3.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.1.jar:/usr/bin/../share/java/kafka/paranamer-2.3.jar:/usr/bin/../share/java/kafka/reflections-0.9.10.jar:/usr/bin/../share/java/kafka/rocksdbjni-4.8.0.jar:/usr/bin/../share/java/kafka/scala-library-2.11.8.jar:/usr/bin/../share/java/kafka/scala-parser-combinators_2.11-1.0.4.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.21.jar:/usr/bin/../share/java/kafka/slf4j-log4j12-1.7.21.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.2.6.jar:/usr/bin/../share/java/kafka/support-metrics-client-3.0.1.jar:/usr/bin/../share/java/kafka/support-metrics-common-3.0.1.jar:/usr/bin/../share/java/kafka/validation-api-1.1.0.Final.jar:/usr/bin/../share/java/kafka/xz-1.0.jar:/usr/bin/../share/java/kafka/zkclient-0.8.jar:/usr/bin/../share/java/kafka/zookeeper-3.4.6.jar:/usr/bin/../share/java/confluent-support-metrics/support-metrics-fullcollector-3.0.1.jar:/usr/share/java/confluent-support-metrics/support-metrics-fullcollector-3.0.1.jar (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,587] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,588] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,592] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,593] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,595] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,596] INFO Client environment:os.version=4.4.0-17134-Microsoft (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,597] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,597] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,598] INFO Client environment:user.dir=/home/idf (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,600] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@35cabb2a (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,618] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
[2018-10-07 01:48:12,621] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-10-07 01:48:12,629] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2018-10-07 01:48:12,643] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1664b569420002b, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2018-10-07 01:48:12,647] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
[2018-10-07 01:48:12,831] INFO Loading logs. (kafka.log.LogManager)
[2018-10-07 01:48:12,838] INFO Logs loading complete. (kafka.log.LogManager)
[2018-10-07 01:48:13,015] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2018-10-07 01:48:13,018] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2018-10-07 01:48:13,025] WARN No meta.properties file under dir /var/lib/kafka/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2018-10-07 01:48:13,068] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2018-10-07 01:48:13,073] INFO [Socket Server on Broker 0], Started 1 acceptor threads (kafka.network.SocketServer)
[2018-10-07 01:48:13,094] INFO [ExpirationReaper-0], Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2018-10-07 01:48:13,096] INFO [ExpirationReaper-0], Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2018-10-07 01:48:13,136] INFO Creating /controller (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
[2018-10-07 01:48:13,150] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
[2018-10-07 01:48:13,152] INFO 0 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
[2018-10-07 01:48:13,318] INFO New leader is 0 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
[2018-10-07 01:48:13,319] INFO [ExpirationReaper-0], Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2018-10-07 01:48:13,321] INFO [ExpirationReaper-0], Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2018-10-07 01:48:13,332] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.GroupCoordinator)
[2018-10-07 01:48:13,334] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.GroupCoordinator)
[2018-10-07 01:48:13,339] INFO [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 9 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2018-10-07 01:48:13,362] INFO [ThrottledRequestReaper-Produce], Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2018-10-07 01:48:13,364] INFO [ThrottledRequestReaper-Fetch], Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2018-10-07 01:48:13,371] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$)
[2018-10-07 01:48:13,396] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
[2018-10-07 01:48:13,410] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
[2018-10-07 01:48:13,412] INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT -> EndPoint(DESKTOP-QVGBOPK.localdomain,9092,PLAINTEXT) (kafka.utils.ZkUtils)
[2018-10-07 01:48:13,415] WARN No meta.properties file under dir /var/lib/kafka/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2018-10-07 01:48:13,436] INFO Kafka version : 0.10.0.1-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2018-10-07 01:48:13,437] INFO Kafka commitId : e7288edd541cee03 (org.apache.kafka.common.utils.AppInfoParser)
[2018-10-07 01:48:13,441] INFO [Kafka Server 0], started (kafka.server.KafkaServer)
[2018-10-07 01:48:13,446] INFO Waiting 10064 ms for the monitored broker to finish starting up... (io.confluent.support.metrics.MetricsReporter)
[2018-10-07 01:48:13,588] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions [_schemas,0],[test,0],[TutorialTopic,0],[__confluent.support.metrics,0] (kafka.server.ReplicaFetcherManager)
[2018-10-07 01:48:13,615] INFO Completed load of log _schemas-0 with log end offset 0 (kafka.log.Log)
[2018-10-07 01:48:13,618] INFO Created log for partition [_schemas,0] in /var/lib/kafka with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-10-07 01:48:13,620] INFO Partition [_schemas,0] on broker 0: No checkpointed highwatermark is found for partition [_schemas,0] (kafka.cluster.Partition)
[2018-10-07 01:48:13,645] INFO Completed load of log test-0 with log end offset 0 (kafka.log.Log)
[2018-10-07 01:48:13,648] INFO Created log for partition [test,0] in /var/lib/kafka with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> delete, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-10-07 01:48:13,649] INFO Partition [test,0] on broker 0: No checkpointed highwatermark is found for partition [test,0] (kafka.cluster.Partition)
[2018-10-07 01:48:13,655] INFO Completed load of log TutorialTopic-0 with log end offset 0 (kafka.log.Log)
[2018-10-07 01:48:13,657] INFO Created log for partition [TutorialTopic,0] in /var/lib/kafka with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> delete, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-10-07 01:48:13,659] INFO Partition [TutorialTopic,0] on broker 0: No checkpointed highwatermark is found for partition [TutorialTopic,0] (kafka.cluster.Partition)
[2018-10-07 01:48:13,665] INFO Completed load of log __confluent.support.metrics-0 with log end offset 0 (kafka.log.Log)
[2018-10-07 01:48:13,667] INFO Created log for partition [__confluent.support.metrics,0] in /var/lib/kafka with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> delete, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 31536000000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-10-07 01:48:13,670] INFO Partition [__confluent.support.metrics,0] on broker 0: No checkpointed highwatermark is found for partition [__confluent.support.metrics,0] (kafka.cluster.Partition)
[2018-10-07 01:48:13,686] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions [_schemas,0],[test,0],[TutorialTopic,0],[__confluent.support.metrics,0] (kafka.server.ReplicaFetcherManager)
[2018-10-07 01:48:23,519] INFO Monitored broker is now ready (io.confluent.support.metrics.MetricsReporter)
[2018-10-07 01:48:23,525] INFO Starting metrics collection from monitored broker... (io.confluent.support.metrics.MetricsReporter)
If I now Ctrl-C out of Kafka and try to start it again, I get:
idf@DESKTOP-QVGBOPK:~$ sudo kafka-server-start /etc/kafka/server.properties
....
(kafka.server.KafkaConfig)
[2018-10-07 01:52:44,565] INFO Loading logs. (kafka.log.LogManager)
[2018-10-07 01:52:44,596] WARN Found a corrupted index file, /var/lib/kafka/TutorialTopic-0/00000000000000000000.index, deleting and rebuilding index... (kafka.log.Log)
[2018-10-07 01:52:44,611] ERROR There was an error in one of the threads during logs loading: java.io.IOException: Invalid argument (kafka.log.LogManager)
[2018-10-07 01:52:44,614] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.io.IOException: Invalid argument
at java.io.RandomAccessFile.setLength(Native Method)
at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:294)
at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:285)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231)
at kafka.log.OffsetIndex.resize(OffsetIndex.scala:285)
at kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply$mcV$sp(OffsetIndex.scala:274)
at kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:274)
at kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:274)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231)
at kafka.log.OffsetIndex.trimToValidSize(OffsetIndex.scala:273)
at kafka.log.LogSegment.recover(LogSegment.scala:202)
at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:199)
at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:171)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at kafka.log.Log.loadSegments(Log.scala:171)
at kafka.log.Log.<init>(Log.scala:101)
at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$3$$anonfun$apply$10$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:152)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:56)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2018-10-07 01:52:44,618] WARN Found a corrupted index file, /var/lib/kafka/__confluent.support.metrics-0/00000000000000000000.index, deleting and rebuilding index... (kafka.log.Log)
[2018-10-07 01:52:44,647] INFO shutting down (kafka.server.KafkaServer)
[2018-10-07 01:52:44,654] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2018-10-07 01:52:44,663] INFO Session: 0x1664b569420002c closed (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:52:44,663] WARN Found a corrupted index file, /var/lib/kafka/_schemas-0/00000000000000000000.index, deleting and rebuilding index... (kafka.log.Log)
[2018-10-07 01:52:44,663] INFO EventThread shut down (org.apache.zookeeper.ClientCnxn)
[2018-10-07 01:52:44,666] INFO shut down completed (kafka.server.KafkaServer)
[2018-10-07 01:52:44,671] INFO shutting down (kafka.server.KafkaServer)
idf@DESKTOP-QVGBOPK:~$
If I remove the files (including hidden files) in /var/lib/kafka/, remove the files from /tmp, and restart Kafka, it comes up fine. But no matter what, the above problem returns after the next shutdown.
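For reference, the cleanup described above would look roughly like this, assuming log.dirs=/var/lib/kafka from the config dump and ZooKeeper state under /tmp/zookeeper (stop Kafka and ZooKeeper first; note this deletes all topic data):
# Remove topic logs, including hidden lock and checkpoint files
sudo rm -rf /var/lib/kafka/* /var/lib/kafka/.[!.]*
# Remove ZooKeeper state (assumed default dataDir under /tmp)
sudo rm -rf /tmp/zookeeper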
Here is the output of kafka-topics --describe:
idf@DESKTOP-QVGBOPK:/etc/kafka$ kafka-topics --zookeeper 127.0.0.1:2181 --describe
Topic:TutorialTopic PartitionCount:1 ReplicationFactor:1 Configs:
Topic: TutorialTopic Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic:__confluent.support.metrics PartitionCount:1 ReplicationFactor:1 Configs:retention.ms=31536000000
Topic: __confluent.support.metrics Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic:_schemas PartitionCount:1 ReplicationFactor:1 Configs:cleanup.policy=compact
Topic: _schemas Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic:test PartitionCount:1 ReplicationFactor:1 Configs:
Topic: test Partition: 0 Leader: 0 Replicas: 0 Isr: 0
idf@DESKTOP-QVGBOPK:/etc/kafka$