The Cluster ID XXXXXXXXXXXX doesn't match stored clusterId Some - apache-kafka

In a two-node Kafka cluster, after restarting one node I get this error:
ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID hJGLXQwERc2rTXI_dWLwcg doesn't match stored clusterId Some(9VcFGDwSfke5JWuzA6f9mw) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
at kafka.server.KafkaServer.startup(KafkaServer.scala:223)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
where hJGLXQwERc2rTXI_dWLwcg is the ID of the working cluster from meta.properties, and 9VcFGDwSfke5JWuzA6f9mw is the newly generated ID.
Kafka version 3.0.0
On Node1 :
server.properties:
broker.id=1
listeners=SASL_PLAINTEXT://:9092
advertised.listeners=SASL_PLAINTEXT://loc-kafka1:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/opt/kafka/logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=loc-kafka1:2181, loc-kafka2:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
compression.type=gzip
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
ssl.endpoint.identification.algorithm=
super.users=User:admin
sasl.jaas.config=/opt/kafka/config/kafka_server_jaas.conf
delete.topic.enable=true
zookeeper.properties:
dataDir=/opt/kafka/zookeeper/data
clientPort=2181
maxClientCnxns=10
admin.enableServer=false
server1=loc-kafka1:2888:3888
server2=loc-kafka2:2888:3888
On Node2 the config is the same, except broker.id=2 and the myid file in dataDir.
Everything I could find says: don't put log.dirs and dataDir under /tmp, and all will be fine. But that's not the case here.
One workaround is to clear the log.dirs and dataDir folders, which lets this node join the cluster, but that doesn't seem like a good solution.
If I just delete meta.properties, a new cluster.id is placed in it and the node still doesn't join the cluster. At the same time, the following lines in server.log show that the node is connecting to the other cluster node:
INFO Opening socket connection to server loc-kafka1/10.1.1.3:2181. (org.apache.zookeeper.ClientCnxn)
INFO SASL config status: Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
INFO Socket connection established, initiating session, client: loc-kafka2/10.1.1.4:38672, server: loc-kafka1/10.1.1.3:2181 (org.apache.zookeeper.ClientCnxn)
INFO Session establishment complete on server loc-kafka1/10.0.0.3:3001, session id = 0x100000043bb0006, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
What can I do to fix it?

Related

org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /consumers

I have three servers running ZooKeeper and Kafka clusters. One day, one of the Kafka instances stopped listening on port 9092 and failed to restart. This is the error log:
[2022-06-30 11:50:42,459] INFO Opening socket connection to server nv54/192.168.**.**:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2022-06-30 11:50:42,463] INFO Socket connection established to nv54/192.168.**.**:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2022-06-30 11:50:42,469] INFO Session establishment complete on server nv54/192.168.179.54:2181, sessionid = 0x303a1d787d1000b, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2022-06-30 11:50:42,472] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2022-06-30 11:50:42,519] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /consumers
at org.apache.zookeeper.KeeperException.create(KeeperException.java:116)
...
server.properties:
broker.id=52
listeners=PLAINTEXT://192.168.**.**:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/my_own_dir/data_kafka
num.partitions=1
……
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
zookeeper.connect=192.168.**.*1:2181,192.168.**.*2:2181,192.168.**.*3:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
auto.create.topics.enable=true
delete.topic.enable=true
getAcl /consumers
How can I solve this error?

Kafka SASL_SSL No JAAS configuration section named 'Client' was found in specified JAAS configuration file

I'm trying to enable authentication using SASL/PLAIN on my Kafka broker.
The JAAS configuration file is the following:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret"
user_alice="alice-secret";
};
I launch the Kafka service using the following commands:
export KAFKA_OPTS="-Djava.security.auth.login.config=<PATH>kafka_server_jaas.conf"
/bin/kafka-server-start.sh /config/server.properties
The Kafka service does not start properly and I get these errors in the log:
javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/home/kafka/kafka/config/kafka_server_jaas.conf'.
at org.apache.zookeeper.client.ZooKeeperSaslClient.<init>(ZooKeeperSaslClient.java:189)
at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1161)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1210)
[2022-03-16 12:13:16,587] INFO Opening socket connection to server localhost/127.0.0.1:2181. (org.apache.zookeeper.ClientCnxn)
[2022-03-16 12:13:16,588] ERROR [ZooKeeperClient Kafka server] Auth failed, initialized=false connectionState=CONNECTING (kafka.zookeeper.ZooKeeperClient)
[2022-03-16 12:13:16,592] INFO Socket connection established, initiating session, client: /127.0.0.1:46706, server: localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn)
[2022-03-16 12:13:16,611] INFO Session establishment complete on server localhost/127.0.0.1:2181, session id = 0x100002dd98c0000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
[2022-03-16 12:13:16,612] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2022-03-16 12:13:16,752] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
[2022-03-16 12:13:16,786] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
[2022-03-16 12:13:16,788] INFO Cleared cache (kafka.server.FinalizedFeatureCache)
[2022-03-16 12:13:16,957] INFO Cluster ID = 6WTadNCMRAW4dHoc_JUnIg (kafka.server.KafkaServer)
[2022-03-16 12:13:16,968] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID 6WTadNCMRAW4dHoc_JUnIg doesn't match stored clusterId Some(RJXzPwJeRfawIa_yA0B26A) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
at kafka.server.KafkaServer.startup(KafkaServer.scala:228)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
I already added the following lines to server.properties
listeners=SASL_SSL://localhost:9092
security.protocol=SASL_SSL
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
[2022-03-16 12:13:16,968] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID 6WTadNCMRAW4dHoc_JUnIg doesn't match stored clusterId Some(RJXzPwJeRfawIa_yA0B26A) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
at kafka.server.KafkaServer.startup(KafkaServer.scala:228)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
This issue occurs when the cluster ID stored in ZooKeeper does not match the one stored in the broker's Kafka data directory.
In this case, the cluster ID of the broker stored in:
ZooKeeper data is 6WTadNCMRAW4dHoc_JUnIg
Kafka's meta.properties is RJXzPwJeRfawIa_yA0B26A
Reason:
The ZooKeeper data directory was deleted.
Deleting the ZooKeeper dataDir and restarting both the ZooKeeper and Kafka services will not work, because ZooKeeper creates a new cluster ID and assigns it to the broker when it registers, if no entry exists already. This new cluster ID will differ from the one in meta.properties.
This issue can be fixed in one of the following ways:
Delete both Kafka's log.dirs and ZooKeeper's dataDir. This results in data loss; both the Kafka and ZooKeeper services need to be restarted.
Delete meta.properties in the Kafka log.dirs directory. No data loss; the Kafka service needs to be restarted anyway.
Update the cluster ID in meta.properties with the value stored in ZooKeeper data; in this case, replace RJXzPwJeRfawIa_yA0B26A with 6WTadNCMRAW4dHoc_JUnIg. No data loss; the Kafka service needs to be restarted anyway.
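The third option can be sketched like this. This is a hedged example: the paths and IDs are taken from this thread (the authoritative ID lives in ZooKeeper under the /cluster/id znode, readable with bin/zookeeper-shell.sh), and the demo below patches a scratch copy of meta.properties; on a real broker you would operate on <log.dirs>/meta.properties instead:

```shell
# On the live cluster, first read the ID ZooKeeper holds:
#   bin/zookeeper-shell.sh localhost:2181 get /cluster/id
#   -> {"version":"1","id":"6WTadNCMRAW4dHoc_JUnIg"}
ZK_CLUSTER_ID="6WTadNCMRAW4dHoc_JUnIg"   # ID reported by ZooKeeper (from this thread)

# Demo on a scratch copy of meta.properties with the stale ID from the error:
META=$(mktemp)
printf 'version=0\nbroker.id=1\ncluster.id=RJXzPwJeRfawIa_yA0B26A\n' > "$META"

# Rewrite only the cluster.id line, leaving everything else intact:
sed -i "s/^cluster.id=.*/cluster.id=${ZK_CLUSTER_ID}/" "$META"
grep '^cluster.id=' "$META"   # -> cluster.id=6WTadNCMRAW4dHoc_JUnIg
```

After patching the real meta.properties, restart only the Kafka service; ZooKeeper is untouched.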
javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file
The Client section is used to authenticate a SASL connection with ZooKeeper. The javax.security.auth.login.LoginException above is only a warning: Kafka will connect to the ZooKeeper server without SASL authentication if ZooKeeper allows it.
[2022-03-16 12:13:16,587] INFO Opening socket connection to server localhost/127.0.0.1:2181. (org.apache.zookeeper.ClientCnxn)
[2022-03-16 12:13:16,588] ERROR [ZooKeeperClient Kafka server] Auth failed, initialized=false connectionState=CONNECTING (kafka.zookeeper.ZooKeeperClient)
[2022-03-16 12:13:16,592] INFO Socket connection established, initiating session, client: /127.0.0.1:46706, server: localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn)
[2022-03-16 12:13:16,611] INFO Session establishment complete on server localhost/127.0.0.1:2181, session id = 0x100002dd98c0000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
The KafkaServer section is used by the broker and provides SASL configuration options for inter-broker connections. The username and password are used by the broker to initiate connections to other brokers. The set of user_<username> properties defines the passwords for all users that connect to the broker.
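Putting the two sections together, a minimal kafka_server_jaas.conf that also covers the ZooKeeper client side might look like the sketch below. The Client credentials ("zkclient"/"zkclient-secret") are placeholders, not values from this thread; they must match whatever the ZooKeeper server's own JAAS configuration accepts:

```
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_alice="alice-secret";
};

Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="zkclient"
    password="zkclient-secret";
};
```

If ZooKeeper runs without SASL, the Client section can be omitted and the warning above is harmless.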

Why does the Kafka cluster give the error "Number of alive brokers '0' does not meet the required replication factor"?

I have 2 Kafka brokers and 1 ZooKeeper. Broker configs (server.properties):
Broker 1:
auto.create.topics.enable=true
broker.id=1
delete.topic.enable=true
group.initial.rebalance.delay.ms=0
listeners=PLAINTEXT://5.1.2.3:9092
log.dirs=/opt/kafka_2.12-2.1.0/logs
log.retention.check.interval.ms=300000
log.retention.hours=168
log.segment.bytes=1073741824
max.message.bytes=105906176
message.max.bytes=105906176
num.io.threads=8
num.network.threads=3
num.partitions=10
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
replica.fetch.max.bytes=105906176
socket.receive.buffer.bytes=102400
socket.request.max.bytes=105906176
socket.send.buffer.bytes=102400
transaction.state.log.min.isr=1
transaction.state.log.replication.factor=1
zookeeper.connect=5.1.3.6:2181
zookeeper.connection.timeout.ms=6000
Broker 2:
auto.create.topics.enable=true
broker.id=2
delete.topic.enable=true
group.initial.rebalance.delay.ms=0
listeners=PLAINTEXT://18.4.6.6:9092
log.dirs=/opt/kafka_2.12-2.1.0/logs
log.retention.check.interval.ms=300000
log.retention.hours=168
log.segment.bytes=1073741824
max.message.bytes=105906176
message.max.bytes=105906176
num.io.threads=8
num.network.threads=3
num.partitions=10
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
replica.fetch.max.bytes=105906176
socket.receive.buffer.bytes=102400
socket.request.max.bytes=105906176
socket.send.buffer.bytes=102400
transaction.state.log.min.isr=1
transaction.state.log.replication.factor=1
zookeeper.connect=5.1.3.6:2181
zookeeper.connection.timeout.ms=6000
If I query ZooKeeper like this:
echo dump | nc zook_IP 2181
I get:
SessionTracker dump:
Session Sets (3):
0 expire at Sun Jan 04 03:40:27 MSK 1970:
1 expire at Sun Jan 04 03:40:30 MSK 1970:
0x1000bef9152000b
1 expire at Sun Jan 04 03:40:33 MSK 1970:
0x1000147d4b40003
ephemeral nodes dump:
Sessions with Ephemerals (2):
0x1000147d4b40003:
/controller
/brokers/ids/2
0x1000bef9152000b:
/brokers/ids/1
Looks fine, but it doesn't work. ZooKeeper sees 2 brokers, but the first Kafka broker logs this error:
ERROR [KafkaApi-1] Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
We also use kafka_exporter for Prometheus, and it logs this error:
Cannot get oldest offset of topic Some.TOPIC partition 9: kafka server: Request was for a topic or partition that does not exist on this broker." source="kafka_exporter.go:296
Please help! Where is the mistake in my config?
Are your clocks working? Zookeeper thinks it's 1970
Sun Jan 04 03:40:27 MSK 1970
You may want to look at the rest of the logs or see if Kafka and Zookeeper are actively running and ports are open.
In your first message, after starting a fresh cluster you see this, so it's not a true error
This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
The properties you show, though, have listeners on entirely different subnets and you're not using advertised.listeners
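For brokers on different networks, each one should bind locally and advertise an address that clients and the other broker can actually reach. A hedged sketch using this question's IPs (whether those addresses are mutually routable is for the poster to verify):

```
# Broker 1 (host 5.1.2.3) -- bind on all interfaces, advertise the reachable address
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://5.1.2.3:9092

# Broker 2 (host 18.4.6.6)
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://18.4.6.6:9092
```

advertised.listeners is what gets registered in ZooKeeper under /brokers/ids/<id>, so that is the address the other broker and all clients will use.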
A change of Kafka's broker.id may also cause this problem. Cleaning up the Kafka metadata under ZooKeeper can fix it, but note: Kafka data will be lost.
I got this error message in this situation :
Cluster talking in SSL
Every broker is a container
Updated the certificate with new password inside ALL brokers
Rolling update
After the first broker reboot, it spammed this error message and the broker controller talked about "a new broker connected but password verification failed".
Solutions:
Set the new certificate password to the old password
Bring your entire cluster down and then up all at once
(not tested yet) Change the certificate on one broker, reboot it, then move on to the next broker until you have rebooted all of them (ending with the controller)

Invalid credentials with SASL_PLAINTEXT mechanism SCRAM-SHA-256 - InvalidACL for /config/users/admin

I am using kafka_2.3.0 on Ubuntu 16.04.
Below are the configurations for the Kafka brokers and the ZooKeeper nodes. I am currently testing this on a single machine, so the IP stays the same throughout and only the ports differ.
kafka-broker 1 configuration.
broker.id=1
listeners=SASL_PLAINTEXT://192.168.1.172:9092
advertised.listeners=SASL_PLAINTEXT://192.168.1.172:9092
log.dirs=/home/emgda/data/kafka/1/kafka-logs
zookeeper.connect=192.168.1.172:2181,192.168.1.172:2182,192.168.1.172:2183
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
kafka-broker 2 configuration.
broker.id=2
listeners=SASL_PLAINTEXT://192.168.1.172:9093
advertised.listeners=SASL_PLAINTEXT://192.168.1.172:9093
log.dirs=/home/emgda/data/kafka/2/kafka-logs
zookeeper.connect=192.168.1.172:2181,192.168.1.172:2182,192.168.1.172:2183
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
kafka-broker 3 configuration.
broker.id=3
listeners=SASL_PLAINTEXT://192.168.1.172:9094
advertised.listeners=SASL_PLAINTEXT://192.168.1.172:9094
log.dirs=/home/emgda/data/kafka/3/kafka-logs
zookeeper.connect=192.168.1.172:2181,192.168.1.172:2182,192.168.1.172:2183
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
kafka_jass.config
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-secret";
};
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="super"
password="adminsecret";
};
zookeeper_jass.conf
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_super="adminsecret";
};
Zookeeper-node 1 configuration
dataDir=/home/emgda/data/zookeeper/1/
clientPort=2181
server.1=localhost:2666:3666
server.2=localhost:2667:3667
server.3=localhost:2668:3668
requireClientAuthScheme=sasl
Zookeeper-node 2 configuration
dataDir=/home/emgda/data/zookeeper/2/
clientPort=2182
server.1=localhost:2666:3666
server.2=localhost:2667:3667
server.3=localhost:2668:3668
requireClientAuthScheme=sasl
Zookeeper-node 3 configuration
dataDir=/home/emgda/data/zookeeper/3/
clientPort=2183
server.1=localhost:2666:3666
server.2=localhost:2667:3667
server.3=localhost:2668:3668
requireClientAuthScheme=sasl
The ZooKeeper nodes in the cluster start properly, and Kafka is also able to authenticate to ZooKeeper. The ZooKeeper logs below show what happens when the first Kafka broker comes up:
[2019-12-30 13:35:29,465] INFO Accepted socket connection from /192.168.1.172:42362 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2019-12-30 13:35:29,480] INFO Client attempting to establish new session at /192.168.1.172:42362 (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-30 13:35:29,487] INFO Established session 0x10000d285210003 with negotiated timeout 6000 for client /192.168.1.172:42362 (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-30 13:35:29,529] INFO Successfully authenticated client: authenticationID=super; authorizationID=super. (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2019-12-30 13:35:29,529] INFO Setting authorizedID: super (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2019-12-30 13:35:29,530] INFO adding SASL authorization for authorizationID: super (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-30 13:36:54,999] INFO Closed socket connection for client /192.168.1.172:42362 which had sessionid 0x10000d285210003 (org.apache.zookeeper.server.NIOServerCnxn)
The first Kafka broker then fails to start with the error below:
[2019-12-30 13:35:58,417] ERROR [Controller id=1, targetBrokerId=1] Connection to node 1 (192.168.1.172/192.168.1.172:9092) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256 (org.apache.kafka.clients.NetworkClient)
[2019-12-30 13:35:58,421] INFO [SocketServer brokerId=1] Failed authentication with /192.168.1.172 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256) (org.apache.kafka.common.network.Selector)
When I try to create a Kafka broker user using the command below, I get this error:
emgda#ubuntu:~/softwares/kafka_2.12-2.3.0$ ./bin/kafka-configs.sh --zookeeper 192.168.1.172:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
Error while executing config command with args '--zookeeper 192.168.1.172:2181 --alter --add-config SCRAM-SHA-256=[iterations=8192,password=admin-secret],SCRAM-SHA-512=[password=admin-secret] --entity-type users --entity-name admin'
org.apache.zookeeper.KeeperException$InvalidACLException: KeeperErrorCode = InvalidACL for /config/users/admin
at org.apache.zookeeper.KeeperException.create(KeeperException.java:124)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:560)
at kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1610)
at kafka.zk.KafkaZkClient.createOrSet$1(KafkaZkClient.scala:357)
at kafka.zk.KafkaZkClient.setOrCreateEntityConfigs(KafkaZkClient.scala:367)
at kafka.zk.AdminZkClient.changeEntityConfig(AdminZkClient.scala:378)
at kafka.zk.AdminZkClient.changeUserOrUserClientIdConfig(AdminZkClient.scala:312)
at kafka.zk.AdminZkClient.changeConfigs(AdminZkClient.scala:276)
at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:153)
at kafka.admin.ConfigCommand$.processCommandWithZk(ConfigCommand.scala:104)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:80)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
Question:
What am I missing so that the Kafka broker authenticates properly?
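One likely cause of the InvalidACL error: with requireClientAuthScheme=sasl, znodes such as /config/users are protected by SASL ACLs, so kafka-configs.sh must itself authenticate to ZooKeeper through the same Client JAAS section the brokers use. A hedged sketch, assuming the JAAS file from this question lives at /home/emgda/kafka_jass.config (adjust the path to wherever it actually is):

```shell
# Hypothetical path -- point at the JAAS file that contains the Client section.
export KAFKA_OPTS="-Djava.security.auth.login.config=/home/emgda/kafka_jass.config"
echo "$KAFKA_OPTS"

# With the JAAS config exported, re-run the SCRAM user creation (needs a live quorum):
#   ./bin/kafka-configs.sh --zookeeper 192.168.1.172:2181 --alter \
#     --add-config 'SCRAM-SHA-256=[iterations=8192,password=admin-secret]' \
#     --entity-type users --entity-name admin
```

Note the ordering: the admin SCRAM credentials have to exist in ZooKeeper before the brokers start, otherwise inter-broker SCRAM authentication fails with exactly the "invalid credentials" error shown above.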

Can't Start Kafka Server on Windows 10 - Kafka's log directories (and children) should only contain Kafka topic data

Following the instructions in lesson 28 of the Learn Apache Kafka for Beginners Udemy course to start ZooKeeper and then a Kafka broker on Windows 10. ZooKeeper runs fine on port 2181:
C:\kafka_2.12-2.3.1> zookeeper-server-start.bat config/zookeeper.properties
...
INFO binding to port 0.0.0.0/0.0.0.0:2181
But after adding the bat files to path, running kafka server does not work:
C:\kafka_2.12-2.3.1> kafka-server-start.bat config/server.properties
...
ERROR There was an error in one of the threads during logs loading: org.apache.kafka.common.KafkaException: Found directory C:\kafka_2.12-2.3.1\data\kafka, 'kafka' is not in the form of topic-partition or topic-partition.uniqueId-delete (if marked for deletion).
Kafka's log directories (and children) should only contain Kafka topic data. (kafka.log.LogManager)
Some of the stdout logging in zookeeper looks informative:
Accepted socket connection from /127.0.0.1:49439 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2019-11-03 17:22:42,278] INFO Client attempting to establish new session at /127.0.0.1:49439 (org.apache.zookeeper.server.ZooKeeperServer)
[2019-11-03 17:22:42,286] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
...
INFO Processed session termination for sessionid: 0x1007b0044a40000 (org.apache.zookeeper.server.PrepRequestProcessor)
[2019-11-03 17:22:42,987] INFO Closed socket connection for client /127.0.0.1:49439 which had sessionid 0x1007b0044a40000 (org.apache.zookeeper.server.NIOServerCnxn)
Under the data folder there are two folders I created; the second was populated after I tried running the Kafka broker:
kafka/
|-- empty
zookeeper/
|-- version-2/
|--log.1
Why does this error happen and how can I start a Kafka server on Windows 10?
EDIT:
Contents of config/server.properties:
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=C:/kafka_2.12-2.3.1/data/
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
The data/ folder contains empty zookeeper/ and kafka/ directories
Did you create those folders? Kafka's log.dirs can only contain folders in the format topic-partition or topic-partition.uniqueId-delete (if marked for deletion), as the error says. That folder should be empty (no subdirectories) when starting from a fresh installation.
Plus, the Kafka data directory should not hold ZooKeeper data, as ZooKeeper should be independent of Kafka.
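One way to keep the two stores separate, a sketch assuming the layout from the question (each service gets its own subfolder, so Kafka's log directory contains nothing but topic data):

```
# config/server.properties -- point Kafka at its own subfolder, not the shared data/ root
log.dirs=C:/kafka_2.12-2.3.1/data/kafka

# config/zookeeper.properties -- ZooKeeper keeps its own directory
dataDir=C:/kafka_2.12-2.3.1/data/zookeeper
```

With log.dirs pointing at data/kafka directly, the formerly offending kafka folder becomes the log directory itself and no foreign subdirectory trips the LogManager check.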