I have a problem using Debezium. I searched the internet but I can't find a solution.
I'm using Windows 11 and Kafka 3.1
Here are my config values:
zookeeper.properties:
dataDir=C:/debezium/kafka/data/zookeper
clientPort=2181
maxClientCnxns=0
admin.enableServer=false
server.properties
broker.id=0
listeners=PLAINTEXT://localhost:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
connect-standalone.properties
bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=C:/debezium/kafka/connect/connect.offsets
offset.flush.interval.ms=10000
offset.reset=latest
plugin.path=C:/debezium/kafka/connect
and transaction_connector.properties
name=wallet-transaction-connector
connector.class=io.debezium.connector.sqlserver.SqlServerConnector
database.hostname= {MY_HOSTNAME}
database.port=1433
database.user=sa
database.password= {SQL_PASSWORD}
database.server.name= {REMOTE_SQL_SERVER}
database.dbname=WalletDB
table.include.list=dbo.TxOpenProvision
database.history.kafka.bootstrap.servers=localhost:9092
database.history.kafka.topic=dbhistory.TxOpenProvision
include.schema.changes=true
I run ZooKeeper, Kafka, and Connect with the commands below:
ZooKeeper: .\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties
Kafka: .\bin\windows\kafka-server-start.bat .\config\server.properties
Connect: .\bin\windows\connect-standalone.bat .\config\connect-standalone.properties .\config\wallet_connector.properties
My SQL Server is a remote server.
I'm getting the error below and can't resolve it. How can I solve this?
ERROR [wallet-transaction-connector|task-0]
WorkerSourceTask{id=wallet-transaction-connector-0} Task threw an
uncaught and unrecoverable exception. Task is being killed and will
not recover until manually restarted
(org.apache.kafka.connect.runtime.WorkerTask:195)
org.apache.kafka.common.config.ConfigException: Invalid value earl²est
for configuration auto.offset.reset: String must be one of: latest,
earliest, none
at org.apache.kafka.common.config.ConfigDef$ValidString.ensureValid(ConfigDef.java:961)
at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:499)
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:483)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:113)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:133)
at org.apache.kafka.clients.consumer.ConsumerConfig.<init>(ConsumerConfig.java:630)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:664)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:645)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:625)
at io.debezium.relational.history.KafkaDatabaseHistory.storageExists(KafkaDatabaseHistory.java:356)
at io.debezium.relational.HistorizedRelationalDatabaseSchema.initializeStorage(HistorizedRelationalDatabaseSchema.java:80)
at io.debezium.connector.sqlserver.SqlServerConnectorTask.start(SqlServerConnectorTask.java:81)
at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:130)
at org.apache.kafka.connect.runtime.WorkerSourceTask.initializeAndStart(WorkerSourceTask.java:225)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:186)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
As you can see in the logs, the value contains a special character ²: Invalid value earl²est.
Also, in connect-standalone.properties, offset.reset is not a valid worker config at all, so remove that line.
Debezium is a producer (source connector), so setting auto.offset.reset doesn't make sense for it.
It's also worth pointing out that Kafka's Windows support is very lacking; try using WSL2 instead.
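For reference, here is a minimal sketch of the worker file with that line removed, built from the values in the question (restart the Connect worker after editing):

# connect-standalone.properties, sketch with the invalid offset.reset line removed
bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=C:/debezium/kafka/connect/connect.offsets
offset.flush.interval.ms=10000
plugin.path=C:/debezium/kafka/connect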
Related
We have an 8-node Kafka cluster and Kafka Manager installed.
We are monitoring via New Relic.
Both New Relic and Kafka Manager are reporting that Kafka is rejecting bytes, and I am not able to find the cause.
There are no error lines in the broker logs.
JMX MBean: kafka.server/BrokerTopicMetrics/BytesRejectedPerSec/OneMinuteRate
Kafka config:
auto.create.topics.enable=false
auto.leader.rebalance.enable=true
broker.id=180
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
default.replication.factor=1
delete.topic.enable=true
kafka.http.metrics.host=0.0.0.0
kafka.http.metrics.port=24042
kafka.log4j.dir=/logs/kafka
kerberos.auth.enable=false
leader.imbalance.check.interval.seconds=300
leader.imbalance.per.broker.percentage=10
log.cleaner.dedupe.buffer.size=134217728
log.cleaner.delete.retention.ms=604800000
log.cleaner.enable=true
log.cleaner.min.cleanable.ratio=0.5
log.cleaner.threads=1
log.dirs=/kafka/data
log.retention.bytes=5368709120
log.retention.check.interval.ms=300000
log.retention.hours=72
log.retention.ms=259200000
log.roll.hours=168
log.segment.bytes=1073741824
message.max.bytes=3145728
min.insync.replicas=1
num.io.threads=8
num.partitions=1
num.replica.fetchers=6
offsets.topic.num.partitions=50
offsets.topic.replication.factor=3
port=9092
quota.consumer.default=52428800
quota.consumer.default=52428800
quota.producer.default=26214400
quota.producer.default=26214400
replica.fetch.max.bytes=4194304
replica.lag.max.messages=6000
replica.lag.time.max.ms=60000
unclean.leader.election.enable=false
zookeeper.session.timeout.ms=6000
zookeeper.connect=zookeeper01.prod.***.com:2181,zookeeper02.prod.***.com:2181,zookeeper03.prod.***.com:2181
security.inter.broker.protocol=PLAINTEXT
listeners=PLAINTEXT://kafka01.prod.***.com:9092,
broker.id.generation.enable=false
sasl.kerberos.service.name=kafka
listeners=PLAINTEXT://:9092
num.network.threads=8
By examining the Kafka sources (ref1, ref2), it seems that the only thing counted into BytesRejectedPerSec (bytesRejectedRate) is messages whose size exceeds config.maxMessageSize, i.e. the broker's message.max.bytes setting (or the per-topic max.message.bytes override).
Note: recompression and message format conversion on the broker may also inflate a message beyond the size the producer originally sent.
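Your broker config above caps messages at roughly 3 MB (message.max.bytes=3145728). If the rejected bytes turn out to be oversize messages, one option is to raise the limits consistently on broker, replication, and producer. A hedged sketch follows; the 8 MB value and the topic name are illustrative placeholders, not recommendations:

# broker (server.properties), example value only
message.max.bytes=8388608
# replica fetchers must be able to fetch at least one max-size message
replica.fetch.max.bytes=8388608
# producer side, must not exceed the broker/topic limit
max.request.size=8388608
# alternatively, raise the limit per topic instead of cluster-wide:
# kafka-configs.sh --zookeeper zookeeper01.prod.***.com:2181 --alter \
#   --entity-type topics --entity-name my-topic \
#   --add-config max.message.bytes=8388608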
I am trying to do a POC for Kafka and Debezium.
I have started Kafka and ZooKeeper and they are working. Now, when I try to start Kafka Connect (I am kind of new to this), I get an error and I just can't understand what I am doing wrong.
Note: I have tested all of this with the Debezium tutorial Docker images, but I would like to connect from a remote server, and I thought it would be easier to install everything without Docker so I could play with the configuration.
I'm starting Connect with the following command:
./connect-standalone.sh ~/kafka/config/connect-standalone.properties ~/kafka/config/connect-standalone-worker.properties ~/kafka/config/debezium-connector.properties
connect-standalone.properties
bootstrap.servers=localhost:9092
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.flush.interval.ms=10000
plugin.path=/home/ubuntu/kafka/plugins
connect-standalone-worker.properties
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/home/user/offest
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter=org.apache.kafka.connect.json.JsonConverter
debezium-connector.properties
name=my-connector
connector.class=io.debezium.connector.mongodb.MongoDbConnector
include.schema.changes=false
mongodb.name=mymongo
collection.whitelist=my.collection
tasks.max=1
mongodb.hosts=A.B.C.D:27017
I get the following when running Connect:
[2018-12-27 15:31:41,995] ERROR Failed to create job for /home/ubuntu/kafka/config/connect-standalone-worker.properties (org.apache.kafka.connect.cli.ConnectStandalone:102)
[2018-12-27 15:31:41,996] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:113)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector config {internal.key.converter=org.apache.kafka.connect.json.JsonConverter, offset.storage.file.filename=/home/user/offest, internal.value.converter.schemas.enable=false, internal.value.converter=org.apache.kafka.connect.json.JsonConverter, value.converter=org.apache.kafka.connect.json.JsonConverter, internal.key.converter.schemas.enable=false, key.converter=org.apache.kafka.connect.json.JsonConverter} contains no connector type
at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:79)
at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:66)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:110)
Caused by: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector config {internal.key.converter=org.apache.kafka.connect.json.JsonConverter, offset.storage.file.filename=/home/user/offest, internal.value.converter.schemas.enable=false, internal.value.converter=org.apache.kafka.connect.json.JsonConverter, value.converter=org.apache.kafka.connect.json.JsonConverter, internal.key.converter.schemas.enable=false, key.converter=org.apache.kafka.connect.json.JsonConverter} contains no connector type
at org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:259)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:189)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:107)
[2018-12-27 15:31:41,997] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:65)
connect-standalone.properties and connect-standalone-worker.properties need to be one file: Connect standalone takes a single worker properties file, and every file after it is treated as a connector config.
The error is saying that connect-standalone-worker.properties has no connector.class value (which it shouldn't, because it is a worker properties file, not a connector config).
The command you're trying to run should look like:
connect-standalone worker.properties connector1.properties [connector2.properties ...]
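Concretely, for the files in the question, a sketch would be to merge the two worker files into one (values copied from the question; the internal.* converter settings are deprecated on newer Kafka versions and can usually be dropped) and then pass only the connector file after it:

# connect-standalone.properties, merged worker config (sketch)
bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/home/user/offest
offset.flush.interval.ms=10000
plugin.path=/home/ubuntu/kafka/plugins

# then start the worker with one worker file plus the connector file:
./connect-standalone.sh ~/kafka/config/connect-standalone.properties ~/kafka/config/debezium-connector.properties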
I am currently studying Kafka and am new to it. I am trying to run kafka-server-start.sh config/server.properties but I get the error message below. I searched Stack Overflow and am unable to find a solution. Could anyone please advise how to fix this?
Error Message:
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
ERROR StatusLogger No log4j2 configuration file found. Using default
configuration: logging only errors to the console.
21:48:52.090 [main] FATAL kafka.Kafka$ - null
java.lang.NoSuchMethodError: scala.Predef$.refArrayOps([Ljava/lang/Object;)Lscala/collection/mutable/ArrayOps;
at kafka.utils.CoreUtils$.parseCsvList(CoreUtils.scala:213) ~[kafka_2.11-0.9.0.0.jar:?]
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:742) ~[kafka_2.11-0.9.0.0.jar:?]
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:691) ~[kafka_2.11-0.9.0.0.jar:?]
at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:28) ~[kafka_2.11-0.9.0.0.jar:?]
at kafka.Kafka$.main(Kafka.scala:58) [kafka_2.11-0.9.0.0.jar:?]
at kafka.Kafka.main(Kafka.scala) [kafka_2.11-0.9.0.0.jar:?]
I am using Ubuntu 14.04, Java 1.8 build 101, ZooKeeper 3.4, and Kafka 2.11-0.9.
ZooKeeper properties (zoo.cfg):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.4.10/data
clientPort=2181
Kafka properties (server.properties):
broker.id=0
listeners=PLAINTEXT://:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/usr/local/kafka/kafka-log-1
num.partitions=2
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
This is generally a sign of a Scala version issue; the software being run is trying to find a Scala internal method that is not available in the version you have installed.
NoSuchMethodError: scala.Predef$
According to your comment:
"scala I have 2.12 version"
I don't think Scala 2.12 existed when Kafka 0.9 was released, so if you don't plan on downgrading Scala, you must use a Kafka version built for Scala 2.12.
On the Apache Kafka downloads site you can find Scala 2.12 builds, e.g. kafka_2.12-2.0.0.tgz.
Or you can use apt-get to install Kafka via Confluent Platform, and then start it with something like sudo service confluent-kafka start.
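For example, a hedged sketch of downloading and starting a Scala 2.12 build (the URL follows the standard Apache archive layout; verify it and pick whichever release you actually need):

# fetch and unpack a Kafka build compiled against Scala 2.12
wget https://archive.apache.org/dist/kafka/2.0.0/kafka_2.12-2.0.0.tgz
tar -xzf kafka_2.12-2.0.0.tgz
cd kafka_2.12-2.0.0
# start ZooKeeper, then the broker
bin/zookeeper-server-start.sh config/zookeeper.properties &
bin/kafka-server-start.sh config/server.properties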
We have a Kafka cluster (a third-party hosted service) which has SSL enabled. We are now trying to set up Kafka Connect (Confluent 5.0) with a third-party sink (the WePay BigQuery connector). When starting Kafka Connect in standalone mode, everything works like a charm. Unfortunately, when enabling distributed mode, Kafka Connect fails with the following:
[2018-09-25 15:01:46,248] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser:109)
[2018-09-25 15:01:46,248] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser:110)
[2018-09-25 15:01:46,667] INFO Kafka cluster ID: Q9PaAEeWSbOavVmHTQS5sA (org.apache.kafka.connect.util.ConnectUtils:59)
[2018-09-25 15:01:46,685] INFO Logging initialized #10512ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log:193)
[2018-09-25 15:01:46,726] INFO Added connector for http://:8083 (org.apache.kafka.connect.runtime.rest.RestServer:119)
[2018-09-25 15:01:46,760] INFO Advertised URI: http://192.168.4.207:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:267)
[2018-09-25 15:01:46,796] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser:109)
[2018-09-25 15:01:46,796] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser:110)
ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed:117)
java.lang.NoSuchMethodError: org.apache.kafka.common.metrics.Sensor.add(Lorg/apache/kafka/common/metrics/CompoundStat;)Z
at org.apache.kafka.connect.runtime.Worker$WorkerMetricsGroup.<init>(Worker.java:731)
at org.apache.kafka.connect.runtime.Worker.<init>(Worker.java:112)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:88)
I tried to Google the specific error but couldn't find anything. It looks like a version issue somewhere (hence the NoSuchMethodError), but I have no clue where to start.
When used with Confluent 4.1.2, there's a different error:
[2018-09-26 15:14:05,498] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed:112)
org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.apache.kafka.connect.runtime.distributed.WorkerGroupMember.<init>(WorkerGroupMember.java:144)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.<init>(DistributedHerder.java:182)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.<init>(DistributedHerder.java:159)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:95)
Caused by: java.lang.NoSuchMethodError: org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.<init>(Lorg/apache/kafka/common/utils/LogContext;Lorg/apache/kafka/clients/KafkaClient;Lorg/apache/kafka/clients/Metadata;Lorg/apache/kafka/common/utils/Time;JJI)V
at org.apache.kafka.connect.runtime.distributed.WorkerGroupMember.<init>(WorkerGroupMember.java:114)
... 3 more
When we use the same setup with Kafka Connect from Confluent 3.0, there's yet another error:
[2018-09-26 10:04:24,588] INFO AvroDataConfig values:
schemas.cache.config = 1000
enhanced.avro.schema.support = false
connect.meta.data = true
(io.confluent.connect.avro.AvroDataConfig:169)
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.kafka.common.utils.AppInfoParser.unregisterAppInfo(Ljava/lang/String;Ljava/lang/String;)V
at org.apache.kafka.connect.runtime.distributed.WorkerGroupMember.stop(WorkerGroupMember.java:194)
at org.apache.kafka.connect.runtime.distributed.WorkerGroupMember.<init>(WorkerGroupMember.java:122)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.<init>(DistributedHerder.java:150)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.<init>(DistributedHerder.java:132)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:82)
Here's the distributed.properties:
bootstrap.servers=*****
group.id=testGroup
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=****
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=****
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-statuses
security.protocol=SSL
ssl.truststore.location=truststore.jks
ssl.truststore.password=****
ssl.keystore.type=PKCS12
ssl.keystore.location=keystore.p12
ssl.keystore.password=****
ssl.key.password=****
plugin.path=/*/confluent-5.0.0/share/java
And for reference, the standalone.properties:
bootstrap.servers=***
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=***
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=***
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=connect.offsets
consumer.security.protocol=SSL
consumer.ssl.truststore.location=truststore.jks
consumer.ssl.truststore.password=***
consumer.ssl.keystore.type=PKCS12
consumer.ssl.keystore.location=keystore.p12
consumer.ssl.keystore.password=***
consumer.ssl.key.password=***
Any help would be much appreciated.
I just discovered that you have to prefix Kafka client configs in Kafka Connect properties files:
https://docs.confluent.io/current/connect/userguide.html#overriding-producer-and-consumer-settings
Your standalone.properties does prefix the configs with consumer.:
consumer.security.protocol=SSL
But your distributed.properties doesn't:
security.protocol=SSL
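A sketch of what the additions to distributed.properties might look like, assuming the fix is to add prefixed copies for the clients that connector tasks create (the unprefixed settings stay in place for the worker's own internal clients; paths and passwords are placeholders):

# worker-level settings (group coordination, config/offset/status topics) stay as-is:
security.protocol=SSL
ssl.truststore.location=truststore.jks
ssl.truststore.password=****
# prefixed copies for the consumers and producers created for connector tasks:
consumer.security.protocol=SSL
consumer.ssl.truststore.location=truststore.jks
consumer.ssl.truststore.password=****
producer.security.protocol=SSL
producer.ssl.truststore.location=truststore.jks
producer.ssl.truststore.password=****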
I have a Kafka cluster of 3 brokers on 3 different servers.
Let's assume the three servers are:
99.99.99.1
99.99.99.2
99.99.99.3
All 3 servers have a shared path on which Kafka resides.
I have created 3 server properties files named:
server1.properties
server2.properties
server3.properties
server1.properties looks like this:
broker.id=1
port=9094
listeners=SSL://99.99.99.1:9094
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=3
zookeeper.connect=99.99.99.1:2181,99.99.99.2:2182,99.99.99.3:2183
ssl.keystore.location=xyz.jks
ssl.keystore.password=password
ssl.key.password=password
ssl.truststore.location=xyz.jks
ssl.truststore.password=password
ssl.client.auth=required
security.inter.broker.protocol=SSL
The other two server properties files look similar.
Issues/Query:
I need the consumers and producers to connect using SSL, and all the brokers to connect to each other over SSL as well. Is my configuration right for this?
I keep getting the message below; is this usual?
WARN Failed to send SSL Close message
(org.apache.kafka.common.network.SslTransportLayer)
java.io.IOException: Broken pipe