Use Kafka Connect MongoDB Debezium source connector on a remote MSK Kafka cluster - mongodb

I want to read data from MongoDB into a Kafka topic. I managed to get this working locally by using the following connector properties file:
name=mongodb-source-connectorszes
connector.class=io.debezium.connector.mongodb.MongoDbConnector
mongodb.hosts=test/localhost:27017
database.history.kafka.bootstrap.servers=kafka:9092
mongodb.name=mongo_conn
database.whitelist=test
initial.sync.max.threads=1
tasks.max=1
The Connect worker has the following config:
# The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will
# need to configure these based on the format they want their data in when loaded from or stored into Kafka
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Converter-specific settings can be passed in by prefixing the Converter's setting with the converter we want to apply
# it to
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
# Flush much faster than normal, which is useful for testing/debugging
offset.flush.interval.ms=10000
zookeeper.connect=localhost:2181
rest.port=18083
# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins
# (connectors, converters, transformations). The list should consist of top level directories that include
# any combination of:
# a) directories immediately containing jars with plugins and their dependencies
# b) uber-jars with plugins and their dependencies
# c) directories immediately containing the package directory structure of classes of plugins and their dependencies
# Note: symlinks will be followed to discover dependencies or plugins.
# Examples:
# plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors,
plugin.path=/usr/share/java/test
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
bootstrap.servers=localhost:9092
This works flawlessly with my local Kafka. Now I want to run it against a remote MSK Kafka cluster.
As there is no built-in support for new Kafka Connect plugins within MSK, I am having difficulty making my Kafka Connect MongoDB source plugin work. To point the connector from my local machine at the MSK cluster, I made the following modifications:
At the connector properties level:
name=mongodb-source-connectorszes
connector.class=io.debezium.connector.mongodb.MongoDbConnector
mongodb.hosts=test/localhost:27017 #keeping the same local mongo
database.history.kafka.bootstrap.servers=remote-msk-kakfa-brokers:9092
mongodb.name=mongo_conn
database.whitelist=test
initial.sync.max.threads=1
tasks.max=1
At the Connect worker level, I made the following modifications:
# The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will
# need to configure these based on the format they want their data in when loaded from or stored into Kafka
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Converter-specific settings can be passed in by prefixing the Converter's setting with the converter we want to apply
# it to
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
# Flush much faster than normal, which is useful for testing/debugging
offset.flush.interval.ms=10000
zookeeper.connect=remote-msk-kakfa-zookeeper:9092:2181
rest.port=18083
# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins
# (connectors, converters, transformations). The list should consist of top level directories that include
# any combination of:
# a) directories immediately containing jars with plugins and their dependencies
# b) uber-jars with plugins and their dependencies
# c) directories immediately containing the package directory structure of classes of plugins and their dependencies
# Note: symlinks will be followed to discover dependencies or plugins.
# Examples:
# plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors,
plugin.path=/usr/share/java/test
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
bootstrap.servers=remote-msk-kakfa-brokers:9092:9092
But it seems that this is not enough, as I am getting the following error:
[2020-01-31 11:58:01,619] WARN [Producer clientId=producer-1] Error while fetching metadata with correlation id 118 : {mongo_conn.test.docs=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient:1031)
[2020-01-31 11:58:01,731] WARN [Producer clientId=producer-1] Error while fetching metadata with correlation id 119 : {mongo_conn.test.docs=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient:1031)
Usually, I can reach the Kafka MSK cluster from my local machine (via a VPN and sshuttle to an EC2 instance). For example, to list topics in the remote MSK cluster, from my local Kafka installation folder I just run:
bin/kafka-topics.sh --list --zookeeper remote-zookeeper-server:2181
This command works perfectly, without changing server.properties on my local machine. Any idea how to solve this in order to point the Kafka Debezium MongoDB source at the Kafka MSK cluster?

It's recommended to use the connect-distributed script and properties for running Connect/Debezium.
Anything that says zookeeper.connect should be removed (only Kafka brokers use that). Anything that says bootstrap.servers should point at the address MSK gives you.
If you're getting connection errors, make sure you check firewall / VPC settings.
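For illustration, a distributed worker config pointed at MSK could look roughly like the sketch below (the broker hostnames, group id, and storage topic names are placeholders, not values taken from your cluster):
# connect-distributed.properties (sketch; adjust hostnames/ports to what MSK gives you)
bootstrap.servers=b-1.your-msk-cluster.amazonaws.com:9092,b-2.your-msk-cluster.amazonaws.com:9092
group.id=mongo-connect-cluster
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# in distributed mode Connect stores offsets/configs/status in Kafka topics, not in a local file
offset.storage.topic=connect-offsets
config.storage.topic=connect-configs
status.storage.topic=connect-status
plugin.path=/usr/share/java/test
# note: no zookeeper.connect here -- Connect workers only talk to the brokers
The connector itself is then submitted as JSON to the worker's REST API (POST /connectors) instead of being passed as a properties file on the command line.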

Related

Change data capture from PostgreSQL to Kafka topics using standalone mode Kafka Connect

I have been trying to get data from PostgreSQL into Kafka topics using the following command: bin/connect-standalone.sh config/connect-standalone.properties config/postgres.properties, but am facing several issues with it.
Here are the contents of my postgres.properties file:
name=cdc_demo
connector.class=io.debezium.connector.postgresql.PostgresConnector
tasks.max=1
plugin.name=decoderbufs
slot.name=debezium
slot.drop_on_stop=false
database.hostname=localhost
database.port=5432
database.user=postgres
database.password=XXXXX
database.dbname=snehildb
time.precision.mode=adaptive
database.sslmode=disable
database.server.name=localhost:5432/snehildb
table.whitelist=public.students
decimal.handling.mode=precise
topic.creation.enable=true
Here are the contents of connect-standalone.properties:
# These are defaults. This file just demonstrates how to override some settings.
bootstrap.servers=localhost:9092
# The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will
# need to configure these based on the format they want their data in when loaded from or stored into Kafka
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Converter-specific settings can be passed in by prefixing the Converter's setting with the converter we want to apply
# it to
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
# Flush much faster than normal, which is useful for testing/debugging
offset.flush.interval.ms=10000
# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins
# (connectors, converters, transformations). The list should consist of top level directories that include
# any combination of:
# a) directories immediately containing jars with plugins and their dependencies
# b) uber-jars with plugins and their dependencies
# c) directories immediately containing the package directory structure of classes of plugins and their dependencies
# Note: symlinks will be followed to discover dependencies or plugins.
# Examples:
# plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors,
plugin.path=/home/azureuser/plugins
I am getting several warnings, but here are the three main errors that I am unable to resolve:
ERROR Postgres server wal_level property must be "logical" but is: replica
(io.debezium.connector.postgresql.PostgresConnector:101)
(org.apache.kafka.common.config.AbstractConfig:361)
ERROR Failed to create job for config/postgres.properties
(org.apache.kafka.connect.cli.ConnectStandalone:110)
ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:121)
I am new to Kafka and it would be very helpful if someone could point out my mistakes.
Debezium requires wal_level to be logical:
https://www.postgresql.org/docs/9.6/runtime-config-wal.html
Take a look inside the Postgres connector at the class io.debezium.connector.postgresql.PostgresConnector.java in the Debezium repo:
https://github.com/debezium/debezium/blob/master/debezium-connector-postgres/src/main/java/io/debezium/connector/postgresql/PostgresConnector.java
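If you have admin access to the PostgreSQL server, one way to change this is to edit postgresql.conf and restart the server (a sketch; the file location and restart command depend on your installation, and managed services such as RDS expose this through a parameter group instead):
# in postgresql.conf, then restart PostgreSQL
wal_level = logical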

Kafka Connect JDBC source connector data is stored as an encoded string

I am new to Kafka and am exploring Kafka Connect in distributed mode. I have some issues, which I have listed below.
Data from my Oracle table is stored as encoded string values (for example, one of my columns, an integer with value 60015, is stored as "AN+w").
If I use the Avro converter in the worker configuration, Kafka Connect throws the error "Invalid decimal scale 127 (greater than precision 64)".
Below is my configuration:
Worker Configuration:
##
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
# This file contains some of the configurations for the Kafka Connect distributed worker. This file is intended
# to be used with the examples, and some settings may differ from those used in a production system, especially
# the `bootstrap.servers` and those specifying replication factors.
# A list of host/port pairs to use for establishing the initial connection to the Kafka cluster.
bootstrap.servers=192.168.220.128:9092
# unique name for the cluster, used in forming the Connect cluster group. Note that this must not conflict with consumer group IDs
group.id=my-example-connect-cluster
# The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will
# need to configure these based on the format they want their data in when loaded from or stored into Kafka
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Converter-specific settings can be passed in by prefixing the Converter's setting with the converter we want to apply
# it to
key.converter.schemas.enable=true
value.converter.schemas.enable=false
# Topic to use for storing offsets. This topic should have many partitions and be replicated and compacted.
# Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create
# the topic before starting Kafka Connect if a specific topic configuration is needed.
# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value.
# Since this means there must be at least as many brokers as the maximum replication factor used, we'd like to be able
# to run this example on a single-broker cluster and so here we instead set the replication factor to 1.
offset.storage.topic=connect-offsets-dm
offset.storage.replication.factor=1
#offset.storage.partitions=25
# Topic to use for storing connector and task configurations; note that this should be a single partition, highly replicated,
# and compacted topic. Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create
# the topic before starting Kafka Connect if a specific topic configuration is needed.
# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value.
# Since this means there must be at least as many brokers as the maximum replication factor used, we'd like to be able
# to run this example on a single-broker cluster and so here we instead set the replication factor to 1.
config.storage.topic=connect-configs-dm
config.storage.replication.factor=1
# Topic to use for storing statuses. This topic can have multiple partitions and should be replicated and compacted.
# Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create
# the topic before starting Kafka Connect if a specific topic configuration is needed.
# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value.
# Since this means there must be at least as many brokers as the maximum replication factor used, we'd like to be able
# to run this example on a single-broker cluster and so here we instead set the replication factor to 1.
status.storage.topic=connect-status-dm
status.storage.replication.factor=1
#status.storage.partitions=5
# Flush much faster than normal, which is useful for testing/debugging
offset.flush.interval.ms=10000
# These are provided to inform the user about the presence of the REST host and port configs
# Hostname & Port for the REST API to listen on. If this is set, it will bind to the interface used to listen to requests.
#rest.host.name=
rest.port=8083
# The Hostname & Port that will be given out to other workers to connect to i.e. URLs that are routable from other servers.
#rest.advertised.host.name=
#rest.advertised.port=
# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins
# (connectors, converters, transformations). The list should consist of top level directories that include
# any combination of:
# a) directories immediately containing jars with plugins and their dependencies
# b) uber-jars with plugins and their dependencies
# c) directories immediately containing the package directory structure of classes of plugins and their dependencies
# Examples:
# plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors,
plugin.path=/home/bjanakiraman/Desktop/confluent-5.3.0/share/java
connect_plugin_path=/home/bjanakiraman/Desktop/confluent-5.3.0/share/java/kafka-connect-jdbc
Connector configuration:
{
  "name": "test-oracle-jdbc-connector",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "tasks.max": "1",
    "connection.url": "MY-URL",
    "connection.user": "username",
    "connection.password": "password",
    "mode": "incrementing",
    "incrementing.column.name": "ID",
    "topic.prefix": "test2-",
    "name": "test-oracle-jdbc-connector",
    "schema.pattern": "ABC",
    "table.whitelist": "TABLENAME"
  }
}
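In distributed mode, a connector config like the one above is typically created by POSTing the JSON to the Connect worker's REST API; for example, assuming the JSON is saved as test-oracle-jdbc-connector.json and the worker's REST interface is reachable on the broker host at the rest.port=8083 set above (the hostname is an assumption):
curl -X POST -H "Content-Type: application/json" \
  --data @test-oracle-jdbc-connector.json \
  http://192.168.220.128:8083/connectors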
The following is the full error log when I use the Avro converter in my connector:
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
at org.apache.kafka.connect.runtime.WorkerSourceTask.convertTransformedRecord(WorkerSourceTask.java:270)
at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:294)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:229)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException: Invalid decimal scale: 127 (greater than precision: 64)
at org.apache.avro.LogicalTypes$Decimal.validate(LogicalTypes.java:217)
at org.apache.avro.LogicalType.addToSchema(LogicalType.java:70)
at org.apache.avro.LogicalTypes$Decimal.addToSchema(LogicalTypes.java:182)
at io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:944)
at io.confluent.connect.avro.AvroData.addAvroRecordField(AvroData.java:1059)
at io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:900)
at io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:732)
at io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:726)
at io.confluent.connect.avro.AvroData.fromConnectData(AvroData.java:365)
at io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:80)
at org.apache.kafka.connect.runtime.WorkerSourceTask.lambda$convertTransformedRecord$2(WorkerSourceTask.java:270)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
... 11 more
Please kindly help me in resolving this.
Check whether you have any columns of NUMBER type without a defined precision or scale. I got around this issue by changing my column data type to NUMBER(38,0), which is INTEGER.
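Both symptoms likely come from the same root cause: an unconstrained Oracle NUMBER is mapped to Connect's Decimal (bytes) logical type, which the JSON converter renders as a base64 string and which the Avro converter rejects because of the reported scale. In Oracle, the column change suggested above would be DDL along these lines (table/column names taken from the connector config above as an example; Oracle may refuse the change if existing values do not fit the new type):
ALTER TABLE TABLENAME MODIFY (ID NUMBER(38,0));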

Missing required configuration "key.converter" which has no default

When I try to start Kafka Connect for the Elasticsearch stream reactor in standalone mode, I receive the following error:
Exception in thread "main" org.apache.kafka.common.config.ConfigException: Missing required configuration "key.converter" which has no default value.
at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:463)
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:453)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:62)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:75)
at org.apache.kafka.connect.runtime.WorkerConfig.<init>(WorkerConfig.java:218)
at org.apache.kafka.connect.runtime.distributed.DistributedConfig.<init>(DistributedConfig.java:272)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:72)
How can I solve this error?
EDIT 01/05/2018
Sorry, I will try to be more specific. I use the stream reactor connector:
https://github.com/Landoop/stream-reactor
This is the command that I launch from an EC2 instance on which the single broker of my Kafka runs:
./bin/connect-standalone.sh config/elastic-config.properties config/connect-standalone.properties
In order, this is connect-standalone.properties:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# These are defaults. This file just demonstrates how to override some settings.
bootstrap.servers=localhost:9092
# The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will
# need to configure these based on the format they want their data in when loaded from or stored into Kafka
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Converter-specific settings can be passed in by prefixing the Converter's setting with the converter we want to apply
# it to
key.converter.schemas.enable=true
value.converter.schemas.enable=true
# The internal converter used for offsets and config data is configurable and must be specified, but most users will
# always want to use the built-in default. Offset and config data is never visible outside of Kafka Connect in this format.
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets
# Flush much faster than normal, which is useful for testing/debugging
offset.flush.interval.ms=10000
plugin.path=/home/ubuntu/kafka_2.11-1.0.1/libs
And this is the other file:
name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=test
topic.index.map=test:test_index
connection.url=myurl
type.name=log
key.ignore=true
schema.ignore=true
The error kinda says it all: you're missing a required configuration entry for key.converter. This tells Kafka Connect how to deserialise the data on the Kafka topic (JSON or Avro, usually).
You can see an example of a valid connector configuration for Elasticsearch here in this gist. If you update your question to include the configuration you're using, I can point out how to incorporate it.
After seeing your config, the cause of your error is that you're invoking Connect with your config files in the wrong order, and hence Connect can't find the config it is expecting.
Should be:
./bin/connect-standalone.sh config/connect-standalone.properties config/elastic-config.properties
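Once it starts cleanly, you can check that the worker and connector are up through the Connect REST API (8083 is the default standalone port, since your worker config does not override rest.port):
curl http://localhost:8083/connectors
# should list ["elasticsearch-sink"]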
Read more about streaming from Kafka to Elasticsearch in this article, and this general series on using Kafka Connect:
https://www.confluent.io/blog/simplest-useful-kafka-connect-data-pipeline-world-thereabouts-part-1/
https://www.confluent.io/blog/blogthe-simplest-useful-kafka-connect-data-pipeline-in-the-world-or-thereabouts-part-2/
https://www.confluent.io/blog/simplest-useful-kafka-connect-data-pipeline-world-thereabouts-part-3/

Read the content of a file with Kafka producer - FileSource Connector

How can one use a Kafka producer to read the content of a file? The typical solution found here (pipe the file into the producer with |) looks dirty and ugly.
I recently found a more decent solution than piping the content of a file into the producer shell, which is to use the FileSource Connector.
According to the link, the FileSource Connector aims to solve exactly the use case of "reading the data of a file into a producer", like examining the content of a log file and raising an alert when [ERROR] or [FATAL] is encountered.
The full command is (assuming we are in the root folder of Kafka):
bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties
Two properties files to configure:
config/connect-standalone.properties
config/connect-file-source.properties
The first one configures the standalone Connect worker. It looks like this:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# These are defaults. This file just demonstrates how to override some settings.
bootstrap.servers=localhost:9092
# The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will
# need to configure these based on the format they want their data in when loaded from or stored into Kafka
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Converter-specific settings can be passed in by prefixing the Converter's setting with the converter we want to apply
# it to
key.converter.schemas.enable=false
value.converter.schemas.enable=false
# The internal converter used for offsets and config data is configurable and must be specified, but most users will
# always want to use the built-in default. Offset and config data is never visible outside of Kafka Connect in this format.
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets
# Flush much faster than normal, which is useful for testing/debugging
offset.flush.interval.ms=10000
# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins
# (connectors, converters, transformations). The list should consist of top level directories that include
# any combination of:
# a) directories immediately containing jars with plugins and their dependencies
# b) uber-jars with plugins and their dependencies
# c) directories immediately containing the package directory structure of classes of plugins and their dependencies
# Note: symlinks will be followed to discover dependencies or plugins.
# Examples:
# plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors,
#plugin.path=
Quite straightforward. Only two things to pay attention to:
bootstrap.servers=localhost:9092: the Kafka bootstrap server
(internal.)key/value.converter.schemas.enable=false: You must set them to false to parse string lines in the file.
The second file is simpler:
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=/tmp/test.txt
topic=connect-test
file: which file to read
topic: the topic to create, for consumers to listen to
If you want to consume the content with Storm, that is enough.
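Before hooking a consumer such as Storm up to it, you can sanity-check that lines from the file are arriving with the console consumer (local broker assumed, as in the worker config above):
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic connect-test --from-beginning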
If, instead of reading a file, you want to write content from Kafka to a file, you use the FileSink Connector. I haven't used it personally, but I guess it works likewise, just on the consumer side. The config file is config/connect-file-sink.properties.
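For reference, a minimal sketch of config/connect-file-sink.properties, mirroring the stock Kafka example (file path and topic are just placeholders):
name=local-file-sink
connector.class=FileStreamSink
tasks.max=1
file=/tmp/test.sink.txt
topics=connect-test
Note that the sink takes topics (plural), since a sink connector can read from several topics.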

Confluent server went down

I am a beginner with Confluent and Kafka.
When I use the Confluent Platform on the slave node server (distributed mode, but only on one server), the Confluent server (only the server; Kafka itself keeps working properly) goes down from time to time. Since I am new to this, I make mistakes when creating the sources and sinks; does that have anything to do with the breakdown?
Here is my config:
# Sample configuration for a distributed Kafka Connect worker that uses Avro serialization and
# integrates the the Schema Registry. This sample configuration assumes a local installation of
# Confluent Platform with all services running on their default ports.
# Bootstrap Kafka servers. If multiple servers are specified, they should be comma-separated.
bootstrap.servers=localhost:9092
# The group ID is a unique identifier for the set of workers that form a single Kafka Connect
# cluster
group.id=connect-cluster
# The converters specify the format of data in Kafka and how to translate it into Connect data.
# Every Connect user will need to configure these based on the format they want their data in
# when loaded from or stored into Kafka
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:18081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:18081
# The internal converter used for offsets and config data is configurable and must be specified,
# but most users will always want to use the built-in default. Offset and config data is never
# visible outside of Connect in this format.
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
# Kafka topic where connector configuration will be persisted. You should create this topic with a
# single partition and high replication factor (e.g. 3)
config.storage.topic=connect-configs
# Kafka topic where connector offset data will be persisted. You should create this topic with many
# partitions (e.g. 25) and high replication factor (e.g. 3)
offset.storage.topic=connect-offsets
# Kafka topic where connector status data will be persisted. You should create this topic with many
# partitions (e.g. 25) and high replication factor (e.g. 3)
status.storage.topic=connect-statuses
# Confluent Control Center Integration -- uncomment these lines to enable Kafka client interceptors
# that will report audit data that can be displayed and analyzed in Confluent Control Center
producer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor
consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
I am just curious about this, because the Confluent Platform is a well-designed project supported by a lot of experts and, more importantly, it is commercial.
Feiran