Kafka Connect JDBC sink connector issue - apache-kafka

Getting the below error while running the JDBC sink connector:
[2020-01-08 15:05:39,271] ERROR Plugin class loader for connector: 'io.confluent.connect.jdbc.JdbcSinkConnector' was not found. Returning: org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader@6f2cfcc2 (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:165)
[2020-01-08 15:05:39,272] INFO Finished creating connector test-sink (org.apache.kafka.connect.runtime.Worker:273)
[2020-01-08 15:05:39,273] ERROR Plugin class loader for connector: 'io.confluent.connect.jdbc.JdbcSinkConnector' was not found. Returning: org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader@6f2cfcc2 (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:165)
[2020-01-08 15:05:39,273] INFO SinkConnectorConfig values:
I have set the plugin path properly, as described in the documentation.

I had the same issue as you and just solved it. The point here is that you should not copy the connector jar file into the Kafka libs directory. You should set the CLASSPATH when running the command, like this:
env CLASSPATH=./* connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties config/quickstart-couchbase-source.properties
or set plugin.path in the worker .properties file, for example:
plugin.path=/path_to_the_plugin_jar_file
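For reference, a minimal sketch of a standalone worker file with plugin.path set (the directory below is just a placeholder; plugin.path normally points at a directory containing the connector jars, or at an uber-jar, rather than at the Kafka libs directory):
# /opt/connectors is a hypothetical location holding the connector jars
bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
offset.storage.file.filename=/tmp/connect.offsets
plugin.path=/opt/connectors
Restart the Connect worker after changing plugin.path so the plugin is rediscovered.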
Hope this helps.

Related

Kafka Snowflake Connector - Stopping after connector error

I've been checking all the Kafka Snowflake connector posts, but none of them talk about the issue I'm having.
I installed Kafka locally, with ZooKeeper, and I also want to run a Snowflake connector to copy data from Kafka to Snowflake.
I run ZooKeeper, and everything looks right:
zookeeper log
Then I launch the Kafka server, which looks correct as well:
server log
However, when I launch the snowflake-kafka-connector:
sh connect-standalone.sh /usr/local/kafka/kafka_2.11-1.1.0/config/connect-standalone.properties /usr/local/kafka/kafka_2.11-1.1.0/config/SF_connect.properties
it breaks like this:
[2022-05-27 10:41:37,380] INFO Finished creating connector TEST_CONNECTOR (org.apache.kafka.connect.runtime.Worker:224)
[2022-05-27 10:41:37,380] INFO Skipping reconfiguration of connector kafkatest since it is not running (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:285)
[2022-05-27 10:41:37,381] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:113)
java.lang.NullPointerException: Cannot invoke "org.apache.kafka.connect.runtime.rest.entities.ConnectorInfo.name()" because the return value of "org.apache.kafka.connect.runtime.Herder$Created.result()" is null
at org.apache.kafka.connect.cli.ConnectStandalone$1.onCompletion(ConnectStandalone.java:104)
at org.apache.kafka.connect.cli.ConnectStandalone$1.onCompletion(ConnectStandalone.java:98)
at org.apache.kafka.connect.util.ConvertingFutureCallback.onCompletion(ConvertingFutureCallback.java:44)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:185)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:107)
[2022-05-27 10:41:37,382] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:65)
[2022-05-27 10:41:37,382] INFO Stopping REST server (org.apache.kafka.connect.runtime.rest.RestServer:211)
I tried to find information on what the problem is, but I can't find anything. Can you please help me with that?
This is the sf_connector.properties file:
sf_connector.properties
Thanks!

Snowflake Kafka connector config issue

I'm following the steps in this guide: Snowflake Connector for Kafka.
The error message I'm getting is
BadRequestException: Connector config {.....} contains no connector type
I am running the command as
sh kafka_2.12-2.3.0/bin/connect-standalone.sh connect-standalone.properties snowflake_kafka_config.json
My config files are:
connect-standalone.properties
bootstrap.servers=localhost:9092
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
plugin.path=/Users/kafka_test/kafka
The jar file snowflake-kafka-connector-0.5.1.jar is in plugin.path.
snowflake_kafka_config.json
{
  "name": "Kafka_Test",
  "Config": {
    "connector.class": "com.snowflake.kafka.connector.SnowflakeSinkConnector",
    "tasks.max": "8",
    "topics": "test",
    "snowflake.topic2table.map": "",
    "buffer.count.records": "1",
    "buffer.flush.time": "60",
    "buffer.size.bytes": "65536",
    "snowflake.url.name": "<url>",
    "snowflake.user.name": "<user_name>",
    "snowflake.private.key": "<private_key>",
    "snowflake.private.key.passphrase": "<pass_phrase>",
    "snowflake.database.name": "<db>",
    "snowflake.schema.name": "<schema>",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "com.snowflake.kafka.connector.records.SnowflakeJsonConverter",
    "value.converter.schema.registry.url": "",
    "value.converter.basic.auth.credentials.source": "",
    "value.converter.basic.auth.user.info": ""
  }
}
Kafka is running on local, I have a producer and consumer up, can see the data flowing.
This is the same question I answered over on the Confluent community Slack, but I'll post it here for reference too :-)
The Connect worker log shows that the connector JAR itself is being loaded, so the `contains no connector type` error is because your config formatting is fubar.
You're running in Standalone mode, but passing in a JSON file, which won't work. My personal opinion is to always use distributed mode, even if it's just a single node. Check this out if you need a recap on standalone vs distributed: http://rmoff.dev/ksldn19-kafka-connect
If you must use standalone then you need your connector config (snowflake_kafka_config.json) to be a properties file like this:
param1=argument1
param2=argument2
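As a rough sketch, taking the keys from your JSON above (values are still placeholders, and not every optional key is repeated here), the standalone connector config would become a properties file along these lines:
# sketch only - keys copied from the JSON config in the question
name=Kafka_Test
connector.class=com.snowflake.kafka.connector.SnowflakeSinkConnector
tasks.max=8
topics=test
snowflake.url.name=<url>
snowflake.user.name=<user_name>
snowflake.private.key=<private_key>
snowflake.database.name=<db>
snowflake.schema.name=<schema>
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=com.snowflake.kafka.connector.records.SnowflakeJsonConverter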
You can see valid JSON examples (if you use distributed mode) here: https://github.com/confluentinc/demo-scene/blob/master/kafka-connect-zero-to-hero/demo_zero-to-hero-with-kafka-connect.adoc#stream-data-from-kafka-to-elasticsearch
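If you do go distributed, the connector config is submitted to the worker's REST API rather than on the command line, and the JSON wrapper uses a lowercase "config" key (your file has "Config"). A minimal sketch, assuming the worker's REST port is the default 8083:
# submit the (corrected) JSON file to a distributed Connect worker
curl -X POST -H "Content-Type: application/json" \
  --data @snowflake_kafka_config.json \
  http://localhost:8083/connectors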

Error trying to start zookeeper server - Confluent setup

I am trying to set up Confluent-4.1.1 on Ubuntu 16.04. To start the ZooKeeper server, I ran ./bin/zookeeper-server-start ./etc/kafka/zookeeper.properties.txt from the root directory of Confluent, following this tutorial.
The error that comes up is:
log4j:ERROR Could not read configuration file from URL [file:./bin/../config/log4j.properties].
java.io.FileNotFoundException: ./bin/../config/log4j.properties (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:146)
at java.io.FileInputStream.<init>(FileInputStream.java:101)
at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.<clinit>(QuorumPeerMain.java:64)
log4j:ERROR Ignoring configuration file [file:./bin/../config/log4j.properties].
log4j:WARN No appenders could be found for logger (org.apache.zookeeper.server.quorum.QuorumPeerConfig).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
I am new to Kafka, and I have no clue what this means. Any help in resolving this would be appreciated.
The link you're following is only for Apache Kafka, not Confluent, though they should work similarly, at least for starting ZooKeeper.
If you've downloaded the Confluent distribution, though, and want a single-node cluster, you can use the Confluent CLI.
To start Zookeeper, Kafka, and the rest of the Confluent Platform, run
./bin/confluent start
Otherwise, note that the ZooKeeper startup script doesn't use a .txt file, and it might be unable to detect where you've extracted the tarball. Alternatively, you can install Confluent with apt like a normal software package:
https://docs.confluent.io/current/installation/installing_cp/deb-ubuntu.html
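If you'd still rather start ZooKeeper by hand from the Confluent root, the shipped properties file has no .txt extension, so the command would be roughly (a sketch, assuming the default tarball layout):
# run from the root of the extracted Confluent distribution
./bin/zookeeper-server-start ./etc/kafka/zookeeper.properties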
According to the documentation in the link:
1. Run these commands after replacing path-to-confluent with your own path:
export CONFLUENT_HOME=<path-to-confluent>
export PATH="${CONFLUENT_HOME}/bin:$PATH"
(These commands make the confluent command recognizable from the terminal.)
2. Run the command below:
confluent local services start
(This command starts all the services, including ZooKeeper, Kafka, Schema Registry, etc.)

Kafka org.apache.kafka.connect.converters.ByteArrayConverter doesn't work as values for key.converter and value.converter

I'm trying to build a pipeline where I have to move binary data from a Kafka topic to a Kinesis stream without transforming it. So I'm planning to use ByteArrayConverter in the worker properties setup. But I'm getting the following error, although I could see the ByteArrayConverter class here in version 0.11.0. I cannot find the same class under 3.2.x :(
Any help would be much appreciated.
key.converter=io.confluent.connect.replicator.util.ByteArrayConverter
value.converter=io.confluent.connect.replicator.util.ByteArrayConverter
Exception in thread "main" org.apache.kafka.common.config.ConfigException: Invalid value io.confluent.connect.replicator.util.ByteArrayConverter for configuration key.converter: Class io.confluent.connect.replicator.util.ByteArrayConverter could not be found.
at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:672)
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:418)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:55)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:62)
at org.apache.kafka.connect.runtime.WorkerConfig.<init>(WorkerConfig.java:156)
at org.apache.kafka.connect.runtime.distributed.DistributedConfig.<init>(DistributedConfig.java:198)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:65)
org.apache.kafka.connect.converters.ByteArrayConverter was only added in Apache Kafka 0.11 (which is Confluent 3.3). If you are running a Confluent distro earlier than 3.3, then you will need the Confluent Enterprise distro (not Confluent Open Source) and should use the io.confluent.connect.replicator.util.ByteArrayConverter converter.
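For reference, once you are on Apache Kafka 0.11+ (Confluent 3.3+), the worker settings from the question would become (a sketch of the two converter lines only):
key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter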

Getting exception while instantiating KafkaProducer

I am using IBM Bluemix implementation of the Kafka Broker.
I am creating the KafkaProducer with the following properties:
key.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
value.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
bootstrap.servers=xxxx.xxxxxx.xxxxxx.xxxxxx.bluemix.net:xxxx
client.id=messagehub
acks=-1
security.protocol=SASL_SSL
ssl.protocol=TLSv1.2
ssl.enabled.protocols=TLSv1.2
ssl.truststore.location=xxxxxxxxxxxxxxxxx
ssl.truststore.password=xxxxxxxx
ssl.truststore.type=JKS
ssl.endpoint.identification.algorithm=HTTPS
KafkaProducer<byte[], byte[]> kafkaProducer = new KafkaProducer<byte[], byte[]>(props);
With this I got the following exception:
org.apache.kafka.common.KafkaException: org.apache.kafka.clients.producer.internals.DefaultPartitioner is not an instance of org.apache.kafka.clients.producer.Partitioner
After reading the following blog post: http://blog.rocana.com/kafkas-defaultpartitioner-and-byte-arrays, I added the following line to my property file, even though I was using the new API:
partitioner.class=kafka.producer.ByteArrayPartitioner
Now I am getting this exception:
org.apache.kafka.common.KafkaException: Could not instantiate class kafka.producer.ByteArrayPartitioner Does it have a public no-argument constructor?
It looks like ByteArrayPartitioner does not have a default constructor.
Any idea what I am missing here?
Thanks
Madhu
As I was using the KafkaProducer API, I did not need the
partitioner.class=kafka.producer.ByteArrayPartitioner
property. The issue was that there were two copies of the kafka-clients jar. We have configured our installation such that all library jar files are in an external shared directory, but due to a POM configuration error the war file also had a copy of the kafka-clients jar in its lib directory. Once I fixed this, it worked fine.
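As an illustration of how the duplicate can be spotted (the war file name here is hypothetical), listing the archive shows whether a second kafka-clients jar is packaged inside it:
# myapp.war is a hypothetical name for the deployed war
unzip -l myapp.war | grep kafka-clients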
Madhu