Confluent Platform connectors missing - apache-kafka

I have downloaded and run Confluent Platform 6.1.0. When I run the command to list the available connectors, I cannot see any JDBC or S3 connectors; all I can see are the file connectors and Replicator. What do I need to do to enable the S3 and JDBC connectors?

As of 6.x, the Confluent-written connectors are no longer packaged with Confluent Platform; you must use the confluent-hub CLI to install them, or download them on your own.
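For example, assuming confluent-hub is on your PATH (it ships in the Confluent Platform bin directory), the two connectors mentioned in the question can be installed like this:

```shell
# Install the JDBC source/sink connector from Confluent Hub
confluent-hub install confluentinc/kafka-connect-jdbc:latest

# Install the S3 sink connector
confluent-hub install confluentinc/kafka-connect-s3:latest
```

After installing, restart your Connect workers so the new plugins are picked up.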

Related

Use kafka connect without docker

I just finished the Udemy course on Kafka Connect.
The course is based on Docker, but what should I do if I don't want to use Docker?
Kafka Connect requires a JVM. Although you can run it with only a JRE, I recommend installing a JDK (such as OpenJDK). Download the archive from https://packages.confluent.io/archive/6.2/ (or whichever version you prefer), and run it as a Java process, passing a properties file as the configuration.
You don't need Confluent Platform. Download Kafka from the Apache website; it comes with all the commands needed to run Kafka Connect. The only requirement is Java (version 11 is recommended, although 17 is the latest supported).
To install connectors, you can use confluent-hub without Confluent Platform.
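As a sketch of the Docker-free route described above (the Kafka version and mirror URL below are illustrative examples, not from the original posts):

```shell
# Download and unpack Apache Kafka (version and URL are examples)
curl -O https://archive.apache.org/dist/kafka/3.1.0/kafka_2.13-3.1.0.tgz
tar -xzf kafka_2.13-3.1.0.tgz
cd kafka_2.13-3.1.0

# Start a standalone Connect worker using the sample configs
# that ship with Apache Kafka
bin/connect-standalone.sh config/connect-standalone.properties \
    config/connect-file-source.properties
```

This requires only Java on the machine; no Docker and no Confluent Platform.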

How to run kafka s3 sink connector in confluent 6.2.0

I have installed Confluent 6.2.0 on my 3 Kafka nodes and also installed confluentinc-kafka-connect-s3-10.0.1 on all 3 nodes, and I have modified quickstart-s3.properties, but I am not sure how to start the connector...
The same answer applies with or without Confluent Platform.
You'd run one of the included bin/connect-* scripts to start a Connect worker.
Note that the quickstart file is only applicable for standalone mode.
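As a sketch, the paths below assume a default Confluent Platform layout and are illustrative, not taken from the original post:

```shell
# Standalone mode: pass the worker config plus the connector's
# quickstart properties file
bin/connect-standalone \
    etc/kafka/connect-standalone.properties \
    ./quickstart-s3.properties

# Distributed mode (typical for a 3-node setup): start a worker on each
# node; the connector config is then submitted as JSON via the REST API,
# not as a properties file on the command line
bin/connect-distributed etc/kafka/connect-distributed.properties
```

In distributed mode you would POST the S3 connector configuration to the worker's REST endpoint (port 8083 by default) rather than using quickstart-s3.properties directly.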

Install Debezium SQL Server CDC Source Connector

I am following the link below to install the SQL Server CDC connector
https://www.confluent.io/hub/debezium/debezium-connector-sqlserver
But I get the error message:
Unable to detect Confluent Platform installation. Specify
--component-dir and --worker-configs explicitly.
Error: Invalid options or arguments
This is on my development machine, where I am trying to set up the connector.
Kafka folder
~/kafka
Kafka plugin folder
/usr/local/share/kafka/plugins
I also tried to install it manually by following https://docs.confluent.io/home/connect/install.html,
but I'm not sure about the plugin.path setting.
OS: Ubuntu 20.04
Can you help?
not sure about the plugin.path
Using the confluent-hub command sets that for you. As you've already said, your plugin folder is /usr/local/share/kafka/plugins, so you give that as the --component-dir argument.
Then you need to find your Kafka Connect properties file and pass it to --worker-configs.
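Putting that together, a sketch of the full command, using the paths from the question (the worker config path is an example for a plain Apache Kafka install, so adjust it to your setup):

```shell
# Install the Debezium SQL Server connector into the plugin folder
# named in the question; --worker-configs points at the Connect
# worker properties file so plugin.path gets updated for you
confluent-hub install debezium/debezium-connector-sqlserver:latest \
    --component-dir /usr/local/share/kafka/plugins \
    --worker-configs ~/kafka/config/connect-distributed.properties
```

If you install manually instead, the equivalent is to add `plugin.path=/usr/local/share/kafka/plugins` to your worker properties file yourself and restart the worker.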

confluent CLI - windows environment

I am exploring whether we could run Confluent on Windows. As per the following articles, it seems Windows is not supported.
https://docs.confluent.io/current/installation/versions-interoperability.html#operating-systems
Confluent Platform in Windows
However, when I look at the Confluent CLI, Windows seems to be supported:
https://docs.confluent.io/current/cli/installing.html#tarball-installation
But again, there is a phrase here saying that Windows is not supported:
On non-Windows platforms, the Confluent CLI offers confluent local commands (designed to operate on a local install of Confluent Platform) which require Java, and JDK version 1.8 or 1.11 is recommended. If you have multiple versions of Java installed, set JAVA_HOME to the version you want Confluent Platform to use.
So, the questions are:
1. Is Windows supported, as per the latest release? (I doubt it is.)
2. What CLI is supported for Windows, and what can it be used for?
3. It also seems Windows is not supported from a local development perspective either? (I mean, is it possible to issue "confluent local" commands?)
PS : Please give inputs without referring to virtualized environments such as Docker
Yes, you are right: Windows is not supported.
The CLI you get for Windows is only for managing and retrieving metadata from a remote Confluent Platform. First, you will have to log in to Confluent by issuing the command confluent.exe login --url <url>.
More info at confluent-login.
The following are the commands you get with the Confluent Windows distribution:
Available Commands:
audit-log Manage audit log configuration.
cluster Retrieve metadata about Confluent Platform clusters.
completion Print shell completion code.
connect Manage Connect.
help Help about any command
iam Manage RBAC, ACL and IAM permissions.
kafka Manage Apache Kafka.
ksql Manage ksqlDB applications.
login Log in to Confluent Platform (required for RBAC).
logout Log out of Confluent Platform.
schema-registry Manage Schema Registry.
secret Manage secrets for Confluent Platform.
update Update the Confluent CLI.
version Print the Confluent CLI version.
And Windows is also not supported for local development; you can't issue commands like confluent local.
I'm facing the same challenge. I have Confluent Platform on a Docker/Windows 10 machine and want to access the CLI using WSL, as stated here:
https://docs.confluent.io/current/cli/installing.html
The issue is that when running commands in the Ubuntu terminal, I get "unknown command" when invoking confluent.
The confluent CLI is a facade for a local installation of the Confluent variant of Kafka, where the local command lets you manage your local installation.
Look here: Confluent CLI local documentation
It assumes that you have the product installed locally. I installed it by following the Confluent Ubuntu local installation page and got all components working, well, almost.
So it can work on Windows 10, but through WSL only. There are some explanations of how to install Kafka on Windows, but the whole idea behind Confluent is to use Confluent Cloud for production environments.

Download Kafka or confluent platform in package distributed CDH 5.16

I have installed CDH 5.16 Express using packages on a RHEL server. I am trying to install Kafka now, and I observed that it can be installed only if CDH is installed as parcels.
1) Is it possible to install Kafka or Confluent Platform separately on the server and use it along with the CDH components?
2) Is there any other workaround to install Kafka using Cloudera Manager?
In order to use CDK 4.0 (the Cloudera distribution of Kafka) with Cloudera 5.13, I was forced to install CDK 4.0 as a parcel.
I had a Cloudera quickstart Docker VM that I downloaded; it runs without Kerberos authentication. After starting the quickstart VM, I separately installed Kafka from the Apache Kafka website, since the Kafka packaged within Cloudera was an older version. Because this was a non-Kerberos environment, the Kafka server on startup used the ZooKeeper already running in the quickstart VM. This is how I connected Kafka to the Cloudera VM.
If you are new to CDH/CM, I suggest you first try the Kafka service that is bundled within Cloudera. Go to 'Add Service' in the Cloudera drop-down and select Kafka. Enabling this Kafka service gives you a set of brokers for Kafka to run on. Kafka also needs ZooKeeper to run, and ZooKeeper comes by default in Cloudera, so you get a working cluster with Kafka enabled. You can think about moving to the latest version of Kafka (using the approach mentioned above) once you are comfortable with the built-in tools of CDH/CM.