I just finished the Udemy course on Kafka Connect.
The course is based on Docker, but what should I do if I don't want to use Docker?
Kafka Connect requires a JVM. Although you can run it with only a JRE, I recommend installing a JDK (such as OpenJDK). Download the archive from https://packages.confluent.io/archive/6.2/ (or whichever version you prefer) and run it as a Java process, passing a properties file as configuration.
You don't need Confluent Platform. Download Kafka from the Apache website; it comes with all the scripts needed to run Kafka Connect. The only requirement is Java (version 11 is recommended, although 17 is the latest supported).
To install connectors, you can use confluent-hub without Confluent Platform.
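For example, here is a minimal sketch of that setup using a plain Apache Kafka download and the standalone confluent-hub client (the Kafka version, the plugins directory, and my-connector.properties are only illustrative placeholders):

    # download and unpack Apache Kafka (version is an example)
    tar -xzf kafka_2.13-3.1.0.tgz && cd kafka_2.13-3.1.0

    # optionally install a connector plugin with the standalone Confluent Hub client
    confluent-hub install confluentinc/kafka-connect-s3:latest \
      --component-dir ./plugins --worker-configs config/connect-standalone.properties

    # start a standalone Connect worker, passing the worker and connector properties files
    bin/connect-standalone.sh config/connect-standalone.properties my-connector.properties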
Related
I have installed Confluent 6.2.0 on my 3 Kafka nodes, installed confluentinc-kafka-connect-s3-10.0.1 on all 3 nodes, and modified quickstart-s3.properties, but I'm not sure how to start it...
The same answer applies, with or without Confluent Platform.
You'd run one of the included bin/connect-* scripts to start a Connect worker.
The quickstart file is only applicable for standalone mode.
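For example, a rough sketch assuming the Confluent 6.2.0 archive layout (adjust the paths to wherever your install and your quickstart-s3.properties actually live):

    # standalone mode: worker config plus the connector properties file
    bin/connect-standalone etc/kafka/connect-standalone.properties \
      etc/kafka-connect-s3/quickstart-s3.properties

    # distributed mode: start a worker on each node, then submit the connector
    # configuration as JSON to the Connect REST API (port 8083 by default)
    bin/connect-distributed etc/kafka/connect-distributed.properties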
I am exploring whether we can run Confluent on Windows. As per the following articles, it seems Windows is not supported.
https://docs.confluent.io/current/installation/versions-interoperability.html#operating-systems
Confluent Platform in Windows
However, when I look at the Confluent CLI, Windows seems to be supported:
https://docs.confluent.io/current/cli/installing.html#tarball-installation
But again, there is a phrase here about Windows not being supported:
On non-Windows platforms, the Confluent CLI offers confluent local commands (designed to operate on a local install of Confluent Platform) which require Java, and JDK version 1.8 or 1.11 is recommended. If you have multiple versions of Java installed, set JAVA_HOME to the version you want Confluent Platform to use.
So, the questions are:
Is Windows supported, as per the latest release? (I suspect it is not?)
Which CLI is supported for Windows, and what can it be used for?
Is Windows also not supported from a local development perspective? (I mean, is it possible to issue "confluent local" commands?)
PS: Please give inputs without referring to virtualized environments such as Docker.
Yes, you are right, Windows is not supported.
The CLI you get for Windows is only for managing and retrieving metadata from a remote Confluent Platform. First, you will have to log in to Confluent by issuing the command confluent.exe login --url <url>.
More info at confluent-login.
The following are the commands you get with the Confluent Windows distribution:
Available Commands:
audit-log Manage audit log configuration.
cluster Retrieve metadata about Confluent Platform clusters.
completion Print shell completion code.
connect Manage Connect.
help Help about any command
iam Manage RBAC, ACL and IAM permissions.
kafka Manage Apache Kafka.
ksql Manage ksqlDB applications.
login Log in to Confluent Platform (required for RBAC).
logout Log out of Confluent Platform.
schema-registry Manage Schema Registry.
secret Manage secrets for Confluent Platform.
update Update the Confluent CLI.
version Print the Confluent CLI version.
And Windows is also not supported for local development; you can't issue commands like confluent local.
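As a minimal sketch, remote management from Windows looks roughly like this (the MDS URL is a placeholder, and cluster describe is just one example of the commands listed above):

    confluent.exe login --url https://mds.example.com:8090
    confluent.exe cluster describe --url https://mds.example.com:8090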
I'm facing the same challenge. I have Confluent Platform on a Docker/Windows 10 machine and want to access the CLI using WSL, as stated here:
https://docs.confluent.io/current/cli/installing.html
The issue is that when running commands in the Ubuntu terminal, I get "unknown command" when invoking confluent.
The Confluent CLI is a facade over a local installation of Confluent's Kafka variants, where the local command lets you manage that local installation.
Look here: Confluent CLI local documentation
It assumes that you have the product installed locally. I installed it by following this page, Confluent Ubuntu local installation, and got all components working, well, almost.
So it can work on Windows 10, but through WSL only. There are some explanations of how to install Kafka on Windows, but the whole idea behind Confluent is to use Confluent Cloud for production environments.
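A rough sketch of that setup inside the Ubuntu WSL distribution, following the Confluent Ubuntu install page (the 6.2 repository version is only an example, and the confluent CLI itself is a separate install):

    wget -qO - https://packages.confluent.io/deb/6.2/archive.key | sudo apt-key add -
    sudo add-apt-repository "deb [arch=amd64] https://packages.confluent.io/deb/6.2 stable main"
    sudo apt-get update && sudo apt-get install confluent-platform
    confluent local services start    # requires the separately installed Confluent CLI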
I have the following problem.
I am using Java 8 for this, and ZooKeeper is not working.
Downloads$ cd confluent-5.2.2/
roshni@roshni-HP-Pavilion-15-Notebook-PC:~/Downloads/confluent-5.2.2$
/home/roshni/Downloads/confluent-5.2.2/bin/confluent start
This CLI is intended for development only, not for production
https://docs.confluent.io/current/cli/index.html
WARNING: Java version 1.8 or 1.11 is recommended.
See https://docs.confluent.io/current/installation/versions-interoperability.html
What you did is correct. Based on the warning, though, it seems you're not using Java 8 or 11, so check your JAVA_HOME variable, for example.
You could try running confluent logs to see if you get more information, and just do confluent start kafka if you're truly only trying to run Kafka.
Otherwise, you could explicitly start ZooKeeper and Kafka manually, as well as the other Confluent services, using the other scripts in the bin folder, just the same as with any other Kafka install.
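For example, a sketch assuming the confluent-5.2.2 layout from the question (the JAVA_HOME path is only an example of where a Java 8 install might live):

    export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
    bin/zookeeper-server-start etc/kafka/zookeeper.properties
    bin/kafka-server-start etc/kafka/server.properties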
I have installed CDH 5.16 Express using packages on a RHEL server. I am trying to install Kafka now, and I observed that it can be installed only if CDH is installed as parcels.
1) Is it possible to install Kafka or Confluent Platform separately on the server and use it along with the CDH components?
2) Is there any other workaround to install Kafka using Cloudera Manager?
In order to use CDK 4.0 (the Cloudera Distribution of Kafka) with Cloudera 5.13, I was forced to install CDK 4.0 as a parcel.
I had a Cloudera QuickStart Docker VM that I had downloaded. It runs without Kerberos authentication. After starting the QuickStart VM, I separately installed Kafka from the Apache Kafka website, which was required because the Kafka packaged within Cloudera was an older version. Since this was a non-Kerberos environment, the Kafka server on startup used the ZooKeeper already running in the QuickStart VM. This is how I connected Kafka to the Cloudera VM.
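A sketch of that wiring in the Apache Kafka broker configuration (quickstart.cloudera:2181 stands in for whatever host and port your Cloudera VM's ZooKeeper actually uses):

    # in config/server.properties, point the broker at the VM's existing ZooKeeper
    zookeeper.connect=quickstart.cloudera:2181

    # then start the broker from the Apache Kafka directory
    bin/kafka-server-start.sh config/server.properties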
If you are new to CDH/CM, then I suggest you first try the Kafka service that is bundled with Cloudera. Go to 'Add Service' in the Cloudera drop-down and select Kafka. Enabling this Kafka service will give you a set of Kafka brokers. Kafka also needs ZooKeeper to run, and ZooKeeper comes by default in Cloudera, so you would get a working cluster with Kafka enabled in it. You can think about changing to the latest version of Kafka (using the approach mentioned above) once you are comfortable with the built-in tools of CDH/CM.
I installed Oracle VirtualBox and set up Hortonworks inside it.
Now I am trying to install Kafka in it.
When I fetched the file using wget, it was downloaded.
How can I see in which location the file was saved?
And how do I run it from the VirtualBox VM?
How can I check that all the dependencies Kafka requires, like Java, Scala, and ZooKeeper, are installed?
Please help
Thanks
Not sure why you're using wget when Ambari should be installing the components for you.
Hortonworks installs all libraries under /usr/hdp/current.
There should be a Kafka folder there.
However, it's recommended that you use Ambari to configure those resources, and all the Kafka CLI tools should already be on your PATH.
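A quick sketch of how you might verify things from the VM's shell (this follows the /usr/hdp/current layout mentioned above; wget saves into the current working directory by default):

    ls -lh                              # the file wget fetched should be here
    ls /usr/hdp/current | grep -i kafka
    which kafka-topics.sh
    java -version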