I installed Oracle VM VirtualBox and set up the Hortonworks sandbox inside it.
Now I am trying to install Kafka in it.
When I fetched a file using wget, it downloaded successfully.
How can I see where the file was saved, and how do I run it from VirtualBox?
Also, how can I check that all the dependencies Kafka requires, such as Java, Scala, and ZooKeeper, are installed?
Please help.
Thanks
Not sure why you're using wget when Ambari should be installing components for you. (As for where the file went: by default, wget saves the file into the directory you ran it from.)
Hortonworks installs all libraries under /usr/hdp/current, and there should be a Kafka folder there.
However, it's recommended that you use Ambari to configure those resources, and all the Kafka CLI tools should already be on your PATH.
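For a quick sanity check you can run something like the following inside the sandbox. The paths assume the standard Hortonworks layout under /usr/hdp/current; adjust if your sandbox differs:

```shell
# Sanity checks inside an HDP sandbox. The /usr/hdp/current path is the
# standard Hortonworks layout (an assumption; adjust if yours differs).
kafka_dir=/usr/hdp/current/kafka-broker
if [ -d "$kafka_dir" ]; then kafka_status=found; else kafka_status=missing; fi
echo "kafka-broker directory: $kafka_status"

# Kafka's dependencies: Java and ZooKeeper are normally provided by HDP.
if command -v java >/dev/null 2>&1; then
  java -version 2>&1 | head -n 1
else
  echo "java not on PATH"
fi
```

If the directory shows up as missing, add Kafka through Ambari's service wizard rather than a manual wget install.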
I just finished the Udemy course on Kafka Connect.
The course is based on Docker, but what should I do if I don't want to use Docker?
Kafka Connect requires a JVM. Although you can run it with just a JRE, I recommend installing a JDK (such as OpenJDK). Download the archive from https://packages.confluent.io/archive/6.2/ (or whichever version you prefer) and run it as a Java process, passing the properties file as its configuration.
You don't need Confluent Platform. Download Kafka from the Apache website; it comes with all the commands needed to run Kafka Connect. The only requirement is Java (version 11 is recommended, although 17 is the latest supported).
To install connectors, you can use confluent-hub without Confluent Platform.
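As a sketch of that route, with example version numbers (check the Apache Kafka download page for current ones), the download and a standalone Connect worker startup look roughly like this; the commands with side effects are left commented out:

```shell
# Example version numbers (assumptions; check the Apache download page).
KAFKA_VERSION=3.6.1
SCALA_VERSION=2.13
TARBALL="kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz"
echo "archive to fetch: $TARBALL"

# Download, unpack, and start a standalone Connect worker (commented out so
# this sketch has no side effects):
# wget "https://downloads.apache.org/kafka/${KAFKA_VERSION}/${TARBALL}"
# tar -xzf "$TARBALL" && cd "kafka_${SCALA_VERSION}-${KAFKA_VERSION}"
# bin/connect-standalone.sh config/connect-standalone.properties my-connector.properties
```

connect-standalone.sh ships in the Apache Kafka archive's bin directory, so no Confluent download is involved.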
I am following the link below to install the SQL Server CDC connector:
https://www.confluent.io/hub/debezium/debezium-connector-sqlserver
But I get the error message:
Unable to detect Confluent Platform installation. Specify
--component-dir and --worker-configs explicitly.
Error: Invalid options or arguments
This is on my development machine, where I am trying to set up the connector.
Kafka folder: ~/kafka
Kafka plugin folder: /usr/local/share/kafka/plugins
I also tried to install it manually by following https://docs.confluent.io/home/connect/install.html, but I am not sure about the plugin.path setting.
OS: Ubuntu 20.04
Can you help?
not sure about the plugin.path
Using the confluent-hub command sets that for you. As you've already said, your plugin folder is /usr/local/share/kafka/plugins, so pass that as the --component-dir argument.
Then you need to find your Kafka Connect properties file and pass it to --worker-configs.
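Putting that together, the install command would look roughly like this. The connector coordinates come from the Confluent Hub page above; the worker-config path under ~/kafka is an assumption, so substitute your actual Connect properties file:

```shell
# Plugin dir comes from the question; the worker-config path is an assumed
# example, so point --worker-configs at your actual Connect properties file.
component_dir=/usr/local/share/kafka/plugins
worker_config="$HOME/kafka/config/connect-distributed.properties"
if command -v confluent-hub >/dev/null 2>&1; then
  confluent-hub install debezium/debezium-connector-sqlserver:latest \
    --component-dir "$component_dir" \
    --worker-configs "$worker_config"
else
  echo "confluent-hub not on PATH"
fi
```

After a successful install, confluent-hub should also update plugin.path in the worker config you passed, so Connect can find the connector on restart.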
I am exploring whether we can run Confluent on Windows. As per the following article, it seems Windows is not supported:
https://docs.confluent.io/current/installation/versions-interoperability.html#operating-systems
However, when I look at the Confluent CLI, Windows seems to be supported:
https://docs.confluent.io/current/cli/installing.html#tarball-installation
But again, there is a phrase there suggesting that Windows is not supported:
On non-Windows platforms, the Confluent CLI offers confluent local commands (designed to operate on a local install of Confluent Platform) which require Java, and JDK version 1.8 or 1.11 is recommended. If you have multiple versions of Java installed, set JAVA_HOME to the version you want Confluent Platform to use.
So, the questions are:
1) Is Windows supported in the latest release? (I suspect it is not.)
2) What CLI is supported for Windows, and what can it be used for?
3) Is Windows also unsupported from a local-development perspective? That is, is it possible to issue "confluent local" commands?
PS: Please give inputs without referring to virtualized environments such as Docker.
Yes, you are right: Windows is not supported.
The CLI you get for Windows is only for managing and retrieving metadata from a remote Confluent Platform. First, you will have to log in to Confluent by issuing the command confluent.exe login --url <url>.
More info at confluent-login.
The following are the commands you get with the Confluent Windows distribution:
Available Commands:
audit-log Manage audit log configuration.
cluster Retrieve metadata about Confluent Platform clusters.
completion Print shell completion code.
connect Manage Connect.
help Help about any command
iam Manage RBAC, ACL and IAM permissions.
kafka Manage Apache Kafka.
ksql Manage ksqlDB applications.
login Log in to Confluent Platform (required for RBAC).
logout Log out of Confluent Platform.
schema-registry Manage Schema Registry.
secret Manage secrets for Confluent Platform.
update Update the Confluent CLI.
version Print the Confluent CLI version.
And Windows is also not supported for local development: you can't issue commands like confluent local.
I'm facing the same challenge. I have Confluent Platform on a Docker/Windows 10 machine and want to access the CLI using WSL, as stated here:
https://docs.confluent.io/current/cli/installing.html
The issue is that when running commands in the Ubuntu terminal I get "unknown command" when triggering confluent.
The Confluent CLI is a facade for a local installation of Confluent's Kafka variants, where the local command lets you manage that local installation.
Look here: Confluent CLI local documentation
It assumes that you have the product installed locally. I installed it by following this page, Confluent Ubuntu local installation, and got all components working (well, almost).
So it can work on Windows 10, but through WSL only. There are some explanations of how to install Kafka on Windows, but the whole idea behind Confluent is to use Confluent Cloud for production environments.
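If the CLI is unpacked in WSL but the shell reports "unknown command", its bin directory is likely just missing from PATH. A minimal sketch, assuming an example install location (adjust CONFLUENT_HOME to wherever you actually unpacked the tarball):

```shell
# CONFLUENT_HOME is an example location; set it to wherever you unpacked
# the Confluent tarball inside WSL.
CONFLUENT_HOME="$HOME/confluent-6.2.0"
export PATH="$CONFLUENT_HOME/bin:$PATH"
command -v confluent || echo "confluent still not found; check CONFLUENT_HOME"
```

Add the export line to ~/.bashrc to make it survive new terminal sessions.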
I have installed CDH 5.16 Express using packages on a RHEL server. I am trying to install Kafka now, and I observed that it can be installed only if CDH is installed as parcels.
1) Is it possible to install Kafka or Confluent Platform separately on the server and use it along with the CDH components?
2) Is there any other workaround to install Kafka using Cloudera Manager?
In order to use CDK 4.0 (the Cloudera distribution of Kafka) with Cloudera 5.13, I was forced to install CDK 4.0 as a parcel.
I had a Cloudera quickstart Docker VM that I downloaded; it runs without Kerberos authentication. After starting the quickstart VM, I separately installed Kafka from the Apache Kafka website, since the Kafka packaged within Cloudera was an older version. Because this was a non-Kerberos environment, the Kafka server on startup simply used the ZooKeeper already running in the quickstart VM. This way I connected Kafka to the Cloudera VM.
If you are new to CDH/CM, I suggest you first try the Kafka service bundled with Cloudera. Go to 'Add Service' in the Cloudera drop-down and select Kafka. Enabling this service gives you a set of Kafka brokers. Kafka also needs ZooKeeper, which comes by default in Cloudera, so you end up with a working cluster with Kafka enabled. You can think about moving to the latest Kafka version (using the approach mentioned above) once you are comfortable with the built-in tools of CDH/CM.
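For the "separate Apache Kafka pointed at the quickstart VM's ZooKeeper" approach described above, the relevant server.properties fragment looks roughly like this. The hostname is a placeholder; use your quickstart VM's actual address:

```properties
# server.properties fragment for a separately installed Apache Kafka broker.
# quickstart.cloudera is a placeholder hostname; substitute your VM's address.
broker.id=0
listeners=PLAINTEXT://:9092
zookeeper.connect=quickstart.cloudera:2181
```

With zookeeper.connect pointing at the VM's ZooKeeper, the standalone broker registers there just as the bundled one would.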
I installed Storm and the Ambari UI on an Ubuntu machine.
Now I want to connect Storm with the Ambari UI. Is there a tutorial? Does anyone have tips?
Note: I have just installed Storm, Kafka, and the Ambari server (default) on the virtual machine.
I know there is a Hortonworks VM with these services pre-installed, but the idea is to install on a clean machine.
Thanks :)
Are you trying to manage and monitor Storm through Ambari? If so, you must provision Storm through Ambari. You can do this by logging into the Ambari UI, clicking the Actions button, and selecting 'Add Service'. Follow the service installation wizard to install Storm; during this process you will be able to configure Storm to your needs. Storm is available in HDP versions 2.1 and higher.
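Once the wizard finishes, you can optionally verify that Ambari knows about Storm via its REST API. A sketch only: the host, port, cluster name, and default admin credentials below are all placeholder assumptions:

```shell
# Host, port, cluster name, and credentials are placeholders; substitute yours.
AMBARI_URL="http://localhost:8080/api/v1/clusters/mycluster/services/STORM"
if curl -s -f -u admin:admin -H 'X-Requested-By: ambari' "$AMBARI_URL" >/dev/null 2>&1; then
  echo "Ambari reports a STORM service"
else
  echo "Ambari not reachable (or STORM not yet added) at $AMBARI_URL"
fi
```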