Error connecting to the cluster: unable to connect to xxx.com with Kafka Tool - apache-kafka

I tried to connect to an OpenShift cluster via Kafka Tool 2.0.8.
I defined all the fields and got:


How can I connect kafka events to a mongodb sink?
The resources I found on the net use Confluent, which creates a cluster for you; I didn't find how to connect my already existing cluster.
You need to install the MongoDB connector into the plugin.path configured in your Connect properties file, then start Kafka Connect using one of the bin/connect-* scripts in your Kafka installation.
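For illustration, a minimal sketch of the two files involved (all paths, topic names, and connection details below are assumptions, not taken from the question):

```properties
# connect-standalone.properties - point plugin.path at the directory
# that contains the unpacked MongoDB connector jars (path is illustrative)
plugin.path=/opt/connectors

# mongo-sink.properties - minimal MongoDB sink connector config (placeholder values)
name=mongo-sink
connector.class=com.mongodb.kafka.connect.MongoSinkConnector
tasks.max=1
topics=my-topic
connection.uri=mongodb://localhost:27017
database=mydb
collection=mycollection
```

With both files in place, something like bin/connect-standalone.sh config/connect-standalone.properties mongo-sink.properties starts a worker that writes events from my-topic into mydb.mycollection.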

connector list does not show my installed connector

Good day,
Based on https://docs.confluent.io/kafka-connect-jdbc/current/index.html#installing-jdbc-drivers ,
I used the following command to install the JDBC connector:
confluent-hub install confluentinc/kafka-connect-jdbc:latest
The command ran successfully, and I can see the confluentinc-kafka-connect-jdbc folder created under <confluent-platform>/share/confluent-hub-components.
Here is a screenshot showing the result of my install command:
After that, I followed the next instruction and uploaded the JDBC driver jar file to share/java/kafka-connect-jdbc.
Then I went to https://docs.confluent.io/kafka-connect-jdbc/current/source-connector/index.html to load the DB connector. As a first step, I listed the connectors I have using the following command:
confluent local services connect connector list
The output is shown below:
[meow#localhost confluent-7.0.1]$ confluent local services connect connector list
The local commands are intended for a single-node development environment only,
NOT for production usage. https://docs.confluent.io/current/cli/index.html
Bundled Connectors:
file-sink
file-source
replicator
There is no connector named jdbc-source in the list, so I can't proceed to the next step.
What mistake did I make in my steps?
After running confluent-hub install, you must restart the Kafka Connect worker for it to pick up the new connector.
Since you're using the Confluent CLI, the commands are:
confluent local services connect stop
confluent local services connect start
Edit: your screenshot shows that you told the Confluent Hub client not to update any of the Kafka Connect worker configurations. Therefore the worker will not pick up the connector that you've installed.
You should run the Confluent Hub client again and tell it to update the Kafka Connect worker configurations when prompted, and then restart the Kafka Connect worker. After that it will pick up the new connector.
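Once the worker has been restarted and lists jdbc-source as available, a minimal source connector config can be loaded. Everything below (database URL, credentials, column and topic names) is an illustrative assumption:

```properties
# jdbc-source.properties - minimal JDBC source connector config (placeholder values)
name=jdbc-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:mysql://localhost:3306/testdb
connection.user=dbuser
connection.password=dbpassword
mode=incrementing
incrementing.column.name=id
topic.prefix=jdbc-
```

It can then be loaded with something like confluent local services connect connector load jdbc-source --config jdbc-source.properties.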

Connect self-managed ksqlDB to Confluent Cloud | managed ksql server

This is a question about how to connect a self-managed ksqlDB / ksql server to Confluent Cloud.
I have a Confluent Basic cluster running at https://confluent.cloud/ in GCP asia-south.
I want to connect a self-managed ksqlDB to the Confluent Cloud Control Center for this cluster.
Here are the configurations I copied from Confluent Cloud and put into the self-managed ksqlDB.
This self-managed ksqlDB runs on a single GCP compute instance.
The same configuration is present in the following properties file:
/home/confluent/confluent-5.5.1/etc/ksqldb/ksql-server.properties
and the ksql server was started using the following command:
nohup /home/confluent/confluent/confluent-5.5.1/bin/ksql-server-start /home/confluent/confluent/confluent-5.5.1/etc/ksqldb/ksql-server.properties &
Command line:
/home/confluent/confluent-5.5.1/bin/ksql
A couple of things were noted in the ksql terminal:
The STREAM was created successfully in the terminal but is not visible in the cloud.
The command "show streams;" does show the specific STREAM.
"print {STREAM};" does not show any data, even while data is being pushed to the STREAM.
I have not set any host entries.
On "show connectors;" the following exception is generated in the ksql terminal:
ksql> show connectors;
io.confluent.ksql.util.KsqlServerException: org.apache.http.conn.HttpHostConnectException: Connect to localhost:8083 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused (Connection refused)
Caused by: org.apache.http.conn.HttpHostConnectException: Connect to
localhost:8083 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed:
Connection refused (Connection refused)
Caused by: Could not connect to the server.
Caused by: Could not connect to the server.
I expect my ksqlDB to show up in Confluent Cloud but am unable to see it.
I don't know what additional configuration is required so that my self-managed ksql server works and shows up in Confluent Cloud.
It seems that you are confusing some terminology here: self-managed != managed.
Managed ksqlDB is the service available in your Confluent Cloud console. There you add applications, which spin up a ksqlDB cluster for your queries.
A self-managed ksqlDB instance running in GCP can connect to Confluent Cloud, but it is not going to appear in the list of ksqlDB applications, as you'll have to operate it yourself.
Docs:
Self-managed KSQLDB with Confluent Cloud: https://docs.confluent.io/current/cloud/cp-component/ksql-cloud-config.html
Confluent Cloud (managed) KSQLDB: https://docs.confluent.io/current/quickstart/cloud-quickstart/ksql.html#cloud-ksql-create-application
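Following the first doc link, the minimal ksql-server.properties entries for pointing a self-managed server at Confluent Cloud look roughly like this (the bootstrap server, API key, and secret are placeholders):

```properties
# ksql-server.properties - self-managed ksqlDB against Confluent Cloud (placeholder values)
bootstrap.servers=<CLOUD_BOOTSTRAP_SERVER>:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="<CLUSTER_API_KEY>" password="<CLUSTER_API_SECRET>";
ksql.internal.topic.replicas=3
ksql.streams.replication.factor=3
# "show connectors" needs a reachable Kafka Connect worker; without one it fails
# with the Connection refused error shown in the question (URL is an assumption)
ksql.connect.url=http://localhost:8083
```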

No active Drillbit endpoint found from ZooKeeper

I am currently working on a simple project to query messages from an Apache Kafka topic using Apache Drill, and I am encountering an error when connecting to the Apache Drill cluster with this command:
sqlline.bat -u "jdbc:drill:zk=localhost:2181"
And the error that I encountered is:
No active Drillbit endpoint found from ZooKeeper. Check connection parameters
I am using the single-node ZooKeeper instance that came with Apache Kafka.
Can anyone help me with this problem? Is it OK to use the ZooKeeper from the Apache Kafka installation with Drill?
The sqlline.bat -u "jdbc:drill:zk=localhost:2181" command only connects to an already running Drillbit. If you have Drill running in distributed mode, replace localhost with the correct IP address of the node where ZooKeeper is running, and update the port if needed.
If you want to start Drill in embedded mode, try running drill-embedded.bat or sqlline.bat -u "jdbc:drill:zk=local".
For more details please refer to https://drill.apache.org/docs/starting-drill-on-windows/.
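Once sqlline connects, querying Kafka also requires enabling Drill's Kafka storage plugin (configured through the Drill Web UI). A minimal plugin config, with the broker address and group id as assumptions, looks like:

```json
{
  "type": "kafka",
  "kafkaConsumerProps": {
    "bootstrap.servers": "localhost:9092",
    "group.id": "drill-query-consumer"
  },
  "enabled": true
}
```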

Having Kafka connected with ip as well as service name - Openshift

In our OpenShift ecosystem, we have a Kafka instance based on wurstmeister/kafka. As of now, Kafka is accessible within the OpenShift system using the parameters below:
KAFKA_LISTENERS=PLAINTEXT://:9092
KAFKA_ADVERTISED_HOST_NAME=kafka_service_name
And of course, the params for the port and ZooKeeper are there.
I am able to access Kafka from the pods within the OpenShift system, but I am unable to access the Kafka service from the host machine, even though I can reach the Kafka pod using its IP and telnet to it with: telnet Pod_IP 9092
When I try to connect using the Kafka producer from the host machine, I get the error below:
[2017-08-07 07:45:13,925] WARN Error while fetching metadata with
correlation id 2 : {tls21=LEADER_NOT_AVAILABLE}
(org.apache.kafka.clients.NetworkClient)
And when I try to connect a Kafka consumer from the host machine using the IP, the output is blank.
Note: as of now, it's a single OpenShift server, and the use case is dev testing.
Maybe you want to take a look at this POC for running Kafka on OpenShift:
https://github.com/EnMasseProject/barnabas
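As a general sketch (listener names, ports, and the external address below are assumptions): to reach Kafka both from inside OpenShift and from the host, the broker usually has to advertise a separate listener per network, e.g. for the wurstmeister/kafka image:

```properties
# container environment (illustrative values)
KAFKA_LISTENERS=INTERNAL://:9092,EXTERNAL://:9094
KAFKA_ADVERTISED_LISTENERS=INTERNAL://kafka_service_name:9092,EXTERNAL://<host-reachable-ip>:9094
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME=INTERNAL
```

With KAFKA_ADVERTISED_HOST_NAME=kafka_service_name alone, external clients receive the in-cluster service name in the metadata response and cannot resolve it, which matches the LEADER_NOT_AVAILABLE warning above.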