Kafka Connect not working? - apache-kafka

While going through the official Apache quickstart page
https://kafka.apache.org/quickstart
a text file is created with
echo -e "foo\nbar" > test.txt
and the following command is used to run Kafka Connect:
bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties
But when the above command is executed, it prints a message that Kafka Connect has stopped.

Something else is using the same port that Kafka Connect wants to use.
You can use netstat -plnt to identify the other program (you'll need to run it as root if the process is owned by a different user).
If you want to get Kafka Connect to use a different port, edit config/connect-standalone.properties to add:
rest.port=18083
Where 18083 is an available port.
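For example, a rough sketch of checking the default Connect REST port and switching to a free one (the port numbers here are only placeholders):
sudo netstat -plnt | grep 8083
# either stop whatever owns the port, or point Connect at a free one:
echo "rest.port=18083" >> config/connect-standalone.properties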

Related

Apache Flink doesn't connect to port 8081

Hi, I am new to Apache Flink and I am trying to run the batch WordCount example to start learning about it. I have run
./bin/start-cluster.sh
and then executed ./bin/flink run ./examples/batch/WordCount.jar --input test.txt --output out.txt
and I get the following in the console messages:
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:8081
So I think it is a server connection error. I tried some things like XAMPP but nothing helped.
So what's your opinion on that?
It seems like your cluster is not starting. Please try ./bin/start-cluster.sh again and go to http://localhost:8081/ to confirm your cluster is up. After that, the word count example should run fine after specifying the appropriate input and output files.
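Roughly, the sequence would look like this (same paths as in the question; the curl call is just one way to confirm the JobManager web UI answers on 8081):
./bin/start-cluster.sh
curl -s http://localhost:8081/ > /dev/null && echo "JobManager is up"
./bin/flink run ./examples/batch/WordCount.jar --input test.txt --output out.txt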

Kafdrop (localhost/127.0.0.1:9092) could not be established. Broker may not be available

I set up Kafka and ZooKeeper on my local machine and I would like to use Kafdrop as a UI. I tried running it with the Docker command below:
docker run -d --rm -p 9000:9000 \
-e KAFKA_BROKERCONNECT=<localhost:9092> \
-e JVM_OPTS="-Xms32M -Xmx64M" \
-e SERVER_SERVLET_CONTEXTPATH="/" \
obsidiandynamics/kafdrop
and I get -bash: https://locahost:9092: No such file or directory
When I remove the KAFKA_BROKERCONNECT parameter, the application runs but I get the error below:
[AdminClient clientId=kafdrop-admin] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
2020-07-22 09:39:29.108 WARN 1 [| kafdrop-admin] o.a.k.c.NetworkClient
I did set the Kafka server's listener setting to this, but it did not help:
listeners=PLAINTEXT://localhost:9092
I found this similar issue on GitHub but couldn't understand most of the answers.
Kafka is not HTTP-based. You do not need a URL scheme to connect to Kafka, and the angle brackets should not be used.
You also cannot use localhost, as that refers to the Kafdrop container itself, not Kafka.
I suggest you use Docker Compose with Kafdrop and Kafka
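If you do want to keep a single docker run command, a rough sketch would drop the angle brackets and point Kafdrop at an address that is reachable from inside the container, e.g. host.docker.internal (this name only resolves on Docker Desktop; the broker's advertised listeners must also be reachable by clients):
docker run -d --rm -p 9000:9000 \
  -e KAFKA_BROKERCONNECT=host.docker.internal:9092 \
  -e JVM_OPTS="-Xms32M -Xmx64M" \
  obsidiandynamics/kafdrop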
I followed what @OneCricketeer said in his answer and everything worked perfectly.
Here is what I did:
I downloaded the compose file from GitHub (the link above, or click here).
I ran the compose file by going to the directory where the file exists and running docker-compose up.
Stop your local Kafka server and ZooKeeper first, because everything is going to be downloaded and started by the docker-compose command.
Then go to http://localhost:9000/ and voilà.
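In short, the whole flow boils down to something like this (assuming the compose file from the Kafdrop repository is saved in the current directory):
docker-compose up -d    # starts ZooKeeper, Kafka and Kafdrop together
docker-compose ps       # confirm all containers are up
# then open http://localhost:9000/ in the browser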

How can I get into the ZooKeeper that is integrated in Kafka? (2.2.0)

I have installed Kafka, which comes with an integrated ZooKeeper.
I have seen that in a standalone ZooKeeper installation you can run the following commands to enter the ZooKeeper console:
bin/ZkCli.sh
ls /zookeeper/quota
But in Kafka's scripts I only have:
zookeeper-security-migration.sh
zookeeper-server-start.sh
zookeeper-server-stop.sh
zookeeper-shell.sh
I have tried to do the following:
./zookeeper-shell.sh -server 127.0.0.1:2181 ls /zookeeper/quota
But it doesn't work; it doesn't do anything.
How can I get into the Zookeeper that is integrated in Kafka?
After starting Zookeeper, you can connect to it using the zookeeper-shell.sh tool.
To get into the shell:
./zookeeper-shell.sh IP:2181
Then you can execute commands, like:
ls /
You can then use ls to move around the znodes and get to print a node's contents.
You can also use this script to just run commands and return (without getting into the shell):
./zookeeper-shell.sh localhost:2181 get /controller
/zookeeper/quota is not a path used by Kafka; quotas are stored under /config.
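So to inspect quotas with the same one-shot style of command, something along these lines should work (the exact child paths, e.g. /config/clients or /config/users, depend on which quotas you have configured):
./zookeeper-shell.sh localhost:2181 ls /config
./zookeeper-shell.sh localhost:2181 ls /config/clients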

Kafka Connect connector doesn't start automatically

I have a Kafka Connect source and sink connector for putting data into Kafka and taking it back out.
I am running Kafka and Kafka Connect using docker-compose, which runs Connect in distributed mode. I see that it finds my plugin when Connect starts up, but it doesn't actually do anything unless I do a POST to the /connectors API with the configuration in JSON.
I have a properties file with the configuration in it and I've tried putting it under /etc where I find similar properties files for the other plugins that are installed.
Am I missing a step when installing my plugin, or is it required to register the connector via the REST API before it will be assigned to workers?
Yes, you have to configure Kafka Connect using the REST API when using distributed mode.
It's possible to script the creation of connectors, though, using a Docker Compose service definition like this:
command:
  - bash
  - -c
  - |
    /etc/confluent/docker/run &
    echo "Waiting for Kafka Connect to start listening on kafka-connect ⏳"
    while [ $$(curl -s -o /dev/null -w %{http_code} http://kafka-connect:8083/connectors) -eq 000 ] ; do
      echo -e $$(date) " Kafka Connect listener HTTP state: " $$(curl -s -o /dev/null -w %{http_code} http://kafka-connect:8083/connectors) " (waiting for 200)"
      sleep 5
    done
    nc -vz kafka-connect 8083
    echo -e "\n--\n+> Creating Kafka Connect Elasticsearch sink"
    /scripts/create-es-sink.sh
    sleep infinity
where /scripts/create-es-sink.sh is the REST call from curl in a file mounted locally into the container (a sketch of such a script follows below).
(source)
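A hypothetical sketch of what such a script could contain is below; it simply POSTs the connector configuration to the Connect REST API. The connector name, class, and settings are illustrative, not taken from the question:
#!/bin/bash
# POST the connector config to the Connect REST API (illustrative values)
curl -X POST http://kafka-connect:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "es-sink",
    "config": {
      "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
      "topics": "my-topic",
      "connection.url": "http://elasticsearch:9200",
      "key.ignore": "true"
    }
  }'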
You can install a Kafka connector before you start the distributed Connect worker using "confluent-hub install", as shown here: Install Kafka connector manually. However, I'm not sure what the equivalent is if you aren't using confluent-hub.
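For reference, a confluent-hub based install looks roughly like this (the connector coordinates are illustrative); note that it only places the plugin jars on the worker's plugin.path, so in distributed mode you still create the connector instance through the REST API:
confluent-hub install --no-prompt confluentinc/kafka-connect-elasticsearch:latest
# restart the Connect worker afterwards so it picks up the new plugin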

Unable to start the zookeeper server

I am running Kafka on Amazon EC2 with Ubuntu. For starters, I'm trying to run the ZooKeeper server and create a test topic. The ultimate aim is to integrate Spark with Kafka for sentiment analysis.
When I try to start the ZooKeeper server, I get the following warning and the process doesn't seem to end, i.e. I don't see a shell prompt after I type this command:
bin/zookeeper-server-start.sh config/zookeeper.properties
WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
Thanks in advance for any help on this.
That's a warning, not an error. Your ZooKeeper should be running and you should be able to connect to it. Just open another terminal and run (from the ZooKeeper home):
bin/zkCli.sh -server 127.0.0.1:2181
Optional commands to check the server status:
echo ruok | nc 127.0.0.1 2181
echo mntr | nc 127.0.0.1 2181
echo srvr | nc 127.0.0.1 2181
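As a rough sketch of what to expect: a healthy standalone server answers "imok" to ruok, and srvr/mntr print its mode and stats. On newer ZooKeeper versions the four-letter words may need to be whitelisted via 4lw.commands.whitelist.
echo ruok | nc 127.0.0.1 2181   # prints "imok" if the server is healthy
echo srvr | nc 127.0.0.1 2181   # prints the mode (standalone) and basic stats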