Kafka service does not start - apache-kafka

I am trying to install Kafka following a tutorial from DigitalOcean.com here.
I am doing this on Windows with WSL2 and Ubuntu. So, after creating the zookeeper.service and kafka.service files as per the tutorial, I run this command (the tutorial uses sudo systemctl start kafka instead), following advice from this thread:
sudo service kafka start
I received:
kafka: unrecognized service
When I do service --status-all to see if kafka is in the list, it is not there.
What am I missing?

There is a lack of support for systemd in WSL.
Why is systemd disabled in WSL?
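As a workaround, you can skip systemd and start the services directly with the scripts from the tutorial's install (assuming the /home/kafka/kafka layout it uses); the -daemon flag puts them in the background:
sudo -u kafka /home/kafka/kafka/bin/zookeeper-server-start.sh -daemon /home/kafka/kafka/config/zookeeper.properties
sudo -u kafka /home/kafka/kafka/bin/kafka-server-start.sh -daemon /home/kafka/kafka/config/server.properties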

Related

How to stop a Kafka Connector running in daemon mode?

I currently start a Kafka Connector in -daemon mode as shown below:
bin/connect-standalone.sh -daemon \
/kafka/config/connect-standalone.properties \
/kafka/config/custom-connector.properties
How do I stop this connector process gracefully?
I am currently using the top command to locate the java process and then kill -15 pid to stop it. I find this impractical because I cannot specify which connector to stop by its properties.
Is there any way to stop a kafka connector in a way like executing a command below? Or any better alternatives?
kafka/bin/kafka-connect-stop.sh \
/kafka/config/connect-standalone.properties
To stop a connector, and not the worker, use the PUT /connectors/{connector}/pause REST API endpoint.
https://kafka.apache.org/documentation/#connect_rest
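For example, assuming the standalone worker is listening on the default REST port 8083 and the connector's name= property (set in custom-connector.properties above) is custom-connector, pausing it might look like:
curl -X PUT http://localhost:8083/connectors/custom-connector/pause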
Otherwise, yes, to stop the worker, you can use kill, or you can wrap it in a systemd unit and use systemctl stop to do the same.
Thanks to @OneCricketeer's answer.
I wrapped my command in a systemd unit as shown below.
Create a kafka-connector.service file in /etc/systemd/system with this content:
[Unit]
Description=Kafka Connector
[Service]
User=root
Type=simple
ExecStart=/bin/sh -c "/kafka/bin/connect-standalone.sh /kafka/config/connect-standalone.properties /kafka/config/my-connector.properties"
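After creating or changing the unit file, reload systemd so it picks up the new unit using
sudo systemctl daemon-reload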
Start the kafka connector using
sudo systemctl start kafka-connector
Stop the kafka connector using
sudo systemctl stop kafka-connector
Check the status of the kafka connector using
sudo systemctl status kafka-connector
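Since it runs under systemd, the connector's output goes to the journal, so you can follow its logs with
sudo journalctl -u kafka-connector -f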

Problem with kafka - Failed with result 'exit-code', status=1/FAILURE

I tried to install apache-kafka several times, but I always had this problem. I'm using Ubuntu on my virtual machine. When I start the kafka service using sudo systemctl start kafka
and then check whether it's working, at first the output is "active (running)", but when I double-check, the output is "failed (Result: exit-code)". I also tried sudo systemctl enable kafka, but it didn't work.
This is the output:
● kafka.service
Loaded: loaded (/etc/systemd/system/kafka.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2021-05-26 05:40:22 PDT; 3s ago
Process: 8098 ExecStart=/bin/sh -c /home/kafka/kafka/bin/kafka-server-start.sh /home/kafka/kafka/co>
Main PID: 8098 (code=exited, status=1/FAILURE)
May 26 05:40:19 ubuntu systemd[1]: Started kafka.service.
May 26 05:40:22 ubuntu systemd[1]: kafka.service: Main process exited, code=exited, status=1/FAILURE
May 26 05:40:22 ubuntu systemd[1]: kafka.service: Failed with result 'exit-code'.
You can see the full output attached
I also tried journalctl -xe and it recommended using ./gradlew jar -PscalaVersion=2.13.5, so I ran it; at first it seemed to work, but the following day I had the same problem (kafka.service: Failed with result 'exit-code'.). When I ran journalctl -xe again, I got the output you can see attached.
With Zookeeper I had no problem; it's always active.
Thank you in advance.
Open the file meta.properties.
In my case, it was located at /home/kafka/logs/meta.properties
Just comment out the cluster.id line with a #
Restart zookeeper and kafka.
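For illustration, after the edit the file would look something like this (the commented id is just a placeholder for whatever value was there):
version=0
broker.id=0
#cluster.id=<your-old-cluster-id>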
I had the same issue after following the tutorial from a well-known site. I fixed the problem by redoing everything from scratch this way.
sudo apt update
sudo apt install default-jdk
I downloaded the latest binary release from https://kafka.apache.org/downloads. I used https://dlcdn.apache.org/kafka/3.0.0/kafka_2.13-3.0.0.tgz
sudo wget https://dlcdn.apache.org/kafka/3.0.0/kafka_2.13-3.0.0.tgz
Unpack and move
tar xzf kafka_2.13-3.0.0.tgz
sudo mv kafka_2.13-3.0.0 /usr/local/kafka
Edit the Zookeeper systemd unit file
sudo vi /etc/systemd/system/zookeeper.service
add this content
[Unit]
Description=Apache Zookeeper server
Documentation=http://zookeeper.apache.org
Requires=network.target remote-fs.target
After=network.target remote-fs.target
[Service]
Type=simple
ExecStart=/usr/local/kafka/bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties
ExecStop=/usr/local/kafka/bin/zookeeper-server-stop.sh
Restart=on-abnormal
[Install]
WantedBy=multi-user.target
Edit Kafka systemd unit file
sudo vi /etc/systemd/system/kafka.service
and add the content below. Note: you must change the JAVA_HOME value to your own Java path.
[Unit]
Description=Apache Kafka Server
Documentation=http://kafka.apache.org/documentation.html
Requires=zookeeper.service
[Service]
Type=simple
Environment="JAVA_HOME=REPLACE-THIS-WITH-YOUR-PATH"
ExecStart=/usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties
ExecStop=/usr/local/kafka/bin/kafka-server-stop.sh
[Install]
WantedBy=multi-user.target
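If you are unsure what to put in JAVA_HOME, one way to find the path on Ubuntu (assuming the default-jdk installed earlier) is
readlink -f /usr/bin/java | sed 's:/bin/java::'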
Reload the systemd daemon to apply new changes.
sudo systemctl daemon-reload
Start zookeeper and kafka
sudo systemctl start zookeeper
sudo systemctl start kafka
Check the kafka status now; it should be running
sudo systemctl status kafka
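Optionally, since both unit files have an [Install] section, enable them so they come back up after a reboot
sudo systemctl enable zookeeper
sudo systemctl enable kafka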
All you need to do is build the kafka project before running it:
./gradlew jar -PscalaVersion=2.13.6
Note that you need to have Java installed
tried to install apache-kafka several times
Kafka doesn't come with systemd scripts. Follow the official Apache Kafka website to see how to start it without systemctl.
If you want to install on Ubuntu, Confluent Community edition allows you to do apt-get install to get both Kafka and Zookeeper
Your error shows an InconsistentClusterIdException, which means you need to wipe the data directories for Zookeeper and Kafka so that the broker will start in a fresh state.
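A minimal sketch of that cleanup, assuming the stock config defaults of /tmp/kafka-logs (log.dirs in server.properties) and /tmp/zookeeper (dataDir in zookeeper.properties); check your own config files for the actual paths before deleting anything:
sudo systemctl stop kafka
sudo systemctl stop zookeeper
sudo rm -rf /tmp/kafka-logs /tmp/zookeeper
sudo systemctl start zookeeper
sudo systemctl start kafka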
For me, I found out that the system actually had two kafka folders, so when the service started it failed with "exit-code".
My solution was to delete one folder and keep only /home/kafka.
In my case Kafka didn't start in the first place. I pointed server.properties at a different logs folder, gave that folder the necessary permissions, and restarted both the Zookeeper and Kafka services; after that they seem to work.
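A rough sketch of that kind of change, assuming the broker runs as a kafka user and a hypothetical new directory /home/kafka/kafka-logs: point log.dirs in server.properties at it,
log.dirs=/home/kafka/kafka-logs
then create the directory, give the kafka user ownership, and restart both services:
sudo mkdir -p /home/kafka/kafka-logs
sudo chown -R kafka:kafka /home/kafka/kafka-logs
sudo systemctl restart zookeeper kafka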
In my case, I was using a source download: kafka-3.3.1-src.tgz.
Use the binary version instead, e.g. Scala 2.13 - kafka_2.13-3.3.1.tgz.
You can download it from https://kafka.apache.org/downloads

Kafka starting error in CentOS

Kafka server failed to start on confluent start command.
command lines:
~]# sudo confluent start
zookeeper is already running. Try restarting if needed
Starting kafka
-Kafka failed to start
kafka is [DOWN]
Cannot start Schema Registry, Kafka Server is not running. Check your deployment
Run
confluent log kafka
to see the log from Kafka trying to start, and see what the error is.

Failed to restart mongod.service : Unit mongod.service not found

There are a lot of variations of this question on different forums, and I tried a lot of things to get it to work. I am using AWS EC2 and MEAN by Bitnami. I tried connecting using Node.js and realized that my mongodb service is not running. I checked it by running this on the terminal (connected using Putty):
service mongod status
This is the error I get
mongodb.service Loaded:not-found (Reason: No such file or directory)
Active: inactive(dead)
To try my luck, I tried
sudo service mongod restart
And I get this error:
Failed to restart mongod.service : Unit mongod.service not found
Now, just to probe further, I tried to check whether I have this service installed at all.
I ran this command: ls /lib/systemd/system
And it gave a huge list, but I couldn't find mongod.service anywhere.
My Ubuntu Ver: 16.04
I am guessing it's not present, or maybe I am looking in the wrong place. Please let me know how I can get the service to run. I am somewhat new to MongoDB and Bitnami.
Each Bitnami MEAN stack includes a control script that lets you easily stop, start and restart services.
The script is located at /opt/bitnami/ctlscript.sh.
To start all services:
sudo /opt/bitnami/ctlscript.sh start
To start a single service:
sudo /opt/bitnami/ctlscript.sh start <service name>
So to answer your question:
sudo /opt/bitnami/ctlscript.sh start mongod
You can obtain a list of available services and operations by running the script without any arguments:
sudo /opt/bitnami/ctlscript.sh

flocker-docker-plugin not working on centos7.2

I am trying to integrate flocker with docker; for that I found the flocker-docker-plugin plugin. I installed it by running these commands on my flocker agents:
$ yum install -y clusterhq-flocker-docker-plugin
$ systemctl enable flocker-docker-plugin
$ systemctl restart flocker-docker-plugin
It showed that flocker-docker-plugin is running. However, after a few seconds, when I checked the status using $ systemctl status flocker-docker-plugin, I got an error saying
flocker-docker-plugin.service: main process exited, code=killed, status=11/SEGV
Based on the information you have given, there could be multiple reasons for this error:
Check if you can reach the flocker control service and more so if your node-agents can reach the control-service.
Check if the flocker-dataset-agent and the flocker-container-agent are running on your nodes (see the command after this list).
Check if you have provided certificates for the flocker-docker-plugin as mentioned on their site (https://docs.clusterhq.com/en/latest/docker-integration/generate-api-plugin.html).
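For the second check, the agents are ordinary systemd units (assuming the service names given above), so their state can be inspected with:
systemctl status flocker-dataset-agent flocker-container-agent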
While installing flocker, I also got the same error: we had just installed the docker plugin, and by default it doesn't start up.
First use the command systemctl start flocker-docker-plugin, and then check the running status using systemctl status flocker-docker-plugin.
Make sure the control service and dataset agent are running correctly first. You can find logs by looking in /var/log/flocker/, running journalctl -u flocker-dataset-agent, or running flocker-diagnostics.
Read through any errors in these logs, such as problems communicating with the control service, certificate issues, agent.yml config issues, etc., or feel free to post them for more help.
You can also find flocker-docker-plugin logs the same way to see specific errors that may be occurring.
Here is more information about how to debug flocker.