I am trying to set up Kafka and KSQL using the Confluent Platform, but KSQL is unable to start.
I followed the steps to install Kafka and KSQL from Confluent using this link: https://docs.confluent.io/current/installation/installing_cp/deb-ubuntu.html#systemd-ubuntu-debian-install
I skipped the ZooKeeper config (since we are not using multiple servers) and the Control Center part.
After everything, I started zookeeper, kafka, schema-registry, kafka-connect, kafka-rest and ksql, in that order. While checking the status with the command
sudo systemctl status confluent*
ksql failed to start, while everything else is running.
Looking into /etc/ksql/ksql-server.properties:
#------ Endpoint config -------
listeners=http://0.0.0.0:8088
ksql.logging.processing.topic.auto.create=true
ksql.logging.processing.stream.auto.create=true
bootstrap.servers=localhost:9092
Expected Result:
confluent-ksql.service - Streaming SQL engine for Apache Kafka
   Loaded: loaded (/lib/systemd/system/confluent-ksql.service; disabled; vendor preset: enabled)
   Active: active (running)
Actual Result:
confluent-ksql.service - Streaming SQL engine for Apache Kafka
   Loaded: loaded (/lib/systemd/system/confluent-ksql.service; disabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Tue 2019-08-27 15:15:08 IST; 9s ago
     Docs: http://docs.confluent.io/
  Process: 13833 ExecStart=/usr/bin/ksql-server-start /etc/ksql/ksql-server.properties (code=exited, status=255)
 Main PID: 13833 (code=exited, status=255)
Aug 27 15:15:07 Mayank-Vostro-3478 ksql-server-start[13833]: (io.confluent.ksql.util.KsqlConfig:347)
Aug 27 15:15:07 Mayank-Vostro-3478 ksql-server-start[13833]: [2019-08-27 15:15:07,722] ERROR Failed to start KSQL (io.confluent.ksql.rest.serv
Aug 27 15:15:07 Mayank-Vostro-3478 ksql-server-start[13833]: io.confluent.ksql.util.KsqlServerException: Could not create the kafka streams st
Aug 27 15:15:07 Mayank-Vostro-3478 ksql-server-start[13833]: Make sure the directory exists and is writable for KSQL server
Aug 27 15:15:07 Mayank-Vostro-3478 ksql-server-start[13833]: or its parend directory is writbale by KSQL server
Aug 27 15:15:07 Mayank-Vostro-3478 ksql-server-start[13833]: or change it to a writable directory by setting 'ksql.streams.state.dir' config
Aug 27 15:15:07 Mayank-Vostro-3478 ksql-server-start[13833]: at io.confluent.ksql.rest.server.KsqlServerMain.enforceStreamStateDirAvai
Aug 27 15:15:07 Mayank-Vostro-3478 ksql-server-start[13833]: at io.confluent.ksql.rest.server.KsqlServerMain.main(KsqlServerMain.java:
Aug 27 15:15:08 Mayank-Vostro-3478 systemd[1]: confluent-ksql.service: Main process exited, code=exited, status=255/n/a
Aug 27 15:15:08 Mayank-Vostro-3478 systemd[1]: confluent-ksql.service: Failed with result 'exit-code'.
According to the above error, it is a permission issue: the user running the KSQL process doesn't have write permission to create the state directory in the given location.
You have to either grant that user permission to create the directory, or change ksql.streams.state.dir to a path where the user does have write permission.
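A minimal sketch of the second option (the cp-ksql user and confluent group are assumptions based on the Confluent deb packaging, and /var/lib/kafka-streams is an arbitrary choice; check your unit and adjust):

# Verify which user the unit actually runs as (assumption: cp-ksql)
systemctl show confluent-ksql --property=User
# Create a state directory that user can write to
sudo mkdir -p /var/lib/kafka-streams
sudo chown cp-ksql:confluent /var/lib/kafka-streams
# Then point KSQL at it in /etc/ksql/ksql-server.properties:
#   ksql.streams.state.dir=/var/lib/kafka-streams
sudo systemctl restart confluent-ksql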
I was using MongoDB and it was fine.
Then I wanted to convert it to a replica set, ran into some problems, and uninstalled it.
After reinstalling (10 times, and trying everything on the internet xD), when I check the status with systemctl status, it says failed with exit-code (I know my conf file doesn't have a problem).
What can I do? I even installed version 3.3, and even that doesn't start anymore.
I tried everything that came to my mind (purging config files & lots more...).
I really don't want to reinstall my OS (I really can't).
This is my sudo systemctl status mongod:
● mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2021-02-18 20:05:20 +0330; 8s ago
Docs: https://docs.mongodb.org/manual
Process: 147513 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=1/FAILURE)
Main PID: 147513 (code=exited, status=1/FAILURE)
Feb 18 20:05:20 nima-Lenovo-ideapad-320-15AST systemd[1]: Started MongoDB Database Server.
Feb 18 20:05:20 nima-Lenovo-ideapad-320-15AST mongod[147513]: about to fork child process, waiting until server is ready for connections.
Feb 18 20:05:20 nima-Lenovo-ideapad-320-15AST mongod[147527]: forked process: 147527
Feb 18 20:05:20 nima-Lenovo-ideapad-320-15AST mongod[147513]: ERROR: child process failed, exited with 1
Feb 18 20:05:20 nima-Lenovo-ideapad-320-15AST mongod[147513]: To see additional information in this output, start without the "--fork" option.
Feb 18 20:05:20 nima-Lenovo-ideapad-320-15AST systemd[1]: mongod.service: Main process exited, code=exited, status=1/FAILURE
Feb 18 20:05:20 nima-Lenovo-ideapad-320-15AST systemd[1]: mongod.service: Failed with result 'exit-code'.
I solved the problem by changing the default MongoDB port from 27017 to 27018 in /etc/mongod.conf.
I'm sure this will come in handy for a lot of people.
And for the last part: after uninstalling MongoDB, I removed the mongod.service files (every file) in the system and systemd directories in root, and installed MongoDB again.
(So I think the first uninstall wasn't complete, and the two instances interfered with each other. Now everything works fine in MongoDB with port 27018.)
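For reference, a sketch of the port change (the sed pattern assumes the stock YAML layout of /etc/mongod.conf, where port sits under the net: section; edit the file by hand if yours differs):

# Change net.port from the default 27017 to 27018
sudo sed -i 's/^  port: 27017/  port: 27018/' /etc/mongod.conf
sudo systemctl restart mongod
# Quick connectivity check against the new port
mongo --port 27018 --eval 'db.runCommand({ ping: 1 })'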
OS: RHEL 8.2
I am trying to create a systemd service for ZooKeeper. It fails to access the dataDir.
Here is my ZooKeeper config:
dataDir=/opt/zookeeper
maxClientCnxns=20
tickTime=2000
dataDir=/var/zookeeper/
initLimit=20
syncLimit=10
server.0=master:2888:3888
clientPort=2181
admin.serverPort=8082
The permissions of /opt/zookeeper are set to 777:
[user1@server1 opt]$ ls -lart
total 0
dr-xr-xr-x. 17 root root 244 Jul 3 10:56 ..
drwxr-xr-x 3 root root 27 Jul 10 10:29 rh
drw-r--r-- 2 user2 user2 6 Jul 17 08:48 hsluw_data
drw-r--r-- 2 user2 user2 6 Jul 17 08:58 hsluw_config
drwxr-xr-x. 6 root root 71 Jul 17 08:58 .
drwxrwxrwx 3 user2 user2 23 Jul 17 09:40 zookeeper
If I run the command
./bin/zookeeper-server-start.sh config/zookeeper.properties
it gives me an "Unable to access datadir" error:
[2020-07-30 10:25:50,767] ERROR Invalid configuration, only one server specified (ignoring) (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-07-30 10:25:50,767] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
[2020-07-30 10:25:50,769] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
[2020-07-30 10:25:50,769] ERROR Unable to access datadir, exiting abnormally (org.apache.zookeeper.server.ZooKeeperServerMain)
org.apache.zookeeper.server.persistence.FileTxnSnapLog$DatadirException: Cannot write to data directory /var/zookeeper/version-2
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.<init>(FileTxnSnapLog.java:132)
at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:124)
at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:106)
at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:64)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:128)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:82)
Unable to access datadir, exiting abnormally
However, running the above command with sudo works:
sudo ./bin/zookeeper-server-start.sh config/zookeeper.properties
Now I have created a service in /etc/systemd/system/zookeeper.service, written this way:
[Unit]
Requires=network.target remote-fs.target
After=network.target remote-fs.target
[Service]
Type=simple
User=user2
ExecStart=/home/user2/kafka/bin/zookeeper-server-start.sh /home/user2/kafka/config/zookeeper.properties
ExecStop=/home/user2/kafka/bin/zookeeper-server-stop.sh
Restart=on-abnormal
[Install]
WantedBy=multi-user.target
The SELinux status is disabled.
user2@server1$ sestatus
SELinux status: disabled
Now if I do the following
sudo systemctl daemon-reload
sudo systemctl start zookeeper
sudo systemctl enable zookeeper
I am getting the same "Unable to access datadir" error, like the following:
[user2#server1 /]$ systemctl status zookeeper
● zookeeper.service
Loaded: loaded (/etc/systemd/system/zookeeper.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2020-07-30 10:13:19 CEST; 24s ago
Main PID: 12911 (code=exited, status=3)
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: org.apache.zookeeper.server.persistence.FileTxnSnapLog$Data>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: at org.apache.zookeeper.server.persistence.FileTxnS>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: at org.apache.zookeeper.server.ZooKeeperServerMain.>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: at org.apache.zookeeper.server.ZooKeeperServerMain.>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: at org.apache.zookeeper.server.ZooKeeperServerMain.>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: at org.apache.zookeeper.server.quorum.QuorumPeerMai>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: at org.apache.zookeeper.server.quorum.QuorumPeerMai>
Jul 30 10:13:19 server1.localdomain zookeeper-server-start.sh[12911]: Unable to access datadir, exiting abnormally
Jul 30 10:13:19 server1.localdomain systemd[1]: zookeeper.service: Main process exited, code=exited, status=3/NOTIMPLEMENTED
Jul 30 10:13:19 server1.localdomain systemd[1]: zookeeper.service: Failed with result 'exit-code'.
What am I missing here?
In the configuration file, dataDir is set twice: first to /opt/zookeeper and then to
dataDir=/var/zookeeper/
The second assignment wins, so ZooKeeper tries to write to /var/zookeeper/, which user2 cannot write to (that is also why sudo works). Removing that line solves the issue.
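A quick way to apply and verify the fix (a sketch; the properties path is taken from the ExecStart line of your unit):

# Delete the overriding dataDir line, leaving dataDir=/opt/zookeeper in effect
sed -i '\|^dataDir=/var/zookeeper/$|d' /home/user2/kafka/config/zookeeper.properties
grep '^dataDir' /home/user2/kafka/config/zookeeper.properties   # should print only dataDir=/opt/zookeeper
sudo systemctl restart zookeeper
systemctl status zookeeper                                      # expect Active: active (running)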
I've got a VPS with CentOS 7, but when I try to run MongoDB as a service I get the following message:
Apr 06 03:11:46 server.backupserver.com systemd[1]: Starting MongoDB Database Server...
Apr 06 03:11:46 server.backupserver.com mongod[3767]: about to fork child process, waiting until server is ready for connections.
Apr 06 03:11:46 server.backupserver.com mongod[3767]: forked process: 3769
Apr 06 03:11:49 server.backupserver.com systemd[1]: Can't open PID file /var/run/mongodb/mongod.pid (yet?) after start: Too many levels of symbolic links
Apr 06 03:13:17 server.backupserver.com systemd[1]: mongod.service start operation timed out. Terminating.
Apr 06 03:13:17 server.backupserver.com systemd[1]: Failed to start MongoDB Database Server.
Apr 06 03:13:17 server.backupserver.com systemd[1]: Unit mongod.service entered failed state.
Apr 06 03:13:17 server.backupserver.com systemd[1]: mongod.service failed.
I tried the answers from other topics, but nothing has worked yet.
The /var/run/mongodb directory and the mongod.pid file have the right permissions and owner (mongod).
Please, help.
This seems to be a message that systemd can produce under a variety of conditions. To troubleshoot:
Start with a pristine Docker, Vagrant, VirtualBox, etc. image of CentOS 7 (see the sketch after this list).
Follow the official MongoDB installation instructions.
If you succeed, follow the official installation instructions on your VPS.
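For instance, a throwaway sandbox with Docker (the image tag is just an example; note that a plain container has no systemd, so exercise mongod directly rather than via systemctl):

# Start a pristine CentOS 7 container
docker run -it --rm centos:7 /bin/bash
# Inside it, follow the official yum installation steps, then run mongod directly:
#   mongod --config /etc/mongod.conf
# If that works, repeat the same official steps on the VPS and compare.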
I'm working with a Firebird 3.0 database. Suddenly it stopped working, and when I checked the server status with
$ /etc/init.d/firebird3.0 status
I see the server is stopped:
● firebird3.0.service - Firebird Database Server ( SuperServer )
Loaded: loaded (/lib/systemd/system/firebird3.0.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2019-05-16 19:01:13 IST; 29s ago
Process: 9628 ExecStart=/usr/sbin/fbguard -pidfile /run/firebird3.0/default.pid -daemon -forever (code=exited, status=252)
May 16 19:00:58 ADMIN-I-61 systemd[1]: Starting Firebird Database Server ( SuperServer )...
May 16 19:01:13 ADMIN-I-61 systemd[1]: firebird3.0.service: Control process exited, code=exited status=252
May 16 19:01:13 ADMIN-I-61 systemd[1]: Failed to start Firebird Database Server ( SuperServer ).
May 16 19:01:13 ADMIN-I-61 systemd[1]: firebird3.0.service: Unit entered failed state.
May 16 19:01:13 ADMIN-I-61 systemd[1]: firebird3.0.service: Failed with result 'exit-code'.
When I try the following commands to start the server,
/etc/init.d/firebird3.0 start
/etc/init.d/firebird3.0 restart
it returns:
[....] Starting firebird3.0 (via systemctl): firebird3.0.serviceJob for firebird3.0.service failed because the control process exited with error code. See "systemctl status firebird3.0.service" and "journalctl -xe" for details.
failed!
Today's firebird.log file looks like this:
ADMIN-I-61 Thu May 16 11:06:37 2019
/opt/firebird/bin/fbguard: guardian starting /opt/firebird/bin/firebird
ADMIN-I-61 Thu May 16 11:07:26 2019
INET/inet_error: bind errno = 98
ADMIN-I-61 Thu May 16 11:07:27 2019
startup:INET_connect:
Unable to complete network request to host "ADMIN-I-61".
Error while listening for an incoming connection.
Address already in use
ADMIN-I-61 Thu May 16 11:07:27 2019
/opt/firebird/bin/fbguard: /opt/firebird/bin/firebird terminated due to startup error (2)
ADMIN-I-61 Thu May 16 11:07:27 2019
/opt/firebird/bin/fbguard: /opt/firebird/bin/firebird terminated due to startup error (2)
ADMIN-I-61 Thu May 16 12:22:35 2019
/opt/firebird/bin/fbguard: guardian starting /opt/firebird/bin/firebird
I have checked the ports.
Please help...!
When Firebird was installed from the deb package, this line in the file /etc/firebird/3.0/firebird.conf was uncommented:
RemoteBindAddress = localhost
Comment out this line:
#RemoteBindAddress = localhost
The default is:
RemoteBindAddress =
After the change, you must restart the Firebird service.
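A sketch of applying that with standard tools (the port check is included because your log shows "Address already in use", and 3050 is Firebird's default port):

# Comment out the bind-address override
sudo sed -i 's/^RemoteBindAddress = localhost/#RemoteBindAddress = localhost/' /etc/firebird/3.0/firebird.conf
# See whether something already holds port 3050 before restarting
sudo ss -tlnp | grep 3050
sudo systemctl restart firebird3.0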
Yesterday the service worked fine, but today when I checked its state I saw:
Mar 11 14:03:16 coreos-1 systemd[1]: scheduler.service: main process exited, code=exited, status=2/INVALIDARGUMENT
Mar 11 14:03:16 coreos-1 systemd[1]: Unit scheduler.service entered failed state.
Mar 11 14:03:16 coreos-1 systemd[1]: scheduler.service failed.
Mar 11 14:03:16 coreos-1 systemd[1]: Starting Kubernetes Scheduler...
Mar 11 14:03:16 coreos-1 systemd[1]: Started Kubernetes Scheduler.
Mar 11 14:08:16 coreos-1 kube-scheduler[4659]: E0311 14:08:16.808349 4659 reflector.go:118] watch of *api.Service ended with error: very short watch
Mar 11 14:08:16 coreos-1 kube-scheduler[4659]: E0311 14:08:16.811434 4659 reflector.go:118] watch of *api.Pod ended with error: unexpected end of JSON input
Mar 11 14:08:16 coreos-1 kube-scheduler[4659]: E0311 14:08:16.847595 4659 reflector.go:118] watch of *api.Pod ended with error: unexpected end of JSON input
It's really confusing, because etcd, flannel, and the apiserver work fine.
There are only some strange logs for etcd:
Mar 11 20:22:21 coreos-1 etcd[472]: [etcd] Mar 11 20:22:21.572 INFO | aba44aa0670b4b2e8437c03a0286d779: warning: heartbeat time out peer="6f4934635b6b4291bf29763add9bf4c7" missed=1 backoff="2s"
Mar 11 20:22:48 coreos-1 etcd[472]: [etcd] Mar 11 20:22:48.269 INFO | aba44aa0670b4b2e8437c03a0286d779: warning: heartbeat time out peer="6f4934635b6b4291bf29763add9bf4c7" missed=1 backoff="2s"
Mar 11 20:48:12 coreos-1 etcd[472]: [etcd] Mar 11 20:48:12.070 INFO | aba44aa0670b4b2e8437c03a0286d779: warning: heartbeat time out peer="6f4934635b6b4291bf29763add9bf4c7" missed=1 backoff="2s"
So I'm really stuck and don't know what's wrong. How can I resolve this problem? Or, how can I check detailed logs for the scheduler?
journalctl gives me the same logs as systemctl status.
Please see: https://github.com/GoogleCloudPlatform/kubernetes/issues/5311
It means the apiserver accepted the watch request but then immediately terminated the connection.
If you see it occasionally, it implies a transient error and is not alarming. If you see it repeatedly, it implies that the apiserver (or etcd) is sick.
Is something actually not working for you?
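To dig further, a sketch (it assumes an etcdctl of that era is on the host, and uses the scheduler.service unit name from your logs):

# Full, unabridged scheduler logs rather than the trimmed systemctl status view
journalctl -u scheduler.service --no-pager | tail -n 100
# Health of the etcd cluster, given the heartbeat timeouts above
etcdctl cluster-health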