Could not start RStudio Server 0.99.893-x86_64 on CentOS

I installed it on a CentOS 7 box.
The RStudio Server service could not start.
I ran the command
systemctl status rstudio-server.service
and it showed:
● rstudio-server.service - RStudio Server
Loaded: loaded (/etc/systemd/system/rstudio-server.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Thu 2016-01-28 20:18:20 ICT; 1min 6s ago
Process: 48820 ExecStart=/usr/lib/rstudio-server/bin/rserver (code=exited, status=203/EXEC)
Jan 28 20:18:20 localhost.localdomain systemd[1]: rstudio-server.service: control process exited, code=exited s...=203
Jan 28 20:18:20 localhost.localdomain systemd[1]: Failed to start RStudio Server.
Jan 28 20:18:20 localhost.localdomain systemd[1]: Unit rstudio-server.service entered failed state.
Jan 28 20:18:20 localhost.localdomain systemd[1]: rstudio-server.service failed.
Jan 28 20:18:20 localhost.localdomain systemd[1]: rstudio-server.service holdoff time over, scheduling restart.
Jan 28 20:18:20 localhost.localdomain systemd[1]: start request repeated too quickly for rstudio-server.service
Jan 28 20:18:20 localhost.localdomain systemd[1]: Failed to start RStudio Server.
Jan 28 20:18:20 localhost.localdomain systemd[1]: Unit rstudio-server.service entered failed state.
Jan 28 20:18:20 localhost.localdomain systemd[1]: rstudio-server.service failed.
I installed and ran an older version (rstudio-server-0.99.491-1.x86_64) on the same box without any problem.
How can I fix this issue?

Although you asked this question 3 years ago, I think it's still worth sharing my solution to this problem.
I encountered this problem after I updated R.
The reason you cannot restart rstudio-server is that port 8787 is still being used by the previous rserver process. Once you know this, the solution is easy.
First, check which PID is using port 8787:
sudo netstat -anp | grep 8787
tcp 0 0 0.0.0.0:8787 0.0.0.0:* LISTEN pid/rserver
Second, kill this PID (use your own PID):
sudo kill -9 pid
Third, restart rstudio-server or reinstall the RStudio Server package.
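For the restart step, either of the following should work, depending on how the service is managed (the unit name is taken from the status output above):
sudo systemctl restart rstudio-server
or, using RStudio's own admin utility:
sudo rstudio-server restart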

Related

Why am I getting the following errors?

Could someone help me with this error, please?
Output for command: systemctl start postgresql-13.service
Job for postgresql-13.service failed because the control process exited with error code.
See "systemctl status postgresql-13.service" and "journalctl -xeu postgresql-13.service" for details.
Output for command: systemctl status postgresql-13.service
× postgresql-13.service - PostgreSQL 13 database server
Loaded: loaded (/usr/lib/systemd/system/postgresql-13.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2022-08-23 12:17:50 CDT; 1min 47s ago
Docs: https://www.postgresql.org/docs/13/static/
Process: 1079 ExecStartPre=/usr/pgsql-13/bin/postgresql-13-check-db-dir ${PGDATA} (code=exited, status=0/SUCCESS)
Process: 1110 ExecStart=/usr/pgsql-13/bin/postmaster -D ${PGDATA} (code=exited, status=1/FAILURE)
Main PID: 1110 (code=exited, status=1/FAILURE)
CPU: 40ms
Aug 23 12:17:47 fedora systemd[1]: Starting PostgreSQL 13 database server...
Aug 23 12:17:50 fedora postmaster[1110]: 2022-08-23 12:17:50.144 CDT [1110] LOG: redirecting log output to logging collector process
Aug 23 12:17:50 fedora postmaster[1110]: 2022-08-23 12:17:50.144 CDT [1110] HINT: Future log output will appear in directory "log".
Aug 23 12:17:50 fedora systemd[1]: postgresql-13.service: Main process exited, code=exited, status=1/FAILURE
Aug 23 12:17:50 fedora systemd[1]: postgresql-13.service: Killing process 1132 (postmaster) with signal SIGKILL.
Aug 23 12:17:50 fedora systemd[1]: postgresql-13.service: Failed with result 'exit-code'.
Aug 23 12:17:50 fedora systemd[1]: postgresql-13.service: Unit process 1132 (postmaster) remains running after unit stopped.
Aug 23 12:17:50 fedora systemd[1]: Failed to start PostgreSQL 13 database server.
I already uninstalled and reinstalled PostgreSQL, but nothing works. I also tried installing postgresql-14, but I get the same error.
I need to install PostgreSQL to work alongside Ruby on Rails.
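Since the journal shows PostgreSQL redirecting its log output to the logging collector, the real failure reason is probably in the server's own log directory rather than in journalctl. A hedged diagnostic, assuming the stock PGDG data directory (adjust the path if your PGDATA differs):
# path is an assumption; check the PGDATA value used by the service
sudo tail -n 50 /var/lib/pgsql/13/data/log/postgresql-*.log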

MongoDB doesn't start, exit code 203

I installed MongoDB on Raspberry Pi Desktop in a VM, then I started it with the following command:
sudo service mongod start
Checking the enabled unit files shows that the service is enabled:
systemctl list-unit-files --state enabled
mongodb.service enabled
Then when I check the status using
sudo service mongod status
I get the following error:
● mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2020-06-08 12:33:17 CEST; 3s ago
Docs: https://docs.mongodb.org/manual
Process: 22168 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=203/EXEC)
Main PID: 22168 (code=exited, status=203/EXEC)
Jun 08 12:33:17 raspberry systemd[1]: Started MongoDB Database Server.
Jun 08 12:33:17 raspberry systemd[22168]: mongod.service: Failed to execute command: Exec format error
Jun 08 12:33:17 raspberry systemd[22168]: mongod.service: Failed at step EXEC spawning /usr/bin/mongod: Exec format error
Jun 08 12:33:17 raspberry systemd[1]: mongod.service: Main process exited, code=exited, status=203/EXEC
Jun 08 12:33:17 raspberry systemd[1]: mongod.service: Failed with result 'exit-code'.
The other issue is that there's no .sock file in my /tmp folder.
PS: I removed MongoDB and reinstalled it trying to fix the issue, but I always get the same problem.
Can anyone help me, please?
Thank you in advance.
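"Exec format error" at step EXEC usually means the mongod binary was built for a different CPU architecture than the machine, a common pitfall when an x86_64 MongoDB package lands on a Raspberry Pi. A quick check, as a hedged sketch:
# compare the binary's architecture against the machine's
file /usr/bin/mongod
uname -m
If the two don't agree (for example an x86-64 ELF binary on an armv7l host), a build of MongoDB for your platform is needed.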

Error creating a systemd service with a live socket

I'm trying to create a systemd service on CentOS 7.5 to access Livestatus remotely through systemd-socket-proxyd.
File proxy-to-livestatus.service:
[Unit]
Requires=naemon.service
After=naemon.service
[Service]
ExecStart=/usr/lib/systemd/systemd-socket-proxyd /run/naemon/live
File proxy-to-livestatus.socket:
[Unit]
StopWhenUnneeded=true
[Socket]
ListenStream=6557
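One step worth noting: after creating or changing unit files under /etc/systemd/system, systemd has to re-read its configuration before the new units can be used:
systemctl daemon-reload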
Status:
systemctl status proxy-to-livestatus.service
● proxy-to-livestatus.service
Loaded: loaded (/etc/systemd/system/proxy-to-livestatus.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since mié 2018-07-18 09:11:58 CEST; 15s ago
Process: 3203 ExecStart=/usr/lib/systemd/systemd-socket-proxyd /run/naemon/live (code=exited, status=1/FAILURE)
Main PID: 3203 (code=exited, status=1/FAILURE)
jul 18 09:11:58 chuwi systemd[1]: Started proxy-to-livestatus.service.
jul 18 09:11:58 chuwi systemd[1]: Starting proxy-to-livestatus.service...
jul 18 09:11:58 chuwi systemd-socket-proxyd[3203]: Didn't get any sockets passed in.
jul 18 09:11:58 chuwi systemd[1]: proxy-to-livestatus.service: main process exited, code=exited, status=1/FAILURE
jul 18 09:11:58 chuwi systemd[1]: Unit proxy-to-livestatus.service entered failed state.
jul 18 09:11:58 chuwi systemd[1]: proxy-to-livestatus.service failed.
Hi, to resolve this issue we have to enable the socket with the --now option:
systemctl enable --now proxy-to-livestatus.socket
and then start the service:
systemctl start proxy-to-livestatus.service
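The reason this works is that systemd-socket-proxyd expects its listening socket to be passed in by systemd, which only happens when activation goes through the .socket unit; starting the service on its own produces the "Didn't get any sockets passed in" error shown above. To verify the fix, something like this should work (nc is just one possible TCP client, an assumption here):
systemctl status proxy-to-livestatus.socket
nc localhost 6557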
Regards

Filebeat Service will not start on RHEL 7

I have a problem with my Filebeat installation.
When I try to start it with "service filebeat start", it says "Starting Filebeat". After "service filebeat status" I get 4 PIDs (until here everything looks "normal"):
[root@(Server) run]# service filebeat status
Filebeat is running with pid: 30650 30657 30658 30659
But after checking the PID, we see that it is not running:
[root@(Server) run]# ps -ef | grep 30650
root 30665 31360 0 16:27 pts/0 00:00:00 grep --color=auto 30650
Trying to start it with systemctl doesn't help:
[root@(Server) run]# systemctl start filebeat
Job for filebeat.service failed because a configured resource limit was exceeded. See "systemctl status filebeat.service" and "journalctl -xe" for details.
Status says:
[root@Server run]# systemctl status filebeat
● filebeat.service - LSB: start and stop filebeat
Loaded: loaded (/etc/rc.d/init.d/filebeat; bad; vendor preset: disabled)
Active: failed (Result: resources) since Tue 2017-09-26 16:30:33 CEST; 1min 41s ago
Docs: man:systemd-sysv-generator(8)
Process: 32118 ExecStart=/etc/rc.d/init.d/filebeat start (code=exited, status=0/SUCCESS)
Sep 26 16:30:33 Server... systemd[1]: Starting LSB: start and stop filebeat...
Sep 26 16:30:33 Server... filebeat[32118]: Starting Filebeat
Sep 26 16:30:33 Server... su[32119]: (to user) root on none
Sep 26 16:30:33 Server... systemd[1]: PID file /var/run/filebeat.pid not readable (yet?) after start.
Sep 26 16:30:33 Server... systemd[1]: Failed to start LSB: start and stop filebeat.
Sep 26 16:30:33 Server... systemd[1]: Unit filebeat.service entered failed state.
Sep 26 16:30:33 Server... systemd[1]: filebeat.service failed.
Does somebody have any idea?
Regards
Problem was "chown permissions". I installed filebeat not as root and the "data" directory had root user & group ownership. After changing that, it runs and starts automatically after boot.
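As a hedged sketch of the fix (both the data directory path and the service user are assumptions; adjust them to your installation):
# assumes Filebeat's data directory is /var/lib/filebeat and it runs as user "filebeat"
sudo chown -R filebeat:filebeat /var/lib/filebeat
sudo systemctl restart filebeat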
Regards

Job for kube-apiserver.service failed because the control process exited with error code

To begin, I want to point out that I am fairly new to Linux systems, and totally, totally new to Kubernetes, so my question may be trivial.
As stated in the title, I have a problem with setting up the Kubernetes cluster. I am working on Atomic Host Version: 7.1707 (2017-07-31 16:12:06).
I am following this guide:
http://www.projectatomic.io/docs/gettingstarted/
In addition to that, I followed this:
http://www.projectatomic.io/docs/kubernetes/
To be precise, I ran this command:
rpm-ostree install kubernetes-master --reboot
Everything was going fine until this point:
systemctl start etcd kube-apiserver kube-controller-manager kube-scheduler
The problem is with:
systemctl start etcd kube-apiserver
as it gives me back this response:
Job for kube-apiserver.service failed because the control process
exited with error code. See "systemctl status kube-apiserver.service"
and "journalctl -xe" for details.
systemctl status kube-apiserver.service
gives me back:
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Fri 2017-08-25 14:29:56 CEST; 2s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Process: 17876 ExecStart=/usr/bin/kube-apiserver $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_ETCD_SERVERS $KUBE_API_ADDRESS $KUBE_API_PORT $KUBELET_PORT $KUBE_ALLOW_PRIV $KUBE_SERVICE_ADDRESSES $KUBE_ADMISSION_CONTROL $KUBE_API_ARGS (code=exited, status=255)
Main PID: 17876 (code=exited, status=255)
Aug 25 14:29:56 master systemd[1]: kube-apiserver.service: main process exited, code=exited, status=255/n/a
Aug 25 14:29:56 master systemd[1]: Failed to start Kubernetes API Server.
Aug 25 14:29:56 master systemd[1]: Unit kube-apiserver.service entered failed state.
Aug 25 14:29:56 master systemd[1]: kube-apiserver.service failed.
Aug 25 14:29:56 master systemd[1]: kube-apiserver.service holdoff time over, scheduling restart.
Aug 25 14:29:56 master systemd[1]: start request repeated too quickly for kube-apiserver.service
Aug 25 14:29:56 master systemd[1]: Failed to start Kubernetes API Server.
Aug 25 14:29:56 master systemd[1]: Unit kube-apiserver.service entered failed state.
Aug 25 14:29:56 master systemd[1]: kube-apiserver.service failed.
I have no clue where to start, and I will be more than thankful for any advice.
It turned out to be a typo in /etc/kubernetes/config. I had misunderstood the "# Comma separated list of nodes in the etcd cluster" comment.
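For reference, that comment sits above the KUBE_ETCD_SERVERS setting; a plausible single-node value looks like this (the endpoint address is an assumption, substitute your own etcd server):
# hypothetical value; use your actual etcd endpoint(s), comma separated
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"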
I don't know how to close the thread or anything.