I'm trying to install phpMyAdmin on CentOS 7 on a DigitalOcean droplet. I edited the allowed-IP setting to allow any (dynamic) IP, but when I try to restart the service I get this message:
[root@centos-512mb-nyc2-01 /]# sudo systemctl restart httpd.service
Job for httpd.service failed because the control process exited with error code. See "systemctl status httpd.service" and "journalctl -xe" for details.
Here is the result after running systemctl status httpd.service:
[root@centos /]# systemctl status httpd.service
● httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2016-04-26 04:47:31 EDT; 1min 50s ago
Docs: man:httpd(8)
man:apachectl(8)
Process: 2633 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, status=1/FAILURE)
Process: 2632 ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND (code=exited, status=1/FAILURE)
Main PID: 2632 (code=exited, status=1/FAILURE)
Apr 26 04:47:31 centos-512mb-nyc2-01 systemd[1]: Starting The Apache HTTP Server...
Apr 26 04:47:31 centos-512mb-nyc2-01 httpd[2632]: AH00526: Syntax error on line 1 of /etc/httpd/conf.d/phpMyAdmin.conf:
Apr 26 04:47:31 centos-512mb-nyc2-01 httpd[2632]: allow not allowed here
Apr 26 04:47:31 centos-512mb-nyc2-01 systemd[1]: httpd.service: main process exited, code=exited, status=1/FAILURE
Apr 26 04:47:31 centos-512mb-nyc2-01 kill[2633]: kill: cannot find process ""
Apr 26 04:47:31 centos-512mb-nyc2-01 systemd[1]: httpd.service: control process exited, code=exited status=1
Apr 26 04:47:31 centos-512mb-nyc2-01 systemd[1]: Failed to start The Apache HTTP Server.
Apr 26 04:47:31 centos-512mb-nyc2-01 systemd[1]: Unit httpd.service entered failed state.
Apr 26 04:47:31 centos-512mb-nyc2-01 systemd[1]: httpd.service failed.
Here is my /etc/httpd/conf.d/phpMyAdmin.conf file:
Allow from
# phpMyAdmin - Web based MySQL browser written in php
#
# Allows only localhost by default
#
# But allowing phpMyAdmin to anyone other than localhost should be considered
# dangerous unless properly secured by SSL
Alias /phpMyAdmin /usr/share/phpMyAdmin
Alias /phpmyadmin /usr/share/phpMyAdmin
<Directory /usr/share/phpMyAdmin/>
AddDefaultCharset UTF-8
<IfModule mod_authz_core.c>
# Apache 2.4
<RequireAny>
#Require ip 127.0.0.1
Require all granted
#Require ip ::1
</RequireAny>
</IfModule>
<IfModule !mod_authz_core.c>
# Apache 2.2
Order Deny,Allow
Deny from All
Allow from 127.0.0.1
Allow from ::1
</IfModule>
</Directory>
<Directory /usr/share/phpMyAdmin/setup/>
<IfModule mod_authz_core.c>
# Apache 2.4
<RequireAny>
Why don't you use the one-click Application Image that DigitalOcean offers?
You can get the full tutorial here
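For what it's worth, the AH00526 message above points at line 1 of /etc/httpd/conf.d/phpMyAdmin.conf, and the pasted file does indeed begin with a stray "Allow from" in front of the comment header. Allow is an Apache 2.2 directive that is only valid inside a container such as <Directory>, so at the top level of the file it produces "allow not allowed here". A minimal sketch of the fix is to delete the stray fragment so the file starts with the comment block:

# phpMyAdmin - Web based MySQL browser written in php
#
# Allows only localhost by default

You can then check the configuration before restarting:

sudo apachectl configtest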
Related
Trying to start a service to run gunicorn as backend server for Flask, not working. Running nginx as frontend server for React, working.
Server:
Virtualization: vmware
Operating System: Red Hat Enterprise Linux 8.4 (Ootpa)
CPE OS Name: cpe:/o:redhat:enterprise_linux:8.4:GA
Kernel: Linux 4.18.0-305.3.1.el8_4.x86_64
Architecture: x86-64
Service file in /etc/systemd/system/myservice.service:
[Unit]
Description="Description"
After=network.target
[Service]
User=root
Group=root
WorkingDirectory=/home/project/app/api
ExecStart=/home/project/app/api/venv/bin/gunicorn -b 127.0.0.1:5000 api:app
Restart=always
[Install]
WantedBy=multi-user.target
Contents of /app/api:
-rwxr-xr-x. 1 root root 2018 Jun 9 20:06 api.py
drwxrwxr-x+ 5 root root 100 Jun 7 10:11 venv
Error message:
● myservice.service - "Description"
Loaded: loaded (/etc/systemd/system/myservice.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2021-06-10 19:01:01 CEST; 5s ago
Process: 18307 ExecStart=/home/project/app/api/venv/bin/gunicorn -b 127.0.0.1:5000 api:app (code=exited, status=203/EXEC)
Main PID: 18307 (code=exited, status=203/EXEC)
Jun 10 19:01:01 xxxx systemd[1]: myservice.service: Service RestartSec=100ms expired, scheduling restart.
Jun 10 19:01:01 xxxx systemd[1]: myservice.service: Scheduled restart job, restart counter is at 5.
Jun 10 19:01:01 xxxx systemd[1]: Stopped "Description".
Jun 10 19:01:01 xxxx systemd[1]: myservice.service: Start request repeated too quickly.
Jun 10 19:01:01 xxxx systemd[1]: myservice.service: Failed with result 'exit-code'.
Jun 10 19:01:01 xxxx systemd[1]: Failed to start "Description".
Tried, not working:
Adding Environment="PATH=/home/project/app/api/venv/bin" under [Service]
$ systemctl reset-failed myservice.service
$ systemctl daemon-reload
Rebooting, of course.
Tried, working:
Running (as root) /home/project/app/api/venv/bin/gunicorn -b 127.0.0.1:5000 api:app while in /app/api directory
Does anyone know how to fix this problem?
Typically enough, I figured it out shortly after posting this issue.
SELinux was messing with permissions for files and directories, so for anyone experiencing the same issue, make sure to test the following changes (as root):
$ setsebool -P httpd_can_network_connect on
$ chcon -Rt httpd_sys_content_t /path/to/your/Flask/dir
In my case: $ chcon -Rt httpd_sys_content_t /home/project/app/api
While this is NOT a permanent fix, it's worth a try. Check out the SELinux docs for more permanent solutions.
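As a follow-up to the chcon commands above: chcon changes the label on the files directly, so a full filesystem relabel can revert it. A more durable variant (a sketch, assuming the same path and context type as above; semanage ships in the policycoreutils-python-utils package on RHEL 8) records a labeling rule and then reapplies it:

$ semanage fcontext -a -t httpd_sys_content_t '/home/project/app/api(/.*)?'
$ restorecon -Rv /home/project/app/api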
When trying to start the zookeeper service, I get the following:
● zookeeper.service
Loaded: loaded (/etc/systemd/system/zookeeper.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2020-04-02 16:19:24 EDT; 5min ago
Process: 5201 ExecStop=/usr/local/kafka/kafka_2.13-2.4.1/bin/zookeeper-server-stop.sh (code=exited, status=1/FAILURE)
Process: 4882 ExecStart=/usr/local/kafka/kafka_2.13-2.4.1/bin/zookeeper-server-start.sh /usr/local/kafka/kafka_2.13-2.4.1/config/zookeeper.properties (code=exited, status=127)
Main PID: 4882 (code=exited, status=127)
Apr 02 16:19:24 centos.localdomain systemd[1]: Started zookeeper.service.
Apr 02 16:19:24 centos.localdomain systemd[1]: zookeeper.service: main process exited, code=exited, status=127/n/a
Apr 02 16:19:24 centos.localdomain systemd[1]: zookeeper.service: control process exited, code=exited status=1
Apr 02 16:19:24 centos.localdomain systemd[1]: Unit zookeeper.service entered failed state.
Apr 02 16:19:24 centos.localdomain systemd[1]: zookeeper.service failed.
The zookeeper.service file is configured as follows
[Unit]
Requires=network.target remote-fs.target
After=network.target remote-fs.target
[Service]
Type=simple
User=specadmin
ExecStart=/usr/local/kafka/kafka_2.13-2.4.1/bin/zookeeper-server-start.sh /usr/local/kafka/kafka_2.13-2.4.1/config/zookeeper.properties
ExecStop=/usr/local/kafka/kafka_2.13-2.4.1/bin/zookeeper-server-stop.sh
Restart=on-abnormal
[Install]
WantedBy=multi-user.target
When running zookeeper manually with the same user configured in the service file, everything works fine.
Please advise.
Turns out the issue was related to the environment variables systemd uses.
systemd uses a fixed $PATH variable, and changes made to /etc/profile, /etc/bashrc, and the like are not applied to systemd.
ZooKeeper runs java, which needs to be on the search path, but since systemd doesn't read the files where the search path is set, the ZooKeeper start script couldn't find java.
I solved it by overriding the search path with an Environment=PATH=... parameter in the zookeeper service file, listing all the required directories.
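For reference, a minimal sketch of that override (the JVM path is an assumption; substitute the directory that actually contains your java binary):

[Service]
Environment=PATH=/usr/lib/jvm/jre/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

This line can go directly in the [Service] section of zookeeper.service, or in a drop-in created with systemctl edit zookeeper; either way, run systemctl daemon-reload and restart the unit afterwards.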
I'm trying to create a systemd service on CentOS 7.5 to access livestatus remotely through systemd-socket-proxyd.
File proxy-to-livestatus.service:
[Unit]
Requires=naemon.service
After=naemon.service
[Service]
ExecStart=/usr/lib/systemd/systemd-socket-proxyd /run/naemon/live
File proxy-to-livestatus.socket:
[Unit]
StopWhenUnneeded=true
[Socket]
ListenStream=6557
Status:
systemctl status proxy-to-livestatus.service
● proxy-to-livestatus.service
Loaded: loaded (/etc/systemd/system/proxy-to-livestatus.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since mié 2018-07-18 09:11:58 CEST; 15s ago
Process: 3203 ExecStart=/usr/lib/systemd/systemd-socket-proxyd /run/naemon/live (code=exited, status=1/FAILURE)
Main PID: 3203 (code=exited, status=1/FAILURE)
jul 18 09:11:58 chuwi systemd[1]: Started proxy-to-livestatus.service.
jul 18 09:11:58 chuwi systemd[1]: Starting proxy-to-livestatus.service...
jul 18 09:11:58 chuwi systemd-socket-proxyd[3203]: Didn't get any sockets passed in.
jul 18 09:11:58 chuwi systemd[1]: proxy-to-livestatus.service: main process exited, code=exited, status=1/FAILURE
jul 18 09:11:58 chuwi systemd[1]: Unit proxy-to-livestatus.service entered failed state.
jul 18 09:11:58 chuwi systemd[1]: proxy-to-livestatus.service failed.
To resolve this issue, we have to enable the socket with the --now option:
systemctl enable --now proxy-to-livestatus.socket
and then start the service:
systemctl start proxy-to-livestatus.service
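The root cause is visible in the log above: systemd-socket-proxyd expects to inherit its listening socket from systemd ("Didn't get any sockets passed in"), so the service has to be activated through the matching .socket unit rather than started on its own. A quick way to verify (a sketch):

systemctl enable --now proxy-to-livestatus.socket
systemctl status proxy-to-livestatus.socket
ss -ltn | grep 6557

The listener on port 6557 belongs to the socket unit; systemd starts the service and hands the socket over on the first incoming connection.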
Regards
I have a Google Cloud VM with a MongoDB server that ran for many months. Today the VM restarted, and MongoDB won't run as a service (I can run it manually as a process and it starts OK).
OS: CentOS 7
MongoDB Version: 3.2.16
The error thrown:
>sudo service mongod start
Starting mongod (via systemctl): Job for mongod.service failed because the control process exited with error code. See "systemctl status mongod.service" and "journalctl -xe" for details.
[FAILED]
So if I run "systemctl status mongod.service":
>sudo systemctl status mongod.service
mongod.service - SYSV: Mongo is a scalable, document-oriented database.
Loaded: loaded (/etc/rc.d/init.d/mongod; bad; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2018-02-05 18:05:49 UTC; 1min 20s ago
Docs: man:systemd-sysv-generator(8)
Process: 3755 ExecStart=/etc/rc.d/init.d/mongod start (code=exited, status=1/FAILURE)
Feb 05 18:05:49 todoturnos-testing systemd[1]: Starting SYSV: Mongo is a scalable, document-oriented database....
Feb 05 18:05:49 todoturnos-testing runuser[3762]: pam_unix(runuser:session): session opened for user mongod by (uid=0)
Feb 05 18:05:49 todoturnos-testing mongod[3755]: Starting mongod: [FAILED]
Feb 05 18:05:49 todoturnos-testing systemd[1]: mongod.service: control process exited, code=exited status=1
Feb 05 18:05:49 todoturnos-testing systemd[1]: Failed to start SYSV: Mongo is a scalable, document-oriented database..
Feb 05 18:05:49 todoturnos-testing systemd[1]: Unit mongod.service entered failed state.
Feb 05 18:05:49 todoturnos-testing systemd[1]: mongod.service failed.
If I run "journalctl -xe":
>sudo journalctl -xe
Feb 05 18:09:58 todoturnos-testing sudo[3827]: janokpodelmundi : TTY=pts/0 ; PWD=/usr/lib/tmpfiles.d ; USER=root ; COMMAND=/sbin/service mongod start
Feb 05 18:09:58 todoturnos-testing polkitd[348]: Registered Authentication Agent for unix-process:3847:870363 (system bus name :1.242 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /or
Feb 05 18:09:58 todoturnos-testing systemd[1]: Starting SYSV: Mongo is a scalable, document-oriented database....
-- Subject: Unit mongod.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mongod.service has begun starting up.
Feb 05 18:09:58 todoturnos-testing runuser[3860]: pam_unix(runuser:session): session opened for user mongod by (uid=0)
Feb 05 18:09:58 todoturnos-testing runuser[3860]: pam_unix(runuser:session): session closed for user mongod
Feb 05 18:09:58 todoturnos-testing mongod[3853]: Starting mongod: [FAILED]
Feb 05 18:09:58 todoturnos-testing systemd[1]: mongod.service: control process exited, code=exited status=1
Feb 05 18:09:58 todoturnos-testing systemd[1]: Failed to start SYSV: Mongo is a scalable, document-oriented database..
-- Subject: Unit mongod.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mongod.service has failed.
--
-- The result is failed.
Feb 05 18:09:58 todoturnos-testing systemd[1]: Unit mongod.service entered failed state.
Feb 05 18:09:58 todoturnos-testing systemd[1]: mongod.service failed.
Feb 05 18:09:58 todoturnos-testing polkitd[348]: Unregistered Authentication Agent for unix-process:3847:870363 (system bus name :1.242, object path /org/freedesktop/PolicyKit1/AuthenticationAgent,
Feb 05 18:10:00 todoturnos-testing sudo[3866]: janokpodelmundi : TTY=pts/0 ; PWD=/usr/lib/tmpfiles.d ; USER=root ; COMMAND=/bin/journalctl -xe
Where "janokpodelmundi" is my username.
So, I have disabled SELinux, as I know it could be related to this problem, but that didn't resolve it.
I've also changed the "pid" file location to ensure the permissions are OK, and disabled forking in the config as well.
My MongoDB config:
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:
# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongo/mongod.pid  # location of pidfile
# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0  # Listen to local interface only, comment to listen on all interfaces.
The MongoDB log is empty; it is not writing anything at any point.
I have tried many alternatives I've found on the internet, but the problem persists.
Any help would be great.
Solution:
After trying "mongod -f /path-to-config-file" and getting an "incorrect YAML" error at line 29, I pasted lines 26-29 from the original mongod.conf:
# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0 #
After that I ran "mongod -f /path-to-config-file" again, and it succeeded.
YAML files require spaces for indentation, not tabs. It seems you have a stray character or indent in your conf file. The easiest way to debug this is to start off with a basic conf file and add options back one at a time, making sure your indents are correct before adding another option.
As mentioned above, the issue may be caused by the file not being valid YAML.
Run:
mongod -f /etc/mongod.conf
and it will tell you which line the issue is on (fix the extra spaces and the issue will be solved).
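To illustrate the indentation rule, a minimal valid fragment looks like this (two spaces per level, never tabs):

net:
  port: 27017
  bindIp: 0.0.0.0

If PyYAML happens to be installed, you can also lint the file independently of mongod:

python -c 'import yaml; yaml.safe_load(open("/etc/mongod.conf"))'

which raises a parse error naming the offending line if the indentation is wrong.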
I have a RHEL 7 server that is part of a Mongo cluster. There are three mongo processes that I would like to be started automatically on system boot, one mongod, one arbiter, and one mongos:
/usr/bin/mongod -f /etc/mongo_shard001.conf
/usr/bin/mongod -f /etc/mongoarb.conf
/usr/bin/mongos -f /etc/mongos.conf
I have been trying to create systemd services for these commands, e.g.:
[Unit]
Description=mongo configuration server
After=network.target
[Service]
User=mongod
Group=mongod
ExecStart=/usr/bin/mongod -f /etc/mongoconf.conf
[Install]
WantedBy=multi-user.target
When I try to do sudo systemctl daemon-reload && sudo systemctl start mongoconf, I get this error:
● mongoconf.service - mongo configuration server
Loaded: loaded (/etc/systemd/system/mongoconf.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2017-02-02 14:38:34 AWST; 20s ago
Process: 5114 ExecStart=/usr/bin/mongod -f /etc/mongoconf.conf (code=exited, status=1/FAILURE)
Main PID: 5114 (code=exited, status=1/FAILURE)
Feb 02 14:38:34 mdb1 systemd[1]: Started mongo configuration server.
Feb 02 14:38:34 mdb1 systemd[1]: Starting mongo configuration server...
Feb 02 14:38:34 mdb1 systemd[1]: mongoconf.service: main process exited, code=exited, status=1/FAILURE
Feb 02 14:38:34 mdb1 systemd[1]: Unit mongoconf.service entered failed state.
Feb 02 14:38:34 mdb1 systemd[1]: mongoconf.service failed.
I have also tried using a forking type with a PID file:
[Unit]
Description=mongo configuration server
After=network.target
[Service]
User=mongod
Group=mongod
ExecStart=/usr/bin/mongod -f /etc/mongoconf.conf --pidfilepath /var/lib/mongoconf/pid --fork
Type=forking
PIDFile=/var/run/mongodb/mongoconf/pid
[Install]
WantedBy=multi-user.target
But that gives this error:
● mongoconf.service - mongo configuration server
Loaded: loaded (/etc/systemd/system/mongoconf.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2017-02-02 14:45:36 AWST; 4s ago
Process: 5256 ExecStart=/usr/bin/mongod -f /etc/mongoconf.conf --pidfilepath /var/lib/mongoconf/pid --fork (code=exited, status=1/FAILURE)
Main PID: 5114 (code=exited, status=1/FAILURE)
Feb 02 14:45:36 mdb1 systemd[1]: Starting mongo configuration server...
Feb 02 14:45:36 mdb1 mongod[5256]: about to fork child process, waiting until server is ready for connections.
Feb 02 14:45:36 mdb1 mongod[5256]: forked process: 5258
Feb 02 14:45:36 mdb1 systemd[1]: mongoconf.service: control process exited, code=exited status=1
Feb 02 14:45:36 mdb1 systemd[1]: Failed to start mongo configuration server.
Feb 02 14:45:36 mdb1 systemd[1]: Unit mongoconf.service entered failed state.
Feb 02 14:45:36 mdb1 systemd[1]: mongoconf.service failed.
Starting the mongo config manually works fine and creates the PID file:
/usr/bin/mongod -f /etc/mongoconf.conf --pidfilepath /var/lib/mongoconf/pid --fork
The version of mongod I am using is the one from mongodb.com, and I installed it following their install guide.
db version v3.4.1
git version: 5e103c4f5583e2566a45d740225dc250baacfbd7
OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
allocator: tcmalloc
modules: none
build environment:
distmod: rhel70
distarch: x86_64
target_arch: x86_64
from this repo
[mongodb-org-3.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc
I am wondering if I am going about this the wrong way. Is there a better way to do this?
I know you said RHEL 7, but since this is the only answer coming up on DuckDuckGo for this question, it may be useful. On Ubuntu 15 and up:
sudo systemctl enable mongod.service
Here is my solution: make a bash script with these lines
/usr/bin/mongod -f /etc/mongo_shard001.conf
/usr/bin/mongod -f /etc/mongoarb.conf
/usr/bin/mongos -f /etc/mongos.conf
and then add this line to your crontab
@reboot root cd /foldername && ./scriptname.sh
systemd would be a better solution, if anyone knows how to set it up.
The Mongo documentation is no help.
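For completeness, here is a sketch of a non-forking unit for one of the three processes, based on the unit in the question (paths and names are the question's own; adjust to your layout). One thing worth double-checking in the forking variant above: the --pidfilepath passed to mongod (/var/lib/mongoconf/pid) and the unit's PIDFile= (/var/run/mongodb/mongoconf/pid) point at different files, and with Type=forking they must match. A simpler alternative is to skip forking entirely and let systemd supervise the foreground process:

[Unit]
Description=mongo configuration server
After=network.target

[Service]
User=mongod
Group=mongod
# No --fork and no PIDFile: systemd tracks the foreground process itself.
Type=simple
ExecStart=/usr/bin/mongod -f /etc/mongoconf.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target

Duplicate the unit for the other two processes (one per config file in ExecStart), then systemctl enable each one. When running this way, make sure fork: true is not set in the config files, and check mongod's own log for the reason behind any status=1 exit.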