Can't make Unicorn server start on CentOS boot

I have a Unicorn init script in /etc/init.d,
and I have added a description header to make it CentOS-compatible.
It is registered with chkconfig:
unicorn 0:off 1:off 2:on 3:on 4:on 5:on 6:off
It starts normally if I run /etc/init.d/unicorn start,
but when I reboot the system, it doesn't start.
Any ideas?

Check the Unicorn error log to see what the problem is. It may be that not all of the services Unicorn depends on have started by the time Unicorn is launched at boot, which would explain why it starts fine when started manually but not when started automatically. If that is the problem, it should show up in the error log.
Also, check the runlevel with the runlevel command (or who -r) to make sure the system is actually at the runlevel you think it is.
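If the boot-time failure turns out to be a registration problem rather than a dependency problem, it is also worth double-checking the script's chkconfig header. This is a minimal sketch (the helper and checks are assumptions, not taken from the asker's actual script):

```shell
# Sketch: chkconfig only registers an init script that carries both a
# "# chkconfig:" line and a "# description:" line in its header.
check_initscript() {
    script="$1"
    grep -q '^# chkconfig:' "$script" &&
    grep -q '^# description:' "$script"
}

# After fixing the header, re-register and verify the runlevels:
#   chkconfig --add unicorn
#   chkconfig --list unicorn
```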


Which config files could disable the automatically starting ssh server, so a headless connect becomes impossible?
I need to know which config files might prevent the SSH server from starting normally at boot.
I believe you are looking for the following commands (assuming you are running the latest version of Raspbian):
sudo systemctl stop sshd
sudo systemctl disable sshd
sudo systemctl mask sshd
stop stops the service immediately. disable prevents the service from starting at boot. Additionally, mask makes it impossible to start the service at all.
Digging deeper into what each command does: on modern Linux distributions there is a configuration file for each service, called a unit file. They are usually stored in /usr/lib/systemd/system. These are essentially the evolution of the scripts used to start services.
The stop command asks systemd to shut down the service described by the sshd.service unit file.
The disable (or enable) command removes (or creates) a symlink to the unit file in the directory systemd scans when starting services at boot (usually /etc/systemd/system).
systemctl mask creates a symlink to /dev/null in place of the unit file, so the service can't be loaded at all.
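To make the mask step concrete, this sketch simulates what `systemctl mask` does, in a temporary directory rather than the real /etc/systemd/system (the directory and unit name are placeholders):

```shell
# Masking a unit is just a symlink to /dev/null shadowing the unit file.
simulate_mask() {
    unitdir="$1"; unit="$2"
    ln -sf /dev/null "$unitdir/$unit"   # equivalent of: systemctl mask <unit>
    readlink "$unitdir/$unit"           # prints /dev/null for a masked unit
}
```

On a real system you would instead run `systemctl is-enabled sshd`, which reports `masked` once that symlink exists.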

How to configure telnet service for yocto image

telnet is necessary in this case to maintain compatibility with older software. I'm working with the Yocto Rocko 2.4.2 distribution. When I try to telnet to the board I get the oh-so-detailed message "connection refused".
Using the method here and the options here, I modified the BusyBox configuration as suggested. When the board is booted up and logged in, executing telnet prints its usage info, and a quick directory check shows that telnet is installed at /usr/bin/telnet. My guess is that the telnet client is installed but the telnet server is not running?
I need to get telnetd started manually, at least so I know it will work once an init script is in place. The second reference link suggests that 'telnetd will not be started automatically though...' and that an init script will be needed. How can I start telnetd manually for testing?
systemctl enable telnetd
returns: Unit telnetd.service could not be found
UPDATE
telnetd is located at /usr/sbin/telnetd. I was able to start it manually from there for testing, and after that, telnet login works. I'm looking into writing a systemd unit to auto-start telnetd, so I suppose this issue is closed, unless anyone would like to offer detailed BusyBox telnet configuration and setup steps as an answer to 'How to configure telnet service for yocto image'.
update
Perhaps there is something more? I created a unit file that looks like this:
[Unit]
Description=auto start telnetd
[Service]
ExecStart=/usr/sbin/telnetd
[Install]
WantedBy=multi-user.target
On reboot, systemd indicates the process executed and exited successfully:
systemctl status telnetd
.
.
.
Process: 466 ExecStart=/usr/sbin/telnetd (code=exited, status=0/SUCCESS)
.
.
.
The service is not running, however: netstat -l does not list it and telnet login fails. Am I missing something?
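One likely explanation, assuming the BusyBox telnetd that Yocto ships: by default telnetd forks into the background, so with the unit's implicit Type=simple, systemd sees the initial process exit (hence status=0/SUCCESS) and considers the service finished. A sketch of a corrected unit, keeping telnetd in the foreground with -F (flag name as documented for BusyBox; verify against your build):

```ini
[Unit]
Description=auto start telnetd
After=network.target

[Service]
# -F keeps BusyBox telnetd in the foreground so systemd can track it;
# the alternative is Type=forking with the default daemonizing behavior.
ExecStart=/usr/sbin/telnetd -F

[Install]
WantedBy=multi-user.target
```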
last update...i think
So, following this post, I managed to get the telnet.socket unit to start on reboot.
systemctl status telnet.socket
shows that it is running and listening on port 23. Now, however, when I try to connect with telnet I get
Connection closed by foreign host
Everything I've read so far talks about the xinetd service (which I do not have...). What is confusing is that if I just run /usr/sbin/telnetd directly from /usr/sbin, the server is up and I can telnet into the board, so I don't believe I'm missing any utilities or services (like the above-mentioned xinetd); something is still not being configured correctly. Any ideas?
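Regarding the telnet.socket attempt: with systemd socket activation and Accept=yes, each incoming connection is handed to an instance of a matching template service, and telnetd must then run in inetd mode on the inherited socket. If that template unit is missing or doesn't use inetd mode, the connection is accepted and immediately closed, which matches the symptom. A sketch of the pair (file names and the -i inetd flag are assumptions to verify against your BusyBox build):

```ini
# /etc/systemd/system/telnet.socket
[Unit]
Description=Telnet server socket

[Socket]
ListenStream=23
Accept=yes

[Install]
WantedBy=sockets.target
```

```ini
# /etc/systemd/system/telnetd@.service
[Unit]
Description=Telnet per-connection server

[Service]
# -i runs BusyBox telnetd in inetd mode, serving the single connection
# passed in on stdin/stdout by the socket unit.
ExecStart=-/usr/sbin/telnetd -i
StandardInput=socket
```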

AEM instance is not starting up

The AEM author instance does not come up after the automatic backup process. The backup is scheduled daily at 10:00 PM PST: a shell script (backuprepo.sh) triggers stop.sh to bring the server down, performs the backup, and then triggers start.sh to start the instance again.
The instance is brought down, but sometimes it is not started by the start script and has to be started manually. We have noticed that the crx java process below is not killed when the server is stopped through the stop script.
adobeam6 8454210 1 2 22:27:57 - 187:34 java -server
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xmx4096m -Doak.queryLimitInMemory=500000 -Doak.queryLimitReads=100000 -Dupdate.limit=250000 -Doak.fastQuerySize=true -XX:MaxPermSize=512M -Djava.awt.headless=true -javaagent:/app/AEM6/author/crx-quickstart/app/newrelic/newrelic.jar -Djava.io.tmpdir=/app/AEM6/author/tmp -Duser.timezone=America/Los_Angeles -Dsling.run.modes=author,sit -jar crx-quickstart/app/cq-quickstart-6.0.0-standalone.jar start -c
crx-quickstart -i launchpad -p 4502
-Dsling.properties=conf/sling.properties
So we have to kill the process manually and trigger the start script to start the instance. We are trying to figure out why the server sometimes does not stop properly, and why the above process is not killed even when the server is stopped through the stop script.
We have checked the error logs, which only show that the instance has stopped as the last entry.
Below are the steps executed by the backuprepo.sh script:
1) Trigger the stop.sh script.
2) Wait for the server to stop and the backup process to complete.
3) After the backup process is complete, check whether the server started again automatically.
4) Sometimes the server is not started automatically, which is the issue.
The start.sh and stop.sh scripts are attached:
https://drive.google.com/open?id=0B67j43IdLr8mLXVCWTYxd1NialE
https://drive.google.com/open?id=0B67j43IdLr8mbDNPWndRdE5SWEE
Question:
Are there scenarios in which, after stopping the AEM server, the crx java background process would still be running?
Any clues as to what is preventing the java process from being released when the stop script is triggered?
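The pattern described (the stop script returning while the JVM lingers) is usually handled by making the stop step wait on the PID and escalate to SIGKILL after a timeout. A sketch of such a wrapper; the function is illustrative and not taken from the attached stop.sh:

```shell
# Stop a process gracefully, then force-kill it if it outlives the timeout.
stop_with_timeout() {
    pid="$1"; timeout_secs="$2"
    kill "$pid" 2>/dev/null || return 0   # already gone
    i=0
    while kill -0 "$pid" 2>/dev/null; do
        if [ "$i" -ge "$timeout_secs" ]; then
            kill -9 "$pid" 2>/dev/null    # escalate: SIGKILL
            break
        fi
        i=$((i + 1))
        sleep 1
    done
}
```

backuprepo.sh could call something like this with the crx-quickstart PID before starting the backup, so a wedged JVM can never block the restart.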

restart openerp 7 server on Xubuntu

I am writing a custom module for OpenERP 7 on Xubuntu 12.04, and today, suddenly (after some modifications to the code, I think), restarting the server no longer picks up my module changes.
I restart with this command:
sudo /etc/init.d/openerp-server restart
but the compiled (.pyc) files stay unchanged.
If I delete the module from the addons directory, the module stops working properly and I get a message saying the models are absent; that is normal. But why does restarting change nothing, even if I modify the __init__.py or __openerp__.py files?
As far as I can tell, restarting with this command now does nothing, while yesterday it worked.
So, please, how can I fix this?
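One way to confirm that the server really isn't recompiling the module is to look for .py files whose .pyc is older than the source. A small sketch (the helper name and directory layout are assumptions):

```shell
# List .py files whose compiled .pyc is stale (older than the source),
# i.e. files the interpreter has not recompiled since the last edit.
find_stale_pyc() {
    dir="$1"
    find "$dir" -name '*.py' | while read -r src; do
        pyc="${src}c"
        if [ -f "$pyc" ] && [ "$src" -nt "$pyc" ]; then
            echo "$src"
        fi
    done
}
```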
You need to pass -u modulename on the command line that starts the OpenERP server. So either modify the /etc/init.d/openerp-server script to include it, or just start the server manually while you are developing.
Try
sudo /etc/init.d/openerp-server stop
ps aux | grep openerp
to see if the server really stopped.
Start the server with
sudo /etc/init.d/openerp-server start
Also look in the logs (e.g. /var/log/openerp/openerp-server.log) to see what happens.

Unable to start Sphinx searchd daemon due to already running searchd process, and it restarts just after killing it

When I try to start searchd, it gives the following error.
bind() failed on 0.0.0.0, retrying...
FATAL: bind() failed on 0.0.0.0: Illegal seek
I can find a searchd process running
root 14863 0.1 0.0 73884 3960 ? Ssl 23:21 0:00 /usr/bin/searchd --nodetach
Now, when I kill it or try to stop it (searchd --stop), it instantly restarts.
root 15841 0.5 0.0 73884 3960 ? Ssl 23:33 0:00 /usr/bin/searchd --nodetach
I am guessing there is some setting that automatically restarts it when the process dies. How can I stop this from happening?
By default, the Debian package seems to start Sphinx with an additional keep-alive mechanism. I was able to stop it cleanly with:
sudo service sphinxsearch stop
The 'init: ... main process ended, respawning' message suggests the init configuration sets a watchdog to make sure Sphinx doesn't die.
Perhaps you need to shut down Sphinx via the init script itself:
/etc/init.d/sphinxsearch stop
To my knowledge, Upstart is responsible for respawning searchd after you attempt to stop or kill it.
Since the process is managed by Upstart, you can terminate the daemon with "stop sphinxsearch" and start it again with "start sphinxsearch".
If you want to be able to kill it like any other process, you can remove the "--nodetach" argument from the service's startup configuration. However, by doing this you can no longer stop the process with "stop sphinxsearch".
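To confirm which process is doing the respawning, check searchd's parent PID: if the parent is PID 1 (init/Upstart on these systems), the init system is what keeps restarting it. A sketch (the helper name is mine):

```shell
# Print the parent PID of a given process.
parent_of() {
    ps -o ppid= -p "$1" | tr -d ' '
}

# Example (on the affected machine):
#   parent_of "$(pgrep -o searchd)"   # prints 1 if init/Upstart owns it
```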
No, there is no Sphinx option to restart Sphinx itself.
Probably some monitoring tool like monit is installed and watching Sphinx.