MongoDB: How to auto-restart on crashes?

I'm using Ubuntu 18.04 & MongoDB 3.2.22.
Now, I want to make sure that it'll always be up and running (if it crashes, auto restart).
I searched for a solution and noticed that some people use respawn in a file named /etc/init/mongodb.conf. The thing is, I don't have this file.
Currently when I want to restart I use sudo service mongod restart.
Any idea how to accomplish that?

If you have set it up via apt, then the systemd service file should be at /etc/systemd/system/multi-user.target.wants/mongodb.service.
Under the [Service] section in the file, add Restart=always and run systemctl daemon-reload.
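A minimal sketch of what the edited unit might contain; the unit name and ExecStart path can differ between packages (some ship mongod.service rather than mongodb.service), so check with systemctl status mongodb first:

[Service]
ExecStart=/usr/bin/mongod --config /etc/mongod.conf
Restart=always
RestartSec=5

sudo systemctl daemon-reload
sudo systemctl restart mongodb

RestartSec=5 waits a few seconds between restart attempts so a crash-looping daemon doesn't spin.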

Related

Which config files could disable the automatically starting ssh server, so a headless connect becomes impossible?
I need to know which config files might prevent the ssh server from starting up normally at boot.
I believe that you are looking for the following commands (assuming you are running the latest version of Raspbian):
sudo systemctl stop sshd
sudo systemctl disable sshd
sudo systemctl mask sshd
stop stops the service immediately. disable prevents the service from starting at boot. Additionally, mask makes it impossible to load the service at all.
Digging deeper into what each command does: on modern Linux distributions there are configuration files for each service called unit files. They are usually stored in /usr/lib/systemd/system. These are basically the evolution of init scripts for starting services.
The stop command just calls the sshd.service unit file with a stop parameter, in order to shut down the server.
The disable (or enable) command removes (or creates) a symlink of the unit file in a directory where systemd looks when booting services (usually /etc/systemd/system).
systemctl mask creates a symlink to /dev/null instead of the unit file. That way the service can't be loaded.
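A quick way to see the effect of each state; output details vary by distro, and the unit may be named ssh rather than sshd on Debian/Ubuntu:

systemctl is-enabled sshd
ls -l /etc/systemd/system/sshd.service
sudo systemctl unmask sshd

The first command prints enabled, disabled, or masked; after masking, the ls shows the symlink pointing at /dev/null; unmask undoes a mask if you need the service back.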

How to run systemctl in a pod

I'm getting an access denied error while running the systemctl command in a pod.
Whenever I try to start any service, for example MySQL or a Tomcat server, in a pod, it gives an access denied error.
Is there any way I can run systemctl within a pod?
This is a problem related to Docker, not Kubernetes.
According to the page Run multiple services in a container in docker docs:
It is generally recommended that you separate areas of concern by
using one service per container
However if you really want to use a process manager, you can try supervisord, which allows you to use supervisorctl commands, similar to systemctl. The page above explains how to do that:
Here is an example Dockerfile using this approach, that assumes the
pre-written supervisord.conf, my_first_process, and my_second_process
files all exist in the same directory as your Dockerfile.
FROM ubuntu:latest
# Install supervisor, the process manager that will run both services
RUN apt-get update && apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
# Copy the supervisor configuration and the two programs it manages
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY my_first_process my_first_process
COPY my_second_process my_second_process
# supervisord runs in the foreground as PID 1 and starts both programs
CMD ["/usr/bin/supervisord"]
The systemctl command tries to talk to the systemd daemon, which is not running in a pod by default (it could be, however). Running multiple services is yet another question, about service management. In both cases it could help to use a tool like docker-systemctl-replacement, overwriting /usr/bin/systemctl and registering it as the init CMD of the container.
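A sketch of that approach, assuming you have downloaded the replacement script from the docker-systemctl-replacement project (the local file name systemctl3.py is an assumption; check the project's README for the current layout):

# in your Dockerfile, after installing your services
COPY systemctl3.py /usr/bin/systemctl
CMD ["/usr/bin/systemctl"]

The replacement script reads the same *.service unit files and starts the enabled services, so systemctl start/stop calls inside the container keep working without a real systemd.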

modify haproxy systemd configuration

I'm running Ubuntu 18.04 and I've installed haproxy 1.8.8. I want to modify the config so that the "-f" option will read a directory rather than a single haproxy.cfg file.
I see /lib/systemd/system/haproxy.service and also /etc/init.d/haproxy were installed. I think systemd is managing haproxy. But I've read that I'm not supposed to modify the installed haproxy.service.
I copied haproxy.service to /etc/systemd/system/ and edited it there. The changes I made were not picked up when I ran sudo systemctl daemon-reload; sudo service haproxy restart.
Which file do I need to modify and then get systemd to recognize the changes? TIA
As you suspected, you should not edit the unit files (provided by the OS packager) directly. You can supply a drop-in snippet using the command
systemctl edit haproxy
and customize the relevant directives (ExecStart).
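A sketch of such a drop-in, assuming the stock Ubuntu unit and a hypothetical /etc/haproxy/conf.d/ directory for your config files. systemctl edit opens an override file (typically /etc/systemd/system/haproxy.service.d/override.conf); the empty ExecStart= line is required to clear the packaged value before setting a new one:

[Service]
ExecStart=
ExecStart=/usr/sbin/haproxy -Ws -f /etc/haproxy/conf.d/ -p /run/haproxy.pid

Then run sudo systemctl daemon-reload && sudo systemctl restart haproxy. Unlike copying the whole unit file, a drop-in survives package upgrades.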

Can't run MongoDB daemon in docker container

I'm running a docker container on OSX using boot2docker. It is the latest Ubuntu image with Mongo installed the official way from the mongodb-org package.
I can perfectly run mongod from the command line, but can't run it as a service.
When I'm trying to do sudo service mongod start it returns
Rather than invoking init scripts through /etc/init.d, use the service(8)
utility, e.g. service mongod start
Since the script you are attempting to invoke has been converted to an
Upstart job, you may also use the start(8) utility, e.g. start mongod
I have tried start mongod, which doesn't produce any output. I have tried everything I found on Google, but no luck.
Meanwhile, I have tried installing MySQL using apt-get and I can perfectly run it as a service.
I have also tried installing Mongo from Ubuntu's mongodb package, which is an older version. Again, no problem running it as a service.
I suspect that there is something wrong with the /etc/init.d/mongod script, but don't know exactly what.
Appreciate any help.
The init-related commands on the Docker Ubuntu image are dummied out / not working because Upstart (/sbin/init) is not the first process started on the machine.
In general, any service which initializes using Upstart will not run properly in a Docker container unless you start the container with /sbin/init (you probably have to be using the ubuntu-upstart image, and make a bunch of tweaks to it too.)
If you really needed to do it this way, write a traditional init script for mongo and insert it using update-rc.d. Then, starting it with /sbin/service should work.
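If you do go that route, the registration would look roughly like this; mongod.init here is a hypothetical SysV-style script you would have to write yourself:

sudo cp mongod.init /etc/init.d/mongod
sudo chmod +x /etc/init.d/mongod
sudo update-rc.d mongod defaults
sudo service mongod start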
Why not just have the Docker image run mongod instead of init/shell/etc.? "One process per container", right?
Use a Dockerfile to create your image, and set the CMD to:
CMD ["/usr/bin/mongod", "-f", "/etc/mongod.conf"]

Fabric and init

I am using Fabric to restart Tomcat, and even though it says Tomcat restarted successfully, it does not. So, as per the FAQ, I set pty=False and tried again. But now I get this error:
sudo: /etc/init.d/tomcat restart
out: sudo: sorry, you must have a tty to run sudo
Any ideas around this problem?
To anyone reading this: this is not a problem with Fabric but with the way sudo accounts have been set up. This setting in the /etc/sudoers file controls it:
Defaults requiretty
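If you control the server and want to allow sudo without a tty, the usual fix is to edit sudoers safely with visudo; scoping the exception to a deploy user is an assumption about your setup, so adjust the username:

sudo visudo
# either comment out the global default:
#Defaults requiretty
# or keep it and exempt just the deploy user:
Defaults:deploy !requiretty

Defaults:user !requiretty is standard sudoers syntax for a per-user override, which is safer than disabling requiretty globally.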