MongoDB database symbolic links get removed from the system - mongodb

We set up MongoDB on Amazon Linux 2. We want separate volumes for logs, the journal, and some databases, so we enabled 'directoryPerDB' in the MongoDB configuration.
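For reference, the relevant mongod.conf fragment presumably looks something like this (a sketch; only directoryPerDB is confirmed by the post, and the dbPath is inferred from the mounts below):
storage:
  dbPath: /mongodb/data
  directoryPerDB: true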
Here's the mount setup we used:
sudo mkfs.xfs -L mongodata /dev/sdf
sudo mkfs.xfs -L mongojournal /dev/sdg
sudo mkfs.xfs -L mongolog /dev/sdh
sudo mkfs.xfs -L sampledb /dev/sdi
sudo mkdir /mongodb
sudo mkdir /mongodb/data
sudo mkdir /mongodb/journal
sudo mkdir /mongodb/log
sudo mkdir /mongodb/sampleDb
sudo mount -t xfs /dev/sdf /mongodb/data
sudo mount -t xfs /dev/sdg /mongodb/journal
sudo mount -t xfs /dev/sdh /mongodb/log
sudo mount -t xfs /dev/sdi /mongodb/sampleDb
sudo ln -s /mongodb/journal /mongodb/data/journal
sudo ln -s /mongodb/sampleDb /mongodb/data/sampleDb
sudo chown -R mongod:mongod /mongodb/data
sudo chown mongod:mongod /mongodb/log/
sudo chown mongod:mongod /mongodb/journal/
sudo chown mongod:mongod /mongodb/sampleDb/
We also added the corresponding entries to /etc/fstab.
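The fstab entries weren't shown in the post; a sketch of what they plausibly looked like, using the labels from the mkfs.xfs commands above (the mount options are an assumption):
LABEL=mongodata    /mongodb/data      xfs  defaults,noatime  0 0
LABEL=mongojournal /mongodb/journal   xfs  defaults,noatime  0 0
LABEL=mongolog     /mongodb/log       xfs  defaults,noatime  0 0
LABEL=sampledb     /mongodb/sampleDb  xfs  defaults,noatime  0 0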
It works fine for a while, but after a few hours the database symbolic links are removed from the system automatically, and I don't know how.
Please advise.

Related

How do I uninstall minikube on a Mac?

I have a Mac with Apple Silicon (M1) and I have minikube installed. The installation was done following https://medium.com/@seohee.sophie.kwon/how-to-run-a-minikube-on-apple-silicon-m1-8373c248d669 by executing:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-arm64
sudo install minikube-darwin-arm64 /usr/local/bin/minikube
How do I remove minikube?
Have you tried following any online material to delete Minikube? Test whether this works for you, and let me know if you face any issues.
Try using the commands below:
minikube stop; minikube delete &&
docker stop $(docker ps -aq) &&
rm -rf ~/.kube ~/.minikube &&
sudo rm -rf /usr/local/bin/localkube /usr/local/bin/minikube &&
launchctl stop '*kubelet*.mount' &&
launchctl stop localkube.service &&
launchctl disable localkube.service &&
sudo rm -rf /etc/kubernetes/ &&
docker system prune -af --volumes
Reference used: Delete minikube on Mac
minikube stop; minikube delete
docker stop $(docker ps -aq)
rm -r ~/.kube ~/.minikube
sudo rm /usr/local/bin/localkube /usr/local/bin/minikube
systemctl stop '*kubelet*.mount'
sudo rm -rf /etc/kubernetes/
docker system prune -af --volumes
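To confirm the cleanup worked, a quick check (a generic sketch, not part of the referenced answer):
command -v minikube           # prints nothing once the binary is gone
ls -d ~/.minikube ~/.kube     # should report "No such file or directory"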

Go server exits with exit code 1

I am trying to run the GoCD Docker image using Docker Compose, and I want to replace the existing cruise-config file with a new one.
I am trying to replace the existing cruise-config.xml with a new file by copying it in the Dockerfile. I can build with docker-compose without any errors, but when I run the docker-compose file, the GoCD server container starts and then exits after a few seconds with exit code 1.
File: docker-compose.yml
version: '2'
services:
  go-server:
    build:
      context: go_server
      dockerfile: Dockerfile
    ports:
      - '8153:8153'
      - '8154:8154'
    volumes:
      - ./go_server/server_home/config:/go-working-dir/config
File: go_server/Dockerfile
FROM gocd/gocd-server:v17.8.0
RUN mkdir -p /go-working-dir/config
RUN chmod 777 -R /go-working-dir/config
COPY ./server_home/config/cruise-config.xml /go-working-dir/config/cruise-config.xml
COPY ./server_home/config/cruise-config.xml /go-working-dir/config/cruise-config.xml2
RUN chown -R go:go /go-working-dir /godata
EXPOSE 8153 8154
Am I missing something here? The log file of the container shows no errors.
Logs:
dailybuild@DockerHost:~$ sudo docker logs 5af1aefa0c9d
/docker-entrypoint.sh: Creating directories and symlinks to hold GoCD configuration, data, and logs
$ mkdir -v /godata/artifacts
$ chown go:go /godata/artifacts
$ ln -sv /godata/artifacts /go-working-dir/artifacts
$ chown go:go /go-working-dir/artifacts
$ mkdir -v /godata/config
$ chown go:go /godata/config
$ mkdir -v /godata/db
$ chown go:go /godata/db
$ ln -sv /godata/db /go-working-dir/db
$ chown go:go /go-working-dir/db
created directory: '/godata/artifacts'
'/go-working-dir/artifacts' -> '/godata/artifacts'
created directory: '/godata/config'
created directory: '/godata/db'
'/go-working-dir/db' -> '/godata/db'
created directory: '/godata/logs'
'/go-working-dir/logs' -> '/godata/logs'
created directory: '/godata/plugins'
$ mkdir -v /godata/logs
$ chown go:go /godata/logs
$ ln -sv /godata/logs /go-working-dir/logs
$ chown go:go /go-working-dir/logs
$ mkdir -v /godata/plugins
'/go-working-dir/plugins' -> '/godata/plugins'
created directory: '/godata/addons'
'/go-working-dir/addons' -> '/godata/addons'
$ chown go:go /godata/plugins
$ ln -sv /godata/plugins /go-working-dir/plugins
$ chown go:go /go-working-dir/plugins
$ mkdir -v /godata/addons
$ chown go:go /godata/addons
$ ln -sv /godata/addons /go-working-dir/addons
$ chown go:go /go-working-dir/addons
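To dig into why the container exited, one option (a generic sketch, not from the original post; the container ID is the one used above):
docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' 5af1aefa0c9d
docker logs --tail 50 5af1aefa0c9d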

How to create a Docker image for PostGIS that will enable extensions at build time or before the container is fully running?

What I mean is that I want to create a Docker image for PostGIS that is completely usable right after build, so that if a user runs
docker run -e POSTGRES_USER=user somepostgis
the user database would be created and the extensions already installed.
The official postgres image can't be used for that, AFAIK.
Basically I need to write a script and set it as the entrypoint. This script should create the database and the extensions with the postgres server running on a different port, and then restart it on port 5432.
But I don't know sh and Docker well enough to do that. Right now it's complaining that there is no pg_ctl command.
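A plausible cause of the missing pg_ctl (an educated guess, not confirmed in the post): on Debian/Ubuntu the PostgreSQL binaries live in a versioned directory that is not on PATH, so the entrypoint would need something like:
export PATH="/usr/lib/postgresql/9.4/bin:$PATH"   # version matches the postgresql-9.4-postgis-2.1 package below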
If you want to help, you can fork:
FROM ubuntu:15.04
#ENV RELEASE_NAME lsb_release -sc
#RUN apt-get update && apt-get install wget
#RUN echo "deb http://apt.postgresql.org/pub/repos/apt ${RELEASE_NAME}-pgdg main" >> /etc/apt/sources.list
#RUN cat /etc/apt/sources.list
#RUN wget --quiet -O - http://apt.postgresql.org/pub/repos/apt/ACCC4CF8.asc | sudo apt-key add -
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
postgresql-9.4-postgis-2.1 \
curl \
&& curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture)" \
&& curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture).asc" \
&& gpg --verify /usr/local/bin/gosu.asc \
&& rm /usr/local/bin/gosu.asc \
&& chmod +x /usr/local/bin/gosu \
&& apt-get purge -y --auto-remove curl
RUN mkdir /docker-entrypoint-initdb.d
COPY docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
RUN chmod +x /docker-entrypoint.sh
RUN ls -l /docker-entrypoint.sh
EXPOSE 5432
CMD ["postgres"]
So I'm trying to do something like the following, but it doesn't work.
#!/bin/bash
# Default POSTGRES_DB to POSTGRES_USER when unset; the leading ':' makes the
# expansion a no-op command instead of trying to execute its value.
: "${POSTGRES_DB:=$POSTGRES_USER}"
# Start postgres on a side port; the option flag is -o (the original had a typo, -0).
gosu postgres pg_ctl start -w -D "${PGDATA}" -o "-p 5433"
gosu postgres createuser -p 5433 -s "${POSTGRES_USER}"
gosu postgres createdb -p 5433 -E UTF8 -O "${POSTGRES_USER}" "${POSTGRES_DB}"
gosu postgres psql -p 5433 -d "${POSTGRES_DB}" -c "create extension if not exists postgis;"
gosu postgres psql -p 5433 -d "${POSTGRES_DB}" -c "create extension if not exists postgis_topology;"
# Restart on the default port, as the postgres user and with the data dir specified.
gosu postgres pg_ctl restart -w -D "${PGDATA}" -o "-p 5432"
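An alternative that avoids driving the server lifecycle by hand (a sketch, assuming the custom docker-entrypoint.sh implements the same /docker-entrypoint-initdb.d hook as the official postgres image, which runs *.sql and *.sh files on first start; the Dockerfile above already creates that directory): drop the extension setup in as an init script.
# In the Dockerfile:
COPY init-postgis.sql /docker-entrypoint-initdb.d/
-- init-postgis.sql:
CREATE EXTENSION IF NOT EXISTS postgis;
CREATE EXTENSION IF NOT EXISTS postgis_topology;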

mongodb 3.0.3 Ubuntu 14.04.2 AWS m3.medium Upstart PID mismatch

Is it possible to run mongod via upstart and keep track of the PID via start-stop-daemon or otherwise?
After following these instructions on the MongoDB docs page for Ubuntu installation:
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
echo "deb http://repo.mongodb.org/apt/ubuntu "$(lsb_release -sc)"/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list
sudo apt-get update
sudo apt-get install -y mongodb-org
All remnants of the Ubuntu package had been removed prior to this.
I now have a running instance of mongod on boot, via Upstart. But for some reason Upstart and initctl do not know about it. It starts up fine, but initctl thinks it is in the stop/waiting state.
To wit:
My /etc/mongod.conf.yml:
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
  logAppend: true
  logRotate: rename
  component:
    accessControl:
      verbosity: 2
net:
  bindIp: 127.0.0.1
  port: 27017
processManagement:
  fork: true
setParameter:
  enableLocalhostAuthBypass: false
security:
  authorization: disabled
My /etc/init/mongod.conf upstart script (renamed mongodb.pid to mongod.pid):
# Ubuntu upstart file at /etc/init/mongod.conf
# Recommended ulimit values for mongod or mongos
# See http://docs.mongodb.org/manual/reference/ulimit/#recommended-settings
#
limit fsize unlimited unlimited
limit cpu unlimited unlimited
limit as unlimited unlimited
limit nofile 64000 64000
limit rss unlimited unlimited
limit nproc 32000 32000

kill timeout 300 # wait 300s between SIGTERM and SIGKILL.

pre-start script
  DAEMONUSER=${DAEMONUSER:-mongodb}
  # Note: the tests originally checked /var/lib/mongod and /var/log/mongod,
  # which never match the directories being created; fixed to mongodb.
  if [ ! -d /var/lib/mongodb ]; then
    mkdir -p /var/lib/mongodb && chown mongodb:mongodb /var/lib/mongodb
  fi
  if [ ! -d /var/log/mongodb ]; then
    mkdir -p /var/log/mongodb && chown mongodb:mongodb /var/log/mongodb
  fi
  touch /var/run/mongod.pid
  chown $DAEMONUSER /var/run/mongod.pid
  if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
  fi
  if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
    echo never > /sys/kernel/mm/transparent_hugepage/defrag
  fi
end script

start on runlevel [2345]
stop on runlevel [06]

script
  ENABLE_MONGOD="yes"
  CONF=/etc/mongod.conf.yml
  DAEMON=/usr/bin/mongod
  DAEMONUSER=${DAEMONUSER:-mongodb}
  if [ -f /etc/default/mongod ]; then . /etc/default/mongod; fi
  # Handle NUMA access to CPUs (SERVER-3574)
  # This verifies the existence of numactl as well as testing that the command works
  NUMACTL_ARGS="--interleave=all"
  if which numactl >/dev/null 2>/dev/null && numactl $NUMACTL_ARGS ls / >/dev/null 2>/dev/null
  then
    NUMACTL="$(which numactl) -- $NUMACTL_ARGS"
    DAEMON_OPTS=${DAEMON_OPTS:-"--config $CONF"}
  else
    NUMACTL=""
    DAEMON_OPTS="-- "${DAEMON_OPTS:-"--config $CONF"}
  fi
  if [ "x$ENABLE_MONGOD" = "xyes" ]
  then
    exec start-stop-daemon --start \
      --chuid $DAEMONUSER \
      --pidfile /var/run/mongod.pid \
      --make-pidfile \
      --exec $NUMACTL $DAEMON $DAEMON_OPTS
  fi
end script
After a reboot I see this:
$ ps aux | grep mongo
mongodb 1085 0.2 1.1 363764 46704 ? Sl 11:57 0:06 /usr/bin/mongod --config /etc/mongod.conf.yml
And everything appears to be fine. But the mongod.pid file does not store the same PID as the process:
$ cat /var/run/mongod.pid
985
Should be 1085.
What is the best way to fix this so Upstart has access to the actual PID?
UPDATE: I tried adding 'expect daemon' and 'expect fork', with some change in behavior: initctl now sees a PID and reports that mongod is running, but it has the wrong PID. This means any subsequent command like sudo stop mongod or sudo start mongod will hang. Neither 'expect fork' nor 'expect daemon' seems to fix this; what am I missing?
OK, so a bit of egg on my face - I was overlooking the fact that my shiny new /etc/mongod.conf.yml contained processManagement.fork: true. Setting this to false allows start-stop-daemon to capture the appropriate PID.
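In config terms the fix is just:
processManagement:
  fork: false
With fork disabled, mongod stays in the foreground, so the PID that start-stop-daemon records with --make-pidfile is the PID that actually keeps running.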

Docker doesn't start MongoDB, and the IP address doesn't appear, when started with other services

I have already asked this question on serverfault.com. I am asking it here too, as I see a different set of questions on these two sites (it appears they have different databases).
I have been trying to build an OS image from Fedora unsuccessfully to start the following:
Systemd
SSHD
RabbitMQ
MongoDB
I can get the first three (systemd, SSHD, and RabbitMQ-Server) to work. I can also get MongoDB to work within the container. However, I cannot get MongoDB to work along with the other three services.
In addition, the IP address doesn't show up when I try to "dockerize" MongoDB.
Am I missing something in the Dockerfile?
Here is my dockerfile:
FROM fedora:20
MAINTAINER "Ashfaque" <ashfaque@email.com>
ENV container docker
RUN yum -y update; yum clean all
RUN yum -y install systemd; yum clean all; \
(cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
# Dockerizing SSH - is working
RUN yum -y install openssh-server
RUN yum -y install openssh-clients
RUN mkdir /var/run/sshd
RUN systemctl enable sshd.service
RUN echo 'root:mypassword' |chpasswd
EXPOSE 22
# Dockerizing RabbitMQ - is working
RUN yum -y install rabbitmq-server
EXPOSE 5672 15672
RUN systemctl enable rabbitmq-server
# Dockerizing MongoDB - is NOT WORKING
RUN yum -y install mongodb-server
RUN yum -y install boost
RUN yum -y install scons
# Create the MongoDB data directory
RUN mkdir -p /data/db /var/log/mongodb /var/run/mongodb
RUN sed -i 's/dbpath =\/var\/lib\/mongodb/dbpath =\/data\/db/' /etc/mongodb.conf
# Expose port 27017 from the container to the host
EXPOSE 27017
# Set usr/bin/mongod as the dockerized entry-point application
ENTRYPOINT ["/usr/bin/mongod"]
#CMD ["--port", "27017", "--dbpath", "/data/db", "--smallfiles", "--fork", "--syslog"]
#RUN /usr/bin/mongod --smallfiles --port 27017 --dbpath /data/db --fork --syslog
VOLUME ["/sys/fs/cgroup", "/data/db", "/var/log/mongodb", "/usr/bin"]
CMD ["/usr/sbin/init"]
The Docker commands used to build and run are:
(1) docker build -t rabbitmq_mongo_heisenbug .
(2) docker run --privileged -d -e 'container=docker' -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 29022:22 -p 29672:15672 -p 29017:27017 rabbitmq_mongo_heisenbug
or (3) docker run --privileged -ti -e 'container=docker' -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 29022:22 -p 29672:15672 -p 29017:27017 rabbitmq_mongo_heisenbug
You are using both ENTRYPOINT and CMD in your Dockerfile. This means that Docker will run /usr/bin/mongod with the default parameter /usr/sbin/init. I'm pretty sure this is not what you want.
Docker will run as long as the command you specified is running. I'm not sure about /usr/bin/mongod, but if it runs in daemon mode (that is, spawn a process and return), then the container will stop running right away. The spawned processes will be terminated. The same is true for /usr/sbin/init or for any other command you specify. You can write a small shell script, which spawns the processes and runs one in the foreground, or you can use runit, or some similar tool.
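A minimal sketch of such a wrapper (illustrative only; the service names and paths are assumptions, and the script would be set as the image's CMD):
#!/bin/sh
# Hypothetical wrapper: background the daemons, keep one process in the foreground.
/usr/sbin/sshd                            # sshd daemonizes itself
rabbitmq-server -detached                 # run RabbitMQ in the background
exec /usr/bin/mongod --dbpath /data/db    # exec makes mongod the container's main process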
Also, you probably don't need to run sshd in your container. See here why.