I've got a Docker container in which I'm installing MongoDB. After installing, I'm trying to start mongod and restore a MongoDB dump. However, when I start the Docker instance, I see that the user has been switched to root (as per the supervisor instruction), but mongo is not started.
This is the supervisor snippet:
[supervisord]
nodaemon=true
[program:mongodb]
user=root
command=/usr/bin/mongod
This is my setup in the dockerfile:
RUN apt-get update && sudo apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
COPY supervisor.conf /etc/supervisor/conf.d/supervisor.conf
# Install MongoDB.
RUN \
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10 && \
echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/mongodb.list && \
apt-get update && \
apt-get install -y mongodb-org && \
rm -rf /var/lib/apt/lists/*
# Define mountable directories.
VOLUME ["/data/db"]
# Define working directory.
WORKDIR /data
# Define default command.
CMD ["mongod"]
EXPOSE 27017
EXPOSE 28017
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]
Am I missing any configuration setting? Any help is appreciated.
You're not able to run MongoDB because some of its packages fail authentication during the install, so the install step never completes. Replace apt-get install -y mongodb-org with apt-get install -y --allow-unauthenticated mongodb-org and you'll be able to install MongoDB without any problem.
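For example, the install block in the Dockerfile above would then become (a sketch; only the flag changes, everything else stays as in the question):
RUN \
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10 && \
echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/mongodb.list && \
apt-get update && \
# only --allow-unauthenticated is added here
apt-get install -y --allow-unauthenticated mongodb-org && \
rm -rf /var/lib/apt/lists/*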
I created my own Docker container that includes the latest version of Ubuntu, Python 3.7, and MongoDB.
Dockerfile
FROM ubuntu:latest
MAINTAINER Docker
# Update apt-get sources AND install MongoDB
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y software-properties-common
RUN apt install -y gnupg2
RUN gpg2 --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys F3B1AA8B
# Installation:
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt-get install -y python3.7
#Mongodb
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6
RUN apt-add-repository 'deb [ arch=amd64,arm64 ] http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse'
RUN apt-get update
RUN apt-get install -y mongodb-org
# Create the MongoDB data directory
RUN mkdir -p /data/db
# Create the code directory
RUN mkdir -p /data/code
RUN mongod --version
RUN mongod --dbpath /data/db --fork --logpath /data/db/log
# COPY some Code to Container
COPY dev /data/code
# Installing pip for python modules
RUN apt-get install -y python3-pip
# Install modules
WORKDIR /data/code/
RUN pip3 install -r requirements.txt
RUN service mongodb start
RUN python3 main.py
RUN python3 server.py
EXPOSE 80
# Set /bin/bash as the dockerized entry-point application
ENTRYPOINT ["/bin/bash"]
when I run the build command:
docker build -t myContainer --no-cache .
it runs successfully up to the point where MongoDB should start as a service:
.
.
.
Removing intermediate container 3d43e1d1cd96
---> 62f10ce67e07
Step 21/25 : RUN service mongodb start
---> Running in 42e08e7d7638
mongodb: unrecognized service
How do I start the service? I'm trying to start the service with the command: service mongodb start. Isn't that correct? And what does the line:
Removing intermediate container 3d43e1d1cd96
mean?
Firstly, it should be service mongod start, I guess. But that is not going to solve your problem.
When using Docker, your main process has to run in the foreground; service mongod start forks into the background, so your container would exit immediately. (Note also that RUN executes at image build time, not when the container starts, so starting a service in a RUN instruction doesn't help at run time.)
You should run mongod as a foreground process instead, as below:
CMD ["mongod"]
Put the above CMD at the end of the Dockerfile to make sure your container runs mongod.
Official Dockerfile -
https://github.com/docker-library/mongo/blob/40056ae591c1caca88ffbec2a426e4da07e02d57/3.4/Dockerfile
If you want to run multiple processes, use docker ENTRYPOINT in conjunction with supervisord or use a wrapper script.
Ref - https://docs.docker.com/config/containers/multi-service_container/
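For the wrapper-script route, a minimal sketch could look like the following (wrapper.sh is an assumed name; server.py is taken from the question and assumed to be the long-running process):
#!/bin/bash
# wrapper.sh: start mongod in the background, then run the app in the foreground
# (server.py from the question is assumed to be the main process)
mongod --dbpath /data/db --fork --logpath /data/db/log
exec python3 /data/code/server.py
and in the Dockerfile (in place of the final ENTRYPOINT ["/bin/bash"]):
COPY wrapper.sh /wrapper.sh
RUN chmod +x /wrapper.sh
CMD ["/wrapper.sh"]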
I have two images that use a two-stage build to build Scala code and copy the artifacts to a final image. To speed up the build, I copy my local ~/.ivy2 to the context directory and from there into the images (~1GB). Unfortunately this means that even when nothing has changed and the images don't need to be rebuilt, docker-compose build (or docker build) spends quite a while copying the Docker context. This happens twice, of course, once for each image.
Is there any cleverer way to do this?
Dockerfile:
FROM openjdk:8
RUN apt-get update &&\
apt-get install -y apt-transport-https gnupg2 &&\
echo "deb https://dl.bintray.com/sbt/debian /" | tee -a /etc/apt/sources.list.d/sbt.list &&\
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 642AC823 &&\
apt-get update &&\
apt-get install -y sbt=1.1.6
COPY ivy-cache/ /root/.ivy2
COPY app/source/ /app/source
RUN cd /app/source &&\
sbt assembly &&\
cp target/scala-2.11/my-app-*.jar /app/my-app.jar
FROM gettyimages/spark:2.3.1-hadoop-3.0
COPY --from=0 /app/my-app.jar /app/my-app.jar
CMD ["spark-submit", "--master", "local", "/app/my-app.jar"]
With 18.09, Docker includes BuildKit. By itself, BuildKit will cache the previous context and only send over the differences, with the equivalent of rsync in the background.
For this specific case, you can use some experimental features to mount in your dependency cache as the equivalent of a named volume using the RUN --mount syntax. The cache directory never makes it into the image, but is there for later builds, and when you pull in a new dependency it will behave just like a local build, downloading only the new dependencies.
# syntax=docker/dockerfile:experimental
FROM openjdk:8 as build
RUN apt-get update &&\
apt-get install -y apt-transport-https gnupg2 &&\
echo "deb https://dl.bintray.com/sbt/debian /" | tee -a /etc/apt/sources.list.d/sbt.list &&\
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 642AC823 &&\
apt-get update &&\
apt-get install -y sbt=1.1.6
COPY app/source/ /app/source
RUN --mount=type=cache,target=/root/.ivy2 \
cd /app/source &&\
sbt assembly &&\
cp target/scala-2.11/my-app-*.jar /app/my-app.jar
FROM gettyimages/spark:2.3.1-hadoop-3.0 as release
COPY --from=build /app/my-app.jar /app/my-app.jar
CMD ["spark-submit", "--master", "local", "/app/my-app.jar"]
To use BuildKit under 18.09, you can either export an environment variable:
export DOCKER_BUILDKIT=1
or update the engine with the new default in /etc/docker/daemon.json:
{ "features": {"buildkit": true} }
So I'm having this problem where for some reason I can't install any package on my ubuntu system.
I'm currently on Ubuntu 16.10.
[screenshot: terminal install logs]
Update:
I've entered those commands and got this:
[screenshot: output after apt-get update and apt-cache search]
What should I do now?
sudo apt-get install wget ca-certificates
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" >> /etc/apt/sources.list.d/pgdg.list'
sudo apt-get update
sudo apt-get install postgresql postgresql-contrib
After installing the PostgreSQL database server, it creates by default a database user ‘postgres’ with the role ‘postgres’, and also a system account with the same name. So to connect to the Postgres server, log in to your system as the user postgres and then connect to a database:
sudo su - postgres
psql
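From that same postgres shell you can also, for example, create a role and a database for your application before starting psql (a sketch; the names are purely illustrative):
# illustrative role and database names
createuser --pwprompt myapp
createdb -O myapp myapp_db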
First do
sudo apt-get update
You should get no errors upon updating. In case you do, then you might have issues with your firewall, or something blocking you from updating repositories. Check the output carefully.
And then search for the correct (exact!) package name using this command:
apt-cache search postgresql
As a last resort you could add an external 3rd-party repository as described in this answer (a sketch of the commands is given below). Just remember to use your distribution's codename instead of "xenial".
It should work.
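For reference, adding the PostgreSQL (PGDG) repository by hand looks roughly like this, mirroring the earlier answer; lsb_release -cs prints your release codename, though not every non-LTS release is necessarily packaged:
# mirror of the commands in the earlier answer, with your own codename substituted
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" >> /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update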
$ sudo apt-get install postgresql postgresql-client
If you are getting (E: Unable to locate package postgresql-12) while migrating, the following steps may help you:
sudo apt-get -y install bash-completion wget
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get -y install postgresql-12 postgresql-client-12
sudo systemctl status postgresql
Ref: install PostgreSQL 12 on Ubuntu 18.04
The following commands worked for me:
sudo apt-get install wget ca-certificates
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" >> /etc/apt/sources.list.d/pgdg.list'
sudo apt-get update
sudo apt install postgresql-11 libpq-dev
I have a data file which needs to be added to MongoDB in Docker. I have a Dockerfile, but I do not know how to copy my data file from the local machine into the MongoDB Docker image.
My Docker file :
FROM dockerfile/ubuntu
RUN \
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10 && \
echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' > /etc/apt/sources.list.d/mongodb.list && \
apt-get update && \
apt-get install -y mongodb-org && \
rm -rf /var/lib/apt/lists/*
VOLUME ["/data/db"]
WORKDIR /data
CMD ["mongod"]
EXPOSE 27017
EXPOSE 28017
How can I add data files from my local machine to MongoDB using the above Dockerfile?
Docker has a COPY command. First, make sure your data file is in the same directory as the Dockerfile (it must be inside the build context). Then simply add the line below to your Dockerfile.
COPY your-data-file /pathInContainer
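For a MongoDB dump specifically, a minimal sketch might be (assuming the data is a mongodump output directory named dump/ next to the Dockerfile; the paths and container name are illustrative):
# dump/ is assumed to be mongodump output
COPY dump/ /data/dump/
and then, once the container is up and mongod is running, restore it from the host:
docker exec my-mongo mongorestore /data/dump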
This is my Dockerfile for installing Postgres.
# Set the base image to Ubuntu
FROM ubuntu:14.04
# Update the repository sources list
RUN apt-get update -y
################## BEGIN INSTALLATION ######################
# Install wget
RUN apt-get install wget -y
# Setup Postgres repository
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
# Add Postgres repository
RUN sh -c "echo "deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main" >> /etc/apt/sources.list.d/postgresql.list"
# Update repository
RUN apt-get update -y
# Install Postgres with Postgis
RUN apt-get install postgresql-9.3-postgis-2.1 -y
How can I add an entrypoint for Postgres so that Postgres is automatically started in a Docker container?
My solution to start Postgres automatically:
RUN chmod +x /etc/init.d/postgresql
CMD service postgresql start && tail -F /var/lib/postgresql/data/serverlog
You can take ideas from the official docker-library/postgres Dockerfile:
ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 5432
CMD ["postgres"]
They use a docker-entrypoint.sh script which will, at the end, launch postgres:
exec gosu postgres "$@"
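A heavily simplified sketch of what such an entrypoint script does, just to show the pattern (the real docker-entrypoint.sh also initialises the data directory, handles configuration, and runs init scripts; this is not the official script):
#!/bin/bash
# simplified illustration only; not the official postgres entrypoint
set -e
if [ "$1" = "postgres" ]; then
    # make sure the data directory belongs to the postgres user, then drop privileges
    chown -R postgres /var/lib/postgresql/data
    exec gosu postgres "$@"
fi
exec "$@"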