I am running RethinkDB alongside a .NET Core app using docker-compose.
Is there any way I can set up two tables and some secondary indexes for RethinkDB?
Can RethinkDB be configured (creating a database and a table) directly with a bash command?
docker-compose.yml:
version: "3.3"
services:
rethink:
restart: always
image: rethinkdb:2.3.6
container_name: rethink0
ports: //i want to create a db,a table and a secondary index after set up
- 8080:8080
networks:
- ret-net
mp:
build: ./mpserver
image: mp
restart: always
container_name: mp0
depends_on:
- rethink
ports:
- 8203:8202
networks:
- ret-net
networks:
ret-net:
Your best option is to set up the Python driver and then run the setup commands as a script:
sudo pip install rethinkdb
import rethinkdb as r

# open a connection and make it the default (repl) connection
r.connect('localhost', 28015).repl()
r.db_create('test').run()
r.db('test').table_create('myTable').run()
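The question also asks about secondary indexes; with the same driver that would be index_create (the field name myField is an illustrative placeholder):

r.db('test').table('myTable').index_create('myField').run()
r.db('test').table('myTable').index_wait('myField').run()  # wait until the index is ready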
You can also consider building a Docker image that includes this driver; I think the official image does not include it.
I cannot tell you confidently how to build a Docker image like this, but based on this description it should be something like:
FROM library/rethinkdb
RUN apt-get update && \
    apt-get install -y python-pip && \
    pip install rethinkdb
...and then you can execute the creation commands from inside the Docker container:
docker exec -it <container name> <command>
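For example, assuming the Python snippet above was saved as setup.py and copied into the image (the script name is an assumption):

docker exec -it rethink0 python setup.py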
I am trying to containerise a Python Flask application which uses MongoDB as its database.
The error that I am getting is the same whether I run the project's Dockerfile or the docker-compose file. It works fine when I run it locally on my machine.
My Dockerfile:
FROM python:3
COPY requirements.txt ./
WORKDIR /
RUN apt update -y
RUN apt install build-essential libdbus-glib-1-dev libgirepository1.0-dev -y
RUN apt-get install python-dev -y
RUN apt-get install libcups2-dev -y
RUN apt install libgirepository1.0-dev -y
RUN pip install pycups
RUN pip install cmake
RUN pip install dbus-python
RUN pip install reportlab
RUN pip install PyGObject
RUN pip install -r requirements.txt
COPY . .
CMD ["python3","main.py"]
My docker-compose.yml:
version: '2.0'
networks:
  app-tier:
    driver: bridge
services:
  myapp:
    image: 'chatapp'
    networks:
      - app-tier
    links:
      - mongodb
    ports:
      - 8000:8000
    depends_on:
      - mongodb
  mongodb:
    image: 'mongo'
    networks:
      - app-tier
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - 27018:27017
I tried linking the two containers via --link, but I am unable to figure out what the actual problem is.
Your app is trying to reach MongoDB at localhost:27017.
For your app, localhost is the container the app itself is running in.
To access the MongoDB container you must use the service name from your docker-compose.yml.
In your case: mongodb.
So the connection to the db should be: mongodb:27017.
To access MongoDB directly from your host machine you use localhost:27018. In this case localhost refers to your host system (your PC).
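For example, if the Flask app uses PyMongo, the connection change would look something like this sketch (the database name is illustrative):

from pymongo import MongoClient

# "mongodb" is the compose service name; inside the compose network it
# resolves to the Mongo container, so connect to mongodb:27017, not localhost
client = MongoClient("mongodb://mongodb:27017/")
db = client["chatapp"]  # hypothetical database name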
Your docker-compose file is also a bit outdated. You can update it like so:
version: '3.9'
networks:
  app-tier:
    driver: bridge
services:
  myapp:  # this is a service name
    image: 'chatapp'
    networks:
      - app-tier
    ports:
      - 8000:8000
    depends_on:
      - mongodb
  mongodb:  # this is the service name to connect to from the app container
    image: 'mongo'
    networks:
      - app-tier
    ports:
      - 27018:27017
You can also remove ALLOW_EMPTY_PASSWORD; the official mongo image does not use that variable.
I am trying to run PostgreSQL in docker-compose, but I can't create my custom database. For example:
Dockerfile
FROM postgres:14.3
#pg_amqp
WORKDIR /code/pg_amqp-master
COPY ./conf/postgres/pg_amqp-master .
RUN apt update && apt install -y make
RUN apt install postgresql-server-dev-14 -y
RUN make && make install
docker-compose.yml:
version: '3.8'
services:
  db:
    container_name: postgres
    build:
      context: .
      dockerfile: conf/postgres/Dockerfile
    volumes:
      - ./conf/postgres/scripts:/docker-entrypoint-initdb.d
      - ./conf/postgres/postgresql.conf:/etc/postgresql/postgresql.conf
      - postgres_data:/var/lib/postgresql/data/
    command: postgres -c config_file=/etc/postgresql/postgresql.conf
    environment:
      - POSTGRES_DB=${DB_NAME}
      - PGUSER=${POSTGRES_USER}
      - PGPASSWORD=${POSTGRES_PASSWORD}
    ports:
      - '5432:5432'
    env_file:
      - ./.env
/scripts/create_extension.sql
create extension if not exists pg_stat_statements;
create extension if not exists amqp;
.env
DB_NAME=mydb
POSTGRES_USER=myuser
POSTGRES_PASSWORD=mypass
When I run docker-compose up -d --build, the build completes, but I only get the single default database, postgres, and all the 'create extension' statements run in that default database. Where is my mistake?
The Postgres environment variables and initialization scripts are only used if Postgres doesn't find an existing database on startup.
Delete your existing database by deleting the postgres_data volume and then start the Postgres container. Then it'll see an empty volume and will create a database for you using your variables and your scripts.
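For example (the exact volume name depends on your compose project name; docker volume ls will show it):

docker-compose down                        # stop and remove the containers
docker volume rm <project>_postgres_data   # or simply: docker-compose down -v
docker-compose up -d --build               # init scripts and env vars now apply to the empty volume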
I have a Scrapy application which I'm trying to containerize. Basically, this is my docker-compose.yml file:
version: '3'
services:
  scrapper:
    container_name: scrapper
    build: .
    ports:
      - 80:80
    depends_on:
      - db
    links:
      - db
  db:
    volumes:
      - ./scrapper/sql:/docker-entrypoint-initdb.d
    image: postgres
    container_name: postgres
    restart: always
    ports:
      - 5432:5432
And this is my Dockerfile:
FROM python:3
WORKDIR /usr/app
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY . .
But when I try to execute my application using the following command: docker run -it scrapper_scrapper scrapy crawl angeloni, I'm receiving this message:
File "/usr/local/lib/python3.7/site-packages/scrapy/crawler.py", line 88, in crawl
yield self.engine.open_spider(self.spider, start_requests)
psycopg2.OperationalError: could not translate host name "db" to address: Name or service not known
Why is this happening? When I execute the docker-compose ps command, it shows:
Name       Command                         State    Ports
--------------------------------------------------------------------------
postgres   docker-entrypoint.sh postgres   Up       0.0.0.0:5432->5432/tcp
scrapper   python3                         Exit 0
When you run docker-compose up to start db, that container runs on its own network, which docker-compose also creates. As such, a plain docker run ... will not be able to connect to that instance, since it is not running on the same network. But you can specify the network with:
docker run --network $network_name
To list the available Docker networks, run:
docker network ls
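For example, assuming the compose project directory is named scrapper (check docker network ls for the real network name):

docker run -it --network scrapper_default scrapper_scrapper scrapy crawl angeloni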
I think you have to explicitly define a user network and put your containers on it:
https://docs.docker.com/network/bridge/
Under the section:
User-defined bridges provide automatic DNS resolution between containers.
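A minimal sketch of that approach, with illustrative names (the app resolves the database container by the name db, matching the hostname in the error):

docker network create scrapy-net                 # user-defined bridge with automatic DNS
docker run -d --network scrapy-net --name db postgres
docker run -it --network scrapy-net scrapper_scrapper scrapy crawl angeloni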
I have an image (gepick:latest) with a Node app, created from this Dockerfile:
FROM centos:7
# Create app directory
WORKDIR /usr/src/app
RUN curl --silent --location https://rpm.nodesource.com/setup_8.x | bash -
RUN yum install -y nodejs
RUN curl --silent --location https://dl.yarnpkg.com/rpm/yarn.repo | tee /etc/yum.repos.d/yarn.repo
RUN rpm --import https://dl.yarnpkg.com/rpm/pubkey.gpg
RUN yum install -y yarn
RUN yarn
COPY . .
EXPOSE 8080
CMD [ "yarn", "test-matches-collecting-job"]
My goal is to run the tests in Docker, but they require MongoDB.
docker run gepick:latest:
...
Mongoose default connection error: MongoError: failed to connect to server [localhost:27017] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27017]
...
I tried linking a mongo:4 image's container (docker run --link 0d24c3a35d5a gepick:latest) but got the same error.
When you launch your containers using a docker-compose YAML file, Docker bridges them together on a shared network and lets you launch the mongo container before the other containers that rely on mongo being active. Try something like this:
cat my-docker-compose.yml
version: '3'
services:
  my-gepick:
    image: gepick:latest
    container_name: blah_gepick
    restart: always
    depends_on:
      - loudmongo
    volumes:
      - /cryptdata5/var/log/blobs:/blobs
      - /webapp/enduser/bundle:/tmp
    environment:
      - MONGO_SERVICE_HOST=loudmongo
      - MONGO_SERVICE_PORT=$GKE_MONGO_PORT
      - MONGO_URL=mongodb://loudmongo:$GKE_MONGO_PORT/test
      - METEOR_SETTINGS=${METEOR_SETTINGS}
      - MAIL_URL=smtp://support@${GKE_DOMAIN_NAME}:blah@loudmail:587/
    links:
      - loudmongo
    ports:
      - 127.0.0.1:3000:3000
    working_dir: /tmp
    command: /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf
  loudmongo:
    image: mongo
    container_name: loud_mongo
    restart: always
    ports:
      - 127.0.0.1:$GKE_MONGO_PORT:$GKE_MONGO_PORT
    volumes:
      - /cryptdata7/var/data/db:/data/db
So your launch sequence may look like:
docker-compose -f /somedir/my-docker-compose.yml pull
docker-compose -f /somedir/my-docker-compose.yml up -d
I have a problem running Knex.js migrations inside my docker-compose setup.
The problem is that npm run db (knex migrate:rollback && knex migrate:latest && knex seed:run) runs before the database has even been created. Is there any way to specify that npm run db should only run after the database has been created?
NOTE: if I run these npm commands in the Docker terminal after everything has been built, it all works fine.
Here is my docker-compose.yml:
version: '3.6'
services:
  # Backend API
  server:
    container_name: server
    build: ./
    command: npm run db
    working_dir: /user/src/server
    ports:
      - "5000:5000"
    volumes:
      - ./:/user/src/server
    environment:
      POSTGRES_URI: postgres://test:test@192.168.99.100:5432/interapp
    links:
      - postgres
  # PostgreSQL database
  postgres:
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: interapp
      POSTGRES_HOST: postgres
    image: postgres
    ports:
      - "5432:5432"
And here is my Dockerfile:
FROM node:10.14.0
WORKDIR /user/src/server
COPY ./ ./
RUN npm install
CMD ["/bin/bash"]
First, in the docker-compose.yml file, use sh for a contained shell environment for your command to run in, i.e. sh -c 'npm run db'.
Secondly, use depends_on to wait for the database container to start. Your docker-compose file would now be:
services:
  # Backend API
  server:
    container_name: server
    build: ./
    command: sh -c 'npm run db'
    working_dir: /user/src/server
    depends_on:
      - postgres
    ports:
      - "5000:5000"
    volumes:
      - ./:/user/src/server
    environment:
      POSTGRES_URI: postgres://test:test@192.168.99.100:5432/interapp
    links:
      - postgres
Simply adding depends_on to the server service should do the trick here.
services:
  server:
    depends_on:
      - postgres
    ...
This will cause docker-compose to start the postgres container before the server container. It will not, however, wait for postgres to be ready. In this case that shouldn't be a problem, because postgres starts really quickly.
If you want something more solid, or depends_on doesn't do the trick, you can add an entrypoint wrapper script to your container. See https://docs.docker.com/compose/startup-order/, where you can read more about it. There are also links to tools there, so you don't have to write your own script from scratch.
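For example, a minimal wrapper script, assuming pg_isready is available in the server image (e.g. via the postgresql-client package):

#!/bin/sh
# wait-for-postgres.sh (sketch): block until Postgres accepts connections, then migrate
until pg_isready -h postgres -p 5432 -U test; do
  echo "Postgres is unavailable - sleeping"
  sleep 1
done
exec npm run db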