Gradle Docker tasks - postgresql

For my local development tasks:
1. I want to ensure that the DB, which in this case is Postgres running in a Docker container, is up. I have a bootRun task defined in my build.gradle file:
bootRun {
    jvmArgs = [
        "-Ddb.host=jdbc:postgresql://localhost:5432/postgres",
        "-Ddb.username=postgres",
        "-Ddb.password=apgdb"
    ]
}
With Docker installed on my machine, I just want to ensure that I do not have to manually start the Postgres image from the terminal and then do a bootRun.
Can we create a Gradle task that starts Postgres every time we spin up the app and restarts it on every exit of bootRun?

I use the gradle-docker-compose plugin to achieve this kind of task. You can create a docker-compose.yml file that defines your postgres db:
services:
  db:
    image: postgres:11
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: apgdb
      POSTGRES_DB: postgres
And here is the corresponding build.gradle file:
plugins {
    id "com.avast.gradle.docker-compose" version "0.8.14"
}
dockerCompose {
    database {
        useComposeFiles = ['docker-compose.yml']
    }
}
bootRun {
    dependsOn 'databaseComposeUp'
    jvmArgs = [
        "-Ddb.host=jdbc:postgresql://localhost:5432/postgres",
        "-Ddb.username=postgres",
        "-Ddb.password=apgdb"
    ]
}
Now when you run gradle bootRun it will start up the database before Spring boots up.
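The question also asks for the database to be restarted on every exit of bootRun. The plugin generates a matching databaseComposeDown task for the nested database block, so one way to get a fresh container per run is to wire that task in as a finalizer; a minimal sketch, assuming plugin version 0.8.14 as above:
bootRun {
    dependsOn 'databaseComposeUp'
    // Tear the Postgres container down whenever bootRun finishes,
    // whether the app exits normally or fails.
    finalizedBy 'databaseComposeDown'
}
With this wiring, every gradle bootRun starts a fresh Postgres container and stops it again when the app exits.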

Related

Failed Authentication when connecting with Flask through PyMongo to MongoDB in Docker Compose

I'm using Docker Compose and trying to make two containers talk to each other. One runs a MongoDB database and the other one is a Flask app that needs to read data from the first one using PyMongo.
The Mongo image is defined with the following Dockerfile:
FROM mongo:6.0
ENV MONGO_INITDB_ROOT_USERNAME admin
ENV MONGO_INITDB_ROOT_PASSWORD admin-pwd
ENV MONGO_INITDB_DATABASE admin
COPY mongo-init.js /docker-entrypoint-initdb.d/
EXPOSE 27017
And my data is loaded through the following mongo-init.js script:
db.auth('admin', 'admin-pwd')
db = db.getSiblingDB('quiz-db')
db.createUser({
  user: 'quiz-admin',
  pwd: 'quiz-pwd',
  roles: [
    {
      role: 'readWrite',
      db: 'quiz-db'
    }
  ]
});
db.createCollection('questions');
db.questions.insertMany([
  {
    question: "Do you like sushi?",
    answers: {
      0: "Yes",
      1: "No",
      2: "Maybe"
    }
  }
]);
The Flask app is pretty straightforward. I'll skip the Dockerfile for this one as I don't think it's important to the issue. I try to connect to the database with the following code:
from flask import Flask, render_template
from pymongo import MongoClient
app = Flask(__name__)
MONGO_HOST = "questions-db"
MONGO_PORT = "27017"
MONGO_DB = "quiz-db"
MONGO_USER = "quiz-admin"
MONGO_PASS = "quiz-pwd"
uri = "mongodb://{}:{}#{}:{}/{}?authSource=quiz-db".format(MONGO_USER, MONGO_PASS, MONGO_HOST, MONGO_PORT, MONGO_DB)
client = MongoClient(uri)
db=client["quiz-db"]
questions=list(db["questions"].find())
I'm not an expert when it comes to Mongo, but I've set authSource to 'quiz-db' since that's the database where I created the user in the mongo-init.js script. I tried running the database container alone and successfully logged in using mongosh as the user 'quiz-admin'. All the data is there and everything works fine.
The problem is only coming up when trying to connect from the Flask app. Here's my Docker compose file:
version: '3.9'
services:
  # Flask App
  app:
    build: ./app
    ports:
      - "8000:5000"
    depends_on:
      - "questions-db"
    networks:
      - mongo-net
  # Mongo Database
  questions-db:
    build: ./questions_db
    hostname: questions-db
    container_name: questions-db
    ports:
      - "27017:27017"
    networks:
      - mongo-net
networks:
  mongo-net:
    driver: bridge
When I run 'docker compose up' I get the following error on the Flask container startup:
pymongo.errors.OperationFailure: command find requires authentication
full error: {'ok': 0.0, 'errmsg': 'command find requires authentication', 'code': 13, 'codeName': 'Unauthorized'}
MongoDB stores all user credentials in the admin database, unless you are using a really ancient version.
Use authSource=admin in the URI
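Applied to the question's code, only the authSource query parameter changes; a minimal sketch:
from pymongo import MongoClient

MONGO_HOST = "questions-db"
MONGO_PORT = "27017"
MONGO_DB = "quiz-db"
MONGO_USER = "quiz-admin"
MONGO_PASS = "quiz-pwd"

# Authenticate against the admin database instead of quiz-db.
uri = "mongodb://{}:{}@{}:{}/{}?authSource=admin".format(
    MONGO_USER, MONGO_PASS, MONGO_HOST, MONGO_PORT, MONGO_DB)
client = MongoClient(uri)
questions = list(client[MONGO_DB]["questions"].find())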

Waiting for the database service to be running before other services in Docker [duplicate]

This question already has answers here:
Docker Compose wait for container X before starting Y
(20 answers)
Closed 3 years ago.
I am trying to run my app, which depends_on my PostgreSQL service, in Docker.
Let's say my PostgreSQL database is not running yet, and my docker-compose.yml is:
version: "3"
services:
myapp:
depends_on:
- db
container_name: myapp
build:
context: .
dockerfile: Dockerfile
restart: on-failure
ports:
- "8100:8100"
db:
container_name: postgres
restart: on-failure
image: postgres:10-alpine
ports:
- "5555:5432"
environment:
POSTGRES_USER: myuser
POSTGRES_PASSWORD: 12345678
POSTGRES_DB: dev
When I try docker-compose up -d, it does create the postgres container and then the myapp service,
but it seems PostgreSQL is not actually ready yet: after myapp finishes installing and starts running, it says:
my database server not running yet
How can I make myapp wait until the db service reports that the database is running?
The documentation of depends_on says that:
depends_on does not wait for db to be “ready” before starting myapp - only until it has been started.
So you'll have to check that your database is ready by yourself before running your app.
Docker has documentation that explains how to write a wrapper script to do that:
#!/bin/sh
# wait-for-postgres.sh
set -e

host="$1"
shift
cmd="$@"

until PGPASSWORD=$POSTGRES_PASSWORD psql -h "$host" -U "postgres" -c '\q'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - executing command"
exec $cmd
Then you can just call this script before running your app in your docker-compose file:
command: ["./wait-for-postgres.sh", "db", "python", "app.py"]
There are also tools such as wait-for-it, dockerize or wait-for.
However, these solutions have some limitations, and Docker says that:
The best solution is to perform this check in your application code, both at startup and whenever a connection is lost for any reason.
This method will be more resilient.
Here is how I use a retry strategy in JavaScript:
async ensureConnection () {
  // Retry a trivial query until the database answers, up to 5 attempts.
  let retries = 5
  const interval = 1000
  while (retries) {
    try {
      await this.utils.raw('SELECT \'ensure connection\';')
      break
    } catch (err) {
      console.error(err)
      retries--
      console.info(`retries left: ${retries}, interval: ${interval} ms`)
      if (retries === 0) {
        throw err
      }
      // Wait a bit before the next attempt.
      await new Promise(resolve => setTimeout(resolve, interval))
    }
  }
}
Please have a look at: https://docs.docker.com/compose/startup-order/.
Docker-compose won't wait for your database; you need a way to check it externally (via a script, or by retrying the connection as Mickael B. proposed). One of the solutions proposed in the link above is the wait-for.sh utility script - we used it in a project and it worked quite well.

Docker failed to run entry-point

So I have created a node/mongo app and I am trying to run everything on Docker.
I can get everything to run just fine until I try to add the init file for the mongo instance into the entrypoint.
Here is my Dockerfile for mongo (called mongo.dockerfile, in /MongoDB):
FROM mongo:4.2
WORKDIR /usr/src/mongo
VOLUME /docker/volumes/mongo /user/data/mongo
ADD ./db-init /docker-entrypoint-initdb.d
CMD ["mongod", "--auth"]
The db-init folder contains an init.js file that looks like so (removed the names of stuff):
db.createUser({
  user: '',
  pwd: '',
  roles: [ { role: 'readWrite', db: '' } ]
})
Here is my docker-compose file:
version: "3.7"
services:
web:
container_name: web
env_file:
- API/web.env
build:
context: ./API
target: prod
dockerfile: web.dockerfile
ports:
- "127.0.0.1:3000:3000"
depends_on:
- mongo
links:
- mongo
restart: always
mongo:
container_name: mongo
env_file:
- MongoDB/mongo.env
build:
context: ./MongoDB
dockerfile: mongo.dockerfile
restart: always
The exact error I get when running a docker-compose up is:
ERROR: for mongo Cannot start service mongo: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"docker-entrypoint-initdb.d\": executable file not found in $PATH": unknown
I had this working at one point with another project but cannot seem to get this one to work at all.
Any thoughts on what I am doing wrong?
Also note I have seen other issues like this saying to chmod +x the path (tried that, didn't work).
Also tried chmod 777; that also didn't work. (Maybe I did this wrong, as I don't know exactly what to run it on?)
Your entrypoint has been modified from the upstream image, and it's not clear how from the input you've provided. You may have modified the mongo image itself and need to pull a fresh copy with docker-compose build --pull. Otherwise, you can force the entrypoint back to the upstream value:
ENTRYPOINT ["docker-entrypoint.sh"]
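Applied to the Dockerfile from the question, that would look something like this (a sketch; the VOLUME line is omitted here because host-path bind mounts are configured at run time, e.g. in docker-compose, rather than in the Dockerfile):
FROM mongo:4.2
WORKDIR /usr/src/mongo
ADD ./db-init /docker-entrypoint-initdb.d
# Pin the entrypoint back to the upstream image's script so the
# init files under /docker-entrypoint-initdb.d are executed.
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["mongod", "--auth"]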

Create a MongoDB database through docker-compose

docker-compose.yml
mongo:
  image: mongo:3.4.13-jessie
  restart: always
  ports:
    - "27019:27017"
  volumes:
    - ./db1.js:/docker-entrypoint-initdb.d/db1.js:ro
    - ./db2.js:/docker-entrypoint-initdb.d/db2.js:ro
db1.js
use testdba;
db.getCollection("users").insert({
  "name": "A"
})
db2.js
use testdbb;
db.getCollection("users").insert({
  "name": "B"
})
The files are copied into the mongo container but not executed.
My understanding is that files (*.js) inside docker-entrypoint-initdb.d/ get executed, but with the above setup they don't.
Does anyone know what I am missing?
Also, in some of the posts I found:
environment:
  MONGO_INITDB_ROOT_USERNAME: test
  MONGO_INITDB_ROOT_PASSWORD: test
  MONGO_INITDB_DATABASE: admin
Do I need to add a root user before the .js files can be executed?
MongoDB finds your files and tries to execute them, but it fails. You can see that in the logs: run docker-compose logs -f and look for the line where it tries to run your first .js file.
To fix it, use this syntax in your .js files:
db.testdba.getCollection("users").insert({
  "name": "A"
});
After you edit both files, start your service with docker-compose up -d. Check the logs again and you will see that the files are executed successfully.
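If you specifically want the documents in two separate databases (testdba and testdbb), another option is to switch databases with getSiblingDB, which is valid JavaScript in init scripts, unlike the shell-only use statement; a sketch for db1.js:
// Switch to the testdba database before inserting.
db = db.getSiblingDB("testdba");
db.getCollection("users").insert({
  "name": "A"
});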

concourse git resource error: getting the final child's pid from pipe caused "EOF"

When trying to pull a git resource we are getting an error:
runc run: exit status 1: container_linux.go:345: starting container process caused "process_linux.go:303: getting the final child's pid from pipe caused \"EOF\""
We are using Oracle Linux release 7.6 and Docker version 18.03.1-ce.
We have followed the instructions on https://github.com/concourse/concourse-docker. We have tried older versions of Concourse (4.2.0 & 4.2.3). We can see the workers are up using fly.
We found this on GitHub: https://github.com/concourse/concourse/issues/4021, which had a similar issue, but we couldn't find the related Stack Overflow question that the answerer mentioned.
Our docker-compose file:
version: '3'
services:
  db:
    image: postgres
    environment:
      POSTGRES_DB: concourse
      POSTGRES_USER: concourse_user
      POSTGRES_PASSWORD: concourse_pass
  web:
    image: concourse/concourse
    command: web
    links: [db]
    depends_on: [db]
    ports: ["61111:8080"]
    volumes: ["<path to repo folder>/keys/web:/concourse-keys"]
    environment:
      CONCOURSE_EXTERNAL_URL: <our url>
      CONCOURSE_POSTGRES_HOST: db
      CONCOURSE_POSTGRES_USER: concourse_user
      CONCOURSE_POSTGRES_PASSWORD: concourse_pass
      CONCOURSE_POSTGRES_DATABASE: concourse
      CONCOURSE_ADD_LOCAL_USER: test:test
      CONCOURSE_MAIN_TEAM_LOCAL_USER: test
  worker:
    image: concourse/concourse
    command: worker
    privileged: true
    depends_on: [web]
    volumes: ["<path to repo folder>/keys/worker:/concourse-keys"]
    links: [web]
    stop_signal: SIGUSR2
    environment:
      CONCOURSE_TSA_HOST: web:2222
We expected the resource to pull, as connectivity to the repo is in place and verified.
Not sure about your second issue with volumes, but I solved the original problem by setting the user.max_user_namespaces parameter to 15000:
sysctl -w user.max_user_namespaces=15000
The solution was found here: https://github.com/docker/docker.github.io/issues/7962
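To keep that setting after a reboot, you can also persist it through sysctl's config files; a sketch (the file name under /etc/sysctl.d is just a convention):
# Apply immediately
sysctl -w user.max_user_namespaces=15000
# Persist across reboots
echo "user.max_user_namespaces=15000" > /etc/sysctl.d/98-userns.conf
sysctl --system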
This issue was fixed by updating the kernel from 3.1.x to 4.1.x. We have a new issue, "failed to create volume", on all our pipelines. I will update if I find a solution to this too.