How can I make an init.sql script run when using Docker with Postgresql? - postgresql

I'm trying to run Docker with PostgreSQL. To do this, I have the following docker-compose:
version: '3'
services:
  postgres:
    image: postgres:12
    restart: always
    networks:
      - backend
    ports:
      - '5432:5432'
    volumes:
      - ./db_data:/var/lib/postgresql/data
      - ./app/config/init.sql:/docker-entrypoint-initdb.d/create_tables.sql
    env_file:
      - ./app/config/.env
    healthcheck:
      test: [ "CMD", "pg_isready", "-q", "-d", "devdb", "-U", "postgres" ]
      timeout: 45s
      interval: 10s
      retries: 10
  app:
    build: app
    ports:
      - 3200:3200
    networks:
      - backend
    depends_on:
      postgres:
        condition: service_healthy
volumes:
  db_data:
networks:
  backend:
    driver: bridge
Now I create an init.sql file:
app/config/init.sql
CREATE TABLE todos(
    id SERIAL PRIMARY KEY NOT NULL,
    title VARCHAR NOT NULL,
    description VARCHAR NOT NULL,
    creating_time timestamp
);
I use the pgx driver to communicate with PostgreSQL:

insertQuery string = `INSERT INTO todos(title, description, creating_time) VALUES($1, $2, $3)`

dbUrl := fmt.Sprintf("postgres://%s:%s@%s:5432/%s", connInfo.User, connInfo.Password, connInfo.Host, connInfo.DBName)
db, errDB := pgxpool.Connect(context.Background(), dbUrl)
if errDB != nil {
    fmt.Println(errDB)
}
conn, err := db.Acquire(context.Background())
if err != nil {
    fmt.Printf("Unable to acquire a database connection: %v\n", err)
}
defer conn.Release()

datetime := time.Now()
dt := datetime.Format(time.RFC3339)
_, err = db.Exec(context.Background(), insertQuery, "todo.Title", "todo.Description", dt)
if err != nil {
    fmt.Println(err)
}
When the container starts, everything goes fine until it comes to executing the query.
Then I get the error: relation "todos" does not exist at character 13
I've never done this before, so I can't figure out what's wrong. The connection to the database seems fine, since it does not report an error, but for some reason the script that creates the table is never executed. I would be glad for any help solving this problem.

How's it going?
The entrypoint you configured only runs when no database has been initialized yet, as you can see here. Its main purpose is to set up the database on first startup; once that has already happened, it makes no sense to execute it again. Since you persist the data directory in ./db_data, a database left over from a previous run will cause the init scripts to be skipped.
My suggestion is to use this lib. It is pretty straightforward to use. If you want, I can share some examples.
Please let me know if this helped you.

Related

Error saying PostgreSQL database table doesn't exist when it is created in the entrypoint script - DOCKER COMPOSE and POSTGRESQL

I am completely new to Docker and Docker Compose and I have encountered an issue which I don't know how to solve.
I am trying to make a simple app which shows some items from my existing postgres database on localhost:8080. Everything works fine, so next I want to put this app into Docker containers using Docker Compose. I used a dump to get an apartments.sql file, which I am trying to load into the postgres Docker container by using VOLUMES, but I am getting an error saying that the table I am trying to access doesn't exist. My app code is:
def get_db_connection():
    conn = psycopg2.connect(host='db',
                            user='postgres',
                            password='pass',
                            port='5432')
    return conn

@app.route('/')
def index():
    conn = get_db_connection()
    cur = conn.cursor()
    cur.execute('SELECT * FROM maki;')
    # save the data into the variable called items
    items = cur.fetchall()
    new_items = tuples_to_list(items)
    cur.close()
    conn.close()
    return render_template('index.html', new_items=new_items)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080, debug=True)
When I run it locally, it connects to my local database and works normally.
My docker-compose.yml file is:
version: '3.8'
services:
  web:
    build: ./flask_app
    ports:
      - 8080:8080
    volumes:
      - ./:/app
    depends_on:
      - db
  db:
    image: postgres
    ports:
      - 5432:5432
    environment:
      - POSTGRES_PASSWORD=pass
      - POSTGRES_USER=postgres
      - POSTGRES_DB=apartments
    volumes:
      - data:/var/lib/postgresql/data
      - ./data/apartments.sql:/docker-entrypoint-initdb.d/apartments.sql
    container_name: postgresdb
volumes:
  data:
    driver: local
where in the data folder I put the dumped apartments.sql, which has a table named maki. I get the following error: psycopg2.errors.UndefinedTable: relation "maki" does not exist
LINE 1: SELECT * FROM maki;
I am not sure if this is the right approach, or in general, how I should do this in order for Docker Compose to work properly and show me the items on localhost:8080. What am I supposed to do with the index.html file in Docker?
My dumped apartments.sql file:
-- PostgreSQL database dump
--
-- Dumped from database version 13.7
-- Dumped by pg_dump version 13.7
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;
SET default_tablespace = '';
SET default_table_access_method = heap;
--
-- Name: maki; Type: TABLE; Schema: public; Owner: postgres
--
CREATE TABLE public.maki (
    name character varying(255),
    image text
);
ALTER TABLE public.maki OWNER TO postgres;
--
-- Data for Name: maki; Type: TABLE DATA; Schema: public; Owner: postgres
--
COPY public.maki (name, image) FROM stdin;
Prodej bytu 3+kk 79\\\\u00a0m\\\\u00b2\\ https://d18-a.sdn.cz/d_18/c_img_gY_m/AQF296.jpeg?fl=res,400,300,3|shr,,20|jpg,90
Prodej bytu 3+kk 60\\\\u00a0m\\\\u00b2\\ https://d18-a.sdn.cz/d_18/c_img_gV_p/VBhduS.jpeg?fl=res,400,300,3|shr,,20|jpg,90
Prodej bytu 3+kk 98\\\\u00a0m\\\\u00b2\\ https://d18-a.sdn.cz/d_18/c_img_gT_o/kPyBHP.jpeg?fl=res,400,300,3|shr,,20|jpg,90
Prodej bytu 3+kk 120\\\\u00a0m\\\\u00b2 (Mezonet)\\ https://d18-a.sdn.cz/d_18/c_img_gX_l/r7TKmQ.jpeg?fl=res,400,300,3|shr,,20|jpg,90
Prodej bytu 3+kk 81\\\\u00a0m\\\\u00b2\\ https://d18-a.sdn.cz/d_18/c_img_gT_p/7XDIio.jpeg?fl=res,400,300,3|shr,,20|jpg,90
Prodej bytu 3+kk 55\\\\u00a0m\\\\u00b2\\ https://d18-a.sdn.cz/d_18/c_img_gW_m/eBNBuKo.jpeg?fl=res,400,300,3|shr,,20|jpg,90
Prodej bytu 3+kk 75\\\\u00a0m\\\\u00b2\\ https://d18-a.sdn.cz/d_18/c_img_gS_n/xcgCeK.jpeg?fl=res,400,300,3|shr,,20|jpg,90
\.
--
-- PostgreSQL database dump complete
--

Can't connect to DB using Ktor application

I'm trying to start a Ktor application with a PostgreSQL database. For this I used Docker Compose. My docker-compose.yml:
version: '3.0'
services:
  ktor-sample:
    build: ./
    command: ./ktor-sample
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    restart: unless-stopped
    image: postgres:9.6.10-alpine
    volumes:
      - ./test:/var/lib/postgresql/data
    ports:
      - "5433:5433"
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD:
      POSTGRES_DB: AutoHistory
Every time I receive this error:
WARN Exposed - Transaction attempt #2 failed: Connection to localhost:5433 refused.
Check that the hostname and port are correct and that the postmaster is accepting
TCP/IP connections.. Statement(s): null
ktor-sample_1 | org.postgresql.util.PSQLException: Connection to localhost:5433
refused. Check that the hostname and port are correct and that the postmaster is
accepting TCP/IP connections.
The database is created and I can connect to it.
As for me, I do the following: I create an object for initializing the DB.
It can look like this:
object DbFactory {
    fun init() {
        val pool = hikari()
        val db = Database.connect(pool)
        transaction(db) {
            SchemaUtils.create(Creators, Visitors, Places, Roles, UserRoles, Users)
        }
    }

    private fun hikari(): HikariDataSource {
        val hikariConfig = HikariConfig("db.properties")
        return HikariDataSource(hikariConfig)
    }
}
and in Application.kt, initialize your DB first:
fun main() {
    embeddedServer(Netty, port = 8080, host = "0.0.0.0") {
        DbFactory.init()
        ...
The file describing your datasource should live in the project root, be called db.properties, and look something like:
jdbcUrl=jdbc:postgresql://localhost:5433/AutoHistory
dataSource.driverClass=org.postgresql.Driver
dataSource.driver=postgresql
dataSource.database=AutoHistory
dataSource.user=postgres
dataSource.password=
I don't have these rows in my docker-compose.yml:

ktor-sample:
  build: ./
  command: ./ktor-sample
  ports:
    - "8080:8080"
  depends_on:
    - db

and I start my app manually, but anyway I think it should help you.

Connecting Golang and Postgres docker containers

I'm trying to run a golang server at localhost:8080 that uses a postgres database. I've tried to containerize both the db and the server but can't seem to get them connected.
main.go
func (a *App) Initialize() {
    var db *gorm.DB
    var err error
    envErr := godotenv.Load(".env")
    if envErr != nil {
        log.Fatalf("Error loading .env file")
    }
    var dbString = fmt.Sprintf("port=5432 user=sample dbname=sampledb sslmode=disable password=password host=db")
    db, err = gorm.Open("postgres", dbString)
    if err != nil {
        fmt.Printf("failed to connect to database: %v\n", err)
    }
    a.DB = model.DBMigrate(db)
    a.Router = mux.NewRouter()
    a.setRoutes()
}

//Get : get wrapper
func (a *App) Get(path string, f func(w http.ResponseWriter, r *http.Request)) {
    a.Router.HandleFunc(path, f).Methods("GET")
}

//Post : post wrapper
func (a *App) Post(path string, f func(w http.ResponseWriter, r *http.Request)) {
    a.Router.HandleFunc(path, f).Methods("POST")
}

//Run : run on port
func (a *App) Run(port string) {
    handler := cors.Default().Handler(a.Router)
    log.Fatal(http.ListenAndServe(port, handler))
}

func (a *App) setRoutes() {
    a.Get("/", a.handleRequest(controller.Welcome))
    a.Get("/users", a.handleRequest(controller.GetUsers))
    a.Get("/user/{id}", a.handleRequest(controller.GetUser))
    a.Post("/login", a.handleRequest(controller.HandleLogin))
    a.Post("/users/add", a.handleRequest(controller.CreateUser))
    a.Post("/validate", a.handleRequest(controller.HandleValidation))
}

func main() {
    app := &App{}
    app.Initialize()
    app.Run(":8080")
}
server Dockerfile
FROM golang:latest
RUN mkdir /app
WORKDIR /app/server
COPY go.mod .
COPY go.sum .
RUN go mod download
COPY . .
docker-compose.yml
version: '3.7'
services:
  db:
    image: postgres
    container_name: ep-db
    environment:
      - POSTGRES_PORT=${DB_PORT}
      - POSTGRES_USER=${DB_USERNAME}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=${DB_NAME}
    ports:
      - '5432:5432'
    volumes:
      - ./db:/var/lib/postgresql/data
    networks:
      - internal
  server:
    container_name: ep-server
    build:
      context: ./server
      dockerfile: Dockerfile
    command: bash -c "go build && ./server -b 0.0.0.0:8080 --timeout 120"
    volumes:
      - ./server:/app/server
    expose:
      - 8080
    depends_on:
      - db
    networks:
      - internal
    stdin_open: true
volumes:
  db:
  server:
networks:
  internal:
    driver: bridge
I have some GET and POST requests that return the right values when I run it locally on my computer (for example, localhost:8080/users would return a JSON full of users from the database), but when I use curl inside the server container, I don't get any results. I am new to Docker; is there something wrong with what I'm doing so far?
Each docker container has its own IP address. When you connect to the postgres db from your application, you are using localhost, which is the container for the application and not the db. Based on your docker-compose, you should use the hostname db (the service name) to connect to the database.
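To make the host naming concrete, here is a small Go sketch of assembling a connection URL with the service name db as the host (buildDSN is an illustrative helper; the question's gorm code uses the keyword form with host=db, but the URL form makes the host part easy to see, and net/url also escapes special characters in the password):

```go
package main

import (
	"fmt"
	"net/url"
)

// buildDSN assembles a postgres connection URL from its parts. Inside the
// compose network the host must be the service name ("db"), not "localhost";
// url.UserPassword percent-escapes characters like '@' in the password.
func buildDSN(user, password, host, port, dbname string) string {
	u := url.URL{
		Scheme: "postgres",
		User:   url.UserPassword(user, password),
		Host:   fmt.Sprintf("%s:%s", host, port),
		Path:   "/" + dbname,
	}
	return u.String()
}

func main() {
	fmt.Println(buildDSN("sample", "password", "db", "5432", "sampledb"))
	// → postgres://sample:password@db:5432/sampledb
}
```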
As suggested by @DavidMaze you should verify the logs from your server container. Also:
First, ensure ep-server is running (check that the output of docker container ls shows status "running" for ep-server).
Run docker logs ep-server to view errors (if any).
If there are no errors in the logs, run docker exec -it ep-server bash to log in to your container, then run telnet ep-db 5432 to verify that your postgres instance is reachable from ep-server.

Express app won't connect to docker postgres instance in production

I am building a webapp using express and postgres. For building my app I use the following files:
docker-compose.yml:
version: '3'
services:
  postgres:
    image: postgres:12
    volumes:
      - postgres-volume:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=${ADMIN_DB_PASSWORD}
    networks:
      - db
    restart: unless-stopped
  api:
    build:
      context: ./backEnd/
      dockerfile: Dockerfile.debug
    volumes:
      - ./backEnd/index.js:/app/index.js
    ports:
      - "80:3002"
    networks:
      - db
    depends_on:
      - postgres
    environment:
      - DB_HOST=postgres
      - DB_PORT=5432
      - DB_DATABASE=hive
      - DB_USER=${DB_API_USER}
      - DB_PASSWORD=${DB_API_PASSWORD}
    restart: unless-stopped
networks:
  db:
The container runs the following code:
const express = require('express');
const bodyParser = require('body-parser')
const pgp = require('pg-promise')();

const connection = {
    host: process.env.DB_HOST,
    port: process.env.DB_PORT,
    database: process.env.DB_DATABASE,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    max: 30,
}

const db = pgp(connection);

function getUser(email){
    return(
        db.oneOrNone("SELECT * FROM users WHERE email = ${email}", {
            email: email.toLowerCase()
        })
    )
}

const app = express();
app.use(bodyParser.json());

app.post('/login', async (req, res)=>{
    const email = req.body.email;
    console.log(email)
    const user = await getUser(email);
    console.log("made it past")
});

app.listen(3002, function () {
    console.log('Listening on port 3002!');
});
When I call the login endpoint, it faithfully logs the email, but never gets past the
await getUser(email)
It does not throw an error or return null, it just hangs there. Interestingly, it works on my local machine and gets past the await, just not on my Linux box.
I have also noticed that if I change the db host to something nonsensical, pg-promise throws an error on my local machine. It does not, however, throw an error on my remote Linux machine.
Also, if I run the script on the Linux box without Docker, it seamlessly connects to postgres. It appears to be something that only happens in conjunction with Docker on Linux.
I am completely stumped by this error, as I have no indication of what is going wrong. The database also appears to be set up correctly, as I can connect to it and use it from my local machine with the same code.
Thank you in advance for your help.

ConnectionTimeoutException: No suitable servers found while inserting data into mongodb database

I am new to Docker and MongoDB.
I have the following in my docker-compose.yml file:
version: '3.3'
services:
  web:
    build:
      context: ./
      dockerfile: Dockerfile
    container_name: php73
    volumes:
      - ./src:/var/www/html/
    ports:
      - 8000:80
    depends_on:
      - db
    networks:
      - my-network
  db:
    image: mongo:latest
    container_name: mymongo
    restart: always
    ports:
      - '27017-27019:27017-27019'
    networks:
      - my-network
networks:
  my-network:
The following file is run in the PHP container. It just creates a database and inserts some collections into it.
<?php
require 'vendor/autoload.php';

$myClient = new MongoDB\Client('mongodb://127.0.0.1:27017');
$mydb = $myClient->my_db;
$mycollection = $mydb->my_collection;
$insertData = $mycollection->insertOne([
    'doc1' => 'abc',
    'doc2' => 'def'
]);
?>
But it shows the following error:
PHP Fatal error: Uncaught MongoDB\\Driver\\Exception\\ConnectionTimeoutException: No suitable servers found (`serverSelectionTryOnce` set): [connection refused calling ismaster on '127.0.0.1:27017'] in /var/www/html/vendor/mongodb/mongodb/src/functions.php:431\nStack trace:\n#0 /var/www/html/vendor/mongodb/mongodb/src/functions.php(431): MongoDB\\Driver\\Manager->selectServer(Object(MongoDB\\Driver\\ReadPreference))\n#1 /var/www/html/vendor/mongodb/mongodb/src/Collection.php(929): MongoDB\\select_server(Object(MongoDB\\Driver\\Manager), Array)\n#2 /var/www/html/mycode.php(16): MongoDB\\Collection->insertOne(Array)\n#3 {main}\n thrown in /var/www/html/vendor/mongodb/mongodb/src/functions.php on line 431, referer: http://localhost:8000/index.php
I could not figure out why it is showing a ConnectionTimeoutException.
Could anyone give any hints?
Update your connection string to:

<?php
require 'vendor/autoload.php';

$myClient = new MongoDB\Client('mongodb://db:27017');
// or: new MongoDB\Client('mongodb://mymongo:27017'); // using the container_name
$mydb = $myClient->my_db;
$mycollection = $mydb->my_collection;
$insertData = $mycollection->insertOne([
    'firstname' => 'abc',
    'lastname' => 'def'
]);
?>
docker-compose creates a network by default, so you can access other containers using the service or container name. 127.0.0.1 refers to the localhost of the PHP container, not the DB container.