I have a docker-compose.yml file:
version: '3'
services:
  postgres:
    image: postgres:13.1
    healthcheck:
      test: ["CMD", "pg_isready", "-q", "-d", "postgres", "-U", "root"]
      timeout: 45s
      interval: 10s
      retries: 10
    restart: always
    environment:
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=password
      - APP_DB_USER=docker
      - APP_DB_PASS=docker
      - APP_DB_NAME=docker
    volumes:
      - ./db:/docker-entrypoint-initdb.d/
    ports:
      - 5432:5432
In the same directory there is a directory db with one file, init.sql:
CREATE TABLE accounts (
    user_id serial PRIMARY KEY,
    username VARCHAR(50) UNIQUE NOT NULL,
    password VARCHAR(50) NOT NULL,
    email VARCHAR(255) UNIQUE NOT NULL,
    created_on TIMESTAMP NOT NULL,
    last_login TIMESTAMP
);
INSERT INTO accounts (username, password, email, created_on) VALUES ('a', 'aaa', 'assdfdas', now());
When I run docker-compose up (with an empty db), the init.sql file is executed, but the database it runs in is root, not postgres. How can I change that?
You can use the psql command \connect <db-name> inside your init.sql file in order to connect to the correct database (given that the database already exists).
In your case, the init.sql file would look something like:
-- connect to the 'postgres' database first
\connect postgres
CREATE TABLE accounts (
    user_id serial PRIMARY KEY,
    username VARCHAR(50) UNIQUE NOT NULL,
    password VARCHAR(50) NOT NULL,
    email VARCHAR(255) UNIQUE NOT NULL,
    created_on TIMESTAMP NOT NULL,
    last_login TIMESTAMP
);
...
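An alternative worth sketching: the official postgres image also reads a POSTGRES_DB environment variable, and the entrypoint runs the /docker-entrypoint-initdb.d scripts against that database. When it is unset, the default database is named after POSTGRES_USER, which is why the script above lands in root. A minimal fragment of the compose file, assuming the rest of the service stays as posted:

```yaml
services:
  postgres:
    image: postgres:13.1
    environment:
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=password
      # With POSTGRES_DB set, the init scripts run in this database
      # instead of the default one named after POSTGRES_USER ("root").
      - POSTGRES_DB=postgres
```

Note that, as with \connect, this only takes effect on a fresh (empty) data directory.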
Related
I have this docker-compose.yml:
services:
  postgres:
    image: postgres:latest
    container_name: postgres
    ports:
      - "5432:5432"
    restart: unless-stopped
    volumes:
      - ./create-databases.sh:/docker-entrypoint-initdb.d/create-databases.sh
      - ./db_persist:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: tenant_master
      POSTGRES_DB2: tenant_1
      POSTGRES_DB3: tenant_2
and the file create-databases.sh:
#!/bin/bash
set -eu

function create_database() {
  local database=$1
  echo "  Creating database '$database'"
  psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
    CREATE DATABASE $database;
    GRANT ALL PRIVILEGES ON DATABASE $database TO $POSTGRES_USER;
EOSQL
}

create_database $POSTGRES_DB2
create_database $POSTGRES_DB3
I need to run the SQL snippet below in the database tenant_master (POSTGRES_DB):
CREATE TABLE public.tbl_tenant_master
(
    tenant_client_id serial NOT NULL,
    db_name character varying(128),
    url character varying(512) NOT NULL,
    user_name character varying(256) NOT NULL,
    password character varying(256) NOT NULL,
    driver_class character varying(512) NOT NULL,
    status character varying(64),
    PRIMARY KEY (tenant_client_id)
);
ALTER TABLE IF EXISTS public.tbl_tenant_master
    OWNER to postgres;
(1)
use master_db;
INSERT INTO `master_db`.`tbl_tenant_master` (`tenant_client_id`, `db_name`, `url`, `user_name`, `password`, `driver_class`, `status`)
VALUES ('200', 'testdb', 'jdbc:mysql://127.0.0.1:3306/testdb?useSSL=false', 'root', 'root', 'com.mysql.cj.jdbc.Driver', 'Active');
SELECT * FROM master_db.tbl_tenant_master;
(2)
How can I inject (1) and then (2) into the sh script (i.e., create the table and then insert the data from the shell script in the Docker setup)?
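One way to do it, offered as an untested sketch: keep everything in PostgreSQL syntax ((2) as posted is MySQL syntax; use and backtick quoting do not exist in Postgres) and pipe the statements into psql with --dbname "$POSTGRES_DB" from the same entrypoint script. Below, the SQL is only assembled into a variable and printed so it can be inspected; the commented psql line shows where it would plug into create-databases.sh:

```shell
#!/bin/bash
# Sketch only: the SQL from (1) and (2), rewritten for PostgreSQL
# (no `use`, no backticks; the database is chosen via psql's --dbname).
SEED_SQL=$(cat <<'EOSQL'
CREATE TABLE public.tbl_tenant_master
(
    tenant_client_id serial NOT NULL,
    db_name character varying(128),
    url character varying(512) NOT NULL,
    user_name character varying(256) NOT NULL,
    password character varying(256) NOT NULL,
    driver_class character varying(512) NOT NULL,
    status character varying(64),
    PRIMARY KEY (tenant_client_id)
);
INSERT INTO public.tbl_tenant_master
    (tenant_client_id, db_name, url, user_name, password, driver_class, status)
VALUES
    (200, 'testdb', 'jdbc:mysql://127.0.0.1:3306/testdb?useSSL=false',
     'root', 'root', 'com.mysql.cj.jdbc.Driver', 'Active');
EOSQL
)

# In create-databases.sh this would become:
#   echo "$SEED_SQL" | psql -v ON_ERROR_STOP=1 \
#       --username "$POSTGRES_USER" --dbname "$POSTGRES_DB"
echo "$SEED_SQL"
```

The entrypoint runs *.sh files in /docker-entrypoint-initdb.d in name order, so this can also live in a second script that sorts after create-databases.sh.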
my docker-compose:
version: '3.9'
services:
  postgres:
    image: postgres
    restart: always
    volumes:
      - ./db:/docker-entrypoint-initdb.d
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: 123
      POSTGRES_DB: location
    ports:
      - 5432:5432
/db/users.sql - init dump:
CREATE TABLE IF NOT EXISTS users (
    id UUID PRIMARY KEY,
    first_name VARCHAR(255) NOT NULL,
    last_name VARCHAR(255) NOT NULL,
    phone VARCHAR(255),
    email VARCHAR(255) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TRIGGER users_moddatetime
    BEFORE UPDATE ON users
    FOR EACH ROW
    EXECUTE PROCEDURE moddatetime(updated_at);
INSERT INTO users (first_name, last_name, phone, email) VALUES ('Name', 'Lastname', '986754673823', 'test@mail.com');
INSERT INTO users (first_name, last_name, phone, email) VALUES ('Test', 'TEst', '9878721632163', 'test2@mail.com');
When I'm running docker-compose up I get the error "function moddatetime() does not exist". Any ideas?
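A hedged guess at the fix: moddatetime() is shipped in PostgreSQL's contrib spi module and is packaged as the moddatetime extension, which is not installed into a database by default. Adding this before the CREATE TRIGGER in /db/users.sql should make the function available (the extension does ship with the official postgres image):

```sql
-- moddatetime() is provided by the contrib "moddatetime" extension;
-- it must be created in the database before the trigger can reference it
CREATE EXTENSION IF NOT EXISTS moddatetime;
```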
I have a yml file that looks like this:
volumes:
  quartz-pg-master_data:

networks:
  hostnet:
    external: true
    name: host

configs:
  quartz-create_quartz_tables-20201019-1.sh:
    file: ./config/quartz/create.sql

services:
  quartz-pg-master:
    image: postgres
    networks:
      - internal
    ports:
      - published: 5432
        target: 5432
        mode: host
    environment:
      PGDATA: /pg_data
      POSTGRES_DB: "quartz"
      POSTGRES_USER: "quartz"
      POSTGRES_PASSWORD: "password"
    configs:
      - source: quartz-create_quartz_tables-20201019-1.sh
        target: /docker-entrypoint-initdb.d/create.sql
    volumes:
      - quartz-pg-master_data:/pg_data
which has create.sql:
CREATE TABLE qrtz_service_job_details
(
    SCHED_NAME VARCHAR(120) NOT NULL,
    JOB_NAME VARCHAR(200) NOT NULL,
    JOB_GROUP VARCHAR(200) NOT NULL,
    DESCRIPTION VARCHAR(250) NULL,
    JOB_CLASS_NAME VARCHAR(250) NOT NULL,
    IS_DURABLE BOOL NOT NULL,
    IS_NONCONCURRENT BOOL NOT NULL,
    IS_UPDATE_DATA BOOL NOT NULL,
    REQUESTS_RECOVERY BOOL NOT NULL,
    JOB_DATA BYTEA NULL,
    PRIMARY KEY (SCHED_NAME, JOB_NAME, JOB_GROUP)
);
CREATE TABLE qrtz_service_triggers
(
    SCHED_NAME VARCHAR(120) NOT NULL,
    TRIGGER_NAME VARCHAR(200) NOT NULL,
    TRIGGER_GROUP VARCHAR(200) NOT NULL,
    JOB_NAME VARCHAR(200) NOT NULL,
    JOB_GROUP VARCHAR(200) NOT NULL,
    DESCRIPTION VARCHAR(250) NULL,
    NEXT_FIRE_TIME BIGINT NULL,
    PREV_FIRE_TIME BIGINT NULL,
    PRIORITY INTEGER NULL,
    TRIGGER_STATE VARCHAR(16) NOT NULL,
    TRIGGER_TYPE VARCHAR(8) NOT NULL,
    START_TIME BIGINT NOT NULL,
    END_TIME BIGINT NULL,
    CALENDAR_NAME VARCHAR(200) NULL,
    MISFIRE_INSTR SMALLINT NULL,
    JOB_DATA BYTEA NULL,
    PRIMARY KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP),
    FOREIGN KEY (SCHED_NAME, JOB_NAME, JOB_GROUP)
        REFERENCES qrtz_service_job_details (SCHED_NAME, JOB_NAME, JOB_GROUP)
);
...
for the quartz init db.
I am successfully initializing postgresql, according to the logs.
When I run
docker exec -ti bash, then
su quartz
my result is:
root@3fac28fb199d:/# su quartz
su: user quartz does not exist.
However, when I try the same with postgres (su postgres), I can switch successfully, but when I then run psql it gives me this error:
postgres@3fac28fb199d:/$ psql
psql: error: could not connect to server: FATAL: role "postgres" does not exist
Seems like my db and users are not created at all. What could be the problem?
Inside the container's bash shell I needed to use psql dbname username; in my case that worked and my tables were created:
psql quartz quartz
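As a side note on the su failure: POSTGRES_USER creates a database role named quartz, not an operating-system user; the only extra OS account in the image is postgres. So su can be skipped entirely (the container name below is a placeholder):

```shell
# no OS user "quartz" exists; connect with the database role instead
docker exec -it <container> psql --username quartz --dbname quartz
```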
I'm trying to set up a postgres container with ansible, but the database isn't created completely. Via docker logs, it shows that the script has been executed entirely, but the result is always an empty database.
Playbook script:
- name: Run PostgreSQL container
  become: no
  docker_container:
    env:
      POSTGRES_PASSWORD: "{{ postgres_pw }}"
    name: postgres
    image: postgres:11
    ports: "5432:5432"
    restart_policy: unless-stopped
    volumes:
      - /tmp/{{ db_backup_file }}:/docker-entrypoint-initdb.d/{{ db_backup_file }}:ro
      - pgdata:/var/lib/postgresql/data
sql file:
CREATE DATABASE database5;

--
-- TOC entry 647 (class 1247 OID 16548)
-- Name: geslacht; Type: TYPE; Schema: public; Owner: postgres
--

CREATE TYPE public.geslacht AS ENUM (
    'm',
    'v',
    'x'
);

ALTER TYPE public.geslacht OWNER TO postgres;

SET default_tablespace = '';

--
-- TOC entry 197 (class 1259 OID 16555)
-- Name: antwoorden; Type: TABLE; Schema: public; Owner: postgres
--

CREATE TABLE public.antwoorden (
    id integer NOT NULL,
    bericht text NOT NULL,
    gebruiker integer NOT NULL,
    vraag integer NOT NULL,
    gepost_op timestamp without time zone DEFAULT now() NOT NULL,
    actief boolean DEFAULT true NOT NULL,
    is_verkozen boolean DEFAULT false NOT NULL
);
Note: this is only a part of a script that creates all tables and adds sample data.
Why is database5 not being filled with the tables and data?
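Two likely culprits, offered as guesses: (a) CREATE DATABASE does not switch the session into the new database, so everything after it still runs in the default database the entrypoint connected to; a plain psql dump needs an explicit \connect. (b) Init scripts only run when the data directory is empty; with pgdata persisted as a named volume, any re-run of the container skips /docker-entrypoint-initdb.d entirely. A sketch of the fix for (a), at the top of the dump:

```sql
CREATE DATABASE database5;
-- switch into it before creating types and tables, otherwise they
-- land in the database the session started in
\connect database5

CREATE TYPE public.geslacht AS ENUM ('m', 'v', 'x');
```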
I have searched around Stack Overflow for relevant problems, but I did not find any.
I have a table in SQL in this format (call this file create_table.sql):
CREATE TABLE object (
    id BIGSERIAL PRIMARY KEY,
    name_c VARCHAR(10) NOT NULL,
    create_timestamp TIMESTAMP NOT NULL,
    change_timestamp TIMESTAMP NOT NULL,
    full_id VARCHAR(10),
    mod VARCHAR(10) NOT NULL CONSTRAINT mod_enum CHECK (mod IN ('original', 'old', 'delete')),
    status VARCHAR(10) NOT NULL CONSTRAINT status_enum CHECK (status IN ('temp', 'good', 'bad')),
    vers VARCHAR(10) NOT NULL REFERENCES vers (full_id),
    frame_id BIGINT NOT NULL REFERENCES frame (id),
    name VARCHAR(10),
    definition VARCHAR(10),
    order_ref BIGINT REFERENCES order_ref (id),
    UNIQUE (id, name_c)
);
This table is stored in Google Cloud. I have about 200000 insert statements, for which I use a multi-row "insert block" method that looks like this (call this file object_name.sql):
INSERT INTO object (
    name,
    create_timestamp,
    change_timestamp,
    full_id,
    mod,
    status,
    vers,
    frame_id,
    name)
VALUES
    ('Element', current_timestamp, current_timestamp, 'Element:1', 'current', 'temp', 'V1', (SELECT id FROM frame WHERE frame_id='Frame:data'), 'Description to element 1'),
    ('Element', current_timestamp, current_timestamp, 'Element:2', 'current', 'temp', 'V1', (SELECT id FROM frame WHERE frame_id='Frame:data'), 'Description to element 2'),
    ...
    ('Element', current_timestamp, current_timestamp, 'Element:200000', 'current', 'temp', 'V1', (SELECT id FROM frame WHERE frame_id='Frame:data'), 'Description to object 200000');
I have a bash script where a psql command is used to upload the data in object_name.sql to the table in Google Cloud:
PGPASSWORD=password psql -d database --username username --port 1234 --host 11.111.111 << EOF
BEGIN;
\i object_name.sql
COMMIT;
EOF
(Source: single transaction)
When I run this, I get this error:
BEGIN
psql:object_name.sql:60002: SSL SYSCALL error: EOF detected
psql:object_name.sql:60002: connection to server was lost
The current "solution" I have done now, is to chunk the file so each file can only have max 10000 insert statements. Running the psql command on these files works, but they take around 7 min.
Instead of having one file with 200000 insert statements, I divided them into 12 files where each file had max 10000 insert statements.
My questions:
1. Is there a limit on how large a file can be?
2. I also saw this post about how to speed up inserts, but I could not get COPY to work.
Hope someone out there has time to help me 🙂
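On question 2, a sketch (untested against this schema): COPY rows can only contain literal values, and every VALUES row here embeds a subselect against frame, which would explain why a direct conversion to COPY failed. The usual workaround is to \copy the literals into a staging table, then run one set-based INSERT ... SELECT that resolves the frame id via a join. The file name elements.csv and the staging column names are assumptions, and the duplicated name column in the original INSERT list is taken to mean definition:

```sql
-- staging table: literal values only, no expressions (text columns so
-- nothing is truncated before it reaches the real table)
CREATE TEMP TABLE object_stage (
    name text,
    full_id text,
    mod text,
    status text,
    vers text,
    frame_ref text,   -- e.g. 'Frame:data'
    definition text
);

-- client-side bulk load; \copy streams the local file through psql
\copy object_stage FROM 'elements.csv' WITH (FORMAT csv)

-- one set-based insert; the frame lookup becomes a single join
INSERT INTO object (name, create_timestamp, change_timestamp, full_id,
                    mod, status, vers, frame_id, definition)
SELECT s.name, current_timestamp, current_timestamp, s.full_id,
       s.mod, s.status, s.vers, f.id, s.definition
FROM object_stage s
JOIN frame f ON f.frame_id = s.frame_ref;
```

This should be far faster than 200000-row INSERTs, and the single transaction around it stays as in your bash script.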