Docker PostgreSQL: initialization scripts

I have copied a test mydb.sql to the directory
/docker-entrypoint-initdb.d
and it works fine.
Now I'd like to create my demo db and insert data. I have
mydb_1_struct.sql -- the db structure
mydb_2_data.sql -- the db data
I need to execute these scripts in a strict order: structure first, then data.
What is the execution order of the scripts?

According to the docker-entrypoint script,
it processes the scripts in alphabetical order:
docker_process_init_files /docker-entrypoint-initdb.d/*
So you can just use the naming scheme you mention (mydb_1, mydb_2, etc.), as in the sketch below.
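For illustration, a minimal pair of init scripts that rely on that alphabetical ordering (the demo table and its columns are made up):

-- mydb_1_struct.sql: runs first, creates the structure
CREATE TABLE demo (
    id   integer PRIMARY KEY,
    name text NOT NULL
);

-- mydb_2_data.sql: runs second, loads the data
INSERT INTO demo (id, name) VALUES (1, 'first'), (2, 'second');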

Related

Function/procedure to save a schema as an .sql file in PostgreSQL

I want to create a procedure which, when called, creates a backup by generating an .sql file and saving it on my computer.
The procedure's name is trial_gen(). When I execute CALL trial_gen(), it should create a plain .sql file of the schema.
All solutions I found only used the SQL shell.
An SQL dump is itself a script, so I think it makes sense to generate one from the shell. It would be stored as a script (text) in a file anyway.
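For what it's worth, a minimal sketch of that approach using pg_dump from the OS shell rather than from SQL (the database and schema names are placeholders):

pg_dump --schema-only --schema=my_schema my_db > trial_schema.sql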

Creating Batch Files with PostgreSQL \copy Command in Jetbrains Datagrip

I'm familiarizing myself with the standalone version of DataGrip and having a bit of trouble understanding the different approaches to composing SQL via the console, external files, scratch files, etc.
I'm managing by referencing the documentation and am happy to figure things out as I go.
However, I'm trying to ingest CSV data into tables via batch files using the Postgres \copy command. DataGrip executes this command without error, but no data is populated.
This is my syntax, composed and run in the console view:
\copy tablename from 'C:\Users\username\data_file.txt' WITH DELIMITER E'\t' csv;
Note that the data is tab-separated and stored in a .txt file.
I'm able to use DataGrip's import functions (via the context menu) just fine, but I'd like to understand how to issue commands that do the same.
\copy is a command of the command-line PostgreSQL client psql.
I doubt that Datagrip invokes psql, so it won't be able to use \copy or any other “backslash command”.
You probably have to use DataGrip's import facilities, or start using psql.
OK, but what about the SQL COPY command (https://www.postgresql.org/docs/12/sql-copy.html)?
How can I run something like that with DataGrip?
BEGIN;
-- "values" is a reserved word in PostgreSQL, so use another column name
CREATE TEMPORARY TABLE temp_json(vals text) ON COMMIT DROP;
COPY temp_json FROM 'MY_FILE.JSON';
SELECT vals->>'aJsonField' AS f
FROM (SELECT vals::json AS vals FROM temp_json) AS a;
COMMIT;
I tried replacing 'MY_FILE.JSON' with the full path, with a parameter (?), putting the file in the sql directory, etc.
DataGrip's answer is:
[2021-05-05 10:30:45] [58P01] ERROR: could not open file '...' for reading : No such file or directory
EDIT :
I know why. RTFM! -_-
COPY with a file name instructs the PostgreSQL server to directly read from or write to a file. The file must be accessible by the PostgreSQL user (the user ID the server runs as) and the name must be specified from the viewpoint of the server.
Sorry.....
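In other words, a server-side COPY works from DataGrip too, provided the file lives on the database server and is readable by the server's OS user. For example (the path is only an illustration):

COPY temp_json FROM '/var/lib/postgresql/import/my_file.json';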

kdb+ database backup with scripts

I am trying to back up a kdb+ database, including all scripts and resource files. I can copy the tables with the command below, but that doesn't include scripts and dependency files. Is there any way to copy an entire kdb+ database, or is there a tool available for this?
The copy-tables command:
h:hopen hsym `$"localhost:5050"          / open a handle to the remote process
{[x;y] y set x y}[h;] each h"tables[]"   / fetch each remote table and assign it locally with set
You can save and load contexts (taken from http://code.kx.com/q4m3/12_Workspace_Organization/#126-saving-and-loading-contexts):
`:currentws set value `.
That will include the functions that are currently loaded; presumably your scripts are already saved in files.

Attaching already detached Data and Log files in SQL Server 2008 R2

I have two directories of detached SQL Server .mdf and .ldf files, respectively. Are there any scripts in T-SQL or PowerShell that can pick these files up and attach them to a SQL Server instance without manually entering every specific file?
Occasionally, unattached data and log files are dumped to these locations, so I would ideally like not to have to update the script every time.
You can schedule a job to execute the following T-SQL statements and it will do the job for you:
USE [master]
GO
CREATE DATABASE [DataBase_Name] ON
( FILENAME = N'C:\Data\DataBase_Name_Data.mdf' ), --<---- Path to your .mdf file------
( FILENAME = N'C:\Data\DataBase_Name_Log.ldf' ) --<---- Path to your .ldf file------
FOR ATTACH
GO
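Since files land in those directories over time, you could also generate the attach statements dynamically instead of hard-coding each database. A rough sketch, assuming both files sit in C:\Data (as in the example above; adjust for separate directories) and follow the DataBase_Name_Data.mdf / DataBase_Name_Log.ldf naming convention; xp_dirtree is undocumented but widely used, so treat this as a starting point rather than a definitive script:

USE [master]
GO
-- list the files in the data directory into a temp table
CREATE TABLE #files (subdirectory nvarchar(260), depth int, [file] int);
INSERT INTO #files EXEC master.sys.xp_dirtree N'C:\Data', 1, 1;

DECLARE @mdf nvarchar(260), @name sysname, @sql nvarchar(max);
DECLARE c CURSOR FOR
    SELECT subdirectory FROM #files
    WHERE [file] = 1 AND subdirectory LIKE N'%_Data.mdf';
OPEN c;
FETCH NEXT FROM c INTO @mdf;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @name = REPLACE(@mdf, N'_Data.mdf', N'');
    IF DB_ID(@name) IS NULL  -- skip databases that are already attached
    BEGIN
        SET @sql = N'CREATE DATABASE ' + QUOTENAME(@name) + N' ON '
                 + N'( FILENAME = N''C:\Data\' + @mdf + N''' ), '
                 + N'( FILENAME = N''C:\Data\' + @name + N'_Log.ldf'' ) '
                 + N'FOR ATTACH;';
        EXEC (@sql);
    END;
    FETCH NEXT FROM c INTO @mdf;
END;
CLOSE c;
DEALLOCATE c;
DROP TABLE #files;
GO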
Found the solution:
http://gallery.technet.microsoft.com/scriptcenter/Attach-all-the-databases-ace9ed34
A few tweaks here and there and it did exactly what I was looking for.

MySQL to PostgreSQL query conversion

Please help me create a PostgreSQL query equivalent to this MySQL query:
LOAD DATA LOCAL INFILE 'file.txt' REPLACE INTO TABLE newtable FIELDS TERMINATED BY ',' IGNORE 1 LINES;
There is no equivalent feature in PostgreSQL - at least in the current 9.3 or any prior version.
You must do this in a few steps (see the sketch after this list):
CREATE TEMPORARY TABLE ...
COPY into the temp table
Do an UPDATE ... FROM followed by an INSERT INTO ... WHERE NOT EXISTS (...) to merge the data
DROP the temp table
Search for "postgresql bulk upsert" or "postgresql copy upsert".
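A minimal sketch of those steps, assuming a hypothetical target table newtable(id int PRIMARY KEY, val text) and a comma-separated file with one header line:

BEGIN;
CREATE TEMPORARY TABLE tmp_newtable (LIKE newtable) ON COMMIT DROP;
-- HEADER plays the role of IGNORE 1 LINES
COPY tmp_newtable FROM '/absolute/path/file.txt' WITH (FORMAT csv, HEADER);
-- REPLACE semantics for rows that already exist
UPDATE newtable n
SET    val = t.val
FROM   tmp_newtable t
WHERE  n.id = t.id;
-- insert the rows that don't exist yet
INSERT INTO newtable (id, val)
SELECT t.id, t.val
FROM   tmp_newtable t
WHERE  NOT EXISTS (SELECT 1 FROM newtable n WHERE n.id = t.id);
COMMIT;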
You might be looking for COPY.
COPY is run by the PostgreSQL backend (the "postgres" OS user), so that user needs permission to read from or write to the data file, and you must give COPY an absolute path. \copy, on the other hand, runs under the current $USER with that user's environment and can handle relative paths. The psql \copy is accordingly much easier to use if it does what you need.
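Side by side, with placeholder paths:

-- server-side: absolute path, read by the server's OS user
COPY newtable FROM '/var/lib/postgresql/file.txt' WITH (FORMAT csv, HEADER);
-- client-side, psql only: relative path, read as the current user
\copy newtable from 'file.txt' with (format csv, header)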