execute sql files from a folder in postgres - postgresql

I have a set of SQL files placed in a folder inside a Postgres Docker image. Whenever a new account is created, I want a new database to be created and all of these scripts to be executed against that newly created database.
The new database will be created manually. I need to create an .sh file that asks for the db name, host, port, password etc. and then executes all the SQL files on the newly created database.
I tried, and I can execute one single file:
psql -h localhost -d sampledb -U superuser -f sample.sql
I need to execute all files in the folder. How can I achieve this?

Here you can make use of a small bash script.
Copy all your SQL files to a folder, for example test.
Then execute the script below:
for f in test/*.sql; do
    psql -h localhost -d sampledb -U superuser -f "$f"
done
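If you also want the script to prompt for the db name, host, port and password, as the question asks, here is a minimal sketch under those assumptions (the run_all.sh name and the test/ folder are just placeholders):
#!/bin/bash
# run_all.sh - hypothetical sketch: ask for connection details,
# then run every .sql file in test/ against that database.
read -p "Host: " PGHOST
read -p "Port: " PGPORT
read -p "Database: " PGDATABASE
read -p "User: " PGUSER
read -s -p "Password: " PGPASSWORD; echo
export PGHOST PGPORT PGDATABASE PGUSER PGPASSWORD   # psql reads these environment variables

for f in test/*.sql; do
    echo "Running $f"
    psql -v ON_ERROR_STOP=1 -f "$f" || exit 1   # stop at the first failing script
done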

Related

Execute a .sql file with psql and make a log file

I'm dealing with psql for the first time and I need a command that executes a .sql file and after executing, it should make a .log file with the script output. I'm looking for something similar to this other command that I use with SQL Server:
sqlcmd -U userid -P password -S serveraddress -i path_to_the_sql_file -o path_where_to_save_log_file
Can you help me, please? :-)
I would spend some time on the psql documentation page.
A quick example:
psql -U userid -h serveraddress -d some_db -f path_to_the_sql_file -L path_where_to_save_log_file
With Postgres you need to connect to a database with a client. There are other options for inputting commands and capturing output at the link above.
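Note that -L / --log-file captures query output; if you also want error messages in the same file, a simple alternative (a sketch reusing the same placeholder names) is to redirect stderr yourself:
# send both normal output and error messages to one log file
psql -U userid -h serveraddress -d some_db -f path_to_the_sql_file > path_where_to_save_log_file 2>&1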

psql, can't copy db content to another - cannot run inside a transaction block-

I'd like to copy the content of my local machine to my remote one (inside a docker).
For some reason, it is more complicated than I expected:
When I try to copy the data to the remote one, I get this "ERROR: CREATE DATABASE cannot run inside a transaction block".
Ok... So I got into my docker container and added the rule \set AUTOCOMMIT inside. But I still get this error.
This is the command I did:
// backup
pg_dump -C -h localhost -U postgres woof | xz >backup.xz
and then in my remote computer:
xz -dc backup.xz | docker exec -i -u postgres waf-postgres psql --set ON_ERROR_STOP=on --single-transaction
But each time I get this "CREATE DATABASE cannot run inside a transaction block", no matter what I try. Even if I set autocommit to "on".
Here is my problem: I don't know what a transaction block is. And I don't understand why copying one db to another needs to be such a pain: my remote db is empty. So why is there so much fuss, and why can't psql just force what I want?
My aim is just to copy my local db to the remote one.
What happens here is: you add a CREATE DATABASE statement with the -C flag and then try to run psql with --single-transaction, so the contents of the script are wrapped in BEGIN; ... END;, where you can't use CREATE DATABASE.
So either remove -C and run psql against an existing database, or remove --single-transaction for psql. Make the decision based on what you really need (see the example after the man page excerpts below)...
from man pg_dump:
-C
--create
Begin the output with a command to create the database itself and reconnect to the created database. (With a script of this form, it doesn't matter which database in the destination installation you connect to before running the script.) If --clean is also specified, the script drops and recreates the target database before reconnecting to it.
from man psql:
--single-transaction
This option can only be used in combination with one or more -c and/or -f options. It causes psql to issue a BEGIN command before the first such option and a COMMIT command after the last one, thereby wrapping all the commands into a single transaction. This ensures that either all the commands complete successfully, or no changes are applied.
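For example, keeping -C from the question and simply dropping --single-transaction, the restore could look roughly like this (a sketch reusing the question's container and database names; connecting to the postgres maintenance database is an assumption):
# restore without --single-transaction, so CREATE DATABASE can run outside a transaction block
xz -dc backup.xz | docker exec -i -u postgres waf-postgres psql --set ON_ERROR_STOP=on -d postgres

# or: dump without -C, create the target database yourself, and keep --single-transaction
pg_dump -h localhost -U postgres woof | xz > backup.xz
xz -dc backup.xz | docker exec -i -u postgres waf-postgres psql --set ON_ERROR_STOP=on --single-transaction -d woof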

copy (pg_dump) .sql file to postgresql using Cygwin

I have downloaded Yelp's SQL files and tried to import the SQL into my local PostgreSQL.
I am running Postgres on Windows 10 and using Cygwin to execute the commands that I found on Google. (It took me forever to settle on Cygwin instead of the Windows psql shell.)
Anyhow, Yelp gives the data schema in SQL and also the data in SQL. You can find them at the link attached below:
https://www.yelp.com/dataset/download
So, basically, what I thought was: create empty tables with Yelp's schema,
then copy all of Yelp's data into those tables.
pg_dump -h localhost -p 5433 -U postgres -s mydb < D:/YELP/yelp_schema.sql
pg_dump -h localhost -p 5433 -U postgres -d mydb < D:/YELP/yelp_sql-2.tar
I checked my database and nothing has changed; I do not see the tables.
This is what I see in the Cygwin terminal:
[screenshot of the Cygwin terminal output]
and nothing in my PostgreSQL database.
Please let me know what I have missed.
Thanks a lot
Your link asks for an email to download, so I did not check whether they have any advice on their data sets... Anyway, you are using pg_dump, which dumps data. To import the resulting file, use psql for plain SQL files and pg_restore for custom-format ones...
Eg:
psql -h localhost -p 5433 -U postgres -d mydb -f D:/YELP/yelp_schema.sql
pg_restore -h localhost -p 5433 -U postgres -d mydb -Ft D:/YELP/yelp_sql-2.tar
https://www.postgresql.org/docs/current/static/app-pgdump.html
pg_dump — extract a PostgreSQL database into a script file or other archive file

Postgresql Database export to .sql file

I want to export my database as a .sql file.
Can someone help me? The solutions I have found don't work.
A detailed description please.
On Windows 7.
pg_dump defaults to a plain SQL export, both data and structure.
Open a command prompt and run:
pg_dump -U username -h localhost databasename >> sqlfile.sql
Passing -h localhost is preferable because otherwise, most of the time, you will get an error like: ...FATAL: Peer authentication failed for user ...
On Windows, first make sure the PostgreSQL bin directory is added to the PATH environment variable:
C:\Program Files\PostgreSQL\12\bin
After successfully adding the path, restart cmd and type the command:
pg_dump -U username -p portnumber -d dbname -W -f location
This command will export both schema and data (-W just prompts for the password).
For only the schema add -s, and for only the data add -a.
Replace each variable (username, portnumber, dbname and location) according to your situation.
Everything is case sensitive; make sure you type everything correctly.
And to import:
psql -h hostname -p port_number -U username -f your_file.sql databasename
Make sure your database has been created, or that a creation query is present in the .sql file.
Documentation: https://www.postgresql.org/docs/current/app-pgdump.html
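As a concrete, hedged illustration of those two commands (all names, ports and file names below are placeholders, assuming a bash-like shell such as Cygwin):
# export schema and data from mydb to a plain SQL file
pg_dump -U postgres -p 5432 -d mydb -W -f mydb.sql
# import that file into a freshly created database
psql -h localhost -p 5432 -U postgres -f mydb.sql mydb_copy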
Go to your command line and run
pg_dump -U userName -h localhost -d databaseName > ~/Desktop/cmsdump.sql

Creating a copy of a database in PostgreSQL

What's the correct way to copy entire database (its structure and data) to a new one in pgAdmin?
Postgres allows the use of any existing database on the server as a template when creating a new database. I'm not sure whether pgAdmin gives you the option on the create database dialog but you should be able to execute the following in a query window if it doesn't:
CREATE DATABASE newdb WITH TEMPLATE originaldb OWNER dbuser;
Still, you may get:
ERROR: source database "originaldb" is being accessed by other users
To disconnect all other users from the database, you can use this query:
SELECT pg_terminate_backend(pg_stat_activity.pid) FROM pg_stat_activity
WHERE pg_stat_activity.datname = 'originaldb' AND pid <> pg_backend_pid();
A command-line version of Bell's answer:
createdb -O ownername -T originaldb newdb
This should be run under the privileges of the database master, usually postgres.
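If other sessions are still connected to originaldb, a minimal command-line sketch that combines both steps (using the same placeholder names as above) might look like:
# terminate other connections to the template database, then clone it
psql -U postgres -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'originaldb' AND pid <> pg_backend_pid();"
createdb -U postgres -O ownername -T originaldb newdb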
To clone an existing database with Postgres, you can do this:
/* KILL ALL EXISTING CONNECTION FROM ORIGINAL DB (sourcedb)*/
SELECT pg_terminate_backend(pg_stat_activity.pid) FROM pg_stat_activity
WHERE pg_stat_activity.datname = 'SOURCE_DB' AND pid <> pg_backend_pid();
/* CLONE DATABASE TO NEW ONE(TARGET_DB) */
CREATE DATABASE TARGET_DB WITH TEMPLATE SOURCE_DB OWNER USER_DB;
It will kill all the connections to the source db, avoiding the error
ERROR: source database "SOURCE_DB" is being accessed by other users
In production environment, where the original database is under traffic, I'm simply using:
pg_dump production-db | psql test-db
Don't know about pgAdmin, but pg_dump gives you a dump of the database in SQL. You only need to create a database with the same name and do
psql mydatabase < mydatabase.dump
to restore all of the tables, their data and all access privileges.
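A minimal end-to-end sketch of that approach (the file and database names are just examples):
# dump the source database to a plain SQL file
pg_dump mydatabase > mydatabase.dump
# create the target database (here with a new name) and load the dump into it
createdb mydatabase_copy
psql mydatabase_copy < mydatabase.dump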
First, sudo as the database user:
sudo su postgres
Go to PostgreSQL command line:
psql
Create the new database, give the rights and exit:
CREATE DATABASE new_database_name;
GRANT ALL PRIVILEGES ON DATABASE new_database_name TO my_user;
\q
Copy structure and data from the old database to the new one:
pg_dump old_database_name | psql new_database_name
In pgAdmin you can make a backup from your original database, and then just create a new database and restore from the backup just created:
Right click the source database, Backup... and dump to a file.
Right click, New Object, New Database... and name the destination.
Right click the new database, Restore... and select your file.
Copying an "under load" db
I pieced this approach together from the examples above. I'm working on an "under load" server and got the error below when I attempted the approach from #zbyszek. I was also after a "command line only" solution.
createdb: database creation failed: ERROR: source database "exampledb" is being accessed by other users.
Here's what worked for me (commands prepended with nohup to move output into a file and protect against a server disconnect):
nohup pg_dump exampledb > example-01.sql
createdb -O postgres exampledbclone_01   # my user is "postgres"
nohup psql exampledbclone_01 < example-01.sql
What's the correct way to copy entire database (its structure and data) to a new one in pgAdmin?
Answer:
CREATE DATABASE newdb WITH TEMPLATE originaldb;
Tried and tested.
Here's the whole process of copying over a database using only the pgAdmin4 GUI (via backup and restore).
Postgres comes with pgAdmin4. If you use macOS, you can press CMD+SPACE and type pgadmin4 to run it. This will open up a browser tab in Chrome.
Steps for copying
1. Create the backup
Do this by right-clicking the database -> "Backup..."
2. Give the file a name.
Like test12345. Click Backup. This creates a binary dump file; it's not in .sql format.
3. See where it downloaded
There should be a popup at the bottom right of your screen. Click "more details" to see where your backup was saved.
4. Find the location of the downloaded file
In this case, it's /users/vincenttang
5. Restore the backup from pgAdmin
Assuming you did steps 1 to 4 correctly, you'll have a binary backup file. There might come a time when a coworker wants to use your backup file on their local machine. Have said person go to pgAdmin and restore.
Do this by right-clicking the database -> "Restore..."
6. Select file finder
Make sure to select the file location manually, DO NOT drag and drop a file onto the uploader fields in pgAdmin, because you will run into permission errors. Instead, find the file you just created:
7. Find said file
You might have to change the filter at the bottom right to "All files". Find the file from step 4, then hit the bottom-right "Select" button to confirm.
8. Restore said file
You'll see this page again, with the location of the file selected. Go ahead and restore it
9. Success
If all is good, a popup at the bottom right should indicate a successful restore. You can navigate over to your tables to see if the data has been restored properly on each table.
10. If it wasn't successful:
Should step 9 fail, try deleting your old public schema on your database. Go to "Query Tool"
Execute this code block:
DROP SCHEMA public CASCADE; CREATE SCHEMA public;
Now try steps 5 to 9 again, it should work out
EDIT - Some additional notes: update pgAdmin4 if you are getting an error along the lines of "archiver header 1.14 unsupported version" during restore.
From the documentation, using createdb or CREATE DATABASE with templates is not encouraged:
Although it is possible to copy a database other than template1 by specifying its name as the template, this is not (yet) intended as a general-purpose "COPY DATABASE" facility. The principal limitation is that no other sessions can be connected to the template database while it is being copied. CREATE DATABASE will fail if any other connection exists when it starts; otherwise, new connections to the template database are locked out until CREATE DATABASE completes.
pg_dump or pg_dumpall is a good way to go for copying a database AND ALL THE DATA. If you are using a GUI like pgAdmin, these commands are called behind the scenes when you execute a backup command. Copying to a new database is done in two phases: backup and restore.
pg_dumpall saves all of the databases on the PostgreSQL cluster. The disadvantage of this approach is that you end up with a potentially very large text file full of the SQL required to create the databases and populate the data. The advantage is that you get all of the roles (permissions) for the cluster for free. To dump all databases, run this from the superuser account:
pg_dumpall > db.out
and to restore
psql -f db.out postgres
pg_dump has some compression options that give you much smaller files. I have a production database I backup twice a day with a cron job using
pg_dump --create --format=custom --compress=5 --file=db.dump mydatabase
where compress is the compression level (0 to 9) and create tells pg_dump to add commands to create the database. Restore (or move to new cluster) by using
pg_restore -d newdb db.dump
where newdb is the name of the database you want to use.
Other things to think about
PostgreSQL uses ROLES for managing permissions. These are not copied by pg_dump. Also, we have not dealt with the settings in postgresql.conf and pg_hba.conf (if you're moving the database to another server). You'll have to figure out the conf settings on your own. But there is a trick I just discovered for backing up roles. Roles are managed at the cluster level and you can ask pg_dumpall to backup just the roles with the --roles-only command line switch.
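A minimal sketch of that roles trick (the file name is just an example):
# dump only the cluster-wide roles
pg_dumpall --roles-only > roles.sql
# replay them on the target cluster before restoring the databases
psql -f roles.sql postgres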
For those still interested, I have come up with a bash script that does (more or less) what the author wanted. I had to make a daily business-database copy on a production system; this script seems to do the trick. Remember to change the database name/user/pw values.
#!/bin/bash

if [ $# -ne 1 ]
then
    echo "Usage: `basename $0` {tar.gz database file}"
    exit 65
fi

if [ -f "$1" ]
then
    # tar -v prints the name of the extracted file, which we capture
    EXTRACTED=`tar -xzvf "$1"`
    echo "using database archive: $EXTRACTED"
else
    echo "file $1 does not exist"
    exit 1
fi

PGUSER=dbuser
PGPASSWORD=dbpw
export PGUSER PGPASSWORD

datestr=`date +%Y%m%d`
dbname="dbcpy_$datestr"
createdbcmd="CREATE DATABASE $dbname WITH OWNER = postgres ENCODING = 'UTF8' TABLESPACE = pg_default LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8' CONNECTION LIMIT = -1;"
dropdbcmd="DROP DATABASE $dbname"

echo "creating database $dbname"
psql -c "$createdbcmd"
rc=$?
if [[ $rc != 0 ]] ; then
    rm -rf "$EXTRACTED"
    echo "error occurred while creating database $dbname ($rc)"
    exit $rc
fi

echo "loading data into database"
psql $dbname < $EXTRACTED > /dev/null
rc=$?
rm -rf "$EXTRACTED"
if [[ $rc != 0 ]] ; then
    psql -c "$dropdbcmd"
    echo "error occurred while loading data to database $dbname ($rc)"
    exit $rc
fi

echo "finished OK"
PostgreSQL 9.1.2:
$ createdb new_db_name -T orig_db_name -O db_user
To create a database dump:
cd /var/lib/pgsql/
pg_dump database_name > database_name.out
To restore a database dump:
psql -d template1
CREATE DATABASE database_name WITH ENCODING 'UTF8' LC_CTYPE 'en_US.UTF-8' LC_COLLATE 'en_US.UTF-8' TEMPLATE template0;
CREATE USER role_name WITH PASSWORD 'password';
ALTER DATABASE database_name OWNER TO role_name;
ALTER USER role_name CREATEDB;
GRANT ALL PRIVILEGES ON DATABASE database_name to role_name;
CTRL+D (log out from the psql console)
cd /var/lib/pgsql/
psql -d database_name -f database_name.out
If the database has open connections, this script may help. I use this to create a test database from a backup of the live production database every night. This assumes that you have a .sql backup file from the production db (I do this within Webmin).
#!/bin/sh
dbname="desired_db_name_of_test_environment"
username="user_name"
fname="/path to /ExistingBackupFileOfLive.sql"
dropdbcmd="DROP DATABASE $dbname"
createdbcmd="CREATE DATABASE $dbname WITH OWNER = $username "
export PGPASSWORD=MyPassword
echo "**********"
echo "** Dropping $dbname"
psql -d postgres -h localhost -U "$username" -c "$dropdbcmd"
echo "**********"
echo "** Creating database $dbname"
psql -d postgres -h localhost -U "$username" -c "$createdbcmd"
echo "**********"
echo "** Loading data into database"
psql -h localhost -U "$username" -d "$dbname" -a -f "$fname"
Using pgAdmin, disconnect the database that you want to use as a template. Then select it as the template when creating the new database; this avoids getting the "is being accessed by other users" error.
pgAdmin4:
1. Select the DB you want to copy and disconnect it:
Right-click -> "Disconnect DB"
2. Create a new DB next to the old one:
Give it a name.
In the "Definition" tab, select the disconnected database as the Template (dropdown menu).
Hit Create, and just left-click on the new DB to reconnect.
If you want to copy a whole schema, you can make a pg_dump with the following command:
pg_dump -h database.host.com -d database_name -n schema_name -U database_user --password
And when you want to import that dump, you can use:
psql "host=database.host.com user=database_user password=database_password dbname=database_name options=--search_path=schema_name" -f sql_dump_to_import.sql
More info about connection strings: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING
Or just combine it into a one-liner:
pg_dump -h database.host.com -d postgres -n schema_name -U database_user --password | psql "host=database.host.com user=database_user password=database_password dbname=database_name options=--search_path=schema_name"
Open the main window in pgAdmin and then open another Query Tool window.
In the main window in pgAdmin, disconnect the "template" database that you want to use as a template.
Go to the Query Tool window and run the 2 queries below.
SELECT pg_terminate_backend(pg_stat_activity.pid)
FROM pg_stat_activity
WHERE pg_stat_activity.datname = 'TemplateDB' AND pid <> pg_backend_pid();
(The above SQL statement will terminate all active sessions on TemplateDB; you can then select it as the template to create the new TargetDB database. This avoids getting the "is being accessed by other users" error.)
CREATE DATABASE "TargetDB"
WITH TEMPLATE='TemplateDB'
CONNECTION LIMIT=-1;
New versions of pgAdmin (definitely 4.30) support creating new databases from a template. All you need to provide is the new database name and the existing template database.
CREATE DATABASE newdb WITH TEMPLATE originaldb OWNER dbuser;
If you are using Ubuntu:
Option 1:
createdb -O Owner -T old_db_name new_db_name
Option 2:
createdb test_copy
pg_dump old_db_name | psql test_copy
Try this:
CREATE DATABASE newdb WITH ENCODING='UTF8' OWNER=owner TEMPLATE=templatedb LC_COLLATE='en_US.UTF-8' LC_CTYPE='en_US.UTF-8' CONNECTION LIMIT=-1;
gl XD