I am building an API that is developed by multiple developers, and we need to share the same database contents for testing purposes. We are using UUIDs, so it would be ideal if we all used the same UUIDs. I can't seem to find a way to export the db contents as plain executable SQL, preferably with INSERT statements. No DROP TABLE statements, no recreation of the database.
I would like the end result to look something like:
INSERT INTO public.bla_bla(
id, bla_bla, bla_bla1, bla_bla2, bla_bla3, bla_bla4)
VALUES (?, ?, ?, ?, ?, ?);
INSERT INTO public.bla_bla(
id, bla_bla, bla_bla1, bla_bla2, bla_bla3, bla_bla4)
VALUES (?, ?, ?, ?, ?, ?);
...
I am using pgAdmin 4 as the UI, but I also have DBeaver.
I have tried using the Backup wizard to export the data.
On the database - does not produce a result if only data is selected. If the "Sections" category sliders are used instead, the result includes DROP statements, which are not wanted, and no readable INSERT statements.
On the schema - same as above.
On the table - produces a CSV file, which is not ideal.
I tried following the steps here, but they do not yield the result I need:
How to export Postgres schema/data to plain SQL in PgAdmin 4
At this point I am considering just doing it by hand.
Use pg_dump
pg_dump -Fp -d your_database -U your_db_user --column-inserts --data-only -f db_dump.sql
--data-only will skip the creation of the CREATE TABLE statements (or any other DDL).
If you want, you can add --rows-per-insert <nnnn> to only create a single INSERT statement for multiple rows.
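For example, a sketch combining these flags (database and user names are placeholders; --rows-per-insert requires pg_dump from PostgreSQL 12 or later, and if your version rejects the flag combination, drop --column-inserts):
pg_dump -Fp -d your_database -U your_db_user --column-inserts --data-only --rows-per-insert=1000 -f db_dump.sql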
You can try pg_dump with the plain-text format option (--format=plain) and --column-inserts.
For more details please read here
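A minimal sketch of that invocation (mydb is a placeholder database name):
pg_dump --format=plain --column-inserts --data-only mydb > db_dump.sql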
Related
I have to insert a lot of rows (over 100) from a PostgreSQL db into an Oracle db.
I know a lot of solutions (writing to Oracle using oracle_fdw, or creating a CSV file and loading it with SQL*Loader), but I want a very fast solution: a SQL script.
I know it is possible to create a SQL script with this command:
pg_dump --table=mytable --data-only --column-inserts mydb > data.sql
and then importing into the Oracle db is easy.
I need something like this, but with a difference: I want to export into data.sql only some columns and only a slice of the rows. I know this is possible, but only in CSV format:
psql -c "copy(SELECT columns1,col2,col3... FROM mytable offset 3226 rows fetch first 100 rows only) to stdout" > dump.csv
Is something like this possible, but in SQL format?
Solution found.
At first glance, a nice way is to create a view:
CREATE view foo2 AS
SELECT col1,col2,col4 FROM mytable
offset 3226 rows fetch first 129 rows only;
You export the view and voilà... an empty file!
This is because the pg_dump man page says:
It will not dump the contents of views or materialized views, and the contents of foreign tables will only be dumped if the corresponding foreign server is specified with --include-foreign-data.
So instead we create a temporary helper table (a regular table that we will drop afterwards):
CREATE table foo2 AS
SELECT col1,col2,col4 FROM mytable
offset 3226 rows fetch first 129 rows only;
and export it to a SQL script with pg_dump:
pg_dump --table=foo2 --data-only --column-inserts mydb > mydata.sql
After importing into the other db (check the values first),
we can drop the temporary table
drop table foo2;
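One caveat worth noting: the generated statements will target foo2 rather than the original table name, so you may need to rename the target before the Oracle import. A hedged sketch with sed (the exact schema qualification depends on your pg_dump version):
sed 's/INSERT INTO public\.foo2/INSERT INTO mytable/' mydata.sql > mydata_oracle.sql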
I'd like to get a hash for data in an entire table. I need to compare two databases after migration to validate that the data migration was successful. Is it possible to reliably and reproducibly generate a hash for an entire table in a database?
You can do this from the command line (replacing of course my_database and my_table):
psql my_database -c 'copy my_table to stdout' |sha1sum
If you want to use a query to limit columns, add ordering, etc., just modify the query (ordering matters: without ORDER BY the row order, and hence the hash, is not guaranteed to be reproducible):
psql my_database -c 'copy (select * from my_table order by my_id_column) to stdout' |sha1sum
Note that this does not hash anything except the column data. No schema information, constraints, indexes, metadata, permissions, etc.
Note also that sha1sum is an arbitrary choice of hashing program; you can pipe this to any program that generates a hash. Common alternatives are sha256sum and md5sum.
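For example, to validate a migration, run the same ordered COPY on both sides and compare the digests (db_old, db_new, and my_id_column are placeholders):
psql db_old -c 'copy (select * from my_table order by my_id_column) to stdout' | sha256sum
psql db_new -c 'copy (select * from my_table order by my_id_column) to stdout' | sha256sum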
I have a Postgres node in a flow that inserts URL records. I am trying to prevent them from being escaped, as I need the original URL.
INSERT INTO links (id,url,created_at, updated_at)
VALUES (1,'{{msg.url}}','05/08/2020','05/08/2020');
It creates this in the database:
http:&#x2F;&#x2F;abc.com&#x2F;abcdxc&#x2F;doc&#x2F;doc&#x2F;processing.html
I need the URL unescaped in the database so I can query for it.
Use 3 braces rather than 2 in the Mustache template; triple braces tell Mustache to output the raw value without HTML escaping:
INSERT INTO links (id,url,created_at, updated_at) VALUES (1,'{{{msg.url}}}','05/08/2020','05/08/2020');
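For illustration, a sketch of Mustache's behavior with a hypothetical msg.url:
msg.url = http://abc.com/x
{{msg.url}} renders as http:&#x2F;&#x2F;abc.com&#x2F;x (HTML-escaped)
{{{msg.url}}} renders as http://abc.com/x (raw)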
I am trying to do a table data dump using pg_dump, something like this:
pg96\bin\pg_dump ... --format plain --section data --column-inserts --file obj.account.backup --table obj.account database_xyz
Instead of getting
INSERT INTO obj.account(name, email, password) VALUES ('user1','email1','password1');
INSERT INTO obj.account(name, email, password) VALUES ('user2','email2','password2');
I would like to get
INSERT INTO obj.account (name, email, password) VALUES
('user1','email1','password1'),
('user2','email2','password2');
Is there a way to do this without any non-PostgreSQL post-processing?
There is no way to get INSERT statements like that with pg_dump.
Since PostgreSQL 12 you can use pg_dump with --rows-per-insert=nrows, see https://www.postgresql.org/docs/12/app-pgdump.html
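A sketch adapted to the command from the question (assuming pg_dump from PostgreSQL 12 or later, which can still dump an older server; 1000 is an arbitrary batch size, and if your version rejects the flag combination, drop --column-inserts):
pg_dump --format plain --section data --column-inserts --rows-per-insert=1000 --file obj.account.backup --table obj.account database_xyz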
I'm aware that this is an old question, but I wanted to mention this in case somebody else (like me) finds it while searching for a solution. There are cases where COPY can't be used, and for bigger data sets a single multi-row INSERT statement is much faster to import than one INSERT per row.
I would like to take records from a table in a SOURCE database and insert them into a table in a DESTINATION database on a DIFFERENT server, with different login credentials.
Could someone please provide pointers on how this can be done with Sybase T-SQL? Or is this not possible?
You can use a proxy table, as below:
-- Register the remote server (replace with its IP or hostname, and port)
exec sp_addserver 'SvrName', null, '[ip or hostname]:port'
-- Map a local login to the credentials on the remote server
exec sp_addexternlogin 'SvrName', LocalUserName, ExternalUserName, ExternalPassword
-- Create a local proxy table that points at the remote source table
create proxy_table proxy_src_tab at 'SvrName.ExternalDb.db_owner.src_table'
-- Copy the rows through the proxy into the local destination table
insert into dest_tab
select column1,..., columnN from proxy_src_tab
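As a quick sanity check after the copy (a sketch using the names from the answer above), compare the row counts on both sides:
-- Row counts should match once the insert completes
select count(*) from proxy_src_tab
select count(*) from dest_tab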