How to convert a postgres database to sqlite - postgresql

We're working on a website, and when we develop locally (one of us on Windows) we use sqlite3, but on the server (Linux) we use Postgres. We'd like to be able to import the production database into our development process, so I'm wondering if there is a way to convert a Postgres database dump into something sqlite3 can understand (just feeding it Postgres's dumped SQL gave many, many errors). Or would it be easier just to install Postgres on Windows? Thanks.

I found this blog entry which guides you through these steps:
Create a dump of the PostgreSQL database.
ssh -C username@hostname.com pg_dump --data-only --inserts YOUR_DB_NAME > dump.sql
Remove/modify the dump.
Remove the lines starting with SET
Remove the lines starting with SELECT pg_catalog.setval
Replace true with 't'
Replace false with 'f'
Add BEGIN; as the first line and END; as the last line
Recreate an empty development database: bundle exec rake db:migrate
Import the dump.
sqlite3 db/development.sqlite3
sqlite> delete from schema_migrations;
sqlite> .read dump.sql
Of course, connecting via ssh and creating a new database using rake are optional. The dump cleanup can also be scripted; see the sketch below.
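If you would rather not edit the dump by hand, here is a rough sketch of how the cleanup steps above could be scripted. It assumes GNU sed and a dump file named dump.sql, and a blanket true/false replacement can also touch data inside string values, so inspect the result before importing:
#!/bin/bash
# Drop SET lines and sequence setval calls, and rewrite booleans for SQLite
sed -i \
  -e '/^SET /d' \
  -e '/^SELECT pg_catalog\.setval/d' \
  -e "s/\btrue\b/'t'/g" \
  -e "s/\bfalse\b/'f'/g" \
  dump.sql
# Wrap the whole import in a single transaction
echo 'BEGIN;' | cat - dump.sql > dump.tmp && mv dump.tmp dump.sql
echo 'END;' >> dump.sql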

STEP1: make a dump of your database structure and data
pg_dump --create --inserts -f myPgDump.sql \
-d myDatabaseName -U myUserName -W
(-W just forces a password prompt; pg_dump does not accept the password itself as a command-line argument)
STEP2: delete everything except the CREATE TABLE and INSERT statements from myPgDump.sql (using a text editor, or see the command-line sketch after these steps)
STEP3: initialize your SQLite database, passing it the structure and data from your Postgres dump
sqlite3 myNewSQLiteDB.db -init myPgDump.sql
STEP4: use your database ;)
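If the dump is too big for a text editor, STEP2 can be approximated on the command line. This is only a hypothetical filter (it assumes the default pg_dump layout, where every CREATE TABLE block ends with ");" on its own line, and the intermediate file names are made up), so check the output before using it:
# Keep the CREATE TABLE blocks and INSERT statements, drop everything else
awk '/^CREATE TABLE/{keep=1} keep{print} /^\);$/{keep=0}' myPgDump.sql > schema.sql
grep '^INSERT INTO' myPgDump.sql > data.sql
cat schema.sql data.sql > myPgDumpClean.sql
# Drop schema prefixes such as "public.", which SQLite will not understand
sed -i 's/public\.//g' myPgDumpClean.sql
sqlite3 myNewSQLiteDB.db -init myPgDumpClean.sql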

Taken from https://stackoverflow.com/a/31521432/1680728 (upvote there):
The sequel gem makes this a very relaxing procedure:
First install Ruby, then install the gem by running gem install sequel.
In case of sqlite, it would be like this: sequel -C postgres://user@localhost/db sqlite://db/production.sqlite3
Credits to @lulalala.
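For reference, a hypothetical end-to-end invocation might look like this (it assumes the pg and sqlite3 adapter gems are installed alongside sequel, and that the Postgres user needs a password, passed in the connection URL):
gem install sequel pg sqlite3
sequel -C postgres://user:password@localhost/db sqlite://db/production.sqlite3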

You can use pg2sqlite for converting pg_dump output to sqlite.
# Making dump
pg_dump -h host -U user -f database.dump database
# Making sqlite database
pg2sqlite -d database.dump -o sqlite.db
Schemas are not supported by pg2sqlite, and if your dump contains a schema prefix you need to remove it. You can use this script:
# sed 's/<schema name>\.//' -i database.dump
sed 's/public\.//' -i database.dump
pg2sqlite -d database.dump -o sqlite.db

Even though there are many very good helpful answers here, I just want to mark this as answered. We ended up going with the advice of the comments:
I'd just switch your development environment to PostgreSQL; developing on top of one database (especially one as loose and forgiving as SQLite) but deploying on another (especially one as strict as PostgreSQL) is generally a recipe for aggravation and swearing. – @mu is too short
To echo mu's response, DON'T DO THIS... DON'T DO THIS... DON'T DO THIS. Develop and deploy on the same thing. It's bad engineering practice to do otherwise. – @Kuberchaun
So we just installed postgres on our dev machines. It was easy to get going and worked very smoothly.

In case one needs a more automated solution, here's a head start:
#!/bin/bash
table_name=TABLENAMEHERE
PGPASSWORD="PASSWORD" /usr/bin/pg_dump --file "results_dump.sql" --host "yourhost.com" --username "username" --no-password --verbose --format=p --create --clean --disable-dollar-quoting --inserts --column-inserts --table "public.${table_name}" "memseq"
# Some clean ups
perl -0777 -i.original -pe "s/.+?(INSERT)/\1/is" results_dump.sql
perl -0777 -i.original -pe "s/--.+//is" results_dump.sql
# Remove public. prefix from table name
sed -i "s/public.${table_name}/${table_name}/g" results_dump.sql
# fix binary blobs
sed -i "s/'\\\\x/x'/g" results_dump.sql
# use transactions to make it faster
echo 'BEGIN;' | cat - results_dump.sql > temp && mv temp results_dump.sql
echo 'END;' >> results_dump.sql
# clean the current table
sqlite3 results.sqlite3 "DELETE FROM ${table_name};"
# finally apply changes
sqlite3 results.sqlite3 < results_dump.sql && \
rm results_dump.sql && \
rm results_dump.sql.original

When I faced the same issue I did not find any useful advice on the Internet. My source PostgreSQL db had a very complicated schema.
You just need to manually remove everything from your dump file besides the table creation and insert statements.
More details - here

It was VERY easy for me to do using the taps gem as described here:
http://railscasts.com/episodes/342-migrating-to-postgresql
And I've started using Postgres.app on my Mac (no install needed; just drop the app in your Applications directory, although you might have to add one line to your PATH environment variable as described in the documentation), with Induction.app as a GUI tool to view/query the database.

Related

How To Restore Specific Schema From Dump file in PostgreSQL?

I have a dump file (size around 5 GB) which is taken via this command:
pg_dump -U postgres -p 5440 MYPRODDB > MYPRODDB_2022.dmp
The database consists of multiple schemas (let's say schemas A, B, C and D), but I need to restore only one schema (schema A).
How can I achieve that? The command below didn't work and gave an error:
pg_restore -U postgres -d MYPRODDB -n A -p 5440 < MYPRODDB_2022.dmp
pg_restore: error: input file appears to be a text format dump. Please use psql.
You cannot do that with a plain format dump. That's one of the reasons why you always use a different format unless you need an SQL script.
If you want to stick with a plain text dump:
pg_dump -U postgres -p 5440 -n A MYPRODDB > MYPRODDB_2022.dmp
psql -U postgres -d MYPRODDB -p 5440 -f MYPRODDB_2022.dmp
Though dumping back over the same database as above will throw errors unless you use --clean (or its short form -c) to create commands to drop existing objects before restoring them:
-c
--clean
Output commands to clean (drop) database objects prior to outputting the commands for creating them. (Unless --if-exists is also specified, restore might generate some harmless error messages, if any objects were not present in the destination database.)
This option is ignored when emitting an archive (non-text) output file. For the archive formats, you can specify the option when you call pg_restore.
Probably also a good idea to throw in --if-exists:
--if-exists
Use conditional commands (i.e., add an IF EXISTS clause) when cleaning database objects. This option is not valid unless --clean is also specified.
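Putting the recommended approach together, a sketch using the names from the question: dump in the custom format, then let pg_restore pick out just schema A.
# Custom-format dump of the whole database
pg_dump -U postgres -p 5440 -Fc -f MYPRODDB_2022.dump MYPRODDB
# Restore only schema A, dropping its objects first if they already exist
pg_restore -U postgres -p 5440 -d MYPRODDB --clean --if-exists -n A MYPRODDB_2022.dump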

Clean restore from PostgreSQL dump

I want to restore a database from a backup and overwrite all the data that is there with the backup data.
My current command is like this:
pg_restore -h localhost -U postgres -d dbName -v autobackup_file.dmp
How do I restore and overwrite all the data?
I've seen the -c option; is that the correct way?
And where should I put it in my command?
-c can be anywhere, e.g. immediately after pg_restore.
It will DROP all restored objects before restoring them, but it will not drop any objects that are not in the dump.
To drop and recreate the whole database so you get a clean copy, you can use -C -c.
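Based on the command in the question, a sketch of both variants (assuming the dump was made in a non-plain format, as pg_restore requires):
# Drop and re-create each restored object inside dbName
pg_restore -h localhost -U postgres -c --if-exists -d dbName -v autobackup_file.dmp
# Drop and re-create the whole database; with -C you connect to a maintenance
# database (here: postgres) and the database named in the dump is re-created
pg_restore -h localhost -U postgres -C -c -d postgres -v autobackup_file.dmp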

How to import a sample DB into postgres?

According to a website I can download their sample file dvdrental.zip, but
The database file is in zip format (dvdrental.zip) so you need to extract it to dvdrental.tar
First of all, what is a tar? I thought it had to be tar.gz to be compressed? I don't even know how to create a "tar" by itself. I tried:
tar -zcvf dvdrental.tar.gz dvdrental
and
tar -cf dvdrental.tar dvdrental
I try to import with pgAdmin 4 and I get either:
pg_restore: [archiver] input file does not appear to be a valid archive
or
pg_restore: [tar archiver] could not find header for file "toc.dat" in tar archive
respectively. Now, don't ask me why a popular tutorial site created a file in the wrong format. But, can you tell me how to repackage this file so I can use it as a sample DB?
Using macOS 10.12.4, Postgres 9.6, and pgAdmin 4 (not sure if it's in beta? It crashes and does all kinds of nonsensical window movement and highlighting).
I extracted the .zip archive first. Then I opened pgAdmin and followed the guide "Load the DVD Rental database using pgAdmin":
https://www.postgresqltutorial.com/load-postgresql-sample-database/
Pay attention to changing the 'Format' field from 'Custom or Tar' to 'Directory'. Then you should be able to restore the DB.
If you look into the .tar archive you will find restore.sql, which says at the top:
-- File paths need to be edited. Search for $$PATH$$ and
-- replace it with the path to the directory containing
-- the extracted data files.
So to create the sample DB you can extract the .tar contents somewhere and run a single command:
sed -e 's/\$\$PATH\$\$/\/path\/to\/extracted\/files/g' restore.sql | psql
Or
sed -e 's/\$\$PATH\$\$/\/path\/to\/extracted\/files/g' restore.sql > r.sql
and execute the contents of r.sql using pgAdmin.
Get the sample dataset from the link you cited and save it somewhere.
Assuming Postgres is installed and running, do the following:
Run createdb dvdrental
Run pg_restore -d dvdrental ./dvdrental where "./dvdrental" is the path to the downloaded and unzipped file.
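In other words, a quick sketch assuming you unzipped the archive into the current directory (pg_restore can read the tar-format archive directly):
unzip dvdrental.zip            # produces dvdrental.tar
createdb dvdrental
pg_restore -U postgres -d dvdrental ./dvdrental.tar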
To create the sample DB in Postgres, follow these steps:
1.- Create a directory and enter it:
mkdir -p /tmp/dvdrental && cd /tmp/dvdrental
2.- Download zip file dvdrental.zip:
wget https://www.postgresqltutorial.com/wp-content/uploads/2019/05/dvdrental.zip
3.- Uncompress the .zip file and then the .tar:
unzip dvdrental.zip
tar -xvf dvdrental.tar
4.- Replace the $$PATH$$ variable at execution time and review the result with grep:
sed -e 's/\$\$PATH\$\$/\/tmp\/dvdrental/g' restore.sql | grep --color dvdrental
5.- Import the sample DB for a specific host (localhost), port (5433), user (db) and database name (postgres):
sed -e 's/\$\$PATH\$\$/\/tmp\/dvdrental/g' restore.sql | psql -h localhost -p 5433 -U db -d postgres
Finally, the successful import can be verified with pgAdmin III.

Is it possible to restore PostgreSQL data without overwriting existing data?

I have 2 backup files and I want to merge all of this data into my database.
I tried using pg_restore, but when I use it with the second backup file I lose the first data set.
Here is my command:
pg_restore -U postgres -c --if-exists -d ravpacheco_db "C:\Users\ravpacheco\xpto1.backup"
I also searched through all the option flags for the pg_restore command but couldn't find anything useful.
My problem was with my pg_dump command. If I create a backup file with data only, my restore works.
Now I'm using this pg_dump command:
pg_dump --column-inserts --data-only --table=<table> <database>
I resolved my problem using this Stack Overflow thread.
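Putting that together, a sketch of a merge-friendly dump and restore (the table and database names are placeholders; rows whose primary keys already exist in the target will raise errors rather than overwrite anything):
# Dump only the rows, as plain INSERT statements; no DROP/TRUNCATE is emitted
pg_dump --column-inserts --data-only --table=mytable source_db > mytable_data.sql
# Apply to the target database: existing rows stay, the dumped rows are appended
psql -U postgres -d ravpacheco_db -f mytable_data.sql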

PostgreSQL: How do I backup database with name A and load it to database with name B?

I have two databases on the same server, one named A and one named B. Both databases have the same structure. I want to empty database B and load it with the data from database A. What is the best way to do this?
I have tried taking a backup of database A in plain format, then opening the resulting SQL file, replacing every occurrence of 'A' with 'B', and running the SQL script. This worked, but I think there should be an easier way to move data from one database to another. Is there?
I use pgAdmin III as my tool, but this is not necessary.
This is my first post here; I hope the question is relevant and structured well enough. I tried Google first but found it hard to find anyone with the same question.
Thanks in advance!
/David
SOLUTION: After help from Craig, this is how I did it:
pg_dump -Fc -a -f a.dbbackup A
psql -d B -c 'TRUNCATE table1, table2, ..., tableX CASCADE'
pg_restore a.dbbackup -d B -c (not sure if the -c was necessary)
Backup:
pg_dump -Fc -f a.dbbackup a
Restore:
psql -c 'CREATE DATABASE b;'
pg_restore --dbname b a.dbbackup
Use the -U, -h etc options as required to connect to the correct host as the correct user with permissions to dump, create and restore the DB. See the docs for psql, pg_dump and pg_restore for more info (they all take the same options for connection control).
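For example, a sketch of the full sequence with explicit connection options (the host and user names here are placeholders):
pg_dump    -h dbhost.example.com -U someuser -Fc -f a.dbbackup a
psql       -h dbhost.example.com -U someuser -c 'CREATE DATABASE b;'
pg_restore -h dbhost.example.com -U someuser --dbname b a.dbbackup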