Assuming I have the following table, functions and data
create table people (
id bigserial primary key,
age int,
height int,
weight int
);
create or replace function add_person (p_age int, p_height int, p_weight int);
create or replace function get_person_by_age (p_age int);
create or replace function get_person_by_height (p_height int);
create or replace function get_person_by_weight (p_weight int);
select add_person(20,180,100);
select add_person(20,181,101);
select add_person(20,182,102);
select add_person(20,183,103);
I am currently testing my database purely in bash so I would have these very long files like so (pseudocode only)
#!/bin/bash
# Insert data
sudo -u postgres psql -d "$db_name" -c "select add_person(20,180,100)"
sudo -u postgres psql -d "$db_name" -c "select add_person(20,181,101)"
sudo -u postgres psql -d "$db_name" -c "select add_person(20,182,102)"
sudo -u postgres psql -d "$db_name" -c "select add_person(20,183,103)"
# Retrieve data
persons=$(sudo -u postgres psql -d "$db_name" -c "select get_person_by_age(20)")
# Count number of rows and make sure it matches the expected outcome
# (You have to manually strip header and footer lines but ignore for now)
if [ $(echo "$persons" | wc -l) -ne 4 ]
then
echo "Fail"
exit 1
fi
My test scripts have grown too large and there are so many things I am trying to catch (actions which should throw errors but do not, i.e. false positives; actions which should not throw errors but do, i.e. false negatives; actions which throw errors other than the one they are supposed to; etc.). More importantly, the tests are incredibly slow, as bash keeps establishing a new connection to Postgres for every command.
The reason I am not doing this in PGSQL is because the logic of queries can grow very complex as my db queries have many filters.
Is there a better existing solution to my problem? I looked at pgTAP, but the documentation for that is horrendous.
First of all, I think you could gain a lot of speed by running multiple commands in one client initialization. The "-c" flag only runs one command, and there is some small overhead in starting a connection that really adds up if you are running many commands, as you said.
For example, you could do something like this for your insert commands:
sudo -u postgres psql -d "$db_name" << EOF
select add_person(20,180,100);
select add_person(20,181,101);
select add_person(20,182,102);
select add_person(20,183,103);
EOF
Alternatively, you could list all the commands you want to run in a file and run them with "--file"/"-f":
sudo -u postgres psql -d $db_name -f "my_add_commands.psql"
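Putting the two ideas together, here is a minimal sketch of one test step done with a single connection per phase; it reuses the add_person/get_person_by_age placeholders from your question, and ON_ERROR_STOP, --single-transaction, -t and -A are standard psql flags:
#!/bin/bash
# Run all inserts over one connection; ON_ERROR_STOP makes psql exit non-zero on the first error,
# and --single-transaction rolls everything back if any statement fails.
sudo -u postgres psql -d "$db_name" -v ON_ERROR_STOP=1 --single-transaction << EOF || { echo "Insert failed"; exit 1; }
select add_person(20,180,100);
select add_person(20,181,101);
select add_person(20,182,102);
select add_person(20,183,103);
EOF

# -t (tuples only) and -A (unaligned) drop the header and footer, so wc -l counts data rows only.
row_count=$(sudo -u postgres psql -d "$db_name" -t -A -c "select get_person_by_age(20)" | wc -l)
if [ "$row_count" -ne 4 ]
then
    echo "Fail"
    exit 1
fi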
I can't speak to why you are getting false errors, but generally I find that writing complicated test scripts in bash is a pain. If possible, I would consider using any language you are familiar with that has proper try/catch logic and a robust testing library with assertions. This is personally how I write tests for SQL functions. However, you would probably also need a library that lets you connect to Postgres and run queries from that language.
There is a light psql testing library for python, but I haven't used it myself. Maybe worth a look though.
Related
I work with PostgreSQL boosted with the TimescaleDB fork (pretty impressed with its performance while it worked ;)
I have a script that downloads data, modifies it, and puts it into a CSV file.
Then a psql script is invoked to create a temp table and insert the data into the database:
psql -U postgres -d q1 -c "CREATE TABLE tmpp (time bigint NOT NULL, ask real NOT NULL, bid real NOT NULL)"
psql -U postgres -d q1 -c "\copy tmpp (time, ask, bid) from '/sth/sth.csv' delimiter ',' CSV"
psql -U postgres -d q1 -c "insert into realfun select * from tmpp"
psql -U postgres -d q1 -c "DROP TABLE tmpp"
Funny thing is that it worked for me before, but now I get an error:
ERROR: Deprecated trigger function should not be invoked
I must have messed something up, but can't figure out what it is [how original].
I will be happy to provide more details if needed.
I cannot find anything similar on Google; please advise.
It seems that the problem is that you have a newer shared library version than the extension version you have installed (Timescale is an extension, not a fork). You can fix this with ALTER EXTENSION timescaledb UPDATE.
The alter command is documented here.
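For example, a sketch against the q1 database from your commands (TimescaleDB asks for the update to be the first statement in a fresh session, and -X keeps psqlrc from running anything first):
psql -X -U postgres -d q1 -c "ALTER EXTENSION timescaledb UPDATE;"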
Currently I can almost get a CSV from psql by simply running
psql -A -t -F'\t' -c "SELECT ...;" > myfile.csv
However it returns the number of rows at the end of the file. I can fix this with head -n -1:
psql -A -t -F'\t' -c "SELECT ...;" | head -n -1 > myfile.csv
But with very large files this seems like overkill. Is there a flag in psql where I can turn off the number of records returned?
There are a number of common ways to get a CSV from PostgreSQL (see e.g. this question). However, not all of these ways are appropriate when working with Redshift, partly because Amazon Redshift is based on Postgres 8.0.2.
One can try the --pset="footer=off" option to prevent psql from outputting the number of rows. Also, please consult the 8.0.26 psql documentation.
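A sketch using the same flags as in the question (the SELECT and output file are the question's placeholders; footer is a standard psql pset variable):
psql -A -t -F'\t' --pset="footer=off" -c "SELECT ...;" > myfile.csv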
I have a series of deletes and updates on a few tables in a Postgres database I manage. It has been suggested to schedule a reindex after the series of deletes as a solution to the 10-minute next-step update freezing indefinitely (as it randomly does). The DOS help output provides this:
Usage:
reindexdb [OPTION]... [DBNAME]
Options:
-a, --all reindex all databases
-d, --dbname=DBNAME database to reindex
-e, --echo show the commands being sent to the server
-i, --index=INDEX recreate specific index only
-q, --quiet don't write any messages
-s, --system reindex system catalogs
-t, --table=TABLE reindex specific table only
--help show this help, then exit
--version output version information, then exit
Connection options:
-h, --host=HOSTNAME database server host or socket directory
-p, --port=PORT database server port
-U, --username=USERNAME user name to connect as
-w, --no-password never prompt for password
-W, --password force password prompt
We have to use version 9.1.3 as this is the corporate standard.
I have tried every option I can think of but it won't take the command to reindex:
reindexdb.exe -U username=MyUserName -W MyPassword -t table=MyDatabase.MyTable
I've also tried
reindexdb.exe -U MyUserName -W MyPassword -t MyDatabase.MyTable
and
reindexdb.exe -U MyUserName -W MyPassword -t MyTable -d MyDatabase
...but they all end with the error:
reindexdb: too many command-line arguments (first is "-t")
Does anyone have a working sample that would be able to clarify what the right syntax is?
Remove MyPassword from your arguments, and enter it in when Postgres prompts you for it.
-W simply causes Postgres to prompt for the password; it doesn't accept the password itself. You should never specify passwords on the command line anyway, as they are visible in process listings and may end up in logs.
If you need to run it non-interactively, either set the PGPASSWORD environment variable or create a pgpass file.
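For example, a non-interactive sketch using the placeholder names from the question (PGPASSWORD is read by libpq, and -w makes reindexdb fail rather than prompt):
PGPASSWORD=MyPassword reindexdb -U MyUserName -d MyDatabase -t MyTable -w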
This did it:
reindexdb.exe -d MyDatabase -U postgres -t MyTable
As @Colonel Thirty Two and @Erwin Brandstetter noted, removing the password entirely is possible through %APPDATA%\postgresql\pgpass.conf
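For reference, each line of pgpass.conf follows this format (placeholder values shown; on Linux the equivalent file is ~/.pgpass and must be chmod 600):
# hostname:port:database:username:password
localhost:5432:MyDatabase:MyUserName:MyPassword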
Any of the commands below can be forced by adding the keyword FORCE after the command.
Recreate a single index, myindex:
REINDEX INDEX myindex
Recreate all indices in a table, mytable:
REINDEX TABLE mytable
Recreate all indices in schema public:
REINDEX SCHEMA public
Recreate all indices in database postgres:
REINDEX DATABASE postgres
Recreate all indices on system catalogs in database postgres:
REINDEX SYSTEM postgres
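Since the question is about running this from a command prompt, a sketch of issuing the SQL form through psql with the question's placeholder names:
psql -U postgres -d MyDatabase -c "REINDEX TABLE MyTable;"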
I need to know how to migrate from Postgres to MonetDB. Postgres is getting slow and we are trying to change to MonetDB. Does anyone know if a script or some other tool already exists to migrate to MonetDB?
Does something equivalent to PL/pgSQL exist in MonetDB?
Do materialized views exist in MonetDB?
The following booklet may be relevant for quickly identifying some syntactic feature differences: https://en.wikibooks.org/wiki/SQL_Dialects_Reference
And the Citus performance comparison is covered in a blog post:
https://www.monetdb.org/content/citusdb-postgresql-column-store-vs-monetdb-tpc-h-shootout
Firstly, you can export data from Postgres like this:
psql -h xxxxx -U xx -p xx -d postgres -c "copy (select * from db40.xxx) to '/tmp/xxx.csv' delimiter ';'"
Secondly, you must replace PostgreSQL's \N null markers with NULL:
sed 's/\\N/NULL/g' xxx.csv >newxxx.csv
Lastly, you can use this to copy the data into MonetDB:
mclient -u monetdb -d voc -h 192.168.205.8 -p 50000 -s "COPY INTO newxxx from '/tmp/newxxx.csv' using delimiters ';';"
We're working on a website, and when we develop locally (one of us on Windows), we use sqlite3, but on the server (Linux) we use Postgres. We'd like to be able to import the production database into our development process, so I'm wondering if there is a way to convert a Postgres database dump into something sqlite3 can understand (just feeding it the dumped Postgres SQL gave many, many errors). Or would it be easier just to install Postgres on Windows? Thanks.
I found this blog entry which guides you through these steps:
Create a dump of the PostgreSQL database.
ssh -C username@hostname.com pg_dump --data-only --inserts YOUR_DB_NAME > dump.sql
Remove/modify the dump (a sed sketch automating these edits follows after the steps).
Remove the lines starting with SET
Remove the lines starting with SELECT pg_catalog.setval
Replace true with 't'
Replace false with 'f'
Add BEGIN; as the first line and END; as the last line
Recreate an empty development database: bundle exec rake db:migrate
Import the dump.
sqlite3 db/development.sqlite3
sqlite> delete from schema_migrations;
sqlite> .read dump.sql
Of course, connecting via ssh and creating a new db using rake are optional.
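A rough sketch of the step-2 edits with sed, assuming the dump.sql produced above (note the boolean replacement is blunt and may also touch string data containing the words true/false):
# drop SET lines and setval lines, then map booleans to 't'/'f'
sed -e '/^SET /d' \
    -e '/^SELECT pg_catalog.setval/d' \
    -e "s/\btrue\b/'t'/g" \
    -e "s/\bfalse\b/'f'/g" dump.sql > dump.fixed.sql
# wrap the result in a transaction
{ echo 'BEGIN;'; cat dump.fixed.sql; echo 'END;'; } > dump.sqlite.sql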
STEP1: make a dump of your database structure and data
pg_dump --create --inserts -f myPgDump.sql \
-d myDatabaseName -U myUserName -W
(-W only makes pg_dump prompt for the password; it does not take the password as an argument.)
STEP2: delete everything except CREATE TABLE and INSERT statements out of myPgDump.sql (using a text editor, or see the awk sketch after these steps)
STEP3: initialize your SQLite database, passing it the structure and data from your Postgres dump
sqlite3 myNewSQLiteDB.db -init myPgDump.sql
STEP4: use your database ;)
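If editing by hand gets tedious, here is a rough awk sketch of STEP2; it assumes pg_dump's usual layout where each CREATE TABLE block ends with a line starting with ");", and column types may still need manual tweaks for SQLite:
# keep CREATE TABLE blocks and INSERT statements, drop everything else
awk '/^CREATE TABLE/,/^\);/ {print; next} /^INSERT INTO/ {print}' myPgDump.sql > mySqliteDump.sql
sqlite3 myNewSQLiteDB.db -init mySqliteDump.sql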
Taken from https://stackoverflow.com/a/31521432/1680728 (upvote there):
The sequel gem makes this a very relaxing procedure:
First install Ruby, then install the gem by running gem install sequel.
In the case of sqlite, it would be like this: sequel -C postgres://user@localhost/db sqlite://db/production.sqlite3
Credits to @lulalala.
You can use pg2sqlite for converting pg_dump output to sqlite.
# Making dump
pg_dump -h host -U user -f database.dump database
# Making sqlite database
pg2sqlite -d database.dump -o sqlite.db
Schemas are not supported by pg2sqlite, and if your dump contains a schema then you need to remove it. You can use this script:
# sed 's/<schema name>\.//' -i database.dump
sed 's/public\.//' -i database.dump
pg2sqlite -d database.dump -o sqlite.db
Even though there are many very good, helpful answers here, I just want to mark this as answered. We ended up going with the advice from the comments:
"I'd just switch your development environment to PostgreSQL; developing on top of one database (especially one as loose and forgiving as SQLite) but deploying on another (especially one as strict as PostgreSQL) is generally a recipe for aggravation and swearing." – @mu is too short
"To echo mu's response, DON'T DO THIS..DON'T DO THIS..DON'T DO THIS. Develop and deploy on the same thing. It's bad engineering practice to do otherwise." – @Kuberchaun
So we just installed postgres on our dev machines. It was easy to get going and worked very smoothly.
In case one needs a more automatized solution, here's a head start:
#!/bin/bash
table_name=TABLENAMEHERE
PGPASSWORD="PASSWORD" /usr/bin/pg_dump --file "results_dump.sql" --host "yourhost.com" --username "username" --no-password --verbose --format=p --create --clean --disable-dollar-quoting --inserts --column-inserts --table "public.${table_name}" "memseq"
# Some clean ups
perl -0777 -i.original -pe "s/.+?(INSERT)/\1/is" results_dump.sql
perl -0777 -i.original -pe "s/--.+//is" results_dump.sql
# Remove public. prefix from table name
sed -i "s/public.${table_name}/${table_name}/g" results_dump.sql
# fix binary blobs
sed -i "s/'\\\\x/x'/g" results_dump.sql
# use transactions to make it faster
echo 'BEGIN;' | cat - results_dump.sql > temp && mv temp results_dump.sql
echo 'END;' >> results_dump.sql
# clean the current table
sqlite3 results.sqlite3 "DELETE FROM ${table_name};"
# finally apply changes
sqlite3 results.sqlite3 < results_dump.sql && \
rm results_dump.sql && \
rm results_dump.sql.original
When I faced the same issue, I did not find any useful advice on the Internet. My source PostgreSQL db had a very complicated schema.
You just need to manually remove everything from your dump file besides the table creation statements.
More details here.
It was VERY easy for me to do using the taps gem as described here:
http://railscasts.com/episodes/342-migrating-to-postgresql
And I've started using the Postgres.app on my Mac (no install needed; just drop the app into your Applications directory, although you might have to add one line to your PATH environment variable as described in the documentation), with Induction.app as a GUI tool to view/query the database.