I work with PostgreSQL boosted with the TimescaleDB fork (pretty impressed with its performance while it worked ;)
I have a script that downloads data, modifies it, and puts it into a CSV file.
Then psql is invoked to create a temporary table and insert the data into the database:
psql -U postgres -d q1 -c "CREATE TABLE tmpp (time bigint NOT NULL, ask real NOT NULL, bid real NOT NULL)"
psql -U postgres -d q1 -c "\copy tmpp (time, ask, bid) from '/sth/sth.csv' delimiter ',' CSV"
psql -U postgres -d q1 -c "insert into realfun select * from tmpp"
psql -U postgres -d q1 -c "DROP TABLE tmpp"
The funny thing is that it worked for me before, but now I get an error:
ERROR: Deprecated trigger function should not be invoked
I must have messed up something, but I can't figure out what it is [how original].
I will be happy to provide more details if needed.
I cannot find anything similar on Google; please advise.
It seems the problem is that you have a newer shared library version than the installed extension version (Timescale is an extension, not a fork). You can fix this with ALTER EXTENSION timescaledb UPDATE.
The alter command is documented here.
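A minimal sketch of checking for the mismatch and then fixing it (pg_available_extensions is a standard catalog view; TimescaleDB's docs recommend running the update as the first command in a fresh session, e.g. psql -X):
-- compare the installed catalog version with the version the shared library offers
SELECT default_version, installed_version
FROM pg_available_extensions WHERE name = 'timescaledb';
-- then bring the extension catalog up to date
ALTER EXTENSION timescaledb UPDATE;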
Assuming I have the following table, functions and data
create table people (
id bigserial primary key,
age int,
height int,
weight int
);
create or replace function add_person (p_age int, p_height int, p_weight int) returns void
    as $$ insert into people (age, height, weight) values (p_age, p_height, p_weight) $$ language sql;
create or replace function get_person_by_age (p_age int) returns setof people
    as $$ select * from people where age = p_age $$ language sql;
create or replace function get_person_by_height (p_height int) returns setof people
    as $$ select * from people where height = p_height $$ language sql;
create or replace function get_person_by_weight (p_weight int) returns setof people
    as $$ select * from people where weight = p_weight $$ language sql;
select add_person(20,180,100);
select add_person(20,181,101);
select add_person(20,182,102);
select add_person(20,183,103);
I am currently testing my database purely in bash so I would have these very long files like so (pseudocode only)
#!/bin/bash
# Insert data
sudo -u postgres psql -d $db_name -c "add_person(20,180,100)"
sudo -u postgres psql -d $db_name -c "add_person(20,181,101)"
sudo -u postgres psql -d $db_name -c "add_person(20,182,102)"
sudo -u postgres psql -d $db_name -c "add_person(20,183,103)"
# Retrieve data
persons=$(sudo -u postgres psql -d "$db_name" -c "select * from get_person_by_age(20)")
# Count number of rows and make sure it matches the expected outcome
# (You have to manually strip header and footer lines but ignore for now)
if [ "$(echo "$persons" | wc -l)" -ne 4 ]
then
echo "Fail"
exit 1
fi
My test scripts have grown too large, and there are so many things I am trying to catch (actions which should throw errors but do not, i.e. false positives; actions which should not throw errors but do, i.e. false negatives; actions which throw errors other than the one they are supposed to; etc.). More importantly, the tests are incredibly slow, because bash establishes a new connection to Postgres for every command.
The reason I am not doing this in PL/pgSQL is that the query logic can grow very complex, as my database queries have many filters.
Is there a better existing solution to my problem? I looked at pgTAP, but the documentation for it is horrendous.
First of all, I think you could gain a lot of speed by running multiple commands in one client initialization. The "-c" flag only runs one command, and there is some small overhead in starting a connection that really adds up if you are running many commands, as you said.
For example, you could do something like this for your insert commands:
sudo -u postgres psql -d "$db_name" << EOF
select add_person(20,180,100);
select add_person(20,181,101);
select add_person(20,182,102);
select add_person(20,183,103);
EOF
Alternatively, you could list all the commands you want to run in a file and run them with "--file"/"-f":
sudo -u postgres psql -d $db_name -f "my_add_commands.psql"
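If a failing statement should abort the whole run (handy in tests), psql's built-in ON_ERROR_STOP variable does that; a small sketch:
sudo -u postgres psql -d "$db_name" -v ON_ERROR_STOP=1 -f "my_add_commands.psql" || echo "a command failed"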
I can't speak to why you are getting false errors, but I generally find that writing complicated test scripts in bash is a pain. If possible, I would consider using any language you are familiar with that has proper try/catch logic and a robust testing library with assertions; this is personally how I write tests for SQL functions. However, you would probably also need a library that lets you issue and process Postgres queries from that language.
There is a light psql testing library for python, but I haven't used it myself. Maybe worth a look though.
I run a daily backup of my database using pg_dump and pg_restore, and it recently stopped working after I pushed an update.
I have a function validate_id that is just a CASE/WHEN statement used as a quick check for some of the data that has integrity issues. It looks something like this:
CREATE OR REPLACE FUNCTION validate_id(
    _string text,
    _type text
) RETURNS boolean AS
$$
SELECT
    CASE WHEN (stuff) THEN TRUE
         WHEN (other stuff) THEN TRUE
         WHEN (more stuff) THEN raise_err('Not an accepted type, the accepted types are: x y z')
         ELSE FALSE
    END
$$
LANGUAGE SQL;
Since I added this function, I dump using this command:
pg_dump -U postgres -h ipaddress -p 5432 -w -F t databaseName > backupsfolder/databaseName.tar
and restore using this command:
pg_restore -U postgres -h localhost -p 5432 -d postgres -C "backupsfolder/databaseName.tar"
As of two days ago, this throws an error:
pg_restore: error: could not execute query: ERROR: function raise_err(unknown) does not exist
I'm pretty lost on what to do. I think what might be going on is that it's trying to restore validate_id before it restores the raise_err function, which I thought was built into Postgres (I can SELECT raise_err('Hello, World');). Is this possible? Is it my CASE statement, because I need to return only booleans? All of the permissions seem correct, and restoring from previous backups works fine.
The problem is that raise_err is not schema-qualified in your function code.
This is potentially dangerous: a malicious user could create their own function raise_err and set search_path so that the wrong function is called.
Since pg_restore is typically run by a superuser, this can be a security problem. Imagine such a function being used in an index definition!
For these reasons, pg_dump and pg_restore set an empty search_path in current versions of PostgreSQL.
The solution to your problem is to explicitly use the function's schema in your SQL statement.
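For example, the call inside validate_id would become (assuming raise_err lives in the public schema):
WHEN (more stuff) THEN public.raise_err('Not an accepted type, the accepted types are: x y z')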
I ended up solving this issue by explicitly setting the search paths for both functions, raise_err() and validate_id(), to public:
ALTER FUNCTION validate_id(text,text) SET search_path=public;
ALTER FUNCTION raise_err(text,text) SET search_path=public;
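One way to verify that the settings stuck (pg_proc.proconfig holds per-function settings such as search_path):
SELECT proname, proconfig FROM pg_proc WHERE proname IN ('validate_id', 'raise_err');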
I need to know how to migrate from Postgres to MonetDB. Postgres is getting slow, and we are trying to change to MonetDB. Does anyone know if a script or some other tool already exists for migrating to MonetDB?
Does something equivalent to PL/pgSQL exist in MonetDB?
Do materialized views exist in MonetDB?
The following booklet may be relevant for quickly identifying some syntactic differences between the dialects: https://en.wikibooks.org/wiki/SQL_Dialects_Reference
And Citus performance is covered in a blog post:
https://www.monetdb.org/content/citusdb-postgresql-column-store-vs-monetdb-tpc-h-shootout
First, you can export the data from Postgres like:
psql -h xxxxx -U xx -p xx -d postgres -c "copy (select * from db40.xxx) to '/tmp/xxx.csv' delimiter ';'"
Second, you must replace the COPY null markers (\N) like:
sed 's/\\N/NULL/g' xxx.csv >newxxx.csv
Last, you can copy the data into MonetDB like:
mclient -u monetdb -d voc -h 192.168.205.8 -p 50000 -s "COPY INTO newxxx from '/tmp/newxxx.csv' using delimiters ';';"
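Alternatively, Postgres's COPY has a NULL option that emits the marker directly, which skips the sed step (a sketch with the same placeholder names as above):
psql -h xxxxx -U xx -p xx -d postgres -c "copy (select * from db40.xxx) to '/tmp/newxxx.csv' with (delimiter ';', null 'NULL')"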
I'm trying to write a pg_restore command to restore only certain tables (and their data) to my database.
Note: every command described begins with me dropping and re-creating the database and ends in: -v -x -O -j 8 -h localhost -U username -d database file.dump (For the curious, I didn't want to use --clean because the database that the dump came from has a different name.)
Since pg_restore works fine for me (with the above args), I looked at the pg_restore documentation, and tried something like this:
pg_restore -t table1 -t table2 ... (there are 121 tables I specify in this way).
However, I get errors like the following:
pg_restore: creating TABLE people
pg_restore: [archiver (db)] Error from TOC entry 123; 1234 12345 TABLE people dumped_table_username
pg_restore: [archiver (db)] could not execute query: ERROR: type "hstore" does not exist
LINE 14: extra_data hstore,
^
Command was: CREATE TABLE people (
id integer NOT NULL,
name string,
age integer,
date_of_birt...
I don't see why this would be an issue only when the -t flag is set, but it appears to be.
What's going on?
Edit: looks like this is a duplicate of pg_restore on table failing because of hstore, which was recently asked and has no accepted answer as of this time.
Apparently, pg_restore with the -t/--table flag set doesn't run CREATE EXTENSION commands that are in the dump file (because they're not technically part of that table). My problem was solved by manually running psql database -c "CREATE EXTENSION hstore;" before the pg_restore command.
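A sketch of the full workaround, reusing the flags from the original command (IF NOT EXISTS keeps the extension step idempotent):
psql -U username -d database -c "CREATE EXTENSION IF NOT EXISTS hstore;"
pg_restore -t table1 -t table2 ... -v -x -O -j 8 -h localhost -U username -d database file.dump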
How can I generate the DDL of a table programmatically on Postgresql? Is there a system query or command to do it? Googling the issue returned no pointers.
Use pg_dump with these options:
pg_dump -U user_name -h host database -s -t table_or_view_names -f table_or_view_names.sql
Description:
-s or --schema-only : Dump only the DDL / object definitions (schema), without data.
-t or --table : Dump only tables (or views or sequences) with matching names.
Examples:
# dump the DDL of the tables elon built
$ pg_dump -U elon -h localhost -s -t spacex -t tesla -t solarcity -t boring > companies.sql
Sorry if this is off topic. Just hoping to help anyone who googles "psql dump ddl" and lands on this thread.
You can use the pg_dump command to dump the contents of the database (both schema and data). The --schema-only switch will dump only the DDL for your table(s).
Why would shelling out to psql not count as "programmatically?" It'll dump the entire schema very nicely.
Anyhow, you can get data types (and much more) from the information_schema (8.4 docs referenced here, but this is not a new feature):
=# select column_name, data_type from information_schema.columns
-# where table_name = 'config';
column_name | data_type
--------------------+-----------
id | integer
default_printer_id | integer
master_host_enable | boolean
(3 rows)
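Building on that, a rough sketch that stitches information_schema columns into a bare-bones CREATE TABLE (no constraints, defaults, or indexes, which is why pg_dump remains the safer tool):
select 'CREATE TABLE ' || quote_ident(table_name) || E' (\n' ||
       string_agg('    ' || quote_ident(column_name) || ' ' || data_type,
                  E',\n' order by ordinal_position) || E'\n);'
from information_schema.columns
where table_name = 'config'
group by table_name;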
The answer is to check the source code of pg_dump and follow the switches it uses to generate the DDL. Somewhere inside the code there are a number of queries used to retrieve the metadata from which the DDL is generated.
Here is a good article on how to get meta information from the information schema: http://www.alberton.info/postgresql_meta_info.html
I saved four functions that partially mock up the behaviour of pg_dump -s, based on the \d+ metacommand. The usage would be something like:
\pset format unaligned
select get_ddl_t(schemaname,tablename) as "--" from pg_tables where tableowner <> 'postgres';
Of course, you have to create the functions first.
Working sample here at rextester