Getting a list of a table's sortable attributes, sorted by uniqueness - postgresql

I frequently use pg_dump to dump databases and compare them with diff. To get rid of most "false positives", I'd like to patch pg_dump to sort each table so that its dumped order isn't changed more than necessary by insertions and the like.
So I'm looking for a query that returns a list of a table's attributes that are sortable (e.g. no XML fields), sorted by "uniqueness", i.e. first the attributes making up the primary key, then those of other unique keys, then the rest.
Before I dive into the depths of PostgreSQL's system catalogues, has anyone already solved this problem?

You could write a bunch of queries like:
COPY (SELECT * FROM t ORDER BY ...) TO '/path/t.csv' WITH CSV;
and write the result set of each table into a csv file. It will be written in the given ordering, so the files should be easy to compare.
This is much easier than hacking pg_dump.
You can get all the columns of a table from the standard INFORMATION_SCHEMA:
select * from information_schema.columns;
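If you do go digging in the system catalogues instead, something along these lines is a starting point. It is only a sketch: it assumes "sortable" means "has a default btree operator class", and my_table is a placeholder for your table name. It ranks primary-key columns first, then columns that belong to any other unique index, then everything else:
SELECT a.attname,
       CASE WHEN bool_or(i.indisprimary) THEN 0   -- primary key columns first
            WHEN bool_or(i.indisunique)  THEN 1   -- then columns of other unique indexes
            ELSE 2                                -- then the rest
       END AS sort_rank
FROM pg_attribute a
LEFT JOIN pg_index i ON i.indrelid = a.attrelid AND a.attnum = ANY (i.indkey)
WHERE a.attrelid = 'my_table'::regclass
  AND a.attnum > 0
  AND NOT a.attisdropped
  AND EXISTS (SELECT 1
              FROM pg_opclass oc
              JOIN pg_am am ON am.oid = oc.opcmethod
              WHERE oc.opcintype = a.atttypid
                AND oc.opcdefault
                AND am.amname = 'btree')   -- only types with a default sort ordering
GROUP BY a.attname, a.attnum
ORDER BY sort_rank, a.attnum;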

Related

Set a global alias for a table and a column?

I'm working with a huge Firebird database where tables have completely unreadable names like WTF$RANDOM_ABBREVIATION_6792 or RPG$RANDOM_ABBREVIATION_5462 and columns have names like "rid9312", "1NUM5", "2NUM4", "RNAME8".
I need to give them global aliases so that I can use full-length table names like Document and column names like
Document.CreationDate instead of xecblob.DDATE4
or
TempDoc.MovingOrderID instead of TMP$LINKED_DOC_6101.DID6101
Altering the database, a table, or a column might be a big problem because the records are counted in millions and tens of millions, and moreover a major part of the Delphi-written front-end for the database is bound to the table names and column names.
Is there a way to do that somehow?
The closest thing there is to a "global alias" is to create views. For example:
create view document
as
select
DDATE4 as creationdate
-- , other columns...
from xecblob;
or
create view document (creationdate /*, other column aliases... */)
as
select
DDATE4
-- , other columns...
from xecblob;
(personally, I find the first variant more readable)
This does require altering the database, but there is no real cost associated with that (it doesn't matter whether the table contains zero, one, thousands, or millions of records).

Best performance method for getting records by large collection of IDs

I am writing a query with code to select all records from a table where a column value is contained in a CSV. I found a suggestion that the best way to do this was using the ARRAY functionality in PostgreSQL.
I have a table price_mapping and it has a primary key of id and a column customer_id of type bigint.
I want to return all records that have a customer ID in the array I will generate from csv.
I tried this:
select * from price_mapping
where ARRAY[customer_id] <@ ARRAY[5,7,10]::bigint[]
(the 5,7,10 part would actually be a csv inserted by my app)
But I am not sure that is right. In the application the array could contain tens of thousands of IDs, so I want to make sure I am using the method with the best performance.
Is this the right way in PostgreSQL to retrieve a large collection of records by pre-defined column values?
Thanks
Generally this is done with the SQL-standard IN operator:
select *
from price_mapping
where customer_id in (5,7,10)
I don't see any reason using ARRAY would be faster. It might be slower given it has to build arrays, though it might have been optimized.
In the past this was more optimal:
select *
from price_mapping
where customer_id = ANY(VALUES (5), (7), (10))
But new-ish versions of Postgres should optimize this for you.
Passing in tens of thousands of IDs might run up against a query size limit either in Postgres or your database driver, so you may wish to batch this a few thousand at a time.
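Another way to keep the statement itself small, if your driver supports array bind parameters, is to send the whole ID list as one bigint[] parameter rather than splicing thousands of literals into the SQL. A sketch (the $1 placeholder syntax depends on your driver):
select *
from price_mapping
where customer_id = ANY ($1::bigint[])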
As for the best performance, the answer is to not search for tens of thousands of IDs. Find something which relates them together, index that column, and search by that.
If your data is big enough, try this:
Read your CSV using an FDW (foreign data wrapper), as sketched below.
If you need this connection often, you might build a materialized view from it, holding only the needed columns. Refresh it when a new CSV is created.
Join your table against this foreign table or materialized view.
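A rough sketch using the file_fdw extension that ships with PostgreSQL (the server name, foreign table name, and file path below are made up, and the file must be readable on the database server):
CREATE EXTENSION IF NOT EXISTS file_fdw;
CREATE SERVER csv_files FOREIGN DATA WRAPPER file_fdw;
-- expose the CSV of customer IDs as a read-only foreign table
CREATE FOREIGN TABLE wanted_customers (customer_id bigint)
  SERVER csv_files
  OPTIONS (filename '/path/to/customer_ids.csv', format 'csv');
-- then join instead of building a huge IN list
SELECT p.*
FROM price_mapping p
JOIN wanted_customers w ON w.customer_id = p.customer_id;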

select all columns except two in q kdb historical database

In the output I want to select all columns except two from a table in a q/kdb historical database.
I tried running the query below, but it does not work on the hdb.
delete colid,coltime from table where date=.z.d-1
but it fails with the error below:
ERROR: 'par
(trying to update a physically partitioned table)
I referred to https://code.kx.com/wiki/Cookbook/ProgrammingIdioms#How_do_I_select_all_the_columns_of_a_table_except_one.3F but it did not help.
How can I display all columns except two in a kdb historical database?
The reason you are getting the 'par error is that it is a partitioned table.
The error is documented here
trying to update a partitioned table
You cannot directly update or delete anything on a partitioned table (there is a separate db maintenance script for that).
The query you have used as a fix basically selects the data into memory first (temporarily) and then deletes the columns, hence it works:
delete colid,coltime from select from table where date=.z.d-1
You can try the following functional form:
c:cols[t] except `p
?[t;enlist(=;`date;2015.01.01);0b;c!c]
Could try a functional select:
?[table;enlist(=;`date;.z.d);0b;{x!x}cols[table]except`colid`coltime]
Here the last argument is a dictionary mapping column names to column values, which tells the query what to extract. Instead of deleting the columns you specified, this selects all but those two, which amounts to the same query.
To see what the functional form of a query is you can run something like:
parse"select colid,coltime from table where date=.z.d"
And it will output the arguments to the functional select.
You can read more on functional selects at code.kx.com.
Only select queries work on partitioned tables, which you worked around by first selecting the table into memory and then deleting the columns you did not want.
If you have a large number of columns and don't want to create a bulky select query you could use a functional select.
?[table;();0b;{x!x}((cols table) except `colid`coltime)]
This shows all columns except the given subset. The column clause expects a dictionary, hence the function {x!x} to convert the list of column names into a dictionary. See more information here:
https://code.kx.com/q/ref/funsql/
As nyi mentioned, if you want to permanently delete columns from a historical database you can use the deletecol function in the dbmaint tools: https://github.com/KxSystems/kdb/blob/master/utils/dbmaint.md

postgresql hstore key/value vs traditional SQL performance

I need to develop a key/value backend, something like this:
Table T1: id - PK, key - string, value - string
INSERT INTO T1 (key, value) VALUES ('String1', 'Value1');
INSERT INTO T1 (key, value) VALUES ('String1', 'Value2');
Table T2: id - PK, id2 - foreign key referencing T1.id
plus some other data in T2 which references data in T1 (like users which have those K/V pairs, etc.)
I heard about PostgreSQL hstore with GIN/GIST. What is better (performance-wise)?
Doing this the traditional way with SQL joins and separate columns (key/value)?
Does PostgreSQL hstore perform better in this case?
The format of the data should be any key=>any value.
I also want to do text matching, e.g. partially search for values (LIKE with % in SQL, or using the hstore equivalent).
I plan to have around 1M-2M entries in it and probably scale at some point.
What do you recommend ? Going the SQL traditional way/PostgreSQL hstore or any other distributed key/value store with persistence?
If it helps, my server is a VPS with 1-2GB RAM, so not particularly good hardware. I was also thinking of putting a cache layer on top of this, but I think it rather complicates the problem. I just want good performance for 2M entries. Updates will be done often, but searches even more often.
Thanks.
Your question is unclear because you're not clear about your objective.
The key here is the index (pun intended) - if you're dealing with a large number of keys, you want to be able to retrieve them with the fewest lookups and without pulling up unrelated data.
The short answer is you probably don't want to use hstore, but let's look at it in more detail...
Does each id have many key/value pairs (hundreds+)? Don't use hstore.
Will any of your values contain large blocks of text (4kb+)? Don't use hstore.
Do you want to be able to search by keys in wildcard expressions? Don't use hstore.
Do you want to do complex joins/aggregation/reports? Don't use hstore.
Will you update the value for a single key? Don't use hstore.
Multiple keys with the same name under an id? Can't use hstore.
So what's the use of hstore? Well, one good scenario would be if you wanted to hold key/value pairs for an external application where you know you always want to retrieve all key/values and will always save the data back as a block (i.e., it's never edited in place). At the same time you do want some flexibility to be able to search this data - albeit very simply - rather than storing it in, say, a block of XML or JSON. In this case, since the number of key/value pairs is small, you save on space because you're compressing several tuples into one hstore.
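To make that concrete, a minimal sketch of the hstore route (it assumes the hstore extension is available; the table and column names are made up):
CREATE EXTENSION IF NOT EXISTS hstore;
CREATE TABLE t_kv (
  id serial PRIMARY KEY,
  props hstore
);
-- a GIN index speeds up key-existence (?) and containment (@>) lookups,
-- but not wildcard matching on the values
CREATE INDEX t_kv_props_idx ON t_kv USING gin (props);
INSERT INTO t_kv (props) VALUES ('color=>red, size=>XL');
SELECT * FROM t_kv WHERE props ? 'color';        -- rows having the key
SELECT * FROM t_kv WHERE props @> 'color=>red';  -- rows with that key/value pair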
Consider this as your table:
CREATE TABLE kv (
id /* SOME TYPE */ PRIMARY KEY,
key_name TEXT NOT NULL,
key_value TEXT,
UNIQUE(id, key_name)
);
I think the design is poorly normalized. Try something more like this:
CREATE TABLE t1
(
t1_id serial PRIMARY KEY,
<other data which depends on t1_id and nothing else>,
-- possibly an hstore, but maybe better as a separate table
t1_props hstore
);
-- if properties are done as a separate table:
CREATE TABLE t1_properties
(
t1_id int NOT NULL REFERENCES t1,
key_name text NOT NULL,
key_value text,
PRIMARY KEY (t1_id, key_name)
);
If the properties are small and you don't need to use them heavily in joins or with fancy selection criteria, an hstore may suffice. Elliot laid out some sensible things to consider in that regard.
Your reference to users suggests that this is incomplete, but you didn't really give enough information to suggest where those belong. You might get by with an array in t1, or you might be better off with a separate table.

PostgreSQL: Loading data into Star Schema efficiently

Imagine a table with the following structure on PostgreSQL 9.0:
create table raw_fact_table (text varchar(1000));
For the sake of simplification I only mention one text column, in reality it has a dozen. This table has 10 billion rows and each column has lots of duplicates. The table is created from a flat file (csv) using COPY FROM.
To increase performance I want to convert to the following star schema structure:
create table dimension_table (id int, text varchar(1000));
The fact table would then be replaced with a fact table like the following:
create table fact_table (dimension_table_id int);
My current method is to essentially run the following query to create the dimension table:
Create table dimension_table (id int, text varchar(1000), primary key(id));
then to fill the dimension table I use:
insert into dimension_table (select null, text from raw_fact_table group by text);
Afterwards I need to run the following query:
select id into fact_table from dimension inner join raw_fact_table on (dimension.text = raw_fact_table.text);
Just imagine the horrible performance I get by comparing all strings to all other strings several times.
On MySQL I could run a stored procedure during the COPY FROM. This could create a hash of a string and all subsequent string comparison is done on the hash instead of the long raw string. This does not seem to be possible on PostgreSQL, what do I do then?
Sample data would be a CSV file containing something like this (I use quotes also around integers and doubles):
"lots and lots of text";"3";"1";"2.4";"lots of text";"blabla"
"sometext";"30";"10";"1.0";"lots of text";"blabla"
"somemoretext";"30";"10";"1.0";"lots of text";"fooooooo"
Just imagine the horrible performance I get by comparing all strings to all other strings several times.
When you've been doing this a while, you stop imagining performance, and you start measuring it. "Premature optimization is the root of all evil."
What does "billion" mean to you? To me, in the USA, it means 1,000,000,000 (or 1e9). If that's also true for you, you're probably looking at between 1 and 7 terabytes of data.
My current method is to essentially run the following query to create the dimension table:
Create table dimension_table (id int, text varchar(1000), primary key(id));
How are you gonna fit 10 billion rows into a table that uses an integer for a primary key? Let's even say that half the rows are duplicates. How does that arithmetic work when you do it?
Don't imagine. Read first. Then test.
Read Data Warehousing with PostgreSQL. I suspect these presentation slides will give you some ideas.
Also read Populating a Database, and consider which suggestions to implement.
Test with a million (1e6) rows, following a "divide and conquer" process. That is, don't try to load a million at a time; write a procedure that breaks it up into smaller chunks. Run
EXPLAIN <sql statement>
You've said you estimate at least 99% duplicate rows. Broadly speaking, there are two ways to get rid of the dupes:
Inside a database, not necessarily the same platform you use for production.
Outside a database, in the filesystem, not necessarily the same filesystem you use for production.
If you still have the text files that you loaded, I'd consider first trying outside the database. This awk one-liner will output unique lines from each file. It's relatively economical, in that it makes only one pass over the data.
awk '!arr[$0]++' file_with_dupes > file_without_dupes
If you really have 99% dupes, by the end of this process you should have reduced your 1 to 7 terabytes down to about 50 gigs. And, having done that, you can also number each unique line and create a tab-delimited file before copying it into the data warehouse. That's another one-liner:
awk '{printf("%d\t%s\n", NR, $0);}' file_without_dupes > tab_delimited_file
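Loading that numbered file into the dimension table is then a plain COPY; a sketch (the path is illustrative, and COPY's default text format already expects tab-delimited input; use psql's \copy instead if the file sits on the client machine):
COPY dimension_table (id, text) FROM '/path/to/tab_delimited_file';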
If you have to do this under Windows, I'd use Cygwin.
If you have to do this in a database, I'd try to avoid using your production database or your production server. But maybe I'm being too cautious. Moving several terabytes around is an expensive thing to do.
But I'd test
SELECT DISTINCT ...
before using GROUP BY. I might be able to do some tests on a large data set for you, but probably not this week. (I don't usually work with terabyte-sized files. It's kind of interesting. If you can wait.)
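For what it's worth, the two formulations to compare with EXPLAIN would be roughly these (column name taken from the post):
SELECT DISTINCT text FROM raw_fact_table;
SELECT text FROM raw_fact_table GROUP BY text;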
Just two questions:
- Is it necessary to convert your data in 1 or 2 steps?
- May we modify the table while converting?
Running simpler queries may improve your performance (and the server load while doing it).
One approach would be:
generate dimension_table (if I understand it correctly, you don't have performance problems with this), maybe with an additional temporary boolean field...
repeat: choose one previously unselected entry from dimension_table, select every row from raw_fact_table containing it, and insert them into fact_table. Mark the dimension_table record as done, and move on to the next... You can write this as a stored procedure, and it can convert your data in the background, eating minimal resources...
Or another (probably better) approach:
create fact_table with EVERY column from raw_fact_table AND a dimension_id (so including dimension_text and dimension_id columns)
create dimension_table
create an insert trigger for fact_table which (see the sketch after this list):
searches for dimension_text in dimension_table
if not found, creates a new record in dimension_table
updates dimension_id to this id
in a simple loop, insert every record from raw_fact_table into fact_table
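A rough PL/pgSQL sketch of that trigger (the names are illustrative; it assumes dimension_table.id is backed by a sequence, and it is written as a BEFORE INSERT trigger so the id can be filled in before the row is stored):
CREATE OR REPLACE FUNCTION resolve_dimension() RETURNS trigger AS $$
DECLARE
  dim_id int;
BEGIN
  -- look the text up in the dimension table
  SELECT id INTO dim_id FROM dimension_table WHERE text = NEW.dimension_text;
  IF dim_id IS NULL THEN
    -- not there yet: create the dimension row and reuse its id
    INSERT INTO dimension_table (text) VALUES (NEW.dimension_text)
    RETURNING id INTO dim_id;
  END IF;
  NEW.dimension_id := dim_id;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER fact_table_resolve_dimension
BEFORE INSERT ON fact_table
FOR EACH ROW EXECUTE PROCEDURE resolve_dimension();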
You are omitting some details there at the end, but I don't see that there necessarily is a problem. It is not in evidence that all strings are actually compared to all other strings. If you do a join, PostgreSQL could very well pick a smarter join algorithm, such as a hash join, which might give you the same hashing that you are implementing yourself in your MySQL solution. (Again, your details are hazy on that.)
-- add an index (note: hash indexes cannot enforce UNIQUE in PostgreSQL, so use a btree index if you need the constraint)
CREATE INDEX uidx ON dimension_table USING hash (text);
-- for case-insensitive matching, index hash(upper(text)) instead
Try hash(text) and btree(text) to see which one is faster.
I can see several ways of solving your problem.
There is an md5 function in PostgreSQL:
md5(string) - calculates the MD5 hash of string, returning the result in hexadecimal
insert into dimension_table (select null, md5(text), text from raw_fact_table group by text)
(this assumes dimension_table also gets an md5 column)
Add an md5 field to raw_fact_table as well, then join on it:
select id into fact_table from dimension inner join raw_fact_table on (dimension.md5 = raw_fact_table.md5);
Indexes on the MD5 field might help as well.
Or you can calculate the MD5 on the fly while loading the data.
For example, our ETL tool Advanced ETL Processor can do it for you.
Plus it can load data into multiple tables at the same time.
There are a number of online tutorials available on our web site.
For example, this one demonstrates loading a slowly changing dimension:
http://www.dbsoftlab.com/online-tutorials/advanced-etl-processor/advanced-etl-processor-working-with-slow-changing-dimension-part-2.html