Let's say we have a table with this definition:
CREATE TABLE range (
    id bigint PRIMARY KEY,
    colourId int REFERENCES colour(id),
    smellId int REFERENCES smell(id),
    "from" bigint,  -- quoted: FROM is a reserved word
    "to" bigint     -- quoted: TO is a reserved word
);
This table is actually a reduced view over an enormously big table:
CREATE TABLE item (
    id bigint PRIMARY KEY,
    colourId int REFERENCES colour(id),
    smellId int REFERENCES smell(id),
    CONSTRAINT item_colour_smell_unique UNIQUE (colourId, smellId, id)
);
I would like to translate the item_colour_smell_unique constraint to the range table: it should detect overlapping [from, to] ranges while taking the colourId and smellId column values into account.
Note that any trigger-based solution is inherently vulnerable to race conditions: when two concurrent transactions insert rows with conflicting ranges, neither of them sees the other's conflicting row, due to the "isolation" ACID property (only committed data can be seen).
Some solutions:
Use procedures with explicit locking of the table to force serialization of inserts (a sketch follows this list).
Split the [from, to] range into [from, from+1, ..., to-1, to] and insert a row for each. This way you can use a simple UNIQUE INDEX on the "range" table.
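A minimal sketch of the first approach, assuming the definitions above (insert_range is a hypothetical wrapper; callers would use it instead of a plain INSERT):

CREATE OR REPLACE FUNCTION insert_range(
    _id bigint, _colour int, _smell int, _from bigint, _to bigint
) RETURNS void AS $$
BEGIN
    -- SHARE ROW EXCLUSIVE conflicts with itself, so concurrent writers queue here
    LOCK TABLE range IN SHARE ROW EXCLUSIVE MODE;
    PERFORM 1 FROM range r
     WHERE r.colourId = _colour
       AND r.smellId  = _smell
       AND r."from" <= _to AND _from <= r."to";  -- closed intervals overlap
    IF FOUND THEN
        RAISE EXCEPTION 'range [%, %] overlaps an existing range for colour %, smell %',
            _from, _to, _colour, _smell;
    END IF;
    INSERT INTO range (id, colourId, smellId, "from", "to")
    VALUES (_id, _colour, _smell, _from, _to);
END;
$$ LANGUAGE plpgsql;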
PostgreSQL developer Jeff Davis has been writing about this lately and is implementing range-conflict constraints, slated for PostgreSQL 8.5 (the development cycle that was eventually released as 9.0).
There's no standard "overlapping" constraint. You will have to build your own from some triggers. There has been discussion of this for 8.5 though.
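For reference, the feature that grew out of that discussion shipped as exclusion constraints in 9.0 and range types in 9.2. On such a server the range table could be declared roughly like this (span is a made-up column name replacing the from/to pair; btree_gist is needed so the plain int columns can take part in a GiST index):

CREATE EXTENSION btree_gist;  -- lets = comparisons on int columns work in a GiST index

CREATE TABLE range (
    id bigint PRIMARY KEY,
    colourId int REFERENCES colour(id),
    smellId  int REFERENCES smell(id),
    span int8range,  -- replaces the separate "from"/"to" columns
    EXCLUDE USING gist (colourId WITH =, smellId WITH =, span WITH &&)
);

The constraint is enforced by the server itself, so it has none of the race-condition problems of the trigger-based approaches.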
You might find the "seg" module useful too. See the manuals - Appendix F. Additional Supplied Modules
This doesn't fully give you an answer but it sounds like you might want to make use of a trigger.
Could somebody tell me whether it is a good idea to use varchar as a PK? I mean, is it less efficient than, or equal to, int/uuid?
For example: a car VIN. I want to use it as a PK, but I'm not sure how well it will be indexed or work as an FK, or whether there are pitfalls.
It depends on which kind of data you are going to store.
In some cases (I would say in most cases) it is better to use integer-based primary keys:
for instance, bigint needs only 8 bytes, while varchar can require more space; for this reason, a varchar comparison is often more costly than a bigint comparison
when joining tables, it is more efficient to join on integer-based values than on strings
an integer-based key is more appropriate as a unique key for table relations, for instance if you are going to store this primary key in other tables as a separate column. Again, varchar will require more space in the other tables too (see point 1 and the sketch after this list).
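As a sketch of that advice applied to the VIN example from the question (all table and column names here are made up):

CREATE TABLE car (
    id  bigserial PRIMARY KEY,       -- 8-byte surrogate key used by FKs and joins
    vin varchar(17) NOT NULL UNIQUE  -- natural key, still indexed for lookups
);

CREATE TABLE service_visit (
    id     bigserial PRIMARY KEY,
    car_id bigint NOT NULL REFERENCES car(id),  -- stores 8 bytes instead of repeating the VIN
    visited_on date
);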
This post on stackexchange compares non-integer types of primary keys on a particular example.
I have a main, parent table 'transaction_', which I would like to partition. I know that I can easily partition based on any of the fields listed in transaction_, including foreign keys, using the check constraint within any child table. Essentially what I would like to know is whether, in my check constraint, I can somehow refer to other fields in a table for which I have a foreign key. I would like to avoid having too many foreign keys from the seller and client tables in my transaction_ table as that seems like a lot of unnecessary duplication.
CREATE TABLE seller (
    id int PRIMARY KEY,
    name text,
    location text,
    open_time time,
    close_time time
);

CREATE TABLE client (
    id int PRIMARY KEY,
    name text,
    billing_suburb text,
    billing_zipcode int
);

CREATE SEQUENCE transaction_id_seq;

CREATE TABLE transaction_ (
    transaction_id bigint PRIMARY KEY DEFAULT nextval('transaction_id_seq'),
    seller_id int REFERENCES seller(id),
    client_id int REFERENCES client(id),
    purchase_date date,
    purchase_time time,
    price real,
    quantity int
);
So for example, I think that I can do the following:
CREATE TABLE transaction_client1_20130108 (
    CHECK (client_id = 1 AND purchase_date = DATE '2013-01-08')
) INHERITS (transaction_);
I would like to do something like the following:
CREATE TABLE transaction_sellerZip90210_20130108 (
    CHECK (client(billing_zipcode) = 90210 AND purchase_date = DATE '2013-01-08')
) INHERITS (transaction_);
I am using the following, but happy to update if that provides a better solution:
mydb=# SELECT version();
PostgreSQL 9.1.11 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.8.1-10ubuntu9) 4.8.1, 64-bit
"whether, in my check constraint, I can somehow refer to other fields in a table for which I have a foreign key"
Not directly. CHECK constraints may not contain subqueries. However, you can work around that by declaring a LANGUAGE SQL function that does the work you want and using that from the CHECK constraint.
This isn't safe, though. The query planner expects that a CHECK constraint will be accurate and truthful, and may make optimization decisions based on it. So it's not a good idea to trick the system by adding a roundabout constraint on another table.
Instead, I recommend using triggers to sanity-check things like this, enforcing the check at the time any DML is run.
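A minimal sketch of such a trigger for the zipcode partition above (the function and trigger names are made up; EXECUTE PROCEDURE matches the 9.1 syntax):

CREATE FUNCTION check_client_zipcode() RETURNS trigger AS $$
BEGIN
    -- reject rows whose client does not live in the partition's zipcode
    IF NOT EXISTS (SELECT 1 FROM client
                   WHERE id = NEW.client_id AND billing_zipcode = 90210) THEN
        RAISE EXCEPTION 'client % does not have billing_zipcode 90210', NEW.client_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER check_client_zipcode_trg
    BEFORE INSERT OR UPDATE ON transaction_sellerZip90210_20130108
    FOR EACH ROW EXECUTE PROCEDURE check_client_zipcode();

Note that this only validates at DML time: a client's billing_zipcode can still change after the row has been inserted, so rows can silently go stale relative to the rule.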
I have some T-SQL (SQL Server 2008) that I inherited and am trying to find out why some of the queries are running really slow. In the Actual Execution Plan I have three clustered index scans which are costing me 19%, 21% and 26%, so this seems to be the source of my problem.
The contents of the fields are usually numeric (but some job numbers have an alpha prefix)
The database design (vendor-supplied) is pretty poor. The max length of a job number in their application is 12 chars, but in the tables that are joined it is defined as varchar(50) in some places and varchar(15) in others. My parameter is a varchar(12), but I get the same thing if I change it to a varchar(50).
The node contains this:
Predicate: [Live_Costing].[dbo].[TSTrans].[JobNo] as [sts1].[JobNo]=CONVERT_IMPLICIT(varchar(50),[#JobNo],0)
sts1 is a derived table, but the column it pulls JobNo from is a varchar(50)
I don't understand why it's doing an implicit conversion between two varchars. Is it just because they are different lengths?
I'm fairly new to execution plans.
Is there an easy way to figure out which node in the exec plan relates to which part of the query?
Is the predicate the join clause?
Regards
Mark
Some variables can have a collation as well.
Regardless, you need to verify your collations, which can be specified at the server, database, table, and column level.
First, check your collation between tempdb and the vendor supplied database. It should match. If it doesn't, it will tend to do implicit conversions.
Assuming you cannot modify the vendor-supplied code base, one or more of the following should help you (a sketch follows the list):
1) Predefine your temp tables and specify the same collation for the key field as in the db in use, rather than tempdb.
2) Provide collations when doing string comparisons.
3) Specify collation for key values if using "select into" with a temp table
4) Make sure your collations on your tables and columns match your database collation (VERY important if you imported only specific tables from a vendor into an existing database.)
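A short sketch of points 1 and 2, assuming a temp table named #JobNos (a made-up name) feeding the join from the question:

-- 1) predefine the temp table with the user database's collation, not tempdb's
CREATE TABLE #JobNos (
    JobNo varchar(50) COLLATE DATABASE_DEFAULT NOT NULL
);

-- 2) or state the collation explicitly in the comparison itself
SELECT sts1.JobNo
FROM dbo.TSTrans AS sts1
JOIN #JobNos AS j
    ON sts1.JobNo = j.JobNo COLLATE DATABASE_DEFAULT;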
If you can change the vendor-supplied code base, I would suggest reviewing the cost of making all of your char keys the same length and NOT varchar. A varchar value carries a couple of bytes of storage overhead per value. The caveat is that if you create a fixed-length character field NOT NULL, it will be padded to the right with spaces (unavoidable).
Ideally, you would have int keys, and only use varchar fields for user interaction/lookup:
CREATE TABLE Products (
    ProductID int NOT NULL IDENTITY(1,1) PRIMARY KEY CLUSTERED,
    ProductNumber varchar(50) NOT NULL
);

ALTER TABLE Products ADD CONSTRAINT uckProducts_ProductNumber UNIQUE (ProductNumber);
Then do all joins on ProductID rather than ProductNumber, and just filter on ProductNumber; that would be perfectly fine.
Is it possible in PostgreSQL to conditionally add a foreign key?
Something like:

ALTER TABLE table1 ADD FOREIGN KEY (some_id) REFERENCES other_table
    WHERE some_id NOT IN (0, -1) AND some_id IS NOT NULL;
Specifically, my reference table has all positive integers (1+) but the table I need to add the foreign key to can contain zero (0), null and negative one (-1) instead, all meaning something different.
Notes:
I am fully aware that this is poor table design, but it was a clever trick built 10+ years ago when the features and resources we have available at this point did not exist. This system is running hundreds of retail stores so going back and changing the method at this point could take months which we don't have.
I cannot use a trigger; this MUST be done with a foreign key.
The short answer is no, Postgres does not have conditional foreign keys. Some options you might consider are:
Just not have an FK constraint: move this logic into the data access layer and live without the referential integrity.
Allow NULL in the column, which is perfectly valid even with an FK constraint, and use another column to store whatever the meaning of 0 and -1 is (see the sketch after this list).
Add a dummy row in the referenced table for 0 and -1. Even if it just had bogus data, it would satisfy the FK constraint.
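A sketch of the second option (the some_id_kind column and its values are made up):

ALTER TABLE table1 ADD COLUMN some_id_kind text NOT NULL DEFAULT 'ref'
    CHECK (some_id_kind IN ('ref', 'zero_case', 'minus_one_case'));

-- move the special meanings out of some_id, then NULL the sentinels:
UPDATE table1 SET some_id_kind = CASE some_id
                                     WHEN 0  THEN 'zero_case'
                                     WHEN -1 THEN 'minus_one_case'
                                     ELSE 'ref'
                                 END;
UPDATE table1 SET some_id = NULL WHERE some_id IN (0, -1);

-- NULL passes the FK check, so a plain FK now works:
ALTER TABLE table1 ADD FOREIGN KEY (some_id) REFERENCES other_table;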
Hope this helps!
You can add another "shadow" column to table1 which holds the cleaned values (i.e. everything but 0 and -1). Use this column for the referential integrity checks. This shadow column is updated/filled by a simple trigger on table1 which writes all values but 0 and -1 into the shadow column. Both 0 and -1 could be mapped to null.
Then you have referential integrity and your unchanged original column. The downside: you also have a little trigger and some redundant data. But alas, this is the fate of a legacy schema!
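A minimal sketch of the shadow-column idea (the column, function, and trigger names are made up):

ALTER TABLE table1 ADD COLUMN some_id_shadow int REFERENCES other_table;

CREATE FUNCTION table1_sync_shadow() RETURNS trigger AS $$
BEGIN
    -- 0 and -1 map to NULL, which the FK ignores; real ids get checked
    NEW.some_id_shadow := CASE WHEN NEW.some_id IN (0, -1) THEN NULL
                               ELSE NEW.some_id END;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER table1_sync_shadow_trg
    BEFORE INSERT OR UPDATE ON table1
    FOR EACH ROW EXECUTE PROCEDURE table1_sync_shadow();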
Your requirement is equivalent to this check constraint (since the referenced table contains every positive integer, the FK check reduces to a simple domain check):
create table t (a float check (a >= -1 and a = floor(a) or a is null));
You can implement this with a check constraint and a foreign key:

CREATE TABLE table1 (
    some_id INT,
    some_id_fkey INT REFERENCES other_table(other_id),
    CHECK (some_id IN (0, -1) OR some_id IS NOT DISTINCT FROM some_id_fkey)
);
(not tested)
Here's another possibility: use PG inheritance to partition the table into the rows whose some_id genuinely references the other table and the rest (with the usual rules/triggers for maintaining the partition). Then define the FK relationship only between the referencing child table and the referenced table.
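A rough sketch, with made-up child table names:

CREATE TABLE table1_real_refs (
    CHECK (some_id > 0)  -- only rows that truly reference other_table
) INHERITS (table1);
ALTER TABLE table1_real_refs ADD FOREIGN KEY (some_id) REFERENCES other_table;

CREATE TABLE table1_special (
    CHECK (some_id IN (0, -1) OR some_id IS NULL)  -- the sentinel rows
) INHERITS (table1);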
Can I define a primary key according to three attributes? I am using Visual Paradigm and Postgres.
CREATE TABLE answers (
    time SERIAL NOT NULL,
    "{Users}{userID}user_id" int4 NOT NULL,
    "{Users}{userID}question_id" int4 NOT NULL,
    reply varchar(255),
    PRIMARY KEY (time, "{Users}{userID}user_id", "{Users}{userID}question_id")
);
A picture may clarify the question.
Yes you can, just as you showed (though I question your naming of the 2nd and 3rd columns).
From the docs:
"Primary keys can also constrain more than one column; the syntax is similar to unique constraints:
CREATE TABLE example (
    a integer,
    b integer,
    c integer,
    PRIMARY KEY (a, c)
);
A primary key indicates that a column or group of columns can be used as a unique identifier for rows in the table. (This is a direct consequence of the definition of a primary key. Note that a unique constraint does not, by itself, provide a unique identifier because it does not exclude null values.) This is useful both for documentation purposes and for client applications. For example, a GUI application that allows modifying row values probably needs to know the primary key of a table to be able to identify rows uniquely.
A table can have at most one primary key (while it can have many unique and not-null constraints). Relational database theory dictates that every table must have a primary key. This rule is not enforced by PostgreSQL, but it is usually best to follow it."
Yes, you can. There is just such an example in the documentation. However, I'm not familiar with the bracketed terms you're using. Are you doing some variable evaluation before creating the database schema?
Yes, you can.
If you ran it, you would see so in no time.
I would really, really, really suggest rethinking the naming convention, though. A time column that contains a serial integer? Column names like "{Users}{userID}user_id"? Oh my.
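For what it's worth, a cleaned-up sketch of the same table (assuming users and questions tables exist to reference):

CREATE TABLE answers (
    answered_at timestamptz NOT NULL DEFAULT now(),  -- an actual point in time, not a serial
    user_id     int NOT NULL REFERENCES users(user_id),
    question_id int NOT NULL REFERENCES questions(question_id),
    reply       varchar(255),
    PRIMARY KEY (answered_at, user_id, question_id)
);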