I have the table below:
Column    | Type                     | Modifiers
----------+--------------------------+---------------------------------------------------------
id        | integer                  | not null default nextval('votes_vote_id_seq'::regclass)
voter     | character varying        |
votee     | character varying        |
timestamp | timestamp with time zone | default now()
Currently, I have a unique constraint on voter and votee, meaning that there is only one vote per voter/votee pair.
I would like to enforce a condition that allows votes to happen weekly, using the timestamp column: a user can only vote for the same votee once a week.
Is there a way I can add custom constraints to PostgreSQL? Are they the same thing as functions?
A constraint of that kind is something like a special trigger (a constraint trigger) in PostgreSQL.
Normal triggers won't do, because they cannot see concurrent modifications of the database, so two concurrent transactions could both see their condition fulfilled, but after they commit, the condition might be violated.
The solution I'd recommend is to use SERIALIZABLE transactions throughout (everybody, including readers, has to use them for it to work) and verify your condition in a BEFORE trigger.
SERIALIZABLE will guarantee that the above scenario cannot happen.
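For illustration, a minimal sketch of that approach, assuming the table is named votes (the existing unique constraint on (voter, votee) would have to be dropped first, since it forbids repeat votes outright):

-- every session must use SERIALIZABLE for the check to be safe
SET default_transaction_isolation = 'serializable';

CREATE FUNCTION check_weekly_vote() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    IF EXISTS (SELECT 1
               FROM votes
               WHERE voter = NEW.voter
                 AND votee = NEW.votee
                 AND "timestamp" > current_timestamp - INTERVAL '7 days')
    THEN
        RAISE EXCEPTION 'only one vote per votee per week';
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER weekly_vote_check
    BEFORE INSERT ON votes
    FOR EACH ROW EXECUTE FUNCTION check_weekly_vote();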
Related
Suppose I have a PostgreSQL table with a primary key and some data:
pkey | price
----------------------+-------
0075QlyLvw8bi7q6XJo7 | 20
(1 row)
However, I would like to save historical updates on it without losing the functionality that comes from referencing its key in other tables as foreign keys.
I am thinking of doing some kind of revision_number + timestamp approach where each "update" would be a new row, example:
pkey                 | price | rev_no
---------------------+-------+--------
0075QlyLvw8bi7q6XJo7 | 20    | 0
0075QlyLvw8bi7q6XJo7 | 15    | 1
(2 rows)
Then create a view that always takes the highest revision number of the table and reference keys from that view.
However, to me this workaround seems a bit too heavy for a task that, in my opinion, should be fairly common. Is there something I'm missing? Do you have a better solution, or is there a well-known paradigm for these types of problems which I don't know about?
Assuming pkey is actually the defined primary key, you cannot do the revision scheme you outlined without creating a history table and moving old data to it, because the primary key must be unique across revisions. But if you have a properly normalized table, there are several valid methods; the following is one (a sketch follows the steps):
Review the other attributes and identify the candidate business keys (columns of business meaning that could be defined unique -- perhaps the item name).
If not already present, add two columns: an effective timestamp and a superseded timestamp.
Now create a partial unique index on the column(s) identified in step 1, filtered on the superseded timestamp being NULL, meaning the row is the currently active version.
Create a simple view as SELECT * FROM table. Since this is a simple view, it is fully updatable. Use this view for SELECT, INSERT, and DELETE, but
for UPDATE create an INSTEAD OF trigger. This trigger will set the superseded timestamp of the currently active row and insert a new row with the update applied and the version number incremented.
With the above you get uniqueness on the currently active revision, and you maintain the history of all relationships at each version. (See demo, including a couple of useful functions.)
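A minimal sketch of those steps, using assumed names (the question's pkey plays the role of the business key):

CREATE TABLE item_history (
    pkey       text    NOT NULL,
    price      numeric,
    rev_no     integer NOT NULL DEFAULT 0,
    effective  timestamptz NOT NULL DEFAULT now(),
    superseded timestamptz              -- NULL = currently active revision
);

-- step 3: at most one active revision per business key
CREATE UNIQUE INDEX item_active_uq
    ON item_history (pkey)
    WHERE superseded IS NULL;

-- step 4: a simple, updatable view over the active rows
CREATE VIEW item AS
    SELECT * FROM item_history WHERE superseded IS NULL;

-- step 5: UPDATE closes the active row and inserts the next revision
CREATE FUNCTION item_update() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    UPDATE item_history
       SET superseded = now()
     WHERE pkey = OLD.pkey AND superseded IS NULL;

    INSERT INTO item_history (pkey, price, rev_no)
    VALUES (OLD.pkey, NEW.price, OLD.rev_no + 1);

    RETURN NEW;
END;
$$;

CREATE TRIGGER item_update_trg
    INSTEAD OF UPDATE ON item
    FOR EACH ROW EXECUTE FUNCTION item_update();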
I just noticed that constraints, such as FOREIGN KEY, automatically generate system triggers and name them RI_ConstraintTrigger_a or RI_ConstraintTrigger_c plus the trigger oid. I've looked at the docs but do not see a way to declare names for these triggers in FOREIGN KEY, etc. I care because I'm writing a bit of check code to compare objects in two different databases, and the local names of system triggers vary, since the oids are naturally going to vary.
Is there a way to declare names for these triggers as they're created? If so, is there some harm in doing so? I think that I read that after a restore or upgrade, the trigger and related function names might be regenerated. If so, using RENAME TRIGGER on these items seems like swimming upstream...and I suspect is a Bad Idea.
I suppose that I could locate a trigger's local name by querying pg_trigger on a combination of other attributes...but I'm not seeing what makes a trigger unique, apart from its name. All I can think of is to search against pg_get_triggerdef(oid) and compare the outputs.
For those following along at home, here's a "hello world" example that creates a couple of system triggers.
DROP TABLE IF EXISTS calendar_child CASCADE;

-- a minimal referenced table so the example is self-contained (assumed here):
CREATE TABLE IF NOT EXISTS calendar
(
    id uuid NOT NULL DEFAULT gen_random_uuid() PRIMARY KEY
);

CREATE TABLE calendar_child
(
    -- gen_random_uuid() is built in since PostgreSQL 13
    id          uuid NOT NULL DEFAULT gen_random_uuid() PRIMARY KEY,
    calendar_id uuid NOT NULL
);
ALTER TABLE calendar_child
ADD CONSTRAINT calendar_year_calendar_fk
FOREIGN KEY (calendar_id) REFERENCES calendar(id)
ON DELETE CASCADE;
SELECT oid, tgrelid::regclass, tgname FROM pg_trigger WHERE tgrelid::regclass::text = 'calendar_child';
+--------+----------------+-------------------------------+
| oid | tgrelid | tgname |
+--------+----------------+-------------------------------+
| 355281 | calendar_child | RI_ConstraintTrigger_c_355281 |
| 355282 | calendar_child | RI_ConstraintTrigger_c_355282 |
+--------+----------------+-------------------------------+
Here's a sample, lightly formatted, of what pg_get_triggerdef returns.
CREATE CONSTRAINT TRIGGER "RI_ConstraintTrigger_a_352380"
    AFTER DELETE ON calendar FROM calendar_year
    NOT DEFERRABLE INITIALLY IMMEDIATE
    FOR EACH ROW EXECUTE FUNCTION "RI_FKey_cascade_del"()
The linked function names aren't generated dynamically; they are calls to C routines for standard behaviors, found in https://doxygen.postgresql.org/ri__triggers_8c_source.html.
You cannot specify names for these triggers as they are created, and renaming them is not supported; as you suspected, any renames would not survive a dump and restore.
There is no point in comparing these trigger names, because they are just implementation details of the foreign key constraint. Constraints can be renamed, and you can get the constraint definition with the pg_get_constraintdef function. That is what you should compare.
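For example, a query along these lines (schema name assumed) yields stable output to diff between the two databases:

SELECT conrelid::regclass AS table_name,
       conname            AS constraint_name,
       pg_get_constraintdef(oid) AS definition
FROM   pg_constraint
WHERE  connamespace = 'public'::regnamespace
ORDER  BY conrelid::regclass::text, conname;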
I'm trying to implement row-level security in Postgres. In reality, I have many roles, but for the sake of this question, there are four roles: executive, director, manager, and junior. I have a table that looks like this:
SELECT * FROM ex_schema.residence; --as superuser
primary_key |   residence   | security_level
------------+---------------+----------------
          5 | Time-Share    | executive
          1 | Single-Family | junior
          2 | Multi-Family  | director
          4 | Condominium   | manager
          6 | Modular       | junior
          3 | Townhouse     | director
I've written a policy to enable row-level security that looks like this:
CREATE POLICY residence_policy
ON ex_schema.residence
FOR ALL
USING (security_level = CURRENT_USER)
WITH CHECK (primary_key IS NOT NULL AND security_level = CURRENT_USER);
As expected, when the executive connects to the database and selects the table, that role only sees rows that have executive in the security_level column. What I'd like to do is enable the row-level security so that higher security roles can see rows that match their security level as well as rows that have lower security privileges. The hierarchy would look like this:
ROW ACCESS PER ROLE
executive: executive, director, manager, junior
director: director, manager, junior
manager: manager, junior
junior: junior
I'm wondering how to implement this type of row-level policy so that a specific role can access multiple types of security levels. There's flexibility in changing the security_level column structure and data type.
One thing you can do is define an enum type for your levels:
CREATE TYPE sec_level AS ENUM
('junior', 'manager', 'director', 'executive');
Then you can use that type for the security_level column and write your policy as
CREATE POLICY residence_policy ON ex_schema.residence
FOR ALL
USING (security_level <= current_user::text::sec_level);
There is no need to check if the primary key is NULL; inserting a NULL primary key would generate an error anyway.
Use an enum type only if you know that these levels won't change, particularly that no level will ever be removed.
Alternatively, you could use a lookup table:
CREATE TABLE sec_level (
    name text PRIMARY KEY,
    rank double precision UNIQUE NOT NULL
);
The column security_level would then be a foreign key to sec_level(rank), and you can compare the values in the policy like before. You will need an extra join with the lookup table, but you can remove levels.
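A sketch of that variant, with residence.security_level now holding the numeric rank:

CREATE POLICY residence_policy ON ex_schema.residence
FOR ALL
USING (security_level <=
       (SELECT rank FROM sec_level WHERE name = CURRENT_USER));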
In PostgreSQL, I can create a table documenting which type of vehicle people have.
CREATE TABLE IF NOT EXISTS person_vehicle_type
( id SERIAL NOT NULL PRIMARY KEY
, name TEXT NOT NULL
, vehicle_type TEXT
);
This table might have values such as
id |  name   | vehicle_type
---+---------+--------------
 1 | Joe     | sedan
 2 | Sue     | truck
 3 | Larry   | motorcycle
 4 | Mary    | sedan
 5 | John    | truck
 6 | Crystal | motorcycle
 7 | Matt    | sedan
The values in the vehicle_type column are restricted to the set {sedan, truck, motorcycle}.
Is there a way to formalize this restriction in PostgreSQL?
Personally, I would use a foreign key and a lookup table.
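A minimal sketch of that approach, reusing the question's table (the lookup table name is an assumption):

CREATE TABLE vehicle_type
( name TEXT PRIMARY KEY
);

INSERT INTO vehicle_type (name)
VALUES ('sedan'), ('truck'), ('motorcycle');

CREATE TABLE IF NOT EXISTS person_vehicle_type
( id           SERIAL NOT NULL PRIMARY KEY
, name         TEXT NOT NULL
, vehicle_type TEXT REFERENCES vehicle_type (name)
);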
Anyway, you could use enums. I recommend reading the article PostgreSQL Domain Integrity In Depth:
A few RDBMSes (PostgreSQL and MySQL) have a special enum type that
ensures a variable or column must be one of a certain list of values.
This is also enforceable with custom domains.
However the problem is technically best thought of as referential
integrity rather than domain integrity, and usually best enforced with
foreign keys and a reference table. Putting values in a regular
reference table rather than storing them in the schema treats those
values as first-class data. Modifying the set of possible values can
then be performed with DML (data manipulation language) rather than
DDL (data definition language)....
However when the possible enumerated values are very unlikely to
change, then using the enum type provides a few minor advantages.
Enum values have human-readable names but internally they are simple integers. They don’t take much storage space. To compete with
this efficiency using a reference table would require using an
artificial integer key, rather than a natural primary key of the value
description. Even then the enum does not require any foreign key
validation or join query overhead.
Enums and domains are enforced everywhere, even in stored procedure arguments, whereas lookup table values are not. Reference
table enumerations are enforced with foreign keys, which apply only to
rows in a table.
The enum type defines an automatic (but customizable) order relation:
CREATE TYPE log_level AS ENUM ('notice', 'warning', 'error', 'severe');
CREATE TABLE log(i SERIAL, level log_level);
INSERT INTO log(level)
VALUES ('notice'::log_level), ('error'::log_level), ('severe'::log_level);
SELECT * FROM log WHERE level >= 'warning';
DBFiddle Demo
Drawback:
Unlike a restriction of values enforced by foreign key, there is no way to delete a value from an existing enum type. The only workarounds are messing with system tables or renaming the enum, recreating it with the desired values, then altering tables to use the replacement enum. Not pretty.
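A sketch of that rename-and-recreate workaround on the log_level example, assuming no row still holds the value being dropped:

ALTER TYPE log_level RENAME TO log_level_old;
CREATE TYPE log_level AS ENUM ('notice', 'warning', 'error');  -- 'severe' removed
ALTER TABLE log
    ALTER COLUMN level TYPE log_level
    USING level::text::log_level;
DROP TYPE log_level_old;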
Hi,
We need to modify a column of a big product table. Usually, normal DDL statements execute quickly, but this DDL statement takes about 10 minutes, and I would like to know the reason!
I just want to expand a varchar column. The following are the details:
--table size
wapreader_log=> select pg_size_pretty(pg_relation_size('log_foot_mark'));
pg_size_pretty
----------------
5441 MB
(1 row)
--table ddl
wapreader_log=> \d log_foot_mark
Table "wapreader_log.log_foot_mark"
Column | Type | Modifiers
-------------+-----------------------------+-----------
id | integer | not null
create_time | timestamp without time zone |
sky_id | integer |
url | character varying(1000) |
refer_url | character varying(1000) |
source | character varying(64) |
users | character varying(64) |
userm | character varying(64) |
usert | character varying(64) |
ip | character varying(32) |
module | character varying(64) |
resource_id | character varying(100) |
user_agent | character varying(128) |
Indexes:
"pk_log_footmark" PRIMARY KEY, btree (id)
--alter column
wapreader_log=> \timing
Timing is on.
wapreader_log=> ALTER TABLE wapreader_log.log_foot_mark ALTER column user_agent TYPE character varying(256);
ALTER TABLE
Time: 603504.835 ms
ALTER ... TYPE requires a complete table rewrite; that's why it can take a long time on large tables. (Since PostgreSQL 9.2, merely increasing the length limit of a varchar no longer forces a rewrite, but on older versions it does.) If you don't need a length constraint, then don't use one. Drop these constraints once and for all, and you will never run into new problems because of obsolete constraints. Just use TEXT or VARCHAR.
When you alter a table, PostgreSQL has to make sure the old version doesn't go away in some cases, to allow rolling back the change if the server crashes before it's committed and/or written to disk. For those reasons, what it actually does here even on what seems to be a trivial change is write out a whole new copy of the table somewhere else first. When that's finished, it then swaps over to the new one. Note that when this happens, you'll need enough disk space to hold both copies as well.
There are some types of DDL changes that can be made without making a second copy of the table, but this is not one of them. For example, you can add a new column that defaults to NULL quickly. But adding a new column with a non-NULL default requires making a new copy instead.
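For instance (illustrative column names; the non-NULL default is what forces the copy on these older versions):

-- catalog-only change: fast, no rewrite
ALTER TABLE log_foot_mark ADD COLUMN note text;

-- forces a full copy on older versions: every row gets the default written
ALTER TABLE log_foot_mark ADD COLUMN status text DEFAULT 'new';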
One way to avoid a table rewrite is to use SQL domains (see CREATE DOMAIN) instead of varchars in your table. You can then add and remove constraints on a domain.
Note that this does not work instantly either, since all tables using the domain are checked for constraint validity, but it is less expensive than full table rewrite and it doesn't need the extra disk space.
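A sketch of the domain approach (names assumed):

CREATE DOMAIN user_agent_type AS text
    CONSTRAINT user_agent_len CHECK (length(VALUE) <= 128);

-- widening later only revalidates the rows; the table is not rewritten
ALTER DOMAIN user_agent_type DROP CONSTRAINT user_agent_len;
ALTER DOMAIN user_agent_type
    ADD CONSTRAINT user_agent_len CHECK (length(VALUE) <= 256);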
Not sure if this is any faster, but it may be; you will have to test it out.
Try this until PostgreSQL can handle the type of ALTER you want without re-writing the entire stinking table.
ALTER TABLE log_foot_mark RENAME refer_url TO refer_url_old;
ALTER TABLE log_foot_mark ADD COLUMN refer_url character varying(256);
Then, using the indexed primary key or unique key of the table, do a looping transaction. I think you will have to do this via Perl or some other language in which you can issue a commit on every loop iteration.
WHILE (end < MAX_RECORDS) LOOP
    BEGIN TRANSACTION;
    UPDATE log_foot_mark
       SET refer_url = refer_url_old
     WHERE id >= start AND id <= end;
    COMMIT TRANSACTION;
    start := end + 1;          -- advance to the next batch
    end   := end + batch_size;
END LOOP;
ALTER TABLE log_foot_mark DROP COLUMN refer_url_old;
Keep in mind that the loop logic will need to be in something other than PL/pgSQL to get it to commit every loop iteration (at least on older versions; see the sketch below). Test it with no loop at all, then with a transaction size of 10k, 20k, 30k, etc., until you find the sweet spot.
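On PostgreSQL 11 or later, the batching can stay inside the database, because a procedure may COMMIT between iterations. A minimal sketch under that assumption (the procedure name and batch size are illustrative):

CREATE OR REPLACE PROCEDURE backfill_refer_url(batch_size integer DEFAULT 20000)
LANGUAGE plpgsql
AS $$
DECLARE
    cur_id integer;
    max_id integer;
BEGIN
    SELECT coalesce(min(id), 1) - 1, coalesce(max(id), 0)
      INTO cur_id, max_id
      FROM log_foot_mark;

    WHILE cur_id < max_id LOOP
        UPDATE log_foot_mark
           SET refer_url = refer_url_old
         WHERE id > cur_id AND id <= cur_id + batch_size;
        COMMIT;  -- keeps each transaction small
        cur_id := cur_id + batch_size;
    END LOOP;
END;
$$;

CALL backfill_refer_url();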