I have a table slot which has a start_time and an end_time. I want no other slot to be created having the same start and end time. A unique constraint, as shown in the schema below,
CREATE TABLE slot(
id SERIAL PRIMARY KEY,
start_time TIMETZ NOT NULL,
end_time TIMETZ NOT NULL,
CONSTRAINT slot_start_end_unique UNIQUE (start_time,end_time)
);
can easily be bypassed by shifting either time by a minute. I want to add a constraint so that no equivalent time slot can be created, nor a slot that is a subset of an existing one.
I am thinking of using a CHECK constraint to prevent any practically identical slot from being created.
Can anyone please point me in the right direction?
Your idea of using a check constraint as unique enforcement could probably be made to work, but there would be issues and it should be avoided. Your requirement necessitates comparing against other rows in the table, but
PostgreSQL does not support CHECK constraints that reference table
data other than the new or updated row being checked. ...
It goes on to indicate that a custom trigger is best employed, so that is the approach taken here. See Section 5.4.1. Check Constraints.
Beyond that you have a couple of issues. First, the data type TIME WITH TIME ZONE (TIMETZ) is a poor choice of data type and is somewhat misleading, as it is not actually used the way its name suggests. As Section 8.5.3. Time Zones puts it:
Although the date type cannot have an associated time zone, the time
type can. Time zones in the real world have little meaning unless
associated with a date as well as a time, ... PostgreSQL assumes
your local time zone for any type containing only date or time.
(emphases mine)
Secondly, by using time only you may have problems specifying some ranges. How, for example, do you code the range from 22:00 to 06:00, or from 23:45 to 00:15? But now back to the process.
The following trigger assumes data type TIME rather than TIMETZ and adjusts for the over midnight issue by assuming 'the next day' whenever start_time is greater than end_time.
create or replace
function is_valid_irange()
  returns trigger
  language plpgsql
  strict
as $$
declare
    k_existing_message constant text =
        'Range Requested (%s,%s). Overlaps existing range (%s,%s).';
    l_existing_range tsrange;
    l_parm_range     tsrange;
begin
    -- Note: the table here is named irange (from the linked full example);
    -- substitute your own table and column names as needed.

    -- Anchor the new row's times to a fixed date and build a half-open range,
    -- rolling the end time to 'the next day' when it precedes the start time.
    with p_times(new_start_time, new_end_time) as
         ( values ('1970-01-01'::timestamp + new.start_time
                  ,'1970-01-01'::timestamp + new.end_time
                  )
         )
    select tsrange(new_start_time, end_time, '[)')
      into l_parm_range
      from (select new_start_time
                 , case when new_start_time > new_end_time
                        then new_end_time + interval '1 day'
                        else new_end_time
                   end end_time
              from p_times
           ) pr;

    -- Build the same fixed-date ranges for the existing rows and
    -- look for any row that overlaps the requested range.
    with db_range(id, existing_range) as
         ( select id, tsrange(start_time, end_time, '[)')
             from ( select id
                         , '1970-01-01'::timestamp + start_time start_time
                         , case when start_time > end_time
                                then '1970-01-02'::timestamp + end_time
                                else '1970-01-01'::timestamp + end_time
                           end end_time
                      from irange
                  ) dr
         )
    select d.existing_range
      into l_existing_range
      from db_range d
     where l_parm_range && d.existing_range
       and d.id != new.id
     limit 1;

    if l_existing_range is not null then
        raise exception 'Invalid Range Requested:'
              using detail = format( k_existing_message
                                   , lower(l_parm_range)::time
                                   , upper(l_parm_range)::time
                                   , lower(l_existing_range)::time
                                   , upper(l_existing_range)::time
                                   );
    end if;

    return new;
end;
$$;
How it works:
Postgres provides a set of built-in range types and a set of range operators and functions.
The trigger coerces the start and end times, for both the new row and the existing table rows, into timestamps on a fixed date (1970-01-01, the beginning of time according to Unix).
It then employs the overlaps operator (&&). If any overlap is found, the trigger raises an exception. Instead of raising an exception it could return null to suppress the insert or update while otherwise letting processing continue; for that it would need to become a BEFORE trigger (it is currently an AFTER trigger).
For a full example see here. Do not worry about the date; pick any you want. It is just used as a common base for calculating times and for testing.
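For completeness, a minimal sketch of how the function might be attached (the trigger name is illustrative, and the table name irange comes from the linked example; substitute your own table):
create trigger irange_validate
    after insert or update on irange
    for each row
    execute procedure is_valid_irange();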
Create the table as normal, then before you INSERT data into the table, perform a SELECT query to check whether the time you are looking to insert already exists. For example, if you want to enter a 1pm start and a 2pm end:
DECLARE @start_value INT = 1,
        @end_value INT = 2;
SELECT COUNT(id) AS UseCheck FROM slot WHERE start_time = @start_value OR end_time = @end_value;
Then apply logic to say: IF UseCheck > 0 THEN do stuff.
Given a table like the following:
create table meetings(
id integer primary key,
start_time varchar,
end_time varchar
)
Considering that the strings stored in this table follow the 24-hour format 'HH:MM', is there a way in PostgreSQL 9.4 to cast the fields to time, calculate the difference between them, and return a single result counting the full hours available?
e.g: start_time: '08:00' - end_time: '12:00'
Result must be 4.
In your particular case, assuming that you are working with clock values (both of them belonging to the same day), I would guess you can do this
(clock_to::time - clock_from::time) as duration
Allow me to leave you a ready-to-run example:
with cte as (
select '4:00'::varchar as clock_from, '14:00'::varchar as clock_to
)
select (clock_to::time - clock_from::time) as duration
from cte
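If you need the result as a whole number of hours (4 in your example) rather than an interval, a small variation on the same idea, still assuming 'HH:MM' strings, could be:
with cte as (
  select '08:00'::varchar as clock_from, '12:00'::varchar as clock_to
)
select floor(extract(epoch from (clock_to::time - clock_from::time)) / 3600) as full_hours
from cte;
-- full_hours = 4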
I have been using Python to do this in memory, but I would like to know the proper way to set up an employee mapping table in Postgres.
row_id | employee_id | other_id | other_dimensions | effective_date | expiration_date | is_current
Unique constraint on (employee_id, other_id), so a new row would be inserted whenever there is a change
I would want the expiration date from the previous row to be updated to the new effective_date minus 1 day, and the is_current should be updated to False
Ultimate purpose is to be able to map each employee back accurately on a given date
Would love to hear some best practices so I can move away from my file-based method where I read the whole roster into memory and use pandas to make changes, then truncate the original table and insert the new one.
Here's a general example built using the column names you provided that I think does more or less what you want. Don't treat it as a literal ready-to-run solution, but rather an example of how to make something like this work that you'll have to modify a bit for your own actual use case.
The rough idea is to make an underlying raw table that holds all your data, and establish a view on top of this that gets used for ordinary access. You can still use the raw table to do anything you need to do to or with the data, no matter how complicated, but the view provides more restrictive access for regular use. Rules are put in place on the view to enforce these restrictions and perform the special operations you want. While it doesn't sound like it's significant for your current application, it's important to note that these restrictions can be enforced via PostgreSQL's roles and privileges and the SQL GRANT command.
We start by making the raw table. Since the is_current column is likely to be used for reference a lot, we'll put an index on it. We'll take advantage of PostgreSQL's SERIAL type to manage our raw table's row_id for us. The view doesn't even need to reference the underlying row_id. We'll default the is_current to a True value as we expect most of the time we'll be adding current records, not past ones.
CREATE TABLE raw_employee (
row_id SERIAL PRIMARY KEY,
employee_id INTEGER,
other_id INTEGER,
other_dimensions VARCHAR,
effective_date DATE,
expiration_date DATE,
is_current BOOLEAN DEFAULT TRUE
);
CREATE INDEX employee_is_current_index ON raw_employee (is_current);
Now we define our view. To most of the world this will be the normal way to access employee data. Internally it's a special SELECT run on-demand against the underlying raw_employee table that we've already defined. If we had reason to, we could further refine this view to hide more data (it's already hiding the low-level row_id as mentioned earlier) or display additional data produced either via calculation or relations with other tables.
CREATE OR REPLACE VIEW employee AS
SELECT employee_id, other_id,
other_dimensions, effective_date, expiration_date,
is_current
FROM raw_employee;
Now our rules. We construct these so that whenever someone tries an operation against our view, internally it'll perform an operation against our raw table according to the restrictions we define. First, INSERT; it mostly just passes the data through without change, but it has to account for the hidden row_id:
CREATE OR REPLACE RULE employee_insert AS ON INSERT TO employee DO INSTEAD
INSERT INTO raw_employee VALUES (
NEXTVAL('raw_employee_row_id_seq'),
NEW.employee_id, NEW.other_id,
NEW.other_dimensions,
NEW.effective_date, NEW.expiration_date,
NEW.is_current
);
The NEXTVAL part enables us to lean on PostgreSQL for row_id handling. Next is our most complicated one: UPDATE. Per your described intent, it has to match against employee_id, other_id pairs and perform two operations: updating the old record to be no longer current, and inserting a new record with updated dates. You didn't specify how you wanted to manage new expiration dates, so I took a guess. It's easy to change it.
CREATE OR REPLACE RULE employee_update AS ON UPDATE TO employee DO INSTEAD (
UPDATE raw_employee SET is_current = FALSE
WHERE raw_employee.employee_id = OLD.employee_id AND
raw_employee.other_id = OLD.other_id;
INSERT INTO raw_employee VALUES (
NEXTVAL('raw_employee_row_id_seq'),
COALESCE(NEW.employee_id, OLD.employee_id),
COALESCE(NEW.other_id, OLD.other_id),
COALESCE(NEW.other_dimensions, OLD.other_dimensions),
COALESCE(NEW.effective_date, OLD.expiration_date - '1 day'::INTERVAL),
COALESCE(NEW.expiration_date, OLD.expiration_date + '1 year'::INTERVAL),
TRUE
);
);
The use of COALESCE enables us to update columns that have explicit updates, but keep old values for ones that don't. Finally, we need to make a rule for DELETE. Since you said you want to ensure you can track employee histories, the best way to do this is also the simplest: we just disable it.
CREATE OR REPLACE RULE employee_delete_protect AS
ON DELETE TO employee DO INSTEAD NOTHING;
Now we ought to be able to insert data into our raw table by performing INSERT operations on our view. Here are two sample employees; the first has a few weeks left but the second is about to expire. Note that at this level we don't need to care about the row_id. It's an internal implementation detail of the lower level raw table.
INSERT INTO employee VALUES (
1, 1,
'test', CURRENT_DATE - INTERVAL '1 week', CURRENT_DATE + INTERVAL '3 weeks',
TRUE
);
INSERT INTO employee VALUES (
2, 2,
'another test', CURRENT_DATE - INTERVAL '1 month', CURRENT_DATE,
TRUE
);
The final example is deceptively simple after all the build-up that we've done. It performs an UPDATE operation on the view, and internally it results in an update to the existing employee #2 plus a new entry for employee #2.
UPDATE employee SET expiration_date = CURRENT_DATE + INTERVAL '1 year'
WHERE employee_id = 2 AND other_id = 2;
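And to tie this back to the original goal of mapping an employee accurately on a given date, a hedged sketch of such a lookup against the view (the date literal is just an example) might be:
SELECT *
FROM employee
WHERE employee_id = 2
  AND other_id = 2
  AND effective_date <= DATE '2020-06-15'
  AND (expiration_date IS NULL OR expiration_date >= DATE '2020-06-15');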
Again I'll stress that this isn't meant to just take and use without modification. There should be enough info here though for you to make something work for your specific case.
Question:
How to make a custom range type using time (or time with tz) as a base?
What I have so far:
create type timerange as range (
  subtype = time,
  subtype_diff = ??? )
I think subtype_diff needs a function. For time types in pg, the minus function (difference) should work, but I can't seem to find the documentation that describes the correct syntax.
Background:
I am trying to make a scheduling app, where a service supplier would be able to show their availability and fees for different times of day, and a customer could see the price and book in real-time. The service supplier needs to be able to set different prices for different days or times of day. For example, a plumber might want, for a one hour visit:
$100 monday 0900-1800
$200 monday 1800-2200
$500 monday 2200-0000
To support this, the solution I am working on is as follows (any thoughts on better ways of doing this gratefully received)
I want to make a table that contains 'fee_rules'. I want to be able to look up a given date, time and duration, and check the associated fee against a set of range-based fee rules. My proposed table schema:
id sequence
day_of_week integer [where 0 = Sunday, 1 = Monday..]
time_range [I want to make a custom time-range using only
hours:minutes of the day]
fee integer
fee_schedule_id (foreign key) (reference to a specific supplier, who is the 'owner' of that specific fee rule)
An example of a fee rule would be as follows:
id day_of_week time_range fee fee_schedule_id
12 01 10:00-18:00 100 543
For a given date, I plan to calculate day_of_week (e.g. day_of_week = 01 for 'Monday') and generate a time_range based on the start_time and duration of the proposed visit, e.g. visit_range = 10:00-11:00. I want to be able to search using PostgreSQL's range operators, e.g.
select fee where day_of_week = '01' and visit_range <@ (range is contained by) time_range and fee_schedule_id = 543 [reference to the specific supplier's fees]
Per @a_horse_with_no_name and @pozs,
"I don't think you need the subtype_diff":
create type timerange as range (subtype = time);
create table schedule
(
id integer not null primary key,
time_range timerange
);
insert into schedule
values
(1, timerange(time '08:00', time '10:00', '[]')),
(2, timerange(time '10:00', time '12:00', '[]'));
select *
from schedule
where time_range #> time '09:00'
A subtype_diff for a time-based range actually appears in an example in the Postgres documentation (since version 9.5):
CREATE FUNCTION time_subtype_diff(x time, y time) RETURNS float8 AS
'SELECT EXTRACT(EPOCH FROM (x - y))' LANGUAGE sql STRICT IMMUTABLE;
CREATE TYPE timerange AS RANGE (
subtype = time,
subtype_diff = time_subtype_diff
);
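Tying this back to your fee_rules table, a rough sketch using the timerange type (column names taken from your question; the optional EXCLUDE constraint assumes the btree_gist extension) might look like:
CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE fee_rules (
    id              serial PRIMARY KEY,
    day_of_week     integer NOT NULL CHECK (day_of_week BETWEEN 0 AND 6),
    time_range      timerange NOT NULL,
    fee             integer NOT NULL,
    fee_schedule_id integer NOT NULL,
    -- optionally prevent overlapping rules for the same supplier on the same weekday
    EXCLUDE USING gist (fee_schedule_id WITH =, day_of_week WITH =, time_range WITH &&)
);

-- fee for a Monday 10:00-11:00 visit for supplier 543
SELECT fee
FROM fee_rules
WHERE day_of_week = 1
  AND fee_schedule_id = 543
  AND timerange(time '10:00', time '11:00', '[)') <@ time_range;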
I have a strange problem when retrieving records from the db after comparing a field truncated with date_trunc().
This query doesn't return any data:
select id from my_db_log
where date_trunc('day',creation_date) >= to_date('2014-03-05'::text,'yyyy-mm-dd');
But if I add the column creation_date along with id, then it returns data (i.e. select id, creation_date ...).
I have another column, last_update_date, of the same type, and when I use that one it shows the same behavior:
select id from my_db_log
where date_trunc('day',last_update_date) >= to_date('2014-03-05'::text,'yyyy-mm-dd');
Similar to the previous one, it also returns records if I put id, last_update_date in my select.
Now, to dig further, I added both creation_date and last_update_date to my where clause, and this time it demands both of them in my select clause to return records (i.e. select id, creation_date, last_update_date).
Has anyone ever encountered this problem? The same kind of query works with my other tables that have columns of this type!
If it helps, here is my table schema:
id serial NOT NULL,
creation_date timestamp without time zone NOT NULL DEFAULT now(),
last_update_date timestamp without time zone NOT NULL DEFAULT now(),
CONSTRAINT db_log_pkey PRIMARY KEY (id),
I asked a different question earlier that didn't get any answer; this problem may be related to it. If you are interested in that one, here is the link.
EDIT: EXPLAIN (FORMAT XML) with select * returns:
<explain xmlns="http://www.postgresql.org/2009/explain">
<Query>
<Plan>
<Node-Type>Result</Node-Type>
<Startup-Cost>0.00</Startup-Cost>
<Total-Cost>0.00</Total-Cost>
<Plan-Rows>1000</Plan-Rows>
<Plan-Width>658</Plan-Width>
<Plans>
<Plan>
<Node-Type>Result</Node-Type>
<Parent-Relationship>Outer</Parent-Relationship>
<Alias>my_db_log</Alias>
<Startup-Cost>0.00</Startup-Cost>
<Total-Cost>0.00</Total-Cost>
<Plan-Rows>1000</Plan-Rows>
<Plan-Width>658</Plan-Width>
<Node/s>datanode1</Node/s>
<Coordinator-quals>(date_trunc('day'::text, creation_date) >= to_date('2014-03-05'::text, 'yyyy-mm-dd'::text))</Coordinator-quals>
</Plan>
</Plans>
</Plan>
</Query>
</explain>
"Impossible" phenomenon
The number of rows returned is completely independent of items in the SELECT clause. (But see @Craig's comment about SRFs.) Something must be broken in your db.
Maybe a broken covering index? When you throw in the additional column, you force Postgres to visit the table itself. Try to re-index:
REINDEX TABLE my_db_log;
The manual on REINDEX. Or:
VACUUM FULL ANALYZE my_db_log;
Better query
Either way, use instead:
select id from my_db_log
where creation_date >= '2014-03-05'::date
Or:
select id from my_db_log
where creation_date >= '2014-03-05 00:00'::timestamp
'2014-03-05' is in ISO 8601 format. You can just cast this string literal to date. No need for to_date(), works with any locale. The date is coerced to timestamp [without time zone] automatically when compared to creation_date (being timestamp [without time zone]). More details about timestamps in Postgres here:
Ignoring timezones altogether in Rails and PostgreSQL
Also, you gain nothing by throwing in date_trunc() here. On the contrary, your query will be slower and any plain index on the column cannot be used (potentially making this much slower).
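If it is not there already, a plain b-tree index on the column is what lets the rewritten query use an index scan (the index name here is illustrative):
CREATE INDEX my_db_log_creation_date_idx ON my_db_log (creation_date);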
I'm using Entity Framework and MSSQL...
I need to insert a custom reference number when a record is inserted. The format is YYYY-01, YYYY-02, etc but the sequential number needs to be reset when a new year begins.
For example 2011-01, 2011-02, 2012-01
I'm curious if I should just go with a trigger or manage this with EF or ?
Having the sequential numbering reset each year has me a little confused...
Thanks for any advice!
Update:
Sorry, couldn't get the Code tag to work well with the markup
--Variables
DECLARE @year INT,
        @seqNum INT;
--Try to find if the [ComplaintCount] table already contains the current year
SET @year = (SELECT [Count_Year]
             FROM [ComplaintCount]
             WHERE [Count_Year] = YEAR(GETDATE()))
--If the current year cannot be found in the [ComplaintCount] table, a new record for the current year needs to be made
IF @year IS NULL
  BEGIN
      --Get the current year and set the initial sequence number to start counting for the new year
      SET @year = YEAR(GETDATE());
      SET @seqNum = 1;
      --Insert the new default values into the [ComplaintCount] table
      INSERT INTO [ComplaintCount]
                  (count_year,
                   count_current)
      VALUES      (@year,
                   @seqNum);
  END
ELSE
  BEGIN
      --We found a record already in the [ComplaintCount] table for the current year
      --Get the sequence number and increase it by one
      SET @seqNum = (SELECT [Count_Current]
                     FROM [ComplaintCount]
                     WHERE [Count_Year] = @year) + 1
      --Update the new value in the [ComplaintCount] table
      UPDATE [ComplaintCount]
      SET    [Count_Current] = @seqNum
      WHERE  [Count_Year] = @year;
  END
--It's now safe to insert the correct reference number into the [Complaint] table
UPDATE [Complaint]
SET    [Complaint_Reference] = CAST(@year AS VARCHAR) + '-' + CAST(@seqNum AS VARCHAR)
FROM   [Complaint]
       INNER JOIN inserted
               ON [Complaint].[PK_Complaint_Id] = inserted.[PK_Complaint_Id]
I'd say a trigger. Create a two-column table that stores the year and the current record number, then use a trigger to look up the current year, increment the count column by one, and use that count for the new reference number. Build logic into the trigger so that if the new year doesn't exist yet, it inserts the new year's record. I know most people like to avoid triggers if possible, but this is a pretty legitimate use of one, and it is way less processing than trying to count records on every insert.
Having a single row for every year and its related count may also prove useful in the future when you're trying to audit a past year or answer BI questions.
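For reference, a minimal sketch of that two-column counter table, matching the [ComplaintCount] names used in the update above (adjust types and constraints to taste):
CREATE TABLE ComplaintCount (
    Count_Year    INT PRIMARY KEY,  -- e.g. 2011, 2012
    Count_Current INT NOT NULL      -- last sequence number issued for that year
);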