Unique constraint on single field of a custom type in postgres - postgresql

I have an entity price in my schema. It has an attribute amount, which is of a custom type money_with_currency, basically type (amount bigint, currency char(3)).
The price entity belongs to a product. What I want to do is create a unique constraint on the combination of product_id (foreign key) + currency. How can I do this?

Referencing a single field of a record type is a bit tricky:
CREATE TYPE money_with_currency AS (amount bigint, currency char(3));
CREATE TABLE product_price
(
    product_id integer not null references product,
    price money_with_currency not null
);
CREATE UNIQUE INDEX ON product_price(product_id, ((price).currency));
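A quick check of the behavior (hypothetical values; assumes a matching row exists in product):
INSERT INTO product_price VALUES (1, ROW(999, 'EUR')::money_with_currency);
INSERT INTO product_price VALUES (1, ROW(1099, 'USD')::money_with_currency); -- OK: different currency
INSERT INTO product_price VALUES (1, ROW(1199, 'EUR')::money_with_currency); -- fails: duplicate key violates the unique index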

Related

How to use existing index to create conditional constraint

I have following index:
op.create_index(
    op.f("ix_order_company_id_email_lower"),
    "order",
    ["company_id", sa.text("lower(email)")],
    unique=False,
    postgresql_concurrently=True
)
Now I want to create a check constraint that acts as a sort of 'unique' constraint for only a specific type of order.
op.execute(
    """ALTER TABLE order
    ADD CONSTRAINT check_order_company_id_email_lower_type
    CHECK (type = 'AH01')
    NOT VALID"""
)
How can I add an additional check so that it only applies to records in ix_order_company_id_email_lower?
EDIT: Basically this type of order can only be submitted once per email in a specific company.
You need a partial unique index (note the UNIQUE keyword, and that order is a reserved word in PostgreSQL and must be quoted):
CREATE UNIQUE INDEX ON "order" (company_id, lower(email))
WHERE type = 'AH01';
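If you prefer to keep this in the migration, SQLAlchemy exposes the index's WHERE clause through the postgresql_where dialect option, so a sketch of the Alembic equivalent (the index name here is hypothetical) could look like:
op.create_index(
    op.f("uq_order_company_id_email_lower_ah01"),  # hypothetical name
    "order",
    ["company_id", sa.text("lower(email)")],
    unique=True,
    postgresql_where=sa.text("type = 'AH01'"),
)
Note that adding postgresql_concurrently=True would additionally require running the migration outside a transaction.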

How deep can we go in levels of nested tables in oracle 12c?

I am trying to do the following:
1) create or replace type transaction as object (date Date, description varchar(30));
   create or replace type T_transaction as table of transaction;
2) create or replace type account as object (id int, description varchar(30), t_transaction T_transaction)
   nested table t_transaction store as xxx1;
   create or replace type T_account as table of account;
3) create or replace type user as object (id int, descr varchar(30), t_account T_account)
   nested table t_account store as xxx2;
   create or replace type T_user as table of user;
4) create or replace table banks (name varchar(20), users T_user)
   nested table users store as xxx3;
The first two types were created successfully, but "create or replace type account..." gives -> Warning: Type created with compilation errors.
Is there any advice for creating such a database using multiple levels of nested tables?
Edit:
I did some research on the subject (object nesting limitations) and here are my findings:
According to Database Limits,
every column of a nested table is in effect added to the columns of the host table and the maximum total number of columns in a table is 1000.
So this would be the official upper limit (in case every nested table had a single column).
However, when I did actual testing (on 11g and 12c), I wasn't able to create a table with a nesting depth of more than 50 because of the error
ORA-00036: maximum number of recursive SQL levels (50) exceeded.
Thus I conclude that the maximum possible depth of nesting is 50.
Initial answer:
I am not aware of limits on object nesting, but I think they should be reasonably permissive.
Your code fails because you made a few mistakes:
1. Using type names as attribute names (date, t_account, etc.);
2. Using the nested table clause in the wrong place (it belongs on the table definition, not on the type);
The code should go like this:
create or replace type transaction_type as object (tx_date Date, description varchar2(30));
create or replace type transaction_tab as table of transaction_type;
create or replace type account_type as object (id int, description varchar2(30),
transactions transaction_tab);
create or replace type account_tab as table of account_type;
create or replace type user_type as object (id int, descr varchar2(30), accounts account_tab);
create or replace type user_tab as table of user_type;
create table banks (name varchar2(20), users user_tab)
nested table users store as xxx3 (
nested table accounts store as xxx2 (
nested table transactions store as xxx1
));
Checking
INSERT INTO banks VALUES (
'John', user_tab(
user_type(1
,'regular user'
, account_tab(
account_type(1
,'regular account'
, transaction_tab(transaction_type(
trunc(sysdate)
, 'regular transaction'))
))
)));
SQL> SELECT * FROM banks;
NAME
--------------------
USERS(ID, DESCR, ACCOUNTS(ID, DESCRIPTION, TRANSACTIONS(TX_DATE, DESCRIPTION)))
--------------------------------------------------------------------------------
John
USER_TAB(USER_TYPE(1, 'regular user', ACCOUNT_TAB(ACCOUNT_TYPE(1, 'regular accou
nt', TRANSACTION_TAB(TRANSACTION_TYPE('04-APR-18', 'regular transaction'))))))
Selecting nested table columns
SELECT b.name, u.id, u.descr, a.id, a.description
FROM banks b, table(b.users) u, table(u.accounts) a
WHERE u.descr = 'regular user' AND a.description = 'regular account'
NAME ID DESCR ID DESCRIPTION
----- --- ------------- --- ----------------
John 1 regular user 1 regular account

Efficient way to reconstruct base table from changes

I have a table consisting of products (with IDs, ~15k records) and another table price_changes (~88m records) recording a change in the price of a given productID at a given changedate.
I'm now interested in the price of each product at given points in time (say, every 2 hours for a year, so ~4,300 points; altogether ~64m data points of interest). While it's very straightforward to determine the price of a given product at a given time, it seems to be quite time-consuming to determine all 64m data points.
My approach is to pre-populate a new target table fullprices with the data points of interest:
insert into fullprices(obsdate,productID)
select obsdate, productID from targetdates, products
and then update each price observation in this new table like this:
update fullprices f set price = (select price from price_changes where
productID = f.productID and date < f.obsdate
order by date desc
limit 1)
which should give me the most recent price change as of each point in time.
Unfortunately, this takes ... well, ages. Is there a better way to do it?
== Edit: My tables are created as follows: ==
CREATE TABLE products
(
    productID uuid NOT NULL,
    name text NOT NULL,
    CONSTRAINT products_pkey PRIMARY KEY (productID)
);
CREATE TABLE price_changes
(
    id integer NOT NULL,
    productID uuid NOT NULL,
    price smallint,
    date timestamp NOT NULL
);
CREATE INDEX idx_pc_date ON price_changes USING btree (date);
CREATE INDEX idx_pc_productID ON price_changes USING btree (productID);
CREATE TABLE targetdates
(
    obsdate timestamp
);
CREATE TABLE fullprices
(
    obsdate timestamp NOT NULL,
    productID uuid NOT NULL,
    price smallint
);
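Not part of the original post, but for comparison: the same most-recent-price lookup is often written as a single set-based statement with a LATERAL join (PostgreSQL 9.3+), which avoids the per-row correlated UPDATE and benefits from a composite index on (productID, date DESC):
-- Sketch: populate fullprices in one pass
INSERT INTO fullprices (obsdate, productID, price)
SELECT t.obsdate, p.productID, pc.price
FROM targetdates t
CROSS JOIN products p
LEFT JOIN LATERAL (
    SELECT price
    FROM price_changes c
    WHERE c.productID = p.productID
      AND c.date < t.obsdate
    ORDER BY c.date DESC
    LIMIT 1
) pc ON true;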

Is it possible to set UserDefinedTableType column default value?

I have created my own table type; is it possible to set a date column default so I don't have to call GETDATE() in every insert?
It's quite straightforward, as in the case of regular tables, but you cannot assign a name to that constraint:
CREATE TYPE TestType AS TABLE
( ID INT,
CreatedDate DATETIME DEFAULT(GETDATE())
)
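A quick usage sketch: insert into a variable of the type and omit CreatedDate, and the default fills it in:
DECLARE @t TestType;
INSERT INTO @t (ID) VALUES (1);  -- CreatedDate defaults to GETDATE()
SELECT * FROM @t;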

CoreData equivalent of sum...group by

I have the following pseudo-SQL schema:
table flight
id int primary key
date timestamp
aircraft_id int (foreign key to aircraft.id)
table flight_component
flight_id int (foreign key to flight.id)
component_id int (foreign key to component.id)
duration double
If I convert this to using CoreData, is there an equivalent way to get the total duration for each component with another query predicate? I think the SQL equivalent would be
select component_id, sum(duration)
from flight_component
join flight on (flight_component.flight_id = flight.id)
where flight.date between ? and ?
group by component_id
Or am I going to have to make my query and then sum up all the components myself?
You can query for @"someArray.@sum". There's a list of such operators on the Collection Operators page.
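For illustration, a minimal sketch in Swift (entity and relationship names are hypothetical, following the schema above): if Component has a to-many relationship flightComponents, the @sum operator totals the durations per component.
// Assumes `components` is an array of fetched Component managed objects,
// each with a to-many relationship `flightComponents` whose objects have `duration`.
for component in components {
    let total = component.value(forKeyPath: "flightComponents.@sum.duration") as? Double ?? 0
    print(component.objectID, total)
}
This does not apply the date filter by itself; for that, you would fetch the matching FlightComponent objects first, or use a dictionary-result fetch with propertiesToGroupBy.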