Just getting started in PostgreSQL and wanted to ask some questions.
Suppose I have a table of Vendors. Each Vendor has an attribute called Sales Record, which is time series data about their sales. For each Vendor, I want one associated Sales Record table that holds the time series sales data for that specific vendor.
How might I want to code that?
You shouldn't have a table per vendor.
Rather, create one big table for all vendors. That table contains a column like vendor_id that is a foreign key to vendors and identifies which vendor each record belongs to.
If you create an index on vendor_id, searching the big table for the data of a vendor will be efficient.
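A minimal sketch of that layout (the table and column names here are just placeholders, not anything you have to use):
CREATE TABLE vendors (
    vendor_id bigserial PRIMARY KEY,
    name      text NOT NULL
);
CREATE TABLE sales_records (
    vendor_id   bigint NOT NULL REFERENCES vendors (vendor_id),
    recorded_at timestamptz NOT NULL,
    amount      numeric NOT NULL,
    PRIMARY KEY (vendor_id, recorded_at)
);
-- The primary key already gives you an index that starts with vendor_id,
-- so fetching one vendor's time series is an index scan; a separate index
-- on vendor_id alone is only needed if you choose a different primary key.
Fetching one vendor's series is then just SELECT * FROM sales_records WHERE vendor_id = 42 ORDER BY recorded_at;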
Related
I have two tables in my database that I have created through migration say products and product_variations and the corresponding model is Product and ProductVariation.
In the products table I have three columns: id, name and price. In the product_variations table I have the variations of the product: id, variation, stock and one foreign key column (product_id) that references the id column of the products table. Now I want to display a table that shows the product, price, variation and stock. The logic is: if a product already exists in the table it is skipped and only new products are added, and the variations table is scanned the same way. If a variation does not exist it is added; if a variation already exists its stock is updated, and for a new variation the lowest price is saved.
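For the insert/update part, a rough PostgreSQL sketch, assuming unique constraints on products(name) and product_variations(product_id, variation) exist (both constraints and the literal values are assumptions; the lowest-price rule would need additional logic):
-- Skip products that already exist (matched by name here):
INSERT INTO products (name, price)
VALUES ('Shirt', 19.99)
ON CONFLICT (name) DO NOTHING;
-- Add a new variation, or update the stock if it already exists:
INSERT INTO product_variations (product_id, variation, stock)
VALUES (1, 'XL', 25)
ON CONFLICT (product_id, variation)
DO UPDATE SET stock = EXCLUDED.stock;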
I am working on a data warehousing project and need help with the below.
OLAP Table:
Product Dimension Table:
Product_id, category_id, category_name, brand_id, brand_name, manufacturer_id, manufacturer_name
OLTP Tables:
Each table contains create_ts and update_ts columns for tracking creation and updates.
Product_info: id, product_name, category_id, brand_id, manufacturer, create_ts, update_ts
Product_category_mapping: id, product_id, category_id, create_ts, update_ts
brand: id, name, create_ts, update_ts
manufacturer: id, name, create_ts, update_ts
I am looking to track changes in any of these tables so that they are reflected in the dimension table.
For Example:
Current OLAP Snapshot
Product_id, category_id, category_name, brand_id, brand_name, manufacturer_id, manufacturer_name
1, 33, Noodles, 45, Nestle, 455, nestle_pvt_ltd
Suppose the brand name changes from Nestle to Nestle-US. How will we track this if we are capturing changes based only on product_info.update_ts?
Should we consider changes across all 4 tables?
Please suggest.
If data changes in any table that is a source for your DW, then you need to include it in your extract logic.
For reference data like this, where a number of tables contribute to a single "target" table, an approach I often take is to create a view across these tables in your source DB. Include all the columns you need to take across to the DW, but only a single update_ts column, calculated with the SQL GREATEST function applied to the update_ts columns of all the tables in the view. Then you only need to compare this single column to your "last extracted date" to determine whether there are any changes you may need to process.
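A sketch of such a view for the tables above. The view name is a placeholder, and the category lookup table (for category_name) is an assumption, since only the mapping table was listed:
CREATE VIEW product_dim_source AS
SELECT p.id   AS product_id,
       pcm.category_id,
       c.name AS category_name,   -- assumes a category lookup table exists
       b.id   AS brand_id,
       b.name AS brand_name,
       m.id   AS manufacturer_id,
       m.name AS manufacturer_name,
       GREATEST(p.update_ts, pcm.update_ts, b.update_ts, m.update_ts) AS update_ts
FROM product_info p
JOIN product_category_mapping pcm ON pcm.product_id = p.id
JOIN category c ON c.id = pcm.category_id   -- assumed table
JOIN brand b ON b.id = p.brand_id
JOIN manufacturer m ON m.id = p.manufacturer;
The extract then only has to filter on product_dim_source.update_ts being later than the last extracted date.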
In a database project, I have a users table which has a computed value avg_service_rating. There is another table called services with all the services associated with the user and the ratings for those services. Is there a computationally light way to maintain avg_service_rating without updating it every time an INSERT is done on the services table? Perhaps something like a generated column, but with a function call instead? Any direct advice or links to resources would be greatly appreciated as well!
CREATE TABLE users (
username VARCHAR PRIMARY KEY,
avg_service_ratings NUMERIC, -- is it possible to store some function call for this column?
...
);
CREATE TABLE service (
username VARCHAR NOT NULL REFERENCES users (username),
service_date DATE NOT NULL,
rating INTEGER,
PRIMARY KEY (username, service_date)
);
If the values should be consistent, a generated column won't fit the bill, since it is only recomputed if the row itself is modified.
I see two solutions:
Have a trigger on the services table that updates the users table whenever a rating is added or modified. That slows down data modifications, but not your queries (a sketch follows below).
Turn users into a view. The original users table would be renamed, and it loses the avg_service_rating column, which is computed on the fly by the view.
To make the illusion perfect, create an INSTEAD OF INSERT OR UPDATE OR DELETE trigger on the view that modifies the underlying table. Then your application does not need to be changed.
With this solution you pay a certain price both on SELECT and on data modifications, but the latter price will be lower, since you don't have to modify two tables (and users might receive fewer modifications than services). An added advantage is that you avoid data duplication.
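A minimal sketch of the trigger variant, using the table definitions from the question (function and trigger names are placeholders; an UPDATE that changes username would need both users refreshed):
CREATE FUNCTION refresh_avg_service_rating() RETURNS trigger AS $$
DECLARE
    affected_user varchar;
BEGIN
    -- NEW is not available on DELETE, OLD is not available on INSERT
    IF TG_OP = 'DELETE' THEN
        affected_user := OLD.username;
    ELSE
        affected_user := NEW.username;
    END IF;

    UPDATE users
    SET avg_service_ratings = (SELECT avg(rating)
                               FROM service
                               WHERE username = affected_user)
    WHERE username = affected_user;

    RETURN NULL;  -- the return value of an AFTER row trigger is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER service_rating_refresh
AFTER INSERT OR UPDATE OR DELETE ON service
FOR EACH ROW EXECUTE PROCEDURE refresh_avg_service_rating();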
A generated column would only be useful if the source data is in the same table row.
Otherwise your options are a view (where you could call a function or calculate the value via a subquery), or an AFTER INSERT OR UPDATE trigger on the service table that updates users.avg_service_ratings. With a trigger, if you get a lot of updates on the service table you'd need to consider possible concurrency issues, but it means the figure doesn't have to be calculated every time a row in the users table is accessed.
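For the view variant, a sketch against the tables in the question (the view name is a placeholder):
CREATE VIEW users_with_rating AS
SELECT u.username,
       (SELECT avg(s.rating)
        FROM service s
        WHERE s.username = u.username) AS avg_service_ratings
FROM users u;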
I have a schema with one table with the majority of data, customer, and three other tables with foreign key references to customer.entry_id which is a BIGSERIAL field. The three other tables are called location, devices and urls where we store various data related to a specific entry in the customer table.
I want to partition the customer table into monthly child tables, and have that part worked out; customer will stay as-is, each month will have a table customer_YYYY_MM that inherits from the master table with the right CHECK constraint and indexes will be created on each individual child table. Data will be moved to the correct child tables while the master table stays empty.
My question is about the other three tables, as I want to partition them as well. However, they have no date information (at all), only the reference to the primary key from the master table. How can I setup the constraints on these tables? Is it even meaningful or possible without date information?
My application logic knows where to insert all the data (it's fairly trivial), but I expect to be able to do simple SELECT queries without specifying which child tables to get it from. So this should work as you would expect from non-partitioned tables:
SELECT l.*
FROM customer c
JOIN location l USING (entry_id)
WHERE c.date_field > '2015-01-01'
I would partition them by the reference key. The foreign key is used in join conditions and is not usually subject to change, so it fulfills the following important points:
Partition by the information that is used mostly in the WHERE clauses of the queries or other parts where partitioning can be used to filter out tables that don't need to be scanned. As one guide puts it:
The objective when defining partitions should be to allow as many queries as possible to fetch data from as few partitions as possible - ideally one.
Partition by information that is not going to change, so that rows don't constantly need to be moved from one subtable to another.
This all depends on the size of the tables too, of course. If they stay small, there is no need to partition.
Read more about partitioning here.
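As a rough sketch, partitioning location by the reference key with the same inheritance setup as customer could look like this (the entry_id boundaries are placeholders):
CREATE TABLE location_00 (
    CHECK (entry_id >= 0 AND entry_id < 1000000)
) INHERITS (location);

CREATE TABLE location_01 (
    CHECK (entry_id >= 1000000 AND entry_id < 2000000)
) INHERITS (location);

CREATE INDEX ON location_00 (entry_id);
CREATE INDEX ON location_01 (entry_id);
Keep in mind that constraint exclusion only prunes children when the planner sees a literal entry_id range in the query, so a join on customer alone will not skip the other location children.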
Use views:
create view customer as
select * from customer_jan_15 union all
select * from customer_feb_15 union all
select * from customer_mar_15;
create view location as
select * from location_jan_15 union all
select * from location_feb_15 union all
select * from location_mar_15;
I'm working in an API that needs to return a list of financial transactions. These records are held in 6 different tables, but all have 3 common fields:
transaction_id int NOT NULL,
account_id bigint NOT NULL,
created timestamptz NOT NULL
Note: this might actually have been a good use of table inheritance in PostgreSQL, but it wasn't done like that.
The business requirement is to return all transactions for a given account_id in one list, sorted by created in descending order (similar to an online banking page where your last transaction is at the top). Originally they wanted to paginate in groups of 50 records, but I've got them to do it on date ranges instead (believing that I can do that more efficiently in the database than with offset and limit).
My intent is to create an index on each of these tables like this:
CREATE INDEX idx_table_1_account_created ON table_1(account_id, created desc);
ALTER TABLE table_1 CLUSTER ON idx_table_1_account_created;
Then, finally, I'll create a view to union all of the records from the 6 tables into one list; obviously the records from the 6 tables will need to be re-sorted to come up with a unified list (in the correct order). This call will look like:
SELECT * FROM vw_all_transactions
WHERE account_id = 12345678901234
AND created >= '2014-01-01' AND created < '2014-02-01'
ORDER BY created desc;
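For reference, the view itself will look roughly like this, reduced to the three common columns (table_1 through table_6 stand in for the real table names):
CREATE VIEW vw_all_transactions AS
SELECT transaction_id, account_id, created FROM table_1
UNION ALL
SELECT transaction_id, account_id, created FROM table_2
UNION ALL
-- ... the remaining four tables the same way ...
SELECT transaction_id, account_id, created FROM table_6;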
My question is about the indexing and clustering scheme. Since the records are going to be re-sorted by the view anyway, is there any reason to specify the individual indexes as created DESC? And does sorting this way have any penalties when periodically calling CLUSTER?
I've done some googling and reading but can't really seem to find any information that answers how this clustering is going to work.
Using PostgreSQL 9.2 on Heroku.